Will any of Meta's 2023 threat disruption reports indicate that a large language model may have been used to conduct an influence operation?

Started Jun 06, 2023 04:00PM UTC
Closed Jan 01, 2024 05:00AM UTC

AI-powered disinformation campaigns are a rising concern (NYT, PBS). Large language models can generate authentic-sounding text for such campaigns and can make it easier to write the code needed to proliferate bots that spread and amplify disinformation on social media (Stanford Internet Observatory, American Security Project).

Meta tracks disinformation threats on their platform arising from “coordinated inauthentic behavior networks,” which use fake accounts in covert influence operations that are designed to manipulate public debate (Meta). Meta’s first quarter report in 2023 detailed disinformation campaigns originating from Iran, China, Venezuela, the United States, Burkina Faso, Togo, and Georgia.

Resolution Criteria:
The question will be based on a review of the reports found on Meta’s Threat Disruption transparency page. This page includes Quarterly Adversarial Threat Reports, which contain sections on Coordinated Inauthentic Behavior Networks (e.g., Q1 2023), and an annual Recap of Coordinated Inauthentic Behavior Enforcements (e.g., 2022). The question will resolve as “Yes” if any of the 2023 reports indicate that a large language model may have been used to execute an influence operation (e.g., by generating content or facilitating the creation of fake accounts). The question will be resolved after the final report for 2023 has been released (whether that is the fourth-quarter report or the annual recap).

Note that malware posing as a large language model or as an LLM-related tool would not count toward resolution of this question.

For more information on large language models and the AI-powered tools that use them see: 

For more information on disinformation campaigns and influence operations see: 

Question clarification
Issued on 10/18/23 08:14pm
The Second Quarter 2023 Adversarial Threat Report included a description of an influence operation in Turkey that used fake accounts with profile photos "likely generated using machine learning techniques like generative adversarial networks (GAN)" (pg. 9). The use of AI-generated images to conduct an influence operation will not count towards resolution of this question. For this question to be resolved as "Yes", the influence operation/coordinated inauthentic behavior must be described as using, or possibly using, a text-based LLM to conduct the operation.
Resolution Notes

None of Meta’s Adversarial Threat Reports for 2023 indicated that a large language model was used to conduct an influence operation.

Possible Answer    Correct?    Final Crowd Forecast
Yes                No          16%
No                 Yes         84%

Crowd Forecast Profile

Participation Level
Number of Forecasters: 135 (average for questions older than 6 months: 60)
Number of Forecasts: 556 (average for questions older than 6 months: 222)
Accuracy
Participants in this question vs. all forecasters: better than average

Most Accurate (Relative Brier Score; see the sketch below)
1. -0.49984
2. -0.465083
3. -0.456563
4. -0.452457
5. -0.432279
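
The negative values above indicate forecasters who outperformed the crowd. The page does not spell out the platform's exact scoring formula, so the following is only a minimal Python sketch of the standard Brier score, with the "relative" comparison against the final crowd forecast assumed for illustration; the forecaster probabilities shown are hypothetical.

```python
# Minimal sketch, assuming the standard multi-category Brier score and assuming
# that the "Relative Brier Score" compares a forecaster's score to the crowd's
# (the page does not define the exact formula).

def brier_score(forecast_probs, outcome_index):
    """Brier score for a single forecast.

    forecast_probs: probability assigned to each possible answer (sums to 1).
    outcome_index: index of the answer that actually occurred.
    Lower is better; 0 is a perfect forecast.
    """
    return sum(
        (p - (1.0 if i == outcome_index else 0.0)) ** 2
        for i, p in enumerate(forecast_probs)
    )

# This question resolved "No", so for answers [Yes, No] the outcome index is 1.
crowd = brier_score([0.16, 0.84], outcome_index=1)       # final crowd forecast -> 0.0512
forecaster = brier_score([0.05, 0.95], outcome_index=1)  # hypothetical forecaster -> 0.005

# Assumed definition of the relative score: forecaster's score minus the crowd's
# (the real platform may also average over the question's lifetime); negative
# values mean better than the crowd, as in the leaderboard above.
print(forecaster - crowd)  # -0.0462
```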

Consensus Trend
