Regulators tend to move too slowly for this to happen by October 31, 2023. I expect it will occur eventually.
Relative Brier Score: 2.837133
Forecasts: 6
Upvotes: 1
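The platform's exact scoring rule isn't documented on this page, so as background only, here is a minimal sketch of a Brier score and a relative Brier score under one common convention (squared error of a binary probability forecast, compared against a crowd forecast). The `crowd_prob` value and the example numbers are hypothetical; this does not reconstruct the 2.837133 figure above.

```python
# Minimal sketch of Brier scoring, assuming the standard definition for a
# binary question: squared error between the stated probability and the
# outcome (0 = perfect, 1 = worst under this convention). The platform's
# actual rule (multi-answer summing, time-weighting, normalization) may differ.

def brier(prob_yes: float, resolved_yes: bool) -> float:
    """Squared error of a binary probability forecast."""
    outcome = 1.0 if resolved_yes else 0.0
    return (prob_yes - outcome) ** 2

def relative_brier(my_prob: float, crowd_prob: float, resolved_yes: bool) -> float:
    """Positive = worse than the crowd; negative = better than the crowd."""
    return brier(my_prob, resolved_yes) - brier(crowd_prob, resolved_yes)

# Hypothetical example: a 0% forecast on a question that resolves Yes takes
# the maximum penalty and compares badly against a 35% crowd forecast.
print(brier(0.0, True))                 # 1.0
print(relative_brier(0.0, 0.35, True))  # 1.0 - 0.4225 = 0.5775
```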
Forecasting Activity
Forecasting Calendar
No forecasts in the past 3 months
| | Past Week | Past Month | Past Year | This Season | All Time |
|---|---|---|---|---|---|
| Forecasts | 0 | 0 | 0 | 0 | 6 |
| Comments | 0 | 0 | 0 | 0 | 1 |
| Questions Forecasted | 0 | 0 | 0 | 0 | 6 |
| Upvotes on Comments By This User | 0 | 0 | 0 | 0 | 1 |
New Badge: Upvotes Received
New Prediction
In collaboration with the UK Professional Head of Intelligence Assessment
Will a country ban or take regulatory actions that ultimately block access to OpenAI's models, between 1 June 2023 and 31 October 2023, inclusive?
| Probability | Answer |
|---|---|
| 0% | Yes |
New Prediction
| Probability | Answer |
|---|---|
| 90% | Yes |
| 10% | No |
This seems inevitable, but I took 10% off for timing. Legacy media organizations move slowly.
New Prediction
| Probability | Answer |
|---|---|
| 100% | Yes |
| 0% | No |
Using an LLM for this purpose would be entirely feasible, would cost little, and could meaningfully improve scale or effectiveness. Given that, it's inevitable that this will happen; indeed, it is already happening.
New Prediction
| Probability | Answer |
|---|---|
| 50% | Yes |
| 50% | No |
I think it is only a matter of time, but it's a coin flip whether this will happen in 2023 or not.
New Prediction
| Probability | Answer |
|---|---|
| 100% | Less than 4 million |
| 0% | More than or equal to 4 million but less than 6 million |
| 0% | More than or equal to 6 million |
Growing adoption of something like World ID is quite difficult, and there are real barriers to entry, including relatively low awareness that it exists at all.
New Badge: My First Question
Congratulations on making your first forecast!
New Badge: Active Forecaster
New Prediction
| Probability | Answer |
|---|---|
| 15% | Yes |
| 85% | No |
Why do you think you're right?
Two reasons: First, companies like Meta and Twitter are motivated almost entirely by the goal of maximizing engagement. Historically, they've interfered with engagement by applying warning labels or messages only when the downside of not doing so was great enough to force their hand (for example, during the early days of COVID-19 or during the Russian invasion of Ukraine). Right now, no such imperative exists. Indeed, it's possible that AI-generated content will help these platforms increase engagement if it leads to a larger volume of interesting, controversial, polarizing, or emotionally charged content.
Second, technology for detecting AI-written content is still relatively inaccurate. One major limitation is that content written by someone in a language other than their native one often reads as AI-generated. When detection fails in the context of labeling content, the resulting user experience (being accused by a platform of having used AI to generate content you created yourself) would be incredibly off-putting and would likely anger users.
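To illustrate that failure mode, here is a toy sketch of the kind of surface-statistics heuristic that can make detectors misfire. Real detectors are model-based and more sophisticated; the `lexical_diversity` feature, the 0.6 threshold, and both sample texts are invented for illustration only.

```python
import re

# Toy illustration of why statistical AI-text detectors can misfire on
# non-native writing: text that looks too "predictable" gets flagged.
# The feature and threshold here are invented, not a real detector.

def lexical_diversity(text: str) -> float:
    """Type-token ratio: unique words divided by total words."""
    words = re.findall(r"[a-z']+", text.lower())
    return len(set(words)) / len(words) if words else 0.0

def looks_ai_generated(text: str, threshold: float = 0.6) -> bool:
    # Heuristic: low lexical diversity -> "predictable" -> flagged.
    # Non-native writers often use a smaller active vocabulary, so their
    # genuine writing can fall under the same threshold.
    return lexical_diversity(text) < threshold

human_native = "The committee's findings, though preliminary, hint at deeper structural issues."
human_non_native = "The report is good. The report is long. The report has good points. The points are good."

print(looks_ai_generated(human_native))      # False
print(looks_ai_generated(human_non_native))  # True: a false positive on human text
```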
Why might you be wrong?
It's possible the technology will improve faster than I expect, enabling effective labeling. It's also possible that a flood of AI-generated content will make users less engaged because the content is low quality, giving platforms a business imperative to clean things up.