Too soon to tell, if you're talking now through the next 4 months. There will be lots of scrutiny, testimony, committee analyses, etc., but well-defined, vetted courses of action will trigger much debate over (a) whether it should be done at all, and if so, (b) what the proper level (and document language) of the ban or regulation should be. This takes time, thus my 10% answer. If the question extended substantially into 2024, the probability would increase.
No Scores Yet
Relative Brier Score
Forecasts: 0
Upvotes: 0
Forecasting Activity
Forecasting Calendar: No forecasts in the past 3 months
| | Past Week | Past Month | Past Year | This Season | All Time |
|---|---|---|---|---|---|
| Forecasts | 0 | 0 | 0 | 0 | 6 |
| Comments | 0 | 0 | 0 | 0 | 1 |
| Questions Forecasted | 0 | 0 | 0 | 0 | 6 |
| Upvotes on Comments By This User | 0 | 0 | 0 | 0 | 0 |
New Prediction
In collaboration with the UK Professional Head of Intelligence Assessment
Will a country ban or take regulatory actions that ultimately block access to OpenAI's models, between 1 June 2023 and 31 October 2023, inclusive?
| Probability | Answer |
|---|---|
| 10% | Yes |
New Prediction
| Probability | Answer |
|---|---|
| 75% | Yes |
| 25% | No |
Anecdotal reporting in various media about fakes and similar content is increasing weekly, if not daily, and will (or should) drive editors and broadcasters to be proactive in assuring the public of the trustworthiness and validity of their articles and broadcasts. If the public senses a negative trend (more fakes), the likely reaction will be to move to another, more trustworthy platform, which translates into a drop in advertising revenue, which in turn translates into a drop in shareholder value.
New Prediction
| Probability | Answer |
|---|---|
| 50% | Yes |
| 50% | No |
I'd say the probability is higher than 50-50 that an LLM has been used. The real question is whether Meta's reporting process will be fully transparent about the fact that it occurred; hence my 50-50 assessment. That doubt stems from the scrutiny Meta could face over the steps it is (or is not) taking to mitigate.
New Prediction
| Probability | Answer |
|---|---|
| 25% | Yes |
| 75% | No |
2023 is half over and not much is being reported in this regard. Is the public even aware? As in my earlier answers, the general population likely couldn't care less. The fact that the feature is ENABLED suggests it is NOT mandatory, or can be opted out of. People may view it as an intrusion or a method of personal tracking.
New Prediction
| Probability | Answer |
|---|---|
| 80% | Less than 4 million |
| 15% | More than or equal to 4 million but less than 6 million |
| 5% | More than or equal to 6 million |
Great idea, but I for one have never heard of it, and I consider myself fairly up to speed on tech innovations related to information protection. If I've never heard of it, how many others out there haven't either? And then there's the general population that doesn't care or doesn't pay attention.
New Badge
My First Question
Congratulations on making your first forecast!
New Badge
Active Forecaster
New Prediction
| Probability | Answer |
|---|---|
| 70% | Yes |
| 30% | No |
Why do you think you're right?
Increasing awareness of the potential misuse of AI, and the growing number of independent "fact-checking" organizations trying to keep social media honest. In the extreme, distrust of one or more of these entities will hit the "bottom line," and stockholders will increasingly call for transparency.
Why might you be wrong?
Public indifference to the issue, coupled with corporate bravado, assertions of corporate rights, etc.