Luna-25 entered lunar orbit on 16 August 2023. With just one major milestone remaining (a successful landing), I reckon the chances of a fully successful Moon mission for Russia are high.
Forecasting Calendar

| | Past Week | Past Month | Past Year | This Season | All Time |
|---|---|---|---|---|---|
| Forecasts | 0 | 0 | 0 | 0 | 9 |
| Comments | 0 | 0 | 0 | 0 | 4 |
| Questions Forecasted | 0 | 0 | 0 | 0 | 7 |
| Upvotes on Comments By This User | 0 | 0 | 0 | 0 | 3 |
Russia launched a mission to the Moon in August, but not every launched mission lands successfully. Given technological advances and the effort Russia has put into the Luna-25 project, I expect a successful landing.
Fellow forecaster @Fishfingersandcustard, as pointed out by @PeterStamp, this is the requirement for a Yes:
> **Resolution Criteria**
> This question resolves based on reporting from reputable news outlets. A "successful launch" includes both launching and landing safely on the moon. If a launch occurs, we will wait to resolve the question until the mission makes a landing; scoring will be based on the launch date, not the moon landing date. For example, if a mission launches on 1 Sep 2023 and lands on the moon days later, past the end date of this question, we will resolve as "Yes".
I expect these models to be used in exactly this way. Meta should have the ability to detect such use, and I expect them to report it accordingly, given the past sanctions and fines they have faced for failing to report manipulation campaigns (e.g., Cambridge Analytica).
Why do you think you're right?
Detecting AI-written content is extremely tricky: many detection models confuse AI-generated and human-written posts. I don't envision a sufficiently accurate model becoming available within the year, and I don't expect these tech giants to deploy a model that is not highly accurate.
Why might you be wrong?
This could change if an AI signature technology is created and embedded in generated content to clearly identify text as AI-generated. Current detection models rely on linguistic comparisons to judge whether content is AI-generated, which often leads to confusion with human-written text, since the two are meant to be indistinguishable (the Turing test).
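To make the "linguistic comparison" point concrete, here is a toy sketch of one such statistical feature, sentence-length "burstiness" (my own illustration under simple assumptions, not any vendor's actual detector, and exactly the kind of weak signal that makes these classifiers unreliable):

```python
import statistics

def burstiness_score(text: str) -> float:
    """Naive heuristic: human writing tends to vary sentence length
    more than model output. Returns the standard deviation of
    sentence lengths in words (higher = more 'human-like').
    A toy illustration only, not a real detector."""
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s.strip() for s in normalized.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

def looks_ai_generated(text: str, threshold: float = 3.0) -> bool:
    # Low variance in sentence length counts as (weak) evidence of AI text.
    return burstiness_score(text) < threshold
```

Because human and AI text overlap heavily on features like this, any fixed threshold misclassifies plenty of both, which is the core of the argument above.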
Why do you think you're right?
The technology is not there yet. Current video-generation models can only produce extremely short clips.
Why might you be wrong?
This might happen if an entity has a strong enough technology to generate such content.
Why do you think you're right?
OpenAI's CEO has said we should not expect GPT-5 anytime soon. Much of this has to do with concerns around LLMs and AI safety, as well as the vast data and hardware resources needed to build such a model.
Why might you be wrong?
Pressure from competition might force OpenAI to release an even more powerful model for the company to stay relevant.
Why do you think you're right?
AI is advancing fast and people are curious. Amazon's Alexa has been able to sing for years, so the technology is there.
Why might you be wrong?
This might not work if there is no technology to help establish digital provenance for AI-generated images.
i think there is tech... the Guardian Project has something called "ProofMode", iirc... but more to your first point, i think a reputation for credibility will be the only edge news outlets have on each other once AI really ramps up content creation... thus, why not adopt an edge, esp. if it already has a standards body to vouch for it?
then again, the requirement is for ALL new published content online... they might have too little to report!
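Provenance tools of the kind mentioned above rest on cryptographic signing over content. As a rough sketch of the idea only (a hypothetical key and API, not ProofMode's actual design), an outlet could attach an HMAC tag at publication so a verifier holding the key can confirm the content is unchanged:

```python
import hashlib
import hmac

# Hypothetical signing key held by the publishing outlet.
SECRET_KEY = b"outlet-signing-key"

def sign_content(content: bytes) -> str:
    """Produce a provenance tag: an HMAC-SHA256 over the content bytes."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Check that the content matches the tag it was published with."""
    return hmac.compare_digest(sign_content(content), tag)
```

Real provenance schemes use public-key signatures rather than a shared secret, so anyone can verify without being able to forge tags; the sketch just shows the tamper-evidence idea.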