In a recent experiment, an AI-generated video created with the tool Sora was uploaded to eight major social media platforms. The goal? To test whether any of them would clearly disclose that the video wasn’t real. The outcome was concerning: only YouTube included any kind of label, and even that was buried deep in the video description. The remaining platforms stripped away the metadata that could have identified the content as synthetic.
This test uncovers a serious flaw in how tech platforms handle synthetic media — not just in detection, but in transparency and accountability. As AI-generated content becomes more indistinguishable from real media, the lack of labeling puts users at risk of being misled.

What Platforms Promised vs. What They Delivered
The Promise:
- Many major tech companies joined initiatives to support Content Credentials, an open standard that embeds tamper-evident provenance metadata into content.
- This metadata, also known as “provenance data,” should allow platforms to automatically recognize and label AI-generated or altered media.
- AI developers committed to including these credentials in their generated content.
- Social platforms agreed to respect and display these labels to users.
The Reality:
- In this test, platforms removed the metadata, effectively erasing all traces of AI involvement.
- Only YouTube displayed a label noting that the content was “altered or synthetic,” and even then, the notice wasn’t prominently displayed.
- Despite the availability of metadata and AI detection tools, most platforms aren’t using them in a user-visible way.
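For readers who want to see what “respecting provenance data” could look like in practice, here is a minimal sketch of checking a downloaded file for Content Credentials. It assumes the open-source `c2patool` command-line utility from the Content Authenticity Initiative is installed and on the PATH; its output format can vary between versions, so treat this as an illustration rather than a production check.

```python
# Sketch: check a media file for Content Credentials (C2PA) provenance data.
# Assumes the open-source `c2patool` CLI is installed and on PATH.
import json
import subprocess
import sys

def read_content_credentials(path: str):
    """Return the C2PA manifest store for `path` as a dict, or None if none is found."""
    result = subprocess.run(
        ["c2patool", path],  # prints the manifest store as JSON when one is present
        capture_output=True,
        text=True,
    )
    if result.returncode != 0 or not result.stdout.strip():
        return None  # no manifest, or the tool could not read the file
    try:
        return json.loads(result.stdout)
    except json.JSONDecodeError:
        return None

if __name__ == "__main__":
    manifest = read_content_credentials(sys.argv[1])
    if manifest is None:
        print("No Content Credentials found; provenance may have been stripped.")
    else:
        print("Content Credentials present:")
        print(json.dumps(manifest, indent=2))
```

A platform pipeline could run an equivalent check at upload time and attach a visible label whenever a manifest is found.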
Why This Is a Big Deal
1. AI Deepfakes Are Getting Too Convincing
Tools like Sora can generate highly realistic video that mimics real people and events. Without clear labeling, it’s nearly impossible for average users to spot fakes — which opens the door to misinformation, fraud, and manipulation.
2. Trust Is on the Line
If users can’t trust what they see online, platforms risk losing credibility. We’re approaching a point where people might begin to question everything, even real footage, creating an “authenticity vacuum.”
3. Policy Isn’t Keeping Up
Although some jurisdictions are beginning to legislate AI disclosure (such as new rules in California), enforcement mechanisms are weak. The rules are only as effective as the systems in place to support them — and currently, those systems are underdeveloped or ignored.
4. Voluntary Compliance Isn’t Working
The experiment clearly shows that relying on platforms to voluntarily implement these safety features is ineffective. Even with access to AI metadata and labeling standards, platforms aren’t proactively protecting their users.
What the Report Didn’t Fully Explore
- User Interface Problems: Even when labels are technically present, they’re often buried in video descriptions or require multiple clicks to access.
- Loss of Metadata: When platforms reprocess or compress uploaded videos, embedded metadata often disappears. There’s no guarantee that provenance data will survive these steps (see the sketch after this list).
- Regional Variations: The test focused on a few major platforms, mostly in the U.S. It’s unknown how international platforms or niche apps perform.
- Accountability: There’s little transparency into how platforms handle synthetic content internally, or what penalties (if any) exist for failing to comply with public promises.
- Lack of User Awareness: Many users don’t know how to interpret labels like “synthetic,” even when they are present.
- Platform Incentives: Engaging content — even if fake — often drives views and ad revenue. Without pressure, platforms have little incentive to police it.
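To make the metadata-loss point concrete, the sketch below re-encodes a video the way a processing pipeline might and compares the container metadata before and after. It assumes ffmpeg and ffprobe are installed; the file names are placeholders, and a real platform pipeline is far more complex, but the effect (embedded metadata vanishing after a transcode) is the same.

```python
# Sketch: show how an ordinary re-encode can drop embedded metadata,
# similar to what a platform's transcoding pipeline might do.
# Assumes ffmpeg and ffprobe are installed; file names are placeholders.
import json
import subprocess

def format_tags(path: str) -> dict:
    """Return the container-level metadata tags ffprobe reports for `path`."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json", "-show_format", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out).get("format", {}).get("tags", {})

# Re-encode the video and discard global metadata, as many pipelines effectively do.
subprocess.run(
    ["ffmpeg", "-y", "-i", "input.mp4", "-map_metadata", "-1",
     "-c:v", "libx264", "-c:a", "aac", "reencoded.mp4"],
    check=True,
)

print("Before:", format_tags("input.mp4"))
print("After: ", format_tags("reencoded.mp4"))  # typically empty or near-empty
```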
What Users Can Do Right Now
- Pause Before You Share: Especially if a video is shocking, controversial, or emotionally charged.
- Look for Context Clues: Are there sources? Comments? Independent reporting to verify it?
- Check for Labels or Metadata: If available, review the video’s description or platform notices for AI disclosures.
- Use Reverse Image Search: To check whether the footage or still frames appear in unrelated contexts elsewhere (a frame-extraction sketch follows this list).
- Advocate for Better Tools: Call on platforms to make AI labeling more transparent and user-friendly.
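If you want to try the reverse-image-search tip on a video, the sketch below pulls a few still frames that can then be dropped into a search engine such as Google Images or TinEye. It assumes ffmpeg is installed; the file name and timestamps are placeholders.

```python
# Sketch: grab a few still frames from a video for reverse image search.
# Assumes ffmpeg is installed; "suspect.mp4" and the timestamps are placeholders.
import subprocess

VIDEO = "suspect.mp4"
TIMESTAMPS = ["00:00:01", "00:00:05", "00:00:10"]  # points in the clip to sample

for i, ts in enumerate(TIMESTAMPS, start=1):
    subprocess.run(
        ["ffmpeg", "-y",
         "-ss", ts,          # seek to the timestamp
         "-i", VIDEO,
         "-frames:v", "1",   # extract a single frame
         f"frame_{i:02d}.jpg"],
        check=True,
    )
    print(f"Saved frame_{i:02d}.jpg at {ts}")
```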
Frequently Asked Questions (FAQs)
1. What is Content Credentials?
Content Credentials is a digital standard that embeds metadata into media files to record how, where, and with what tool the content was created or modified. It’s designed to help verify authenticity.
2. Why didn’t the platforms display any warnings?
Either they stripped the metadata during upload, lacked the infrastructure to display it, or chose not to show it. In some cases, internal detection tools may exist but aren’t surfaced to end-users.
3. Can AI-generated videos always be detected?
Not reliably. Some AI tools include invisible watermarks or metadata, but these are often removed during uploading or sharing. As AI improves, detection gets harder.
4. Is this a threat only during elections or crises?
No. Deepfakes can be used for fraud, impersonation, harassment, and viral hoaxes in everyday life — from celebrity fakes to scam calls.
5. Are platforms required by law to label AI content?
Not universally. Some jurisdictions are introducing laws, but there is no global standard, and enforcement is currently limited.
6. Can you tell a deepfake just by watching?
Sometimes — look for odd lighting, lip-sync issues, strange eye movements. But modern tools are getting better, and even trained eyes can be fooled.
7. What should platforms be doing?
They should retain metadata, flag AI-generated content clearly, create public policies on synthetic media, and educate users on how to identify fake content.
8. What about watermarking AI content?
Some tools embed visible or invisible watermarks, but these are easy to crop or compress away. They’re not foolproof unless integrated with platform-level enforcement.
9. Does labeling actually help?
Yes — when it’s visible and understandable. Clear warnings can reduce the spread of false content, especially if platforms slow its distribution algorithmically.
10. What’s the long-term solution?
A combination of strong platform accountability, enforceable regulation, public education, and robust detection tools that work across devices and media formats.
Final Thoughts
This investigation highlights a fundamental gap between what tech platforms promise and what they actually deliver. As synthetic content becomes more common, transparency and verification must become non-negotiable.
If platforms want to preserve trust in the digital public square, they need to do more than embed metadata — they must surface it, explain it, and empower users to act on it.
Until then, it’s up to all of us to stay alert, ask questions, and think twice before believing what we see on screen.

Source: The Washington Post


