Last week, former CNN correspondent Jim Acosta released a deeply controversial “interview” featuring an AI-generated avatar of Joaquin Oliver—a teenager tragically killed in the 2018 Parkland school shooting. Created by his parents and designed to echo Joaquin’s voice and persona, the segment was meant to raise awareness about gun violence and honor what would have been his 25th birthday.
While Joaquin’s father saw it as a meaningful form of advocacy, public response was immediate and sharp. Words like “ghoulish,” “unsettling,” and “exploitative” flooded social media, with critics arguing this was a dangerous blurring of the boundary between memory and manipulation.

The Deeper Ethical Dilemmas at Play
1. Journalistic Integrity and Human Respect
Classic journalistic ethics emphasize minimizing harm and treating vulnerable subjects with dignity. Applying that lens here, an AI interview with a deceased child, however well-intentioned, can cross lines of respect and authenticity, especially when it presents responses the real Joaquin never gave.
2. Public Trust and Media Responsibility
Part of a journalist’s duty is transparency. Audiences need clarity—not only that a segment features AI, but also who created it and why. Without clear framing, such content risks misleading people—or fueling conspiracy theories that “anything could be fake.”
3. Precedent for Exploitation
While this case was rooted in personal grief and advocacy, it raises concerns about misuse: What if future AI avatars become commercialized memorials, PR tools, or instruments of emotional manipulation embedded in mainstream media?
Ethical Frameworks & Guidelines
- The ‘do no harm’ principle—especially when dealing with trauma—is foundational in media ethics. This means avoiding scenarios that reopen wounds for grieving families or the public.
- Emerging frameworks for AI in journalism emphasize transparency, accountability, and human oversight. Journalists must clearly disclose AI use, be watchful of its limitations, and ensure editorial control remains with humans.
- When minors or sensitive subjects are involved, guidelines (such as those from UNICEF) demand extra caution—prioritizing rights, consent, and emotional safety over sensational content.
Frequently Asked Questions
Q: Was Joaquin’s AI avatar created to deceive viewers?
No. His parents created it out of love, to keep his voice and message alive. Still, when a journalist presents such an avatar, the publication must clearly disclose its artificial nature.
Q: Could AI avatars be more widely used for advocacy?
Yes—they already are. But while advocacy can benefit from emotion, the use of AI avatars must not erode the boundary between symbolic representation and a troubling digital resurrection.
Q: Did Acosta cross an ethical line as a journalist?
Many believe so. While Acosta followed a request from the family, his role required careful ethical framing and disclosure—which critics say was lacking.
Q: What should newsrooms do moving forward?
They must invest in AI ethics training, adhere to transparency standards, and refuse to publish AI-generated interviews with deceased individuals unless it’s done with full context and respect.
Q: Is it always wrong to use AI to recreate loved ones?
Not inherently. For private grieving, it may offer solace. But turning such AI avatars into public media content carries a high ethical burden—one that demands scrutiny and restraint.
Final Thoughts
This episode isn’t just about one journalist or one grieving family—it’s a crossroads for journalism in the AI era. As this technology evolves, society must define what counts as respectful remembrance—and what crosses into emotional exploitation.
Grief is universal, but journalism must never leverage it without conscience.

Source: The Guardian


