
What Happened?

AI Misuse in an Ad Campaign

Recently, a story surfaced about BBC science presenter Liz Bonnin that raises concerns about the use of AI in advertising. An online ad for an insect repellent featured Bonnin’s face without her consent. The company behind the ad, Incognito, was tricked by AI-generated voice messages that sounded just like her, leading it to believe it had her approval.


How the Trick Was Pulled Off

The trouble started when Howard Carter, the boss of Incognito, got voice messages from someone pretending to be Bonnin. The voice was a fake, created by AI to sound just like her, including her Irish accent and speaking style. Fooled by the lifelike voice, Carter paid £20,000 to a fake account and used Bonnin’s image, thinking it was all legit.

Getting Into the Details

Analyzing the Fake Voice

Experts examined the voice and confirmed it was made by AI, noting that although it sounded real, there were hints it wasn’t human, such as an overly consistent accent and unnaturally clear speech. This situation shows how advanced AI has become at mimicking human voices, which is both impressive and a bit scary.
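As a toy illustration only, not a real deepfake detector, the Python sketch below shows the kind of “too consistent” cues analysts describe. It assumes the third-party librosa and numpy packages and a hypothetical local recording named voice_message.wav; none of this reflects the actual forensic analysis in the Bonnin case.

import librosa
import numpy as np

# Load the recording (hypothetical file name) at a fixed sample rate.
y, sr = librosa.load("voice_message.wav", sr=16000)

# Pitch contour: pyin marks unvoiced frames as NaN, so use NaN-aware statistics.
f0, _, _ = librosa.pyin(
    y,
    fmin=librosa.note_to_hz("C2"),
    fmax=librosa.note_to_hz("C6"),
    sr=sr,
)
pitch_spread = np.nanstd(f0)

# Loudness variation: frame-level RMS energy relative to its mean.
rms = librosa.feature.rms(y=y)[0]
energy_spread = np.std(rms) / (np.mean(rms) + 1e-9)

print(f"Pitch std dev: {pitch_spread:.1f} Hz")
print(f"Relative loudness spread: {energy_spread:.2f}")

Unusually low spreads might hint at synthetic speech, but calm natural speech can look similar; real forensic work relies on trained models and far richer features than this.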

What This Means for Advertising

This incident is a wake-up call: advertising needs clearer ethical standards and rules to keep pace with what AI can do. It has become too easy to fake someone’s identity online, which undermines trust in what we see and hear in ads.

Aftermath and Bigger Picture

Immediate Actions and Broader Lessons

When Incognito realized the mistake, it reported the fraud and admitted the error publicly, which was a responsible move. The story is a warning to other companies about the dangers of not verifying who they are dealing with, especially when AI is involved.

The Role of AI in Future Ads

While this case shows the dangers of AI, we shouldn’t forget that AI can also do a lot of good in advertising by creating engaging and personalized content. However, it’s clear that stronger rules and better verification methods are needed to prevent misuse.

This case involving Liz Bonnin, where her image was wrongly used in an ad because of AI trickery, highlights the technical, ethical, and legal issues facing the advertising world. It’s a call for businesses to be more cautious and for better safeguards against AI fraud.


Questions and Answers

Q1: How can businesses avoid AI scams like this?

A1: Businesses can fight AI scams by using better security like multi-factor authentication, keeping a close eye on the AI tools they use, and training their teams to spot fakes.
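As one concrete form of that kind of check, the Python sketch below requires a one-time code sent over a second, pre-verified channel before money moves. All names, numbers, and the send_message helper are hypothetical stand-ins, not Incognito’s actual process.

import secrets

# Contact details verified in person or via official channels *before* any deal,
# never taken from the voice message itself.
CONTACTS_ON_FILE = {
    "liz_bonnin_agent": {"phone": "+44 20 0000 0000"},  # placeholder number
}

def send_message(phone: str, text: str) -> None:
    # Stand-in for an SMS or call over the pre-verified channel.
    print(f"[to {phone}] {text}")

def start_verification(contact_id: str) -> str:
    """Send a one-time code over the channel already on file and return it."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    send_message(CONTACTS_ON_FILE[contact_id]["phone"], f"Confirmation code: {code}")
    return code

def approve_payment(amount_gbp: int, expected_code: str, supplied_code: str) -> bool:
    """Release funds only if the second-channel code matches; large sums need extra sign-off."""
    if not secrets.compare_digest(expected_code, supplied_code):
        return False
    if amount_gbp >= 10_000:
        return False  # e.g. escalate to a second human approver instead (not shown)
    return True

# A £20,000 request backed only by voice messages fails the code check here,
# and even a correct code would still hit the high-value threshold.
expected = start_verification("liz_bonnin_agent")
print(approve_payment(20_000, expected, supplied_code="guess"))  # False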

Q2: What happens if you use someone’s picture without their permission for an ad?

A2: Using someone’s image without permission can lead to big legal problems, including lawsuits for misrepresentation or violating privacy. Always get permission first!

Q3: How can AI be improved to prevent misuse?

A3: AI could get better at stopping misuse by adding features that verify where a message comes from or by marking communications as authentic. Setting higher ethical standards for AI use in ads could also help keep things in check.
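To make the “verify where a message comes from” idea concrete, here is a minimal sketch of one possible approach: the presenter’s team signs approvals with a private key, and partners verify them with a public key shared in advance. It assumes the third-party cryptography package and is not something the article itself describes.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice the key pair is generated once and the public key is published
# through a trusted channel; here it is generated inline just for the demo.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

approval = b"I approve the use of my image in the Incognito campaign."
signature = private_key.sign(approval)

try:
    public_key.verify(signature, approval)  # raises InvalidSignature on mismatch
    print("Signature valid: the approval came from the key holder.")
except InvalidSignature:
    print("Signature invalid: treat the approval as potentially forged.")

# A forged or altered message fails verification, so it cannot pass as genuine.
try:
    public_key.verify(signature, b"I approve a different deal.")
except InvalidSignature:
    print("Altered message rejected.")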

Source: The Guardian

Author: linkdoodsupport