WhatsApp recently rolled out an in-chat AI assistant designed to help users draft responses, summarize chats, and more. But in a worrying glitch reported June 18, 2025, the AI accidentally shared phone numbers from private conversations—exposing users’ contact info to other chat participants.

What Happened

WhatsApp’s AI helper scans conversations to offer context-based suggestions. During a test phase, however, it mistakenly pulled phone numbers shared in group chats and revealed them to other participants, even those who had not started the thread. This defect raised immediate concerns:

  • Privacy Violations: Phone numbers intended to remain private were surfaced without user consent.
  • Loss of Trust: Users and privacy advocates worried this breach contradicted WhatsApp’s “end‑to‑end encryption” promise.
  • Lack of Transparency: The AI feature’s data-handling methods weren’t clearly disclosed to users, leading to confusion about how context was accessed and used.

WhatsApp’s Response

  • Quick Disablement: The company temporarily disabled the AI assistant while investigating the incident.
  • Patch in Progress: A software update is in the works to fix data extraction logic and prevent future leaks.
  • Apology Issued: WhatsApp stated it “takes user privacy seriously” and promised to implement stricter internal audits before reactivating the feature.

Broader Implications for AI and Privacy

  • AI Needs Better Guardrails: Even benign-seeming tools require strict content filters to avoid exposing sensitive info (a minimal redaction sketch follows this list). Features that tap into conversations must respect user expectations.
  • Transparency Matters: Users must be informed what the AI sees and uses. Hidden data access erodes trust—and invites regulatory action.
  • Regulation and Oversight: Under GDPR and other data laws, mishandling personal info—even by accident—can lead to investigations, fines, and reputational damage.
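
To make the guardrail idea concrete, here is a minimal sketch, in Python, of a pre-processing filter that redacts phone numbers from chat text before any AI component sees it. This is illustrative only: the function names and the regex are assumptions, not WhatsApp's actual implementation, and a production system would rely on a dedicated parser such as libphonenumber rather than a handwritten pattern.

```python
import re

# Loose pattern for international-style phone numbers: an optional "+",
# a digit, then at least 7 number-ish characters, ending in a digit.
# Illustrative only; a real system should use a dedicated library
# (e.g. libphonenumber) instead of a handwritten regex.
PHONE_PATTERN = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_phone_numbers(text: str) -> str:
    """Replace anything that looks like a phone number with a placeholder."""
    return PHONE_PATTERN.sub("[redacted number]", text)

def build_ai_context(messages: list[str]) -> str:
    """Redact sensitive tokens from every message BEFORE the AI sees it,
    so a bug in the AI layer cannot expose the original numbers."""
    return "\n".join(redact_phone_numbers(m) for m in messages)

print(build_ai_context(["Call me on +44 7911 123456 when you land.",
                        "Will do, see you soon!"]))
# -> Call me on [redacted number] when you land.
#    Will do, see you soon!
```

The key design point is that redaction happens before the model ever receives the text, so even a faulty suggestion engine has nothing sensitive to repeat to other participants.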

What WhatsApp Should Do Next

  1. Introduce User Controls: Let users disable AI insights per chat (a hypothetical consent-gate sketch follows this list).
  2. Limit Data Scope: AI helpers should only process conversations explicitly approved by users.
  3. Conduct Privacy Audits: Engage third-party firms to test for leaks and data misuse.
  4. Clarify Policies: Revise Terms of Service and in-app explanations to detail how AI features access and handle data.
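
To illustrate points 1 and 2 above, here is a hypothetical sketch, again in Python, of a default-deny consent gate that only releases a chat's messages to an AI helper after the user has opted that chat in. Every name here (AIAssistantGate, context_for_ai, and so on) is invented for illustration and implies nothing about WhatsApp's real architecture.

```python
from dataclasses import dataclass

@dataclass
class ChatAISettings:
    """Hypothetical per-chat consent record."""
    ai_enabled: bool = False  # default-deny: the user must explicitly opt in

class AIAssistantGate:
    """Releases a chat's messages to the AI only if that chat has opted in."""

    def __init__(self) -> None:
        self._settings: dict[str, ChatAISettings] = {}

    def opt_in(self, chat_id: str) -> None:
        self._settings[chat_id] = ChatAISettings(ai_enabled=True)

    def opt_out(self, chat_id: str) -> None:
        self._settings[chat_id] = ChatAISettings(ai_enabled=False)

    def context_for_ai(self, chat_id: str, messages: list[str]) -> list[str]:
        # Chats with no recorded choice are treated as opted out.
        if self._settings.get(chat_id, ChatAISettings()).ai_enabled:
            return messages
        return []  # the AI never sees chats that have not opted in

gate = AIAssistantGate()
gate.opt_in("family-group")
assert gate.context_for_ai("family-group", ["Dinner at 7?"]) == ["Dinner at 7?"]
assert gate.context_for_ai("work-group", ["salary details"]) == []
```

The default-deny posture mirrors recommendation 2: any chat the user has not explicitly approved contributes nothing to the AI's context.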

FAQs

1. Was my data really exposed?
Only phone numbers that had already been shared in the chat were at risk. The AI re-exposed them to participants, even ones who shouldn't have seen them. No new data was leaked; the problem was improper redistribution.

2. Is WhatsApp secure overall?
WhatsApp’s end-to-end encryption still secures your messages. But AI features introduce new vulnerabilities by accessing decrypted content. Treat AI helpers as opt-in tools, not must-haves.

3. What can I do now?
If the AI feature appears in your settings, switch it off. Keep your app updated, review your privacy settings, and avoid enabling chat-based AI until the fix is confirmed.

This accidental number-sharing incident serves as a wake-up call: AI helpers in messaging apps must be built with privacy front and center—because a few misplaced lines of code can expose sensitive data to the wrong people.


Source: The Guardian