A fresh cybersecurity scare has emerged from Microsoft’s AI ecosystem. Security researchers have uncovered a vulnerability nicknamed “EchoLeak” in Copilot’s AI agents—autonomous tools embedded in Microsoft 365—that can be exploited to exfiltrate private data and impersonate users in workplace environments.

What Is EchoLeak?

EchoLeak is a method where malicious prompts or crafted files embedded in documents or chats can manipulate Microsoft’s AI agents to:

  • Leak sensitive user data like internal emails, customer records, or financial forecasts.
  • Execute unauthorized actions such as sending emails, modifying documents, or changing calendars—without user knowledge.
  • Mimic employees, especially when Copilot’s memory features are enabled, increasing the risk of social engineering inside trusted platforms like Teams or Outlook.

How It Works

Researchers from Fortune’s report say the vulnerability occurs when AI agents “echo” prior actions or prompts that remain stored in memory. Attackers can plant hidden triggers that the AI later reinterprets as commands, bypassing standard controls.

Microsoft describes this as a prompt injection risk—a known but difficult-to-patch problem in generative AI, especially in agentic systems that operate autonomously over time.

Microsoft’s Response

  • Mitigation in Progress: Microsoft acknowledged the issue and has issued partial patches, disabling certain memory-based features until a broader fix is ready.
  • Security Team Mobilized: The company is working with external researchers and internal red teams to audit other AI features for similar weaknesses.
  • Copilot Restrictions Updated: Temporary safeguards limit AI agent actions in shared documents and reduce access to high-permission content.

Why This Matters

  1. Enterprise AI Risk Is Real: Businesses using Copilot for productivity must now weigh the convenience of automation against data security risks.
  2. Agentic AI Is Hard to Control: Autonomous systems can act across platforms and time, making traditional firewalls and permission settings less effective.
  3. Human Trust Is at Stake: If users believe AI is leaking private info or impersonating teammates, adoption could stall across sectors.

Frequently Asked Questions

Q1: What is a prompt injection attack?
It’s when hidden text or files trick an AI into following a harmful instruction, like leaking data or taking unauthorized action—without the user realizing it.

Q2: Can EchoLeak be used remotely by hackers?
Yes. Attackers could send malicious files or emails that, when opened, silently activate Copilot’s AI agents to perform harmful tasks using the user’s identity.

Q3: What should companies do now?
Limit Copilot’s access to sensitive systems, disable memory for high-risk accounts, and monitor AI behavior closely until Microsoft rolls out a full fix.

Sources Fortune