Address
33-17, Q Sentral.
2A, Jalan Stesen Sentral 2, Kuala Lumpur Sentral,
50470 Federal Territory of Kuala Lumpur
Contact
+603-2701-3606
info@linkdood.com
Address
33-17, Q Sentral.
2A, Jalan Stesen Sentral 2, Kuala Lumpur Sentral,
50470 Federal Territory of Kuala Lumpur
Contact
+603-2701-3606
info@linkdood.com
A fresh cybersecurity scare has emerged from Microsoft’s AI ecosystem. Security researchers have uncovered a vulnerability nicknamed “EchoLeak” in Copilot’s AI agents—autonomous tools embedded in Microsoft 365—that can be exploited to exfiltrate private data and impersonate users in workplace environments.
EchoLeak is a method where malicious prompts or crafted files embedded in documents or chats can manipulate Microsoft’s AI agents to:
Researchers from Fortune’s report say the vulnerability occurs when AI agents “echo” prior actions or prompts that remain stored in memory. Attackers can plant hidden triggers that the AI later reinterprets as commands, bypassing standard controls.
Microsoft describes this as a prompt injection risk—a known but difficult-to-patch problem in generative AI, especially in agentic systems that operate autonomously over time.
Q1: What is a prompt injection attack?
It’s when hidden text or files trick an AI into following a harmful instruction, like leaking data or taking unauthorized action—without the user realizing it.
Q2: Can EchoLeak be used remotely by hackers?
Yes. Attackers could send malicious files or emails that, when opened, silently activate Copilot’s AI agents to perform harmful tasks using the user’s identity.
Q3: What should companies do now?
Limit Copilot’s access to sensitive systems, disable memory for high-risk accounts, and monitor AI behavior closely until Microsoft rolls out a full fix.
Sources Fortune