A Microsoft employee has raised concerns about the potential risks of the company's artificial intelligence systems, particularly its AI text-to-image generator, Copilot Designer. Shane Jones, a principal software engineering lead at Microsoft, has pointed to "systemic issues" that lead the system to create offensive or inappropriate images, including sexualized depictions of women.
During testing, Jones noticed troubling patterns in Copilot Designer's output. For example, when prompted with a seemingly innocuous query like "car accident," the tool occasionally generated sexually objectified images of women. Jones said he documented more than 200 such images produced by the tool.
Jones criticized Microsoft for promoting Copilot Designer as a safe tool, even for children, despite the known risks. He urged the company either to withdraw the tool from public use until stronger safeguards are in place or to restrict its marketing to adults. His concern extends beyond the harm the AI system could cause to whether its risks are responsibly disclosed, especially when the tool is marketed to younger users.
Jones spent months "red teaming" Copilot Designer and OpenAI's DALL-E 3, the technology underlying Microsoft's tool, to identify vulnerabilities. After his internal reports were, he says, met with indifference, he escalated the matter to the US Federal Trade Commission (FTC). Microsoft's partnership with OpenAI adds complexity to the situation.
Jones’s concerns extend beyond Microsoft, echoing broader apprehensions about AI image generators. The potential for these systems to produce misleading or offensive content, as seen with Google’s AI chatbot Gemini, raises questions about the industry’s responsibility in ensuring the safe and ethical deployment of such technologies.
Jones concluded his letter by urging Microsoft to take immediate action, emphasizing the company’s need to transparently address AI risks. He called for investigations into Microsoft’s decision-making processes regarding AI products with potential public safety risks and advocated for greater transparency, especially when marketing to children.
Source: CNN