After recent criticism of Microsoft Designer's AI, which was used to create fake nude images of Taylor Swift, the technology giant is at the center of a new case.
It was raised by Shane Jones, an artificial intelligence engineer at Microsoft, who says that in early December he discovered a vulnerability in Dall-E 3, the image generation model from OpenAI, a company in which the American giant is the main investor. The model is also used by the Copilot chatbot.
According to Jones, users of tools like Bing Image Creator can exploit the flaw to bypass Dall-E 3's safety protections and create harmful content.
After discovering the flaw, Jones reported the issue to management, but received no response.
At that point, he decided to publish a LinkedIn post explaining what had happened. Only then did Microsoft contact him, asking him to remove the post.
As Engadget reports, Jones believes the flaws he discovered make Dall-E 3 a security threat, which is why he is calling for it to be withdrawn from public access.
A Microsoft spokesperson wrote to Engadget: "We are committed to addressing all employee concerns in accordance with company policies and appreciate the effort in studying and testing our latest technology to further improve its security. When it comes to concerns that could have a potential impact on our services or partners, we have established robust internal reporting channels, the same ones we recommended to the employee, so we can properly validate and test their concerns before they are shared publicly."
Microsoft added that its Office of Responsible AI has created a reporting tool specifically to allow employees to report concerns about AI models.
Reproduction reserved © Copyright ANSA