
Microsoft: AI flaw allows creation of banned images - Future Tech

2024-02-01T12:30:03.070Z





After criticism in recent days of Microsoft Designer's AI, which allowed the creation of fake nude images of Taylor Swift, the technology giant is at the center of a new case.

The case was raised by Shane Jones, an artificial intelligence engineer at Microsoft, who claims to have discovered in early December a vulnerability in DALL-E 3, the image generation model from OpenAI, the company in which the American giant is the main investor. The model is also used by the Copilot chatbot.

According to Jones, users of tools such as Bing Image Creator can use DALL-E 3 to bypass the AI's safeguards and create malicious content.


After discovering the flaw, Jones reported the issue to management, but received no response.

At that point, he decided to publish a post on LinkedIn explaining what had happened.

Only then did Microsoft contact him, asking him to remove the post.

As Engadget reports, Jones argues that the flaws he discovered make DALL-E 3 a security threat, which is why he is calling for it to be removed from public access.

A Microsoft spokesperson wrote to Engadget: "We are committed to addressing all employee concerns in accordance with company policies and appreciate the effort in studying and testing our latest technology to further improve its security. When it comes to concerns that could have a potential impact on services or partners, we have established robust internal reporting channels. The same ones we recommended to the employee, so we can properly validate and test their concerns, before sharing them publicly."

Microsoft added that its Office of Responsible AI has created a reporting tool specifically to allow employees to report concerns about AI models.


Reproduction reserved © Copyright ANSA

Source: ANSA
