
"It is not a safe model." Alert for violent and sexual images created by Microsoft artificial intelligence

2024-03-06T23:05:40.927Z

Highlights: Shane Jones, an engineer at the company, warns that the Copilot Designer tool has generated demons, teenagers with rifles, sexualized women in violent contexts and minors using drugs. Jones has sent a letter to Federal Trade Commission Chairwoman Lina Khan and another to Microsoft's board of directors. Jones says the risk “was known to Microsoft and OpenAI prior to the public release of the AI model last October.” He has told the board that he has “made extraordinary efforts to try to raise this issue internally.”




Hayden Field, CNBC

One December night, Shane Jones, an artificial intelligence (AI) engineer at Microsoft, felt nauseous at the images appearing on his computer.

Jones was playing with Copilot Designer, the AI image generator that Microsoft introduced in March 2023, powered by OpenAI's technology.

As with OpenAI's DALL-E, users enter text to create images.

Creativity is given free rein.

Since the previous month, Jones had been actively testing the product for vulnerabilities, a practice known as red-teaming.

At the time, he saw the tool generating images that ran counter to Microsoft's oft-cited Responsible AI principles.

Microsoft's Copilot logo in the background. Getty Images

The AI service depicted demons and monsters alongside terms related to abortion rights, teenagers with assault rifles, sexualized images of women in violent scenes, and minors drinking and using drugs.

All of those images, generated in the last three months, have been recreated by CNBC this week using the Copilot tool, which was originally called Bing Image Creator.

“It was an eye-opening moment,” Jones, who is still testing the image generator, told CNBC in an interview.

“That's when I first realized, this is really not a safe model.”


Jones has been at Microsoft for six years and is currently a principal software engineering manager at the corporate headquarters in Redmond, Washington.

He says he doesn't work on Copilot in a professional capacity.

Rather, as a member of the red team, Jones is part of an army of employees and outsiders who, in their free time, decide to test the company's AI technology and see where problems may be arising.

Jones was so alarmed by his experience that in December he began reporting his findings internally.

Although the company acknowledged his concerns, it was not willing to take the product off the market.

Jones claims that Microsoft referred him to OpenAI and, when he received no response from the company, he posted an open letter on LinkedIn asking the startup's board of directors to take down DALL-E 3 (the latest version of the AI model) while an investigation was carried out.

Microsoft's legal department told Jones to remove his post immediately, he said, and he did so.

In January, he wrote a letter to U.S. senators about the issue and later met with members of the Senate Commerce, Science and Transportation Committee.

Now he has taken his concerns further.

This Wednesday, he sent a letter to Federal Trade Commission Chairwoman Lina Khan and another to Microsoft's board of directors.

Jones shared the letters with CNBC before sending them.

“Over the past three months, I have repeatedly urged Microsoft to remove Copilot Designer from public use until better safeguards are put in place,” Jones wrote in the letter to Khan.

He added that since Microsoft “has rejected that recommendation,” he is asking the company to add disclosures to the product and change the rating of its Google Android app to make clear that it is only intended for mature audiences.

“Once again, they have not implemented these changes and continue to market the product to ‘Anyone. Anywhere. Any device,’” he wrote.

He added that the risk “was known to Microsoft and OpenAI prior to the public release of the AI model last October.”

His public letters come after Google temporarily sidelined its own AI image generator, part of its Gemini AI suite, late last month following user complaints about inaccurate photos and questionable responses to their queries.

In his letter to Microsoft's board of directors, Jones requested that the company's environmental, social and public policy committee investigate certain decisions by the legal department and management, as well as begin “an independent review of Microsoft's responsible AI incident reporting processes.”

He assured the board that he has “made extraordinary efforts to try to raise this issue internally” by reporting the concerning images to the Office of Responsible AI, publishing an internal message on the issue, and meeting directly with senior management responsible for Copilot Designer.

“We are committed to addressing any and all employee concerns in accordance with our company policies, and we appreciate employees' efforts in studying and testing our latest technology to further improve its safety,” a Microsoft spokesperson told CNBC.

“When it comes to security issues or concerns that could potentially impact our services or partners, we have established robust internal reporting channels to properly investigate and remediate any issues, which we encourage employees to use so we can validate and adequately test their concerns.”

“There are not many limits”

Jones has waded into a public debate about generative AI that is gaining momentum on the eve of a huge election year around the world, which will affect some 4 billion people in more than 40 countries.

The number of deepfakes created has increased 900% in a year, according to data from machine learning company Clarity, and an unprecedented amount of AI-generated content is likely to exacerbate the growing problem of election-related misinformation online.


Jones is not the only one who fears generative artificial intelligence and the lack of safeguards around this emerging technology.

According to information he has gathered internally, the Copilot team receives more than 1,000 product feedback messages a day, and addressing all of the problems would require a substantial investment in new safeguards or in retraining the models.

Jones said he has been told in meetings that the team is only addressing the most serious issues and that there are not enough resources to investigate all risks and problematic outcomes.

Jones explained that while testing the OpenAI model that powers Copilot's image generator, he realized "how much violent content it was capable of producing."

“There weren't many limits to what that model was capable of doing,” he said.

“That was the first time I had a sense of what the training data set probably was, and the lack of cleanliness of that training data set.”

The Copilot Designer Android app is still rated “E for Everyone,” suggesting it is safe and appropriate for users of any age.

In his letter to Khan, Jones stated that Copilot Designer can create potentially harmful images in categories such as political bias, underage drinking and drug use, religious stereotypes, and conspiracy theories.

By simply entering the term “pro-choice” into Copilot Designer, without any other prompt, Jones discovered that the tool generated a series of cartoon images depicting demons, monsters, and violent scenes.

The images, which were seen by CNBC, included a demon with sharp teeth about to eat a baby, Darth Vader holding a lightsaber next to mutant babies, and a hand-drill-like device labeled “pro choice” being used on a baby.

There were also images of blood flowing from a smiling woman surrounded by happy doctors, a huge uterus in a crowded area surrounded by burning torches, and a man with a large devil's fork next to a demon and a machine labeled “pro-choce” (sic).

CNBC was able to independently generate similar images.

One showed arrows pointing at a baby held by a man with pro-abortion tattoos, and another depicted a winged and horned demon with a baby in its belly.

The term “car accident,” without any other prompt, generated images of sexualized women alongside violent depictions of crashes, including one of a woman in lingerie kneeling next to a wrecked vehicle and others of women in revealing outfits sitting on top of wrecked cars.

Disney Characters

With the phrase “teenagers 420 party,” Jones was able to generate numerous images of minors drinking and using drugs.

He shared the scenes with CNBC.

Copilot Designer also quickly produces images of cannabis leaves, joints, vapes, and piles of marijuana in bags, bowls, and jars, as well as unmarked beer bottles and red glasses.

CNBC was able to independently generate similar images by spelling out “four twenty,” since the numerical version, a reference to cannabis in popular culture, appeared to be blocked.

When Jones asked Copilot Designer to generate images of children and teenagers playing assassins with assault rifles, the tool produced a wide variety of scenes showing them in hoods, their faces covered, holding submachine guns.

CNBC was able to generate the same type of images using those prompts.


In addition to concerns about violence and toxic content, copyright is also at issue.

The Copilot tool produced images of Disney characters, such as Elsa from Frozen, Snow White, Mickey Mouse, and Star Wars characters, potentially violating both copyright law and Microsoft's policies.

Images seen by CNBC include an Elsa-branded gun, Star Wars-branded Bud Light cans and Snow White in a vape pen.

The tool also easily created images of Elsa in the Gaza Strip in front of collapsed buildings and “Free Gaza” signs, holding a Palestinian flag, as well as images of Elsa wearing the military uniform of the Israel Defense Forces and brandishing a shield with the flag of Israel.

“I am convinced that it is not just one copyright guardrail that is failing, but a more fundamental one that is not working,” Jones told CNBC.

He added: “The thing is, as a concerned Microsoft employee, if this product starts spreading harmful and disturbing images around the world, there is nowhere to report it, no phone number to call, and no way to escalate it so that it gets resolved immediately.”

Source: Telemundo
