"We will work to improve Gemini's AI, but it is intended as a creativity and productivity tool and may not always be reliable, especially when it comes to generating images or text about current events, breaking news or hot topics. It will make mistakes": Prabhakar Raghavan, senior vice president of Google, acknowledges this in an official post following the company's decision to pause the generation of images of people, which had shown inaccuracies and historical errors, particularly in depictions involving white people.
Raghavan's lengthy explanation confirms what was already apparent: Gemini's artificial intelligence system erred on the side of excessive inclusiveness, so that images of Vikings, Hitler's soldiers or medieval knights showed no white people, only Black or Asian figures.
The model had been "optimized to ensure it doesn't fall into some of the pitfalls we've seen in the past with image generation technology," the executive notes, and "since our users come from all over the world, we want it to work well for everyone." As a result, the AI model "overcompensated, leading to awkward and incorrect images," Raghavan adds.
"As we have said from the beginning," he concludes, "'hallucinations' are a known challenge for all models. It was not our intention to create photos of only one ethnic group or to make historical mistakes. I can promise that we will continue to take action whenever we identify a problem, but I can't promise that Gemini won't occasionally generate embarrassing, inaccurate or offensive results. Artificial intelligence is an emerging technology that is useful in many ways, with huge potential, and we are doing our best to implement it safely and responsibly."
Reproduction reserved © Copyright ANSA