Google CEO Sundar Pichai is putting heat on the internet company's engineers to fix its Gemini AI app pronto, calling some of the tool's responses "completely unacceptable."
The new AI tool, which the company has touted as revolutionary, came under fire after some users asked it to generate images of people drawn from history, such as German soldiers during World War II and popes, who have historically been White and male. Some of Gemini's images depicted Nazi-era soldiers as Black and Asian, and popes as female.
Google has temporarily halted its Gemini image generator following backlash over the AI tool's responses.
"I want to address the recent issues with problematic text and image responses in the Gemini app," Pichai wrote in an email to employees on Tuesday that was first published by Semafor and confirmed by Google. "I know that some of its responses have offended our users and shown bias – to be clear, that's completely unacceptable and we got it wrong."
The hitch in Gemini's image generator represents a setback for Google's push into AI, with the search giant seeking to keep pace with rivals like Microsoft, which offers the competing Copilot AI tool. Last month, Google rebranded Bard, a chatbot introduced a year ago, as Gemini and described the revamped product as its most capable AI model.
Tech companies "say they put their models through extensive safety and ethics testing," Maria Curi, a tech policy reporter for Axios, told CBS News. "We don't know exactly what those testing processes are. Users are finding historical inaccuracies, so it begs the question whether these models are being let out into the world too soon."
In his memo, Pichai said Google employees "have been working around the clock to address these issues. We're already seeing a substantial improvement on a wide range of prompts."
He added, "No AI is perfect, especially at this emerging stage of the industry's development, but we know the bar is high for us and we will keep at it for however long it takes. And we'll review what happened and make sure we fix it at scale."
AI-powered chatbots are also attracting scrutiny for the role they might play in the U.S. elections this fall. A study released on Tuesday found that Gemini and four other widely used AI tools yielded inaccurate election information more than half the time, even steering voters to polling places that don't exist.
Experts have raised concerns that the advent of powerful new forms of AI could result in voters receiving false and misleading information, or even discourage people from going to the polls.
Aimee Picchi is the associate managing editor for CBS MoneyWatch, where she covers business and personal finance. She previously worked at Bloomberg News and has written for national news outlets including USA Today and Consumer Reports.