For attendees of the trend-setting tech festival here, the scandal that erupted after Google's Gemini chatbot generated images of Black and Asian Nazi soldiers was taken as a warning, a stark reminder of the power artificial intelligence could give tech giants.
Google CEO Sundar Pichai said last month that the errors made by the company's Gemini AI app were “totally unacceptable” after gaffes, including the images of ethnically diverse Nazi troops, forced it to temporarily stop users from generating pictures of people.
Social media users mocked and criticized Google for the historically inaccurate images, including one showing a Black female U.S. senator from the 1800s; the first such senator was not elected until 1992.
“We definitely failed at image generation,” Google co-founder Sergey Brin said at a recent AI “hackathon,” adding that the company should have tested Gemini more thoroughly.
People interviewed at South by Southwest, the popular arts and technology festival in Austin, said Gemini's stumble highlighted the outsized power a handful of companies wield over the artificial intelligence platforms that are changing the way people live and work.
Joshua Weaver, a lawyer and tech entrepreneur, said Google had essentially gone “too woke,” meaning the company went overboard in its effort to project inclusion and diversity.
Google quickly corrected the error, but the fundamental problem remains, said Charlie Burgoyne, CEO of the Valkyrie Institute of Applied Sciences in Texas.
He equated Google's Gemini fix to putting a Band-Aid on a bullet wound.
Weaver noted that Google long had the luxury of time to refine its products, but it is now locked in an AI race with Microsoft, OpenAI, Anthropic and others. “We're moving faster than we know we should be,” he added.
Mistakes made in an effort at cultural sensitivity are flashpoints, particularly given America's tense political divisions, a situation exacerbated by Elon Musk's X platform, formerly Twitter.
“People on Twitter are gleefully celebrating an embarrassing thing that happened in the tech industry,” Weaver said, adding that the reaction to the Nazi gaffe was “overblown.”
Even so, he argued, the episode raised questions about the degree of control that those using AI tools have over information.
In the coming decade, the amount of information, or misinformation, generated by AI could dwarf that produced by humans, meaning that those who control AI safeguards will have enormous influence on the world, Weaver said.
Karen Palmer, an award-winning mixed-reality creator at Interactive Film, said she could imagine a future in which someone gets into a robotaxi and, “if the AI scans you and determines that there are outstanding violations against you … you will be taken to the local police station,” not the intended destination.
AI is trained on mountains of data and is being put to work on a growing range of tasks, from generating images and audio to deciding who gets a loan or whether a medical scan detects cancer.
But that data comes from a world rife with cultural bias, misinformation, and social inequality, not to mention online content that can range from casual chats between friends to intentionally exaggerated and provocative posts, and AI models can echo those flaws.
With Gemini, Google engineers sought to rebalance the algorithm to provide results that better reflect human diversity.
The effort backfired.
“Understanding where bias is and how it's involved can be very difficult and nuanced,” said technology lawyer Alex Shahrestani, managing partner of Promise Legal, a law firm for technology companies.
He and others believe that even well-meaning engineers training AI can't help but bring their own life experiences and subconscious biases to the process.
Valkyrie's Burgoyne also accused big tech companies of hiding the inner workings of their generative AI in “black boxes” that prevent users from detecting hidden biases.
“The power of the deliverables far exceeds our understanding of the methodology,” he said.
Experts and activists are calling for greater diversity on the teams that create AI and related tools, and for greater transparency about how those tools work, particularly when algorithms rewrite user requests to “improve” results.
The challenge is how to appropriately build in the perspectives of the world's many diverse communities, said Jason Lewis of the Indigenous Futures Resource Center and related groups.
At Indigenous AI, Lewis works with remote Indigenous communities to design algorithms that use their data ethically while reflecting their perspectives on the world, an approach he does not see in what he called the “arrogance” of big tech leaders.
His work, he told the group, stands “in stark contrast to the rhetoric in Silicon Valley, where there's this top-down, ‘Oh, we're doing this because it benefits all of humanity’ kind of shit.”
His audience laughed.