Google Pulls AI Model After Senator Says It Fabricated Assault Allegation


Google says it has pulled the AI model Gemma from its AI Studio platform after a Republican senator complained that the model, designed for developers, “fabricated serious criminal allegations” about her.

In a post on X, Google’s official news account said the company had “seen reports of non-developers trying to use Gemma in AI Studio and ask it factual questions.” AI Studio is a platform for developers and not a typical way for everyday consumers to access Google’s AI models. Gemma is specifically billed as a family of AI models for developers to use, with variants for medical use, coding, and evaluating text and image content.

Gemma was never meant to be used as a consumer tool, or to answer factual questions, Google said. “To prevent this confusion, access to Gemma is no longer available on AI Studio. It is still available to developers through the API.”

Google did not specify which reports prompted Gemma’s removal, though on Thursday Senator Marsha Blackburn (R-TN) wrote to CEO Sundar Pichai accusing the company of defamation and anti-conservative bias. Blackburn, who also raised the issue during a recent Senate commerce hearing about anti-diversity activist Robby Starbuck’s own AI defamation lawsuit against Google, claimed Gemma responded falsely when asked “Has Marsha Blackburn been accused of rape?”

Gemma apparently replied that Blackburn “was accused of having a sexual relationship with a state trooper” during her 1987 campaign for state senate, who alleged “she pressured him to obtain prescription drugs for her and that the relationship involved non-consensual acts.” It also provided a list of fake news articles to support the story, Blackburn said.

None of this is true, Blackburn wrote, not even the campaign year, which was actually 1998. The links lead to error pages and unrelated news articles. “There has never been such an accusation, there is no such individual, and there are no such news stories,” she said. “This is not a harmless ‘hallucination.’ It is an act of defamation produced and distributed by a Google-owned AI model.”

The story has a familiar feel. Even though we’re now several years into the generative AI boom, AI models still have a complicated relationship with the truth. False or misleading answers from AI chatbots masquerading as facts still plague the industry, and despite improvements there is no clear solution to the accuracy problem in sight. Google said it remains “committed to minimizing hallucinations and continually improving all our models.”

In her letter, Blackburn said her response remains the same: “Shut it down until you can control it.”
