Montana’s Attorney General, Austin Knudsen, has raised concerns regarding Google’s artificial intelligence system, “Gemini.” Knudsen believes the system may breach Montana law by providing inaccurate information to users in line with Google’s political preferences without disclosing that fact.

Knudsen has written a letter to Google’s CEO and chief legal officer outlining his concerns that Gemini intentionally provides inaccurate information that furthers Google’s political agenda, even as Google claims to provide “high-quality and accurate” information to users.

The AI system may violate Montana’s Unfair Trade Practices and Consumer Protection Act (UTPCPA) and the Montana Human Rights Act (MHRA) by discriminating based on protected characteristics. Attorney General Knudsen has asked Google to respond to 15 questions regarding Gemini’s alleged biases by March 29.

“Google has offered Gemini to consumers in Montana and has represented that its goal is to create an AI system that provides ‘high-quality and accurate’ information. But behind the scenes, Google appears to have deliberately intended to provide inaccurate information, when those inaccuracies fit with Google’s political preferences. This fact was not disclosed to consumers,” Attorney General Knudsen explained in a press release. “These representations and omissions may implicate the UTPCPA. Furthermore, if Google directed employees to build an AI system that discriminates based on race or other protected characteristics, that could implicate civil rights laws, including constituting a hostile work environment.”

According to news reports, Gemini has generated inaccurate responses depicting the Founding Fathers, U.S. Senators of the 1800s, and famous physicists of the 1600s as races and genders other than their own. The system has also refused to create pictures of white families while creating pictures of families of other races; refused to say whether Hamas is a terrorist organization while providing unambiguous answers about Israeli violence against Palestinians; and refused to provide information about the Tiananmen Square Massacre.

The White House’s “Blueprint for an AI Bill of Rights” stated that developers should “conduct proactive equity assessments” for specific groups including “Black, Latino, and Indigenous and Native American persons, Asian Americans and Pacific Islanders and other persons of color; members of religious minorities; women, girls, and non-binary people; lesbian, gay, bisexual, transgender, queer, and intersex (LGBTQI+) persons; older adults; persons with disabilities; persons who live in rural areas; and persons otherwise adversely affected by persistent poverty or inequality,” omitting groups such as Jews, Whites, men, boys, and heterosexuals. Gemini’s output, such as its refusal to discuss the Tiananmen Square Massacre, is also consistent with pressure from the Chinese government or its affiliates.
