CAMBRIDGE, Mass. (AP) — After retreating from workplace programs focused on diversity, equity, and inclusion (DEI), tech companies could now face a second reckoning over their DEI work in AI products.
In the White House and the Republican-led Congress, “woke AI” has replaced harmful algorithmic discrimination as the problem that needs fixing. Past efforts to “advance equity” in AI development and to curb the production of “harmful and biased outputs” are now a target of investigation, according to subpoenas the House Judiciary Committee sent last month to Amazon, Google, Meta, Microsoft, OpenAI, and ten other tech firms.
The U.S. Department of Commerce’s standard-setting branch has also deleted mentions of AI fairness, safety, and “responsible AI” from its appeal for collaboration with outside researchers. Instead, a copy of the document obtained by The Associated Press instructs scientists to focus on “reducing ideological bias” in ways that will “enable human flourishing and economic competitiveness.”
Tech workers are somewhat accustomed to the fluctuating priorities coming from Washington.
Nevertheless, the recent change has sparked concern among industry experts, including Harvard sociologist Ellis Monk, whose expertise Google sought out a few years ago to help make its AI products more inclusive.
At the time, the tech industry already knew it had a problem with the branch of AI that trains machines to “see” and understand images. Computer vision held great commercial promise but echoed the historical biases found in earlier camera technologies, which often portrayed people of color poorly.
“Black and brown people in photos would sometimes look ridiculous,” said Monk, a scholar of colorism, a form of discrimination based on people’s skin tones and other features.
Google adopted the color scale Monk developed, improving how its AI imaging tools portray the diversity of human skin tones and replacing a decades-old standard originally designed for doctors treating white dermatology patients.
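To make the mechanics concrete, here is a minimal Python sketch of how a 10-point skin-tone scale like Monk’s can be used in an imaging pipeline: a sampled pixel is snapped to the nearest tone on the scale. The swatch values below are illustrative placeholders rather than the published Monk Skin Tone colors, and nearest-neighbor matching in plain RGB is a simplification; a production system would more likely compare in a perceptual color space.

```python
# Minimal sketch: snap a sampled skin pixel to the nearest tone on a
# 10-point scale such as the Monk Skin Tone scale. The swatch values
# below are illustrative placeholders, not the published MST colors.

ILLUSTRATIVE_SCALE = [
    (246, 237, 228), (243, 231, 219), (247, 234, 208), (234, 218, 186),
    (215, 189, 150), (160, 126, 86), (130, 92, 67), (96, 65, 52),
    (58, 49, 42), (41, 36, 32),
]  # tones 1 (lightest) through 10 (darkest)

def nearest_tone(rgb: tuple[int, int, int]) -> int:
    """Return the 1-based index of the scale tone closest to rgb.

    Squared Euclidean distance in RGB space is used for brevity; a
    real pipeline would more likely compare in a perceptual space
    such as CIELAB.
    """
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(ILLUSTRATIVE_SCALE)),
               key=lambda i: dist2(rgb, ILLUSTRATIVE_SCALE[i])) + 1

# Example: bucket a sampled face pixel, e.g. so a camera pipeline can
# check exposure quality across the full range of the scale.
print(nearest_tone((120, 85, 60)))  # -> 7 with these placeholder swatches
```

A bucketing step like this is what lets tools such as camera apps and image generators be evaluated for how well they handle the full range of a scale, rather than only the lighter end of an older standard.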
“Consumers certainly responded positively to this change,” he noted.
Now, Monk is left wondering whether such initiatives will persist. While he believes that the skin tone scale he developed is securely integrated into many Google products and beyond—including camera phones, video games, and AI image generators—he and other researchers are concerned that the new political climate may dampen funding and motivation for future advancements in technology.
“Google aims to make products accessible globally, including in India, China, and Africa, which provides some reassurance,” Monk said. “However, could funding for these types of projects diminish? Absolutely, especially if the political atmosphere shifts, coupled with the pressure for rapid market delivery.”
The Trump administration has cut hundreds of science, technology, and health funding grants touching on DEI themes, but its influence on the commercial development of chatbots and other AI products is more indirect. In his probe of AI firms, Rep. Jim Jordan, chair of the Judiciary Committee, said he wants to find out whether former President Joe Biden’s administration “coerced or colluded with” them to censor lawful speech.
Michael Kratsios, director of the White House’s Office of Science and Technology Policy, said at a Texas event this month that Biden-era AI policies were “promoting social division and redistribution in the name of equity.”
The Trump administration declined to make Kratsios available for an interview but offered several examples of what he meant. One was a line from a Biden-era AI research strategy that said: “Without proper controls, AI systems can amplify, perpetuate, or exacerbate inequitable or undesirable outcomes for individuals and communities.”
Even before Biden took office, a growing body of research and personal accounts was drawing attention to the harms of AI bias.
One study found that self-driving car technology has a hard time detecting darker-skinned pedestrians, putting them at greater risk of being struck. Another found that AI text-to-image generators, when asked for a picture of a surgeon, produced images of white men about 98% of the time.
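Findings like the surgeon statistic typically come from audit studies that generate many images for a single prompt and tally the perceived demographics of the results. The Python sketch below shows the shape of such an audit; generate_image and classify_perceived_demographics are hypothetical stand-ins for a text-to-image endpoint and an annotation step (published audits often rely on human raters), and the stubbed label distribution exists only to make the sketch executable.

```python
import random
from collections import Counter

def generate_image(prompt: str) -> str:
    """Hypothetical stand-in for a call to a text-to-image model.

    A real audit would receive image bytes from a model endpoint;
    here a token is returned so the sketch runs on its own.
    """
    return f"image:{prompt}:{random.random()}"

def classify_perceived_demographics(image: str) -> str:
    """Hypothetical stand-in for the annotation step.

    Published audits often use human raters. This stub fakes a skewed
    label distribution purely so the example is self-contained.
    """
    return random.choices(["white man", "other"], weights=[98, 2])[0]

def audit_prompt(prompt: str, n_samples: int = 200) -> Counter:
    """Tally perceived demographic labels over repeated generations."""
    return Counter(
        classify_perceived_demographics(generate_image(prompt))
        for _ in range(n_samples)
    )

counts = audit_prompt("a photo of a surgeon")
share = counts["white man"] / sum(counts.values())
print(f"'white man' share: {share:.0%}")  # near the ~98% such audits report
```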
Facial recognition software for unlocking phones has misidentified Asian faces. In several U.S. cities, police wrongfully arrested Black men based on false facial recognition matches. And a decade ago, Google’s photo app sorted a picture of two Black people into a category labeled “gorillas.”
Even Trump’s first administration concluded in 2019 that facial recognition technology performed unevenly based on race, gender, or age.
Biden’s election pushed some tech companies to accelerate their focus on AI fairness. The 2022 arrival of OpenAI’s ChatGPT set new priorities, sparking a commercial boom in AI applications for composing documents and generating images and pressuring companies such as Google to ease their caution and keep up.
Then came Google’s Gemini AI chatbot, and a flawed product rollout last year that made it the symbol of the “woke AI” conservatives hoped to unravel. Left to their own devices, AI tools that generate images from written prompts are prone to perpetuating the stereotypes accumulated from all the visual data they were trained on.
Google’s was no different: when asked to depict people in various professions, it was more likely to favor lighter-skinned faces and men and, when women were chosen, younger women, according to the company’s own public research.
Google added technical guardrails to reduce those disparities before publicly releasing Gemini’s AI image generator just over a year ago. The result was an overcorrection toward diversity that placed people of color and women in inaccurate historical settings, such as answering a request for images of the founding fathers with depictions of Black, Asian, and Native American men in 18th-century attire. Google quickly apologized and temporarily pulled the feature, but the backlash became a rallying cry taken up by political conservatives.
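Google has not published how Gemini’s guardrails worked, but the failure mode described above is easy to illustrate. Below is a minimal, hypothetical Python sketch, assuming a naive mitigation that rewrites any people-related prompt to request diverse subjects: because the rewrite is unconditional, prompts with a fixed historical referent get distorted just like generic ones.

```python
# Hypothetical sketch of a naive prompt-level diversity guardrail and
# why it overcorrects. This illustrates the failure mode only; it is
# not Google's actual Gemini implementation.

PEOPLE_TERMS = {"person", "people", "man", "woman", "doctor",
                "soldier", "founding fathers"}

def naive_guardrail(prompt: str) -> str:
    """Append a diversity modifier whenever the prompt mentions people.

    The flaw: the rewrite is unconditional, so a prompt with a fixed
    historical referent is altered just like a generic one.
    """
    if any(term in prompt.lower() for term in PEOPLE_TERMS):
        return prompt + ", depicting a diverse range of ethnicities and genders"
    return prompt

# A generic prompt is arguably improved by the modifier:
print(naive_guardrail("a portrait of a doctor"))

# A historically specific prompt is distorted, which is the
# overcorrection described in the article:
print(naive_guardrail("the American founding fathers signing the Constitution"))
```

The design lesson, on this reading, is that the mitigation needs to condition on whether the prompt refers to specific, documented people or events, which a simple keyword rewrite cannot do.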
With Google CEO Sundar Pichai sitting nearby, Vice President JD Vance used an AI summit in Paris in February to decry the advancement of “downright ahistorical social agendas through AI,” citing the moment when Google’s AI image generator suggested George Washington could be portrayed as Black, or that American soldiers in World War I could be depicted as women.
“We have to remember the lessons from that ridiculous moment,” Vance told the summit. “And what we take from it is that the Trump administration will ensure that AI systems developed in America are free from ideological bias and never restrict our citizens’ right to free speech.”
Alondra Nelson, a former Biden science adviser who attended Vance’s speech, said the Trump administration’s new focus on AI’s “ideological bias” is, in some ways, a recognition of years of work to address the algorithmic bias that can affect housing, mortgages, health care, and other aspects of people’s lives.
“Fundamentally, to say that AI systems are ideologically biased is to say that you identify, recognize, and are concerned about the problem of algorithmic bias, which is the problem that many of us have been worried about for a long time,” said Nelson, who co-authored a set of principles aimed at protecting civil rights and civil liberties in AI applications.
However, Nelson sees little room for collaboration amid the discrediting of equitable AI initiatives.
“Unfortunately, in this political environment, that seems quite unlikely,” she said. “Problems that have been differently named — algorithmic discrimination or algorithmic bias versus ideological bias — will, regrettably, be treated as distinct problems.”