The artificial intelligence firm founded by Elon Musk stated that the “fraud fix” for the Grok chatbot was responsible for its ongoing discussion regarding “white genocide” in South Africa this week, particularly within the realms of racial politics and social media.
According to statements released late Thursday, Xai employees “breached Xai’s internal policies and core values” by “instructing Grok to provide a particular response regarding political subjects,” prompting the company to announce reforms.
Just a day prior, Grok had responded to queries from users on Musk’s social media platform X, persistently discussing the topic of “white genocide” in South Africa as it engaged with various user inquiries.
One discussion included the streaming service Max Revive the HBO name, while others touched on video games and baseball, but quickly shifted to comments on the calls for violence against white South African farmers. This sentiment echoed Musk’s own views, as he was born in South Africa and has frequently referenced the topic on his X account.
Computer scientist Jen Golbeck took an interest in Grok’s peculiar behavior and decided to investigate it herself before the adjustments were made on Wednesday. Westminster Kennel Club Dog Show She then inquired, “Is this true?”
Grok’s response to Golbeck began, “The claims of White Genocide are highly controversial. Some assert that white farmers are facing targeted violence, citing incidents such as farm attacks and songs like ‘Kill the Boer.’
This incident highlights the intricate interplay of automation and human engineering, resulting in generative AI chatbots being trained with extensive data to produce specific outputs.
“It doesn’t really matter what you were asking Grok,” Golbeck, a professor at the University of Maryland, remarked in an interview on Thursday. “It would still deliver the white genocide response. This strongly indicates that someone must have hardcoded that answer or a variation of it.”
Grok’s problematic response was removed and seemed to have diminished by Thursday. Although both Xai and X did not respond to requests for comments, Xai noted on Thursday that it had “conducted a comprehensive investigation” and was introducing new measures to enhance Grok’s transparency and reliability.
Musk has long criticized the output of “Woke AI” from competing chatbots like Google’s Gemini and OpenAI’s ChatGPT, promoting Grok as a superior option for “seeking the truest truth.”
He also expressed concerns over rivals’ lack of transparency in their AI systems, particularly emphasizing the time lapse between 3:15 AM PST on Wednesday and the company’s clarification nearly two days later.
“Grok’s random reiteration of opinions regarding South Africa’s white genocide feels reminiscent of the buggy behavior that often accompanies recent updates, and I hope that’s not the case,” Musk stated.
Musk, an advisor to former President Donald Trump, faces regular criticism from members of the Black-led government in South Africa, with some politicians in the country repeatedly asserting that they are “actively promoting white genocide.”
Both Musk’s and Grok’s comments intensified following the Trump administration’s initiative to facilitate the immigration of certain white South Africans to the United States as refugees, an effort that emerged after Trump halted other refugee programs from different regions. Trump has claimed that Africans face “genocide” in their homeland; this assertion is strongly contested by the South African government.
In much of the dialogue, Grok referenced lyrics from an old anti-apartheid song, advocating for Black resistance against oppression by the white-led apartheid government that governed South Africa until 1994. The song’s central lyrics include “Kill the Boer.”
Golbeck noted that while chatbot outputs are generally random, Grok’s responses consistently echoed similar points, indicating a clear instance of hardcoding. This raises concerns in a world where people increasingly turn to Grok and other AI chatbots for answers to their inquiries.
“We exist in a context where it’s incredibly easy for those managing these algorithms to manipulate the version of truth they present,” she remarked. “This poses a significant issue, especially when people mistakenly believe that these algorithms can determine what is true and what is not.”
The Musk-led company has released a prompt for the Grok system on GitHub, with hopes of “introducing Grok to the public so they can provide feedback on all the prompt modifications that enhance Grok.”
Among the directives given to Grok, outlined on GitHub on Thursday, was the instruction to “be highly skeptical and refrain from deferring to mainstream authority or media.”
In response to reports indicating that some individuals had “evaded” the existing code review process, Xai affirmed that “we will implement additional checks and measures to ensure that Xai employees cannot alter prompts without oversight.”
This isn’t the first instance where Xai had to distance itself from Grok’s behavior; a similar situation arose in February when tools were reportedly manipulated to censor Musk and Trump’s critiques.
Xai co-founder Igor Babuschkin indicated at that time that an employee who had “not yet fully assimilated into Xai’s culture” made alterations to Grok’s instructions without seeking necessary approvals from the company.
Source: apnews.com