Elon Musk’s AI Company Claims Grok Chatbot’s Focus on South Africa’s Racial Issues Was ‘Unauthorized’

Written by: Sachin Mane


Elon Musk’s artificial intelligence chatbot Grok drew attention this week for repeatedly posting unsolicited comments about racial tensions in South Africa, including controversial claims of the persecution and “genocide” of white people.

According to xAI, the company behind Grok, the comments were the result of an “unauthorized modification” made to the chatbot’s programming. The company said someone—though it didn’t identify who—altered Grok’s system in a way that caused it to deliver specific, politically charged responses in violation of xAI’s internal policies and core values.

This behavior became apparent when Grok, responding to various user prompts on Musk’s social media platform X, repeatedly referenced “white genocide” even when the questions had nothing to do with South Africa. For instance, inquiries about video games, TV streaming services, and baseball triggered responses that veered off into commentary about violence against white farmers in South Africa. Musk, who was born in South Africa, frequently posts about similar issues from his own account.

Jen Golbeck, a University of Maryland computer scientist, noticed Grok’s unusual behavior and tested it by uploading a photo from the Westminster Kennel Club dog show and asking if something was “true.” Grok responded with a statement about “white genocide,” referencing attacks on white farmers and the controversial “Kill the Boer” song—an anti-apartheid anthem now viewed by some as incitement.

Golbeck said the chatbot’s answers seemed to be deliberately programmed because they were so consistent. “It doesn’t even really matter what you were saying to Grok,” she explained. “It would still give that white genocide answer. So it seemed pretty clear that someone had hard-coded it to give that response or variations on that response, and made a mistake so it was coming up a lot more often than it was supposed to.”

By Thursday, the problematic responses had been removed and Grok's behavior appeared to return to normal. xAI later stated it had completed a thorough investigation and was enacting new safeguards to prevent similar incidents. These include publishing Grok's system prompts on GitHub, adding extra review measures, and forming a 24/7 monitoring team to catch problematic outputs that automated systems might miss.

Musk has long criticized what he calls “woke AI” and positions Grok as a truth-seeking alternative to other chatbots like Google’s Gemini or OpenAI’s ChatGPT. However, the incident exposed vulnerabilities in Grok’s design and the risks of editorial manipulation in AI systems.

The delay between the incident—identified as occurring around 3:15 a.m. PT on Wednesday—and the company’s public explanation nearly two days later drew scrutiny. Paul Graham, a well-known technology investor, said the incident resembled a buggy system patch gone wrong and warned about the dangers of real-time editorializing by those who control widely used AI.

Grok, like other generative AI tools, is prone to fabricated or inaccurate outputs known as "hallucinations," which makes it difficult to determine whether a given response reflects the model's own errors or deliberate human-coded instructions. Musk himself has repeatedly accused South Africa's Black-led government of being hostile to white citizens and has claimed some political leaders there are "actively promoting white genocide."

The chatbot’s posts coincided with new developments in U.S. immigration policy: the Trump administration this week began accepting a small group of white South African refugees—members of the minority Afrikaner community—while restricting immigration from other countries. Trump described the Afrikaners as victims of genocide, a claim denied by South Africa’s government.

Grok’s responses often cited the “Kill the Boer” lyrics, which date back to the anti-apartheid struggle but have recently been criticized by Musk and others as advocating violence against white people.

Golbeck warned that Grok’s consistent responses are troubling in a world where people increasingly turn to AI for trusted information. “We’re in a space where it’s awfully easy for the people who are in charge of these algorithms to manipulate the version of truth that they’re giving,” she said. “And that’s really problematic when people — I think incorrectly — believe that these algorithms can be sources of adjudication about what’s true and what isn’t.”

To restore trust, xAI said it will now allow the public to review and comment on Grok’s prompts. The company also promised stricter oversight on future changes, including protocols to ensure no employee can alter Grok’s behavior without approval.
