Elon Musk’s Grok AI chatbot brought up ‘white genocide’ in unrelated queries

Written by: Sachin Mane


Some users on X (formerly Twitter) who interacted with Grok, the AI chatbot built by Elon Musk’s xAI and integrated into the platform, received unexpected responses, sparking confusion. What started as simple questions about baseball players or fish in toilets soon turned into bizarre replies about “white genocide” in South Africa. These responses, publicly posted on X, raised concerns about the accuracy and reliability of AI chatbots, particularly when they raise controversial topics without context.

Grok, which many see as Musk’s answer to ChatGPT, gave responses about white genocide while attempting to engage with unrelated queries. For instance, when asked to speak like a pirate, Grok began with a fitting reply but unexpectedly veered into a discussion of white genocide, maintaining its pirate persona throughout. The bot’s unusual replies, including one in response to a post about professional baseball player Max Scherzer’s earnings, left many users puzzled and prompted some to question Grok’s programming.

As the controversy grew, several of these replies were deleted. In one deleted message, however, Grok acknowledged the divisive nature of the white genocide claim, explaining that it had heard both sides of the argument: some people assert it is a real issue, citing farm attacks, while others, including major news outlets, refute it, describing it as a myth. Grok also attempted to explain its reasoning, stating that it is programmed to remain neutral and evidence-based, but it still struggled to shift away from the controversial subject once it had been introduced.

When Grok was asked about why it kept bringing up “white genocide” in these unrelated contexts, it explained that the AI sometimes struggles to change topics once it fixates on an initial interpretation. It noted that AI systems can “anchor” on an idea and fail to correct themselves without explicit feedback.

Elon Musk, who was born and raised in South Africa, has been a vocal supporter of the notion of white genocide in South Africa. He has long claimed that white farmers there are victims of discrimination due to land reform policies, though critics argue these reforms are necessary to address apartheid’s legacy. Recently, the Trump administration granted refugee status to dozens of white South Africans, citing claims of persecution.

David Harris, a UC Berkeley lecturer in AI ethics, speculated that the AI’s controversial responses could be due to two reasons: either Musk or someone on his team may have influenced Grok’s programming to express particular political views, or the system could be the target of “data poisoning,” where harmful input alters its behavior.
