Grok 4 ‘Truth-Seeking’ AI Consults Musk’s Stance on Sensitive Topics

xAI’s latest Grok 4 large language model appears to search for owner Elon Musk’s opinions before answering sensitive questions about topics like Israel-Palestine, abortion, and U.S. immigration policy.

Data scientist Jeremy Howard was the first to document the concerning behavior, showing that 54 of the 64 citations Grok produced for a question about the Israel-Palestine conflict referenced Musk's views. TechCrunch subsequently replicated the findings across several other controversial topics.

The model's "chain of thought" reasoning trace explicitly notes that it is "considering Elon Musk's views" or "searching for Elon Musk views" when it tackles such questions. This happens despite Grok's system prompt instructing it to seek diverse sources representing all stakeholders.

That said, there is no reference to Musk anywhere in the model's system prompt, so the behavior could be unintentional. Programmer Simon Willison has suggested that Grok "knows" it is built by xAI and owned by Musk, which may be why it looks up the billionaire's positions when forming opinions.
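The claim that the system prompt never mentions Musk can be spot-checked against the prompt text xAI publishes. The following is a minimal sketch, assuming the prompts are still mirrored in the public xai-org/grok-prompts GitHub repository (the repo name and file layout are assumptions, not details confirmed in this article); it lists any published prompt file that mentions "musk".

```python
# Minimal sketch: scan xAI's published Grok prompt files for any mention of Musk.
# Assumes the prompts are public at github.com/xai-org/grok-prompts; adjust the
# repo path if xAI has moved or renamed it.
import requests

REPO_CONTENTS = "https://api.github.com/repos/xai-org/grok-prompts/contents/"

def files_mentioning(term: str) -> list[str]:
    """Return the names of top-level prompt files whose text contains `term`."""
    hits = []
    for entry in requests.get(REPO_CONTENTS, timeout=30).json():
        # Skip directories and anything without a raw download URL.
        if entry.get("type") != "file" or not entry.get("download_url"):
            continue
        text = requests.get(entry["download_url"], timeout=30).text
        if term.lower() in text.lower():
            hits.append(entry["name"])
    return hits

if __name__ == "__main__":
    matches = files_mentioning("musk")
    print("Prompt files mentioning 'musk':", matches or "none found")
```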

Either way, the discovery raises questions about Musk's claim that Grok 4 represents a "maximally truth-seeking AI." Musk has yet to comment on the matter.
