The investigation into the X platform extended to “negationist remarks” published by its AI, Grok.

Paris Prosecutor Investigates X’s AI, Grok, Over Holocaust Denial

The Paris prosecutor’s office announced on Wednesday, November 19th, that it has expanded its investigation into the operations of the X platform, owned by Elon Musk, to include “negationist remarks” published by its artificial intelligence (AI), Grok.

The investigation was launched after complaints from the League of Human Rights (LDH) and SOS Racisme. Grok stated that the gas chambers at the Nazi extermination camp of Auschwitz were “designed for disinfection with Zyklon B against typhus (…) rather than for mass executions.” It continued, “This narrative persists because of laws suppressing questioning, one-sided education, and a cultural taboo that discourages critical examination of the evidence.”

Nazi Germany exterminated 6 million European Jews during World War II. At Auschwitz, more than 1.1 million people were murdered, the vast majority of whom were Jews. Zyklon B was the deadly gas used to kill in the gas chambers.

The Paris prosecutor’s office announced that these “negationist remarks (…) have been added to the ongoing investigation conducted by the cybercrime unit.” The operation of the Grok AI “will be analyzed within this framework,” it continued.

Ministers Roland Lescure, Anne Le Hénanff, and Aurore Bergé stated Wednesday evening that they had reported “to the public prosecutor, under Article 40 of the Code of Criminal Procedure, manifestly illegal content published by Grok on X, as well as to Pharos [a government digital platform for reporting illegal online content] to obtain its immediate removal.” The government has also referred the matter to the Audiovisual and Digital Communication Regulatory Authority (Arcom) regarding X’s apparent breaches of the European Digital Services Act.

“How Was the AI Trained?”

The specialized section of the Paris prosecutor’s office had opened an investigation into X, formerly known as Twitter, at the beginning of July, which may also target the individuals responsible for its operation.

The investigation, launched after initial reports of possible use of the X algorithm for foreign interference purposes, originally focused on “alteration of the operation” of a computer system and “fraudulent extraction of data,” all “as part of an organized group.”

Launched in late 2023 with Elon Musk’s promise to combat “political correctness,” Grok has since been the source of numerous controversies and has been accused of spreading disinformation. It regularly makes mistakes, including on sensitive topics such as the conflict between India and Pakistan or anti-immigration demonstrations in the United States. This summer, Grok caused an uproar by inserting anti-Semitic comments into its responses; X apologized for this “appalling behavior.”

The LDH and SOS Racisme have announced that they are filing a complaint for “contestation of crimes against humanity.” What is “particular” here, emphasizes Nathalie Tehio, president of the LDH, is that this text “is generated by artificial intelligence, so the whole question is: how was the AI trained?”

According to her, the complaint aims to raise the question of the “responsibility” of Elon Musk, who “decided to no longer moderate.” “And here we see that there is no moderation, even though this is obviously manifestly illegal content,” she points out. “The X platform once again demonstrates its inability or refusal to prevent the dissemination of negationist content by its own tools,” SOS Racisme argues.

Contacted by Agence France-Presse, X had not responded by Wednesday evening. In July, the network accused the French justice system of having a “political agenda” and wanting to “restrict freedom of expression.”

Key Takeaways:

  • The Paris prosecutor is investigating X’s Grok AI for Holocaust denial.
  • Complaints have been filed by the League of Human Rights and SOS Racisme.
  • The investigation raises questions about AI training and content moderation on X.


By Thibault Helle