Europe Intensifies Scrutiny of X
- French authorities have raided the Paris office of X and summoned Elon Musk as part of a widening investigation into alleged algorithm abuse and illicit data practices.
- The probe now includes concerns about sexually explicit deepfakes linked to the Grok chatbot.
- Regulators across Europe and the UK have launched parallel investigations, signalling growing pressure on the platform.
French Authorities Expand Investigation Into X
French police searched the Paris office of X as part of a year‑long investigation into suspected misuse of algorithms and fraudulent data extraction. Prosecutors also summoned Elon Musk to appear for questioning in April, a move that could heighten tensions between European regulators and U.S. tech companies. The inquiry has broadened following complaints about the functioning of Grok, X’s AI chatbot. Officials say the goal is to ensure the platform complies with French law for as long as it operates in the country.
The investigation now includes allegations of complicity in the possession and distribution of child sexual abuse material. Authorities are also examining whether sexually explicit deepfakes generated by Grok violate individuals’ image rights. Musk and former CEO Linda Yaccarino have been ordered to attend a hearing on April 20, while other X employees will appear as witnesses. Summonses of this kind are mandatory, though enforcement becomes more difficult when individuals reside outside France.
X has not commented on the latest developments. Musk previously dismissed the accusations as politically motivated when the initial probe became public last July. Prosecutors, however, describe the process as constructive and aimed at clarifying the platform’s legal responsibilities. After the April hearing, authorities may decide to close the case, continue the investigation or place suspects in custody.
UK and EU Regulators Launch Parallel Probes
Britain’s Information Commissioner’s Office has opened a formal investigation into Grok, focusing on how the chatbot processes personal data. Reports that the system generated non‑consensual sexual imagery, including depictions of minors, prompted the inquiry. Ofcom, the UK’s media regulator, is separately assessing whether X has taken adequate steps to limit the spread of sexual deepfakes on its platform. The regulator noted that xAI itself falls outside its current legal remit.
The European Union has also initiated an investigation into X following public concern over manipulated sexualised images produced by Grok. Reuters found that the chatbot continued generating explicit images even when users stated that the subjects did not consent. xAI introduced some restrictions on Grok’s image generation capabilities after the backlash. Regulators across Europe are now evaluating whether the platform’s safeguards are sufficient.
These overlapping investigations reflect a broader shift in regulatory attention toward AI‑generated content. Authorities are increasingly focused on the risks posed by deepfakes, particularly when they involve minors or non‑consensual imagery. X’s handling of such material has become a central point of scrutiny. The outcome of these probes may influence future AI governance across the region.
Political Pressure and Platform Governance
The Paris prosecutor’s cybercrime unit is leading the French investigation alongside national police and Europol. The same unit previously arrested Telegram founder Pavel Durov in 2024 on charges related to organised crime conducted via the messaging platform. Prosecutors say the current case began after a lawmaker alleged that biased algorithms on X distorted automated data processing systems. That lawmaker, Eric Bothorel, publicly welcomed the progress of the investigation.
In a symbolic move, the prosecutor’s office announced it would stop using X for official communication. Future updates will instead appear on LinkedIn and Instagram, platforms owned by Microsoft and Meta respectively. The decision underscores the strained relationship between French authorities and Musk’s company. It also highlights the growing expectation that platforms must demonstrate compliance with national and EU regulations.
The widening scrutiny of X comes at a time when governments are grappling with the societal impact of AI‑generated content. Regulators are increasingly concerned about the speed at which harmful material can spread. X’s response to these investigations may shape how other platforms approach AI safety and content moderation. The coming months will determine whether the probes lead to sanctions, operational changes or further legal action.
Deepfake‑related investigations have surged across Europe as AI tools become more accessible. Several countries are considering new legislation to address non‑consensual synthetic media, particularly when minors are involved. The X case may become a reference point for future regulatory frameworks. Its outcome could influence how AI‑driven platforms operate within the EU’s evolving digital governance landscape.