Britain’s Information Commissioner’s Office asks X and its AI arm xAI how they protect users’ personal data and comply with legal safeguards as Grok’s image generation draws widespread concern
Britain’s data regulator has formally contacted Elon Musk’s social media platform X and its affiliated AI company, xAI, seeking clarification on their compliance with UK data protection law amid mounting concerns over images produced by Grok, the platform’s built-in artificial intelligence tool.
The Information Commissioner’s Office emphasised that UK users have a right to know that their personal data is being processed lawfully and with respect for individual rights, and has asked both companies to explain what safeguards are in place to protect that data and uphold their legal obligations.
The move by the ICO follows deepening controversy over the misuse of Grok’s image-editing capabilities, which critics say have enabled users to generate sexualised and non-consensual alterations of photographs, including depictions of women and minors in minimal clothing.
Researchers and advocacy groups have documented thousands of such AI-generated images circulating on X, triggering public outrage and prompting regulatory scrutiny in several countries including the UK, France and India.
Government officials have described the output as “appalling” and urged swift action to curb harmful uses of the tool.
UK Technology Secretary Liz Kendall has directly condemned the proliferation of such images, underscoring that the creation and distribution of intimate or exploitative material without consent must not be tolerated.
British ministers have called on regulators such as Ofcom and the ICO to enforce existing online safety and data protection frameworks rigorously, with potential sanctions or fines if platforms fail to meet their legal duties.
Meanwhile, the Commons Women and Equalities Committee has ceased using X in response to the controversy, citing a conflict with its mission to prevent violence against women and girls.
The ICO’s inquiry does not itself constitute enforcement action but signals increased regulatory attention to how AI-generated content interacts with personal data rights and legal protections under UK law.
As concerns over digital privacy and non-consensual AI outputs grow, the regulator’s request for detailed information from X and xAI could presage further oversight measures or policy recommendations aimed at safeguarding users against misuse of generative AI technology on large-scale social platforms.
The controversy has also spurred wider debate about AI safety protocols, content moderation practices and the responsibilities of tech companies in preventing harmful deepfake imagery.
With thousands of sexually suggestive or exploitative images reportedly generated in short periods and shared publicly, there is mounting pressure on platforms that deploy image-editing AI tools to implement robust protections and transparent accountability mechanisms for users’ data and dignity.