Britain opens regulatory investigation as Malaysia and Indonesia block Elon Musk’s Grok chatbot over AI-generated sexualised and non-consensual images involving minors
Authorities in the United Kingdom and Asia have escalated action against Grok, the artificial intelligence chatbot created by xAI and integrated into
Elon Musk’s social media platform X, in response to growing evidence that the tool’s image-generation features are being used to produce sexualised and non-consensual depictions of women and children.
British and Asian regulators say the patterns of misuse reveal serious shortcomings in safeguards designed to protect users, particularly minors, from harmful content.
In the UK, the communications regulator Ofcom has launched a formal investigation into whether X failed to uphold its legal obligations under the Online Safety Act. The inquiry follows reports that Grok was used to create and share “undressed images” of people that may constitute intimate image abuse, as well as sexualised depictions of children that may amount to child sexual abuse material.
Ofcom said it contacted X on January 5 to assess whether sufficient risk assessments and protections were in place and set a deadline for the company to explain its compliance efforts.
Government ministers have expressed strong concern about the issue and are advancing legislation, including the Data (Use and Access) Act and related bills, that will make creating or soliciting non-consensual intimate images using AI a criminal offence in the UK.
In Southeast Asia, Malaysia and Indonesia have moved decisively to block access to Grok amid escalating misuse of the chatbot’s image capabilities.
Indonesia became the first country to temporarily block Grok, citing the creation and dissemination of sexualised deepfake content involving adults and minors as a serious violation of digital rights and social welfare.
Malaysia’s Communications and Multimedia Commission followed suit, imposing a temporary restriction on access to Grok after complaints that existing safeguards were inadequate and that obscene, sexually explicit and non-consensual manipulated images, including those involving children, were proliferating on the platform.
Both regulators emphasised that the tool’s misuse contravenes national laws on cyber safety and obscenity.
The moves in the UK, Malaysia and Indonesia form part of a broader international wave of scrutiny.
Regulators in France, India and the European Union have also publicly condemned the generation of harmful AI-produced images and demanded action from X and xAI.
The European Commission has ordered X to retain all internal documents relating to Grok through the end of 2026 to support ongoing compliance assessments under the European Union’s digital safety rules.
In response to mounting pressure, xAI and X have restricted Grok’s image generation and editing capabilities to paying subscribers and reiterated that accounts found creating illegal content will be suspended.
However, authorities maintain that these measures fall short of preventing the creation and spread of harmful material.
The global regulatory and legal actions against Grok underscore intensifying concern over generative AI’s role in enabling intimate image abuse, and the difficulty of balancing technological innovation with digital safety and child protection.