UK regulator tightens illegal content rules as X agrees to faster takedown system
Elon Musk’s platform commits to new moderation timelines and account restrictions after pressure from Ofcom under the Online Safety Act
Enforcement pressure under the United Kingdom’s online safety regime has produced new binding operational commitments from the social media platform X, a significant escalation in how major tech companies are required to handle illegal content on their services.
The agreement, announced by the UK media regulator Ofcom, focuses on strengthening how quickly X identifies and responds to posts linked to terrorist activity and illegal hate speech.
At the centre of the deal is a set of performance requirements that reshape how content moderation must operate in practice.
X has committed to reviewing UK user reports of suspected terrorism content and illegal hate speech within an average of 24 hours, and to assessing at least 85 percent of flagged material within 48 hours.
The company will also restrict access in the United Kingdom to accounts linked to organisations proscribed as terrorist groups under UK law, particularly when those accounts are found to be sharing illegal material.
These commitments do not introduce new UK laws, but they translate existing legal duties under the Online Safety Act into measurable operational targets.
The law already requires large platforms to reduce the availability of illegal content, including terrorism-related material, harassment, and certain forms of hate speech.
The significance of the agreement lies in enforcement: regulators are now specifying timeframes, reporting structures, and accountability mechanisms rather than relying on broad compliance claims.
X is also required to submit quarterly performance data over a 12-month period, allowing Ofcom to monitor whether the platform is meeting its stated thresholds.
This creates an ongoing compliance track rather than a one-off settlement.
The regulator has indicated that these benchmarks are intended to ensure faster removal of harmful content and greater transparency around how user reports are handled.
The move comes amid heightened concern in the UK about the persistence of online extremist material and hate-driven content, particularly in the context of recent spikes in antisemitic incidents and violent attacks.
Authorities have argued that major platforms remain a critical distribution channel for illegal content, even after repeated policy updates by companies.
The agreement also responds to criticism that user reports of illegal content are not always processed effectively.
X has committed to working with external experts to improve its reporting and moderation systems, after civil society organisations raised concerns that flagged content was not consistently acknowledged or acted upon.
Despite the new commitments, the agreement does not bring broader scrutiny of the platform to an end.
Separate investigations into other areas of concern, including AI content-generation systems linked to X, remain open.
These include an examination of whether automated tools used on the platform can produce or amplify illegal or harmful material.
For regulators, the deal represents a shift toward enforceable operational standards rather than voluntary industry self-regulation.
For X, it increases the pressure to demonstrate that its moderation systems can meet legally defined timelines at scale, with penalties possible if compliance targets are not met.