X pledges faster hate content review to UK regulator Ofcom
Ofcom will monitor compliance quarterly as the regulator cites recent crimes against the UK's Jewish community and UC Berkeley research as drivers for the agreement

X has formally committed to the UK communications regulator Ofcom to accelerate the review of reported hate and terror content, pledging to assess such material within 24 hours on average. The social media platform agreed to evaluate at least 85 per cent of reported hate content within a maximum of 48 hours, a move designed to curb the spread of illegal material under the country’s Online Safety Act.
The commitment also includes a provision to withhold UK access to accounts found to be operated by, or on behalf of, terrorist organisations. Ofcom will monitor X's compliance with these new obligations by reviewing the platform's performance data quarterly over the next 12 months.
Oliver Griffiths, Ofcom's Online Safety Group Director, described the agreement as a step forward but emphasised that significant work remains to address persistent illegal content. Griffiths noted that evidence shows terrorist content and illegal hate speech persist on some of the largest social media sites, prompting regulators to challenge platforms to take firm action.
The regulator’s pressure on X follows a study by the University of California, Berkeley, which found that the weekly rate of hate speech on the platform increased by 50 per cent following Elon Musk’s acquisition of the company. The research attributed part of this rise to an increase in bots, while recent hate-motivated crimes against the UK’s Jewish community were cited by Ofcom as a key driver for the urgency of the new commitments.
Ofcom's scrutiny extends beyond X's content moderation practices. The regulator is also continuing its investigation into Grok, the artificial intelligence tool developed by Musk's AI company, over the generation of child sexual abuse material and non-consensual intimate images. This follows a recent decision to fine the image board 4chan nearly $700,000 for offences under the Online Safety Act, a penalty that prompted a response from the site's legal team involving an AI-generated image of a hamster.


