Beautiful Virgin Islands

Wednesday, May 13, 2026

Amazon and Google under UK competition watchdog scrutiny for not ‘doing enough’ to tackle fake review scourge on their platforms

Citing suspicions that Amazon and Google have not adequately addressed their fake review problems, the UK’s competition regulator has launched probes against the tech giants in relation to breaches of consumer protection law.

Announcing the opening of formal investigations on Friday, the Competition and Markets Authority (CMA) said “specific concerns” were raised about whether the two companies were “doing enough” to detect “fake and misleading reviews or suspicious patterns of behaviour.”

An initial sweep, which began in May 2020, cast doubt on whether the firms investigate and remove such reviews, and whether they impose “adequate sanctions” to deter reviewers or businesses from violating rules on honest posts – in particular, by taking action against repeat offenders.

“Our worry is that millions of online shoppers could be misled by reading fake reviews and then spending their money based on those recommendations,” CMA Chief Executive Andrea Coscelli said in an official statement.

“Equally, it’s simply not fair if some businesses can fake five-star reviews to give their products or services the most prominence while law-abiding businesses lose out,” Coscelli added.

If the CMA’s investigation finds Amazon and Google have not sufficiently protected consumers, it can take enforcement action. This could range from “securing formal commitments” to change how they deal with fake reviews to “court action” if necessary.

Misleading and even incentivised consumer reviews have been an e-commerce scourge, with sellers using them to artificially improve their star ratings. This in turn determines how prominently their stores, and products, are displayed on online marketplaces.

The CMA also expressed concerns that Amazon’s detection systems fail to “adequately prevent and deter” some sellers from manipulating product listings – for instance, by “co-opting positive reviews from other products.”

Last September, Amazon had to delete nearly 20,000 product reviews, written by seven of its top UK reviewers, following a Financial Times investigation that discovered the reviewers were being paid to post thousands of five-star ratings.

Ahead of Amazon’s Prime Day sale this month, an investigation by UK consumer protection watchdog Which? found that buyers of some bestselling products were being offered incentives for positive reviews.


In a blog post earlier this month, Amazon attempted to shift the blame to social media companies for not being fast enough to act against the fake reviews it reported, and for not adequately investing in “proactive controls to detect and enforce fake reviews ahead of our reporting the issue to them.”

Noting an increasing trend of “bad actors” soliciting fake reviews – or hiring a third party to do so on their behalf – outside of Amazon, the company said this “obscure(s) our ability to detect their activity and the relationship between the multiple accounts committing or benefiting from this abuse.”

While the blog did not call out any social media platform by name, it was likely referring to Facebook, which last year had to sign agreements to “introduce more robust systems to detect and remove such content” after the CMA took action against it.

However, in April, a follow-up investigation found that little had changed, and Facebook had to remove another 16,000 groups engaging in fake reviews.

Meanwhile, Google’s fake review detection systems have come in for criticism as well. In March, Which? exposed a network of paid-for reviewers providing bogus reviews to several UK businesses’ listings on Google to show how easy and cheap it was to create an artificially inflated customer rating on the search engine’s review system.

Responding to the Which? undercover sting, Google admitted that its automated detection systems allowed “inauthentic reviews” to “slip through from time to time,” despite the tech giant deploying “teams of trained operators and analysts who audit content both individually and in bulk.”
