Beautiful Virgin Islands

Thursday, May 14, 2026

LinkedIn is using AI to spot and remove inappropriate user accounts

LinkedIn says it is using AI and machine learning to spot accounts containing content that runs afoul of its community guidelines.
Social networks including Facebook, Twitter, and Pinterest tap AI and machine learning systems to detect and remove abusive content, and LinkedIn is no exception. The Microsoft-owned platform, which has over 660 million users (303 million of whom are active monthly), today detailed its approach to handling profiles containing inappropriate content, which ranges from profanity to advertisements for illegal services.

As software engineer Daniel Gorham explained in a blog post, LinkedIn initially relied on a block list (a set of human-curated words and phrases that ran afoul of its Terms of Service and Community Guidelines) to identify and remove potentially fraudulent accounts. However, maintaining the list required significant engineering effort, and it handled context poorly. (For instance, while the word “escort” was sometimes associated with prostitution, it was also used in contexts like a “security escort” or “medical escort.”)
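The context problem with a block list can be seen in a minimal sketch. The term and profiles below are illustrative stand-ins, not LinkedIn's actual list:

```python
import re

# Hypothetical curated block-list term (illustrative only).
BLOCK_LIST = {"escort"}

def flags_profile(text: str) -> bool:
    """Return True if any block-listed word appears in the profile text."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return bool(words & BLOCK_LIST)

# A legitimate profile is flagged exactly like an abusive one,
# because plain word matching carries no notion of context:
print(flags_profile("Experienced medical escort for elderly patients"))  # True
print(flags_profile("Software engineer at a fintech startup"))           # False
```

Both the false positive and the maintenance burden of hand-curating such a list are what motivated the move to a learned model.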

This motivated LinkedIn to adopt a machine learning approach involving a convolutional neural network (a class of algorithm commonly applied to image analysis) trained on public member profile content. The training data contained accounts labeled as either “inappropriate” or “appropriate,” where the former comprised accounts removed for inappropriate content spotted using the block list and manual review. Gorham notes that only a “very small” portion of accounts have ever been restricted in this way, which necessitated downsampling from the entire LinkedIn member base to obtain the “appropriate” labeled accounts and prevent algorithmic bias.
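The downsampling step addresses the extreme class imbalance: restricted accounts are rare, so the majority class must be sampled down before training. A minimal sketch of that balancing step, with hypothetical account records in place of real profile data:

```python
import random

def build_training_set(inappropriate, appropriate, ratio=1.0, seed=0):
    """Keep every minority-class example; randomly sample the majority
    ("appropriate") class down to `ratio` times the minority-class size."""
    rng = random.Random(seed)
    k = min(len(appropriate), int(len(inappropriate) * ratio))
    sampled = rng.sample(appropriate, k)
    data = [(acct, 1) for acct in inappropriate] + [(acct, 0) for acct in sampled]
    rng.shuffle(data)
    return data

# Illustrative: 10 restricted accounts vs. 10,000 ordinary ones.
bad = [f"bad_{i}" for i in range(10)]
good = [f"good_{i}" for i in range(10_000)]
train = build_training_set(bad, good)
print(len(train))  # 20 -- a balanced set of 10 positive and 10 negative examples
```

Without this step, a classifier could reach near-perfect accuracy by labeling every account "appropriate," which is the bias the article alludes to.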

To further tamp down on bias, LinkedIn identified problematic words responsible for high levels of false positives and sampled appropriate accounts from the member base containing these words. The accounts were then manually labeled and added to the training set, after which the model was trained and deployed in production.
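This second step is a form of hard-negative mining: deliberately seeking out appropriate accounts that contain trigger words, so the model learns that the words alone are not disqualifying. A sketch under assumed data (the word list and profiles are illustrative):

```python
# Hypothetical words known to drive false positives (illustrative only).
FALSE_POSITIVE_WORDS = {"escort", "adult"}

def mine_hard_negatives(appropriate_profiles):
    """Select appropriate profiles containing trigger words; per the
    article, these are then manually labeled and added to training."""
    hits = []
    for text in appropriate_profiles:
        words = set(text.lower().split())
        if words & FALSE_POSITIVE_WORDS:
            hits.append(text)
    return hits

profiles = [
    "Security escort services for executives",
    "Adult education instructor",
    "Data analyst in retail",
]
print(mine_hard_negatives(profiles))
# → ['Security escort services for executives', 'Adult education instructor']
```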

Gorham says the abusive account detector scores new accounts daily, and that it was run on the existing member base to identify old accounts containing inappropriate content. Going forward, LinkedIn intends to use Microsoft translation services to ensure consistent performance across all languages, and to refine and expand the training set to increase the scope of content it is able to identify with the model.
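The daily scoring job described above might be sketched as a simple threshold scan. Here `model_score` is a crude stand-in for the trained CNN, and the threshold is an assumed parameter, not LinkedIn's:

```python
def model_score(profile_text: str) -> float:
    """Stand-in for the trained classifier: fraction of words on a
    tiny illustrative trigger list. A real model would score context."""
    triggers = {"escort", "xxx"}
    words = profile_text.lower().split()
    return sum(w in triggers for w in words) / max(len(words), 1)

def daily_scan(new_accounts, threshold=0.2):
    """Score each new account; return those queued for review."""
    return [acct for acct in new_accounts if model_score(acct) >= threshold]

flagged = daily_scan(["Escort services available now", "Software engineer"])
print(flagged)  # only the first account crosses the threshold
```

The same scan, run once over the existing member base, is how the article says older accounts with inappropriate content were caught.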

“Detecting and preventing abuse on LinkedIn is an ongoing effort requiring extensive collaboration between multiple teams,” wrote Gorham. “Finding and removing profiles with inappropriate content in an effective, scalable manner is one way we’re constantly working to provide a safe and professional platform.”

LinkedIn’s uses of AI extend beyond abusive content detection. In October 2019, it pulled back the curtain on a model that automatically generates text descriptions for images uploaded to LinkedIn, built using Microsoft’s Cognitive Services platform and a unique LinkedIn-derived data set. Separately, its Recommended Candidates feature learns the hiring criteria for a given role and automatically surfaces relevant candidates in a dedicated tab. And its AI-driven search engine leverages data such as what people post on their profiles and the searches that candidates perform to predict best-fit jobs and job seekers.