AI has made it significantly simpler for bad actors to pinpoint anonymous social‑media profiles, a recent study warns.
In most trial conditions, large language models (LLMs) – the technology underlying tools such as ChatGPT – correctly linked anonymous online users to their real identities on other services, using the material they had posted.
Researchers Simon Lermen and Daniel Paleka said LLMs render sophisticated privacy attacks inexpensive, prompting a “fundamental reassessment of what can be considered private online”.
For their test, the team fed anonymous accounts to an AI and instructed it to gather every detail it could. They illustrated the process with a fictional user who mentioned struggling at school and walking a dog named Biscuit through “Dolores park”.
In that imagined scenario, the AI scoured other platforms for the same clues and linked the anonymous handle @anon_user42 to a named individual with a high level of confidence.
Although the case was invented, the authors pointed to real‑world situations where governments might employ AI to monitor dissidents and activists posting anonymously, or where hackers could launch “highly personalised” fraud schemes.
AI‑driven surveillance is a fast‑evolving area that has unsettled computer scientists and privacy specialists. It leverages LLMs to compile data about a person online that would be impractical for most people to assemble manually.
Publicly accessible information about ordinary citizens can already be “misused straightforwardly” for scams, Lermen noted, citing spear‑phishing attacks in which a hacker pretends to be a trusted acquaintance to lure victims into clicking a malicious link.
With the technical expertise needed for more elaborate attacks dropping sharply, perpetrators now require only access to open-source language models and an internet connection.
Peter Bentley, a computer‑science professor at UCL, said there are worries about commercial applications of the technology “if and when products emerge for de‑anonymising”.
One problem is that LLMs frequently err when matching accounts. “People will be blamed for actions they never took,” Bentley cautioned.
Another issue, raised by Marc Juárez, a cybersecurity lecturer at the University of Edinburgh, is that LLMs can draw on public data beyond social media: hospital records, admission statistics and other releases may fall short of the robust anonymisation standards needed in the AI era.
“It is quite alarming. I think this paper shows we must rethink our practices,” Juárez said.
AI does not, however, spell the end of online anonymity in every case. While LLMs can de-anonymise accounts in many contexts, sometimes the available data are too sparse to reach a conclusion, and often the pool of possible matches is too large to narrow down.
“They can only connect profiles across platforms where a person consistently shares the same fragments of information in both places,” Professor Marti explained.
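The linkage principle described above can be illustrated with a toy sketch. The profiles, clue extraction and scoring below are entirely hypothetical simplifications; real attacks use LLMs over far richer and messier signals than keyword overlap.

```python
# Toy illustration of cross-platform profile linkage: an anonymous post is
# matched against candidate public profiles by counting shared distinctive
# details ("clues" such as a pet's name or a neighbourhood). Purely
# illustrative -- not how the study's LLM-based attack actually works.

def extract_clues(text: str) -> set[str]:
    """Very naive clue extraction: lowercase words minus common filler."""
    common = {"the", "a", "my", "i", "to", "at", "in", "and", "is", "was", "with"}
    return {w.strip(".,!?").lower() for w in text.split()} - common

def link_score(anon_post: str, candidate_bio: str) -> float:
    """Fraction of the anonymous post's clues that reappear in the candidate."""
    anon = extract_clues(anon_post)
    cand = extract_clues(candidate_bio)
    return len(anon & cand) / len(anon) if anon else 0.0

# Hypothetical data echoing the article's fictional example.
anon_post = "Failing my exams, at least walking Biscuit in Dolores park helps"
candidates = {
    "jane_doe": "SF local, dog mum to Biscuit, Dolores park regular, student",
    "john_roe": "Cyclist in Berlin, espresso enthusiast",
}

# The candidate sharing the most clues wins -- here, the Biscuit/Dolores match.
best = max(candidates, key=lambda name: link_score(anon_post, candidates[name]))
```

The same consistency requirement cuts both ways: a user who never repeats identifying fragments across platforms gives a matcher, human or LLM, little to work with.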