Study finds AI helps hackers unmask anonymous social media profiles.

AI has made it significantly easier for bad actors to identify the people behind anonymous social-media profiles, a recent study warns.

In most trial conditions, large language models (LLMs) – the technology underlying tools such as ChatGPT – correctly linked anonymous online users to their real identities on other services, using the material they had posted.

Researchers Simon Lermen and Daniel Paleka said LLMs render sophisticated privacy attacks inexpensive, prompting a “fundamental reassessment of what can be considered private online”.

For their test, the team fed anonymous accounts to an AI and instructed it to gather every detail it could. They illustrated the process with a fictional user who mentioned struggling at school and walking a dog named Biscuit through “Dolores park”.

In that imagined scenario, the AI scoured other sources for the same clues and matched @anon_user42 to a known individual with a high level of confidence.
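By way of illustration, a minimal sketch of the kind of pipeline the researchers describe might look like the Python below. It assumes an OpenAI-style chat API; the helper names (query_llm, extract_clues, rank_candidates), the model id and the toy data are all hypothetical and are not the researchers' actual code.

```python
# Minimal sketch of the two-step de-anonymisation pipeline described
# in the study: extract identifying clues from an anonymous account's
# posts, then ask the model to compare them against candidate profiles.
# Assumes OPENAI_API_KEY is set; any chat-style LLM API would do.

from openai import OpenAI

client = OpenAI()


def query_llm(prompt: str) -> str:
    """Send a single prompt to the model and return its text reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model id; any capable LLM works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


def extract_clues(posts: list[str]) -> str:
    """Step 1: gather every identifying detail the posts reveal."""
    joined = "\n".join(posts)
    return query_llm(
        "List every detail in these posts that could identify the "
        f"author (places, names, pets, schools, habits):\n{joined}"
    )


def rank_candidates(clues: str, candidates: dict[str, str]) -> str:
    """Step 2: compare the clues against known public profiles."""
    profiles = "\n".join(f"- {h}: {bio}" for h, bio in candidates.items())
    return query_llm(
        f"Identifying clues from an anonymous account:\n{clues}\n\n"
        f"Candidate public profiles:\n{profiles}\n\n"
        "Which candidate most plausibly matches, and with what confidence?"
    )


if __name__ == "__main__":
    # Toy data mirroring the paper's fictional example.
    posts = [
        "Failed another exam today, this school is killing me.",
        "Evening walk with Biscuit through Dolores park again.",
    ]
    candidates = {
        "@anon_user42": "Student, posts photos of a dog called Biscuit",
        "@someone_else": "Chef in Chicago, no pets mentioned",
    }
    print(rank_candidates(extract_clues(posts), candidates))
```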

Although the case was invented, the authors pointed to real‑world situations where governments might employ AI to monitor dissidents and activists posting anonymously, or where hackers could launch “highly personalised” fraud schemes.

AI‑driven surveillance is a fast‑evolving area that has unsettled computer scientists and privacy specialists. It leverages LLMs to compile data about a person online that would be impractical for most people to assemble manually.

Publicly accessible information about ordinary citizens can already be “misused straightforwardly” for scams, Lermen noted, citing spear‑phishing attacks in which a hacker pretends to be a trusted acquaintance to lure victims into clicking a malicious link.

The technical expertise needed for more elaborate attacks has also dropped sharply: perpetrators now require only an open-source language model and an internet connection.
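To illustrate how low that barrier is, the sketch below loads a freely downloadable open-weights model with Hugging Face's transformers library and asks it to extract identifying clues from a single post. The model id is an assumption; any comparable open model would serve.

```python
# Running an open-weights model locally: no special infrastructure
# needed beyond a consumer GPU (or patience on CPU).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # assumed open model
)

prompt = (
    "List the details in this post that could identify its author: "
    "'Walked Biscuit through Dolores park after failing my exam.'"
)

# The pipeline returns a list of dicts with the generated text.
print(generator(prompt, max_new_tokens=100)[0]["generated_text"])
```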

Peter Bentley, a computer‑science professor at UCL, said there are worries about commercial applications of the technology “if and when products emerge for de‑anonymising”.

One problem is that LLMs frequently err when matching accounts. “People will be blamed for actions they never took,” Bentley cautioned.

Another issue, raised by Marc Juárez, a cybersecurity lecturer at the University of Edinburgh, is that LLMs can draw on public data beyond social media: hospital records, admission statistics and other public releases may fall short of the robust anonymisation standards needed in the AI era.

“It is quite alarming. I think this paper shows we must rethink our practices,” Juárez said.

Even so, AI is not an all-powerful threat to online anonymity. While LLMs can de-anonymise accounts in many contexts, sometimes the available data are insufficient to reach a conclusion, and often the pool of possible matches is too large to narrow down.

“They can only connect profiles across platforms where a person consistently shares the same fragments of information in both places,” Professor Marti explained.