UK child protection charity accuses Apple of underreporting child sexual abuse material

A recent investigation by the National Society for the Prevention of Cruelty to Children (NSPCC), a UK child protection charity, suggests Apple is not adequately monitoring its platforms for child sexual abuse material (CSAM). Child safety advocates also worry that as Apple's products continue to grow and evolve, particularly with the addition of artificial intelligence features, the company will be unable to handle an increase in such content.

UK police data indicates that over the past year, child predators used Apple's iCloud, iMessage, and FaceTime to store and exchange CSAM in more cases in England and Wales alone than Apple reported across all other countries combined. According to figures obtained through freedom of information requests and reported by The Guardian, the NSPCC found that 337 offenses involving child abuse images were recorded in England and Wales between April 2022 and March 2023.

In contrast, Apple made just 267 reports of suspected CSAM on its platforms worldwide to the National Center for Missing & Exploited Children (NCMEC) over the same period, a stark difference from Google, which reported more than 1.47 million cases, and Meta, which reported more than 30.6 million, according to NCMEC's annual report.

US tech companies are required by law to report any CSAM they detect on their platforms to NCMEC. The organization acts as a clearinghouse for child abuse reports from around the world, forwarding them to the appropriate authorities when necessary. Apple's iMessage is encrypted, meaning the company cannot view the contents of users' messages, but the same is true of Meta's WhatsApp, and such companies still report the instances of suspected CSAM they come across.

Richard Collard, the NSPCC's head of child safety online policy, said the discrepancy between the number of UK CSAM offenses involving Apple's services and the company's comparatively low global reporting figures is concerning. He emphasized that all tech companies should be prioritizing safety measures ahead of legislation such as the UK's Online Safety Act.

Apple declined to comment for the article, instead directing The Guardian to statements it made last August explaining its decision to halt a program that would have scanned iCloud photos for CSAM, citing a preference for prioritizing user security and privacy.

The company also shelved plans for an earlier version of that tool, known as neuralMatch, which would have screened images before they were uploaded to iCloud Photos by comparing them against hash values from a database of known child abuse imagery. Digital rights groups objected to the technology, citing its privacy implications for all users.

Sarah Gardner, CEO of Heat Initiative, said Apple's announcement of its new artificial intelligence system, dubbed 'Apple Intelligence', is alarming given the potential risks it poses to child safety. Apple has said the technology, developed in partnership with OpenAI, is intended to enhance user experiences while maintaining privacy standards.

According to NCMEC, it received more than 4,700 reports of AI-generated CSAM last year, a number expected to rise as these models continue to be trained on real child abuse images, potentially leading to more such content being created in the future.