Why it matters: There's an epidemic of child sexual abuse material on online platforms, and while companies like Facebook have responded by flagging it wherever it popped up, Apple has been quietly developing a set of tools to scan for such content. Now that its response has come into focus, it's become a major source of controversy.

Earlier this month, Apple revealed that it plans to start scanning iPhones and iCloud accounts in the United States for content that can be described as child sexual abuse material (CSAM). Although the company insisted the feature is only meant to aid criminal investigations and that it wouldn't be expanded beyond its original scope regardless of government pressure, this left many Apple fans confused and disappointed.

Apple has been marketing its products and services in a manner that created the perception that privacy is a core focus and high on the list of priorities when considering any new features. In the case of the AI-based CSAM detection tool the company developed for iOS 15, macOS Monterey, and its iCloud service, it accomplished the exact opposite and sparked a significant amount of internal and external debate.

Despite a few attempts to clear up the confusion around the new feature, the company's explanations have only managed to raise even more questions about how exactly it works. Today, the company dropped another bombshell when it told 9to5Mac that it already scans iCloud Mail for CSAM, and has been doing so for the past three years. iCloud Photos and iCloud Backups, on the other hand, haven't been scanned.

This could be a potential explanation for why Eric Friedman -- who presides over Apple's anti-fraud department -- said in an iMessage thread (revealed in the Epic vs. Apple trial) that "we are the greatest platform for distributing child porn." Friedman also noted that Apple's obsession with privacy made its ecosystem the go-to place for people looking to distribute illegal content, as opposed to Facebook, where extensive data collection makes it very easy to reveal nefarious activities.

It turns out that Apple has been quietly using an "image matching technology to help find and report child exploitation" largely under the radar for the last few years, and only mentioned it briefly at a tech conference in 2020. Meanwhile, Facebook flags and removes tens of millions of images of child abuse every year, and is very transparent about doing it.
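
To make the idea of "image matching" a bit more concrete, here is a deliberately simplified sketch of matching image hashes against a blocklist of known material. Apple's actual system reportedly relies on a perceptual hash (NeuralHash) combined with cryptographic techniques such as private set intersection and threshold secret sharing, none of which are reproduced here; the SHA-256 stand-in, the ImageMatcher type, and the sample blocklist below are purely illustrative assumptions, not Apple's implementation.

```swift
import Foundation
import CryptoKit

// Simplified illustration of hash-based image matching.
// Real systems (PhotoDNA, Apple's NeuralHash) use *perceptual* hashes
// that tolerate resizing and re-encoding; SHA-256 is only a stand-in
// so this sketch stays self-contained.
struct ImageMatcher {
    // Hypothetical blocklist of hashes of known illegal images,
    // as would be supplied by a child-safety organization.
    let knownHashes: Set<String>

    // Hex-encoded SHA-256 digest of the raw image bytes.
    func hash(of imageData: Data) -> String {
        SHA256.hash(data: imageData)
            .map { String(format: "%02x", $0) }
            .joined()
    }

    // True if the image's hash appears on the blocklist.
    func matches(_ imageData: Data) -> Bool {
        knownHashes.contains(hash(of: imageData))
    }
}

// Usage: scan a batch of attachments and count matches. A production
// system would only act after a threshold of matches and human review.
let matcher = ImageMatcher(knownHashes: ["<hash of a known image>"])
let attachments: [Data] = []   // e.g., decoded mail attachments
let matchCount = attachments.filter { matcher.matches($0) }.count
print("Matched \(matchCount) of \(attachments.count) attachments")
```

The practical difference is that a cryptographic hash like SHA-256 stops matching the moment an image is resized or re-encoded, which is precisely why production systems use perceptual hashes that survive those transformations.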

Apple seems to be operating under the assumption that since other platforms make it hard for people to do nefarious things without getting their accounts disabled, they'd naturally gravitate toward Apple's services to avoid detection. Scanning iCloud Mail for CSAM attachments may have given the company some insight into the kind of content people send through that route, and possibly even contributed to the decision to expand its CSAM detection tools to cover more ground.

Either way, this doesn't make it any easier to understand Apple's motivations, nor does it explain how its CSAM detection tools are supposed to protect user privacy or prevent governmental misuse.