Children’s safety groups, for their part, immediately applauded Apple’s moves, arguing that they strike a necessary balance, one that “brings us a step closer to justice for survivors whose most traumatic moments are disseminated online,” as Julie Cordua, CEO of the child safety advocacy group Thorn, wrote in a statement to WIRED.

Other cloud storage providers, from Microsoft to Dropbox, already perform CSAM detection on images uploaded to their servers. But by adding any sort of image analysis to user devices, some privacy critics argue, Apple has also taken a step toward a troubling new form of surveillance and weakened its historically strong privacy stance in the face of pressure from law enforcement.

“I’m not defending child abuse. But this whole idea that your personal device is constantly locally scanning and monitoring you based on some criteria for objectionable content and conditionally reporting it to the authorities is a very, very slippery slope,” says Nadim Kobeissi, a cryptographer and founder of the Paris-based cryptography software firm Symbolic Software. “I definitely will be switching to an Android phone if this continues.”

Apple’s new system isn’t a straightforward scan of user images, either on the company’s devices or on its iCloud servers. Instead it’s a clever—and complex—new form of image analysis designed to prevent Apple from ever seeing those photos unless they’re already determined to be part of a collection of multiple CSAM images uploaded by a user. The system takes a “hash” of all images a user sends to iCloud, converting the files into strings of characters that are uniquely derived from those images. Then, like older systems of CSAM detection such as PhotoDNA, it compares them with a vast collection of known CSAM image hashes provided by NCMEC to find any matches.
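
To make the matching step concrete, here is a rough Python sketch of hash-list lookup in the spirit of systems like PhotoDNA. The hash value is a placeholder, and an exact SHA-256 file hash stands in for the perceptual hashes real systems rely on, so this illustrates only the lookup against a known-CSAM list, not Apple’s implementation.

```python
# Rough sketch of hash-list matching, the general idea behind systems like
# PhotoDNA. The "known" hash below is a placeholder, and an exact file hash
# stands in for the perceptual hashes real systems use.
import hashlib
from pathlib import Path

# Placeholder stand-in for the list of known CSAM hashes provided by NCMEC.
KNOWN_HASHES = {
    "0" * 64,  # hypothetical hex digest
}

def image_hash(path: Path) -> str:
    """Hash the image file's bytes; real systems hash the image content
    perceptually so that re-encoding or resizing doesn't evade detection."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def is_known_match(path: Path) -> bool:
    """True if the image's hash appears in the known-hash list."""
    return image_hash(path) in KNOWN_HASHES
```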

Apple is also using a new form of hashing it calls NeuralHash, which the company says can match images despite alterations like cropping or colorization. Just as crucially, to prevent evasion, the system never actually downloads those NCMEC hashes to a user’s device. Instead, it uses some cryptographic tricks to convert them into a so-called blind database that’s downloaded to the user’s phone or PC, containing only seemingly meaningless strings of characters derived from those hashes. That blinding prevents any user from obtaining the hashes and using them to skirt the system’s detection.
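
The sketch below is not NeuralHash, which is built on a neural network; it’s a classic “difference hash” implemented with the Pillow imaging library, shown only to illustrate how a perceptual hash, unlike an exact file hash, can keep matching an image through small alterations.

```python
# A classic "difference hash" (dHash) built with the Pillow imaging library.
# This is not NeuralHash; it's shown only to illustrate how a perceptual hash,
# unlike an exact file hash, can match visually similar images.
from PIL import Image  # third-party: pip install Pillow

def dhash(path: str, size: int = 8) -> int:
    """Shrink to a (size+1) x size grayscale image, then record whether each
    pixel is brighter than its right-hand neighbor. Small edits such as
    recompression or mild color shifts barely change the resulting bits."""
    img = Image.open(path).convert("L").resize((size + 1, size))
    px = list(img.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = px[row * (size + 1) + col]
            right = px[row * (size + 1) + col + 1]
            bits = (bits << 1) | (left > right)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits; a small distance suggests similar images."""
    return bin(a ^ b).count("1")
```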

The system then compares that blind database of hashes with the hashed images on the user’s device. The results of those comparisons are uploaded to Apple’s server in what the company calls a “safety voucher” that’s encrypted in two layers. The first layer of encryption uses a cryptographic technique known as private set intersection, such that the voucher can be decrypted only if the hash comparison produces a match. No information is revealed about hashes that don’t match.
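
As a toy illustration of that match-gated decryption, the Python sketch below uses Diffie-Hellman-style blinding over a deliberately tiny group: the device keys its voucher to a blinded table entry, and the server can re-derive that key, and thus open the voucher, only when the underlying hashes agree. Apple’s actual protocol uses elliptic curves and a padded, cuckoo-hashed table; the group parameters, placeholder hashes, and slot layout here are made up for illustration.

```python
# Toy sketch of the match-gated first layer, using Diffie-Hellman-style
# blinding over a deliberately tiny group. Not Apple's construction, which
# uses elliptic curves; this only mimics the gating property.
import hashlib
import secrets

P = 2**127 - 1   # toy prime modulus, far too small for real security
G = 3            # toy generator

def hash_to_group(h: bytes) -> int:
    """Map an image hash to a group element (toy construction)."""
    return pow(G, int.from_bytes(hashlib.sha256(h).digest(), "big"), P)

def slot(h: bytes) -> bytes:
    """Table position derived from a hash. The real table is cuckoo-hashed and
    every slot is filled, so a device can't tell whether its image matched."""
    return hashlib.sha256(b"slot" + h).digest()[:2]

# --- Server side: blind each known hash with a secret exponent alpha. ---
alpha = secrets.randbelow(P - 2) + 2
known_hashes = [b"placeholder-known-hash-1", b"placeholder-known-hash-2"]
blinded_table = {slot(h): pow(hash_to_group(h), alpha, P) for h in known_hashes}
# Only blinded_table (opaque numbers keyed by slot) is shipped to devices.

# --- Device side: key the safety voucher to the entry at the image's slot. ---
def device_side(image_hash: bytes):
    beta = secrets.randbelow(P - 2) + 2
    query = pow(hash_to_group(image_hash), beta, P)     # uploaded with voucher
    entry = blinded_table.get(slot(image_hash))
    key_seed = pow(entry, beta, P) if entry else None   # H(x)^(alpha*beta)
    return query, key_seed    # key_seed would encrypt the voucher's payload

# --- Server side: re-derive the key seed from the query and secret alpha. ---
def server_side(query: int) -> int:
    return pow(query, alpha, P)   # H(y)^(beta*alpha): equals the device's
                                  # key seed only when the hashes match

# A matching image: the server's seed equals the device's, so it can decrypt.
q, k = device_side(b"placeholder-known-hash-1")
assert server_side(q) == k
```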

The second layer of encryption is designed so that the matches can be decrypted only once their number crosses a certain threshold. Apple says this is meant to avoid false positives and to ensure that it’s detecting entire collections of CSAM, not single images. The company declined to name that threshold; in fact, it will likely adjust it over time to tune its system and keep its false positives to fewer than one in a trillion. Those safeguards, Apple argues, will prevent any possible surveillance abuse of its iCloud CSAM detection mechanism, allowing it to identify collections of child exploitation images without ever seeing any other images that users upload to iCloud.
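
That threshold behavior can be pictured with Shamir secret sharing, a threshold scheme along the lines Apple describes for this layer: each matching voucher carries one share of a key, and the key, and with it the matches, can be recovered only once enough shares accumulate. Below is a minimal sketch in Python; the field size, threshold, and share counts are illustrative, not Apple’s.

```python
# Minimal Shamir secret-sharing sketch of the threshold idea: fewer shares
# than the threshold reveal nothing useful about the key, while any
# threshold-sized subset reconstructs it exactly. Parameters are illustrative.
import secrets

P = 2**127 - 1   # prime field for the toy shares

def make_shares(secret: int, threshold: int, count: int):
    """Split `secret` into `count` shares, any `threshold` of which suffice."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, count + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i == j:
                continue
            num = (num * -xj) % P
            den = (den * (xi - xj)) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

# With a threshold of 3, one or two matching vouchers keep the key sealed,
# but any three of the ten shares reconstruct it.
key = secrets.randbelow(P)
shares = make_shares(key, threshold=3, count=10)
assert reconstruct(shares[:3]) == key
```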

That immensely technical process represents a strange series of hoops to jump through, given that Apple doesn’t currently end-to-end encrypt iCloud photos and could simply perform its CSAM checks on the images hosted on its servers, as many other cloud storage providers do. Apple has argued that the process it’s introducing, which splits the check between the device and the server, is less privacy invasive than a simple mass scan of server-side images.

But critics like Johns Hopkins University cryptographer Matt Green suspect more complex motives in Apple’s approach. He points out that the great technical lengths Apple has gone to in order to check images on a user’s device, for all of that process’s privacy protections, only really make sense in cases where images are encrypted before they leave a user’s phone or computer, making server-side detection impossible. And he fears that this means Apple will extend the detection system to photos on users’ devices that are never uploaded to iCloud, a kind of on-device image scanning that would represent a new form of intrusion into users’ offline storage.
