Apple’s iPhone-Scanning Tool Already Has a Glaring Flaw That Could Cost Users Their Privacy

Image: two versions of the same photo, one a black-and-white transformation of the other; because they are different versions of the same photo, they have the same NeuralHash. Screenshot from Apple’s report on their expanded protections for children.

Apple’s upcoming tool for fighting child sexual abuse material (CSAM) was rife with privacy and security issues from the start. The tool scans the photos on your iPhone and reports matches for known CSAM back to Apple. If that were all it ever did, if it worked perfectly, if governments couldn’t abuse it, and if no one could tamper with it through a simple hash-injection attack, it would be great. Stopping the spread of CSAM? I think we all want that. The problem is, the technology isn’t perfect. To make up for that imperfection, Apple assures us that a human will review the flagged photos before passing them along to the authorities.

So if their system has a lot of false matches, that means people would inadvertently share many of their personal photos—which absolutely do not include CSAM—with Apple. Fortunately, Apple says such false matches are extremely rare.

In just a matter of hours, a researcher proved Apple wrong.

One engineer was able to reverse engineer Apple’s version of NeuralHash from previous versions of iOS. NeuralHash is the component that turns images into garbled strings of characters, so Apple can scan for CSAM without ever storing actual images or videos of CSAM. Apple says the upcoming version of NeuralHash isn’t exactly the same as the one in previous versions of iOS. Still, just a few hours after this version of NeuralHash was shared, someone was able to produce a hash collision: two different images that yield the same hash, which is exactly what a false match looks like. Apple’s solution for false matches is to view your photos themselves. But if false matches really come up this easily, how many of your private, non-CSAM photos will end up in the hands of someone at Apple?

NeuralHash

A hashing algorithm, put simply, takes one thing and turns it into another: it can turn an image into a garbled string of letters and numbers, and there’s no practical way to turn that string back into the original image. Because the hash reveals so little, the organizations that maintain databases of known CSAM can safely share hashes with companies, who can use them to scan for and report matches without ever handling the images themselves. However, the system is broken if someone can falsify a match.
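To make that idea concrete, here’s a minimal sketch of a simple perceptual hash (an “average hash”) in Python using the Pillow library. To be clear, this is not Apple’s NeuralHash, which runs images through a neural network; the file name and the 8×8 size are just placeholder choices. It only shows the general principle: an image becomes a short fingerprint, and you can’t rebuild the photo from the fingerprint.

    # A toy perceptual "average hash" -- an illustration of image hashing,
    # NOT Apple's NeuralHash. Requires the Pillow library.
    from PIL import Image

    def average_hash(img: Image.Image, hash_size: int = 8) -> int:
        """Shrink the image to 8x8 grayscale, then set one bit per pixel:
        1 if the pixel is brighter than the average, 0 otherwise."""
        small = img.convert("L").resize((hash_size, hash_size))
        pixels = list(small.getdata())
        avg = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > avg else 0)
        return bits  # a 64-bit fingerprint; the photo can't be recovered from it

    print(hex(average_hash(Image.open("photo.jpg"))))  # "photo.jpg" is a placeholder

Unlike a cryptographic hash such as SHA-256, which changes completely if a single pixel changes, a perceptual hash like this is deliberately built to survive small edits.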

Apple had seemingly already shipped a version of “NeuralHash,” the program that generates these hashes from images on a device, in previous versions of iOS. It may have only been used for shared images, as Apple has not been scanning all of the images on users’ devices. They’ll only start doing that with iOS 15.

The current version of NeuralHash likely isn’t much different from what Apple will ship with iOS 15. The main change may be that Apple has added a match threshold that must be crossed before they can decrypt and view thumbnails of your personal photos to check whether they’re CSAM. We don’t know; Apple isn’t saying. Apple has stated that the iOS 15 version is different from the older one, but not whether it offers more security, fewer false matches, or better accuracy.

Testing shows the hashing can be fooled quite easily. Resizing and compressing an image won’t break the hash match, but simply rotating or cropping it throws the hash off completely. Someone would barely have to change CSAM to fly under Apple’s radar.
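You can see that behavior for yourself with the toy average hash from the sketch above (again, not NeuralHash itself, whose exact tolerances Apple hasn’t published). Counting how many of the 64 bits change after each edit gives a rough sense of what survives and what doesn’t; the file name is a placeholder:

    # Reuses average_hash from the earlier sketch. Counts how many of the 64
    # bits change after common edits -- illustrative only.
    from PIL import Image

    def hamming(a: int, b: int) -> int:
        """Number of bits that differ between two hashes."""
        return bin(a ^ b).count("1")

    img = Image.open("photo.jpg")  # placeholder file name
    original = average_hash(img)

    resized = average_hash(img.resize((img.width // 2, img.height // 2)))
    rotated = average_hash(img.rotate(90, expand=True))
    cropped = average_hash(img.crop((0, 0, img.width // 2, img.height // 2)))

    print("resize:", hamming(original, resized), "bits changed")  # usually 0 or close to it
    print("rotate:", hamming(original, rotated), "bits changed")  # typically a lot
    print("crop:  ", hamming(original, cropped), "bits changed")  # typically a lot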

But at least that means it’s not easy to get false positives, right?

Oh, wait.

NeuralHash Broken

Image: a photo of a dog and a computer-generated gray image. NeuralHash can’t tell these images apart.

For NeuralHash to work, it needs to detect images and videos of CSAM that have been altered in an attempt to evade detection. So it doesn’t look for an exact match; it’s a fuzzy hash. That also means that even if you got hold of the hashes used for detecting CSAM, you’d only have a somewhat unique ID, not the image itself.
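In practice, matching a fuzzy hash usually means accepting anything within a few bits of a known hash instead of demanding an exact match. Here’s a bare-bones sketch of that idea; the hash values, the database, and the distance threshold are all invented, and Apple hasn’t published how its own matching is scored:

    # Fuzzy matching: flag a photo if its hash lands "close enough" to any hash
    # in a database of known material. Everything below is invented for illustration.
    KNOWN_HASHES = {0x9F3A6C21D4E58B07, 0x1B2C3D4E5F607182}  # stand-ins, not real values
    MAX_DISTANCE = 4  # made-up threshold: bits of difference still counted as a match

    def hamming(a: int, b: int) -> int:
        """Number of bits that differ between two 64-bit hashes."""
        return bin(a ^ b).count("1")

    def is_match(image_hash: int) -> bool:
        """True if the hash is within MAX_DISTANCE bits of any known hash."""
        return any(hamming(image_hash, known) <= MAX_DISTANCE for known in KNOWN_HASHES)

    print(is_match(0x9F3A6C21D4E58B07 ^ 0x1))  # True: one bit away still matches
    print(is_match(0x0123456789ABCDEF))        # False: nowhere near either known hash

That tolerance is what lets the system catch a recompressed copy of a known image, and it’s also exactly the slack an attacker exploits when hunting for a collision.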

A GitHub user, Asuhariet Ygvar, shared a reconstructed version of Apple’s NeuralHash. Within hours, another GitHub user, Cory Cornelius, found two different images that, when run through this reconstruction of Apple’s NeuralHash, produced a collision: both images returned the same hash value. That’s a false positive. This wasn’t a one-in-a-trillion fluke; it was something someone managed in just a few hours.
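The real collision was found by nudging an image’s pixels until the reconstructed neural network produced a chosen hash, which takes more machinery than fits here. But the underlying idea, constructing an input that lands on a hash you picked in advance, is easy to demonstrate against the toy average hash from earlier. To be clear, this is not an attack on NeuralHash; it only shows why perceptual hashes are far easier to collide with than cryptographic ones:

    # Build an image that collides with a chosen target hash under the toy
    # average hash above. Illustrative only -- not Apple's NeuralHash.
    from PIL import Image

    def forge_collision(target_hash: int, hash_size: int = 8) -> Image.Image:
        """Encode the target hash as a bright/dark 8x8 pattern, then enlarge it
        so it looks like an ordinary (if blocky) picture."""
        n = hash_size * hash_size
        pixels = [255 if (target_hash >> (n - 1 - i)) & 1 else 0 for i in range(n)]
        small = Image.new("L", (hash_size, hash_size))
        small.putdata(pixels)
        return small.resize((512, 512), Image.NEAREST)

    # Forge an image that hashes like some harmless photo (placeholder file name):
    target = average_hash(Image.open("photo.jpg"))
    forged = forge_collision(target)
    # The forged image's hash matches the target (resampling may flip a bit or two):
    print(hex(average_hash(forged)), hex(target))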

If NeuralHash is this broken, then the tool responsible for combing through your photos could return a lot of false positives, perhaps even enough for Apple to flag your device for additional attention, at which point a person will look at your photos and potentially pass them along to the authorities. This means you could potentially be arrested over false matches, or someone could spam your phone over AirDrop or iMessage with images crafted to produce false matches and get your device flagged. There’s a gigantic hole in the security of this feature, yet Apple seems to want to go ahead as planned anyway.

A Privacy Nightmare

Before someone found a hash collision in Apple’s NeuralHash program, we were already worried about privacy for a litany of other reasons: abusive partners able to spy on your iMessages, or oppressive governments inserting their own hashes into people’s phones to view images stored on the device, find political rivals, silence protestors, or oppress minorities. Now we have an entirely new problem on our hands: Apple’s tool is already broken. We’ve been saying, “Even if this works perfectly…” for a while now, but the truth is we already know it doesn’t.

So far, Apple is continuing to push for this technology on our devices. It will turn every iPhone into a spying tool, and in some cases it may even become a tool for oppression. Apple, the company that supposedly prides itself on privacy, is selling out its users: not just criminals but, as it turns out, anyone.

What Can I Do?

Here are a few options for telling Apple this technology isn’t secure, doesn’t respect privacy, and isn’t ready to help anyone yet:

  1. Sign the Electronic Frontier Foundation’s open letter
  2. If you have a GitHub account, sign the first Apple Privacy letter
  3. Leave Apple feedback, asking them not to build a tool that scans through users’ phones with broken software anyone can easily manipulate
  4. Tweet, share articles, and spread awareness

Apple wasn’t expecting this level of pushback, and they’ve already been reconsidering their path forward. Apple’s marketing is everything to them, and now the one thing they’ve pushed consistently in that marketing over the past few years, privacy, isn’t something you can count on having on an iPhone. Apple could still change course and find ways to scan for CSAM that don’t violate our privacy, like only scanning images that are shared or received, not those stored on the device. Apple has options here to protect children and protect your privacy. They just have to realize that the sunk cost fallacy they’re leaning into to protect this flawed software isn’t worth the losses they’ll see in sales, iOS users, and iCloud storage plans.


Sources: