Apple’s Upcoming Child Protection Measures Could Endanger Them… and Everyone Else Too

Apple's Headquarters, Apple Park, in Cupertino, CA

Photo: Arne Müseler/arne-mueseler.com/CC-by-SA-3.0

When having a debate, many people fall into the slippery slope fallacy. “If we legalize gay marriage, soon people will want to marry their dogs!” That’s a slippery slope argument; there’s no logical connection between consenting adults and dogs. With tech, however, things that sound like slippery slope arguments, such as, “If you build this privacy back door, someone else will use it improperly,” are really more of a Chekhov’s gun. If you saw it in the first act, it’s going to go off in the second. It’s like taking the door off your home: eventually, someone you’re not expecting will walk in. The two are related directly, cause and effect.

Apple’s new measures, which monitor photos on users’ devices as well as photos users send or receive, are made to protect children. They likely will. However, there will be unintended consequences that could also harm other people, innocent people at that. On the surface, the measures don’t seem dangerous. One checks your photos against known abusive content involving minors using a secure hashing method. Another scans incoming photos with machine learning to look for possible nudity. Both will absolutely protect children. However, both open up massive privacy problems that will lead to serious misuse.

The door’s open, someone’s going to come in.

What Apple’s Making

The internet has made it easier than ever for predators to capture and share child sexual abuse material (CSAM). Fortunately, we’ve created tools over the years to track these people down and arrest them. Those tools can always be better, though. The number of acceptable child predators and abusers is zero.

Apple announced three measures: two that will monitor for CSAM, and one that will help prevent people from finding it and point those exposed to abuse toward help. The first is a set of changes to Siri. Siri will block requests that could be searches for CSAM. It’ll also assist users who want information on escaping or coping with abuse, which will help children in unsafe situations get help. The other two measures are aimed at preventing the distribution of CSAM.

Hashing and Identifying

Illustration from Apple's PDF describing the measures Apple's taking to fight CSAM (described below).

From Apple’s PDF explaining their abusive media-fighting technology.

The other two systems “scan” photos on your device and report them if they may be exploitative of children or could be sexually explicit material sent to or from a child. The first is similar to what you’d find on many servers across the web, only tailored to work on a person’s device. It relies on a hash, basically a unique key, computed for each known piece of CSAM. The hash is designed to be a “fuzzy” match, not a rigid one: if someone crops an image, desaturates it, or otherwise alters it, the key stays the same, so the image can still be tracked down. Many websites, including Facebook and Twitter, use this kind of technology to ensure people are not uploading CSAM.

Two images, one with photo manipulations. The hashes are the same. Text reads: "The image on the right is a black and white transformation of the image on the left. Because they are different versions of the same photo, they have the same NeuralHash."

Screenshot from Apple’s report on their expanded protections for children.
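To make the “fuzzy” matching idea concrete, here’s a minimal sketch in Swift. Apple hasn’t published NeuralHash’s internals, so the 64-bit hashes and the distance threshold below are illustrative assumptions about how perceptual hashes are generally compared, not Apple’s actual algorithm.

```swift
import Foundation

// A toy perceptual-hash comparison. Apple has not published NeuralHash's
// internals; this only illustrates the general idea that "fuzzy" image
// hashes are compared by similarity rather than exact equality, so a
// cropped or desaturated copy still matches the original.
struct PerceptualHash {
    let bits: UInt64  // hypothetical 64-bit hash of an image

    // Hamming distance: how many bits differ between the two hashes.
    func distance(to other: PerceptualHash) -> Int {
        (bits ^ other.bits).nonzeroBitCount
    }

    // Treat two images as "the same photo" if their hashes differ by
    // only a handful of bits. The threshold here is made up.
    func matches(_ other: PerceptualHash, maxDistance: Int = 4) -> Bool {
        distance(to: other) <= maxDistance
    }
}

// A hash from a known database entry vs. a hash of the same image after
// a black-and-white transformation: a couple of flipped bits still match.
let knownHash   = PerceptualHash(bits: 0b1011_0010_1110_0001_0101_1100_1010_0111)
let alteredHash = PerceptualHash(bits: 0b1011_0010_1110_0001_0101_1100_1010_0110)
print(alteredHash.matches(knownHash))  // true
```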

Because hashing isn’t perfect, and can actually be fooled with specially crafted files made to look like normal photos while matching the hash of known CSAM, Apple will use a threshold system with eventual human verification. For each photo, your phone packages the match result, whether the image is suspected or not, into an encrypted voucher that’s sent to Apple. Apple cannot read whether any image is suspected CSAM until a threshold of matches has been reached, say a sixth image when the limit was five. At that point, Apple feels it’s safe to assume these were no false positives. Only then does Apple get the key it needs to open the vouchers for the suspected files. Each voucher contains whether the image is suspected of being CSAM, along with a blurry thumbnail. Apple can then contact the appropriate authorities if it is CSAM.
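Here’s a rough Swift sketch of that threshold behavior. It’s only a toy model of the observable flow described above: the real system encrypts each voucher with threshold secret sharing so Apple mathematically can’t read anything below the limit, and the threshold value and voucher contents here are placeholders, not Apple’s.

```swift
import Foundation

// Toy model of threshold-gated review. In Apple's design the gating is
// cryptographic (threshold secret sharing over encrypted vouchers); here
// we only model the visible behavior: nothing is reviewable until enough
// suspected matches accumulate.
struct SafetyVoucher {
    let suspectedMatch: Bool
    let blurredThumbnail: Data  // stand-in for the encrypted payload
}

struct AccountMatchState {
    private(set) var vouchers: [SafetyVoucher] = []
    let threshold = 5  // illustrative, matching the "limit was five" example above

    mutating func record(_ voucher: SafetyVoucher) {
        vouchers.append(voucher)
    }

    // Below the threshold, human reviewers get nothing at all.
    var reviewableVouchers: [SafetyVoucher]? {
        let suspected = vouchers.filter { $0.suspectedMatch }
        return suspected.count > threshold ? suspected : nil
    }
}
```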

By doing it this way, Apple ensures that your photos, and even whether or not they contain CSAM, stay private until a threshold is reached, so they’ll only go after people who have collected multiple pieces of CSAM. Apple says the odds of a false positive at that point are one in a trillion. Before Apple reports content to the National Center for Missing and Exploited Children (NCMEC), the congressionally established clearinghouse for this material, human reviewers look at the blurred thumbnails. Only NCMEC handles the media itself and decides the appropriate next steps. This way, Apple involves people only enough to make false positives incredibly unlikely, and only after their system has reviewed the photos.

Most photo services already run technology like this on every photo uploaded, without Apple’s additional privacy measures. The difference here is that the scanning happens on your device. Apple doesn’t allow users to opt out directly, as that would defeat the purpose. However, it only applies to users who have iCloud Photos enabled, which is most iOS users. You can only opt out by refusing to use a feature that is practically a must-use on Apple devices.

Monitoring Messages

Four screenshots of messages showing how it hides potentially sensitive photos and can tell whoever is set up as a parent that a child looked at it.

The second part of Apple’s new child protections is a change to the Messages app. It applies to devices monitored under a parental-control setup. If someone is monitoring your device, every photo you send or receive in Messages goes through a machine learning-based scan. Apple’s machine learning will flag photos that could be sexually explicit. This will likely rely on face and skin detection, which, as we all know, isn’t equally reliable for every skin tone.

Everyone monitored by this system through their parents or guardians will see an alert when they receive media that could be sensitive. When they attempt to view it, they’ll have to acknowledge that their parents will be alerted that they viewed or sent potentially sensitive content. If they choose not to view the material, no one is alerted. The alert also contains a link to resources for potential victims of abuse. This gives every child both a push to make the right decision about the content and a way to find help if they’re in an abusive situation.

Apple’s system will only alert parents for children 12 and under. Children ages 13-17 will only see a message that the content may be sensitive; their parents will not be alerted that they’ve viewed it. Age is determined when the child’s iCloud account is created, so it’s possible for parents to claim their children are younger than they are, and lying about age is the only way to force this monitoring onto older members of a family plan.
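A small Swift sketch of the decision logic as described above. The type and function names are hypothetical, not Apple’s API; the key assumption is that the age on file for the account, not the person’s real age, drives the behavior.

```swift
import Foundation

// A sketch of the alert flow described above, assuming the registered
// account age (not the person's real age) is what drives the outcome.
enum SensitiveContentOutcome {
    case deliveredNormally          // nothing flagged
    case warnedOnly                 // recipient warned, no one else notified
    case warnedAndParentNotified    // recipient warned, parent alerted on viewing
}

func outcome(forAccountAge age: Int,
             flaggedAsSensitive: Bool,
             childChoseToView: Bool) -> SensitiveContentOutcome {
    guard flaggedAsSensitive else { return .deliveredNormally }
    guard childChoseToView else { return .warnedOnly }   // declining alerts no one
    // Accounts registered as 12 or under alert the parent; 13-17 only get a warning.
    return age <= 12 ? .warnedAndParentNotified : .warnedOnly
}

// The weakness discussed below: whoever registers the account with a lower
// age changes the outcome, not the monitored person's actual age.
print(outcome(forAccountAge: 11, flaggedAsSensitive: true, childChoseToView: true))
// warnedAndParentNotified
```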

Harmful Messaging for Children and Adults

On the surface, both of these measures sound fantastic. If this were all these features could ever do, this would be a very different post. The problem is that there are gaps in the implementation that could hurt people.

Abusive Partners

The first potential issue, and the easiest to exploit, comes in the form of abusive partners. It’s an extreme case, but an abusive partner could use the feature to control and monitor their victim. Someone could create an iCloud account for their victim, lie about the person’s age, and add it to a family plan. Abusers already do this; it lets them use Find My to locate their victim at all times. Now it can also let them watch for any potentially intimate photo their victim sends or receives, to “make sure” they’re not cheating. That includes received photos. Women know all too well that online dating often means getting unsolicited photos of genitals they never wanted to see. Viewing one of those photos could alert the worst person in their life, their abuser.

Adults are more capable of getting help in these situations, but abuse makes that very difficult. Victims already feel like they have no privacy and can’t reach out for help. They already feel trapped. This feature could increase that feeling.

Overbearing Parents and Outing Kids

https://twitter.com/KendraSerra/status/1423365222841135114?s=20

Around half of all homeless youth are LGBTQ+, most of them either kicked out or driven to run away by homophobic or transphobic abuse at home. An overbearing parent will most assuredly lie about their child’s age to continue monitoring them. From there, they could see photos exchanged between teenage significant others, or photos documenting a transition or how their child dresses. It’s very likely that someone will be outed by this feature, and probably to parents who already maintain an abusive level of oversight over their children. It could result in children and teenagers being forced out of their homes, sent to conversion therapy, or driven to self-harm and suicide.

False Positives and Bad AI

False positives alone could put someone in hot water. A vacation photo at a beach or pool, for example, could trigger the alert. A child could view it, knowing there’s nothing inappropriate in it, and still set off an alert to a parent or abusive partner, who may not believe the photo was innocent, or may assume the offending message was deleted and swapped for something more innocent-looking.

Searching for "bra" in the photo app will show you photos of the owner or their friends in only their underwear.

If you think Apple’s AI is benevolent and never makes mistakes, check your Photos app right now. If you’re a woman, or have photos of women in your library, go ahead and search for “bra.” Apple will suggest “brassiere.” Yes, Apple will help you find photos of potentially scantily clad women. I say “potentially” because only one of the photos I’m tagged in actually shows a bathing suit; the rest are just tank tops. Still, Apple chooses which tags users can search by and what their models scan photos for, and they maintain blocklists so certain objects are never suggested. For years, they’ve known that anyone who gets their hands on a woman’s phone, from police to friends to abusive partners, could quickly search it for photos of her in a state of undress. It has likely already been abused by police officers who steal photos of women off their phones.

Apple, despite knowing their AI had harmful elements, has continued to let people search for bras in women’s photos. They simply don’t care that it hurts women’s safety while offering no real benefit. Major news organizations reported on this four years ago. Apple has since shipped four major versions of iOS, with many updates in between, and kept the potentially dangerous feature in every one.

Apple won’t pull a feature, even one that can hurt people while providing nothing meaningful to anyone but creeps.

If You Build It, They Will Come: Spying via Hash

There’s nothing about the hashing system that’s specific to CSAM. Once an image is hashed, it’s just a string of letters and numbers that, thankfully, can’t be turned back into a photo. The system uses those strings to find matching images without having to load or store the offensive material anywhere. It’s a fantastic way to manage a problem involving photos no one should ever have to see. It’s also a clever way to turn a system built to save lives into one that oppresses them.

You could create a hash for the Tank Man photograph, add it to the list of hashes, and, if a person has the Tank Man photograph on their device, the system will find it and report them to the authorities. Before, Apple’s and other companies’ systems only worked on photos users voluntarily uploaded. This new system lets your iPhone scan your local photos and report to Apple that it has found offending content. What if it’s anti-Putin posters? Photos from widely distributed gay porn in a homophobic nation? What if a government just wants to track down whoever took or saved a photo at a protest? All of it can be accomplished with Apple’s new tool.
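The matching step itself has no idea what the database represents, which is the whole concern. A short, hypothetical Swift sketch to illustrate: swap the hash list and the exact same scanner hunts for protest photos instead.

```swift
// Hypothetical sketch: the device-side matcher only sees opaque hashes.
// Whether `database` was built from CSAM or from the Tank Man photograph
// is decided entirely by whoever supplies the list.
typealias ImageHash = UInt64

func flaggedPhotos(onDevice photoHashes: [ImageHash],
                   against database: Set<ImageHash>) -> [ImageHash] {
    photoHashes.filter { database.contains($0) }
}
```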

Apple says it can’t be used that way, though, because they wouldn’t allow it. Apple says that before they report any photos to the authorities, a human looks into it, including at a blurry thumbnail of the photo in question, and that they’ll only pass photos along if they’re CSAM. But how closely will they look? From a reviewer’s perspective, isn’t it safer to refer a potential child abuser than to dismiss a match because it might just be a protest poster or politically sensitive photo?

Furthermore, will these be local Apple employees who have to follow local laws? Will countries force Apple to comply? Apple says they’ll refuse, but you can’t find VPN apps in China because Apple removes them at the request of the Chinese government. LGBTQ+ apps are unavailable in many countries. In Saudi Arabia, men can use an app to restrict women’s travel. Apple has thrown human rights aside whenever a country threatens to ban the sale of its devices, and they’ll cave on this too. They’ve given us no reason to believe otherwise: every time a regime makes a demand, Apple falls in line.

Apple’s Already Doing This

Apple’s hashing method for detecting CSAM isn’t new, even within Apple. Apple’s terms have noted for some time that it may scan images uploaded to its services. Apple’s reporting numbers are lower than other companies’, but they have been reporting offenders. Apple hasn’t said exactly how, though it may be that they only scan photos and videos sent through email, large attachments in iMessage, or iCloud photo sharing. That would be a perfect middle ground: it only scans media users willingly share, and it stops users from sharing or receiving CSAM.

The hashing method used to fight the distribution of CSAM is everywhere on the web, from photo databases to social networks. By stopping people from sharing this media, you largely stop them from collecting it on their devices. At that point, there’s no reason to go into people’s phones, the digital equivalent of daily searches of your house. Especially since, now that internet creeps know about this feature, those abusers will simply turn iCloud Photos off. The scanning won’t happen for them, but the tools to scan for other photos, like the ones repressive regimes want to suppress, will remain on everyone’s devices.

Minorities Hurt Worst

Those “screeching voices of the minority” Apple is so dismissive of? Yeah, they’re actual minorities: LGBTQ+ people, ethnic minorities, and others endangered by their own governments. Are minority voices really that easy to dismiss at Apple? It’s not as if Apple employs an especially diverse group of engineers and managers; Apple’s workforce is more homogeneous than the general population, overwhelmingly white and male. It’s a group of people who have never had to fear their own government coming after their basic civil liberties or bodily autonomy, and who aren’t the most frequent targets of domestic abuse. Our goals align, ending the distribution of CSAM online, but some of us aren’t willing to toss aside those “screeching voices of the minority” to get there.

The road to hell is paved with good intentions, and tools that can create harm aren’t acceptable when we’re trying to help people.

Apple’s Response… And What They’re Missing

Apple’s response covers a number of questions people might have, but it dodges the most pressing ones. For example, there’s no promise that Apple won’t comply if a country threatens a ban unless the technology is extended to other uses. Apple also hasn’t built in a safety valve for the people being monitored. Children who believe a parent lied about their age, or adults whose abusive partner set up their account, would be better off if they could report the problem themselves. Apple offers no direct way to do that in this tool. The popup that appears when potentially sensitive material arrives in a message has no option to tell Apple that the flag may be a mistake or that the person is being improperly tracked. There’s no way here for victims to get help.

Apple has made AirTags alert people when they’re being “followed” by one of the devices. It was a clear way to help keep people from using the devices to track one another. But they didn’t build similar protections into iOS for this feature. As a result, some people will be monitored without ever knowing about it, and will have no way to stop or appeal that tracking.

Apple has compromised what they claim are their values before, for countries that demanded it. They’ve pulled LGBTQ+ apps, allowed apps that enable the abuse of women, and removed VPN apps. They’ve made no promise that they won’t let countries use this tool for other purposes, even under the threat of losing a market. That’s just not enough. This tool, if hardened against abuse, could be useful. In its current form, though, it turns your iPhone into a dangerous spying tool.

Balancing Privacy and Safety

Privacy is safety. Software engineers know that. As of this writing, nearly 8,000 people have signed an open letter asking Apple to abandon or change this feature. A majority of AppleInsider readers say they’re uncomfortable with the technology Apple has created. We all agree the motive is a good one, and if the tools were better focused and couldn’t be used maliciously, they’d be fantastic. These could be great new ways to protect children, and we should be celebrating them. However, Apple hasn’t done the due diligence to protect users, and that’s a serious problem that could lead to more harm than good. Once you realize Apple is already scanning shared media for CSAM, as is nearly every other company, you start to wonder whether anyone really needs CSAM-scanning software running on everyone’s personal devices.

There’s one possible upside. It may be that Apple is doing this so it can begin end-to-end encrypting all iCloud backups and tell the FBI and other law enforcement agencies that those backups are both secure and free of CSAM. Apple hasn’t announced this, though. Currently, Apple can look into your iCloud backups, which they say they only do for law enforcement, but that’s no guarantee. Full end-to-end encryption would be a compelling reason for these changes and a real way to protect users. If Apple tweaked the implementation and promised to leave any country that demands the tools be used to monitor users in other ways, the trade might be worth it. Still, Apple hasn’t stated that this is the goal; it’s only speculation. There are no reports that Apple plans to improve user privacy here, only to chip away at it.

Good Intentions, Bad Possibilities

If Apple goes through with this, your phone will be a surveillance tool. It will use an automated process that could send police to your door for the most heinous of crimes, even if you’re the victim of a prank or attack. Unfortunately, proving innocence against a machine is an expensive battle that some victims of bad AI have found can take years and thousands of dollars. Besides that, the tools are easy for bad actors to abuse. Apple’s putting everyone in danger for a tool that will most likely do more good than harm, but it will still harm innocent people. This system has the potential to hurt too many people.

Apple built these privacy back doors for good purposes. But once you open that door, you can’t control who comes in. The Messages scanning would be easy to abuse, and the local photo-scanning technology could be turned to other ends; any image can be hashed and hunted for with Apple’s software. Apple could refuse to cooperate with nations that mandate it, but they’ve always caved when a regime came knocking. When a country threatens to ban Apple unless it blocks LGBTQ+ apps or permits apps that track women, Apple complies.

I won’t lie to you: for most people, this is nothing to worry about. In some cases, it will catch child abusers and those sharing or storing their material. Together, these features could protect children and put people seeking explicit material of children behind bars. But others will use these features for abuse, and Apple’s photo-hashing tools would be perfect for authoritarian regimes. The door’s open, and eventually Apple’s bouncer is going to let the wrong people in.


Sources/Further Reading: