Background information

NeuralHash: Apple responds to questions about privacy

Apple will soon scan your smartphone for images of child abuse and send data to authorities. In the wake of initial outcry from the Internet community, the company based in Cupertino has provided additional information.

The NeuralHash feature has caused quite a stir in recent days. In a second white paper, Apple has responded to questions and faced up to criticism from privacy advocates and its users.

  • Background information

    Apple NeuralHash vs. privacy – Pandora’s box is opened

    by Dominik Bärlocher

Privacy advocates are appalled. Users are, too.

«What’s on your iPhone, stays on your iPhone»*
*Yeah, unless Apple just so happens to find your photos «interesting» – weighted by criteria which surely only flag the «bad» pictures as «interesting»... Surely. Uncle Apple is, of course, one of the good guys. (And if you claim otherwise, maybe your nudes will be leaked «accidentally».)
Rem3_1415926, comments column, digitec.ch

Apple has confronted this criticism and other similar arguments. You’ll find the most important questions and answers here. Even so, the six-page white paper is worth reading if you want to understand how Apple continues to see itself as a protector of data – despite circumventing its own encryption and violating the privacy of its users.

What is the white paper about?

An upcoming version of iOS 15 will roll out two features to protect children. For the time being, these functions will only work in the United States. Apple has not yet said anything about a worldwide rollout. These two features are called:

  1. Communication safety in Messages
  2. NeuralHash

If you’re familiar with these two features, feel free to skip to the section titled «Apple answers questions about communication safety in Messages». The following two sections contain a technical overview of NeuralHash, as well as a mini-analysis of the «Communication safety in Messages» feature.

In a nutshell: NeuralHash

NeuralHash compares images on your iPhone or iPad against known images showing child abuse. This is done on your device, i.e. before your images are uploaded to iCloud. In doing so, Apple bypasses its own encryption of iCloud backups while still technically preserving it.

The dataset that serves as a basis for the analysis on your smartphone or tablet is provided by the National Center for Missing and Exploited Children (NCMEC) and other child safety organisations. The NCMEC is the organisation responsible for contacting the authorities should any suspicions of child abuse be confirmed. If NeuralHash detects a sufficient number of images showing child abuse, the images are decrypted and sent to Apple. These images are then looked at by humans, and if the suspicion is confirmed, the NCMEC is alerted.

The images on your device are matched with numerical values generated by Apple’s own hashing system – NeuralHash – which abstracts image data. These numerical values are known as hashes. So far, Apple has not confirmed whether NeuralHash will also check newly generated material for nudity.
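
To make the matching idea a bit more tangible, here is a minimal sketch in Swift. Everything in it is hypothetical: neuralHash(_:) merely stands in for Apple’s neural-network-based perceptual hash, and the blinded hashes and private-set-intersection steps Apple describes are left out entirely.

    // Minimal sketch of the matching idea; this is not Apple's implementation.
    // neuralHash(_:) is a hypothetical stand-in for Apple's perceptual hash.
    import Foundation

    typealias ImageHash = UInt64   // simplified; the real hash value is larger

    // Hypothetical placeholder: a real perceptual hash would map visually similar
    // images to the same value. This one just derives a number from the raw bytes.
    func neuralHash(_ imageData: Data) -> ImageHash {
        ImageHash(truncatingIfNeeded: imageData.hashValue)
    }

    // Hashes derived from the NCMEC-provided database, shipped inside iOS.
    // Left empty here; on a real device Apple populates this table.
    let knownHashes: Set<ImageHash> = []

    // Count how many of the photos queued for iCloud upload match the database.
    func matchCount(of photosToUpload: [Data]) -> Int {
        photosToUpload.map(neuralHash).filter(knownHashes.contains).count
    }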

In a nutshell: communication safety in Messages

Communication safety in Messages is a feature that will be included in the «Messages» app on iPhones and iPads. It’s an optional tool parents can use to protect their children. If the feature is turned on, the device intervenes whenever a child sends or receives nude pictures.

To use this feature, the child’s account must be set up as part of a family. Like NeuralHash, communication safety in Messages analyses data directly on your phone. Unlike NeuralHash, no data is sent to Apple if the system raises an alarm.

Should communication safety in Messages detect a nude image, the photo will be blurred, the child will be warned, and helpful resources or tips will be provided. The iPhone or iPad also displays a message saying that it’s okay if the child doesn’t want to view the picture.

In addition, parents of «young children» can be informed when their child sends or receives nude pictures via Messages.
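
The decision flow just described can be sketched roughly as follows. The types and the classifier isSexuallyExplicit(_:) are invented for illustration and are not Apple’s actual API; the point of the sketch is that the whole check happens on the device and nothing is sent to Apple. (The age limit of 12 comes from Apple’s FAQ, covered further below.)

    // Rough sketch of the decision flow; types and classifier are hypothetical.
    // Everything runs on the device and nothing is sent to Apple.
    import Foundation

    struct ChildAccount {
        let age: Int
        let parentalNotificationsEnabled: Bool   // must be switched on by parents
    }

    enum MessageImageAction {
        case showNormally
        case blurAndWarn(notifyParents: Bool)
    }

    // Hypothetical stand-in for the on-device image classifier.
    func isSexuallyExplicit(_ imageData: Data) -> Bool { false }

    func handleIncomingImage(_ imageData: Data, for account: ChildAccount) -> MessageImageAction {
        guard isSexuallyExplicit(imageData) else { return .showNormally }
        // Parents are only notified for young children (12 or younger, per Apple's
        // FAQ) and only if they explicitly enabled the notification.
        let notify = account.parentalNotificationsEnabled && account.age <= 12
        return .blurAndWarn(notifyParents: notify)
    }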

Communication safety in Messages only intervenes when images are transferred through the «Messages» app. It doesn’t monitor communication that happens in WhatsApp, Signal, Threema, Snapchat, TikTok, Instagram, any other messaging app, or in Safari and the like.

Communication safety in Messages proves that Apple is capable of analysing images not only based on already existing material; it can also detect newly generated nude images, i.e. images that don’t yet exist in any database.

Apple answers questions about communication safety in Messages

In a PDF published on 9 August 2021, Apple addresses public criticism and explains how the two features work to protect child welfare.

The following questions and answers have been adapted from the PDF and shortened for readability. I still recommend reading the six pages from Cupertino.

Do NeuralHash and communication safety in Messages use the same technology?

No, the features are neither identical nor do they use the same technology.

  • Communication safety in Messages is able to analyse newly generated data for its content and react accordingly.
  • NeuralHash matches data on the iPhone with images in a database. If NeuralHash raises an alarm, the data is decrypted and sent to Apple. A human then confirms the suspicion and transmits the data to the NCMEC. The NCMEC will then contact other authorities around the world.

Who can use communication safety in Messages?

Communication safety in Messages can only be used for accounts that are integrated into Family Sharing. The feature is switched off by default; parents and guardians must activate it manually.

Notifications that a nude image has been sent or received can only be activated for children 12 years of age or younger.

Does communication safety in Messages share information with Apple, the police or authorities?

No, Apple does not have access to communication data in Messages. Communication safety in Messages doesn’t transmit data to Apple, the NCMEC, or the police.

NeuralHash, on the other hand, does transmit data. However, NeuralHash does not interact with communication safety in Messages.

Does communication safety in Messages break end-to-end encryption in Messages?

No. Communication safety in Messages does not violate the privacy assurances of Messages made by Apple. Apple does not gain visibility into messages sent and received in the «Messages» app.

Communication safety in Messages is a fully automated system that parents or guardians must consciously turn on.

Does communication safety in Messages prevent children in abusive homes from seeking help?

No. Communication safety in Messages analyses images only for «sexually explicit» content. It does not affect any other communication.

However, Apple will add additional support to Siri and Search to provide more assistance to victims and their acquaintances.

Apple answers questions about NeuralHash

Since NeuralHash and communication safety in Messages are two different features, different rules and measures apply to NeuralHash.

Apple also calls NeuralHash «CSAM detection». CSAM stands for «Child Sexual Abuse Material», in other words, child pornography.

Is Apple going to scan all the photos stored on my iPhone?

No. CSAM detection only scans images that the user is about to upload to iCloud Photos.

CSAM detection is only triggered in the following cases:

  • if multiple instances of child pornography are found on one device.
  • if the pornographic material is in the NCMEC’s database.

If a user does not have iCloud Photos enabled, then NeuralHash aka CSAM detection is not enabled either.
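
Put as code, the gating Apple describes might look something like the sketch below. The names and the threshold value are purely illustrative; Apple has not published the actual threshold in the white paper.

    // Sketch of the gating described above, with invented names and an
    // illustrative threshold; Apple has not published the real value.
    struct LibraryScanState {
        let iCloudPhotosEnabled: Bool
        let matchedImageCount: Int   // matches against the NCMEC-derived hash list
    }

    let illustrativeThreshold = 30   // placeholder, not Apple's actual number

    func shouldEscalateForHumanReview(_ state: LibraryScanState) -> Bool {
        // No iCloud Photos, no scanning at all.
        guard state.iCloudPhotosEnabled else { return false }
        // A single match is not enough; multiple instances are required.
        return state.matchedImageCount >= illustrativeThreshold
    }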

Will child pornography be downloaded to my iPhone to compare against my photos?

No. Images with CSAM content are not stored on the device. Apple turns images in the NCMEC database into numerical values. These numerical values are known as hashes. It’s not possible to restore the original image based on its hash.
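
As an aside, the idea of a one-way numerical fingerprint can be illustrated with an ordinary cryptographic hash. NeuralHash itself is a perceptual hash computed by a neural network, which is a different construction; the snippet below only demonstrates the «cannot be turned back into the image» property.

    // Illustration of a one-way numerical fingerprint using SHA-256.
    // NeuralHash is a perceptual hash, a different construction; the only point
    // here is that a hash cannot be turned back into the original image.
    import CryptoKit
    import Foundation

    let imageBytes = Data([0x00, 0x01, 0x02])        // stand-in for real image data
    let digest = SHA256.hash(data: imageBytes)       // 256-bit numerical value
    print(digest.map { String(format: "%02x", $0) }.joined())
    // There is no way to reconstruct the original bytes from this value.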

Why is Apple doing this now?

Apple sees it as a great challenge to maintain the privacy of its users while also protecting the welfare of children. With NeuralHash, or CSAM detection, Apple can find out which iCloud accounts are being used to store multiple instances of child pornography.

Apple does not see what happens locally on the iPhone.

According to Apple, existing techniques as implemented by other companies scan all user photos stored in the cloud, which is a privacy risk for all users. For this reason, Apple believes NeuralHash is superior – it does not allow Apple to learn about other photos.

Can the CSAM detection system in iCloud Photos be used to detect things other than CSAM?

Apple has designed NeuralHash, or CSAM detection, to prevent this. The databases used to match photos uploaded to iCloud are provided by the NCMEC and other child safety organisations.

No automatic reports are made to the police or other authorities.

In most countries, possession of child pornography is a crime. Apple is required to report such material to the authorities.

Could governments force Apple to add non-CSAM images to the hash list?

No. Apple would refuse any such requests. NeuralHash only searches iCloud for what has been deemed child pornography by experts from the NCMEC and other child safety organisations.

Apple has faced demands from governments and regulators in the past to undertake actions that would violate or weaken user privacy. Apple has categorically rejected such demands.

Apple will continue to reject such demands.

Before data is sent to the NCMEC, alerts from NeuralHash, or CSAM detection, are reviewed by a human at Apple.

Can non-CSAM images be «injected» into the system to identify accounts for things other than CSAM?

Apple has taken measures to prevent any non-CSAM images from being added to the databases NeuralHash relies on. The dataset used to match images is verified by the NCMEC and other child safety organisations.

The databases, which are provided by child safety organisations, are abstracted into numerical values called hashes and stored on each iPhone and iPad. Under this design, it’s not possible to monitor individuals with NeuralHash.

Before the NCMEC is informed, the images are reviewed by a human at Apple. At this point, at the latest, the reviewer will notice if the images do not contain child pornography, and no report will be made.

Will CSAM detection falsely report innocent people to the police?

No. Apple has designed NeuralHash to have an extremely low likelihood of a false alarm. Apple estimates the probability of incorrectly flagging a given account to be less than one in one trillion per year.
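
To get a feel for why requiring multiple matches pushes the false-alarm probability down so far, here is a back-of-the-envelope illustration with purely hypothetical numbers; Apple has published neither its per-image error rate nor its threshold, and the calculation assumes false matches are independent.

    P(\text{account falsely flagged}) \approx \binom{n}{t}\, p^{t}

    \binom{10^{4}}{10}\,(10^{-6})^{10} \approx 2.8 \times 10^{33} \cdot 10^{-60} \approx 3 \times 10^{-27}

Here p is a hypothetical per-image false-match probability of one in a million, n = 10,000 is the size of the photo library and t = 10 the required number of matches. Even with these made-up numbers the result sits far below the one-in-a-trillion mark, which is the intuition behind Apple’s figure.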

Because every identification is additionally reviewed by a human, incorrect reports will be noticed and will not be forwarded to the NCMEC.

As a result, attacks and system failures will not result in innocent people being reported to the NCMEC or the authorities.

The bottom line

Apple’s white paper was clearly written to calm the waters. It aims to make people feel secure again and not view their privacy as being at risk.

But that sense of security is deceptive.

A question of technology, not child safety

The fact is that Apple is unleashing a system on humanity that can analyse personal images and transmit them without the user’s knowledge. It’s easy to nip this dialogue in the bud with the argument that if you’re critical of these features, «you’re supporting child pornography». In the context of the discussion about the technology behind NeuralHash, this argument should not be allowed to stand.

Apple doesn’t deny that NeuralHash could be used with other data. It’s technologically possible for NeuralHash to be applied to hijabs, penises, cats, or transsexuals. That’s where the danger lies. It’s absurd to believe that anyone would legitimately oppose the fostering of child welfare. Speaking out against the technology behind these measures is, on the other hand, reasonable.

Technological change

With communication safety in Messages, Apple has proved that it can analyse transmitted data in real time. Just because NeuralHash doesn’t do this now, that doesn’t mean it never will. The mechanisms behind communication safety in Messages could easily be integrated into NeuralHash. Reasons why this hasn’t been done yet could include the following:

  • Apple doesn’t have the manpower required to sift through all the messages.
  • iPhones and iPads are not yet powerful enough to do this analysis in real time, given the sheer flood of messages.
  • Apple doesn’t want to do this.
  • Image recognition in communication safety in Messages is not yet reliable enough to be let loose on other data.

Security and Apple’s partners

Security-wise, there’s another factor to consider. Apple may have servers that are difficult to hack into, even if Apple is nowhere near as secure as the company likes to portray itself to be. But this may not apply to the servers of Apple’s partners.

  • Background information

    Pegasus: you can run but you can’t hide

    by Dominik Bärlocher

Is their data as well protected as Apple’s? Can the external databases be manipulated to track other things? Apple hasn’t denied this. The company is relying on the last safeguard that can catch such misuse reliably: humans. This is probably why Apple decided to implement a human review. It’s the last bastion against the misuse of a system that’s highly attractive to the leaders of any government.

A question of trust

If communication safety in Messages and NeuralHash were merged, the system could be applied to essentially anything – to track down women in Iran who don’t wear the hijab, or simply to create the world’s largest collection of cat pictures. All this can be done without your knowledge, without your consent, and without your cooperation. This cannot be allowed to happen.

Companies are not absolutely and permanently immune to interference from a government. In the end, Apple, like any other company, has to do the math: can the company afford to lose an entire political area as a market – like a country, or the EU – if it decides not to follow a piece of regulation? Here’s an illustrative example: if the Chinese government were to demand that NeuralHash be used on Uyghurs, would Apple be willing to forgo the Chinese market?

Apple does currently promise that such requests will be categorically rejected. But there’s no guarantee that it will stay that way. Future management doesn’t have to act in the same way as current management does.

A more reasonable measure

Communication safety in Messages – while strongly reminiscent of the episode «Arkangel» from the dystopian TV series «Black Mirror» – is a more reasonable approach to preserving child welfare. Parents don’t hand over their responsibility to a machine or to a corporation that demands blind trust. Trust isn’t something to be given blindly or permanently; economic interests always factor into a company’s actions, and they shouldn’t be underestimated.

Parents getting a feature they can use to protect their children is reasonable. So is the fact that the feature isn’t activated by default. It forces parents to ask themselves whether communication safety in Messages is a tool they want to use in raising their own children in their own family.

It gives power and freedom of choice to the family. Raising your own children must not be left in the hands of economically driven international conglomerates. No matter how nice they may presently seem.

The «why»

Apple has tipped its hand more than usual in this white paper. Still, the paper doesn’t answer the question of why Apple is introducing NeuralHash in the first place. The answers in the PDF are vague and don’t even begin to address the reasons for NeuralHash’s existence.

Or do they?

On page 2 of the white paper is the question about the differences between NeuralHash, or CSAM detection, and communication safety in Messages. Apple’s answer includes the statement that possession of child pornography is illegal in most countries. If you read between the lines, the following questions arise:

Is Apple liable to prosecution if child pornography is stored on iCloud? Is this all just a matter of Apple defending itself against being held legally accountable?

Journalist. Author. Hacker. A storyteller searching for boundaries, secrets and taboos – putting the world to paper. Not because I can but because I can’t not.

