Apple to scan all iPhones and iCloud accounts for child abuse images

So, let's say someone uses his or her phone to take naughty pix of his or her girl/boyfriend(s), and said object of affection happens to look young but is, without doubt, of legal age, above 18 years...ahem...crazy, I know. But people have sex and like to use their phones to take pictures and video of each other in the process. So this so-called "neural" system is going to trip a lot of false alarms and alert human Apple employees, who will then be authorized to scan through/jerk off to your private home-made naughty-time images and clips? You know, all for the sake of the children, right? Then what happens when said human "analyst" sees the person in the images and mistakenly believes he or she is under the legal age of consent? What happens in cases where the legal age of consent varies?

Methinks this is a very bad idea waiting to turn into a **** storm of privacy invasion, false accusations, and potentially a knock on one's door from FBI agents who have carefully reviewed your and/or your partner's bodies in the name of crime prevention.

If this is a real thing, people are going to have to go back to using standalone digital cameras to maintain any semblance of privacy.
 
That's not at all how this works. If a child has an account as part of a “family” and the account is identified as under 13, then if they send or receive an image that an on-device algorithm determines to be of a naked person, regardless of what the person looks like, they'll get an on-device alert and their family “parent” will be alerted. The image is not sent to Apple for review.

If an account identified as over 13 sends images of naked people, uploads them to iCloud, etc., nothing happens.

There is a group that works with law enforcement and maintains a library of known child abuse images from previous cases. iOS has the hash fingerprints of these images; it will hash each image a user has on their device and compare it against those. An exact match means the image is very likely a known abuse image. If the user uploads a certain number of these to iCloud (the number is not disclosed), Apple will decrypt the iCloud copies of those images, check that they are abuse images, and then refer the matter to the police.
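
If it helps, here's a rough sketch in Python of the matching logic as I understand it (illustrative only: the real system uses a perceptual "NeuralHash" and cryptographic safety vouchers rather than plain file hashes, and the fingerprint set and threshold below are made-up placeholders):

```python
import hashlib
from pathlib import Path

# Placeholder set: in the real system these are perceptual-hash fingerprints
# supplied by NCMEC, not SHA-256 file hashes.
KNOWN_FINGERPRINTS = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

REPORT_THRESHOLD = 30  # Apple has not disclosed the real number; placeholder.

def fingerprint(image_path: Path) -> str:
    """Toy stand-in for a perceptual hash: here just a hash of the file bytes."""
    return hashlib.sha256(image_path.read_bytes()).hexdigest()

def count_matches(upload_queue: list[Path]) -> int:
    """Count how many images queued for iCloud match a known fingerprint."""
    return sum(1 for p in upload_queue if fingerprint(p) in KNOWN_FINGERPRINTS)

def should_flag_for_human_review(upload_queue: list[Path]) -> bool:
    # Only once the match count crosses the threshold does anything get
    # surfaced for review; a single match on its own does nothing.
    return count_matches(upload_queue) >= REPORT_THRESHOLD
```

The point is that this path never tries to judge whether a new photo looks like abuse; it only checks whether it matches fingerprints of images already known to investigators.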
 
This is actually probably good for privacy overall. Politicians today can play the “child protection” card to argue for legislation requiring backdoors to device encryption, and that is going to get broad public support.

By implementing a system like this, specifically targeted at child sexual abuse, Apple is pre-empting that argument; the next best argument politicians have is “catching terrorists,” which the public is more skeptical of.
 
Let me get this straight: what you're saying is that the "tiny-but-necessary" invasion of privacy of having AI look at private user pictures isn't enough; we need a wide-scale, far-reaching privacy breach into every single detail Apple has claimed not to keep anyway to make this work?

There is a 0% chance this overreach won't be abused for other purposes. In fact, there's very little chance it isn't already being abused for other things by the very same company that claimed it would never do such a thing, as a marketing talking point to deride competitors that are just as overreaching but are at least somewhat more honest about it.
 
I think you quoted the wrong post, but what I'm saying is that this is Apple taking a compromise position to discourage politicians from passing laws that weaken device encryption (e.g., requiring device makers to send encryption keys to the cloud so they can be subpoenaed).
 
I guess that's true, but the effect is kind of the same: encryption is rendered useless either way.

In fact, if the government were manually requesting that encryption be cracked, at least we'd have some checks and balances, with a judge signing off on it. Apple, by their own admission, isn't even leaving the interpretation up to a human being; it's all AI.

We already know from previous tests that AI basically amplifies serious human biases like racism and bigotry. Now imagine there's also zero civilian oversight, and it's entirely up to a private company to train its AI on what it considers worth invading privacy for.

That is, assuming they don't just do a far-reaching, blanket scan and use a possibly less-than-accurate report to decide, "Yes, for these possible positive matches we're tying the metadata back to a person and passing it to the authorities."

I should know, because I actually work in data warehousing and machine learning, and I know how dangerous it is to leave life-and-death decisions about people up to data collection, categorization, and interpretation, all of which have human elements and human failure modes. We sometimes have a hard time even convincing managers and executives to trust data enough to make business decisions on something as inconsequential as basic manufacturing, so I can't begin to describe how unethical this is when applied to potentially ruining people's lives with false accusations of pedophilia.
 
OK, good, so you understand tech; read about their approach: https://www.apple.com/child-safety/

Initiative 1: Alerting children and parents when nude images are shared with or by young children on Family accounts. They are using on-device ML to scan the iMessage images of accounts identified as young children (under 13, I believe) and raising alerts to parents. Those iMessage photos are not sent to Apple, not reviewed, and not shared with law enforcement.
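
To be clear about how local that first part is, here's roughly the flow as I read it (a toy Python sketch with made-up names, not Apple's actual API; the classifier result is just a stand-in):

```python
from dataclasses import dataclass

@dataclass
class Account:
    age: int
    is_in_family: bool

def show_local_warning(message: str) -> None:
    print(f"[on-device alert] {message}")      # stays on the device

def notify_family_parent(message: str) -> None:
    print(f"[parent notification] {message}")  # goes to the family organizer, not Apple

def handle_incoming_image(account: Account, image_is_nude: bool) -> None:
    """Hypothetical sketch of Initiative 1: everything happens on the device.

    image_is_nude stands in for the output of the on-device ML classifier;
    no image or verdict is transmitted to Apple or to law enforcement here.
    """
    if not (account.is_in_family and account.age < 13):
        return  # adult or non-family account: nothing happens
    if image_is_nude:
        show_local_warning("This photo may be sensitive. Are you sure?")
        notify_family_parent("A sensitive image was sent to/from this account.")

# Example: a 10-year-old family account receives an image the classifier flags.
handle_incoming_image(Account(age=10, is_in_family=True), image_is_nude=True)
```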

Initiative 2: Stopping the sharing of known abuse material. On phones set to upload to iCloud, they are running on-device analysis to compare image fingerprints against a set of known child-sexual-abuse image fingerprints. Once a certain number of images with matching fingerprints have been uploaded to iCloud, the iCloud copies are flagged for review and referral. This is intended to involve law enforcement and stop the sharing of known abuse images through their cloud service. A lot of companies scan for and report abuse material on their systems; you can read more from the NCMEC: https://www.missingkids.org/theissues/csam

There are more details available on the crypto approaches.
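
For example, if I'm reading the technical material right, the "certain number of matches" part is enforced cryptographically with something like threshold secret sharing, so the server can't decrypt anything until it holds enough shares. A toy illustration of that general idea (my own sketch, not Apple's actual protocol):

```python
import random

# Toy Shamir-style secret sharing over a prime field, to illustrate the
# "threshold" idea: the secret (e.g. a decryption key) is unrecoverable
# until at least `threshold` shares are collected.
PRIME = 2**127 - 1  # a large prime, fine for a demo

def split_secret(secret: int, n_shares: int, threshold: int) -> list[tuple[int, int]]:
    # Random polynomial of degree threshold-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n_shares + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    # Lagrange interpolation at x = 0; only correct with >= threshold shares.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = 123456789  # stand-in for a decryption key
shares = split_secret(key, n_shares=10, threshold=3)
assert reconstruct(shares[:3]) == key  # enough shares: key recovered
assert reconstruct(shares[:2]) != key  # below threshold: (almost surely) useless
```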
 
1) It's still an invasion of privacy, because there's no way to know without scanning every picture once an account is flagged as a "minor".

Other than that, I could add that only parents are alerted and photos are "not sent to Apple, not reviewed and not shared with law enforcement" at the time of release.

See, the issue with privacy intrusion is that once you open that door, for whatever reason, it's really hard to close: once this is in place, most government agencies will try to compel Apple to share the data with them directly. I know Apple has fought them in the past, but you might be misjudging how many more problems they'll have when basically every single country in the world is forcing them to cooperate. It's not just a matter of the US government and its battle to intrude on privacy; unlike terrorism, CP is so widely prosecuted that it would be impossible for Apple to refuse without basically losing iPhone sales everywhere people, you know, buy iPhones.

And why wouldn't authorities want a quick way to prosecute potential CP predators? As I said, this is akin to opening Pandora's box, and they won't be able to go back.

2) "Known abuse material" can only be positively identified by an actual human being and yes, that sadly might require other investigative methods because there is no way ML can be accurate enough to not flag false positives.

To give you a perhaps slightly flawed analogy: here, the used-car marketplace is laden with traps less astute buyers aren't aware of, like a car with out-of-state registration. That's usually done to make it harder for local buyers to run a police check, because the car was reported stolen at one point. It might not be stolen anymore, mind you, but once a car is reported stolen, even after the insurance company recovers it and tries to re-certify it as a legal car, the police, being, well, the police (a.k.a. bastards), never actually purge their databases. So you now own a legitimate car, and it still means a long conversation and a potential arrest at every single traffic stop or accident, because your car will show up as "stolen," and most cops won't just let you go because you casually say, "Oh yeah, the insurance company recovered it, this car is clean now."

So these police forces are the ones we should trust to be extremely diligent about clearing false positives sent by Apple? Let me tell you how that will go: a person's life gets ruined because, unbeknownst to them, they took a set of pictures that shared some visual similarities with a CP case even though no actual crime was committed.
 
1) Yes, of course it is an invasion of privacy, and I never said it wasn't, just that it may end up being a better compromise position for privacy than the alternative invasions. I personally think that eventually many countries will pass laws banning end-to-end encrypted messaging services, so that search warrants can be served for server copies of all communication. But that is my perspective from Australia, where we have fewer privacy protections than some other places.

2) They are looking for specific known images, not using image recognition to find CSAM based on similarities (remember, the CSAM itself is never even provided to Apple by NCMEC, just the fingerprints). I believe the idea is that many people who collect and share CSAM will end up with images from the internet in their collections that are in the NCMEC database. And yes, based on what I've read previously, I believe all the images they are checking for from the NCMEC database come from investigations and prosecutions, and those cases are investigated to try to identify and rescue the children involved.
 
Schumer: Intelligence Agencies ‘Have Six Ways From Sunday Of Getting Back At You’

Apple has now become another intelligence agency.
 
After refraining from comment for a few days and watching the disturbing chatter on offer, I am left wondering how people are defending this total rubbish thinking from Apple. It is a thinly veiled attempt at subverting personal privacy and the right to be free from unlawful search & seizure.

As this is noted to be Tim Cook's idea, one has to wonder how long it will be until the Board of Directors and shareholders get rid of the clueless tyrant.

Anyone with active brain cells would kick Apple to the curb or start a class-action against Apple for violating basic rights.
 