A software engineer from San Francisco reports that his life was “ruined” by Google after photos of his sick son’s groin, taken to send to a doctor, were flagged by AI as potential child sexual abuse material (CSAM).

In February 2021, the California father took photos of his son’s genitals to send to the doctor who was tracking the progression of swelling in the child’s genital region. Ahead of the virtual emergency consultation, the father – who wishes to remain anonymous – was instructed to send photos of the issue.

At the doctor’s request, the father uploaded the images to the health care provider’s messaging system. The images were later flagged by AI as potential CSAM, triggering a full investigation.

Two days after the pictures were uploaded, the father was locked out of his Google account due to “harmful content” that was a “severe violation of Google’s policies and might be illegal.” Google closed the father’s accounts and filed a report with the National Center for Missing and Exploited Children.

He reported that he tried to appeal the decision to lock his account, but Google denied his request. The father lost access to his mobile provider, Google Fi, and to all of his data: his emails, contacts, photos, and even his phone number.

Months after the initial account lock, the father was informed that the San Francisco Police Department opened a case against him.

Finally, in December 2021, he received an envelope from the police containing documents informing him that he had been investigated, along with copies of the search warrants served on Google and his internet service provider. The case had been closed; investigators ruled that the incident “did not meet the elements of a crime and that no crime had occurred.”

After receiving word that his case had been closed, the father requested that he regain access to his accounts, but Google refused, informing him that his account was to be permanently deleted. He considered suing the company but decided that it would be far too expensive to do so.

This case highlights the complications that arise when AI is used to identify abusive digital material. Google relies on hash matching – comparing the unique digital fingerprints (“hashes”) of uploaded images against databases of known CSAM – together with machine-learning classifiers trained to spot previously unseen material. Flagged content is then passed to human moderators, who decide where to report the potentially harmful material.
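To illustrate the hash-matching half of that pipeline, here is a minimal sketch in Python of how an uploaded file might be compared against a list of known fingerprints. It is a simplification: production systems use perceptual hashes (such as PhotoDNA) that tolerate resizing and re-encoding, whereas the SHA-256 digest used here only matches byte-identical files, and the hash list and file paths are hypothetical.

```python
import hashlib
from pathlib import Path

# Hypothetical set of fingerprints of known abusive images, as might be
# distributed to providers by a clearinghouse. Real systems use perceptual
# hashes that survive resizing and re-encoding; a plain SHA-256 digest
# only matches byte-identical files.
KNOWN_HASHES: set[str] = {
    # "3a7bd3e2360a3d29eea436fcfb7e44c7...",
}

def fingerprint(image_path: Path) -> str:
    """Return the SHA-256 hex digest of the file's raw bytes."""
    return hashlib.sha256(image_path.read_bytes()).hexdigest()

def is_known_match(image_path: Path) -> bool:
    """True if the image's fingerprint appears in the known-hash list."""
    return fingerprint(image_path) in KNOWN_HASHES

if __name__ == "__main__":
    for path in Path("uploads").glob("*.jpg"):
        if is_known_match(path):
            # In a production pipeline a match would be queued for
            # human review, not acted on automatically.
            print(f"flagged for review: {path}")
```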

Christa Muldoon, a Google spokesperson, told The Verge:

“Child sexual abuse material is abhorrent and we’re committed to preventing the spread of it on our platforms. We follow US law in defining what constitutes CSAM and use a combination of hash matching technology and artificial intelligence to identify it and remove it from our platforms. Additionally, our team of child safety experts reviews flagged content for accuracy and consults with pediatricians to help ensure we’re able to identify instances where users may be seeking medical advice.”

Jon Callas, a director of technology projects at the Electronic Frontier Foundation (EFF), a nonprofit digital rights group, called Google’s practice “intrusive,” saying, “This is precisely the nightmare that we are all concerned about. They’re going to scan my family album, and then I’m going to get into trouble.”

Many people have raised concerns about the use of AI to identify CSAM. After Apple announced its own Child Safety plan, which would scan images on Apple devices before they are uploaded to iCloud, users expressed unease about a potential invasion of privacy.

The EFF criticized Apple’s plan, saying it could “open a backdoor to your private life” and that it represented “a decrease in privacy for all iCloud Photos users, not an improvement.”

Apple ended up putting its image-scanning plan on hold, but it now offers an option for parents to have Apple’s technology flag nudity in their child’s Messages app.
