Last Friday, Apple announced that it was implementing measures to combat the distribution of child sexual abuse material, or CSAM, on its services. Apple, the company that famously defied the FBI by refusing to provide technical assistance in hacking its own iPhones after the terrorist attack in San Bernardino, California, surprised commentators in both the tech and human rights communities with this announcement, and it drew a predictable torrent of criticism from both ends of the policy spectrum.
The electronic distribution of child abuse images has been a perennial and unsolved issue for more than 20 years. The growing popularity of end-to-end encrypted apps such as Apple’s iMessage and Facebook’s WhatsApp has made it more difficult for both law enforcement and the platform providers themselves to access evidence of criminal activity or detect abuse.
In the fractious and divisive policy debates over what to do about it, opposing arguments are often reduced to caricature. Those who question the wisdom of walling encrypted messaging off from legitimate law enforcement inquiries find themselves accused of wanting to introduce mass surveillance and suppress freedom of expression in the name of protecting children. Those who raise concerns about allowing government access to personal communications find themselves accused of placing a higher value on the integrity of technologies such as encryption than on a child’s life.