Apple has revealed plans to roll out its system for detecting child abuse imagery on a country-by-country basis, depending on local laws. The system, Apple said, screens photos for such images on the iPhone before they are uploaded to its iCloud storage, beginning in the United States.
Child safety groups praised Apple as it joined Facebook, Microsoft, and Google in taking such measures.
However, because Apple’s photo check runs on the iPhone itself, critics worry that the company is probing into users’ devices in ways that governments could exploit. Most other technology companies check photos only after they are uploaded to their servers.
The company said safeguards built into its system, such as ‘safety vouchers’ passed from the iPhone to Apple’s servers that contain no useful data on their own, will protect Apple from government pressure to identify material other than child abuse images.
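Apple has publicly described threshold secret sharing as one ingredient of the safety-voucher design: no single voucher reveals anything, and the server can only decrypt matches once an account crosses a threshold. The toy sketch below (not Apple’s code; the field size, function names, and threshold are invented for illustration) shows that core idea using Shamir secret sharing: fewer than `threshold` shares reveal nothing usable, while `threshold` or more reconstruct the key.

```python
# Illustrative only: a minimal Shamir threshold secret sharing demo,
# sketching the property Apple attributes to its safety vouchers.
import random

PRIME = 2**127 - 1  # a large prime field for the demo


def split_secret(secret, threshold, n_shares):
    """Split `secret` into points on a random polynomial of degree
    threshold-1; any `threshold` points reconstruct the secret."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]

    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME

    return [(x, f(x)) for x in range(1, n_shares + 1)]


def reconstruct(shares):
    """Lagrange interpolation at x=0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret


key = 123456789  # stand-in for a per-account decryption key
shares = split_secret(key, threshold=3, n_shares=5)
assert reconstruct(shares[:3]) == key  # threshold met: key recovered
assert reconstruct(shares[:2]) != key  # below threshold: key stays hidden
```

In this simplified picture, each matching voucher carries one share; until enough matches accumulate, the server holds only points on an unknown polynomial and learns nothing about the key.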
Apple added that a human review process acts as a further backstop against government abuse: the company will not pass reports from its photo-checking system to law enforcement if the review finds no child abuse imagery.
Regulators are increasingly demanding that tech companies do more to take down illegal content. For the past few years, law enforcement and politicians have invoked the scourge of child abuse material to argue against strong encryption, much as they previously cited the need to curb terrorism.