Business Updates

Apple Will Investigate iCloud Photo Uploads For Images Of Child Abuse

Apple Inc. announced Thursday that it is implementing a system that checks photos on iPhones in the United States before they are uploaded to its iCloud storage service, to ensure uploads do not contain known images of child sexual abuse.

By contentwriteramisha


Apple said that detecting enough matching child abuse images in a user's uploads would trigger human review and, if confirmed, a report of the user to law enforcement, safeguards intended to protect against false positives. The system is said to be designed to keep the rate of false positives below one in one trillion.
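The threshold step described above can be sketched in a few lines. This is an illustrative simplification only: Apple's actual protocol reportedly uses cryptographic threshold secret sharing rather than a plain counter, and the threshold value below is hypothetical, not Apple's real parameter.

```python
# Hypothetical threshold; Apple has not published the real value here.
MATCH_THRESHOLD = 30

def should_escalate_for_human_review(match_count: int) -> bool:
    """An account is escalated only once enough uploads match the
    known-image database, so an isolated false positive on a single
    photo never triggers a report on its own."""
    return match_count >= MATCH_THRESHOLD

# A single stray match stays well below the threshold.
assert not should_escalate_for_human_review(1)
assert should_escalate_for_human_review(30)
```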

Apple's new system aims to meet law enforcement demands to help stop child sexual abuse while adhering to the privacy and security principles that are at the heart of the company's brand. However, some privacy advocates have said the system could open the door to scanning iPhones for political speech or other content.

Most other major technology providers, including Google Inc., Facebook Inc., and Microsoft Corp., already compare images against a database of known child sexual abuse images.

John Clark, executive director of the National Center for Missing & Exploited Children, said in a statement: "With so many people using Apple products, these new safety measures have lifesaving potential for children who are being lured online and whose horrifying images are circulated in child sexual abuse material. The reality is that privacy and child protection can coexist."

This is how the Apple system works. Law enforcement maintains a database of known child sexual abuse images and translates those images into "hashes", digital codes that uniquely identify an image but cannot be used to reconstruct it.
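The "hash, then look up" idea can be illustrated with a short sketch. This is an assumption-laden simplification: the real database uses perceptual NeuralHash codes rather than SHA-256, and the matching happens through a cryptographic protocol, but the core property is the same: only hashes are stored, and a hash cannot be reversed into the image.

```python
import hashlib

# Stand-in database: hex digests of known images (illustrative bytes only).
known_abuse_hashes = {
    hashlib.sha256(b"example-known-image-bytes").hexdigest(),
}

def is_known_image(image_bytes: bytes) -> bool:
    """Hash the upload and check set membership. The digest uniquely
    identifies the image, but the original cannot be recovered from it."""
    return hashlib.sha256(image_bytes).hexdigest() in known_abuse_hashes

assert is_known_image(b"example-known-image-bytes")
assert not is_known_image(b"an-ordinary-holiday-photo")
```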

Apple processes this database with a technology called "NeuralHash", which is also designed to catch edited images that are similar to the originals.
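Unlike a cryptographic hash, a perceptual hash changes only slightly when the image is lightly edited. NeuralHash itself is proprietary; the toy "average hash" below is a commonly used stand-in that shows the principle: similar images yield hash codes within a small Hamming distance of each other.

```python
def average_hash(pixels):
    """pixels: 2D list of grayscale values (e.g. an 8x8 downscaled image).
    Each bit is 1 where the pixel is brighter than the image mean, so the
    code survives small edits such as re-compression or slight retouching."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(a, b):
    """Number of differing bits; small distance means visually similar."""
    return sum(x != y for x, y in zip(a, b))

original = [[10, 200], [30, 220]]
slightly_edited = [[12, 198], [29, 221]]  # same picture, minor pixel changes
assert hamming_distance(average_hash(original),
                        average_hash(slightly_edited)) <= 1
```

A real perceptual hash works on larger downscaled images and more robust features, but the matching logic (compare codes, tolerate a few differing bits) is the same.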


When a user uploads a photo to Apple's iCloud storage service, the iPhone creates a hash of the photo and compares it against the database.

Photos stored only on the phone are not checked, Apple said, and human review before an account is reported to law enforcement is designed to ensure matches are genuine before the account is suspended.

Apple said users who believe their account has been unfairly banned can appeal to have it reinstated. The Financial Times had earlier reported on some aspects of the programme.

One of the features that sets Apple's approach apart is that it checks photos on the phone before they are uploaded, rather than after they arrive on the company's servers.

On Twitter, some privacy and security experts expressed concern that the system could potentially be expanded to scan phones for prohibited content or political statements.

Apple has "sent a very clear signal" that, in its view, it is safe to build systems that scan users' phones for prohibited content, warned Matthew Green, a security researcher at Johns Hopkins University. "It will break the barrier: governments will demand the same from everyone."

Other privacy researchers, India McKinney and Erica Portnoy of the Electronic Frontier Foundation, wrote in a blog post that it may be impossible for outside researchers to verify whether Apple keeps its promise to check devices against only a small set of content. "This move is a shift for users who have trusted the company's leadership on privacy and security," the pair wrote.
