Researchers produce collision in Apple’s child-abuse hashing system

Researchers have produced a collision in iOS’s built-in hash function, raising new concerns about the integrity of Apple’s CSAM-scanning system. The flaw affects the hashing system, called NeuralHash, which allows Apple to check for exact matches of known child-abuse imagery without possessing any of the images or gleaning any information about non-matching pictures.

On Tuesday, a GitHub user called Asuhariet Ygvar posted code for a reconstructed Python version of NeuralHash, which they claimed to have reverse-engineered from previous versions of iOS. The GitHub post also includes instructions on how to extract the NeuralHash model files from a current macOS or iOS build.

Once the code was public, more significant attacks were quickly discovered. A user called Cory Cornelius produced a collision in the algorithm: two images that generate the same hash. If the findings hold up, it will be a significant failure in the cryptography underlying Apple’s new system.
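To make the failure concrete: a collision means two distinct inputs that produce the same digest. The sketch below is not NeuralHash itself (that is a neural perceptual hash over images); it is a deliberately weak toy checksum of my own, used only to illustrate what Cornelius's two images demonstrate against the real algorithm.

```python
def weak_hash(data: bytes) -> int:
    """Toy hash: sum of all bytes modulo 256. Trivially collidable --
    any rearrangement of the same bytes yields the same digest."""
    return sum(data) % 256

a = b"stop"
b = b"pots"  # different input, same bytes in a different order

# Two distinct inputs, one digest: a collision.
print(a != b, weak_hash(a) == weak_hash(b))  # prints: True True
```

A collision in a toy checksum like this is expected; the problem for Apple is that the same property was demonstrated, within days of the code becoming public, against a hash function whose entire purpose is to make such matches meaningful.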

American tech media and bloggers have been shoving the valid concerns aside ever since Apple announced this new backdoor into iOS, and barely a week later we're already seeing major tentpoles come crashing down. I try not to swear on OSNews, but there's no other way to describe this than as a giant clusterfuck of epic proportions.
