A cluster of reports and discussion is raising fresh scrutiny of Microsoft’s PhotoDNA-based CSAM detection after a Windows 11 user said a profile photo of his own face was repeatedly flagged, triggering rapid account closures, identity and age checks, and even police contact. Commenters debate whether the incident reflects perceptual-hash collisions, cross-account linkage to older images, or overly broad scanning across Microsoft services such as OneDrive and Windows sign-in. The controversy spotlights the tension between child-safety enforcement and user privacy, especially when automated perceptual-hash systems, designed to detect previously known illegal content at scale, produce hard-to-appeal false positives with serious consequences.
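To make the collision debate concrete, here is a minimal Python sketch of how perceptual-hash matching typically works (a generic illustration, not Microsoft’s proprietary PhotoDNA algorithm; the hash values and threshold below are invented for the example). Matchers compare the Hamming distance between a candidate image’s hash and known-bad hashes against a tolerance, and any unrelated image whose hash happens to fall within that tolerance becomes a false positive:

```python
def hamming_distance(h1: int, h2: int) -> int:
    """Count the bits on which two 64-bit perceptual hashes differ."""
    return bin(h1 ^ h2).count("1")

def is_flagged(candidate: int, known_bad: int, threshold: int = 10) -> bool:
    """Flag a candidate image whose hash is 'close enough' to a known one.

    A loose threshold tolerates resizing and recompression of the same
    image, but it also enlarges the set of unrelated images whose hashes
    land within range -- i.e., collisions that become false positives.
    """
    return hamming_distance(candidate, known_bad) <= threshold

# Invented values: a benign photo whose hash lands near a blocklisted
# hash is flagged even though the images are unrelated.
blocklisted = 0xF0F0F0F00F0F0F0F
benign      = 0xF0F0F0F00F0F0F1F  # differs in only one bit
print(is_flagged(benign, blocklisted))  # True -> a false positive
```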
Users on Hacker News are discussing a report that Microsoft’s PhotoDNA scanning flagged a user’s profile photo (or an image linked to an old account) as matching child sexual abuse material (CSAM), triggering an escalation that reportedly involved Microsoft contacting local police to verify his identity and age. Commenters debate whether the detection was a perceptual-hash collision, an account-linkage issue, or, worse, improperly broad scanning across Windows 11 and OneDrive. The thread raises privacy and false-positive concerns about automated content-detection systems, potential overreach when platforms escalate to law enforcement, and the risks of relying on perceptual hashing for sensitive enforcement actions. The case matters for platforms, users, and regulators overseeing content moderation and automated scanning.
A Windows 11 user reports that Microsoft’s PhotoDNA scanning repeatedly flagged his profile photo as abusive content, causing at least a dozen Microsoft accounts to be closed immediately. He claims some of the accounts were created while Microsoft Support was on the line; when he uploaded a photo of his own face, the accounts were swiftly suspended with automated warnings. The user says Microsoft required age and identity verification and even used his address data to contact local police. He also alleges that PhotoDNA scanning runs across Microsoft services and, according to support, can reach Windows 11 devices. The issue reportedly persisted despite confirmations from support and the removal of images from linked vendor accounts. The report raises concerns about false positives, account recovery, privacy, and how PhotoDNA is applied across the Microsoft ecosystem.
A Windows 11 user reports that Microsoft’s PhotoDNA technology repeatedly flags his personal profile photo as abusive material, causing at least a dozen Microsoft accounts to be closed immediately. He says Microsoft Support confirmed the rapid closures, which sometimes came seconds after he added his face photo, and that local police were once involved to confirm his identity. The user alleges that PhotoDNA scans tied to his Microsoft account and Windows 11 sign-in are applied broadly across Microsoft’s ecosystem and can surface matches against old, closed accounts that trigger enforcement. He removed the photo from a linked vendor account to resolve one incident but says appeals are ineffective and the problem persists. The case raises questions about face-image hashing, false positives, cross-account linking, and user recourse.
The essay outlines how organizations like NCMEC handle a massive surge of child sexual abuse material (CSAM): 21.3 million reports and 61.8 million files in 2025, including 1.5 million reports tied to generative AI. The piece argues that scalable detection hinges on perceptual hashing, which produces irreversible fingerprints of images and lets systems identify known CSAM without exposing or reconstructing the original photos, preserving user privacy. It frames CSAM detection as one part of a broader child-safety ecosystem (grooming, sextortion, trafficking) and emphasizes the engineering trade-offs between privacy and safety. The distinction between detecting previously seen content and novel material is presented as central to how tools must evolve. Key players include NCMEC and platforms deploying perceptual-hash pipelines.
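As a rough illustration of what an “irreversible fingerprint” means, here is a minimal average-hash sketch in Python using Pillow (an assumed stand-in for illustration; PhotoDNA’s actual algorithm is different and proprietary). The image is collapsed to a tiny grayscale thumbnail and encoded as one bit per pixel, so the result can be compared against a database of known hashes but cannot be used to reconstruct the photo:

```python
from PIL import Image  # assumes Pillow is installed

def average_hash(path: str, hash_size: int = 8) -> int:
    """Compute a simple 64-bit average-hash fingerprint of an image.

    Each bit records whether a pixel of an 8x8 grayscale thumbnail is
    brighter than the thumbnail's mean. The original image cannot be
    recovered from these 64 bits, which is what makes sharing such
    hashes more privacy-preserving than sharing the images themselves.
    """
    # Shrink and desaturate: discards detail but keeps coarse structure,
    # so near-duplicates (resized, recompressed) hash almost identically.
    img = Image.open(path).convert("L").resize(
        (hash_size, hash_size), Image.LANCZOS
    )
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits
```

A pipeline built this way can only recognize content whose hash is already in the database, which is exactly the essay’s distinction between previously seen material and novel material.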