# How the EU’s Age‑Verification App Was Hacked in Two Minutes — and What That Teaches Us
The app was “hacked in two minutes” because researchers didn’t need to break the cryptography or find a remote exploit: they simply edited (or deleted) two sensitive values stored in a local Android `shared_prefs` XML file (commonly cited as `eudi-wallet.xml`). Those values included the encrypted PIN ciphertext and its initialization vector (IV), kept in plaintext local storage. Because the app lacked cryptographic binding between the PIN/auth state and the enrolled credentials (the “identity vault”), tampering with those stored values let an attacker reset the PIN and unlock the wallet’s credentials with only local file access.
## The app architecture, in plain terms
The European Commission’s open‑source app was presented as a privacy‑preserving way for users to prove they are over an age threshold (for example, 18) without sharing additional identity data. Conceptually, it aims to provide a minimal disclosure result—closer to a “yes/no” age assertion than an identity handover.
Under the hood, the model is familiar to anyone who’s used a mobile wallet:
- Local credential store (the “identity vault”): the app holds enrolled credentials on the device.
- Authentication layer: the user authenticates—typically via PIN (and possibly biometrics)—to access or present an assertion.
- Persistent app state: configuration and security settings are stored locally, including in Android’s `shared_prefs` mechanism, which serializes values into XML files.
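Reporting described the sensitive values sitting in an ordinary preferences XML. A hypothetical `shared_prefs` file matching that description might look like this (key names and values are illustrative assumptions, not taken from the actual app):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Hypothetical eudi-wallet.xml; entry names are assumptions for illustration -->
<map>
    <string name="encrypted_pin">k3A9…base64-ciphertext…==</string>
    <string name="pin_iv">dGhpcy1pcy1hbi1pdg==</string>
</map>
```

Anything the app can read and write here, an attacker with local file access can read and write too.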
This architecture is not inherently flawed. But it has a hard requirement: if you’re going to keep credentials and authentication state on-device, you must treat the device as a contested environment—lost phones, device backups, forensic tools, and physical access are all normal in the real world.
## What went wrong: concrete security failures
Researchers and commentators who analyzed the early release converged on a set of failures that, together, made the bypass trivial.
### 1) Insecure local storage of sensitive material
The app stored highly sensitive values—reported as including the PIN ciphertext and IV—inside a `shared_prefs` XML file in plaintext. Even if the PIN itself is “encrypted,” the security of that design depends on where the decryption key lives and whether the stored data can be tampered with undetected.
A key critique repeated in coverage: writing critical cryptographic state into ordinary preferences storage makes it easy to copy, edit, delete, and replay.
### 2) Missing cryptographic binding between PIN and vault
More damaging than the storage choice was the apparent absence of cryptographic binding between the authentication state and the underlying credentials.
In practice, binding means: if you change anything about the authentication material (PIN-related values, counters, salts, keys), the wallet should be able to detect that tampering because the vault’s ability to decrypt or verify itself is mathematically tied to the legitimate auth state—often via MACs/signatures or authenticated encryption.
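A minimal sketch of what such binding looks like, using plain-JVM crypto rather than the app’s actual code (a real wallet would use a random per-enrollment salt and hardware-backed keys; the function names here are illustrative): the vault key is derived from the PIN, and AES-GCM’s authentication tag ties the ciphertext to that key and IV, so any tampered or “reset” auth state fails decryption instead of silently succeeding.

```kotlin
import javax.crypto.Cipher
import javax.crypto.SecretKeyFactory
import javax.crypto.spec.GCMParameterSpec
import javax.crypto.spec.PBEKeySpec
import javax.crypto.spec.SecretKeySpec

// Derive the vault key from the PIN via PBKDF2 (salt would be random per enrollment).
fun keyFromPin(pin: CharArray, salt: ByteArray): SecretKeySpec {
    val spec = PBEKeySpec(pin, salt, 100_000, 256)
    val keyBytes = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
        .generateSecret(spec).encoded
    return SecretKeySpec(keyBytes, "AES")
}

// Seal the vault with AES-GCM: the auth tag binds ciphertext, key (hence PIN), and IV.
fun sealVault(pin: CharArray, salt: ByteArray, iv: ByteArray, plaintext: ByteArray): ByteArray {
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.ENCRYPT_MODE, keyFromPin(pin, salt), GCMParameterSpec(128, iv))
    return cipher.doFinal(plaintext)
}

// Open the vault; throws AEADBadTagException if PIN, IV, or ciphertext was tampered with.
fun openVault(pin: CharArray, salt: ByteArray, iv: ByteArray, ciphertext: ByteArray): ByteArray {
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.DECRYPT_MODE, keyFromPin(pin, salt), GCMParameterSpec(128, iv))
    return cipher.doFinal(ciphertext)
}
```

With this design, swapping in a new PIN (or deleting the stored auth state) does not yield a working vault key: decryption throws rather than opening the credentials.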
According to the reporting, the PIN was not bound to the identity vault in a way that would prevent a simple preference edit from becoming a “reset PIN” path.
### 3) Weak platform integration
Analyses cited insufficient reliance on Android Keystore and hardware‑backed protections, along with missing tamper detection and weak/absent logging or alerting around credential changes. You can build a secure wallet without every bell and whistle—but for a government‑adjacent identity tool, these omissions compound risk, because the threat model must include physical compromise.
## Attack mechanics: why it was trivial
The key point about the “two minute” hack is that it isn’t a clever exploit chain—it’s an edit‑a‑file attack.
Researchers described a flow roughly like this:
- Obtain local access to the app’s stored files (through physical access to the device or access to local backups).
- Locate the `shared_prefs` XML (often cited as `eudi-wallet.xml`).
- Delete or modify the two entries storing PIN‑related values (reported as the encrypted PIN ciphertext and IV).
- Relaunch the app and set a new PIN.
- Access the already‑enrolled credentials in the identity vault.
No remote code execution. No privilege escalation (as reported in initial disclosures). No cryptanalysis. Just a design where changing local state is effectively treated as legitimate “fresh setup,” without forcing a re‑verification step that’s cryptographically anchored to the original enrollment.
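The failure mode above can be sketched as the following flawed state check (this is assumed logic reconstructed from the disclosures, not the actual app code; the key names and structure are illustrative):

```kotlin
// Stand-in for the app's persisted state: prefs mirrors the shared_prefs XML,
// vaultEnrolled means credentials are already on the device.
class WalletState(
    val prefs: MutableMap<String, String>,
    val vaultEnrolled: Boolean = true,
)

// Returns true if the vault unlocks for the entered PIN.
fun unlockOrSetup(state: WalletState, enteredPin: String): Boolean {
    val ciphertext = state.prefs["pin_ciphertext"]
    val iv = state.prefs["pin_iv"]
    if (ciphertext == null || iv == null) {
        // Flaw: deleted entries are indistinguishable from a true first launch,
        // so the app accepts a brand-new PIN without any re-verification step --
        // and the already-enrolled vault opens under the attacker's PIN.
        state.prefs["pin_ciphertext"] = "enc($enteredPin)"  // stand-in for real crypto
        state.prefs["pin_iv"] = "fresh-iv"
        return state.vaultEnrolled
    }
    return ciphertext == "enc($enteredPin)"
}
```

Deleting the two entries turns any attacker-chosen PIN into a valid one, because nothing ties the enrolled credentials to the original authentication state.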
Researchers said they reproduced the bypass within about 48 hours of the app’s release, and public commentary amplified the claim that it could be done in under two minutes.
## Broader critiques: more than one bug
The sharpest takeaways from the research coverage weren’t about a single mistake; they were about mindset and governance.
First, commentators argued that storing IV and ciphertext together in plaintext preferences is emblematic of treating cryptographic artifacts like ordinary app settings. Even when the cryptography is theoretically sound, implementation shortcuts can make it irrelevant if attackers can rewrite the inputs.
Second, the episode highlighted a missing or underweighted threat model for physical access. For consumer apps, vendors sometimes downplay physical threats. For identity and age‑verification tooling—where the device may be lost, stolen, confiscated, or imaged—physical compromise is not a corner case; it’s baseline.
Third, reporting noted a governance mismatch: the GitHub repository reportedly carried warnings that the code was not production‑ready, while the Commission publicly described the tool as “technically ready.” That gap matters because it shapes how third parties, policymakers, and the public interpret risk—and how quickly the tool might be pushed toward real-world reliance.
## Why It Matters Now
This incident landed at an especially sensitive moment: the EU’s app was unveiled in mid‑April 2026, promoted as a privacy‑preserving way to support online age restrictions. Within days, the public narrative shifted from “privacy tech” to “government app bypassed almost immediately,” creating a credibility problem not just for this codebase but for the broader idea of device‑resident, privacy‑preserving proofs.
That matters because debates over age checks and digital identity are often framed as a tradeoff between safety and civil liberties. A high-profile failure undermines both sides at once: it can weaken child‑safety enforcement claims and heighten fears that new identity plumbing will be rolled out without adequate safeguards. In other words, rushed deployment can erode trust in the very privacy guarantees these systems are supposed to provide.
For a broader look at how EU policy decisions translate into technical constraints on consumer devices, see What the EU’s Replaceable Battery Rule Will Mean for Smartphones and Consumers.
## Practical lessons for builders and policymakers
The fixes implied by the reporting are not exotic; they’re foundational.
- Use platform secure storage (for example, Android Keystore) and prefer hardware‑backed keys where available. Don’t keep sensitive auth state as tamperable app preferences.
- Add cryptographic binding so that the vault’s integrity is tied to the legitimate auth state—using MACs, signatures, or authenticated encryption—making edits detectable and non‑bypassable.
- Design explicitly for physical-access threats: stolen devices and accessible backups should be assumed. Add tamper detection, rate limiting, secure backup considerations, and stronger on-device protections.
- Require independent security audits before public “ready” declarations, and align public messaging with repository warnings and security posture.
A recurring theme in modern security incidents is that the weakest link is often not the algorithm but the “glue” code and storage decisions. This is the same kind of practical failure mode that shows up in other productivity and platform tools—see How Notion Leaks Editors’ Email Addresses — and What You Should Do.
## What to Watch
- Whether project updates adopt Android Keystore, authenticated encryption, and explicit tamper checks around PIN and vault state.
- Any independent audit reports or third‑party reviews commissioned after the disclosures.
- Policy fallout: whether regulators move toward requiring mandatory security reviews and clearer “production readiness” standards for government-backed age‑verification and digital identity tooling.
Sources: wired.com, politico.eu, independent.co.uk, undercodetesting.com, themeridiem.com, sofx.com
## About the Author
yrzhe
AI Product Thinker & Builder. Curating and analyzing tech news at TechScan AI. Follow @yrzhe_top on X for daily tech insights and commentary.