# What Is Anubis‑Style Client‑Side Proof‑of‑Work — and Does It Stop AI Scrapers?
Anubis‑style client‑side proof‑of‑work (PoW) is a lightweight web firewall technique that can slow and deter large‑scale AI scraping by making every request “cost” CPU time in the browser—but it does not stop scraping outright. The key idea is simple: if a site can force each visitor to do a small amount of computation before getting access, that work is barely noticeable for a human loading a few pages, yet becomes expensive when multiplied across millions of automated requests.
Anubis is one prominent implementation of this idea: an open‑source tool designed to protect websites and upstream resources from aggressive automated scraping—especially mass harvesting associated with AI data collection—while trying to keep friction low for legitimate users.
## The basic idea: making every request “pay” (a little)
Traditional bot defenses often revolve around identifying bad traffic (fingerprints, behavior analysis, reputation scores) and then blocking it. Client‑side PoW flips the model: it doesn’t try to “know” who you are first. Instead, it asks every client to prove it can spend a small amount of compute.
Anubis uses a Hashcash‑style challenge. In the Hashcash model, a client must find a value (a nonce) such that when hashed, the result matches a difficulty target—commonly expressed as “a certain number of leading zero bits.”
In Anubis’s case, the computation happens in the browser. If the user can solve the challenge, the firewall lets them through—either by issuing a short‑lived access token or allowing traffic onward to upstream resources.
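To make the search concrete, here is a minimal Go sketch of a Hashcash-style solve loop. Anubis's real client runs equivalent logic in browser JavaScript, and the challenge string, string encoding, and difficulty rule below are illustrative assumptions rather than Anubis's wire format:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// solve searches for a nonce such that the hex form of
// SHA-256(challenge + nonce) starts with `difficulty` zero digits.
// This mirrors the Hashcash idea; the exact encoding Anubis uses
// on the wire may differ.
func solve(challenge string, difficulty int) (uint64, string) {
	prefix := strings.Repeat("0", difficulty)
	for nonce := uint64(0); ; nonce++ {
		sum := sha256.Sum256([]byte(fmt.Sprintf("%s%d", challenge, nonce)))
		hash := hex.EncodeToString(sum[:])
		if strings.HasPrefix(hash, prefix) {
			return nonce, hash
		}
	}
}

func main() {
	// Hypothetical server-issued challenge; real challenges are random
	// per visitor so work can't be precomputed or shared.
	nonce, hash := solve("example-challenge", 4)
	fmt.Printf("solved: nonce=%d hash=%s\n", nonce, hash)
}
```

The asymmetry is the whole point: finding the nonce takes many hash attempts on average, but anyone holding the nonce can confirm it with one.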
## How Anubis works (mechanics in brief)
According to the project’s GitHub and documentation, Anubis is maintained by Techaro (TecharoHQ) and was created by Xe Iaso. It’s MIT‑licensed, implemented in Go (server/middleware) and JavaScript (client), with source published on GitHub. The project became public on January 19, 2025, with stable releases continuing into 2026 (including v1.25.0, dated February 18, 2026).
Here’s the core flow, as described in the project’s design documentation (a code sketch of the verification step follows the list):
- Request hits the site (often via a reverse proxy or middleware placement).
- The middleware responds with a challenge page rather than the requested content.
- The browser runs JavaScript to compute SHA‑256 hashes repeatedly, searching for a nonce that produces a hash matching the difficulty target.
- The client submits the solution; the server verifies it quickly.
- If valid, Anubis allows access (commonly via a short‑lived token or pass‑through behavior), and the client can reach upstream resources.
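Verification is the cheap half of the exchange: the client may grind through many hashes, but the server checks a submission with a single one. A minimal Go sketch, assuming query-parameter submission and a plain pass cookie (both hypothetical simplifications; a production gate would sign its challenges and tokens):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"net/http"
	"strings"
	"time"
)

const difficulty = 4 // leading zero hex digits; illustrative value

// verifyHandler checks a submitted (challenge, nonce) pair with one
// SHA-256 call, then grants a short-lived pass so the client is not
// re-challenged on every request.
func verifyHandler(w http.ResponseWriter, r *http.Request) {
	challenge := r.URL.Query().Get("challenge")
	nonce := r.URL.Query().Get("nonce")

	sum := sha256.Sum256([]byte(challenge + nonce))
	hash := hex.EncodeToString(sum[:])
	if !strings.HasPrefix(hash, strings.Repeat("0", difficulty)) {
		http.Error(w, "invalid proof of work", http.StatusForbidden)
		return
	}

	http.SetCookie(w, &http.Cookie{
		Name:    "pow-pass", // hypothetical cookie name
		Value:   hash,       // a real deployment would sign this token
		Expires: time.Now().Add(1 * time.Hour),
	})
	fmt.Fprintln(w, "ok")
}

func main() {
	http.HandleFunc("/verify", verifyHandler)
	http.ListenAndServe(":8080", nil)
}
```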
By default, Anubis uses a difficulty target equivalent to five leading zeroes in the SHA‑256 output (configurable, and admins can tune it dynamically). This “small puzzle” is designed to be trivial at human scale but punitive at scraper scale.
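To see how the difficulty knob moves cost, here’s a quick back-of-envelope in Go. The browser hash rate is an assumed figure (real rates vary widely by device and implementation), and difficulty is interpreted here as leading zero hex digits:

```go
package main

import "fmt"

func main() {
	// Assumed browser hash rate for illustration only.
	const hashesPerSecond = 1e6

	// Each extra leading zero hex digit multiplies the average
	// search space by 16.
	for difficulty := 3; difficulty <= 6; difficulty++ {
		expected := 1.0
		for i := 0; i < difficulty; i++ {
			expected *= 16
		}
		perClient := expected / hashesPerSecond
		fmt.Printf("difficulty %d: ~%.0f hashes, ~%.2fs per client, ~%.0f CPU-hours per 10M requests\n",
			difficulty, expected, perClient, perClient*10_000_000/3600)
	}
}
```

Under these assumptions, a human pays roughly a second once, while a scraper issuing ten million requests pays thousands of CPU-hours.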
## Why PoW helps against AI scrapers
PoW is attractive to smaller operators because it targets the economic reality of scraping:
- For a single browser, solving a modest SHA‑256 PoW is usually fast enough to feel like a brief speed bump.
- For a scraper at massive scale, that same speed bump becomes a line item. Multiply “a bit of CPU time” by millions of requests, and it turns into real compute cost, slower throughput, or both.
That makes PoW useful for two related goals Anubis emphasizes:
- Protecting availability: If aggressive bots are driving resource exhaustion and traffic spikes, forcing work onto clients can blunt the load before it hits upstream infrastructure.
- Raising the cost of harvesting: It doesn’t require the operator to correctly classify traffic as “AI scraper” versus “human.” The deterrent comes from forcing the economics to change.
This is also why some community operators see PoW as a pragmatic stopgap compared with sophisticated bot management platforms. It’s a relatively direct mechanism: do the work, then you can load the page.
## Limitations and evasion risks
Anubis‑style PoW meaningfully changes the cost curve—but it’s not a silver bullet.
Well‑resourced scrapers can still scrape. A determined actor can amortize or parallelize the work: throw more machines at it, distribute requests across a botnet, or otherwise pay the compute bill. PoW is fundamentally about raising cost, not establishing identity or intent.
JavaScript requirements can block legitimate clients. Anubis relies on modern JavaScript to compute SHA‑256 in the browser. Users running strict privacy tools or hardened setups that disable required features can fail challenges; known examples include JShelter, whose protections can break the challenge. This is one of the most practical trade‑offs: protection for operators can translate into friction for privacy‑conscious users or non‑browser clients.
PoW measures capability, not motivation. A client that can solve puzzles could be a human, a headless browser, or a scraper with enough compute. Anubis’s documentation and coverage point toward future plans to supplement PoW with stronger signals, including fingerprinting approaches such as headless detection and font rendering analysis—methods that may reduce false positives but also raise privacy concerns.
## Deployments and the real-world context
Anubis is positioned as a lightweight, affordable defense, aimed at small sites and community infrastructure that can’t absorb waves of aggressive automation. Reported adopters include code hosts and smaller web operators: Git forges and other community infrastructure where maintaining uptime within tight bandwidth budgets matters.
It’s distributed as middleware, with deployment guidance and recipes for environments like Docker and Nginx, matching the “drop-in” operational style many small administrators need.
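As a shape reference, the middleware placement looks roughly like this minimal Go reverse-proxy gate. The cookie name, challenge page, and ports are hypothetical stand-ins; consult the Anubis docs for real deployment recipes:

```go
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Hypothetical upstream application the PoW gate protects.
	upstream, _ := url.Parse("http://127.0.0.1:3000")
	proxy := httputil.NewSingleHostReverseProxy(upstream)

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Clients holding a pass cookie go straight to the upstream;
		// everyone else gets the challenge page. Validating the cookie
		// (signature, expiry) is elided here; a real gate must do it.
		if _, err := r.Cookie("pow-pass"); err == nil { // hypothetical name
			proxy.ServeHTTP(w, r)
			return
		}
		http.ServeFile(w, r, "challenge.html") // hypothetical challenge page
	})
	http.ListenAndServe(":8080", nil)
}
```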
## Why It Matters Now
Publishers and community sites increasingly describe a breakdown in the informal “social contract” of web crawling: automated collection—especially associated with AI data needs—can create traffic patterns that look less like search indexing and more like extraction at scale. In that environment, operators are looking for defenses that don’t require building sophisticated ML-driven bot detection.
Anubis matters because it’s a pragmatic open-source stopgap: it doesn’t claim to solve the legal or contractual questions around training data, but it can help keep sites online and reduce abusive load while norms evolve. Its design reflects a moment where many operators are moving from “allow by default” to “verify first,” even if that verification is simply “prove you can pay a small computational toll.”
This shift also echoes a broader theme in modern web infrastructure: operational controls are increasingly moving “closer to the edge” (reverse proxies, middleware gates, automated policy).
## Practical guidance for site operators
If you run a small site dealing with scraping or traffic spikes, Anubis is a fit when:
- You need a deployable, lightweight mitigation that reduces pressure on upstream resources.
- You can tolerate requiring modern JavaScript for access (or can offer alternate access paths where feasible).
Operationally, the key knobs and practices are:
- Tune difficulty: The default (five leading zeroes) is a starting point, but difficulty is configurable. Raising it increases deterrence but also user friction; a sketch of dynamic tuning follows this list.
- Monitor false positives: Watch for complaints from privacy-hardened users and non-browser integrations that may break.
- Layer defenses: PoW can complement rate limiting and other controls; it’s not meant to be the only gate.
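Here is the dynamic-tuning idea from the first bullet as a Go sketch. The thresholds, window, and step sizes are illustrative assumptions, not Anubis defaults:

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

// tuneDifficulty raises the PoW difficulty when the request rate spikes
// and relaxes it when traffic calms down.
func tuneDifficulty(requests, difficulty *atomic.Int64) {
	for range time.Tick(10 * time.Second) {
		rate := requests.Swap(0) / 10 // average req/s over the window
		switch {
		case rate > 500 && difficulty.Load() < 8:
			difficulty.Add(1) // under load: make each request dearer
		case rate < 50 && difficulty.Load() > 4:
			difficulty.Add(-1) // calm: reduce friction for humans
		}
		fmt.Printf("rate=%d req/s, difficulty=%d\n", rate, difficulty.Load())
	}
}

func main() {
	var requests, difficulty atomic.Int64
	difficulty.Store(5) // the article's default as a starting point
	go tuneDifficulty(&requests, &difficulty)
	// A request handler would call requests.Add(1) per request; omitted.
	select {} // keep the sketch running
}
```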
## What scrapers and researchers should know
On the other side of the fence, Anubis is also a clear signal: the operator is actively defending access, and bulk automated collection will have increased cost. Practically:
- Plan for compute overhead if you’re doing compliant crawling, because the “free ride” model is being replaced by pay-per-request friction.
- Prefer negotiation and published interfaces where available: APIs, data dumps, or explicit permission are more sustainable than brute forcing around defensive measures.
- Be aware that non-browser clients (or privacy-first browser modes) may fail outright due to the JavaScript requirement.
## What to Watch
- Whether Anubis‑style PoW becomes routinely paired with stronger fingerprinting (headless detection, font rendering), improving filtering but increasing privacy tensions.
- How “difficulty tuning” evolves in practice: dynamic settings could become a standard operational response to traffic spikes.
- Whether broader ecosystem norms (technical and non-technical) reduce pressure for PoW gates—or instead make them a default expectation for protecting community infrastructure.
Sources:
- https://aitoolly.com/ai-news/article/2026-05-03-anubis-anti-scraping-shield-defending-web-infrastructure-against-aggressive-ai-data-harvesting
- https://github.com/TecharoHQ/anubis
- https://anubis.techaro.lol/docs/design/how-anubis-works/
- https://en.wikipedia.org/wiki/Anubis_(software)
- https://hamradio.my/2025/07/how-anubis-works-fighting-bots-with-proof-of-work/
- https://github.com/topics/proof-of-work?l=javascript
## About the Author
yrzhe
AI Product Thinker & Builder. Curating and analyzing tech news at TechScan AI. Follow @yrzhe_top on X for daily tech insights and commentary.