Today’s TechScan: Tiny silicon, retro hacks, and civic data in Git
Top stories today span hardware innovations, civic tech reimagining, retro and preservationist projects, developer tooling for safety and productivity, and a surprising policy nudge on personalization algorithms. Highlights include ultra‑low‑latency AI in silicon at CERN, AMD’s new dual‑V‑Cache Ryzen, Spain’s entire body of law put into a Git repo, lightweight sandboxes for agent safety, and circuit-level PDP‑11‑era emulation work.
If you’re trying to predict where tech is going, you could do worse than watching where engineers are forced to be disciplined. Today’s stories share a strangely consistent moral: when the constraints are real—nanoseconds, watts, human trust, legal audit trails—flashy abstractions fall away and you get systems that are smaller, more explicit, and harder to hand-wave. That’s true in the literal sense of tiny neural nets “burned into silicon” to keep up with the Large Hadron Collider, and it’s true in the civic sense of turning an entire country’s laws into something you can git log. It’s even true in the retrocomputing scene, where “nostalgia” increasingly looks like a rigorous form of preservation engineering.
Start with CERN, which is effectively running a masterclass in what “edge AI” actually means when the edge is a detector generating a data flood that can peak at hundreds of terabytes per second. According to an account of its deployment, CERN’s Level‑1 Trigger is built from roughly 1,000 FPGAs running the AXOL1TL algorithm. Their job is brutally simple in description and brutally unforgiving in practice: decide, in under 50 nanoseconds, which proton collisions are worth keeping. Only about 0.02% of events survive. That selection is not a nice-to-have; it’s the difference between feasible science and an unmanageable deluge.
What’s striking isn’t merely that CERN uses ML here—it’s the kind of ML. These are ultra-compact, task-specific neural nets trained in familiar frameworks like PyTorch and TensorFlow, then pushed through HLS4ML into synthesizable C++ and mapped onto hardware (FPGAs and even ASICs). This is ML as a deterministic, scheduled circuit, not as a flexible cloud service. The tradeoffs are the ones you’d expect when your latency budget is measured in nanoseconds: predictability over generality, power and determinism over model sprawl, and a toolchain that looks more like hardware compilation than “deploy container, scale to zero.” It’s also a reminder that “GPU versus not-GPU” is sometimes not a debate; it’s simply the wrong category. In this world, GPUs and TPUs are sidestepped because they cannot promise what the trigger requires: extremely low latency, consistent behavior under tight power constraints, and a design you can reason about like a circuit, because, effectively, it is one.
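The actual HLS4ML flow (train a model, convert it to fixed-point C++, synthesize for an FPGA) is a full toolchain, but the core discipline it imposes can be sketched in a few lines. This is a toy illustration in plain Python, not CERN's code: a single neuron evaluated entirely in saturating fixed-point integer arithmetic, the way an FPGA DSP slice would see it. The bit widths are arbitrary choices for the sketch.

```python
# Toy sketch of the fixed-point discipline FPGA inference imposes.
# NOT the HLS4ML toolchain -- just a dense-layer neuron quantized to an
# ap_fixed-style format (here: 8 total bits, 4 of them fractional).

FRAC_BITS = 4
SCALE = 1 << FRAC_BITS          # 2^4 = 16
INT_MIN, INT_MAX = -128, 127    # 8-bit signed range

def to_fixed(x: float) -> int:
    """Quantize a float to an 8-bit signed fixed-point integer (saturating)."""
    q = round(x * SCALE)
    return max(INT_MIN, min(INT_MAX, q))

def fixed_dense(weights, inputs, bias):
    """One neuron: a multiply-accumulate done entirely in integers."""
    acc = bias * SCALE           # pre-scale bias so every term shares one format
    for w, x in zip(weights, inputs):
        acc += w * x             # product of two Q4 values is Q8
    return acc >> FRAC_BITS      # drop extra fractional bits (truncating, as HLS would)

# 0.5*1.0 - 0.25*2.0 + 0.125 = 0.125, i.e. 2 in Q4 fixed point:
w = [to_fixed(0.5), to_fixed(-0.25)]
x = [to_fixed(1.0), to_fixed(2.0)]
out = fixed_dense(w, x, bias=to_fixed(0.125))
```

The point of the exercise: every operation has a fixed bit width, a fixed cost, and a fixed rounding behavior, which is exactly what lets a synthesis tool schedule it into a circuit with a provable latency.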
That constraint-driven sensibility rhymes with a very different kind of hardware story: AMD’s launch of the Ryzen 9 9950X3D2 Dual Edition, a high-end desktop CPU that goes all-in on 3D V‑Cache by stacking it across both chiplets for a total of 208MB of cache. The point, AMD says, is speed—up to roughly 10% faster than the 9950X3D in games and cache-sensitive apps—by keeping more working data close, reducing the penalty of waiting on memory. It’s the CPU equivalent of saying: don’t just compute faster, wait less.
The architectural move is also a kind of simplification. Earlier hybrid V‑Cache approaches could require OS/driver scheduling to steer the right work toward the cache-enhanced cores, creating the infamous “core parking and scheduling quirks” that power users learned to troubleshoot. By putting the stacked cache on both dies, AMD removes that particular footgun. But the “no free lunch” clause is printed right on the spec sheet: a slightly lower peak clock (5.6GHz vs 5.7GHz), a higher default TDP (200W vs 170W), and an expected premium price. In other words, buyers are still choosing a posture: if your workload is latency- and cache-sensitive, this is a very intentional bet; if you’re chasing absolute peak frequency or efficiency, you’re paying for silicon that may not show up in your benchmarks. The chip is slated to ship April 22, and it supports AMD’s standard tuning and overclocking tools—though the subtext is that the “default” personality of this CPU is already an aggressive one.
From hardware constraints to civic ones: one of the most consequential developer workflow stories today isn’t about a new framework, but about reframing a public institution’s output as a software artifact. Developer Enrique Lopez has converted Spain’s consolidated national legislation into a Git repository called legalize-es, containing 8,642 laws pulled from the BOE open-data API. Each law lives as a Markdown file with YAML frontmatter metadata, and—crucially—each amendment is represented as a dated commit, with commit messages citing official reform identifiers and sources. The project preserves history back to 1960, and another write-up notes 27,866 historical reforms captured as commits.
In practical terms, this means the basic operations software teams take for granted—diffs, blame, reproducible history—become possible for statutes. You can inspect how an article changed over time (the example cited is Article 135), and you can do it with the same muscle memory you’d apply to a codebase. The project was built with a pipeline assembled quickly (a commenter notes “about four hours,” using Claude Code alongside the BOE API), but the implications are not “weekend project” sized. When the law is machine-readable and versioned, it becomes integrable: compliance tooling can treat legal requirements as data inputs rather than PDFs and folklore, researchers can ask more precise historical questions, and legal assistants can be trained with something they usually lack—provenance.
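What "diffing a statute" looks like in practice can be sketched with the standard library. The file layout below is illustrative only — the article says legalize-es uses Markdown with YAML frontmatter, but the exact field names and this excerpt of Article 135 are hypothetical — and difflib stands in for what `git diff` would show between two commits.

```python
# Sketch of "law as versioned text": two revisions of a hypothetical article,
# stored as Markdown with YAML frontmatter, diffed as `git diff` would show them.
# Field names and text are illustrative, not legalize-es's actual schema.
import difflib

v1 = """---
id: boe-ejemplo-001
title: Articulo 135 (hypothetical excerpt)
amended: 2011-09-27
---
All public administrations shall adapt their actions
to the principle of budgetary stability.
"""

# A hypothetical amendment: new wording plus updated frontmatter metadata.
v2 = v1.replace(
    "to the principle of budgetary stability.",
    "to the principle of budgetary stability,\nwithin the limits set by European Union law.",
).replace("amended: 2011-09-27", "amended: 2012-01-01")

diff = list(difflib.unified_diff(
    v1.splitlines(), v2.splitlines(),
    fromfile="a/articulo-135.md", tofile="b/articulo-135.md", lineterm="",
))
print("\n".join(diff))
```

With one dated commit per reform, `git log --follow` and `git blame` on such a file answer "when did this clause change, and citing what?" — the provenance question the article flags.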
There’s also a subtle cultural shift embedded here: Git isn’t just storage, it’s a social contract about change. When laws are rendered as a repository, you implicitly invite a style of accountability where the unit of debate can be a diff. You also expose how much metadata is still missing, which is exactly what the community discussion gravitates toward—suggestions like enriching the data with authors, parties, and tags. Lopez also plans a programmatic API at legalize.dev, which points to a future where “read the law” isn’t solely a human activity, but something systems do continuously, checking their obligations as automatically as they check for security patches.
That impulse to preserve, reproduce, and make systems inspectable shows up in the retrocomputing world too—where the best projects are less about “vibes” and more about recovering truth from schematics. One standout is ll/34, a circuit-level emulator of the 1976 PDP‑11/34A. Rather than emulating at the instruction-set level alone, it models the KD11‑EA CPU from schematics and microcode ROMs, including combinational ROM truth tables, 512x48 PROM microcode, 74xx gate logic, a precise clock generator, and timing that respects UNIBUS behavior and MMU details. It goes further into the ecosystem with peripheral cards—boot ROM, serial DL11, clocks, RK/RL disk controllers, tape, and VT100—and includes a front-panel Programmer Console, a microcode-aware Debug Console, and even a built-in logic analyzer for signal-level tracing. There’s a WebAssembly front-end with a photorealistic panel, which feels like the right flourish: not just function, but a bridge back to how humans interacted with the machine.
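The microcode-ROM style of modeling ll/34 uses can be caricatured in a few lines. To be clear, everything below is hypothetical — the KD11‑EA's real control-word fields and ROM contents come from the schematics the project digitized — but the shape is the same: each clock, a micro-address selects a wide control word that drives the datapath.

```python
# Caricature of microcode-ROM emulation: a 512-entry ROM of 48-bit control
# words, stepped one micro-address per clock. The encoding here is invented
# for illustration -- NOT the KD11-EA's actual ROM contents or field layout.
ROM_WORDS, WORD_BITS = 512, 48
rom = [0] * ROM_WORDS

# Hypothetical encoding: low 9 bits = next micro-address, bit 9 = "load IR".
rom[0] = (1 << 9) | 0o005

def step(upc: int) -> tuple[int, bool]:
    """One microcycle: fetch the control word, decode the fields we model."""
    word = rom[upc % ROM_WORDS]
    next_upc = word & 0x1FF
    load_ir = bool(word & (1 << 9))
    return next_upc, load_ir
```

An emulator at this level doesn't interpret PDP‑11 instructions directly; it interprets the machine that interprets them, which is why UNIBUS timing and gate-level quirks come along for free.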
In a similar spirit—but from the opposite direction of “simulate a historic computer”—an electronics hobbyist built Tic‑Tac‑Toe out of 2,458 discrete transistors, turning what’s often an abstract exercise into a hands-on proof that complex logic can be assembled from first principles. The project, called “Fets and Crosses,” moved from an initial ROM-based engine to a combinatorial logic implementation that can play perfectly, using 19 flip-flops and reusable MOSFET-based logic cells designed in hierarchical KiCad schematics. The creator documented simulation in Logisim and used layout workflows that mirror IC design, including Manhattan-style 2-layer routing, plus a homemade vacuum pick-and-place pen to make assembly less punishing. These projects aren’t merely “cool”—they’re educational artifacts that put the hidden layers of modern computing back on the surface.
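The flavor of "combinatorial logic that plays perfectly" is easy to taste in software. The sketch below is not the project's actual netlist — just the same idea in miniature: win detection as a pure function of nine board bits, an OR of eight AND terms, exactly the kind of truth-table function the transistor build hard-wires.

```python
# Tic-tac-toe win detection as pure combinational logic: a stateless function
# of nine board bits. Illustrative sketch, not "Fets and Crosses"'s circuitry.

# Cells numbered 0..8 row-major; bit 8 of each mask is cell 0.
WIN_MASKS = [
    0b111000000, 0b000111000, 0b000000111,  # rows
    0b100100100, 0b010010010, 0b001001001,  # columns
    0b100010001, 0b001010100,               # diagonals
]

def has_won(player_bits: int) -> bool:
    """True if any winning line is fully covered: an OR of eight AND terms."""
    return any(player_bits & m == m for m in WIN_MASKS)

# X on cells 0, 4, 8 -- the main diagonal:
x_board = 0b100010001
```

In hardware this function is eight 3-input AND gates feeding one 8-input OR; the project's achievement is building that, and the rest of a perfect player, out of discrete MOSFET cells.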
If you want a third retro note that’s more playful than preservationist, there’s also a video project: an open-world engine for the N64. The source list doesn’t give technical specifics beyond the premise and the link, but it fits the broader pattern: using modern toolchains and contemporary ambition on classic constraints. Even when the details aren’t spelled out, the throughline remains: deliberate engineering within hard limits is having a moment.
Those same safety-and-limits themes are popping up in modern developer workflows around AI agents, where the constraint is: don’t let your helpful assistant wipe your machine. Stanford’s Secure Computer Systems group has released jai, pitched with a blunt warning—“Don’t YOLO your file system”—and the message lands because it’s anchored in real incidents where agents deleted user files. Jai is a lightweight Linux sandbox designed to bound an agent’s filesystem access with a one-command setup: your current working directory stays writable, while the rest of your home directory can be presented as a copy-on-write overlay or hidden; /tmp is private; other paths can be read-only. It offers three isolation modes—casual, strict, bare—to balance confidentiality, integrity, and UID handling depending on how paranoid (or how busy) you are.
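As a mental model (jai itself works at the Linux mount-namespace level, not like this), the policy described reduces to a path classifier. The code below is a sketch of the policy, not jai's implementation, and the example paths are invented:

```python
# Mental model of the sandbox policy jai describes -- NOT jai's implementation,
# which enforces this with mount namespaces and overlays rather than checks.
from pathlib import PurePosixPath

def classify(path: str, cwd: str = "/home/dev/project", home: str = "/home/dev") -> str:
    p = PurePosixPath(path)
    if p.is_relative_to(cwd):
        return "writable"        # the current working directory stays writable
    if p.is_relative_to("/tmp"):
        return "private"         # /tmp is private to the sandbox
    if p.is_relative_to(home):
        return "copy-on-write"   # rest of $HOME: overlaid or hidden
    return "read-only"           # everything else

print(classify("/home/dev/.ssh/id_ed25519"))  # an agent can read, but not destroy
```

The three modes (casual, strict, bare) then amount to choosing how aggressively the non-writable categories are enforced — overlay versus outright hiding, shared versus separate UID.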
The positioning is important: jai isn’t trying to replace heavy containment like VMs or hardened containers, and it explicitly warns it’s not a substitute for full hardening. Instead it’s aimed at the messy reality of ad-hoc workflows where Docker, bubblewrap, or chroot feel like too much ceremony. This is where developer safety tooling probably has to go if it wants adoption: small, composable primitives that make the safe path the easy path, especially as local agents become routine rather than exotic.
That preference for lightweight composability runs through today’s open-source tooling picks as well. One practical example: improving git diffs with delta, fzf, and “a little shell scripting.” Delta replaces git’s built-in pager with clearer character- and word-level diffs and configurable themes, and the author describes wiring it into .gitconfig so common commands like git show, git diff, git add -p, and git blame all benefit. They also share a gd script that uses fzf to present an interactive file menu, supports side-by-side views, and forwards git diff flags (like --staged or branch comparisons). It’s not a new platform; it’s a sharper knife. And in practice, sharper knives are what make teams faster and code reviews less error-prone.
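The article doesn't reproduce the author's exact `.gitconfig`, but the standard wiring from delta's own documentation looks like this; once set, `git show`, `git diff`, `git add -p`, and `git blame` all pick it up automatically:

```ini
# ~/.gitconfig -- typical delta setup per delta's documentation;
# the article author's exact configuration may differ.
[core]
    pager = delta

[interactive]
    diffFilter = delta --color-only   # makes `git add -p` hunks render through delta

[delta]
    navigate = true        # n / N jump between diff sections
    side-by-side = true
    line-numbers = true
```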
On the UI/protocol side, Cocoa-Way is a native macOS Wayland compositor aimed at running Linux GUI apps on macOS “seamlessly” by forwarding Wayland traffic over Unix sockets via waypipe. Built in Rust and using Metal/OpenGL for hardware-accelerated HiDPI rendering, it’s designed for lower latency than approaches like XQuartz, VNC, or VM GUI solutions, and it includes server-side decorations for better desktop integration. The roadmap—multi-monitor support, clipboard sync, additional backends—signals ambition, but even in its current framing it’s an example of “protocol virtualization” research turning into a developer-facing tool. It’s also a reminder that interoperability is still a frontier: we’re surrounded by powerful systems that remain annoyingly siloed, and clever protocol bridges can be more impactful than yet another monolithic platform.
Finally, there’s a project that sits at the intersection of education, accessibility, and the browser as a runtime: velxio, a TypeScript project that claims you can emulate Arduino, ESP32, and Raspberry Pi boards in your browser—write code, compile, and run on 19 boards, with “no hardware, no cloud.” The repo framing suggests a push toward frictionless embedded experimentation, where the first barrier (“I don’t own the board”) disappears. Even without deeper technical detail in the source snippet, the direction is clear: bringing hardware learning loops closer to the immediacy of web development.
Policy, too, is tightening constraints—this time on business models that lean on opaque personalization. Colorado’s House has passed House Bill 26-1210 on a 39-24 vote, aiming to ban companies from feeding individuals’ personal data into algorithms that set personalized prices or wages. The bill targets “surveillance pricing” based on data like search history, finances, geolocation, and online behavior, while exempting loyalty programs, group discounts, and ordinary supply-and-demand pricing. Sponsors Rep. Javier Mabrey and Rep. Jennifer Bacon frame it as consumer protection against opaque algorithmic decision-making, with the write-up noting FTC concerns about AI-enabled individualized pricing. Opponents, including Rep. Chris Richardson, argue the language could be overly broad and might sweep in standard HR analytics. Next stop is the Colorado Senate.
Whether or not this specific bill becomes law, the appetite it represents is hard to miss: lawmakers are increasingly willing to challenge personalization when it becomes inscrutable, unaccountable, or coercive. For companies, that means the “just optimize the funnel” era is colliding with an expectation of legibility—what data is used, for what purpose, and with what safeguards.
Today’s connective tissue is constraint as a forcing function. CERN’s triggers show ML thriving when it’s compact and deterministic; AMD’s cache-heavy CPU shows performance strategy shifting from raw clocks to memory behavior; Spain-in-Git shows that transparency can be a data structure; retro emulation and discrete transistor builds show that preservation is engineering, not sentiment; and tools like jai, delta, and Cocoa-Way show that developer experience often advances through small, sharp utilities rather than sweeping revolutions. The forward-looking bet is that we’ll see more of this: fewer “infinite” systems, more bounded ones—built to be audited, simulated, diffed, and trusted, because the next wave of complexity won’t be survivable without those rails.
About the Author
yrzhe
AI Product Thinker & Builder. Curating and analyzing tech news at TechScan AI. Follow @yrzhe_top on X for daily tech insights and commentary.