Today’s TechScan: From LLM quirks to cardboard drones and nationalized nukes
Today's briefing highlights a surprising AI model quirk, a dangerous PyPI supply‑chain compromise, a low‑cost cardboard drone entering Japanese defense use, Belgium's reversal of its nuclear phase‑out, and developer‑facing releases and hacks that matter to engineers and builders. We group the top items into seven focused themes to surface what’s new and actionable for technologists.
The most unsettling tech story today isn’t a splashy new model release or a zero-day with a cinematic name; it’s the quiet reminder that tiny choices in AI training and product “personality” can echo for years, shaping what users experience long after the original tweak is forgotten. Researchers digging into a peculiar verbal tic in GPT‑5.x—an odd fondness for “goblins,” “gremlins,” and other creature metaphors—managed to trace the pattern back to reinforcement signals tied to a personality customization feature. The culprit wasn’t malice or incompetence, but a playful intent: a “Nerdy” personality meant to be unpretentious and fun. And yet, once those “fun” outputs were rewarded, the reward model kept leaning into creature words, effectively teaching the system that this style was desirable.
What makes this episode—documented in OpenAI’s “Where the Goblins Came From”—more than a linguistic curiosity is the way the effect persisted across model generations. Users of GPT‑5.1 complained about overfamiliar language, and analysis showed the Nerdy personality accounted for a tiny fraction of responses (2.5%) but an outsized share of goblin mentions (66.7%). Auditing of RL training data using Codex found that the Nerdy reward consistently favored outputs containing creature words, amplifying the trait through feedback loops. That’s the cautionary tale for model designers and auditors: reward design is not a garnish. Even small personalization knobs can become durable fingerprints that survive updates, drift into default tones, and influence user trust. If your assistant starts sounding like it’s auditioning for a fantasy tabletop campaign, users may laugh—until they start wondering what else is being subtly steered by invisible incentives.
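The kind of audit described above boils down to a frequency comparison: for each personality tag, how often do creature words appear in the outputs the reward model favored, versus the baseline? A minimal sketch of that check, where the data format, word list, and sample records are illustrative assumptions, not OpenAI's actual tooling or data:

```python
def creature_rate(responses, creature_words=("goblin", "gremlin", "imp")):
    """Fraction of responses containing at least one creature word."""
    hits = sum(
        1 for text in responses
        if any(w in text.lower() for w in creature_words)
    )
    return hits / len(responses) if responses else 0.0

def audit_by_personality(samples):
    """samples: list of (personality, response_text, reward) tuples.
    Returns per-personality creature rates among reward-favored outputs."""
    buckets = {}
    for personality, text, reward in samples:
        if reward > 0:  # keep only outputs the reward model favored
            buckets.setdefault(personality, []).append(text)
    return {p: creature_rate(texts) for p, texts in buckets.items()}

# Toy records standing in for RL training data:
samples = [
    ("nerdy", "The parser goblins ate your semicolon.", 1.0),
    ("nerdy", "Here's the fix for the loop bound.", 1.0),
    ("default", "Here's the corrected function.", 1.0),
    ("default", "The off-by-one error is on line 3.", 1.0),
]
print(audit_by_personality(samples))
# → {'nerdy': 0.5, 'default': 0.0}
```

A disproportionate rate for one personality, as in the toy output above, is exactly the "durable fingerprint" signal the audit was looking for.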
From language quirks to outright compromise: the AI ecosystem remains a high-value target for supply-chain attackers, and this week’s PyPI incident is a harsh demonstration of how wide the blast radius can be when a widely used dependency gets poisoned. According to Semgrep’s write-up, the PyPI package ‘lightning’ (PyTorch Lightning) was compromised in versions 2.6.2 and 2.6.3 published on April 30, 2026. The grim punchline is that you didn’t need to run a suspicious script; simply installing the package and importing it could activate an obfuscated JavaScript payload hidden in a _runtime directory. In a world where ML teams routinely spin up new environments, containers, and notebooks, “pip install” is essentially muscle memory—exactly the sort of habitual action attackers love.
The malware’s behavior is both familiar and pointed: it attempts to exfiltrate credentials, tokens, environment variables, and cloud secrets, and then escalates into a kind of repo-level contamination by trying to poison GitHub repositories—creating themed public repos with names like “EveryBoiWeBuildIsaWormBoi.” Analysts linked it to earlier Shai-Hulud/mini Shai-Hulud campaigns based on Dune-themed commit naming and IOC structure. Semgrep issued detection rules and practical advice that reads like an incident response checklist for anyone shipping ML code: rescan impacted projects, audit repos for injected files (including .claude/ and .vscode/), and rotate GitHub tokens. The broader point isn’t just “dependencies are risky”—we already know that—it’s that AI tooling sits at the crossroads of secrets-heavy environments (cloud keys, model artifacts, private datasets) and fast-moving developer habits (rapid installs, shared notebooks, CI pipelines). That intersection is a supply-chain attacker’s playground.
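A quick first pass at the checklist can be scripted: flag the known-bad versions named in the report, and surface any `_runtime` directories lurking in an environment. This is a rough triage sketch under stated assumptions (the version list comes from the write-up; the directory heuristic is our own simplification, not Semgrep's actual detection rule):

```python
from importlib import metadata
from pathlib import Path

COMPROMISED = {"2.6.2", "2.6.3"}  # lightning versions named in the report

def check_lightning_version():
    """Return the installed 'lightning' version if it is a known-bad one."""
    try:
        version = metadata.version("lightning")
    except metadata.PackageNotFoundError:
        return None
    return version if version in COMPROMISED else None

def find_runtime_dirs(root):
    """Heuristic: list _runtime directories under root, where the
    obfuscated JavaScript payload was reportedly hidden."""
    return [p for p in Path(root).rglob("_runtime") if p.is_dir()]

if __name__ == "__main__":
    bad = check_lightning_version()
    if bad:
        print(f"WARNING: compromised lightning version installed: {bad}")
    else:
        print("lightning not installed, or not in the known-bad set")
```

Triage like this complements, but does not replace, the rest of the checklist: token rotation and repo audits still matter even if the scan comes back clean.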
Meanwhile, defense tech continues to compress timelines and budgets in ways that would have sounded implausible a decade ago: Japan’s defense minister Shinjirō Koizumi publicly showcased the AirKamuy 150, a low-cost, flatpack cardboard “suicide” drone made by startup AirKamuy, and confirmed that Japan’s Maritime Self-Defense Force is already using them as expendable targets, per 404 Media. The IKEA-like framing—lightweight, prefab, shipped flat—lands because it captures the strategic shift: unmanned systems aren’t just exquisite, expensive assets anymore. They can be mass-producible consumables, designed to be lost.
That change has procurement and doctrinal consequences that go beyond the novelty of cardboard as an airframe material. If a drone is cheap enough to treat as a training target today, it can be cheap enough to treat as an attritable strike platform tomorrow, or as part of swarm tactics where quantity is its own form of resilience. A defense buyer weighing a small fleet of high-end systems against a large inventory of disposable ones isn’t merely choosing a vendor—they’re choosing an operational philosophy, a logistics plan, and a risk posture. The AirKamuy 150 story also highlights the growing role of startups in military robotics, and the downstream debates that follow: how regulators treat export, how militaries think about safety and counter-drone measures, and how quickly doctrine evolves when “expendable” stops being an exception and becomes the default.
Energy policy, too, is showing how quickly “settled” plans can be reversed when security assumptions change. Belgium has halted the decommissioning of its nuclear power plants and entered exclusive talks with operator ENGIE to potentially nationalize the country’s full nuclear fleet, according to dpa-international. The proposed transfer would include seven reactors, personnel, subsidiaries, assets, and liabilities—including decommissioning obligations—with a basic agreement expected by October. In one stroke, Belgium is reversing its 2003 phase-out policy, reflecting energy-security concerns, heavy reliance on gas imports, and slow renewable buildout.
The second account, surfaced via a Hacker News item, adds context around the operational extensions already agreed for Doel 4 and Tihange 3 through 2035, and frames the policy pivot in the shadow of Russia’s 2022 invasion of Ukraine and Europe’s broader energy recalculations. The core tension is familiar but newly urgent: extend the lifetimes of aging reactors, invest in new nuclear construction, accelerate renewables, or accept greater dependence on imports—each path brings its own costs, timelines, and safety oversight burdens. Belgium’s move also sharpens a political-economic question that other countries keep circling: if nuclear is considered strategically essential, who should ultimately carry the long-tail liabilities—private operators, or the state? When you start talking nationalization, you’re admitting that the asset is not just an electricity generator; it’s infrastructure with sovereign-level implications.
Back on the ground, developers got two reminders today that progress isn’t always about bigger platforms—it’s often about clever consolidation and preservation, making small systems more capable and old systems playable again. Honker, a SQLite loadable extension, proposes something that feels almost mischievous in its simplicity: durable queues, event streams, pub/sub, Postgres-style NOTIFY/LISTEN semantics, and a cron scheduler inside a SQLite file. The pitch is less “replace your stack” and more “stop needing a separate broker like Redis when your application already depends on a database file.” Crucially, Honker lets applications enqueue jobs atomically with business writes in the same transaction, avoiding dual-write complexity and the backup/restore headaches that come with stitching together multiple stateful services.
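Honker's own API isn't shown in the announcement, but the underlying pattern it packages, enqueuing a job in the same transaction as the business write, is plain SQL and works in stock SQLite. A sketch of the idea (table and column names are illustrative, not Honker's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT, status TEXT);
    CREATE TABLE jobs (
        id INTEGER PRIMARY KEY,
        kind TEXT NOT NULL,
        payload TEXT NOT NULL,
        claimed_at TEXT  -- NULL until a worker claims the job
    );
""")

def place_order(item):
    """Business write and job enqueue commit (or roll back) together,
    so there is no dual-write window where one exists without the other."""
    with conn:  # a single transaction
        cur = conn.execute(
            "INSERT INTO orders (item, status) VALUES (?, 'pending')", (item,)
        )
        conn.execute(
            "INSERT INTO jobs (kind, payload) VALUES ('send_receipt', ?)",
            (str(cur.lastrowid),),
        )
    return cur.lastrowid

order_id = place_order("widget")
```

A worker process then claims jobs with an `UPDATE … WHERE claimed_at IS NULL` in its own transaction; what an extension like Honker reportedly adds on top is everything around this core: wake-ups, scheduling, and pub/sub semantics.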
It also aims to solve the practical pain of making SQLite “real” in production across processes: cross-process wake notifications (reported at roughly 0.7 ms p50 on an M-series laptop) so workers can react to commits without polling. With multi-language bindings—Python, Node, Rust, Go, Ruby, Bun, Elixir, C++, all sharing one on-disk format—the tool is clearly angling at teams who want the simplicity of SQLite-backed deployments but still need queues and background work. The examples cited—Bluesky PDS, Fly’s LiteFS, Turso—signal the broader moment: SQLite isn’t just a toy database; it’s increasingly a deployment primitive, and the ecosystem is racing to fill in the missing “distributed-ish” ergonomics without turning it into a heavyweight monster.
On the preservation-and-engineering side, the story of SimTower being recreated as towers.world is a masterclass in reverse engineering as a form of cultural infrastructure. Developer phulin reverse-engineered the original 1993 EXE and produced a near-perfect, tick-for-tick reproduction, documenting simulation details like population flow, elevator AI, and star rating in detailed specs on GitHub. That’s not just nostalgia; it’s an engineering artifact. By translating a closed binary into reproducible specifications and an open-source codebase (github.com/phulin/tower-together), the project demonstrates how communities can turn “old software” into something maintainable and inspectable—arguably the only sustainable kind of preservation.
What’s especially modern is that this isn’t a static museum piece. The reimplementation adds conveniences like shift-click grid building and, more ambitiously, multiplayer collaboration where multiple players connect to the same persistent simulation with real-time synced build actions. The server architecture runs on Cloudflare Durable Objects, showing how a 1990s simulation model can be re-homed in today’s web real-time tooling. It’s hard not to see this as a template: preservation not as a ROM file on a shelf, but as a living service with shared state, network synchronization, and UI affordances that respect the original while acknowledging contemporary expectations.
In hardware and maker land, two releases underline a theme: lowering friction for precision work, whether mechanical or temporal. Noctua has published official 3D CAD models for its cooling fans, a small move that has outsized practical value. If you’re designing a case, a shroud, a custom mount, or integrating cooling into a product, accurate models mean fewer “measure twice, print three times” cycles. For hobbyists and OEMs alike, this is a quiet gift: the difference between a fan being an approximate rectangle in your CAD assembly and being a verified part you can design around.
Timing precision, meanwhile, gets the DIY treatment in “My Stratum‑0 Atomic Clock,” where an author upgrades a Raspberry Pi-based GPS-disciplined desk clock to a more stable, portable atomic reference by integrating a chip‑scale atomic clock (CSAC). The motivation is practical: GPS 1PPS is great until it isn’t, and CSACs provide cesium‑133–based frequency stability in a surface-mount package, dramatically improving holdover when GPS signals are unavailable. The piece situates CSACs in the arc of DARPA-funded development and subsequent commercial products, arguing that compact atomic references are making lab-grade timing accessible for DIY and small-scale deployments. It’s the kind of hack that sounds indulgent until you remember how many systems—telecom, navigation, sensing—quietly depend on time being not just correct, but stable.
Finally, today’s privacy and moderation stories make an uncomfortable pairing: one is about silent technical probing at scale, the other about the human cost of reviewing what machines capture. Researchers and independent auditors found that LinkedIn has been scanning browsers for at least 6,278 Chrome extensions and embedding encrypted results into every request, a practice traced back to 2017, per 404privacy. The list reportedly grew from 38 to thousands and appears actively maintained, generated by tooling that crawls extension manifests to find probe targets. Tests showed LinkedIn triggers console errors while probing for extensions a user doesn’t have—evidence of client-side checks that can fingerprint visitors. The worry is straightforward: extension-based fingerprinting can enable cross-site profiling and identification without explicit consent, and LinkedIn sits on a uniquely sensitive blend of professional identity and personal data.
On the moderation side, the BBC reports that Meta cut ties with Kenyan contractor Sama weeks after workers said they had to review intimate, potentially non-consensual footage captured by Meta’s AI-powered Ray-Ban and Oakley smart glasses. Sama says about 1,108 staff face redundancies, while Meta says the firm failed to meet its standards and that it had paused the relationship while investigating. The revelations prompted probes by the UK Information Commissioner’s Office and Kenya’s data protection regulator, raising questions about consent practices and reviewer safeguards. Wearables take the old content-moderation dilemma—human beings sorting the internet’s worst moments—and strap it to a camera you might meet at a bar. The technical promise of “AI-powered glasses” is inseparable from the governance problem of what gets captured, what gets reviewed, and who pays the psychological price.
If there’s a thread through all of this, it’s that the systems we’re building—models, packages, drones, reactors, platforms—are accruing power in ways that make small design and policy choices feel permanent. A “playful” reward tweak can become a generational tell; a dependency compromise can turn routine installation into credential exfiltration; a cheap airframe can shift doctrine; a phase-out can become a nationalization plan; a website can treat your browser extensions as an ID badge; a wearable can turn private life into review queue material. The next months will reward teams and governments that treat these not as isolated incidents, but as signals: audit the incentives, harden the supply chain, and write the rules before the defaults write them for you.
About the Author
yrzhe
AI Product Thinker & Builder. Curating and analyzing tech news at TechScan AI. Follow @yrzhe_top on X for daily tech insights and commentary.