Today’s Tech Mosaic: Moon Code, Safer Web, and Surprising Hardware Wins
Today’s briefing surfaces a mix of historical preservation, developer and security surprises, and technical advances. Highlights include a fresh public release of the Apollo 11 AGC source, major churn in crypto/SSL libraries, troubling backup and data-exposure stories, and new work in hardware, databases, and compilation engineering.
The most striking tech story today isn’t a shiny new model or a trillion-parameter arms race. It’s a reminder that some of the most consequential software ever written was built under constraints so tight they feel alien in an era of “just add another dependency.” GitHub user chrislgarry has published the original Apollo 11 Guidance Computer (AGC) source code—for both the Command Module and the Lunar Module—in a modern repository format. The repo, titled “Apollo-11,” is described as a way to make historically significant flight software accessible for inspection, study, and preservation. The description is brief, but the implications are not: moving this code into a searchable, version-controlled home changes who can learn from it, how easily it can be referenced, and how likely it is to survive as living knowledge rather than a museum artifact.
It’s hard to overstate what it means to see mission-critical, real-time embedded software presented in a workflow modern engineers actually use. The repository framing makes the AGC code more approachable as an educational object: you can browse, diff, discuss, and cite it with the same habits you apply to contemporary systems code. And even without extra editorial context, just having the Command Module and Lunar Module software in one place invites the kind of comparative reading that real engineering depends on—how different operational demands shape structure and decisions. The repository’s stated goal of preservation also matters in a subtler way: it treats spaceflight computing history not as a nostalgia project, but as technical heritage worth maintaining with today’s best archiving practices.
That look backward pairs neatly with a look forward in the web’s safety rules, where expectations are being clarified—sometimes with a blunt instrument. Google has announced a new explicit spam policy banning “back button hijacking,” calling the practice malicious, with enforcement beginning June 15, 2026. The policy targets scripts and techniques that manipulate browser history so users can’t return to a previous page as expected, or get shunted into unsolicited pages or ads. Crucially, Google isn’t framing this as a vague quality guideline; it’s a specific prohibited practice that can trigger manual spam actions or automated demotions in Search.
For publishers and site operators, the practical message is unromantic: you’re responsible not only for your own code, but also for what sneaks in via third-party libraries and ad scripts. Google explicitly advises site owners to remove offending code, including third-party components, and points to Search Console for reconsideration after fixes. That nudges the industry toward better hygiene—auditing what runs client-side, documenting dependencies, and treating “monetization scripts” as a security and UX surface rather than a bolt-on. It also sharpens a long-running tension: many of the worst navigation abuses aren’t authored by the site itself, but by the ecosystem that finances it.
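The mechanics of the banned technique are easier to see with a concrete model. Below is a toy Python sketch of a browser history stack, with invented URLs; real hijacks abuse the JavaScript History API (e.g. history.pushState), but the effect is the same: flooding the stack with entries for the current page means each press of “back” just lands on the same page again.

```python
# Toy model of a browser history stack, illustrating "back button hijacking".
# Illustrative only: real hijacks run in JavaScript via the History API.

class History:
    def __init__(self, start: str):
        self.entries = [start]   # history stack, oldest first
        self.index = 0           # current position in the stack

    def navigate(self, url: str) -> None:
        # Normal navigation: truncate any forward entries, append the new page.
        self.entries = self.entries[: self.index + 1] + [url]
        self.index += 1

    def push_state(self, url: str) -> None:
        # pushState-style entry: adds a history entry without leaving the page.
        self.navigate(url)

    def back(self) -> str:
        if self.index > 0:
            self.index -= 1
        return self.entries[self.index]

h = History("news.example/article")
h.navigate("spam.example/landing")

# Hijack: the landing page floods the stack with dummy entries, so each
# press of "back" lands on another copy of itself, not the original article.
for i in range(10):
    h.push_state(f"spam.example/landing#{i}")

print(h.back())  # still on spam.example, not the article
```

Undoing the trap requires as many back presses as the hijacker pushed entries, which is exactly the behavior the policy calls out.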
If Google is tightening the screws on deceptive navigation, another report suggests the ad ecosystem has a different relationship with user intent when it comes to privacy signals. An independent webXray audit of more than 7,000 popular California websites (conducted in March) found that Google, Meta, and Microsoft frequently ignored Global Privacy Control (GPC) opt-out signals, despite browsers sending a Sec-GPC: 1 header. webXray reported that Google failed to honor GPC 87% of the time, Meta 69%, and Microsoft 50%, pointing to cases where servers set ad cookies anyway. The article frames the stakes in legal terms as well as technical ones: potential conflicts with the California Consumer Privacy Act and exposure to enormous fines.
The companies disputed or pushed back on the findings; Google, for example, called the analysis a misunderstanding of its products. Still, the audit’s real impact may be cultural as much as regulatory. It reframes “privacy controls” as not just browser features or consent-banner theater, but a measurable signal that can be tested at scale, argued over with data, and potentially enforced. For publishers, it also complicates compliance storytelling: even if a site tries to honor user choices, third-party platforms embedded across the web may behave differently. The uncomfortable question becomes: whose obligation is it to respect opt-outs when the data flows through multiple hands?
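Part of what makes GPC testable at scale is how simple the signal is. The sketch below shows the shape of a webXray-style check in Python: send the opt-out header, then flag responses that still set tracking cookies. The cookie-name prefixes and the decision rule here are illustrative assumptions, not the audit’s actual methodology.

```python
# Minimal sketch of a GPC compliance check. The ad-cookie prefixes below are
# hypothetical examples; a real audit uses a curated tracker database.
from urllib.request import Request

AD_COOKIE_PREFIXES = ("_ga", "_fbp", "IDE", "MUID")  # illustrative only

def gpc_request(url: str) -> Request:
    # Browsers with GPC enabled send "Sec-GPC: 1" on every request.
    return Request(url, headers={"Sec-GPC": "1"})

def violates_gpc(set_cookie_headers: list[str]) -> bool:
    # The opt-out is "ignored" if ad/tracking cookies get set anyway.
    return any(
        cookie.split("=", 1)[0].strip().startswith(AD_COOKIE_PREFIXES)
        for cookie in set_cookie_headers
    )

# Simulated responses (no network): one honors the signal, one does not.
print(violates_gpc(["session=abc; Path=/"]))           # False
print(violates_gpc(["_fbp=fb.1.1700000000; Path=/"]))  # True
```

Because the header is a single bit, disagreement shifts to interpretation: whether a given cookie counts as a “sale or share” of data, which is where the companies’ pushback concentrates.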
That same theme—users assuming a service does one thing while reality quietly does another—shows up in today’s infrastructure and security shocks. A post from a longtime user reports that Backblaze has quietly stopped backing up certain folders, reportedly including .git repositories and OneDrive/Dropbox directories. For customers, this strikes at the core promise of a backup service: that it will preserve what’s on your machine. The complaint isn’t just that exclusions exist (many tools exclude caches or system folders), but that these exclusions appeared without clear documentation or user controls to re-enable them, leaving people to discover gaps only after trust was already spent.
The sharpest point in the critique is that sync is not backup. Sync services can have limited version histories and deletion windows, while backup products often sell themselves on longer retention and easier restores. If common synced folders are excluded, users may believe they have redundancy when they don’t—and certain losses (like deleted Git history) can become effectively permanent. Even if there are reasonable engineering motives behind exclusions, transparency is the real product feature here. A “safety net” that changes shape without telling you is less a safety net than a trapdoor.
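Given that exclusions reportedly appeared without clear documentation, one defensive habit is auditing your own coverage. The sketch below checks paths against an exclusion list; the patterns shown are hypothetical stand-ins for the kinds of folders the post describes, not Backblaze’s actual list.

```python
# Sketch of a backup-coverage audit. The exclusion patterns are hypothetical;
# substitute whatever your backup vendor actually documents (or doesn't).
from fnmatch import fnmatch

EXCLUDED_PATTERNS = ["*/.git/*", "*/OneDrive/*", "*/Dropbox/*"]  # hypothetical

def is_backed_up(path: str) -> bool:
    # A path is covered only if no exclusion pattern matches it.
    return not any(fnmatch(path, pat) for pat in EXCLUDED_PATTERNS)

print(is_backed_up("/home/me/projects/app/.git/HEAD"))    # False
print(is_backed_up("/home/me/projects/app/src/main.rs"))  # True
```

Running a check like this against a sample of your own files is cheap insurance: it converts “I assume it’s backed up” into a claim you have actually tested.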
A separate trust crack appears in a security researcher’s disclosure that Fiverr exposed hundreds of customer files via Cloudinary using public, non-expiring URLs, allowing PDFs and images—some containing PII and tax forms—to be indexed by Google. The researcher describes Cloudinary being used “like S3” but without signed or expiring links, and notes that Fiverr may be serving HTML that links to assets in a way that makes them searchable (the post gives an example query pattern: site:fiverr-res.cloudinary.com form 1040). The researcher says they notified Fiverr’s security email 40 days earlier, received no response, and are now disclosing the issue publicly.
This is less about a single vendor mistake than about a recurring pattern: platforms treating storage URLs as implementation details when they’re actually publication mechanisms. “Non-expiring public URL” is functionally the same as “public document,” and search engines are very good at turning accidental exposure into durable discoverability. The post also flags that Fiverr advertises services that could generate regulated data, raising potential risks under GLBA/FTC Safeguards Rule—a reminder that compliance doesn’t care whether exposure happened through malice or misconfiguration.
While trust is fraying in backups and marketplaces, the cryptographic plumbing that underpins much of the internet is going through a major refit. OpenSSL 4.0.0 has been released with significant API, security, and build changes. On the removal side, it drops legacy protocols/components including SSLv2/SSLv3, engines, deprecated EVP/ERR functions, c_rehash, and some platform targets. It also tightens ASN.1 and X.509 handling—making ASN1_STRING opaque, adding AKID and CRL checks, and providing stricter time-check APIs—and enforces lower bounds in PBKDF2 when using the FIPS provider. If your mental model of OpenSSL includes “it’s everywhere, it’s stable, don’t touch it,” 4.0.0 is an invitation to rethink that posture.
At the same time, OpenSSL 4.0.0 adds features that map to where TLS is going, not where it’s been. It introduces Encrypted Client Hello (ECH), adds SM2/SM3, and supports hybrid post-quantum groups, alongside additions like cSHAKE/SP800-185, new KDFs (SNMP/SRTP), ML-DSA-MU digest, negotiated FFDHE for TLS 1.2, and deferred FIPS self-tests. For developers and operators, the story is twofold: modern privacy and future-proofing are arriving in mainstream libraries, but they arrive with ABI/const-signature changes and build-option adjustments that may require real engineering time. This is the trade: you don’t get the next decade of crypto without paying down some of the last decade’s compatibility debt.
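The PBKDF2 lower-bound enforcement is worth making concrete. The sketch below shows the same idea in pure Python: refuse weak parameters rather than silently deriving a key. The specific floors used (1,000 iterations, 16-byte salt) are the commonly cited SP 800-132 minimums, stated here as assumptions rather than quoted from the OpenSSL release notes; real deployments should use far more iterations.

```python
# Sketch of FIPS-style parameter floors for PBKDF2. The bounds are the
# SP 800-132 minimums (assumed here); check your provider's documentation.
import hashlib
import os

MIN_ITERATIONS = 1000
MIN_SALT_BYTES = 16

def derive_key(password: bytes, salt: bytes, iterations: int,
               length: int = 32) -> bytes:
    # Fail loudly on weak settings instead of deriving a weak key.
    if iterations < MIN_ITERATIONS:
        raise ValueError(f"iterations must be >= {MIN_ITERATIONS}")
    if len(salt) < MIN_SALT_BYTES:
        raise ValueError(f"salt must be >= {MIN_SALT_BYTES} bytes")
    return hashlib.pbkdf2_hmac("sha256", password, salt, iterations,
                               dklen=length)

key = derive_key(b"correct horse", os.urandom(16), iterations=600_000)
print(len(key))  # 32
```

Enforcing bounds in the library rather than in application code is exactly the posture shift 4.0.0 represents: the safe path becomes the only path.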
On the data engineering front, today brings something refreshingly concrete: teaching materials and reference implementations that make “how does it actually work?” more accessible. The University of Tübingen has published a 15-week course by Torsten Grust titled “Design and Implementation of Database System Internals,” focused on DuckDB internals, with slides and materials on GitHub. The syllabus covers the gutsy stuff that practitioners often learn only by spelunking code: memory management, grouped aggregation, large-table sorting, ART indexing, query execution plans and pipelining, vectorized execution, and query rewriting/optimization. It assumes basic SQL and points newcomers to a companion “Tabular Database Systems” course.
What makes this noteworthy is less “DuckDB is popular” and more that embedded analytical databases are becoming a normal tool in the engineer’s kit, and pedagogy is catching up. A structured, 15-week path through internals can help teams build better extensions, debug performance problems with fewer myths, and reason about design tradeoffs in systems that are increasingly shipping inside products rather than behind a DBA curtain. When database knowledge becomes teachable in public materials like this, it also becomes easier to standardize a shared vocabulary across industry and academia.
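Two of the course topics, grouped aggregation and vectorized execution, combine neatly in one toy example. The sketch below processes tuples in fixed-size column batches rather than row by row, which is the core idea behind DuckDB-style vectorization; the batch size and the hash-aggregation structure are simplified for illustration.

```python
# Toy hash-based grouped aggregation over fixed-size "vectors" of tuples,
# the simplified shape of what a vectorized engine like DuckDB does.
from collections import defaultdict

VECTOR_SIZE = 2048  # DuckDB-style engines process tuples in fixed-size chunks

def grouped_sum(keys: list[str], values: list[int]) -> dict[str, int]:
    totals: dict[str, int] = defaultdict(int)
    for start in range(0, len(keys), VECTOR_SIZE):
        # Each iteration handles one vector of up to VECTOR_SIZE tuples,
        # keeping the hot loop tight and cache-friendly.
        key_vec = keys[start : start + VECTOR_SIZE]
        val_vec = values[start : start + VECTOR_SIZE]
        for k, v in zip(key_vec, val_vec):
            totals[k] += v
    return dict(totals)

print(grouped_sum(["a", "b", "a"], [1, 2, 3]))  # {'a': 4, 'b': 2}
```

The real engine replaces the inner Python loop with tight compiled code over columnar arrays, but the control flow—batch, probe the hash table, accumulate—is the same one the course walks through.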
That educational clarity is echoed in OpenDuck, an open-source implementation of a MotherDuck-style distributed DuckDB architecture. OpenDuck offers differential (append-only layered) storage, hybrid execution across local and remote workers, and a DuckDB-native “attach” experience. Its protocol is intentionally minimal: two gRPC RPCs with Arrow IPC, so any backend that speaks gRPC and returns Arrow can act as an execution engine. The project includes a Rust gateway handling auth, plan splitting, and routing to DuckDB workers, and persists data as immutable sealed layers with Postgres metadata for snapshots and consistent reads. The pitch is architectural legibility: remote tables integrate into DuckDB’s catalog and optimizer so JOINs and CTEs work transparently.
Zooming out, OpenDuck and the Tübingen course together point to a broader maturation: analytics architectures are becoming modular enough to explain cleanly and open enough to replicate. If you can teach the engine’s internals and also experiment with a distributed “attach and query” model using a minimal protocol, you’re building an ecosystem where performance work and product design can iterate faster—and where “local-first but cloud-capable” stops being a slogan and becomes something you can run and study.
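OpenDuck’s two-RPC contract is small enough to model directly. The sketch below expresses it as a plain Python interface; the method names, signatures, and payload stand-ins are my assumptions for illustration, not OpenDuck’s actual .proto definitions, and the bytes returned here merely stand in for real Arrow IPC messages.

```python
# Sketch of a minimal two-RPC execution protocol, modeled on the article's
# description of OpenDuck. Names and payloads are illustrative assumptions.
from typing import Protocol

class ExecutionEngine(Protocol):
    def get_schema(self, table: str) -> bytes:
        """Return the table's schema as an Arrow IPC message."""
        ...

    def execute(self, plan: str) -> bytes:
        """Run a (sub)plan and return results as an Arrow IPC stream."""
        ...

class InMemoryEngine:
    # Any backend implementing the two methods can serve as a worker;
    # here the "Arrow" payloads are placeholder bytes.
    def get_schema(self, table: str) -> bytes:
        return b"schema:" + table.encode()

    def execute(self, plan: str) -> bytes:
        return b"arrow-ipc-stream-for:" + plan.encode()

engine: ExecutionEngine = InMemoryEngine()
print(engine.execute("SELECT 1"))
```

The point of such a narrow surface is exactly what the project pitches: because the contract is just “speak gRPC, return Arrow,” swapping execution backends becomes an implementation detail rather than an architecture change.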
Hardware and robotics deliver today’s most delightful bit of physical computing: Princeton engineers have developed a soft-rigid hybrid robot that moves without motors, gears, or external pneumatic systems. The trick is a co-fabricated structure: 3D-printed, patterned liquid crystal elastomer (LCE) polymers integrated with flexible printed circuit boards. By customizing printing to set molecular alignment zones that act as programmable hinges, and using embedded flexible electronics to heat targeted regions (with temperature sensors for closed-loop control), the robot achieves controlled motion via localized heating-driven contraction. The demonstrator—a kind of origami crane—can flap its wings and repeatedly return to shape without wear.
This matters not because motors are going away, but because actuation is often the bottleneck in making robots simpler, smaller, or safer around humans. A motor-free, co-fabricated approach could simplify production and enable more precise actuation sequences in soft systems, with the Princeton team pointing to applications like medical devices, delicate manipulation, and hazardous-environment exploration. It’s also a reminder that “robotics innovation” doesn’t always mean better perception stacks; sometimes it’s a hinge that behaves like software.
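The closed-loop heating mentioned above can be sketched with the simplest possible controller. Below is a generic bang-bang (thermostat-style) loop with hysteresis, written for illustration; the Princeton team’s actual control scheme, setpoints, and band widths are not specified in the article, so every number here is an assumption.

```python
# Generic bang-bang temperature controller with hysteresis, sketching the
# kind of closed loop that drives heat-actuated LCE hinges. All parameters
# are illustrative assumptions, not values from the Princeton paper.
def heater_command(temp_c: float, target_c: float, heating: bool,
                   band_c: float = 2.0) -> bool:
    # Hysteresis prevents rapid on/off chatter around the setpoint.
    if temp_c < target_c - band_c:
        return True    # too cold: heat, driving the LCE toward contraction
    if temp_c > target_c + band_c:
        return False   # too hot: switch off and let the hinge cool
    return heating     # inside the band: keep the previous state

state = False
for reading in [20.0, 40.0, 58.5, 60.0, 63.0]:
    state = heater_command(reading, target_c=60.0, heating=state)
print(state)  # False: heater switched off after overshooting the band
```

Even this crude loop captures why embedded sensors matter: without feedback, heat-driven actuation drifts with ambient conditions, and “programmable hinge” becomes “unpredictable hinge.”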
In a different corner of physical computing—one where the “machine” is a GPU rather than a polymer hinge—VectorWare reports it has made Rust’s std::thread work on GPUs. The article lays out the mismatch: CPUs tend to start one thread and spawn others, while GPUs launch kernels that run thousands of instances in parallel, and Rust’s ownership model and function-based entry points can clash with GPU concurrency patterns. VectorWare contrasts CUDA C and Rust (nvptx) examples, emphasizing how GPU programming often forces raw pointers, unsafe kernels, and index-based parallelism. Their argument is that enabling std::thread on GPUs reduces cognitive friction and helps express complex, non-uniform GPU workloads with familiar abstractions.
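The mismatch VectorWare describes can be caricatured in a few lines. The sketch below contrasts the two styles in Python (illustrative only; real GPU kernels are compiled for the device, not interpreted): a single “kernel” body parameterized by a global index, versus CPU-style threads that each own a chunk of work and get joined.

```python
# GPU-style index parallelism vs. CPU-style spawn/join, sketched in Python.
from concurrent.futures import ThreadPoolExecutor
import threading

data = list(range(8))
out = [0] * len(data)

# GPU style: one kernel body; the runtime launches many instances of it,
# each distinguished only by its thread index.
def kernel(tid: int) -> None:
    out[tid] = data[tid] * 2

with ThreadPoolExecutor() as pool:
    list(pool.map(kernel, range(len(data))))  # wait for all instances

# CPU/std::thread style: spawn threads that each own a task, then join them.
chunks: list[int] = []
def worker(lo: int, hi: int) -> None:
    chunks.append(sum(out[lo:hi]))

threads = [threading.Thread(target=worker, args=(i, i + 4)) for i in (0, 4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(chunks))  # [12, 44]
```

VectorWare’s claim is that the second, familiar spawn/join vocabulary can be made to work on the device itself, instead of forcing everything into the index-based first style.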
Finally, the tooling layer keeps evolving in ways that quietly shape everything above it. Cranelift’s mid-end optimizer now centers on a novel acyclic e-graph (aegraph), aiming to bring equality-based optimizations into a production compiler without the pitfalls of full equality saturation. The author describes introducing the aegraph in 2022, iterating through a rewrite, engaging with the e-graph research community, and building many rewrite rules. A key framing shift in the write-up is to present the sea-of-nodes translation first and only later introduce a minimal eclass notion—an approach meant to keep translation in and out efficient while still enabling powerful rewrites. Compiler work like this tends to be invisible until it suddenly isn’t: when codegen improves, when optimization bugs disappear, or when new language features become viable because the mid-end can reason better.
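The “minimal eclass notion” at the heart of any e-graph is a union-find over equivalence classes of expressions. The sketch below shows that core structure for illustration; it is a generic textbook union-find, not Cranelift’s aegraph implementation, and the strength-reduction example is invented.

```python
# Minimal union-find over expression equivalence classes: the data structure
# underneath e-graphs. Generic sketch, not Cranelift's actual aegraph code.
class UnionFind:
    def __init__(self) -> None:
        self.parent: list[int] = []

    def make_eclass(self) -> int:
        self.parent.append(len(self.parent))
        return len(self.parent) - 1

    def find(self, x: int) -> int:
        # Path compression keeps lookups effectively constant-time.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a: int, b: int) -> int:
        ra, rb = self.find(a), self.find(b)
        self.parent[rb] = ra  # a rewrite proved a ≡ b: merge their classes
        return ra

uf = UnionFind()
mul = uf.make_eclass()   # e.g. the node for  x * 2
shl = uf.make_eclass()   # e.g. the node for  x << 1
uf.union(mul, shl)       # strength-reduction rule: the two are equivalent
print(uf.find(shl) == uf.find(mul))  # True
```

Full equality saturation keeps merging until a fixpoint and extracts the best representative afterward; the aegraph’s acyclic design instead keeps this machinery cheap enough to run inside a production compiler’s single pass.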
Zig, meanwhile, has shipped version 0.16.0 after eight months of development, with 1,183 commits from 244 contributors, and the headline change is I/O as an interface. The release also tightens language semantics (stricter rules around packed unions, pointer and vector semantics, comptime behavior), expands and reorganizes the standard library (including a thread-safe arena allocator and new crypto primitives like AES-SIV and Ascon variants), and improves build system capabilities (local package overrides, fetching, unit test timeouts), alongside compiler and backend work. Zig releases often read like a project methodically paying down ambiguity, and this one continues that pattern—pushing toward safety and portability while documenting migration away from legacy APIs.
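“I/O as an interface” is the same design move Zig already makes with allocators: code receives its I/O capability as an explicit parameter instead of reaching for global state. A rough Python analogue, for illustration only (the names here are invented, and Zig’s actual writer API differs):

```python
# Rough analogue of "I/O as an interface": functions take a writer rather
# than using global stdout, so callers control (and can test) all output.
import io
from typing import Protocol

class Writer(Protocol):
    def write(self, s: str) -> int: ...

def greet(w: Writer, name: str) -> None:
    # No hidden global I/O: the caller decides where the bytes go.
    w.write(f"hello, {name}\n")

buf = io.StringIO()       # a test double; production code might pass stdout
greet(buf, "zig")
print(buf.getvalue().strip())  # hello, zig
```

The payoff is the same in both languages: swapping real I/O for buffers, sockets, or async runtimes becomes a caller-side decision, which is why Zig treats it as a semantics-level change rather than a library tweak.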
Taken together, today’s mosaic has a clear through-line: the industry is renegotiating what it means to trust software—whether it’s 1969 flight code preserved in a modern repo, browser navigation protected by explicit anti-abuse policy, privacy signals tested against ad platforms, backups that may not back up what you think, or TLS libraries drawing a harder line between modern security and legacy comfort. The near future will likely reward teams that treat these as connected problems: provenance, transparency, and verifiable behavior across the stack. The moon code is a nostalgic headline, sure—but it’s also a reminder that when the stakes are high, you document, you test, and you assume the environment is hostile. The web and the cloud are finally starting to sound like that kind of mission.
About the Author
yrzhe
AI Product Thinker & Builder. Curating and analyzing tech news at TechScan AI. Follow @yrzhe_top on X for daily tech insights and commentary.