Today’s TechScan: Apple’s CEO Hand‑Off, EU Fixes Phone Batteries, and Weird New Dev Tools
Today’s briefing highlights major corporate governance and hardware policy moves, fresh developer tooling wins for Macs shut out of NVIDIA’s CUDA stack, emerging platform trust issues, and surprising academic and open‑source findings that affect how we evaluate signals online. Expect coverage across product leadership, regulation, developer infrastructure, cybersecurity/privacy, and quirky research.
Apple doesn’t do surprise endings, but it does do symbolism, and today’s most consequential signal in tech is the one it sent about who, exactly, is meant to steer a trillion‑dollar hardware platform through its next decade. Apple says Tim Cook will become executive chairman and John Ternus—its longtime senior vice president of Hardware Engineering—will become CEO on September 1, 2026. The board approved the succession unanimously after what Apple describes as a long-term process, with Cook staying on as CEO through the summer to manage the handoff. Read one way, it’s simple continuity: Cook’s era has been about scaling operations and executing at planet scale, and Apple is carefully ensuring the next leader is steeped in the company’s core competence. Read another way, it’s a deliberate hardware‑centric realignment—a bet that the best way to protect Apple’s margins, ecosystem gravity, and product narrative is to put an engineer who has lived inside the roadmap in charge of day-to-day strategy.
The other eyebrow-raiser is what Cook’s new job explicitly includes. Apple says as executive chairman he’ll focus on select company matters, including global policy engagement. That’s not a throwaway line. It suggests Apple wants Cook, the company’s most recognizable corporate diplomat, spending more of his time in the rooms where rules get written—privacy, competition, content moderation, device regulation—while Ternus drives the product machine. Even the public reaction on Hacker News captured the tension Apple has to manage: admiration for Apple’s world-class hardware and skepticism about parts of its software and services, plus debate about how assertive Apple should be on political and human-rights issues. The structure Apple is choosing—policy-facing chair, engineering CEO—looks like an attempt to cover both fronts without changing the company’s DNA.
That DNA is about to meet one of the bluntest pieces of pro-consumer hardware regulation in years. The EU has set a clear deadline: from February 18, 2027, all smartphones and tablets sold in the bloc must have user‑replaceable batteries. The rule requires that batteries be removable without specialized tools—or that the necessary tools be provided free with the device—and it also mandates that replacement batteries remain available for at least five years after the last unit is sold. This sits alongside the EU’s existing USB‑C charger rules and is framed as part of broader sustainability and e‑waste measures approved in 2023. Officials argue it will reduce electronic waste, lower consumer replacement costs, and could save Europeans up to €20 billion by 2030.
This is not a “nice to have” guideline; it’s a redesign constraint that lands directly on industrial design, materials, adhesives, enclosure mechanics, supply chain planning, and service operations. For a decade, much of the premium phone market has optimized around sealed designs that trade repairability for thinness, rigidity, water resistance, and manufacturing efficiency. The EU rule forces vendors to re-litigate those tradeoffs with enforcement behind it, and to treat battery serviceability as a first-class product requirement rather than an aftermarket concession. For consumers, it implies longer usable lifecycles and fewer trips into the costly repair funnel. For manufacturers, it’s a reminder that regulatory product requirements now arrive with the same inevitability as new radio bands or port standards—and they can reshape the entire industry’s default assumptions.
If regulation is pushing hardware toward longevity, developers are pushing compute toward portability. One of the most interesting “small” stories today is a practical victory in the long war against CUDA lock-in: a developer has ported Microsoft’s TRELLIS.2, a 4B‑parameter image-to-3D model, to run on Apple Silicon using PyTorch’s MPS backend, eliminating the original CUDA-only dependencies. TRELLIS.2 previously relied on components that don’t work on macOS—flash_attn, nvdiffrast, and custom sparse convolution kernels—so the port replaces them with pure PyTorch implementations. That includes a gather-scatter sparse 3D convolution, SDPA attention for sparse transformers, and a Python mesh extraction path that stands in for CUDA hashmap operations. The author describes the changes as a few hundred lines across nine files, which is both encouraging and slightly haunting: encouraging because it suggests more models can be “freed” with focused work, haunting because it underlines how many projects quietly assume a specific vendor stack.
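The portability trick at the heart of such ports is small enough to show in miniature. Below is a hedged sketch with toy tensor shapes (not TRELLIS.2’s actual modules): PyTorch’s built-in `scaled_dot_product_attention` dispatches to whatever backend is available, which is what lets it stand in for the CUDA-only flash_attn:

```python
import torch
import torch.nn.functional as F

# Pick the best available device: MPS on Apple Silicon, else CPU.
device = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cpu")

# Toy tensors standing in for one attention step of a sparse transformer:
# shape is (batch, heads, tokens, head_dim).
q = torch.randn(1, 8, 128, 64, device=device)
k = torch.randn(1, 8, 128, 64, device=device)
v = torch.randn(1, 8, 128, 64, device=device)

# SDPA is pure PyTorch: it selects a backend-appropriate kernel at runtime,
# so the identical call runs on CUDA, MPS, or CPU. flash_attn, by contrast,
# ships hand-written CUDA kernels and simply doesn't load on macOS.
out = F.scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([1, 8, 128, 64])
```

Because the call is framework-native, nothing here needs to change per vendor; the harder parts of the port are the sparse-convolution and hashmap kernels, which have no built-in equivalent and were rewritten in plain PyTorch and Python.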
The performance claim is what makes it feel real rather than academic: roughly 3.5 minutes to generate a ~400,000‑vertex mesh from a single photo on an M4 Pro with 24GB RAM. That’s the kind of number that changes behavior. It turns “image to 3D” from a cloud demo into a local, offline creative workflow—useful not only for hobbyists but for professionals who don’t want to upload proprietary designs or customer imagery to a server just to test an idea. And it fits a broader pattern: the more inference projects get ported to run on Apple Silicon and consumer GPUs, the more we’ll see local toolchains become default. Not because people hate the cloud, but because they hate waiting, paying, and wondering who else can see their data.
That tension—between convenience and trust—shows up even more starkly in two stories that should make both policymakers and app developers sweat. Politico reports that the EU’s new open-source age-verification mobile app, unveiled by Ursula von der Leyen as part of efforts to enforce bans on minors’ access to social platforms, was found to have serious security and privacy flaws within hours. Researchers on GitHub said the app stores sensitive data unprotected on users’ phones, and that biometric/PIN protections could be bypassed; consultant Paul Moore said he hacked it in under two minutes, and other white-hat researchers corroborated key issues. The Commission maintains the app is “technically ready” and can be improved, but the damage here isn’t only technical: it’s political. Age verification is already a minefield involving privacy advocates, child-rights groups, tech firms, and regulators. When a high-visibility government tool gets publicly cracked at launch, it becomes fuel for every side’s argument at once.
The parallel private-sector example is more subtle, and arguably more alarming because it lives in the ambient fog of “helpful” AI software. A security researcher alleges that Anthropic’s Claude Desktop app for macOS silently added a Native Messaging manifest into their Brave browser folder, effectively pre-authorizing a local helper (a chrome-native-host) that can be launched by certain browser extensions—without the user installing any Claude browser extension. The behavior was described as undocumented and distinct from Anthropic’s documented Claude Code native-host, and framed as a dark pattern with real privacy and security implications: native bridges can grant helpers the same user-level privileges as the running user, and they create cross-vendor surface area that users didn’t explicitly request. Whether you see it as a misguided convenience feature or something worse, it highlights a brittle moment for consumer AI: vendors are racing to feel integrated everywhere, while users are still learning what integrations actually do.
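For context on the mechanism involved: a native messaging manifest is just a small JSON file the browser reads to decide which extensions may launch which local programs. The sketch below uses entirely hypothetical names and paths (it is not Anthropic’s actual manifest) to illustrate the standard Chrome/Chromium host-manifest format:

```python
import json

# A minimal Chrome/Chromium-style Native Messaging host manifest.
# Every value here is an illustrative placeholder.
manifest = {
    "name": "com.example.native_host",  # hypothetical host identifier
    "description": "Example native messaging host",
    "path": "/Applications/Example.app/Contents/MacOS/helper",  # hypothetical binary
    "type": "stdio",  # messages flow over the helper's stdin/stdout
    # Only extensions whose IDs appear here may launch the host --
    # this list is the pre-authorization the researcher is describing.
    "allowed_origins": ["chrome-extension://abcdefghijklmnopabcdefghijklmnop/"],
}

print(json.dumps(manifest, indent=2))
```

Dropping a file like this into a browser profile’s NativeMessagingHosts directory is all it takes to authorize the listed extensions to start the helper with the user’s own privileges, which is why a silently installed one deserves scrutiny.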
Underneath all of this is an uncomfortable truth about how software gets discovered and trusted in the first place: our metrics are easy to game, and we’ve been pretending otherwise. A peer-reviewed ICSE 2026 study led by CMU researchers, reported as the StarScout analysis, found about 6 million suspected fake GitHub stars across 18,617 repositories, using around 301,000 accounts, with a sharp rise in 2024. AI/LLM repositories were the largest non-malicious recipients, and 78 flagged repositories hit GitHub Trending—meaning manipulation isn’t only possible; it’s already influencing what developers see. The reporting also describes a commercial ecosystem offering stars for $0.03–$0.90 each via marketplaces, Fiverr gigs, and Telegram sellers, with tiering by “account quality.” Add in the claim that some VCs use star counts as sourcing signals, and you get a neat little feedback loop where visibility can be bought, traction can be implied, and fundraising narratives can be nudged by an outsourced click farm.
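A back-of-envelope calculation, using only the figures from the reporting, shows why the economics work for sellers:

```python
# Reported figures: ~6 million suspected fake stars, sold at $0.03-$0.90 each.
fake_stars = 6_000_000
price_low, price_high = 0.03, 0.90

# Implied total spend across all flagged repositories.
print(f"${fake_stars * price_low:,.0f} - ${fake_stars * price_high:,.0f}")
# -> $180,000 - $5,400,000
```

Even at the top “account quality” tier, a thousand stars runs about $900, cheap enough that buying the appearance of traction is within reach of almost any project.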
This isn’t just about vanity. Stars feed discovery, discovery feeds adoption, adoption feeds hiring and fundraising, and soon enough you’ve built an economy where the metric becomes the product. The study’s scale—20TB of metadata analyzed—also implies that detection is possible, but not free, and not yet a default part of the platform’s social contract. If software ecosystems want to remain credible, they’ll need norms and tooling that treat metric integrity the way we treat dependency security: as a shared responsibility, with real consequences when it’s ignored.
On the more constructive side of the open ecosystem, today also brought a glimpse of how enterprise software is trying to reconcile two forces that normally fight: the customer’s need for privacy and compliance, and the vendor’s need for operational control. A Rust project called Alien pitches a model where customers keep data local—software running in their own AWS, GCP, or Azure accounts—while the vendor retains remote capability to deploy, update, monitor, and manage those instances. The premise is painfully familiar to anyone who has sold self-hosted software: customers misconfigure environments, vendors lack visibility, and support becomes a slow-motion crisis. Alien’s promise is a kind of practical compromise, aiming to reduce support friction without demanding that customers surrender control of their data.
And if you want a sign that Rust is continuing its march from “beloved by enthusiasts” to “production plumbing,” Anthropic engineer Iain McGinniss has open-sourced two crates: buffa, a pure-Rust Protocol Buffers implementation with zero-copy message views and editions support, and connect-rust, a Tower-based ConnectRPC server/client that can handle Connect, gRPC, and gRPC-Web using the same handlers. Both pass upstream conformance suites, and McGinniss says they’re already used in production at Anthropic. It’s a reminder that a lot of AI progress depends on decidedly unglamorous infrastructure—serialization, RPC, and the ability to move data fast without unnecessary allocations.
Finally, two lighter items that still say something about the world we’re building. One report claims up to 8 million bees may be living in an underground network beneath a cemetery, but with no supporting details to back it up, it’s a story that mainly underscores how viral science headlines can outrun verification—especially when the number is irresistible and the setting is gothic enough to write itself. In contrast, the Mechanical Keyboard Sounds Listening Museum is delightfully concrete: an interactive web project cataloging 36 keyboards and switch families with 500+ community-sourced samples, letting you “type” and hear what a vintage Model M or a modern custom might sound like. It’s nerdy, meticulous, and explicit about its limitations—how room acoustics and recording setups affect sound—making it a small example of trustworthy internet craft.
Put it all together and today’s through-line is control: who controls the roadmap (Apple’s leadership hand-off), who controls the lifecycle (EU battery rules), who controls where compute happens (ports that escape CUDA gravity), who controls user systems (native bridges that appear without consent), who controls reputation (fake star markets), and who controls operations when software runs in someone else’s cloud account (remote-managed self-hosting). The next year will reward the organizations that can offer users something that feels increasingly rare: capability without coercion.
About the Author
yrzhe
AI Product Thinker & Builder. Curating and analyzing tech news at TechScan AI. Follow @yrzhe_top on X for daily tech insights and commentary.