The Pentagon is exerting pressure on AI company Anthropic to relax its stringent safety policies in order to secure a $200 million contract. The pressure comes as Anthropic weighs compromising its core safety commitments, raising concerns about ethical implications and potential misuse of its AI technologies. The Department of Defense has threatened to blacklist Anthropic if it does not comply with demands for unrestricted military use of its AI model, Claude. This situation underscores the growing tension between government requirements and corporate ethics in the AI sector, as companies navigate the fine line between innovation and accountability.
The AI Doc: Or How I Became an Apocaloptimist, directed by Daniel Roher, hits theaters March 27 with rare on-camera interviews from OpenAI’s Sam Altman, Anthropic’s Dario Amodei, and DeepMind’s Demis Hassabis. The film frames AI anxiety through Roher’s impending fatherhood and mixes clear explainer segments, creative visuals, and alarmist voices like Tristan Harris. Critics say the documentary grants tech CEOs too much leeway, allowing familiar talking points and optimistic promises—about curing disease or addressing climate change—to go largely unchallenged. It raises important questions about AI’s societal risks and leaders’ responsibilities but stops short of rigorous scrutiny of AGI claims, governance failures, or corporate accountability.
A project titled “JCal – Jeffrey Epstein's Activities Recreated in Google Calendar” claims to recreate Jeffrey Epstein’s activities inside Google Calendar, but the provided article content contains almost no explanatory text. The only visible material is a calendar-style interface labeled “May – Jun 2016,” showing a week view with days (Mon 30 through Sun 5) and hourly time slots from 1 AM to 11 PM. No details are given about who created JCal, what data sources were used, what specific events are included, or how the calendar is intended to be used. With the limited information available, the main takeaway is that the item appears to be a timeline visualization concept using Google Calendar for a specific date range in 2016.
Anthropic wins injunction against Trump administration over Defense Department saga
A user reports that Anthropic's Claude, once reliable on a monthly plan, became unusable after switching to an annual Pro subscription: 34 short prompts exhausted 94% of a five-hour usage limit, with repeated web calls and redundant prompts forcing the user to re-enter information. The complaint alleges poor product design around context and network requests, parallels to criticism leveled at OpenAI, and suggests subscription changes may have introduced throttling or metering that harms UX. This matters because it highlights friction in pricing and usage policies for commercial AI services, raising concerns for developers, power users, and businesses relying on predictable, efficient model behavior.
New York City public hospitals have dropped Palantir after community and worker pushback, as the controversial AI data firm continues expanding its health-tech operations in the UK. The decision follows criticism over privacy, surveillance risks and ties to law enforcement; labor and patient advocates argued Palantir’s software undermined trust and posed ethical concerns. Meanwhile, Palantir is increasing contracts with UK health services to deploy its analytics and AI platforms for patient data management and operational decision-making. The moves matter because they highlight growing scrutiny of enterprise AI vendors in healthcare, the political and ethical pressures shaping procurement, and how trust disputes can affect adoption of data-driven clinical tools.
The Guardian argues that blaming Anthropic’s chatbot Claude for the February 28, 2026 US strike that killed roughly 175–180 children at Shajareh Tayyebeh primary school in Minab is a distraction. The strike used a targeting system called Maven, developed by Palantir after Google declined a Pentagon contract in 2018; Maven fuses satellite imagery, signals intelligence and sensors and relied on an outdated Defense Intelligence Agency record that still listed the building as a military facility. The article says human failures—stale data, engineering choices, and the entrenchment of targeting infrastructure—caused the atrocity, and warns that fixation on LLMs’ supposed autonomy misdirects accountability and policy around military AI systems. It calls for scrutiny of systems, processes and contractors, not chatbots alone.
Anthropic updated its public subprocessor list on March 26, 2026, adding three providers and explicitly naming Microsoft Azure as the cloud infrastructure provider for all Anthropic products worldwide. The change appears in Anthropic’s Trust Center updates and signals a clearer disclosure of third-party vendors supporting Anthropic’s AI services. This matters because naming Azure underscores Anthropic’s dependency on Microsoft cloud infrastructure, with implications for data residency, compliance, and customer risk assessments. The disclosure also prompted community discussion about usage limits and promotional changes tied to the company’s Claude service, reflecting customer sensitivity to cost, uptime and vendor relationships.
Stephanie Palazzolo / The Information : OpenAI has surpassed $100M in annualized revenue from ChatGPT ads, has expanded to 600+ advertisers, and plans to launch self-serve advertiser access in April — OpenAI has surpassed $100 million in annualized revenue from its ChatGPT ads business, six weeks after the pilot was announced, according to a spokesperson.
Judge's Remarks on Anthropic vs. Pentagon
New York City hospitals have ended contracts with Palantir as the controversial AI and data analytics firm expands work in the UK. The move follows public scrutiny over Palantir’s government ties and surveillance-related projects, reigniting debates about data use in healthcare. Commenters note disagreement over whether Palantir stores or misuses data and point out distinctions between government and commercial deployments; some argue the company’s on-prem or air-gapped setups limit risks. Others suggest Europe may fund local alternatives to reduce reliance on Palantir. The decision matters for hospital IT procurement, patient data governance, and broader conversations about vendor trust, surveillance, and sovereign tech capacity.
OpenAI has indefinitely shelved plans for an “adult mode” ChatGPT after internal backlash from advisors, staff, and investors. The Financial Times reports concerns ranged from potential mental-health harms and users forming unhealthy emotional attachments to technical difficulties in safely training models to produce explicit content without enabling illegal behaviors like bestiality or incest. Advisors warned the feature could turn the bot into a “sexy suicide coach,” and OpenAI’s roughly 10% age-prediction error rate raised fears minors could access adult material. Investors reportedly saw limited business upside and reputational risk. OpenAI says it will instead pursue long-term research on sexually explicit chats and emotional attachment before deciding on productization.
Six Democratic lawmakers, including Senators Ron Wyden, Elizabeth Warren, Edward Markey and Alex Padilla, plus Reps. Pramila Jayapal and Sara Jacobs, asked Director of National Intelligence Tulsi Gabbard to clarify whether Americans using VPNs that route traffic through overseas servers can be treated as non‑US persons and thus lose Fourth Amendment protections against warrantless surveillance. The question stems from intelligence community guidelines that presume unknown-location communications are foreign, and from mass collection under Section 702 of FISA, which permits warrantless targeting of foreigners abroad. The letter urges public disclosure amid a contentious debate over renewing Section 702 and highlights a potential privacy paradox: following agency advice to use VPNs might inadvertently expose users to broader surveillance.
Michael J. de la Merced / New York Times : Defense tech startup Shield AI raised $2B at a $12.7B valuation, up from $5.3B after raising $240M in March 2025; half of its business is autonomous software — The company, which develops autonomous military technology, also plans to buy a maker of simulation software as interest in next-generation defense soars.
A ‘pound of flesh’ from data centers: one senator’s answer to AI job losses
The Guardian piece argues the February 28, 2026 US strike that destroyed Shajareh Tayyebeh primary school in Minab—killing 175–180, mostly girls—was misattributed to an LLM like Anthropic’s Claude. Instead, the author traces responsibility to human choices and entrenched military infrastructure: Palantir-built Maven, a targeting system that consolidated imagery, signals and sensor data, misclassified the school in an out-of-date DIA database. The article critiques media and policy fixation on chatbots and LLM risks, calling this attention a distortion that obscures governance failures, data hygiene, contractor influence and the sociotechnical systems that make automation lethal. It warns that charismatic tech narratives divert scrutiny from people, processes and institutional accountability.
Google’s top legal counsel in India, Bijoya Roy, has resigned after about 16 months in the role to start a new venture, according to two sources cited by Reuters. The departure is a notable leadership change for Google in a key growth market where the company has faced heightened regulatory and legal scrutiny. While the report does not detail Roy’s next business or the timing beyond her tenure length, it frames the exit as high-profile given ongoing issues for Google in India, including competition and compliance challenges. The move underscores the importance of senior legal leadership as global tech firms navigate increasingly active regulators and evolving rules in major jurisdictions such as India.
The US Army has selected private equity firms Carlyle and KKR to build two large data centers on US military bases, with each facility expected to cost about $2 billion, according to the Financial Times. The projects come as the Army’s “token” usage—an indicator of AI and compute consumption—has risen eightfold during the US war in Iran. Army Secretary Christine Wormuth said the conflict has underscored the need for the service to adapt to AI’s expanding role in modern warfare, driving demand for secure, on-base computing capacity. The planned investments highlight how military operations are increasing requirements for data processing, storage, and AI-enabled systems, and signal a growing role for private capital in defense infrastructure.
The Pentagon has designated Palantir’s Maven AI as a core military system and approved multi-year funding, boosting planned investment in the platform from $480 million in 2024 to $13 billion over the coming years. Palantir, a major defense contractor and data/AI software provider, will see Maven integrated more deeply into U.S. military operations, signaling long-term procurement and operational reliance. The move matters because it locks in a commercial AI platform as foundational defense infrastructure, raising stakes around procurement strategy, supply chain security, contractor influence, and future AI-enabled battlefield capabilities. It also highlights accelerating defense budgets for AI and the strategic role of private tech firms in national security.
The Pentagon has formalized Palantir’s Maven AI as a core military system and awarded it multi-year funding, expanding planned investment in the platform from $480 million in 2024 to $13 billion, part of a broader $13.4 billion U.S. defense AI spend this year. Palantir, long a major contractor for defense analytics and data integration, will scale Maven across military operations, embedding its software into decision pipelines and signaling deeper public-private entanglement in defense AI. The move matters because it locks a commercial AI vendor into foundational military infrastructure, raises procurement and competition questions, and amplifies debates about oversight, resilience, and ethical use of AI in warfare. It also highlights Pentagon priorities in operationalizing AI at scale.
Epic Microsystems, which designs power delivery architecture for better thermal and efficiency management of AI data centers, raised a $21M Series A (Chris Metinko/Axios)
Rebecca Szkutak / TechCrunch : Charlotte-based Lucid Bots, which manufactures autonomous drones for cleaning windows, raised a $20M Series B co-led by Cubit and Idea Fund — Andrew Ashur, the founder and CEO of window cleaning robot startup Lucid Bots, likes to joke that his company is the antithesis of the robotics industry right now.
Sharon Goldman / Fortune : Normal Computing, which uses AI to help chip companies design chips more efficiently, raised $50M led by Samsung Catalyst and says it has 5+ top chip clients — Normal Computing has raised $50 million in a round led by Samsung Catalyst as the startup pursues a two-pronged bet on the future of AI hardware …
Satellite data around the Gulf is becoming contested as delays, spoofing, withholding and state control undermine open visibility into conflict. WIRED reports that an AI-manipulated Google Earth image posted by Iran highlighted how easy disinformation is, but the broader problem is that satellite infrastructure—run by state-backed regional operators (Space42, Arabsat, Es’hailSat) and national programs like Iran’s Paya/Tolou-3—is being politicized. Commercial operators (Planet Labs, Maxar) are restricting imagery access or delaying releases citing safety concerns, pushing newsrooms and OSINT researchers toward slower or alternative providers such as Chinese MizarVision. That shift matters because who controls timely, high-resolution imagery affects journalism, military decisions, and regional power dynamics over communications and navigation infrastructure.
With Sift Stack, two ex-SpaceX engineers are bringing the software that helped launch rockets to the factory floor
A BBC reporter tested whether close relatives and the public can tell humans from AI deepfakes after an experiment where his aunt struggled to be certain. The piece ties this to Israeli PM Benjamin Netanyahu’s recent proof-of-life videos, which spiraled into conspiracy when a lighting trick made his hand appear to have a sixth finger. Experts consulted — including a founder of an AI-media watchdog — say Netanyahu’s clips are genuine and the apparent anomaly is an ordinary video artifact, but they warn that current tools for proving authenticity are weak. The article underscores how convincing AI-generated audio/video has become, the risks of misinformation and scams, and how hard it is for individuals or leaders to demonstrably prove they’re real.
Lucid Bots raises $20M to keep up with demand for its window-washing drones
A federal judge signaled skepticism at a hearing over the Pentagon’s designation of Anthropic’s Claude models as a supply chain risk and President Trump’s directive barring federal use. Judge Rita Lin pressed the Department of Defense on whether the blacklist was punitive—asking if Anthropic was being penalized for criticizing government contracting—and questioned the legal basis for the DOD’s action. Anthropic asked for a preliminary injunction to pause the designation and ban, saying the measures could cost the startup billions and harm its reputation while litigation proceeds. The DOD argued the designation responds to potential future sabotage risks; Lin said she will rule in the coming days.
A former Thiel fellow’s startup just launched a drone it says can replace police helicopters
Drew FitzGerald / Wall Street Journal : Sources: Anduril, Palantir, and Scale AI are part of the group developing software to run President Trump's planned $185B Golden Dome antimissile shield — The firms are part of a consortium working on the $185 billion project's operating system — Anduril Industries and Palantir Technologies …
Maria Curi / Axios : At a hearing, a US federal judge says the Pentagon's treatment of Anthropic is “troubling” and that “it looks like an attempt to cripple Anthropic” — A federal judge on Tuesday called the Pentagon's treatment of Anthropic “troubling” as the AI company urged the court …
The Electronic Frontier Foundation announced a leadership change as Cindy Cohn, its longtime executive director and author of Privacy's Defender, prepares to hand off the role amid rising fights over AI and Immigration and Customs Enforcement (ICE) surveillance. Under Cohn, EFF pivoted from early Internet-era government surveillance battles to tackle Big Tech abuses; recently the organization has renewed focus on state surveillance after aggressive ICE operations and Department of Homeland Security efforts to unmask critics. EFF has mounted and supported lawsuits defending the right to anonymously monitor and share information about ICE activity and opposing technologies like Flock cameras that can facilitate arrests. The transition matters for digital civil liberties advocacy as AI, surveillance tech and law enforcement clashes intensify.
A former Trump administration cybersecurity official reportedly pasted sensitive U.S. government documents, including details about critical infrastructure and classified protocols, into the public ChatGPT service. The disclosure — surfaced via social posts and screenshots — raises alarms about operational security, potential exposure of secrets, and the risks of using consumer-grade AI tools for official work. Key players include the ex-cyber chief, OpenAI (owner of ChatGPT), and U.S. national security stakeholders who must assess data-leak and compliance implications. The incident matters because it highlights gaps in policy, training, and technical controls around cloud/AI usage in government, and could prompt stricter rules or tooling for secure AI deployments.
Palantir turns poisonous on the campaign trail
The White House released a National Policy Framework for Artificial Intelligence that critics say aligns closely with OpenAI’s positions and could limit state-level AI regulation. The framework outlines federal legislative recommendations intended to create a single national approach to AI oversight, affecting liability, standards, and enforcement mechanisms. Key players include the White House, OpenAI (whose policy stances appear reflected), and state regulators who may see their ability to impose stricter rules curtailed. This matters because a federal-first framework could standardize rules for AI developers and platforms, shaping compliance costs, competition, and safety across the US tech sector. Observers warn it may favor large incumbents and influence future policymaking.
Joe Miller / Financial Times : How Palantir became a flashpoint on the US campaign trail due to its ICE work, ahead of the midterms; six lawmakers publicly refuse any further Palantir funds — Donald Trump's unpopular immigration crackdown has made links to the Peter Thiel-backed company a liability for candidates
Shannon K. Kingston / ABC News : The US State Department launches the Bureau of Emerging Threats to tackle current and future threats, including cyberattacks and AI weaponization by adversaries — Officials detailed the Bureau of Emerging Threats exclusively to ABC News. — Cyber attacks on the rise — One group of cyber experts …
A federal judge found major portions of the Pentagon’s October media-access policy unconstitutional, prompting the Defense Department to close the longtime in-building workspace for credentialed journalists and move press operations to an annex requiring escorts. Pentagon spokesman Sean Parnell said the department will revise credentialing agreements to clarify prohibited activities and will appeal the ruling while asserting the changes comply with the court order and protect security. The move follows a pattern under Defense Secretary Pete Hegseth of restricting media access, including revoking roaming privileges and removing some outlets’ on-site workstations; Times reporters and others previously surrendered passes rather than sign the policy. The shift affects press freedom and transparency around military coverage.
The Pentagon has implemented new limits on journalists’ access after losing a court case, prompting major outlets including the New York Times to return press passes and report from outside the complex. The policy change reshapes who can cover the Department of Defense in person, with reports noting replacement of traditional reporters by commentators and influencers more favorable to the administration. This matters because restricted physical access and selective accreditation can shape what military activities and controversies are visible to the public, affecting oversight, national-security reporting, and media independence. The dispute highlights tension between government control over information and press freedom following judicial pushback.
The Pentagon announced immediate changes to press access after a federal judge ruled parts of its October media policy unconstitutional in a suit by The New York Times. The Defense Department will close the longtime in-building workspace for credentialed journalists, relocate a press area to an annex, require escorts for all journalists seeking physical access, and revise the language in credentialing rules to more explicitly define prohibited activities. Pentagon spokesman Sean Parnell said the changes aim to comply with the court ruling while preserving security, and the department plans to appeal. The dispute matters for press freedom, government transparency, and rules governing media access to sensitive tech and defense facilities.
Three men have been charged by U.S. authorities for allegedly conspiring to smuggle advanced U.S. artificial intelligence technology to China. Federal prosecutors say the defendants sought to transfer AI hardware and related technical know-how that are subject to export controls aimed at limiting sensitive AI and semiconductor capabilities reaching foreign adversaries. The indictment names the individuals and outlines alleged schemes involving procurement, false documentation, and covert shipping routes. The case underscores increased U.S. enforcement of export controls on AI, chips, and dual-use technologies to protect national security and preserve technological leadership. It signals greater scrutiny of supply chains, procurement intermediaries, and cross-border transfers in the AI and semiconductor sectors.
The UK Financial Conduct Authority has given US analytics firm Palantir a three-month trial contract (over £30,000/week) to analyze its internal data lake, granting access to case files, bank and crypto reports, and communications linked to investigations. Palantir will act as a data processor with data hosted in the UK and barred from using it to train its own models, per FCA safeguards. Critics warn the deal echoes Palantir's ‘land-and-expand’ public-sector strategy seen across the NHS, policing and defence, raising vendor lock-in, surveillance and civil-liberties concerns even as regulators seek AI tools to spot financial crime faster. The arrangement spotlights tensions between efficiency gains from powerful analytics and dependency on large US vendors.
Reuters : Sources: OpenAI is offering PE firms a guaranteed minimum return of 17.5% and early access to new models to secure JVs, an improvement on Anthropic's terms — ChatGPT maker OpenAI is offering private-equity firms a sweeter deal than rival Anthropic as both artificial intelligence companies court buyout firms …
The Pentagon plans to adopt Palantir’s AI platform as a core military system, according to an internal memo obtained by the outlet. The move would position Palantir alongside the Department of Defense as a primary provider of AI-enabled data integration, analytics and decision-support for operations. Key players include Palantir Technologies and the U.S. Department of Defense; details in the memo reportedly outline integration timelines and roles but raise questions about procurement, competition and oversight. This matters because embedding a commercial AI system into military command-and-control could reshape battlefield data flows, accelerate AI-driven targeting and logistics, and prompt scrutiny over vendor lock-in, ethics, and security of sensitive defense data.
The Pentagon plans to adopt Palantir’s AI platform as a core military system, according to an internal memo reported exclusively. The memo reportedly designates Palantir’s software to be integrated across defense operations, standardizing data fusion, analytics and AI-driven decision support for U.S. military units. Palantir Technologies, a major defense contractor and data‑analytics firm, would see its platform become central to operational planning, intelligence processing and command systems. This matters because formalizing a single commercial AI stack for the military raises issues around vendor lock‑in, security vetting, supply chain resilience, and the governance of automated decision tools in warfare. The move could reshape procurement, interoperability and oversight of defense AI deployments.
A report titled “National survey of NIH-funded researchers shows precarious state of U.S. science” was published as an exclusive on March 20, 2026, but the provided excerpt contains no details from the survey itself. The only additional text references a separate STAT Plus item: “Automatic enrollment in Medicare Advantage plans under consideration, top Trump health official says.” With the available information, it is not possible to summarize the survey’s findings, methodology, sample size, or which NIH-funded researchers were surveyed, nor to identify specific indicators of “precarious” conditions. The headline suggests the piece focuses on challenges facing U.S. biomedical research and the NIH-funded workforce, but the excerpt does not include evidence, quotes, or numbers to confirm what changed or why it matters.
Project Maven, the Pentagon’s AI initiative to analyze surveillance video, has evolved from a controversial prototype into a deployed tool now used in US operations against Iran. The article traces Project Maven’s ascent through internal Pentagon debates and civil‑society pushback—highlighting key figures like founder Drew Cukor and skeptic-turned-decisionmaker Vice Admiral Frank “Trey” Whitworth—and raises enduring concerns about accountability, targeting rules, and the role of contractors such as Palantir. The story matters because it documents how military adoption, corporate partners, and bureaucracy shaped lethal AI use, underlining ethical, legal, and oversight challenges as autonomy and AI-assisted targeting become operational realities.
Palantir has been granted access to sensitive data from the UK Financial Conduct Authority (FCA), expanding its footprint within British government agencies. The deal gives Palantir’s analytics platform linkable datasets and tools to work on FCA records, raising concerns about oversight, data privacy, and the role of a U.S. defense-linked firm in domestic financial regulation. Palantir, already engaged with multiple UK bodies, argues its software helps detect financial crime and improve regulatory efficiency. Critics and privacy advocates warn of opaque contracts, potential misuse of personal and commercial data, and limited parliamentary scrutiny. The development matters because it touches on governance of critical datasets, vendor dependence, and public trust in digital regulation tools.