AI leaders are facing a multi-front squeeze as compute costs, regulation, and legal risk collide. In the U.S., a federal judge granted Anthropic a preliminary injunction blocking the Pentagon from labeling it a “supply-chain risk,” framing the move as likely retaliatory and raising due-process and First Amendment limits on government exclusion of AI vendors. Meanwhile, OpenAI is shutting down its compute-intensive Sora video product amid safety and monetization questions and the collapse of a reported $1 billion Disney partnership, signaling a pivot toward enterprise priorities. In Europe, lawmakers voted to delay key EU AI Act deadlines while banning “nudify” apps, underscoring shifting compliance timelines.
Anthropic considers IPO as soon as October
Akash Sriram / Reuters : SoftBank says it has secured a $40B bridge loan maturing in 2027 from JPMorgan Chase, Goldman Sachs, and other banks, to fund further investment in OpenAI — SoftBank Group (9984.T) said on Friday it has secured a $40 billion loan through a bridge facility to fund further investments …
A federal judge in California has indefinitely blocked the Pentagon’s effort to label Anthropic as a supply-chain risk and sever its government contracts, ruling those actions violated the company’s constitutional rights. The court found the Defense Department overstepped in singling out Anthropic for punitive measures tied to national security procurement, halting enforcement while litigation proceeds. Anthropic, a major AI developer, argued the designation would unfairly damage its business and reputation; the ruling preserves its ability to continue government work for now. The decision matters because it constrains the Pentagon’s authority to blacklist AI vendors and sets a legal precedent affecting how governments can regulate and distance themselves from commercial AI providers.
A federal judge in San Francisco granted Anthropic a preliminary injunction halting the Defense Department’s and Trump administration’s blacklist of the AI startup, finding Anthropic likely to succeed on First Amendment and procedural claims. Judge Rita Lin criticized the government’s effort as punitive for public dissent and questioned why Anthropic was branded a supply-chain risk. The Pentagon had labeled Anthropic a supply-chain risk under statutes used historically against foreign adversaries, forcing contractors (Amazon, Microsoft, Palantir) to certify they won’t use Claude models. Anthropic sued to block the designation and is pursuing a separate statutory review of the DOD determination; the case could continue for months with broader implications for government-AI vendor relations.
David Sacks is done as AI czar — here’s what he’s doing instead
A federal judge blocked the Department of Defense’s attempt to label AI developer Anthropic as a supply chain risk, a move that would have restricted the company’s ability to win government contracts. The decision halts the Pentagon’s use of that labeling mechanism against Anthropic, after the company challenged the designation in court. Anthropic — maker of the Claude models — argued the action was punitive and lacked proper legal basis; the ruling prevents the DoD from using the label to effectively exclude the company while litigation proceeds. The outcome matters for AI procurement, vendor due process, and how governments may regulate or exclude AI firms from public contracts.
A U.S. federal judge in California blocked the Pentagon’s designation of AI startup Anthropic as a supply chain risk, finding the move violated the company’s First Amendment and due process rights and amounted to retaliation for its public stance on safety guardrails. Judge Rita Lin wrote the label—previously used for firms tied to foreign adversaries—was imposed after Anthropic resisted Pentagon demands for unrestricted use of its Claude model in weapons and surveillance. The ruling pauses enforcement for a week to allow appeal. Anthropic hailed the decision, saying the label harmed contracts and reputation; the DoD under Secretary Pete Hegseth had ordered agencies to cut ties and require partners to prove they don’t use Anthropic products.
A federal judge granted a preliminary injunction in Anthropic v. U.S. Department of War, blocking the government from treating Anthropic as a potential adversary under the cited statute. The order largely favors Anthropic, rejecting the government’s characterization that an American AI company can be branded a saboteur for disagreeing with federal policy. The ruling pauses government actions against Anthropic while the case proceeds and signals constitutional and statutory limits on executive labeling of private tech firms. The decision matters for AI companies, national security policy, and future executive oversight of advanced AI, and it may be appealed up to the Supreme Court for a final resolution.
A federal judge in California has indefinitely blocked the Pentagon’s designation of Anthropic as a supply chain risk, finding the move violated the AI company’s First Amendment and due process rights. Judge Rita Lin wrote that labeling Anthropic—after it refused to remove contractual guardrails restricting military uses of its Claude model—amounted to retaliation for protected speech and exceeded statutory authority. The Pentagon had ordered agencies to stop using Anthropic and to sever ties with contractors using its tech; the label would have forced partners to prove they did not use Anthropic products. The ruling halts enforcement for a week to allow government appeal and underscores tensions between national-security demands and vendor-controlled AI safety policies.
A federal judge temporarily blocked the U.S. Department of Defense from labeling Anthropic a "supply-chain risk," restoring the company’s ability to do business without that designation while litigation proceeds. Judge Rita Lin granted a preliminary injunction, finding the Pentagon’s designation likely unlawful and "arbitrary and capricious," and said the government offered no basis to treat Anthropic as a saboteur. The DOD had moved to curtail use of Anthropic’s Claude AI across agencies, citing usage restrictions and security concerns, which hurt the startup’s contracts and reputation. The order pauses the designation for a week and preserves pre-directive status pending further proceedings, though agencies may still transition away from Claude on other lawful bases.
Ashley Capoot / CNBC : A US judge grants Anthropic's request for a preliminary injunction in its suit against the Trump administration over the DOD's decision to blacklist the company — A federal judge in San Francisco granted Anthropic's request for a preliminary injunction in its lawsuit against the Trump administration.
Bloomberg : In an interview, David Sacks says he has relinquished his role as AI and crypto czar after using up his time as a special government employee — White House adviser David Sacks said Congress could pass bipartisan artificial intelligence legislation within months, a move …
Anthropic altered Claude subscription limits to curb peak-time demand by tying hourly session allowances to token consumption during peak windows (05:00–11:00 PT). The change means users can expend a five-hour session allowance faster during peak hours, while weekly totals remain unchanged; about 7% of users—especially Pro tier customers—are expected to hit new session limits. Anthropic says API pricing remains token-based and subscription plans keep unpublished usage caps, and it urges token-heavy background jobs be shifted to off-peak times. The tweak aims to balance capacity without reducing overall weekly usage, but it raises planning and transparency concerns for developers and businesses relying on consistent, predictable access to Claude. Key players: Anthropic, Claude.
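Anthropic has not published the exact mechanics, but the described scheme — the same token spend draining a five-hour session allowance faster inside the 05:00–11:00 PT window — can be sketched roughly as follows. All cap values and multipliers here are illustrative assumptions, not Anthropic's actual figures:

```python
from datetime import datetime, timezone, timedelta

# Illustrative values only -- Anthropic does not publish its caps or weights.
SESSION_TOKEN_CAP = 1_000_000   # hypothetical per-session (5-hour) allowance
PEAK_MULTIPLIER = 2.0           # hypothetical extra weight during peak hours
PEAK_START, PEAK_END = 5, 11    # 05:00-11:00 Pacific Time

def weighted_cost(tokens: int, ts: datetime) -> float:
    """Return how much session allowance a request consumes.

    During the peak window the same token count drains the allowance
    faster; off-peak, tokens count at face value.
    """
    pacific = ts.astimezone(timezone(timedelta(hours=-8)))  # ignores DST for brevity
    if PEAK_START <= pacific.hour < PEAK_END:
        return tokens * PEAK_MULTIPLIER
    return float(tokens)

class SessionMeter:
    """Tracks a single rolling five-hour session allowance."""
    def __init__(self) -> None:
        self.used = 0.0

    def charge(self, tokens: int, ts: datetime) -> bool:
        """Charge a request; return False once the session cap would be exceeded."""
        cost = weighted_cost(tokens, ts)
        if self.used + cost > SESSION_TOKEN_CAP:
            return False
        self.used += cost
        return True

pt = timezone(timedelta(hours=-8))
peak = datetime(2026, 2, 10, 8, 0, tzinfo=pt)       # inside the peak window
off_peak = datetime(2026, 2, 10, 14, 0, tzinfo=pt)  # outside the peak window
print(weighted_cost(100_000, peak))      # 200000.0 -- peak tokens count double
print(weighted_cost(100_000, off_peak))  # 100000.0
```

Under this kind of metering, shifting a token-heavy background job from the peak window to the afternoon halves its drain on the session allowance, which matches Anthropic's advice to move such jobs off-peak.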
OpenAI has shut down Sora, its AI-driven video app, following reports that Disney walked away from a planned $1 billion partnership. Sora offered tools for generating or editing video with generative AI; its closure reflects scaling and safety challenges for high-capacity multimedia models. The reported collapse of the Disney deal signals growing commercial and reputational hurdles for large tech–media collaborations around synthetic content, including IP, content moderation, and regulatory scrutiny. For developers, creators, and platforms, the moves highlight fragility in monetizing advanced generative video and the need for clearer safety, rights management, and business models before wide deployment.
Robert Hart / The Verge : The European Parliament votes to delay EU AI Act deadlines, including pushing compliance for high-risk AI systems back to December 2027, and to ban nudify apps — The proposals would push back looming deadlines for watermarking AI-generated content and high-risk AI systems.
Viral AI-generated “fruit” videos—like Fruit Paternity Court and Fruit Love Island—have exploded on Instagram and TikTok, amassing millions of views by using text-to-video generators (examples named include Google Veo, Kling AI, and OpenAI’s Sora). Creators craft anthropomorphic fruit dramas with sensational, Pixar-like visuals via text prompts to boost engagement. Critics note a troubling pattern: many clips depict misogynistic and violent scenarios targeting female fruit characters and children (abuse, sexualized implications, fatal harm, even punishment for flatulence). The trend highlights how easy-to-use generative video tools can rapidly scale low-cost content while amplifying harmful narratives, raising ethical and moderation challenges for platforms and AI vendors.
OpenAI abruptly shut down its Sora video-creation app days after releasing usage guidance and following a $1 billion Disney partnership that collapsed when Disney withdrew. The move follows other rapid reversals at OpenAI — notably the quick deprecation of GPT-4o — and signals a strategic refocus toward business customers. The article compares OpenAI’s product-killing behavior to tech incumbents: Google and AWS typically offer warnings and alternatives, Broadcom bundles legacy software into new sales models, and Netscape’s overreach and chaotic product strategy are offered as a cautionary tale. The pattern raises concerns about product stability, customer trust, monetization strategy and whether OpenAI will emulate a platform-dominant survivor or a disruptive, inconsistent vendor.
Maria Curi / Axios : Sen. Bernie Sanders and Rep. Alexandria Ocasio-Cortez plan to introduce legislation to pause new data center construction until AI safeguards are in place — Sen. Bernie Sanders (I-Vt.) and Rep. Alexandria Ocasio-Cortez (D-N.Y.) on Wednesday will announce legislation to pause …
Palantir CEO Alex Karp told an audience that in the AI era, long-term success will fall to two groups: trade workers with hands-on skills and people who are neurodivergent, arguing that many white-collar jobs could be automated. Karp framed the shift as a structural change where AI replaces routine desk work, elevating skilled manual trades and those with atypical cognitive styles better suited to creative or nonstandard tasks. His comments sparked debate about workforce displacement and inclusion, touching on hiring, training, and how companies like Palantir position themselves amid automation. The remarks matter for tech industry hiring, reskilling programs, and policy discussions on AI’s labor impacts.
Disney has called off a planned $1 billion licensing and equity tie-up with OpenAI after the startup decided to shut down its Sora video-generation app. The three-year deal, announced in December, would have allowed use of over 200 Disney-owned characters in Sora-generated videos and included a $1 billion equity investment from Disney. Disney said it respects OpenAI’s pivot away from video generation, values the collaboration and will keep engaging AI platforms while protecting IP and creators’ rights. The collapse underscores the risks and strategic shifts in the fast-moving generative AI market, especially for content owners negotiating rights and investments tied to emerging products.
Disney has scrapped its short-lived Sora initiative and pulled out of a planned $1 billion investment in OpenAI after user-generated AI videos on Disney+ produced copyright violations and abusive content. Announced three months earlier with endorsements from then-CEO Bob Iger and OpenAI’s Sam Altman, the deal was meant to let fans create Sora-generated short videos featuring more than 200 Disney characters and to integrate OpenAI tools across Disney. The project faltered as Sora outputs became low-quality, harmful, and commercially useless—users largely exported content off-platform—undermining the promise that generative AI will cut Hollywood labor costs. The collapse raises questions for studios and AI vendors about content moderation, copyright, consumer demand, and the economic case for AI-driven filmmaking.
@PrajwalTomar_: This might be why OpenAI shut down Sora. Video gen was crazy expensive. Meanwhile Anthropic stayed f
When former EU Commissioner Thierry Breton tried to use the DSA to pressure Elon Musk into not platforming Donald Trump, I agreed with his stance. Looking at the current state of the world, Breton was absolutely correct to do what he did, and I agree with it even more now.
Ryan Gallagher / Bloomberg : How internet censorship tech maker Sandvine, a vendor to repressive regimes like Egypt, nearly collapsed before US restrictions forced new ownership and a pivot — Sandvine built a business offering network management tools that, in the wrong hands, could be used for censorship, until the US government intervened.
OpenAI has shut down Sora, its AI-driven video generation service, after limited sustained user engagement despite an initial burst of creative use. Reports and user posts on Hacker News and coverage by Hollywood Reporter note that early adopters made many short, playful videos but stopped returning once novelty faded. The closure underscores challenges for consumer-facing generative media tools: high initial interest doesn't guarantee retention, product-market fit, or long-term monetization. For the broader industry, Sora’s shutdown signals that large-model capabilities alone don’t ensure a viable product; companies must solve discovery, utility, and habitual value to maintain users and justify investment in compute-heavy media generation. OpenAI’s move may refocus efforts toward more sustainable applications and integrations.
A New Mexico jury found Meta Platforms violated state consumer protection law and ordered the company to pay $375 million in civil penalties after a six-week trial in Santa Fe that accused Facebook, Instagram and WhatsApp of enabling child sexual exploitation and misleading users about platform safety. New Mexico Attorney General Raúl Torrez called the verdict a historic victory and plans a May remedy phase seeking platform changes and more penalties; Meta said it will appeal and argued it protects users and is shielded by the First Amendment and Section 230. The decision is the first jury verdict on such claims against Meta and could influence many pending youth-safety and addiction suits.
OpenAI abruptly shut down Sora, its AI video-generation product, marking the company’s exit from consumer-facing AI video tools. The Guardian reported the sudden closure after OpenAI discontinued access to the service without extended notice. Sora had been part of broader industry efforts to automate video creation using generative AI; its shutdown signals either strategic refocusing at OpenAI or challenges around safety, content moderation, or commercial viability. The move matters because it removes a high-profile player from an area of rapid innovation and regulatory scrutiny, potentially reshaping competition among AI startups, content platforms, and creators relying on such tools.
U.S. Government's Ban on Anthropic Looks Like Punishment Attempt, Judge Says
OpenAI will retire Sora, its experimental AI-powered video generator, winding down access and support as it reallocates resources to other projects. The announcement affects users and developers who were testing Sora’s text-to-video capabilities; OpenAI cited strategic prioritization and product focus as reasons for the shutdown. This matters because Sora represented OpenAI’s push into generative video, an area with high compute costs, safety concerns, and emerging policy implications; its closure signals caution and a possible shift toward more mature or safer multimedia offerings. The move may impact creators, startups building on Sora, and competitors accelerating investment in text-to-video tools.
OpenAI abruptly shut down Sora, its high-profile AI video generation app, and ended a planned $1 billion Disney investment tied to a three-year content deal. Sources say Disney teams were informed within hours of OpenAI’s decision; the investment never closed and no funds changed hands. OpenAI cited Sora’s heavy computational costs and a strategic pivot toward coding tools, enterprise customers and AGI development as reasons for the move. The firm is consolidating capabilities into a single “super-app,” reorganizing leadership roles, and shifting safety reporting lines. The cancellation highlights tensions between costly consumer-facing AI products and profitability pressures ahead of a potential IPO. The Sora team will share timelines for user data preservation later.
Reuters : Sources: some Sora team members were surprised by OpenAI's sudden decision to end Sora support, just a day after OpenAI posted a blog on Sora safety standards — On Monday evening, Walt Disney Co (DIS.N) and OpenAI teams were working together on a project linked to Sora, OpenAI's AI video tool.
OpenAI’s Sora was the creepiest app on your phone — now it’s shutting down
OpenAI is shutting down Sora video creation app
OpenAI has announced it is sunsetting Sora, its AI-driven video app, prompting discussion and disappointment across the tech community. Users on Hacker News and social posts noted the shutdown follows a recently published Sora safety primer, raising questions about internal coordination and whether a failed Disney deal or waning user engagement drove the decision. Commenters praised the brief creative burst Sora enabled but said novelty faded without compelling retention hooks. Critics also blasted the PR language and questioned whether teams were blindsided. The shutdown matters because it signals limits in consumer AI video products, highlights challenges in sustaining usage and partnerships, and reflects broader scrutiny of product strategy at leading AI firms.
OpenAI has reportedly abandoned Sora, a high-profile generative AI project tied to a proposed billion-dollar licensing deal with Disney. The move signals OpenAI stepping back from an initiative that aimed to create character-driven AI experiences, reportedly including voice and interactive storytelling features. Sources cite technical, business and IP challenges; the collapse of the Disney arrangement underscores difficulties in commercializing advanced generative agents at scale. This matters because it highlights limits facing AI labs when translating experimental models into complex entertainment partnerships, and it could reshape how studios, platform owners and AI companies structure future collaborations and licensing agreements.
Disney has ended a deal with OpenAI after the AI company abruptly shut down its consumer video app Sora. The split follows disagreements over product direction and brings to light tensions between major entertainment content owners and AI platforms over access, safety, and commercialization of copyrighted material. Disney had been negotiating rights and content access with OpenAI; Sora’s closure removed the immediate need for those arrangements and raised questions about how AI firms will source and monetize licensed media. The development matters because it highlights licensing risks for generative AI, potential shifts in partnerships between studios and AI providers, and the broader implications for content owners protecting IP and revenue from AI-driven services.
OpenAI announced it will shut down Sora, its high-fidelity video-generation app, thanking creators and promising timelines for preserving user work. The decision follows internal reports that OpenAI is refocusing on core business and productivity products, after leadership warned against “side quests.” Sora had debuted with photorealistic text-to-video capabilities and rapid feature updates—voice synthesis, lip-sync, face replacement—and attracted a $1 billion Disney investment tied to bringing Disney characters to the platform, raising questions about that deal’s future. The move occurs amid intensified competition from ByteDance’s SeeDance 2.0 and Google’s Veo/Genie efforts, underscoring shifting priorities in the fast-evolving AI video market and product strategy at OpenAI.
A federal judge signaled concern that the Pentagon may have illegally punished AI company Anthropic by labeling it a supply-chain security risk after the developer sought to limit military uses of its Claude model. U.S. District Judge Rita Lin said the designation looks like retaliation and could violate Anthropic’s First Amendment rights; Anthropic has filed suits seeking a temporary order to pause the label. The Department of Defense defended its assessment as a national-security determination and argued courts should not second-guess it. The judge questioned whether the DoD considered less punitive options and noted the sweeping nature of the designation, typically used for hostile actors. A ruling on the injunction is expected soon.
Disney has reportedly pulled out of a partnership with OpenAI after OpenAI shut down Sora, a product or division tied to their collaboration. The move was reported by The Hollywood Reporter and surfaced on Hacker News. Disney’s exit signals friction between major entertainment companies and AI firms when product changes or shutdowns affect strategic deals. Key players are Disney and OpenAI; the development matters because it highlights commercial and operational risks for media companies partnering with fast-moving AI vendors, and raises questions about governance, contractual protections, and continuity of services in AI collaborations. The incident may influence future deal structures and due diligence by content owners.
OpenAI will shut down its Sora AI video app months after launch, prompting Disney to walk away from a previously announced $1 billion investment and licensing deal tied to Sora. OpenAI said it will provide timelines for app and API wind-down and data preservation; Disney framed the exit as a shift in OpenAI priorities and said it will pursue other AI partnerships that respect IP and creators. Sora’s brief run included controversial use of established characters and likenesses, forcing early changes to give studios more control. The closure leaves Google as the dominant large-scale contender in AI video generation and raises industry questions about IP, creator rights and commercial deployment of generative video tools.
OpenAI announced it will shut down Sora, its resource-intensive AI video-generation app, signaling a strategic pullback ahead of a likely IPO. Sora, launched in 2024 and upgraded with a second-generation model in October, drew rapid consumer uptake and controversy for producing lifelike videos and deepfakes of copyrighted characters. The closure affects a planned three-year Disney partnership and $1 billion investment tied to Sora; sources say that deal is not proceeding. OpenAI cited a shift in priorities and the need to reallocate limited compute toward more lucrative text and code generation products amid competitive pressure from Anthropic and chip constraints. The company said it will provide timelines and preservation options for users’ work.
OpenAI is shutting down Sora, its short-form video app that went viral after launching six months ago, despite surpassing one million downloads within five days. The company announced the closure on X and said it will share timelines and preservation options for creators’ work. The move is part of a broader cost-cutting shift as OpenAI reins in expensive projects ahead of a potential IPO, with a $730 billion valuation to justify. OpenAI has also paused features like Instant Checkout and is consolidating apps into a single desktop super app, while pivoting toward high-productivity and enterprise use cases. Disney, which had a $1 billion investment and licensing deal tied to Sora, acknowledged OpenAI’s decision.
OpenAI abruptly announced it will shut down Sora — its standalone AI video-generation app, social network, and the Sora 2 video model API — without giving a specific closure date but promising timelines for preserving user work. Sora debuted in early 2024, shipped a Turbo update and Sora 2 across iOS, Android and API, and briefly topped Apple’s download charts; it also figured in a recent $1 billion Disney investment to enable Sora-generated character videos. The closure follows OpenAI’s move to consolidate products into a single “super app,” a leadership and foundation restructuring, and a strategic pivot toward enterprise competition with Anthropic. The shutdown raises questions about partnerships, developer pipelines, and the future of AI-generated media at OpenAI.
OpenAI is shutting down its Sora AI video app months after launch, prompting Disney to pull out of a planned $1 billion investment and character-licensing deal tied to the product. Sora’s rapid rise and use of well-known IP and actor likenesses sparked industry pushback and forced early changes to content controls. Disney said it respects OpenAI’s strategic shift and will seek other AI partnerships to integrate tech into its platforms while protecting IP and creators’ rights. The closure leaves the standalone Sora app as a likely footnote and strengthens Google’s relative position in AI video generation, though Google faces its own legal and licensing challenges. The change signals shifting priorities and commercial risk in generative AI for video.
OpenAI plans to shut down Sora, its video-generation app, just 15 months after its late-2024 debut. The company announced the decision on social media following a Wall Street Journal report and said it will provide timelines for the app and API and guidance on preserving creators' work. OpenAI thanked users and the communities built around Sora, acknowledging the disappointment. The closure signals a rapid retreat from a high-profile generative video product and may affect creators, third-party integrations and developers relying on Sora's API. It also raises questions about product strategy and resource allocation within OpenAI as it prioritizes other models and services.
Todd Spangler / Variety : Disney ends its partnership with OpenAI, signed in December 2025, in which it pledged to invest $1B and agreed to license some characters to Sora — OpenAI said it will discontinue Sora, the generative-AI video creation platform it launched last year, without providing a reason for the decision.
OpenAI is reportedly discontinuing its Sora video platform app, removing a consumer-facing experiment in AI-generated video. The move affects users who relied on Sora for creating short clips and represents a pullback from a high-profile multimedia offering amid OpenAI’s broader focus on core products like ChatGPT and enterprise APIs. The decision matters because it signals strategic prioritization at one of the industry’s leading AI companies, potentially reshaping competition in AI video tools and impacting creators and startups that integrated Sora. Key players include OpenAI and Sora users; reasons likely include resource allocation, product-market fit, and regulatory or safety considerations tied to generative video. The shutdown highlights challenges in scaling consumer AI media services.
OpenAI announced it will shut down Sora, its resource-intensive AI video-generation app, and said it will provide timelines and guidance for preserving users’ work. Sora, launched in 2024 and upgraded in October with higher-fidelity video and audio, became a top iOS download but drew copyright and deepfake concerns and prompted a controversial three-year content deal and $1 billion investment pledge from Disney that now is not proceeding. The closure follows OpenAI’s move to sharpen focus ahead of a likely IPO and to reallocate scarce compute to more profitable text- and code-generation products amid competition from Anthropic. The decision underscores cost, regulatory and partnership pressures shaping generative-AI product strategy.