Meta is accelerating an “AI era” across its apps—adding AI-powered shopping tools, new creator ad features, and expanded scam-detection systems on Facebook, Instagram, WhatsApp, and Messenger—while facing intensifying scrutiny over platform harms. A New Mexico jury ordered Meta to pay $375 million for allegedly misleading users about child safety and allowing recommendation-driven exposure to predators, a landmark verdict Meta plans to appeal that could influence broader U.S. litigation and potential mandated product changes. Meanwhile, reports of tighter moderation around abortion information, resurfaced internal emails highlighting past competitive tactics, and a patent describing AI that could simulate deceased users’ accounts underscore mounting ethical, policy, and trust challenges as Meta scales AI-driven engagement.
A Facebook whistleblower told LBC that an ongoing landmark social media trial could expose Meta to damages as high as $1 trillion. The whistleblower, formerly inside Facebook/Meta, argued the case centers on alleged harms from the company’s products and could set a precedent for holding platforms financially accountable. Key players include Meta (Facebook), the whistleblower source, and the court overseeing the trial; regulators and civil-society groups monitoring platform accountability are also implicated. The potential scale of liability matters because a ruling against Meta could reshape legal risk for social platforms, influence product design and moderation practices, and prompt new regulatory and business responses across the tech industry.
A jury found Meta knowingly harmed children for profit, delivering a landmark verdict tied to the company’s design and business model. Reported by the LA Times and amplified across tech forums, the verdict reflects jurors’ conclusion that Meta prioritized engagement and ad revenue over child safety on its platforms. Key players include Meta (Facebook/Instagram) and plaintiffs representing harmed minors; the ruling could fuel regulatory scrutiny, class actions, and demands for platform changes or liability reforms. The verdict matters because it underscores legal risks for tech platforms that design for addictive engagement, may influence product safety obligations, and could accelerate policy and design shifts across social media and ad-driven services.
A New Mexico jury ordered Meta to pay $375 million, finding in a state trial that the company misled parents and failed to protect children from exploitation on Facebook and Instagram. The lawsuit, brought by New Mexico Attorney General Raúl Torrez after a Guardian investigation and an undercover probe called Operation MetaPhile, presented evidence that fake child profiles were quickly flooded with solicitations and images from predators; three suspects were arrested during the sting. The verdict is the first of three child-safety trials this year and centers on consumer-protection claims and Meta’s representations about app safety. The ruling could spur similar legal exposure and regulatory scrutiny for social platforms’ content moderation and product responsibility.
Meta turns to AI to make shopping easier on Instagram and Facebook
A New Mexico jury ordered Meta to pay $375 million after finding the company misled the public about how safe its platforms are for children. The verdict, delivered at the close of a seven-week trial brought by the New Mexico Attorney General, concluded Meta violated the state’s Unfair Practices Act by downplaying risks that its recommendation algorithms exposed minors to sexually explicit content and contact with predators. Prosecutors presented internal research and testimony from a former Meta engineer turned whistleblower, Arturo Béjar, who described experiments showing underage users were served sexualized content. Meta said it disagrees and will appeal, noting steps like Instagram’s Teen Accounts and new parental alerts. The ruling is a landmark state win amid many related US lawsuits over platform design and child safety.
A New Mexico jury ordered Meta to pay $375 million after finding the company misled users about its handling of child safety on Facebook and Instagram. The verdict resolves claims that Meta knowingly allowed content harmful to minors while offering deceptive assurances about safety measures. Key players include Meta (Facebook, Instagram), the state plaintiffs alleging consumer deception, and the New Mexico court overseeing the case. The ruling matters because it imposes a significant financial and reputational cost on one of the largest tech platforms, may prompt stricter scrutiny of platform safety claims, and could influence how social networks disclose content-moderation practices and liability exposure going forward.
A New Mexico jury ordered Meta to pay $375 million after finding the company misled the public about how safe its platforms are for children and allowed minors to be exposed to sexual content and predators. The verdict, announced after a seven-week trial, relied on internal Meta documents and testimony from whistleblower and ex-engineer Arturo Béjar, who said experiments showed underage users were served sexualized content. New Mexico’s suit argued Meta’s recommendation algorithms “steered” young users toward harmful material, violating the state’s Unfair Practices Act. Meta said it will appeal and highlighted product changes like Teen Accounts and parental alerts. The ruling could influence thousands of related US cases and platform regulation.
A New Mexico jury ordered Meta to pay $375 million in a state consumer-protection case accusing the company of misleading users about child safety protections on Facebook and Instagram. The state argued Meta misrepresented tools and practices designed to protect minors; Meta denies wrongdoing and plans to appeal the verdict. The case highlights legal and regulatory risk for major platforms over safety claims, signaling increased scrutiny of platform statements about content moderation and child-protection features. It could prompt stricter disclosures and influence how tech companies communicate safety measures to users and regulators.
A New Mexico jury found Meta Platforms violated state consumer protection law and ordered the company to pay $375 million after concluding Facebook, Instagram and WhatsApp enabled child sexual exploitation and misled users about safety. The six-week trial in Santa Fe, brought by Attorney General Raúl Torrez, alleged Meta allowed predators access to minors and failed to protect young users; the state had sought over $2 billion. Meta said it will appeal and argued it provides safeguards and is protected by the First Amendment and Section 230. Torrez plans a second phase seeking platform changes and further penalties; the verdict is the first jury ruling of its kind amid wider litigation over youth harms.
Sydney Bradley / Business Insider : Meta announces new shopping and ad features, including affiliate marketing tools for Instagram and Facebook creators and a “buy now” button for Facebook ads — Meta announced a slew of new shopping and advertising features to woo brands and creators.
A New Mexico jury found Meta willfully violated the state’s consumer protection laws and ordered the company to pay $375 million in civil damages after a trial over allegations that Facebook and Instagram exposed minors to child predators. The 2023 suit by New Mexico AG Raúl Torrez followed an undercover probe that showed a fake 13-year-old account was inundated with solicitations; prosecutors argued Meta misled the public and ignored internal warnings. Meta says it will appeal and denies the claims, citing its efforts to protect teens. A bench phase begins May 4 to consider public-nuisance findings and possible operational changes like age verification and limits on encrypted communications.
A New Mexico jury found Meta liable for violating state law by failing to warn users and protect children from sexual predators on Facebook and Instagram, awarding $375 million in damages. The 2023 suit by New Mexico Attorney General Raúl Torrez accused Meta of creating a “breeding ground” for child exploitation; the trial featured testimony from Meta executives and whistleblowers and evidence from an undercover probe that led to arrests. Meta says it disagrees and will appeal; additional judge-decided remedies and penalties may follow. The verdict is the first jury win holding Meta accountable for youth safety and comes amid a broader wave of litigation alleging social platforms harm children and teens. It could pressure changes to product design and moderation practices and intensify regulatory scrutiny across the industry.
A New Mexico jury found Meta liable for violating state law by failing to warn users and protect children from sexual predators on Facebook and Instagram, ordering $375 million in damages. The 2023 suit by Attorney General Raúl Torrez accused Meta of creating a “breeding ground” for predators; jurors heard testimony from Meta executives, former employee whistleblowers, and undercover investigation evidence that led to arrests. Meta says it disagrees and will appeal; additional judge-led proceedings could impose operational changes or further penalties. The verdict is the first jury trial loss for Meta on youth-safety claims and comes amid broader litigation against social platforms over addiction and harm to minors.
Meta rolls out new scam detection tools to Facebook, WhatsApp, and Messenger
Meta said it removed 10.9 million Facebook and Instagram accounts in 2025 linked to “criminal scam centers” and took down more than 159 million scam ads, as it rolled out new protections to surface suspicious activity earlier in scam interactions. Announced Mar. 11, 2026, the measures include wider Messenger scam detection, WhatsApp warnings when linking a new device, and tests of Facebook alerts for potentially suspicious friend requests. Meta also detailed a Thailand-led operation that produced 21 arrests and the disabling of 150,000+ accounts tied to Southeast Asian scam compounds, involving the Royal Thai Police, FBI, UK National Crime Agency, and Australian Federal Police. Meta aims for 90% of ad revenue from verified advertisers by end-2026, up from 70%, amid criticism over scam advertising.
Jess Weatherbed / The Verge : Meta adds more scam detection tools to its platforms, including unrecognized device linking warnings on WhatsApp and Facebook friend request warnings — Users will be alerted about suspicious activities like unrecognized device linking and friend requests.
Internal emails from Mark Zuckerberg, dating back to Facebook's early days, have been presented in a Facebook Messenger-style chat format, revealing insights into the company's strategic decisions. Key discussions include Zuckerberg's motivations for acquiring Instagram to neutralize competition and concerns about Facebook's messaging capabilities compared to WhatsApp. The emails highlight the competitive landscape of social media and the pressures Facebook faced from emerging platforms. This window into Zuckerberg's thought process is significant because it sheds light on the company's growth strategies and its approach to competition, which are critical for understanding the tech industry's evolution.
Internal emails from Mark Zuckerberg have been rendered as Facebook Messenger conversations, revealing insights into Facebook's strategic decisions and competitive landscape. Key discussions include Zuckerberg's motivations for acquiring Instagram to neutralize competition and his concerns about the performance of Messenger against WhatsApp. The emails highlight the company's approach to competition, innovation, and market positioning, particularly in relation to emerging platforms like Snapchat and TikTok. This transparency into Zuckerberg's thought process underscores the ongoing challenges Facebook faces in maintaining its dominance in the social media space, especially amid growing scrutiny over privacy and antitrust issues.
Leaked internal documents reportedly show Meta tightening enforcement around abortion-related information on its platforms, according to a Reddit post linking to the report. The materials suggest Meta has increased restrictions on content that could help users access abortion services, potentially affecting what information can be shared or found on Facebook and Instagram. The development matters because Meta’s moderation and policy decisions can shape access to sensitive health information for large audiences, especially in jurisdictions where abortion is restricted. The available excerpt does not include the documents’ dates, specific policy language, affected regions, or examples of removed posts, so the precise scope and implementation details are unclear from the provided text.
An item titled “The Subject Supposed to Know Nothing: Lacan and the Large Language Model” appears to link Jacques Lacan’s psychoanalytic concept of the “subject supposed to know” with modern large language models (LLMs). Based on the title alone, it likely examines how users attribute knowledge, authority, or understanding to AI systems that generate fluent text without human-like comprehension, and may contrast psychoanalytic ideas about transference and expertise with the probabilistic nature of LLM outputs. The topic matters because it frames a common risk in AI deployment: over-trust in generated answers and misinterpretation of model behavior as genuine understanding. No publication date, author, outlet, or additional details are available.
A developer has launched an iOS version of an app designed to block Instagram Reels and YouTube Shorts while retaining essential features like stories and direct messaging. Initially created for Android, the app faced challenges due to Apple's restrictive policies on app access. The developer adapted by using web apps, which, while not identical to the native experience, effectively meet user needs. Additionally, the app integrates with iOS Shortcuts to redirect users from the native app to the web app, allowing for notifications without the distraction of Reels. This approach highlights the ongoing demand for user-centric solutions in social media consumption.
DistServe published “Part 1: Understanding Prefill, Decode, and Goodput in LLM Systems,” an introductory piece focused on core performance concepts in large language model (LLM) serving. Based on the title alone, the article likely explains the difference between the prefill phase (processing the input prompt and building attention caches) and the decode phase (generating tokens step by step), and how these stages affect end-to-end throughput and latency. It also signals a discussion of “goodput,” a metric commonly used to describe useful, user-visible output rate under real workloads, accounting for factors like batching, queueing, and system overhead. No publication date, benchmarks, or implementation details are available from the provided information.
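To make the distinction concrete, here is a minimal sketch in Python of one common way “goodput” is operationalized: the rate of requests that satisfy both a time-to-first-token (TTFT) target, dominated by the prefill phase, and a time-per-output-token (TPOT) target, dominated by the decode phase. The thresholds, field names, and exact definition below are illustrative assumptions, not details taken from the DistServe article.

```python
from dataclasses import dataclass

@dataclass
class RequestStats:
    ttft_s: float  # time to first token (mostly the prefill phase)
    tpot_s: float  # average time per output token (the decode phase)

def goodput(requests: list[RequestStats], window_s: float,
            ttft_slo_s: float = 0.5, tpot_slo_s: float = 0.05) -> float:
    """Completed requests per second that met both latency targets (assumed SLOs)."""
    ok = sum(1 for r in requests
             if r.ttft_s <= ttft_slo_s and r.tpot_s <= tpot_slo_s)
    return ok / window_s

# Three requests observed over a 10-second window: raw throughput is 0.3 req/s,
# but only the first meets both SLOs, so goodput is 0.1 req/s.
reqs = [
    RequestStats(ttft_s=0.3, tpot_s=0.04),  # meets both targets
    RequestStats(ttft_s=0.9, tpot_s=0.03),  # prefill too slow (misses TTFT SLO)
    RequestStats(ttft_s=0.4, tpot_s=0.08),  # decode too slow (misses TPOT SLO)
]
print(goodput(reqs, window_s=10.0))  # -> 0.1
```

The example shows why goodput can be much lower than raw throughput: batching and queueing choices that raise total token output can still push individual requests past their latency targets.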
Meta has been granted a patent for an AI system that could simulate a user’s social media activity—including posting, commenting, chatting, and reacting—using their historical data, potentially continuing after the person dies. The patent, filed in 2023 and granted in late December, describes using a large language model to mirror an account holder’s behavior across Meta platforms such as Facebook and Instagram, with references to possible voice or even video-call simulations. Meta told Business Insider it has “no plans” to build the feature, noting patents often protect concepts that never ship. The idea echoes CEO Mark Zuckerberg’s 2023 comments about AI replicas helping people interact with memories of loved ones, stressing consent. The filing underscores fast-moving “digital persona” tech and looming ethical concerns.