A widening techlash is colliding with an intensifying AI-driven talent race. Juries in Los Angeles and New Mexico delivered landmark verdicts finding Meta and YouTube negligent for allegedly addictive, youth-harming product design, while New Mexico separately hit Meta with major penalties over child safety and predator risks—cases cast as a potential “Big Tobacco” moment that could pressure Section 230, insurance coverage, and platform UX choices like infinite scroll and recommendations. Governments are also moving: the UK is piloting teen social-media bans and curfews, and Alaska advanced a bill targeting AI sexual imagery and children’s social use. Meanwhile, investors warn AI investment will accelerate job displacement, raising pressure to reskill.
A Facebook whistleblower told LBC that an ongoing landmark social media trial could expose Meta to damages as high as $1 trillion. The whistleblower, formerly inside Facebook/Meta, argued the case centers on alleged harms from the company’s products and could set a precedent for holding platforms financially accountable. Key players include Meta (Facebook), the whistleblower source, and the court overseeing the trial; regulators and civil-society groups monitoring platform accountability are also implicated. The potential scale of liability matters because a ruling against Meta could reshape legal risk for social platforms, influence product design and moderation practices, and prompt new regulatory and business responses across the tech industry.
Meta shares plunged about 7% after U.S. juries in New Mexico and Los Angeles found the company liable for harms to young users, fueling investor fears that these decisions could expose platforms to a flood of design-based lawsuits. The LA verdict awarded $6 million over alleged Instagram and YouTube-linked depression; New Mexico jurors ordered Meta to pay $375 million for misleading statements about children’s safety and for enabling exploitation. Experts warn that targeting platform design — not just user content — could bypass Section 230 protections and lead to billions in damages, forcing product changes and impacting ad-driven revenue and AI investment plans. Meta, Google and others plan to appeal.
Meta and YouTube Found Negligent in Social-Media Addiction Trial
A Los Angeles jury found Instagram and YouTube deliberately engineered addictive features and were negligent in protecting a teen user, ordering Meta and Google to pay $6m in damages. The ruling — challenged by both companies, who plan appeals — centers on claims the apps caused body dysmorphia, depression and suicidal thoughts. Experts call it a potential “big tobacco” moment that could reshape platform design, regulation and liability, and may weaken Section 230 protections that currently shield tech firms. The verdict raises questions about features like endless scrolling and algorithmic recommendations, and whether platforms will face stricter rules, health warnings, or limits on ad-driven engagement models.
A California jury found Meta and YouTube (Google) negligent in a social-media addiction lawsuit, marking a significant legal loss for major platforms. The verdict, reported by the Wall Street Journal and discussed on Hacker News, centers on claims that product designs and algorithms contributed to harmful addiction-like behaviors. Plaintiffs argued platforms knew about harms yet continued engagement-driven design; defendants likely face reputational, regulatory, and product-design scrutiny. This case could influence future litigation, content-moderation policies, algorithm transparency, and product safety engineering across tech companies. It signals growing legal risk for platforms over user mental-health impacts and may spur industry changes in recommendation systems and safety features.
A Los Angeles jury found Instagram (Meta) and YouTube (Google) liable for designing features that allegedly addicted a now-20-year-old plaintiff, awarding $6 million total — $3 million compensatory plus $3 million in punitive damages split between Meta and Google. The seven-week civil trial focused on whether the platforms’ design choices and warnings were negligent and contributed to harm to children. Snapchat and TikTok had settled earlier; Meta and Google say they will evaluate appeals. The verdict follows a major New Mexico ruling against Meta and could influence thousands of pending suits and insurance disputes, potentially reshaping legal accountability for major social platforms and prompting design, policy and regulatory changes.
A Los Angeles jury delivered a landmark verdict finding that Instagram and YouTube were designed to addict children, a decision seen as a bellwether for more than 1,000 similar lawsuits. The ruling centers on design choices and recommendation algorithms used by Meta and Google-owned YouTube that plaintiffs argue intentionally engineered addictive engagement among minors. Legal teams and public-health advocates say the verdict could reshape industry practices, prompt stricter regulation, and expose platforms to large damages and design changes. Tech companies will likely appeal, but the case marks a major challenge to how social platforms balance growth-driven algorithms with user safety and regulatory scrutiny.
A Los Angeles jury found Instagram (Meta) and YouTube (Google) liable for designing their platforms to addict a young user, awarding $6 million total — $3 million in compensatory and $3 million in punitive damages — in a closely watched civil trial. The plaintiff, a 20-year-old woman who testified she became hooked on the apps in grade school, accused the companies of negligent product design and failing to warn about harms to children. Snapchat and TikTok settled with the plaintiff before trial. Legal experts say the verdict could set precedent for thousands of pending suits and increase liability exposure for major platforms amid related rulings about insurers refusing coverage for similar claims.
The Wall Street Journal editorial board argues that a $6 million Los Angeles verdict against Meta and YouTube over alleged social media harms is primarily a win for trial lawyers rather than for children or society. The piece frames the case as part of broader litigation targeting major platforms and suggests courtroom outcomes may not address underlying youth mental health concerns. It contends that parenting and household limits can reduce risks from social media use, and that most children do not experience severe problems. The editorial implies the verdict could encourage more lawsuits and legal costs for tech companies without delivering clear public benefits. The article provides limited detail beyond the verdict amount, location, and the companies involved.
A Los Angeles jury found Meta 70% and Google (YouTube) 30% liable for harming a woman who developed a social media addiction as a child, ruling the companies intentionally designed addictive platforms. The verdict follows a five‑week trial that centered on Instagram and internal research showing Meta knew children under 13 used its services despite policies against them. Mark Zuckerberg testified in person, defending company efforts to limit under‑age use while acknowledging imperfect progress. Snap and TikTok settled with the plaintiff before trial. The decision is unprecedented and could influence hundreds of similar lawsuits nationwide, raising legal and regulatory risks for major tech platforms and prompting appeals and strategic responses from defendants.
A Los Angeles jury found Meta 70% and Google (YouTube) 30% liable for a now-20-year-old plaintiff's childhood social media addiction, concluding the companies intentionally designed platforms that harmed her mental health. The five-week trial highlighted internal research showing young children used Instagram despite company policies; Mark Zuckerberg testified and said he wished the company had moved faster to identify under-13 users. Snap and TikTok settled with the plaintiff before trial. The verdict is unprecedented and could shape hundreds of similar lawsuits nationwide, raising legal and regulatory stakes for major platforms over design choices, youth safety, and corporate responsibility.
A Los Angeles jury found Meta and Google’s YouTube negligent in a lawsuit alleging their platforms caused social media addiction in a plaintiff, marking a significant legal setback for major tech companies. The verdict assigns fault to Meta and YouTube for harms linked to addictive design and content delivery, spotlighting accusations over algorithmic amplification and engagement-driven features. Plaintiffs argued platform design deliberately exploited psychological vulnerabilities; defendants defended product practices and algorithmic moderation. The decision could influence future litigation, regulatory scrutiny, and product design priorities around user well-being, algorithm transparency, and duty of care. Tech firms face renewed legal and public pressure to balance engagement-driven business models with safety and ethical design obligations.
A New York Times report says a court found Meta and YouTube (Google) negligent in a landmark lawsuit over social media addiction, holding the platforms accountable for harms tied to their design and algorithms. Plaintiffs argued that features intended to maximize engagement—recommendation algorithms, infinite scroll, and notification design—caused addictive use and mental-health harms, especially for young people. The ruling marks a potential legal turning point by treating platform design choices as actionable negligence rather than mere speech or neutral tools. That could push tech companies to alter product design, transparency, and moderation policies, and may spur more litigation and regulatory scrutiny of algorithmic engagement tactics. Key players: Meta, YouTube/Google, plaintiffs, and the court.
New York Times: The jury in LA's social media trial finds Meta and YouTube harmed a young user via addictive design features and orders them to pay her $3M; Meta will pay 70% — A jury found the companies negligent in their app designs, harming a young user with design features that were addictive and led to her mental health distress.
Jury finds Meta and YouTube negligent in landmark social media addiction trial
A New Mexico jury found Meta knowingly harmed children’s mental health and concealed knowledge of child sexual exploitation on its platforms, awarding about $375 million in penalties after determining there were thousands of violations of the state’s Unfair Practices Act. Prosecutors argued Meta prioritized profit over safety by designing addictive features and failing to curb predators; the jury also found Meta made false or misleading statements and engaged in unconscionable trade practices. The verdict does not immediately force platform changes — a judge will decide in May whether Meta created a public nuisance and must fund remedial programs. Meta says it will appeal and defends its safety efforts. The case is part of wider litigation by state attorneys general.
A New Mexico jury ruled that Meta knowingly harmed children’s mental health and concealed what it knew about child sexual exploitation on its platforms, finding thousands of violations of the state’s Unfair Practices Act and awarding $375 million in penalties. The verdict, from a nearly seven-week trial, concluded prosecutors proved Meta prioritized profit over safety and made misleading statements about platform risks. Meta said it will appeal and defended its safety efforts; a judge will next decide whether Meta created a public nuisance and must fund remedial programs in a May hearing. The case is part of a broader wave of state lawsuits and whistleblower disclosures targeting social platforms’ role in youth harms.
The UK government will run a real-world pilot putting social media bans, digital curfews and one-hour daily limits into hundreds of UK teenagers' homes to test impacts. About 300 teens will be split into four groups: full app blocks (mimicking an under-16 ban), 60-minute daily caps, overnight blocks (21:00–07:00) and a control group, with interviews before and after to measure effects on sleep, family life and schoolwork and to spot practical workarounds. The trials accompany a national consultation on whether to make many social platforms illegal for under-16s, drawing input from charities like the NSPCC and campaigners urging better built-in safety from tech companies. Results will inform possible policy changes.
New Mexico just handed Meta its first courtroom defeat over child safety, and the rest of the country is watching
Jonathan Vanian / CNBC: A New Mexico jury finds that Meta violated state laws by failing to safeguard its platforms from child predators and orders it to pay $375M in damages — A jury has reached a verdict in a major New Mexico trial in which the state's attorney general alleged that Meta failed to safeguard its family of apps from child predators.
Peter Thiel, the influential Silicon Valley investor, has cultivated a public persona mixing techno-libertarianism, conspiratorial religious rhetoric and high-profile tech bets. The piece highlights Thiel’s outspoken views on existential threats, his funding of controversial startups, and his political activities—arguing they shape Silicon Valley culture and investment patterns. Key players include Thiel himself, his venture firm Founders Fund, and companies he backs; the article ties his worldview to broader debates about tech governance, free speech, and the social impact of disruptive technologies. This matters because Thiel’s blend of ideology and capital influences which technologies and teams get resources, affecting industry direction and policy discussions.
The World Happiness Report 2026 finds heavy social media use — especially more than seven hours daily on algorithm-driven, image-focused platforms — is linked to a sharp decline in well-being among under-25s, notably teenage girls in English-speaking and Western European countries. Based on surveys of about 100,000 people across 140 countries by Oxford’s Wellbeing Research Centre, Gallup and the UN, the study reports many US college students would prefer social platforms did not exist, though light use (under an hour a day) correlates with higher well-being than no use. Finland ranks happiest for the ninth year, with Nordic welfare models cited as key factors. The findings add weight to policy debates on restricting minors’ social media use.
Canada has dropped in the World Happiness rankings, and the title attributes the decline partly to social media use. With no article body provided, details such as the specific ranking position, the year of the report, the data source (for example, the World Happiness Report), and the evidence linking social media to lower happiness are not available. The headline nonetheless highlights an ongoing policy and research concern: how digital platforms and online behavior may affect well-being at a national level. If accurate, the reported slip could matter for Canadian policymakers, public health officials, and technology companies as they assess the societal impacts of social media and consider interventions related to youth mental health, screen time, and platform design.
The World Happiness Report 2026 finds heavy social media use is linked to a sharp decline in well-being among under-25s, especially teenage girls in English-speaking and Western European countries. Produced by Oxford’s Wellbeing Research Centre with Gallup and the UN from surveys of ~100,000 people across 140 countries, the study flags more than seven hours daily on algorithmic, image-focused platforms and influencer content as key drivers of lower life satisfaction. It also notes many US college students would prefer platforms not to exist, while light users (under one hour/day) report higher well-being than non-users. Finland tops the happiness index for the ninth year, with Nordic welfare models cited as a major factor.
Canada has fallen in the World Happiness rankings, with the decline attributed in part to social media use, according to the article’s title. No additional details are provided about the specific report edition, Canada’s new position, the size of the drop, or the methodology used to link social media to changes in happiness. The title suggests a connection between technology-driven behavior and national well-being metrics, a topic that can influence public policy debates, platform regulation, and mental health initiatives. Without the article body, it is not possible to identify the organizations behind the rankings, the timeframe of the data, or whether other factors besides social media were cited as contributors to Canada’s lower standing.
Kati Pohjanpalo / Bloomberg: The UN-backed World Happiness Report says passively consuming algorithmic social media hurts teens' mental health, disproportionately affecting girls — Passively consuming algorithmic social media hurts teens' mental health, according to the World Happiness Report for 2026 …
Karissa Bell / Engadget: New Mexico child safety trial: Mark Zuckerberg downplayed Meta's own research on how the company's apps affect teens; Adam Mosseri made similar comments — Instagram chief Adam Mosseri made similar comments during his testimony. — Jurors in a New Mexico child safety trial heard testimony from Meta CEO Mark Zuckerberg today.
The candidate that Silicon Valley built is now the one they want to tear down
A Guardian essay argues everyday conversations with strangers are vanishing—and we should revive them. Journalist Viv Groskop recounts two recent encounters—a sympathetic older passenger on a train and a shy waitress from Seoul—to show how brief public interactions can enrich individuals and communities. She links the decline to smartphones, noise-cancelling headphones, social media and changing work patterns, and suggests younger people especially fear initiating ordinary talk. Groskop calls for reclaiming the skill of talking to anyone: learning social cues, accepting occasional awkwardness, and taking small interpersonal risks. The piece matters to the tech industry because technology-driven habits are reshaping public social norms and human-centered interaction design.
The UK government is set to recruit hundreds of teenagers for a social media restriction trial aimed at mitigating the negative impacts of smartphone use. The initiative, part of Prime Minister Starmer's plan, tests measures including a complete ban on social media, a digital curfew, and daily screen-time limits. This report puts participation at approximately 150 teens aged 13 to 15, assessing the effects of these restrictions on sleep, mood, and physical activity. An accompanying consultation, described as the largest of its kind globally, seeks public feedback on potential policies, including age limits and platform features that encourage addiction. While some child protection organizations oppose a total ban, others advocate for clearer age boundaries and accountability for tech companies.
Silicon Valley investor Bill Gurley warns that individuals who lack passion for their work are most at risk of losing their jobs to AI advancements. He argues that those who find meaning in their careers are less likely to be replaced, as they naturally engage in continuous learning and skill enhancement. Gurley highlights that major companies like Meta, Microsoft, Amazon, and Alphabet are investing heavily in AI infrastructure, leading to increased job insecurity. He suggests that becoming the most knowledgeable person about AI in one's role can provide a competitive edge, likening AI to 'jet fuel' that amplifies personal capabilities. His insights align with his recent book advocating for career choices driven by genuine interests.
The Alaska House has passed a bill aimed at regulating the use of artificial intelligence in creating sexual imagery and restricting children's access to social media platforms. This legislation reflects growing concerns over the implications of AI technology on privacy and child safety. By addressing these issues, the bill seeks to protect minors from potential exploitation and harmful content online. The move is part of a broader trend among states to establish legal frameworks governing the use of AI and social media, highlighting the need for responsible tech policy in an increasingly digital world.
The article discusses the misconception that the best engineers naturally make the best mentors. It highlights that technical expertise does not equate to mentoring skills, which require different interpersonal abilities. The piece emphasizes the importance of recognizing that mentoring is a distinct role that may not align with an engineer's strengths. This insight is crucial for tech companies aiming to foster effective mentorship programs, as it encourages a more nuanced approach to pairing mentors and mentees. By understanding these dynamics, organizations can enhance their talent development strategies and improve overall team performance.
The lines between eng & design are blurring. Fewer handoffs. Fewer, more empowered people in the room. -> Everyone moves faster, and there's more room for taste to shine through.
CNN reports some parents are installing home landlines instead of giving young children cell phones, citing safety and screen-time concerns. San Diego communications executive Alison Lundberg said she added a landline about five months ago after her 4-year-old daughter learned to call 911 at preschool, prompting worries about how a child could reach emergency services if only adults have mobile phones. The family’s new phone has large numbers and emergency icons, and the lack of caller ID makes ringing calls feel novel. Lundberg also says the landline helps her daughter independently talk with out-of-state grandparents, reducing the need for scheduled calls and limiting exposure to social media. Communication professor Kara Alaimo, who teaches screen-time management, is referenced in the segment. The story was published Feb. 23, 2026.
The article titled “2026 is the New 2016 and Honestly? I Kind of Get It.” appears to be a nostalgic commentary on social media rather than a reported tech news item. Based on the limited excerpt provided, it recalls an earlier era when users could share low-effort posts—such as a blurry lunch photo—and still receive meaningful engagement (the example cites “47 likes”). The piece seems to contrast that past with today’s more competitive, algorithm-driven platforms, implying a desire to return to simpler online interactions. No specific companies, platforms, product launches, policy changes, or dates beyond the title’s reference to 2026 and 2016 are included in the available text, so the scope and claims cannot be fully verified from the provided content.
The article revisits the EU IST Advisory Group’s (ISTAG) 2001 paper “Scenarios for ambient intelligence in 2010,” marking 25 years since its publication, and compares its “Ambient Intelligence” vision with reality in 2026. Using the “Maria – Road Warrior” scenario, it finds partial matches: smartphones and smartwatches reduce device clutter; NFC e-passports and automated airport gates speed border checks; cars commonly offer keyless entry, push-button start, and app-based rental access; and Bluetooth enables in-car calling. However, several predictions remain unmet or limited: seamless walk-through immigration, government-run traffic guidance (now largely satellite- and private-app driven), and autonomous “personal agents” negotiating services. Personalized hotel rooms and always-on voice-controlled environments exist technically but are not widely deployed, partly due to privacy and user preference.
Pinterest users, particularly artists, told 404 Media that the platform’s increased reliance on AI is degrading the service through mistaken moderation and a surge of AI-generated images. Artist Tiana Oreglia said automated systems have issued takedowns and threatened bans over reference images, with a pattern of clothed female figures being flagged; she cited examples including a bikini photo and a painting of two clothed women, plus a stock image flagged for “self-harm.” Pinterest said it enforces nudity rules using a mix of AI and human review and offers human-reviewed appeals. On r/Pinterest, users report repeated “AI modified” labels on hand-drawn work, creating 24–48 hour appeal cycles and brand confusion, while others complain feeds are dominated by AI content.
Venture capitalist Bill Gurley argued that “playing it safe” is currently the worst career move, and criticized common self-help advice to “go get a mentor.” In the excerpt provided, Gurley said people often internalize an idealized mentoring model and then “cold call” someone far above their reach, which typically fails. His comments frame conventional mentorship-seeking as an unproductive tactic that can distract from taking bolder, more proactive steps in career development. The limited available text does not specify where or when Gurley made the remarks, nor does it provide additional context, examples, or data. Still, the message matters for professionals navigating competitive tech and startup environments, where networking strategies and risk tolerance can influence opportunities and advancement.
An article titled “The Dance Floor Is Disappearing in a Sea of Phones” reports on a trend in nightlife and live events where smartphone use is increasingly dominating audience behavior. Based on the title alone, the piece suggests that people are spending more time recording or viewing performances through their screens, reducing active participation such as dancing and changing the atmosphere on dance floors. The implied focus is on how ubiquitous mobile cameras and social media sharing are reshaping in-person experiences and crowd dynamics. No additional details, locations, dates, data, or named venues, artists, or companies are available from the provided information, so the specific scope and evidence used in the article cannot be confirmed.
A Reddit-linked post highlights that Peter Thiel and other tech billionaires have publicly limited their children’s exposure to the same digital products and platforms that helped build their fortunes. The item frames this as a pattern among prominent Silicon Valley figures who restrict screen time, social media use, or device access at home, despite promoting or investing in consumer technology. The significance is the implied gap between how tech leaders market products to the public and how they manage perceived risks—such as distraction, addiction, or mental health impacts—for their own families. The provided content contains only a title and link preview, with no additional reporting details, dates, or specific examples beyond naming Thiel and referencing “other tech billionaires.”
An item titled “All Look Same?” was published, but no article body or additional context is available. Based on the title alone, the piece likely addresses concerns about similarity or lack of differentiation—potentially in technology products, user interfaces, AI-generated content, or market offerings—but the specific subject, companies involved, and any supporting evidence cannot be confirmed. Without the full text, it is not possible to accurately report what happened, who the key players are, or why the issue matters, nor to include dates, figures, or concrete claims. More information from the article content is required for a reliable summary of the news and its implications.
Reporter Zoë Bernard investigated a long-rumored Silicon Valley subculture: informal networks of gay men in senior tech roles who support and elevate one another. Bernard spent months interviewing 51 people, including 31 gay men, to document how these relationships operate as a “boys’ club” within the industry’s upper ranks. The reporting frames the network as an open secret and compares it to other power structures in business, where influential groups create pipelines for opportunity and influence. The story matters because it highlights how informal social networks can shape hiring, promotion, and access in tech leadership, adding nuance to discussions about diversity, inclusion, and power dynamics in Silicon Valley. No specific companies, dates, or financial figures are provided in the excerpt.
Silicon Valley Is Hoarding Technical PhDs
Silicon Valley is building a shadow power grid for data centers across the US