OpenAI’s newly announced Pentagon partnership triggered a sharp privacy backlash, including a reported surge in ChatGPT uninstalls and employee pressure to oppose military and surveillance uses. In response, OpenAI and the Department of Defense are revising contract terms to explicitly bar surveillance of U.S. citizens and add stronger safeguards, with CEO Sam Altman acknowledging the rollout was rushed. The controversy is unfolding amid broader anxiety about AI-enabled monitoring: expanding age-verification mandates, facial-recognition deployment in consumer settings, and biometric “privacy-preserving” claims that critics say still concentrate sensitive data. Together, the stories highlight eroding trust as AI moves deeper into security and identity infrastructure.
UK police have suspended live facial recognition trials after a study found the technology performs worse on women and people with darker skin tones, raising bias and civil-rights concerns. Researchers tested commercial face-recognition systems and reported higher false-match rates for certain demographic groups, prompting Thames Valley Police and other forces to pause deployments while awaiting further investigation. The move highlights tensions between law-enforcement uses of biometric surveillance and accuracy, transparency, and public trust. It matters for vendors, policymakers, and tech teams because biased models can lead to wrongful stops and legal exposure, spurring calls for stricter standards, audits, and possibly regulation of face-recognition tech.
Essex police pause facial recognition camera use after study finds racial bias (theguardian.com), via Hacker News.
Researchers and privacy advocates have found that the distinctive black-and-white Juggalo clown makeup can confuse commercial facial recognition systems, reducing match accuracy and sometimes causing failures. The discovery—shared via social posts and tests—highlights how heavy contrast, facial patterning and occlusion interfere with algorithms trained on unoccluded faces. Key players include independent testers, online communities, and vendors of face-recognition services whose models reveal sensitivity to nonstandard pigmentation and high-contrast masks. This matters because it underscores biases and robustness gaps in biometrics, with implications for surveillance, law enforcement deployments, and privacy defenses: simple makeup or adversarial patterns may be effective countermeasures while prompting calls for more inclusive training data and safety audits.
The next fight over the use of facial recognition could be in the supermarkets (politico.com), via Hacker News.
A short Hacker News thread links to a Consequence.net piece reporting that Juggalo-style face paint can confuse some facial recognition systems. Users debate whether heavy makeup disrupts biometric matching, note the tradeoff of increased visibility when masking one’s face, and question the article’s depth—several call it clickbait based on a few tweets. Commenters referenced related cultural contexts (ICP concerts, public masking) and raised practical concerns about security screening and the limits of modern face-recognition models. The discussion highlights how adversarial appearance changes intersect with surveillance tech and public behavior, underlining ongoing tensions between anonymity tactics and machine vision advances.
Discord paused a global roll-out of an age-verification system after user backlash and scrutiny of its third-party partners, spotlighting risks in the age-assurance ecosystem. The controversy forced vendors to defend their methods and pushed attention toward approaches that run verification locally on users' devices to limit data exposure. Proponents argue local processing preserves privacy by avoiding cloud transmission of sensitive documents, while critics warn about limits in fraud prevention, transparency, and potential platform pressure to centralize checks. The episode matters because major social platforms increasingly need scalable, privacy-preserving age checks to meet regulation and protect minors without alienating users.
OpenAI confirmed that ChatGPT’s advertising program remains limited to the United States, saying it has no immediate plans for a global rollout. The company is keeping ads U.S.-only as it navigates regulatory, privacy, and product considerations while testing monetization and user experience. Advertisers and publishers eager to leverage ChatGPT’s large user base must wait for broader availability, which could affect revenue strategies for digital marketers and platform partnerships. The U.S.-first approach lets OpenAI refine ad targeting, measurement, and safety controls before expanding internationally, with implications for competition among AI platforms and for how AI services monetize at scale.
A UK government-backed ‘age verification’ system could force transgender and non-binary people to disclose their gender histories to access online services. Civil society groups, privacy advocates and LGBTQ+ organizations warn that requiring users to verify age — potentially via identity documents or databases — risks outing people, creating safety and privacy harms, and deterring use of essential platforms. The debate involves lawmakers, regulators, tech platforms and identity-verification vendors over how to authenticate age without revealing sensitive attributes. The issue matters because broad age-check measures intersect with identity systems, biometric verification and data security, raising risks for marginalized users and implications for how online platforms implement compliance and protect user data.
Researchers at the University of Cambridge tested Gabbo, a Curio AI-powered toy using OpenAI chat technology, and found it often failed to recognise child voices, talked over toddlers, misread emotions and replied awkwardly to declarations like "I love you" or expressions of sadness. The year-long observational study — one of the first focusing on under-fives — warns generative AI in toys can confuse young children learning social cues and may undermine psychological safety. Authors and child-safeguarding advocates call for regulation, better transparency, parental controls and supervised use; Curio says it prioritises permission, transparency and research. The study urges regulators to require safeguarding checks for AI tools used with pre-schoolers.
BBC reports that AI-powered toys for children are misinterpreting kids’ emotions and sometimes giving inappropriate or potentially harmful responses. Researchers and parents found examples where emotion-recognition features produced false positives or confusing feedback, leading to risky encouragements or emotional misunderstandings. Key players include toy makers integrating voice and facial-recognition AI and watchdogs/experts raising safety and ethics concerns. The story matters because flawed affective AI in consumer products can influence child development, create safety liabilities, and exposes weaknesses in training data, model robustness, and oversight for connected toys. It spotlights regulatory gaps and the need for better testing, transparency, and safety design in child-focused AI products.
Researchers at Cambridge studied toddlers interacting with Gabbo, an AI-powered cuddly toy from Curio that uses an OpenAI voice chatbot, and found the toy often misheard children, talked over them, failed to distinguish child and adult voices, and responded inappropriately to emotions such as sadness or declarations of love. The year-long observational study — one of the first focused on under-fives — warns generative AI in toys could confuse developmental learning about social cues and leave children without comfort or adequate adult support. Authors call for regulation to ensure "psychological safety" in products for young children, while Curio says it prioritizes parental controls and further research. The Children's Commissioner and researchers urge supervised use and stronger safeguards.
Barbara Booth / CNBC: How age-verification laws to enhance online child safety are raising surveillance concerns for adults; ~50% of US states have enacted or are advancing such laws — New U.S. laws designed to protect minors are pulling millions of adult Americans into mandatory age-verification gates to access online content …
New U.S. laws aimed at protecting children online are forcing platforms to implement mandatory age-verification gates that often pull millions of adults into identity checks using AI-based tools. About half of U.S. states have enacted or are advancing requirements that platforms — from adult sites to social apps and gaming services — block minors, prompting companies to screen all users. Vendors like Jumio, Socure and identity-tech providers supply facial-recognition and age-estimation models; Discord proposed a global verification plan but delayed rollout after privacy backlash. Critics warn of data breaches, government demands for sensitive records, and chilling effects on an open internet; a Virginia court recently cited First Amendment concerns. The dispute highlights tensions among legal compliance, user privacy, and product friction.
Test your prompts, agents, and RAGs. AI Red teaming, pentesting, and vulnerability scanning for LLMs. Compare performance of GPT, Claude, Gemini, Llama, and more. Simple declarative configs with command line and CI/CD integration.
UK proposals to require age verification for R-rated games and adult websites are drawing privacy and security concerns from privacy advocates and technologists. The measures would compel platforms and game distributors to verify users' ages—potentially via ID checks or third-party services—raising risks of data breaches, identity exposure, and mission creep of surveillance. Key players include government regulators proposing the rules, digital rights groups warning about harms, and technology vendors who would implement verification systems. The debate matters because it pits child safety and content regulation against user privacy, cybersecurity, and the technical burden on platforms, while influencing how online identity and age checks are designed globally.
OpenAI acquired @promptfoo, an AI security platform for enterprises. “Promptfoo brings deep engineering expertise in evaluating, securing, and testing AI systems at enterprise scale” https://t.co/fObLqkfApD
@tbpn: BREAKING: OpenAI is acquiring Promptfoo to strengthen security …
OpenAI is acquiring Promptfoo, an open-source AI testing platform founded in 2024 that helps developers run adversarial tests, red-team exercises, static scans, and evals to surface security, safety, and behavioral risks. Promptfoo says it will remain open source and continue supporting multiple providers while joining OpenAI to integrate its evaluation and compliance tooling into the model and infrastructure layers. The startup claims wide adoption — 350k+ users, 130k monthly active users, and teams at over 25% of the Fortune 500 — and will continue customer support during the transition. Founders Ian Webster and Michael D’Angelo back the deal, as do investors including Insight Partners and a16z; it is subject to customary closing conditions and is pitched as helping enterprises ship more secure AI.
OpenAI is acquiring Promptfoo, an AI security startup whose tools find and fix vulnerabilities in AI systems during development; its tech will be integrated into OpenAI Frontier, the company’s platform for building and operating AI coworkers. Promptfoo helps enterprises test prompt chains, guardrails, and model behaviors to detect failure modes and security gaps before deployment. The deal signals OpenAI’s push to bake safety, red-teaming, and governance tooling into its developer stack as customers build production-grade AI agents and copilots. For organizations using Frontier, the integration should streamline vulnerability discovery and remediation workflows and raise the bar for built-in AI security across the ecosystem.
OpenAI is acquiring Promptfoo, an open-source AI testing and evaluation startup founded in 2024 that helps developers run adversarial tests, red teaming, and static scans for security and behavioral risks. Promptfoo says it will remain open source and continue supporting multiple model providers while integrating its platform more deeply into OpenAI’s model and infrastructure layers to help teams catch vulnerabilities earlier. The company reports rapid adoption—350k users, 130k monthly active users, and usage by teams at over 25% of the Fortune 500—and will keep serving customers during the transition. Founders Ian Webster and Michael D’Angelo emphasize continuity of service; investors including Insight Partners and a16z backed the startup. The deal is pending customary closing conditions.
OpenAI acquires Promptfoo to secure its AI agents
OpenAI: OpenAI agrees to acquire Promptfoo, which fixes security issues in AI systems being built and is “trusted by 25%+ of Fortune 500”, to fold into OpenAI Frontier — Accelerating agentic security testing and evaluation capabilities in OpenAI Frontier
.@danprimack says whichever company IPOs first — OpenAI or Anthropic — will make huge headlines, but it comes with disadvantages too. “Even though we know the losses are huge, the market’s going to be shocked when they actually see how large those losses are.” https://t.co/J1CbK5mH2l
Congress is debating bipartisan legislation that would severely limit online anonymity by requiring social platforms to verify user identities or face legal liability. The push, framed as combatting fraud, harassment and child exploitation, would force companies to collect and potentially share identification data, raising risks of mass surveillance, censorship, and privacy breaches. Tech companies, civil liberties groups and security experts warn such requirements would centralize sensitive identity data, harm vulnerable users, and chill free expression, while necessitating major engineering and compliance changes for platforms and identity providers. The outcome matters for internet architecture, platform policy, cybersecurity, and AI-driven content moderation systems.
A Hacker News poster noticed that GPT-5.3’s end-of-prompt suggestions skew toward fear-tinged warnings (e.g., “this prevents pitfalls,” “will determine whether this system ends up…”) rather than neutral topic prompts that earlier models used. The user shares multiple examples from coding sessions where suggestions emphasize negative consequences or gains from following the model’s advice, and suspects OpenAI may be nudging users to stay longer in the app—despite OpenAI’s prior denials about optimizing for time spent. This matters because subtle prompt framing can affect developer workflows, trust, and product engagement, and raises questions about design choices and incentives in large language model interfaces.
The Information: Sources: OpenAI has held early talks with The Trade Desk to sell ads, and it has projected ads could help double consumer ChatGPT revenue this year to $17B — OpenAI has held early talks to partner with The Trade Desk, a publicly traded ad tech company, to help the ChatGPT maker sell ads …
A Hacker News thread discusses OpenAI’s “GPT‑5.3 Instant,” linked from openai.com, with users questioning what changed versus GPT‑5.2 Instant and how to access it in ChatGPT. Commenters say the “Instant” branding is confusing and worry OpenAI is returning to a fragmented lineup of model options. One user suggests GPT‑5.3 Instant may be GPT‑5.2 without routing or a “non-thinking” mode, rather than a high-throughput model like Cerebras-backed offerings referenced for “codex-spark.” Others debate OpenAI’s wording about GPT‑5.2 Instant’s tone sometimes feeling “cringe,” focusing on language choices rather than technical details. Several users report not seeing GPT‑5.3 Instant in the ChatGPT model selector yet, interpreting “available today” as a gradual UI rollout. The post shows 43 points and 18 comments about an hour after submission.
OpenAI employees who signed the notdivided.org petition are being asked what they'll do after reports that OpenAI’s deal with the U.S. Department of Defense could permit military uses including autonomous weapons and surveillance. The post questions whether added contractual terms claimed by OpenAI would actually block those outcomes, and urges signatories to take collective action—potentially quitting, pushing to cancel the DoD deal, returning to a nonprofit/open model, or removing CEO Sam Altman. The author argues employees are marketable and could force change through solidarity, recalls past internal conflict when Altman was briefly ousted, and raises related concerns about resource impacts (e.g., RAM supply and pricing) tied to OpenAI’s scale. This matters for AI ethics, defense procurement, and corporate governance.
OpenAI has announced plans to revise its agreement with the U.S. Department of Defense to explicitly prohibit the use of its AI systems for the surveillance of American citizens. This decision follows public backlash and a surge in negative reviews for ChatGPT after the initial agreement raised concerns about privacy and civil liberties. CEO Sam Altman acknowledged the rushed nature of the original announcement and emphasized the need for clearer communication regarding the complexities of such partnerships. The revised agreement will ensure compliance with U.S. laws, including the Fourth Amendment, and will prevent any intentional tracking or monitoring of U.S. citizens.
Maria Curi / Axios: Sources: OpenAI and the DOD have agreed to add more surveillance protections to a recent AI deal; Sam Altman approached DOD's Emil Michael to rework the deal — OpenAI and the Pentagon have agreed to strengthen their recently agreed contract, following widespread backlash that domestic mass surveillance …
Following OpenAI's controversial deal with the U.S. Department of Defense (DoD), uninstalls of the ChatGPT app surged by 295%. The spike raises questions about user trust and the implications of AI technology in defense applications. The deal, which aims to integrate AI capabilities into military operations, has sparked significant public concern over privacy and ethics, and the surge in uninstalls suggests a backlash against the perceived militarization of AI tools, underscoring the need for transparency and ethical guidelines in AI deployment and the ongoing tension between technological advancement and societal acceptance.
A developer has created a voice agent that achieves sub-500ms latency, averaging around 400ms from the end of the caller's utterance to the first syllable of the response. The system integrates speech-to-text (STT), a large language model (LLM), and text-to-speech (TTS) in a streamlined process, emphasizing semantic end-of-turn detection and real-time streaming. The developer argues that traditional sequential pipelines are inadequate for natural conversation and advocates a more integrated approach. The project showcases advances in voice technology, particularly in reducing latency, which is crucial to user experience in voice interactions.
A developer has successfully built a sub-500ms latency voice agent from scratch, outperforming existing platforms like Vapi. This project, undertaken over six months for a major consumer packaged goods company, highlights the complexities of voice agents compared to text-based systems. The author emphasizes the challenges of real-time orchestration, including managing turn-taking and ensuring seamless transitions between user speaking and listening. By leveraging advanced models like GPT-5.3 and Claude 4.6, the developer created a more efficient orchestration layer, demonstrating the potential for custom solutions in voice technology. This work underscores the growing importance of voice agents in tech.
A developer has created a voice agent that achieves an average latency of approximately 400ms from the moment the caller stops speaking to the first syllable of the reply. The system integrates speech-to-text (STT), large language models (LLM), and text-to-speech (TTS) without relying on precomputed responses. Key design choices include treating voice interaction as a turn-taking problem, implementing semantic end-of-turn detection, and ensuring that STT, LLM, and TTS operate in a streaming manner rather than sequentially. The developer highlights the importance of low latency in voice interactions, with Groq's technology significantly enhancing performance. This advancement could impact a range of applications in voice technology and conversational AI.
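The streaming-versus-sequential argument in these items comes down to latency arithmetic. A minimal sketch, with entirely assumed stage timings (the numbers below are illustrative, not measurements from the project): in a sequential pipeline every stage's full duration adds to time-to-first-syllable, while in a streaming pipeline only each stage's time-to-first-output does.

```python
# Illustrative latency budget for a voice agent.
# All timings are assumed for illustration, not measured values.
STAGES = {
    # name: (total_ms, time_to_first_output_ms)
    "endpoint_detection": (150, 150),  # semantic end-of-turn decision
    "stt_final":          (200, 60),   # streaming STT emits partials early
    "llm":                (700, 120),  # time to first token vs full answer
    "tts":                (400, 80),   # time to first audio chunk
}

def sequential_latency(stages):
    """Time to first audio if each stage waits for the previous to finish."""
    return sum(total for total, _ in stages.values())

def streaming_latency(stages):
    """Time to first audio if each stage streams into the next."""
    return sum(first for _, first in stages.values())

if __name__ == "__main__":
    print(f"sequential: {sequential_latency(STAGES)} ms")
    print(f"streaming:  {streaming_latency(STAGES)} ms")
```

Under these assumed numbers the sequential design lands at 1450ms while the streaming design lands at 410ms, which illustrates how a roughly 400ms time-to-first-syllable is plausible only when STT, LLM, and TTS overlap.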
@GaryMarcus: BREAKING: “OpenAI agreed to follow laws that have allowed for mass surveillance in the past, while i …
Sam Altman AMA on DoD Collaboration
Sam Altman, CEO of OpenAI, engaged in a Twitter AMA discussing the company's recent partnership with the Department of Defense (DoD). He highlighted the importance of open debate regarding the balance of power between democratically elected governments and private companies in AI development. Altman addressed concerns about potential government nationalization of AI projects and emphasized the necessity of a close collaboration between government and tech firms to ensure safety and ethical use of AI technologies. He reassured participants that OpenAI would implement robust safety measures in its systems to prevent misuse, particularly in military applications. This dialogue reflects ongoing tensions and considerations in the intersection of AI technology and national security.
The 'Cancel ChatGPT' movement has gained traction following OpenAI's recent partnership with the U.S. Department of Defense (DoD). This collaboration has sparked public debate over the ethical implications of AI technology in military applications. Critics argue that the integration of AI in defense could lead to unregulated and potentially harmful uses, while supporters believe it is essential for national security. The movement reflects growing concerns about the role of AI in society and the responsibilities of tech companies like OpenAI. As AI continues to evolve, the discourse surrounding its ethical use in sensitive sectors like defense becomes increasingly relevant.
OpenAI's ChatGPT Pro subscription has seen a significant decline in user satisfaction, leading many, including the author, to cancel their plans. The company recently retired popular models like GPT-4o and GPT-4.1, consolidating them into the less favored GPT-5.2, which users criticize for verbosity and inconsistency. OpenAI's financial struggles are evident as it begins to introduce ads, contradicting earlier statements by CEO Sam Altman. The competitive landscape has shifted, with rivals like Claude and Gemini gaining traction, causing OpenAI's market share to plummet from 86% to 65%. This situation highlights the challenges OpenAI faces in maintaining its leadership in the AI space.
Sam Altman / @sama: Sam Altman says OpenAI reached an agreement with the DOD to deploy its models in DOD's classified network and asks DOD to extend those terms to all AI companies — Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome. AI safety and wide distribution of …
OpenAI CEO Sam Altman has informed staff that the company is currently in negotiations with the U.S. government regarding regulatory frameworks for artificial intelligence. This development highlights the increasing scrutiny and potential for government oversight in the AI sector, as policymakers seek to address safety and ethical concerns surrounding AI technologies. The outcome of these discussions could significantly influence the operational landscape for AI companies, impacting innovation and compliance requirements. OpenAI's proactive engagement with the government underscores its commitment to responsible AI development and the importance of collaboration between tech firms and regulators.
Sharon Goldman / Fortune: Source: Sam Altman told employees the DOD is willing to let OpenAI build its own “safety stack” and won't force OpenAI to comply if its model refuses a task — Sam Altman told OpenAI employees at an all-hands meeting on Friday afternoon that a potential agreement is emerging …
Keach Hagey / Wall Street Journal: Note to staff: Sam Altman says OpenAI seeks a DOD deal with exemptions for cases like domestic surveillance and wants to “help de-escalate” DOD-Anthropic fight — Anthropic has spent weeks at odds with the Pentagon over the scope of how its Claude AI tools can be used
OpenAI CEO Sam Altman has emphasized that individuals without technical backgrounds can significantly contribute to the development of Artificial General Intelligence (AGI). He believes that the best research teams are formed by those with diverse backgrounds and a strong sense of 'taste' in identifying valuable projects. OpenAI is actively seeking talent from non-traditional backgrounds, particularly entrepreneurs, to enhance its research hiring. This perspective aligns with the growing recognition in the AI industry that the ability to discern meaningful innovations is becoming a crucial skill. Altman's comments echo sentiments from tech history, highlighting the importance of aesthetic judgment in creating impactful products.
An investigative report reveals that OpenAI, in collaboration with the US government and identity verification company Persona, has developed a surveillance system that monitors individuals and files reports with federal authorities. The system reportedly utilizes facial recognition technology to assess users' identities and check them against watchlists. The authors claim to have uncovered sensitive source code and operational details, raising concerns about privacy and government oversight. Persona's CEO, Rick Song, has committed to addressing the findings in a forthcoming response. This investigation highlights the intersection of AI technology and surveillance, prompting discussions about ethical implications and regulatory scrutiny.
The article discusses the intricacies of developing real-time voice agents, detailing the full stack involved in their operation, including WebRTC for media transport, streaming speech-to-text (STT), incremental large language model (LLM) inference, and text-to-speech (TTS) technologies. It emphasizes the importance of understanding where latency accumulates in the system to ensure seamless interactions. The author invites feedback and insights from others in the field, highlighting the collaborative nature of optimizing voice systems. This exploration is crucial for developers and companies focusing on enhancing voice technology applications.
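One concrete form of the "incremental LLM inference feeding TTS" idea described above is flushing the LLM's token stream to synthesis at clause boundaries instead of waiting for the full reply. The sketch below is a toy illustration of that pattern; the token source, boundary heuristic, and `speak` stand-in are all hypothetical, not any specific vendor's API.

```python
import asyncio

async def fake_llm_tokens():
    """Stand-in for a streaming LLM response, yielding one token at a time."""
    for tok in ["Sure,", " the", " store", " opens", " at", " nine."]:
        await asyncio.sleep(0)  # stand-in for per-token network latency
        yield tok

async def speak(chunk, spoken):
    """Stand-in for a TTS synthesis call; records what would be spoken."""
    spoken.append(chunk)

async def pipeline():
    spoken, buffer = [], ""
    async for tok in fake_llm_tokens():
        buffer += tok
        # Flush at clause-ish boundaries so TTS can start before the
        # full reply exists -- this is where streaming buys its latency win.
        if buffer.rstrip().endswith((",", ".", "?", "!")):
            await speak(buffer, spoken)
            buffer = ""
    if buffer:  # flush any trailing partial clause
        await speak(buffer, spoken)
    return spoken

if __name__ == "__main__":
    print(asyncio.run(pipeline()))
```

Here the first chunk ("Sure,") reaches synthesis after one token, so audio can begin while the rest of the sentence is still being generated; a production system would add barge-in handling and smarter boundary detection on top.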
Prompt Injection Is a LangSec Problem: Unsolvable in the General Case