# What Hacker News’ Ban on AI‑Generated Comments Actually Means
Hacker News’ updated rule is simple in wording but messy in practice: you’re not supposed to post comments that are AI-generated or AI-edited, because HN comment threads are meant to be conversation between humans. In other words, the site is drawing a bright line around comment threads as human-authored discourse, even as it continues to treat some AI use as acceptable elsewhere on the platform.
## Direct answer: what the ban actually says
The key line in the updated guidance is summarized as: don’t post generated or AI‑edited comments; HN is for conversation between humans. The important nuance is that this is about comments—the conversational layer—rather than a blanket prohibition on anything AI-touched on the site.
That sits alongside Hacker News’ broader AI usage policy, which states: “No AI-generated media is allowed (art, images, videos, audio, etc.). Text and code are the only acceptable AI-generated content, per the other rules in this policy.” In plain terms, HN already had a categorical ban on AI-generated multimedia. The recent move adds a tighter constraint specifically around one high-value type of text—comments—by making them off-limits if they’re AI-generated or AI-edited, even though text and code are otherwise governed by the policy’s other rules.
So the policy shift isn’t “HN bans AI.” It’s closer to: HN wants the comment section to remain human-to-human, even if other parts of the site may still contain AI-generated text or code under existing rules.
## What users can and can’t do in practice
In practice, this becomes a rule about authorship and voice.
Clearly allowed (under the stated intent):
- Writing your own comments—your own reasoning, experience, and perspective—and posting them as-is.
- Doing manual copy-editing of your own words (fixing typos, restructuring sentences yourself, tightening clarity).
Clearly disallowed (per the new wording):
- Wholesale AI drafting of a comment (prompt → output → post).
- Posting a comment that has been “AI-edited”, which, as written, covers cases where an AI tool rewrites or substantially rephrases your comment before you post it.
- AI-generated images/audio/video, which HN policy already forbids.
Ambiguous and contentious:
The rule’s phrase “AI-edited” raises immediate questions for people who rely on tools that blur the line between spelling correction and rewriting—especially non‑native speakers and people using assistive technologies. If a tool suggests grammar fixes, tone changes, or alternative phrasing, at what point does that become “AI-edited” in the sense HN means?
HN’s text doesn’t spell out exceptions or thresholds. That ambiguity is part of the story: the rule is trying to protect conversational authenticity, but the enforcement boundary for “editing” is not crisply defined.
## How moderators can (and can’t) detect AI use
The hard reality is that detection is difficult and error-prone: there is no reliable, widely accepted technical method for proving LLM authorship of a specific comment at scale. In broader HN discussions of AI restrictions, commenters call detection tools error-prone and controversial, and describe guaranteed enforcement as “a fantasy” because compliance can’t be technically verified at scale.
That pushes enforcement toward softer mechanisms:
- Manual review by moderators
- User reports
- Pattern recognition and “vibe” judgments (e.g., repetitive, generic, mass-produced tone)
- Potentially repeat-offender bans or other account-level actions
But these tools come with trade-offs. If you can’t confidently verify AI usage, you risk the following (the sketch after this list makes the failure modes concrete):
- False positives (penalizing human writers who sound “too polished,” or who write in a style that resembles model outputs)
- False negatives (missing AI-written comments that are short, specific, or carefully prompted)
- Incentivizing cat-and-mouse behavior, where users try to make AI text “look human”
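To see why both failure modes are baked in, here is a minimal sketch of the kind of heuristic “vibe” check a moderation pipeline might run. Nothing here reflects HN’s actual tooling; the stock-phrase list and the diversity threshold are illustrative assumptions.

```python
# A minimal sketch (not HN's actual tooling) of a heuristic "vibe" check.
# The stock-phrase list and the cutoff below are illustrative assumptions.

STOCK_PHRASES = [
    "as an ai language model",
    "it's important to note that",
    "in today's fast-paced world",
    "let's delve into",
]

def lexical_diversity(text: str) -> float:
    """Fraction of unique words; generic, mass-produced text often scores low."""
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0

def looks_generated(comment: str) -> bool:
    """Crude flag: stock phrasing OR unusually low lexical diversity."""
    lowered = comment.lower()
    if any(phrase in lowered for phrase in STOCK_PHRASES):
        return True
    return lexical_diversity(comment) < 0.5  # arbitrary cutoff

# False positive: a careful human writer using a common hedge.
human = "It's important to note that the benchmark only covers x86 targets."
# False negative: a short, specific, carefully prompted model output.
machine = "Ran it on an M2 Air; the build took 41s and tests passed first try."

print(looks_generated(human))    # True  -- polished human prose gets flagged
print(looks_generated(machine))  # False -- targeted AI output sails through
```

Any rule this crude is easy to game in both directions, which is exactly the cat-and-mouse dynamic described above.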
The result is a policy that functions partly as a norm-setting statement—a declaration of what belongs—more than a rule that can be mechanically enforced.
## Accessibility and fairness trade-offs
A blanket-sounding prohibition on “AI-edited” comments can collide with real needs. Many people use AI-assisted tools for:
- Grammar support
- Translation
- Cognitive scaffolding (drafting help, organization help)
A strict rule with no carve-outs risks excluding or chilling participation from non-native speakers and people with disabilities or neurodivergence—precisely the groups that might benefit from lightweight assistance that doesn’t change the underlying ideas.
At the same time, allowing AI editing broadly can make it harder to stop low-effort, mass-produced comments that degrade thread quality. A middle ground some communities discuss—requiring disclosure of AI assistance—has its own problems: disclosure is hard to verify, and it may create social stigma or inconsistent enforcement.
This is the core tension: authenticity norms vs. inclusivity and assistive use.
## Why legal and open-source precedents matter
HN’s move also echoes a wider pattern: communities are increasingly regulating AI not by “model type,” but by context and risk.
Consider NetBSD, which banned commits of AI-generated code on the rationale that such code is presumed “tainted”: its copyright provenance is unclear, and that uncertainty conflicts with the project’s licensing goals. The problem being managed there isn’t “inauthentic conversation” but legal provenance and licensing compatibility.
On the regulatory side, the EU’s Artificial Intelligence Act defines an “AI system” broadly as a machine-based system that operates with varying autonomy and generates outputs like predictions, content, recommendations, or decisions that influence environments. But the most notable restrictions discussed in the referenced HN thread are use-case based, including “unacceptable risk” prohibitions such as:
- Real-time remote biometric identification in publicly accessible spaces for law-enforcement purposes
- Building or expanding facial-recognition databases through untargeted scraping of facial images
- Biometric categorization systems that infer sensitive personal characteristics
The common thread is governance by purpose and harm. HN’s comment ban fits that same mold: it’s not “LLMs are bad”; it’s that LLM-authored comments undermine the purpose of the comment section as a space for human conversation.
For engineering organizations wrestling with similar boundaries, it’s the same question in a different costume: what contexts require human accountability and provenance? (Related: How Should Engineering Teams Govern AI‑Assisted Code Changes?)
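For teams that want such a boundary to be machine-checkable rather than judgment-based, one pattern worth sketching is commit-message trailers plus a CI gate. The trailer names below (“AI-Assisted”, “Provenance-Reviewed-By”) are hypothetical conventions for illustration, not anything NetBSD or HN actually mandates.

```python
# Hypothetical CI gate: require a named human reviewer on any commit
# that declares AI assistance. Trailer names are invented for this sketch.

import subprocess
import sys

def commit_trailers(commit: str) -> dict[str, str]:
    """Parse trailers ("Key: value" lines) from a commit message via git."""
    out = subprocess.run(
        ["git", "log", "-1", "--format=%(trailers:only,unfold)", commit],
        capture_output=True, text=True, check=True,
    ).stdout
    trailers = {}
    for line in out.splitlines():
        key, sep, value = line.partition(":")
        if sep:
            trailers[key.strip()] = value.strip()
    return trailers

def check_provenance(commit: str) -> None:
    """Fail the build if an AI-assisted commit lacks a human attestation."""
    t = commit_trailers(commit)
    if t.get("AI-Assisted", "no").lower() == "yes" and "Provenance-Reviewed-By" not in t:
        sys.exit(f"{commit}: AI-assisted change needs a Provenance-Reviewed-By trailer")

if __name__ == "__main__":
    check_provenance(sys.argv[1] if len(sys.argv) > 1 else "HEAD")
```

Note the design choice: the gate doesn’t attempt detection at all. It asks authors to declare AI involvement and attaches a named human to the review, which follows the governance-by-context theme rather than the unenforceable detection route.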
## Why It Matters Now
This change lands at a moment when AI governance is being negotiated simultaneously at the platform, project, and regulatory levels. The referenced HN thread about the EU AI Act’s “unacceptable risk” category drew heavy engagement (455 points and roughly 398 comments), a sign that the tech community is actively debating where bans make sense, what can be enforced, and which risks are real.
Meanwhile, policy conversations are trending toward disclosure and labeling—including examples mentioned in discussion such as a New York proposal about disclaimers on AI-generated news—yet commenters also emphasize how hard disclosure is to enforce in practice.
HN’s choice is influential because it’s not just moderating content; it’s shaping norms for how developers and technologists talk in public. If HN treats AI-assisted comments as out-of-bounds, other communities may copy the approach to preserve trust in discussion quality—especially as generative tools become ubiquitous. (See also: Today’s TechScan: Agents Strike, WebAssembly steps up, and buried votes)
## What to Watch
- Whether HN clarifies exceptions for grammar, translation, or assistive use—or keeps the rule intentionally broad.
- Whether HN introduces any disclosure mechanism for limited assistance, and how it would handle verification and social dynamics.
- How enforcement evolves: more moderator guidance, examples, or appeals processes—or continued reliance on judgment and reports.
- Parallel moves in open source, where policies like NetBSD’s AI-generated code ban could spread due to licensing/provenance anxiety.
- Ongoing regulatory gravity from the EU AI Act’s risk-tiered model, which may push more communities to define AI rules by context rather than by tools.
Sources: news.ycombinator.com, duckduckgo.com
## About the Author
yrzhe
AI Product Thinker & Builder. Curating and analyzing tech news at TechScan AI. Follow @yrzhe_top on X for daily tech insights and commentary.