Ars Technica published a public newsroom policy clarifying its use of generative AI: humans write all reporting, analysis, and commentary, and AI tools are allowed only as supervised assistants for tasks such as grammar and style checks, workflow support, research navigation, and reproducing AI outputs when those outputs are themselves news subjects. The policy bars AI from serving as author, illustrator, or videographer and prohibits using AI to generate quotes or attribute them to named sources; all AI-assisted material must be verified, and AI-generated content reproduced for reporting purposes must be clearly labeled and visually separated from editorial work. The document aims to increase transparency about editorial standards, will be updated publicly when practices change, and underscores that human judgment remains central to journalism even as the newsroom adopts AI tools.