
Case Study 02 — HCI 445, DePaul University — 2026

AI-Generated News: Distinguishing Evidence-Based Journalism from "Churnalism"

Observations
Interviews
AI Ethics
HCI

Role

UX Researcher (Principal Investigator)

Timeline

Winter 2026 (~10 weeks)

Methods

Task-based observations, semi-structured interviews, affinity diagramming

/Purpose

This study explored how adults aged 18–45 evaluate the credibility of AI-generated news across articles, social media videos, and podcasts. Using a two-phase qualitative approach with task-based observations (n=5) and semi-structured interviews (n=8), we identified behavioral patterns, trust frameworks, and design interventions that support more accurate evaluation of AI-generated content.

01

/Context

Generative AI is rapidly transforming how news is produced and consumed. Tools like ChatGPT and NotebookLM generate articles, reports, and summaries with little human editorial oversight. These outputs mimic the authoritative tone of legacy journalism without the same accountability, contributing to a rise in “churnalism”: low-quality, rapidly produced content that everyday readers find difficult to distinguish from evidence-based reporting.

A 2025 Reuters/Oxford study found that daily use of AI for information-seeking nearly doubled from 18% to 34% in a single year, while a large-scale analysis revealed that AI use in American newspapers is widespread, uneven, and rarely disclosed to readers. Despite this scale, consumers have no effective tools for detecting AI-generated news or verifying editorial content.

02

/Process

We used a two-phase qualitative research design. Phase 1 consisted of observational sessions (n=5) with a think-aloud protocol. Participants engaged with curated AI-generated media in three formats: a written article, a short-form video reel, and a podcast.

Phase 2 consisted of semi-structured interviews (n=8) exploring attitudes, trust frameworks, and design preferences around AI-generated news. The interview structure ensured participants articulated their own attitudes before the topic of AI was introduced. Data from both phases was analyzed using affinity diagramming and the Contextual Inquiry framework, then synthesized into personas, scenarios, and a priority matrix for design implications.

03

/Key Findings

01

The AI Red Line

Participants demonstrated a clear mental model distinguishing AI as a tool from AI as an autonomous creator. While they readily accepted AI for data synthesis and interview transcription, they firmly rejected its use for editorial judgment.

02

Verification & Transparency

Participants expressed a need for granular transparency about the degree of AI use. They want inline clickable citations, visible audit trails, and editorial attribution regardless of whether AI was used to generate a news article.

03

The Trust Spectrum

All participants in the study displayed a high level of distrust in news media and reported cross-referencing claims across multiple outlets to navigate bias. Strategies ranged from subscribing to curated newsletters and checking established outlets to relying exclusively on community-based sources and citizen journalists.

04

Quotes

The AI Red Line

"AI transcribing interviews. AI checking facts against a database. That's AI as a tool that makes the human journalist better... What I'm not okay with is AI replacing the human judgment part of it."

P3

"That's where I draw a hard line."

P4

Verification & Transparency

"Most people are using AI to fact check, but then who is fact checking the AI?"

P2

"If we're gonna get AI generated news, they better come with receipts... liability matters."

P5

"Clickable sources right next to specific claims, not hidden at the bottom."

P7

The Trust Spectrum

"I will do my own research."

P1

"I compare how left-wing politics, right-wing politics, and neutrality frame the same event before forming an opinion."

P6

05

/Discussion & Impact

The research team identified six actionable design implications, prioritized by impact and feasibility: AI Disclosure Labels, Verifiable Source Citations, Editorial Accountability, AI Content Filters, Human Review Audit Trails, and Independent Verification Badges. These were synthesized into evidence-based personas and a priority matrix to directly inform the next phase of product ideation. Our findings corroborate and extend research from the Reuters Institute (2025) on AI in journalism.

Designed Transparency is a framework in which news editors disclose the AI models used, cite external sources, and describe their fact-checking methods. I also proposed tagging sources as "not externally verified" so readers can weigh the cost/benefit of consuming breaking news immediately versus waiting for content to be externally verified against primary and secondary sources.

This study was conducted with co-researchers Madhumitha Donthineni, Elias Azzou, and Ramya Yerramilli under faculty advisor Oliver Alanzo, PhD.

Priority Matrix — Design Implications

| No | Feature                        | Category     | Priority | Impact | Feasibility |
|----|--------------------------------|--------------|----------|--------|-------------|
| 1  | Editorial Accountability       | Transparency | High     | High   | Medium      |
| 2  | AI Disclosure                  | Transparency | High     | High   | High        |
| 3  | Verifiable Source Citations    | Transparency | High     | High   | High        |
| 4  | Human Review Audit Trail       | Transparency | Medium   | Medium | Low         |
| 5  | Independent Verification Badge | Trust        | Medium   | High   | Low         |
| 6  | AI Content Filter              | Detection    | High     | High   | Medium      |

06

/Personas & Affinity Diagram

Persona card for Priya — Graduate Student and analytical news consumer.

Persona card for Mark — Mid-Career Professional, a 30-year-old from Chicago who distrusts legacy news brands, prefers YouTube creators from communities he trusts, uses AI for work, hears news through real people, and wants verifiable sources, journalist oversight, and transparency.

Affinity diagram from an interview with color-coded notes organized into categories including News & Media Landscape, AI Perception & Risks, Design Priorities, and Trust & Verification, detailing insights on media consumption, AI skepticism, accessibility in design, and source credibility.

Next Case Study

Co-Design as a Framework for Community-Based Research
