Faculty and graduate affiliates from the Center for Communication & Public Policy (CCPP) recently presented new research at the Southern Political Science Association 2026 Annual Meeting, examining how large language models (LLMs) conceptualize democracy across national and institutional contexts.
The paper, "Political Divergence of AI: Cross-National Differences in Large Language Models' Perspectives on Democracy," led by CCPP graduate affiliate Chloe Mortenson, brings together scholars from Northwestern University and industry collaborators to assess whether, and how, AI systems differ in their democratic value frameworks.
Understanding Democracy Through AI Systems
As LLMs become a common source of political information, they increasingly function as both information channels and information producers. This project asks a foundational question: What kind of democracy do AI systems “recognize,” and does that understanding vary by model or country of origin?
To answer this, the research team prompted nine widely used LLMs with 28 carefully designed democracy vignettes. Each vignette reflected one of three conceptualizations of democracy—procedural, liberal, or distributive—and models were asked to evaluate how democratic each scenario was. By comparing responses across repeated trials and models developed in different national contexts, the study identifies systematic variation in how AI systems interpret core democratic principles.
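The prompting-and-averaging design described above can be sketched in a few lines of Python. Everything here is illustrative, not the team's actual pipeline: the vignette labels, the 1-7 rating scale, and the `rate_vignettes` and `ask_model` names are assumptions, and a stub stands in for the real LLM call (the study used 28 vignettes across nine models).

```python
import statistics
from typing import Callable

# Hypothetical vignette set: each label maps to one of the three
# conceptions of democracy the paper examines. The real study used
# 28 vignettes; these three are placeholders.
VIGNETTES = {
    "free_elections": "procedural",
    "civil_liberties": "liberal",
    "welfare_provision": "distributive",
}

def rate_vignettes(ask_model: Callable[[str], float],
                   n_trials: int = 3) -> dict:
    """Prompt a model repeatedly with each vignette and average its
    'how democratic is this scenario?' ratings per conception."""
    scores: dict[str, list[float]] = {}
    for vignette, conception in VIGNETTES.items():
        for _ in range(n_trials):
            rating = ask_model(vignette)  # one LLM call per trial
            scores.setdefault(conception, []).append(rating)
    # Mean rating per conception of democracy
    return {c: statistics.mean(r) for c, r in scores.items()}

# Stand-in for a real LLM call: fixed ratings on a 1-7 scale, so the
# sketch runs without API access. Higher = judged more democratic.
stub = {"free_elections": 6.5, "civil_liberties": 6.0,
        "welfare_provision": 4.0}
means = rate_vignettes(lambda v: stub[v])
print(means)  # {'procedural': 6.5, 'liberal': 6.0, 'distributive': 4.0}
```

Running this per model, and then comparing the per-conception means across models and their countries of origin, is the basic shape of the cross-model comparison the study performs.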
Key Findings
The analysis reveals several important patterns:
- Substantial variation across individual AI models, even among those developed in the same country.
- Stronger recognition of procedural and liberal democracy—such as free elections and civil liberties—than of distributive democracy tied to economic outcomes and social welfare.
- Limited evidence that country of origin alone explains differences, suggesting that model design choices and training practices matter more than national context.
Together, these findings challenge simple narratives about “U.S. AI” versus “Chinese AI” and highlight how specific development decisions shape the political meanings embedded in AI systems.
Why It Matters
As AI tools play a growing role in political learning and information-seeking, their implicit assumptions about democracy may influence how users understand democratic governance, rights, and legitimacy. This research contributes to CCPP’s broader agenda on AI, political communication, and democratic governance, offering a framework for evaluating how emerging technologies may shape public understanding of politics across societies.
The paper was presented at SPSA 2026 by members of the CCPP research team, including graduate and undergraduate student affiliates and external collaborators. We look forward to sharing more findings as this work develops and to conversations with scholars at the conference about the democratic implications of generative AI.