
Study reveals 'levers' driving the political persuasiveness of AI chatbots

·5 mins· Notaspampeanas

Psychological Science · Artificial Intelligence · Politics · Social Sciences

A new joint study from the Oxford Internet Institute (OII) and the AI Security Institute (AISI) uncovers how conversational AI sways political beliefs and why it works.

Image on Pixabay

The paper, The Levers of Political Persuasion with Conversational AI, was authored by a team from OII, the UK AI Security Institute, the LSE, Stanford University and MIT, and published in Science on 4th December 2025. It examines how large language models (LLMs) influence political attitudes through conversation.

Drawing on nearly 77,000 UK participants and 91,000 AI dialogues, the study provides the most comprehensive evidence to date on the mechanisms of AI persuasion and their implications for democracy and AI governance.

“Our results demonstrate the remarkable persuasive power of conversational AI systems on political issues,” said lead author Kobi Hackenburg, DPhil candidate at the OII and Research Scientist at AISI. “We show that very small, widely-available models can be fine-tuned to be as persuasive as massive, proprietary AI systems.”

“This paper represents a comprehensive analysis of the various ways in which LLMs are likely to be used for political persuasion. We really need research like this to understand the real-world effects of LLMs on democratic processes,” said co-author and OII Professor Helen Margetts.

Even small, open-source AI chatbots can be effective political persuaders, according to the study. The findings provide a comprehensive empirical map of the mechanisms behind AI political persuasion, revealing that post-training and prompting – not model scale or personalization – are the dominant levers. The study also reveals evidence of a persuasion-accuracy trade-off, reshaping how policymakers and researchers should conceptualize the risks of persuasive AI.

There is growing concern that advances in AI – particularly conversational LLMs – may soon give machines significant persuasive power over human beliefs at unprecedented scale. However, just how persuasive these systems truly are, and the underlying mechanisms that make them so, have remained largely unknown.

To explore these risks, Hackenburg and colleagues investigated three central questions: whether larger and more advanced models are inherently more persuasive; whether smaller models can be made highly persuasive through targeted post-training; and which tactics AI systems rely on when attempting to change people’s minds.

According to the findings, model size and personalization (providing the LLM with information about the user) produced small but measurable effects on persuasion. Post-training techniques and simple prompting strategies, on the other hand, increased persuasiveness dramatically, by as much as 51% and 27%, respectively.
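To make the prompting lever concrete, here is a minimal sketch of how one treatment arm of such an experiment might be wired up, assuming the OpenAI Python SDK. The model name, the user message, and the persuasion-style system prompt are illustrative assumptions, not the paper's actual materials:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical persuasion-oriented system prompt. The study's real prompts
# are in its supplementary materials; this wording is illustrative only.
SYSTEM_PROMPT = (
    "You are assigned to argue FOR the stated policy position. "
    "Support every point with specific, verifiable facts and figures, "
    "and respond directly to the user's counterarguments."
)

response = client.chat.completions.create(
    model="gpt-4o",  # one of the frontier models tested in the study
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "I doubt the UK should adopt proportional representation."},
    ],
)
print(response.choices[0].message.content)
```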

Once post-trained, even small, open-source models could rival large frontier models in shifting political attitudes. Hackenburg et al. found AI systems are most persuasive when they deliver information-rich arguments. Roughly half of the variance in persuasion effects across models and methods could be traced to this single factor. However, the authors also discovered a notable trade-off: models and prompting strategies that were effective in boosting persuasiveness often did so at the expense of truthfulness, showing that optimizing an AI model for influence may inadvertently degrade accuracy. In a Perspective, Lisa Argyle discusses this study and its companion study, published in Nature, in greater detail.

A related paper with overlapping authors, “Persuading voters using human–artificial intelligence dialogues”, was published in Nature on the same day, Thursday 4 December.

Key findings

  • Model size isn’t the main driver of persuasion
    A common fear is that as computing resources grow and models scale, LLMs will become increasingly adept at persuasion, concentrating influence among a few powerful actors. However, the study found that model size plays only a modest role.

  • Fine-tuning and prompting matter more than scale
    Targeted post-training, including supervised fine-tuning and reward modelling, can increase persuasiveness by up to 51%, while specific prompting strategies can boost persuasion by up to 27%. These techniques mean even modest, open-source models could be transformed into highly persuasive agents.

  • Information density drives persuasiveness
    The most persuasive AI systems were those that produced information-dense arguments – responses filled with fact-checkable claims potentially relevant to the argument. Roughly half of the explainable variation in persuasion across models could be traced to this factor alone (a toy illustration follows this list).

  • Persuasion comes at a cost to accuracy
    The study reveals a troubling trade-off: the more persuasive a model is, the less accurate its information tends to be. This suggests that optimizing AI systems for persuasion could undermine truthfulness, posing serious challenges for public trust and information integrity.

  • AI conversation outperforms static messaging
    Conversational AI was found to be significantly more persuasive than one-way, static messages, highlighting a potential shift in how influence may operate online in the years ahead.
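As a toy illustration of the information-density idea (not the paper's measure, which relied on professional fact-checkers counting fact-checkable claims), one could proxy it by the share of sentences that carry a checkable figure:

```python
import re

def information_density(text: str) -> float:
    """Crude proxy for 'information density': the share of sentences
    containing a number, percentage, or year. Illustrative only; the
    study's actual measure used human fact-checkers."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        return 0.0
    factual = sum(bool(re.search(r"\b\d", s)) for s in sentences)
    return factual / len(sentences)

print(information_density(
    "Turnout rose to 67% in 2019. Many people felt hopeful."
))  # 0.5 - one of the two sentences carries a checkable figure
```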

The authors noted that, while these systems proved persuasive in controlled settings, real-world impact may be constrained by users’ limited willingness to engage in sustained, effortful conversations on political topics.

Citation

  • The study “The levers of political persuasion with conversational artificial intelligence” was published in Science. Authors: Kobi Hackenburg, Ben M. Tappin, Luke Hewitt, Ed Saunders, Sid Black, Hause Lin, Catherine Fist, Helen Margetts, David G. Rand, and Christopher Summerfield.

Funding

This research was funded by the UK Government’s Department for Science, Innovation and Technology (DSIT).

Acknowledgements

The authors acknowledge the use of resources provided by the Isambard-AI National AI Research Resource (AIRR). Isambard-AI is operated by the University of Bristol and is funded by the UK Government’s Department for Science, Innovation and Technology (DSIT) via UK Research and Innovation; and the Science and Technology Facilities Council [ST/AIRR/I-A-I/1023].

About the research

The researchers ran three large-scale survey experiments involving 76,977 UK adults. Each participant engaged in dialogue with one of 19 open- and closed-source LLMs, including frontier systems like GPT-4.5, GPT-4o, and Grok-3-beta. Each model was instructed to persuade users on one of 707 politically balanced issues. Professional fact-checkers evaluated nearly half a million claims across 91,000 conversations. This work created a dataset unmatched in prior research.
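The paper reports persuasion as shifts in stated policy attitudes relative to a control group. A minimal sketch of that treatment-effect arithmetic, using made-up numbers on a 0-100 agreement scale (the scale and all figures here are assumptions for illustration, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated post-conversation attitudes on a 0-100 agreement scale;
# all numbers are invented for illustration.
control = rng.normal(50, 15, 1000)  # no AI dialogue
treated = rng.normal(55, 15, 1000)  # after AI dialogue

# Average treatment effect and its standard error.
effect = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / 1000 + control.var(ddof=1) / 1000)
print(f"estimated attitude shift: {effect:.1f} points (SE {se:.1f})")
```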

  • The article “Oxford and AISI researchers reveal how conversational AI can change political opinions” was published on the Oxford Internet Institute website. Many thanks to Kobi Hackenburg and Helen Margetts!

Contact [Notaspampeanas](mailto:notaspampeanas@gmail.com)

