
Illustration by Marija Janeva

Does AI Pose a Threat to Democracy? An Interview with Stanford Professor Michael Tomz

Have you ever pondered how AI might threaten democracy? Last week, Stanford Professor Michael Tomz gave a lecture at the NYUAD Institute on this exact subject, and I interviewed him afterwards for additional comments on the topic.

Nov 23, 2025

It is a confusing notion to think that AI could even get involved in politics, let alone impede democracy. Yet it does so through the spread of misinformation. LLMs can supercharge the creation of misleading texts and deepfakes at an unprecedented scale. In practice, creating misleading content requires virtually no effort: all it takes is one prompt, and dozens of misleading articles are ready to be published. Coupled with automation tools, high-quality propaganda machines become accessible to virtually anyone. The key concern is that individuals won't be able to tell AI-generated content apart from “real” content. In fact, they already cannot.
In his research, Professor Tomz analyzed how susceptible people are to propaganda articles generated by GPT-3. Using the same topics as real-world propaganda, the research group asked GPT-3 to generate propaganda articles of its own. They then showed these articles to a sample of over 8,000 Americans and found that 43.6% of participants agreed with the AI propaganda article. The percentage rose when the prompts were curated to produce more convincing misleading articles: with these changes, 52.7% of participants believed the AI propaganda articles to be true, a narrow but highly concerning majority.
Considering this research was done with GPT-3, an older AI model, it is not implausible to assume the numbers would be even higher now. Highly effective propaganda articles can be generated almost instantly and in staggering quantities. But that is not all. Not only is AI proficient at creating misleading articles, it can also generate deepfakes, and by the day these AI-generated videos are approaching a level of realism that makes them indiscernible from reality.
Professor Tomz’s second study measured how voters react when their own candidate, or an opposing one, uses deepfakes in a campaign. Overall, exposure to deepfakes undermined confidence in democratic institutions. Voters did not like their candidate using deepfakes to promote their campaign. However, if their candidate used them in response to the opposition doing the same, that was a different story: voters would “punish” their candidate less when the use of AI-generated videos was retaliatory. Unfortunately, this could create a feedback loop in which AI videos become normalized. The political world becomes a sibling fight, where “They did it first!” is the justification for using AI. The fear is that AI videos are so easy to create that they become all the more tempting for bad actors to use in malicious ways. This would further undermine confidence in democratic institutions, and one can't help but wonder whether a true democracy will exist in the future at all.
When you cannot trust what your news sources are telling you, trust in the systems in place erodes across the board. However, this may depend on the pre-existing level of trust in the government and its mechanisms. “Confidence in elections is low, especially among Republicans. But the treatment about deepfakes causes it to fall even farther, but I mentioned that because in a different country, maybe where elections are more trusted, that confidence might start from a higher level,” Professor Tomz commented. He noted that in the US, where his research on deepfakes was conducted, people might already have negative attitudes towards the current “democracy”. As such, it is important to examine regions where democracy still functions, or at least is still believed in.
So what does this mean for us? It seems that every day, the average person has less and less of a say over what happens in the political world, a world that directly affects our lives. Professor Tomz advises us to be vigilant in our search for reputable information. It is more important now than ever to be skeptical of what we read online and to cross-reference it with news sources we know to be reliable. My two cents: in light of these extremely powerful technologies, we have to be aware of how easily we can be manipulated. Consequently, we should work against our ignorance by, at the very least, caring about politics, seeking out information from unbiased news sources, and trying our best to see all perspectives, not just our own.
Despite how concerning some of this information is, it is worth noting that entirely rejecting AI is not the right way forward. Researching it and trying to understand how to make it work with our society is a more worthwhile endeavour. The research done by Professor Tomz deepens our understanding of the interplay between technology and psychology. And as with all sciences, the more we understand, the better.
Adam Drai is a Staff Writer at the Gazelle. Email them at feedback@thegazelle.org.