Fake news and AI: how can we protect ourselves against manipulation?

[Image: hand holding a mobile phone displaying the words “disinformation” and “fake news” | © Getty Images/Arkadiusz Warguła]

Artificial intelligence (AI) is changing our lives. Education, work, research: advances in AI are omnipresent. Many of these developments are beneficial, but there are also risks, especially in the media sector. Fake news is one phenomenon that AI makes easier to produce, and experts consider it a real threat, for example in election campaigns. Professor Stefan Feuerriegel, Director of the Institute for Artificial Intelligence (AI) in Management at the Ludwig Maximilian University (LMU) in Munich and DAAD Zuse Schools Fellow, shared some insights with us.

Professor Feuerriegel, what impact did fake news and AI have on the presidential election campaign in the USA? Did they contribute to Donald Trump’s victory?

There are numerous examples showing that fake news and AI played a role in the US election campaigns. Content that seemed realistic was created and intentionally spread, using generative AI in particular. To what extent such technology actually influenced individual groups of voters has yet to be verified through research. In our own research we have found that deepfakes and similar forms of manipulation can influence the opinions of individuals, and we have published a study on this issue.

An important election is also coming up in Germany in 2025: the Bundestag will be elected earlier than originally planned. Do you think there is a risk of AI being used to intentionally influence voters in Germany?

Yes, I do think that such a risk exists. However, I would rather not speculate as to which stakeholders may wish to exert influence and for which reasons. Strategic campaigns by players such as Russia that aim to fuel uncertainty and intentionally manipulate voters are particularly dangerous.

Russia is frequently mentioned in relation to manipulation attempts, including via social media or using AI. Can you give an example of how Russia has made an impact in Germany?

We know that Russia has already used such technology to intentionally spread disinformation. A recent example shows how AI-based tools are used to produce content that appears deceptively real and is then spread via social networks. OpenAI, the developer of ChatGPT, claims to have stopped a number of disinformation campaigns by state-backed actors who seek to use AI for their activities; such content is used to undermine trust in democratic institutions. We need more initiatives that monitor such attempts, as we have hardly any systematic findings on this issue to date.

Are there any plausible approaches for preventing such disinformation? What could a realistic model look like?

There are some regulatory and technical approaches for identifying fake news, such as watermarks and automated fact-checking bots. However, measures like these are not watertight, and we cannot fully rely on them; state actors in particular could bypass or undermine them. It is therefore important to promote media literacy among citizens, so that users are in a better position to identify disinformation themselves.
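To make the watermarking idea more concrete: one approach discussed in research is to statistically bias AI-generated text towards a hidden “green list” of words, so that a detector can later test for that bias. The sketch below (in Python, added for illustration and not part of the interview) is a toy version of such a detector; the hash-based green-list rule, the GAMMA parameter and the function names are assumptions chosen for demonstration, not any real system’s scheme.

```python
import hashlib
import math

GAMMA = 0.5  # assumed fraction of "green" words at each step (illustrative)

def is_green(prev_word: str, word: str) -> bool:
    # Toy rule: hash the (previous word, current word) pair and call the
    # word "green" if the hash lands in the bottom GAMMA of the hash space.
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] / 256 < GAMMA

def watermark_z_score(text: str) -> float:
    # Compare the observed share of green words with the GAMMA baseline;
    # a large positive z-score suggests the text was biased towards the
    # green list, i.e. it may carry the watermark.
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    green = sum(is_green(prev, cur) for prev, cur in pairs)
    n = len(pairs)
    return (green - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))

print(f"z-score: {watermark_z_score('an ordinary unwatermarked sentence about the election'):.2f}")
```

Precisely because such a test is only statistical, light paraphrasing or rewording can dilute the signal, which is one reason such measures are not watertight.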

Do you have any advice on how consumers can protect themselves against such campaigns?

Consumers should approach sensational and highly emotionalised content with caution and always check the sources. Relying on established media outlets and cross-checking information across different sources are also good strategies.

Could we perhaps turn the tables? Couldn’t we use “good” AI to expose “bad” AI, and thus attempts to spread disinformation?

Yes, in theory we could use “good” AI to expose disinformation. There are a few research projects underway in which AI models for automated fake news detection are being developed. However, we should not rely on these technologies alone: generative AI models are constantly evolving and may easily outpace any detection algorithm.
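As a rough illustration of what automated fake news detection can look like, here is a minimal sketch (again added for illustration, not from the interview) that frames the task as text classification. It assumes scikit-learn is available; the four training examples and their labels are invented for demonstration, and real research systems train on far larger corpora with much stronger models.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented mini-corpus purely for illustration; real systems train on
# large labelled datasets of verified and fabricated news articles.
texts = [
    "Scientists confirm the moon landing was staged in a studio",
    "Parliament passed the budget bill after a lengthy debate",
    "Miracle cure hidden by doctors eliminates all disease overnight",
    "The central bank left interest rates unchanged this quarter",
]
labels = [1, 0, 1, 0]  # 1 = likely disinformation, 0 = likely legitimate

# TF-IDF features plus logistic regression: a deliberately simple baseline,
# far weaker than the transformer-based models used in current research.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

claim = "Secret study proves the election results were fabricated"
prob_fake = model.predict_proba([claim])[0][1]
print(f"estimated probability of disinformation: {prob_fake:.2f}")
```

A baseline like this also illustrates the arms-race problem described above: as generative models improve, shallow surface features stop separating fabricated from legitimate text, and detectors have to be retrained or redesigned.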

Disclaimer:

The opinions expressed in the interview reflect solely the views of the interviewee.

We strive to use gender-sensitive language. External texts and interviews may not conform to our preferred wording.
