Our research project explores how reliable algorithmic standards can combat the spread of fake news and misinformation in elections.
Fake news and misinformation pose a significant threat to elections worldwide, a threat exacerbated by artificial intelligence (AI) technologies such as deepfakes.
Funded by Brunel University of London's Policy Development Fund, our research explores strategies to balance responsible AI use with mitigating the harms of misinformation.
Our goal is to guide policymakers in implementing algorithmic reliability standards to protect elections and ensure transparency.
Addressing the threat of AI-driven misinformation
In recent years, the spread of fake news and misinformation has become a major concern for democracies worldwide. Artificial intelligence (AI) technologies, particularly deepfakes, have made it easier to create and disseminate false information, which can undermine public trust in democratic institutions, manipulate voter behaviour, and destabilise societies.
With elections recently held in 77 countries, including the UK, maintaining trust in democratic processes has never been more important.
Our research examines how governments and platforms can adopt algorithmic reliability standards and regulations to combat election misinformation, addressing issues such as voter manipulation and the misuse of AI technologies.
By balancing the responsible use of AI with harm reduction, our project contributes to societal goals such as equitable access to accurate information, democratic integrity, and ethical AI governance.
We aim to help policymakers and organisations create robust frameworks that promote transparency, accountability, and informed civic participation.
Understanding and mitigating psychological harm
We focus on the psychological harm caused by fake news, particularly during elections, and examine its characteristics: its triggers, manifestations, and mental health impacts on individuals and groups.
Unlike previous studies, we explore the lifecycle of psychological harm: how it originates, evolves, and spreads, examining its transfer from one person or group to another.
We measure this harm through indicators like emotional distress, cognitive biases, and behavioural changes, providing a framework to assess its severity and progression.
By understanding these psychological aspects, we offer insights into how misinformation erodes trust, incites fear or anger, and polarises societies. These insights help us develop strategies to reduce harm and build resilience, guiding policymakers in creating frameworks that prioritise mental wellbeing, civic trust, and societal cohesion.
Using a narrative literature review, we examine the psychological harm caused by fake news, including emotional distress, behavioural changes, and societal polarisation, and trace how that harm originates, evolves, and spreads across individuals and groups. Our review develops metrics to assess the severity and scope of psychological harm, offering insights into its societal impact.
One of our key objectives is balancing the responsible use of AI with harm reduction.
By analysing existing literature on algorithmic reliability, we propose recommendations for policymakers to create frameworks that support ethical AI usage while safeguarding democratic integrity.
We also explore how ethical AI governance can strengthen societal resilience against misinformation and promote informed civic participation. By synthesising research on AI’s effects on public trust, we examine how ethical guidelines can protect democratic institutions from manipulation.
Our research supports goals such as equitable access to accurate information, mental wellbeing, and the protection of democratic values. Supported by Brunel University of London's Policy Development Fund, our findings inform policy recommendations and regulatory frameworks to ensure responsible AI use, fostering transparency and accountability.
Related Research Group(s)
Strategy Entrepreneurship and International Business - Our research themes range from the entrepreneurial and internationalisation strategies of small-to-medium-sized enterprises (SMEs) to inward and outward investment by large enterprises and supra-national governance.
AI Social and Digital Innovation - The social, economic and strategic effects of AI and associated technologies, and their impact on societies, organisations and individuals.