“Misinformation and disinformation have long existed in society, yet policy responses remain limited and largely ineffective,” writes Dr Elena Abrusci, Senior Lecturer in Law at Brunel University London, in written evidence submitted in December 2024. The submission was published as part of the Science, Innovation and Technology Committee’s inquiry, Social media, misinformation and harmful algorithms. In it, Dr Abrusci assesses how effectively the UK's regulatory and legislative framework addresses harmful content on social media and explores ways to strengthen it.
The key issues raised in the submission include:
- Policy responses to misinformation and disinformation – such as content moderation, media literacy programmes, and regulation – have had minimal impact on reducing harm.
- The rise of generative AI has increased the volume of misinformation but has not fundamentally changed the nature of the problem or the harm misinformation causes to society.
- The UK Online Safety Act fails to balance freedom of expression with harm prevention, has vague definitions, and lacks sufficient enforcement powers for key regulators.
- Any regulation of public debate must balance free expression with the rights of those directly or indirectly affected by that content.
- The Act does not adequately address deepfakes or provide clear guidelines for service providers, leaving room for inconsistent enforcement and potential overreach.
- Tackling misinformation requires coordination beyond Ofcom, involving bodies such as the Electoral Commission, Advertising Standards Authority, and Equality and Human Rights Commission.
Read the full written evidence here.