By Joseph Arber, Polis Analysis
Over the last five years, we have witnessed a mass proliferation of the phenomenon known as ‘fake news.’ The term can be defined as “information that is false yet is presented under the pretence of news or factual information.” Fake news is deliberately misleading, and the term ‘disinformation’ is also used to refer to the wider problem. Disinformation has fuelled a steep decline in trust in government and societal institutions, and as such is considered by most mainstream commentators to be an obstacle to the regular functioning of our key democratic processes. More recent analysis suggests that the explosion of false political information is a major factor in the rising levels of political polarisation that have come to define Western politics in recent years.
Whilst disinformation has existed in various forms throughout history, it has never before existed on this scale. Like many other challenges we now face, the problem of disinformation is very much particular to the digital world we occupy. Undoubtedly, one of the main reasons disinformation has been able to flourish to the extent it has is that digital and online infrastructure provides an environment conducive to the rapid spread of information. The primary vector of false information is social media, with the major platforms being Twitter, Facebook, WhatsApp and YouTube.
The understanding that false information can quickly spread across digital platforms has given rise to the notion of ‘digital disinformation.’ Significantly, the increase in the number of Twitter and Facebook users, accompanied by growing scepticism towards traditional forms of media, has meant that users are exposed to false information on a daily basis. A recent study by data scientists at MIT indicated that, on average, a false story reaches 1,500 people six times more quickly than a true story does. This highlights the volume and scale at which disinformation circulates on the digital platforms we interact with every day. Inevitably, cracking down on digital disinformation has received substantial attention from policymakers and academics alike, who have deliberated on an array of possible solutions to the problem.
Increasingly, attention has turned to the question of whether governments can leverage their legislative power to implement policies that effectively regulate the volume of false information shared on digital platforms. One solution under discussion has been to develop a regulatory framework for internet governance. Specifically, governments would be expected to work alongside technology companies to control the flow of information and content on these platforms.
In the quest for more information on the suitability of these solutions, we at Polis recently launched a new fake news campaign. Over the last few weeks, the Polis Analysis team has been busy engaging and collaborating with leaders and academics vocal in this space. Most recently, we hosted Professor Trisha Meyer, an expert in European political governance. We identified Professor Meyer as a leader in this area precisely because her research explores the role and effects of technology and participatory mechanisms in governing societal problems such as disinformation, as well as the transition to a sustainable digital lifestyle.
The event we hosted with Professor Meyer explored the various platform responses to the problem of disinformation. Our discussion indicated that there are indeed multiple actors equipped to respond to the problem. Put simply, if disinformation is an issue that originates on digital platforms, then technology companies are certainly in a position to generate solutions that limit the spread of this information on their platforms. Another set of actors equipped to tackle the problem are third parties: organisations willing to verify and fact-check the information shared on digital platforms. Recent developments suggest that machine learning and artificial intelligence can be used to assist this fact-checking of tweets and Facebook posts. Yet, as Professor Meyer rightly pointed out, very little can be done unless governments begin to pass legislation that strikes a balance between protecting freedom of speech and containing the spread of false information.
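To make the machine-assisted fact-checking idea concrete, here is a deliberately minimal sketch: a hypothetical pipeline that compares an incoming post against a small store of claims human fact checkers have already debunked. The token-overlap (Jaccard) score stands in for the semantic-similarity models a real system would use, and every function name, claim and threshold below is invented for illustration.

```python
# Toy sketch of machine-assisted fact checking: match an incoming post
# against claims that human fact checkers have already debunked.
import re

def tokenize(text):
    """Lowercase the text and return its set of word tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def match_debunked(post, debunked_claims, threshold=0.5):
    """Return the closest debunked claim, or None if nothing is similar."""
    post_tokens = tokenize(post)
    best_claim, best_score = None, 0.0
    for claim in debunked_claims:
        claim_tokens = tokenize(claim)
        # Jaccard overlap: shared tokens divided by total distinct tokens.
        score = len(post_tokens & claim_tokens) / len(post_tokens | claim_tokens)
        if score > best_score:
            best_claim, best_score = claim, score
    return best_claim if best_score >= threshold else None

debunked = ["drinking bleach cures covid", "5g towers spread the virus"]
print(match_debunked("Drinking bleach cures covid, share this!", debunked))
```

A production fact checker would swap the overlap score for a trained semantic model and route matches to human reviewers rather than acting automatically, but the shape of the pipeline is the same: retrieve the nearest known-false claim, then decide whether the post repeats it.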
From our time with Professor Meyer, a number of other important points stood out. In particular, she clarified that there is a lack of consistency in official policy responses to disinformation. The complexities of the problem are substantial: where domestic governments have attempted to pass legislation, there is an underlying problem of ensuring collective action from other governments. The globalised nature of the internet means there is a recurring issue of intellectual property rights, which stretches to the ownership and jurisdiction of digital information. Importantly, the regulation of digital information has not been subject to any official international or cooperative legal agreement, which makes a collective response to the problem of false information difficult.
Whilst the actions of governments are of course critical, private tech companies, with their global reach, have the capacity to act in a more coordinated manner. Yet as suggested above, there is also variation in their responses. The starkest contrast is demonstrated by the diverging responses of Facebook and Twitter to the disinformation problem. Facebook has demonstrated its inclination to take a hands-off approach, with CEO Mark Zuckerberg indicating that the company has no intention of being the ‘arbiter of truth of everything that people say online’. In practice, this means that Facebook is unlikely to flag or remove content that could be false. In contrast, Twitter has shown that it will actively flag, and in some instances remove, false content spread on its platform. More specifically, Twitter deploys a machine-learning spam detector that automatically flags content deemed factually incorrect. The most recent examples of the spam detector in action have come during the Covid-19 pandemic, during which Twitter has flagged tweets spreading factually incorrect information. If information contained in a tweet directly contradicts health advice from official bodies such as governments and verified health services, it is likely to be flagged as suspicious.
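The policy contrast described above can be sketched in a few lines of code. This is purely illustrative, not either company's actual system: it assumes an upstream detector has already produced a verdict on a post, and shows how a hands-off policy and an active-flagging policy map the same verdict to different moderation actions. All names and labels are invented.

```python
# Toy illustration of two moderation policies applied to the same
# detector verdict. "hands_off" mirrors a leave-it-up stance; in
# "active_flagging", disputed content is labelled (and, in some
# policies, severe cases would instead be removed).
from enum import Enum

class Action(Enum):
    LEAVE_UP = "leave up"
    FLAG = "flag as disputed"

def moderate(verdict, policy):
    """Map a detector verdict ('ok', 'false', or
    'contradicts_health_advice') to an action under the given policy."""
    if verdict == "ok":
        return Action.LEAVE_UP          # nothing to act on
    if policy == "hands_off":
        return Action.LEAVE_UP          # no "arbiter of truth" stance
    return Action.FLAG                  # active policy: label the post

print(moderate("contradicts_health_advice", "hands_off"))       # leave up
print(moderate("contradicts_health_advice", "active_flagging")) # flag
```

The point of the sketch is that the technical detection step can be identical across platforms; what diverges is the policy layer that decides what to do with a positive detection.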
But what does all this mean for Polis? Certainly, Polis Analysis is in no position to directly influence the actions of governments and private technology companies. But we are ourselves a third party in a position to make a positive contribution to the fight against disinformation. Simply by producing fact-based analysis, aimed at subscribers who want to become better informed about the political events that shape their lives, we provide an alternative to information based on false claims. These events should not be interpreted in a partisan way, but rather explained with the full facts, enabling readers to form their own conclusions. Finally, as a startup, Polis Analysis is always looking for new ways to inform readers on how best to respond to the problem of disinformation. As a subscriber to Polis Analysis, you can expect content that directly addresses how to identify and respond to disinformation and fake news.