Computational propaganda is the use of algorithms, automation, and big data analytics to purposefully disseminate manipulative and misleading messages over social media networks. News and information shared on social media platforms have come to play a major role in shaping popular narratives on politics, culture and economics in connected communities across the world. In this context, actors with vested interests have sought to interfere with democratic processes and manipulate public opinion for political gain. The last few years have also seen the advent of automated entities, or bots, that deliberately amplify misinformation and hate speech and promote polarisation within society. Misinformation on social media has thus emerged as one of the most serious threats to democratic processes across the world. Democracy itself is under assault from foreign governments and internal threats, such that democratic institutions may not continue to flourish unless social data science is used to put our existing knowledge and theories about politics, public opinion, and political communication to work. The Computational Propaganda project seeks to answer these fundamental questions: How are algorithms and automation used to manipulate public opinion during elections or political crises? What are the technological, social, and psychological mechanisms by which we can encourage political expression but discourage opinion herding or the unnatural spread of extremist, sensationalist, or conspiratorial news? What new systems of scholarship can deliver real-time social science about political interference, algorithmic bias, or external threats to democracy?
In this context, the Computational Propaganda project tracks important moments in public life, such as elections and referenda, in order to measure the proportion of misinformation circulating on social media, particularly during election campaign periods, and to identify the groups of public pages or accounts with common interests that have been responsible for sharing this content in large volumes. Defending the public sphere requires a better understanding of digital citizenship and civic engagement in these domains: fake news, the deliberate spread of misinformation through (non-satirical) news stories with no basis in fact, often for profit through click-through advertising revenue; online hate speech, in particular misogyny and racism, aimed at public figures from specific political or religious groups, or based on ethnicity, gender, or sexuality; personalised political advertising, as used in large-scale data-driven campaigns, delivering targeted but hidden interventions; and political bots, or computational propaganda, where automated (or partly automated) social media accounts are used to manipulate opinion and disrupt election campaigns. In an increasingly connected world these techniques have been used to sow discord and deepen fault lines within societies. It is therefore of crucial importance to scientifically study the role that political propaganda on social media platforms plays in shaping public opinion.
The overall objectives of the project are (a) to identify sources of misinformation that have circulated widely on social media platforms during important moments in public life, (b) to identify the main actors or groups that have disseminated these sources on social media, and (c) to identify accounts on Twitter that have engaged in suspiciously high-frequency tweeting.
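As an illustration of objective (c), high-frequency screening of this kind can be sketched as a simple daily-rate heuristic: flag any account whose average number of posts per active day exceeds a cut-off. The threshold value, the field names (`user`, `created_at`) and the function itself are hypothetical choices for this sketch, not the project's actual pipeline:

```python
from collections import defaultdict
from datetime import datetime

# Illustrative cut-off: accounts averaging this many posts per day
# are flagged as suspiciously high-frequency.
HIGH_FREQUENCY_THRESHOLD = 50

def flag_high_frequency_accounts(tweets, threshold=HIGH_FREQUENCY_THRESHOLD):
    """Return {user: avg_tweets_per_active_day} for accounts at or
    above `threshold`. `tweets` is an iterable of dicts with assumed
    keys 'user' and 'created_at' (ISO 8601 timestamp string)."""
    active_days = defaultdict(set)   # user -> set of dates seen
    tweet_count = defaultdict(int)   # user -> total tweets
    for t in tweets:
        day = datetime.fromisoformat(t["created_at"]).date()
        active_days[t["user"]].add(day)
        tweet_count[t["user"]] += 1
    return {
        user: tweet_count[user] / len(days)
        for user, days in active_days.items()
        if tweet_count[user] / len(days) >= threshold
    }
```

In practice a rate-based heuristic like this only surfaces candidates for manual review, since prolific human accounts can also cross a fixed threshold.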
In order to study the problem of misinformation and effectively disseminate our findings to voter groups, the team developed an innovative method of undertaking short studies on large volumes of data and publishing the results in the form of data memos on our dedicated website, hosted by the Oxford Internet Institute at the University of Oxford. These memos have succeeded in raising awareness of the problem of misinformation and of the impact that the spread of political propaganda has on democratic processes in Europe and around the world. They have been covered extensively by major international news organisations, and team members have been invited to panel discussions at leading conferences and workshops, government-led expert committees, parliamentary and senate hearings, and meetings with industry leaders and civil society groups.
Our data memos have covered parliamentary elections in the UK, the Brexit referendum, and elections in France, Germany and Sweden. We have also studied the role of misinformation during the 2016 US presidential election and the 2018 US midterm elections. Beyond these countries, our work has covered major countries in Latin America, including Mexico and Brazil. In all these memos, we have collected purposefully sampled public data from social media platforms and analysed it to identify sources of misinformation, the groups who have spread these sources, and the Twitter accounts that have engaged in high-frequency tweeting to amplify this content. We have published our data memos before important political events in order to educate voters on the types of information they have been exposed to during the crucial period leading up to an election.
In addition to data memos, we have also published a number of working papers on political misinformation and computational propaganda. Our wave of reports examined computational propaganda and misinformation in the following countries: Brazil, Canada, China, Germany, Poland, Russia, Taiwan, Ukraine and the United States. Each case study was an investigation into digital misinformation in domestic politics, with a special focus on the role of automated and algorithmic manipulation. Following the publication of the report, the team organised a worldwide briefing tour, holding presentations in London, Washington, DC, and Palo Alto, where many of the major social media companies are headquartered. This effort brought the problem of misinformation and computational propaganda to public attention and created the foundation for our subsequent work.
Our consolidated report on these country studies identified distinct global trends in computational propaganda. In many of these countries, social media platforms play a crucial role in political participation and in the dissemination of news content. The report found that these platforms have emerged as the primary media over which young people develop their political identities, particularly in countries where companies like Facebook are effectively monopoly platforms for public life. Further, in several democracies the majority of voters use social media to share political news and information, especially during elections. Even in countries where only a small proportion of the public has regular access to social media, such platforms are still fundamental infrastructure for political conversation among journalists, civil society leaders, and political elites. The report therefore concluded that social media platforms are actively used as tools for public opinion manipulation, though in diverse ways and on different topics, and that, through the use of sophisticated data analytics tools, computational propaganda is deployed either through broad efforts at opinion manipulation or through campaigns targeted at particular segments of society.
The Computational Propaganda project has been among the first to systematically study the spread of misinformation on social media, in the context of important moments in public life such as elections and referenda. During the course of our studies in Europe we discovered that computational propaganda is a global phenomenon, with bad actors in different countries regularly adopting and learning from manipulation methods that have been successful in other country contexts. This research therefore represents one of the most comprehensive studies of computational propaganda undertaken by a single research project. The effort has required the collaboration of a diverse group of researchers, including political scientists, sociologists, media scholars and computer scientists, and is a unique multidisciplinary effort to scientifically study this problem using and extending state-of-the-art tools from different disciplines. The research effort has been very agile in adapting its methods to the different cultures of social media use that prevail in various countries. Systematic analysis of large volumes of social media posts has required a unique combination of qualitative and quantitative methods, constantly updated as we studied a range of political events in diverse countries. For our computational analysis, the project has worked with data sourced from social media platforms during the crucial last few weeks of political campaigning, making our reports definitive analyses of the sources of misinformation circulating on these platforms. Moreover, the reports have analysed suspicious high-frequency trending patterns of politically relevant hashtags on Twitter, which become active only a few days prior to political events.
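The hashtag trend analysis described above can be approximated by a simple burst detector over hourly volumes: count tweets mentioning a hashtag per hour, then flag hours whose volume far exceeds the series average. The field names (`text`, `created_at`), the spike factor and both functions are illustrative assumptions for this sketch, not the project's actual method:

```python
from collections import Counter
from datetime import datetime

def hourly_hashtag_volume(tweets, hashtag):
    """Count tweets mentioning `hashtag` per hour. `tweets` is an
    iterable of dicts with assumed keys 'text' and 'created_at'
    (ISO 8601 timestamp string); matching is case-insensitive."""
    counts = Counter()
    for t in tweets:
        if hashtag.lower() in t["text"].lower():
            hour = datetime.fromisoformat(t["created_at"]).strftime("%Y-%m-%dT%H")
            counts[hour] += 1
    return counts

def detect_spikes(counts, factor=5.0):
    """Return hours whose volume exceeds `factor` times the series
    mean -- a crude proxy for the sudden bursts of coordinated
    amplification that appear shortly before political events."""
    if not counts:
        return []
    mean = sum(counts.values()) / len(counts)
    return [h for h, c in sorted(counts.items()) if c > factor * mean]
```

A mean-based factor is the simplest possible baseline; a production analysis would compare against a rolling window or a seasonal baseline to avoid flagging ordinary daytime peaks.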
The project has been among the first to examine how governments across the world are using some of these techniques to manipulate domestic audiences and to retain or secure state power. Further, our reports have highlighted how these tactics are used to silence and discredit voices critical of governments, damaging civil discourse and eroding public faith in democratic processes. The project has also developed novel methods of collaborating with and learning from a diverse group of stakeholders, including fellow academics, news organisations, civil society organisations, policy makers and industry leaders, to ensure the widest possible dissemination of our research findings. The project has thus established new ways of studying computational propaganda and, in doing so, has pushed the boundaries of political science and computational social science methods.
From our work, it is clear that many democratic norms, even in established democracies in Europe, are being challenged by technological innovations that allow political actors to manipulate public opinion. Given the affordances of new social media platforms and the rapid development of artificial intelligence (AI), we recognise that there is a crucial need to drive forward innovation in political theory and to study the impact on voter groups of misinformation campaigns powered by advanced AI technologies. From our extensive study of elections in a number of democracies, we have seen that news content shared over social networks during key moments in public life can shape public opinion around important social issues, including health, immigration, technology and education. Given that it is now possible to develop targeted campaigning messages for segmented voter groups, it is important to identify communities that might be particularly vulnerable to misinformation campaigns: communities with low levels of media literacy, limited awareness of the technological tools used to push propaganda, and a lack of access to verified sources of information. While we are still grappling with the challenges posed by misinformation, these developments make continued research into computational propaganda and its effects on democratic processes all the more urgent.
More info: https://comprop.oii.ox.ac.uk/.