Pandemics are scary. There is nothing worse than an invisible enemy to spread anxiety. Information is partial and contradictory. Rumors and misinformation flourish. People have to grapple daily with uncertainty and fear. Such a context is a breeding ground for conspiracy theories and their harmful consequences.
Among the interesting phenomena observed so far is the attempt to incorporate the pandemic into already established conspiracy and fringe narratives, such as the anti-vax and QAnon ones. The first is rather popular in Italy, where a 2017 law on mandatory child vaccinations sparked a heated debate, triggered an information crisis, and fueled the spread of misinformation on social media. The QAnon conspiracy theory, by contrast, is not so common in Italy. However, the situation could change.
Analyzing 12 months of data from Google Trends, Twitter and Facebook, I have observed an unequivocal rise in interest in QAnon theories in Italy during the last period of the Covid-19 quarantine.
Let me be clear: at the moment the number of tweets and posts is small. Nevertheless, if the peak keeps growing…
The following table includes the most active Facebook entities (pages, public groups, or verified profiles) and Twitter accounts in the collected data sets.
The following Google Sheets include some examples of the Facebook posts and tweets with the highest engagement (data are divided into the pre-pandemic period, before 2019-12-31; the pandemic period, between 2019-12-31 and 2020-03-15; and the peak-of-interest period, after 2020-03-15).
I’ll keep monitoring the situation: let’s see what happens!
Political and health misinformation are huge problems today. Both can have nefarious consequences for individuals, citizens, institutions and society. As my co-authors and I have recently observed in our paper “Blurred Shots: Investigating the Information Crisis Around Vaccination in Italy”, politicization can increase the circulation of misinformation on vaccines “both directly and by opening the door to pseudoscientific and conspiratorial content (…) published by problematic news sources”. It is therefore interesting to further analyze the link between politicization and misinformation.
Italy is an excellent case for studying the intertwining of politics and health misinformation, since it has been at the center of both political and health information crises during the general election (in March 2018) and the debate on the law on mandatory child vaccinations (around July 2017).
I analyzed about 500,000 tweets published between 2016-02-01 and 2019-01-31 (three years): 18 months before and after July 31, 2017, when the Italian law on mandatory vaccinations came into force (you can read the working paper here).
The peak of discussions during the political debate was very clear, and a structural break analysis confirmed that the usual flow of conversation changed dramatically around that time.
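The intuition behind a structural break test is easy to show. Below is a minimal Chow-type test sketched in Python (the actual analysis was done in R, and the data here are synthetic): it compares a single trend fitted over the whole series against separate trends fitted before and after a candidate break date.

```python
# Minimal Chow-type structural break test on a daily count series.
# Synthetic data and a hypothetical break point; the original study used R.

def ols_rss(xs, ys):
    """Residual sum of squares of a simple linear fit y = a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

def chow_statistic(series, break_idx):
    """F statistic comparing one pooled fit vs. separate fits before/after break."""
    t = list(range(len(series)))
    rss_pooled = ols_rss(t, series)
    rss_1 = ols_rss(t[:break_idx], series[:break_idx])
    rss_2 = ols_rss(t[break_idx:], series[break_idx:])
    k = 2                      # parameters per segment (intercept + slope)
    n = len(series)
    num = (rss_pooled - (rss_1 + rss_2)) / k
    den = (rss_1 + rss_2) / (n - 2 * k)
    return num / den

# Synthetic series: flat baseline, then a jump in volume after day 50.
series = [100 + (i % 5) for i in range(50)] + [400 + (i % 5) for i in range(50)]
f = chow_statistic(series, 50)
print(f > 10)  # a large F suggests a structural break around day 50
```

A large F statistic means the two-segment model fits far better than a single trend, which is exactly what a sudden change in the flow of conversation produces.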
I categorized over 1,000 information sources shared by Twitter users and used a combination of network analysis, correspondence analysis and clustering to identify the groups of information source categories frequently shared together.
The resulting map was unequivocal: the Twitter information environment related to vaccines was polarized and characterized by information homophily. The information sources openly promoting anti-vax perspectives were clustered with blacklisted domains (included in the blacklists of the main Italian debunkers), alternative therapy sites, and conspiracy blogs. At the opposite pole of the Correspondence Analysis axis (this dimension accounts for over 70% of the total variance) we find scientific information sources, those of health organizations, as well as official sources. Political information sources (such as websites of political parties and advocacy organizations) lie in between these two poles.
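The input to this kind of clustering is simply which source categories tend to be shared by the same users. A minimal pure-Python sketch of that co-sharing projection (with hypothetical categories and users; the original pipeline combined network analysis, correspondence analysis and clustering in R):

```python
from collections import Counter
from itertools import combinations

# Hypothetical (user, source_category) pairs extracted from tweets.
shares = [
    ("u1", "anti-vax blog"), ("u1", "conspiracy blog"),
    ("u2", "anti-vax blog"), ("u2", "alternative therapy"),
    ("u3", "scientific"),    ("u3", "health organization"),
    ("u4", "scientific"),    ("u4", "official"),
]

# Group categories by user, then count how often two categories are
# shared by the same user: this co-occurrence structure is what
# correspondence analysis and clustering then summarize.
by_user = {}
for user, cat in shares:
    by_user.setdefault(user, set()).add(cat)

cooccurrence = Counter()
for cats in by_user.values():
    for a, b in combinations(sorted(cats), 2):
        cooccurrence[(a, b)] += 1

print(cooccurrence[("anti-vax blog", "conspiracy blog")])  # 1
print(cooccurrence[("anti-vax blog", "scientific")])       # 0
```

In a homophilic environment like the one described above, the anti-vax, conspiracy and alternative-therapy categories co-occur with each other and almost never with scientific or official sources.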
I measured the spread of health misinformation by using the tweets that shared sources included in the cluster of problematic information sources, and found that the time series of tweets sharing misinformation was characterized by a structural break during the political debate, just like the whole series of tweets. It seemed, therefore, that the politicization of the topic fostered not only the conversations on the topic in general, but also the spread of health misinformation.
Since the spread of misinformation can clearly be associated with general attention and, more specifically, with media attention to the topic (misinformation sources can try to “ride the wave” of this interest), I used a Vector Autoregressive Model to further test the hypothesis. I found that politicization, operationalized through a proxy variable indicating the structural break, was associated with an increase in the quantity of misinformation even when keeping constant the quantity of news media coverage of the previous days, which I measured through Media Cloud (I found roughly the same results using the time series of tweets sharing information sources categorized as news media).
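The core of this test can be illustrated with a single lagged regression: today's misinformation volume regressed on its own lag, yesterday's news coverage, and a dummy marking the structural break. This is only a pure-Python sketch on synthetic data (the study itself estimated a proper VAR model in R):

```python
# Sketch of the VAR idea: if the break dummy stays positive after
# controlling for lagged misinformation and lagged news coverage,
# politicization adds something beyond media attention.

def solve(A, b):
    """Gaussian elimination with partial pivoting for A x = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def ols(X, y):
    """beta = (X'X)^-1 X'y via the normal equations."""
    k = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    return solve(XtX, Xty)

# Synthetic daily series: misinformation jumps after the break (day 50),
# while news coverage stays flat.
misinfo = [10.0 + (t % 3) for t in range(50)] + [60.0 + (t % 3) for t in range(50)]
news = [100.0 + (t % 7) for t in range(100)]
break_dummy = [0.0] * 50 + [1.0] * 50

X = [[1.0, misinfo[t - 1], news[t - 1], break_dummy[t]] for t in range(1, 100)]
y = [misinfo[t] for t in range(1, 100)]
beta = ols(X, y)
print(beta[3] > 0)  # break coefficient is positive: more misinformation after the break
```

A positive coefficient on the break dummy, with lagged coverage held constant, is the pattern the study reports.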
Moreover, while misinformation, on average, grew dramatically during the political debate (to about six times its level in the period before the debate) and remained elevated afterwards (about twice the pre-debate level), the number of tweets sharing scientific and health information sources increased much less during the central period of the debate (only about twofold) and even decreased after that time.
This study is only a small step toward understanding the intertwining of health and political misinformation. There is still much work to be done: other case studies and more sophisticated measures are needed. In the meantime, you can read the full paper of the study briefly described above here.
Today we have released CooRnet, an R package developed for detecting coordinated link sharing behavior on Facebook and Instagram.
Given a list of URLs, the package queries the CrowdTangle API link endpoint to retrieve their shares performed by Facebook and Instagram public pages, groups and verified accounts, and identifies the networks involved in coordinated activity around those links.
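At its core, the detection reduces to flagging accounts that repeatedly share the same URLs within an unusually short time window. The following is a simplified pure-Python sketch of that idea, with hypothetical data and a fixed window (CooRnet itself works in R, queries CrowdTangle, and estimates the coordination interval from the data):

```python
from itertools import combinations

def coordinated_pairs(shares, window_seconds=30, min_repetitions=2):
    """Return entity pairs that shared the same URL within `window_seconds`
    of each other on at least `min_repetitions` distinct URLs.

    `shares` is a list of (entity, url, unix_timestamp) tuples."""
    by_url = {}
    for entity, url, ts in shares:
        by_url.setdefault(url, []).append((entity, ts))

    counts = {}
    for posts in by_url.values():
        near = set()
        for (e1, t1), (e2, t2) in combinations(posts, 2):
            if e1 != e2 and abs(t1 - t2) <= window_seconds:
                near.add(tuple(sorted((e1, e2))))
        for pair in near:  # count each URL at most once per pair
            counts[pair] = counts.get(pair, 0) + 1

    return {pair for pair, n in counts.items() if n >= min_repetitions}

# Hypothetical shares: pageA and pageB post the same links seconds apart.
shares = [
    ("pageA", "http://example.com/1", 1000), ("pageB", "http://example.com/1", 1010),
    ("pageA", "http://example.com/2", 2000), ("pageB", "http://example.com/2", 2005),
    ("pageC", "http://example.com/1", 9000),  # late share, not coordinated
]
print(coordinated_pairs(shares))  # {('pageA', 'pageB')}
```

Entities linked by enough of these rapid co-shares form the connected components that the package reports as coordinated networks.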
The basic functions of CooRnet are augmented with other useful functions to create, for instance, the graph of the coordinated networks or the dataset of the most shared URLs.
In our studies on the Italian elections, we found that networks involved in coordinated link sharing behavior are consistently associated with the spread of misinformation on Facebook. The two figures below show the proportion of blacklisted domains shared by coordinated and non-coordinated entities, and the proportion of problematic entities (flagged by Avaaz) among the coordinated and non-coordinated entities.
A short description of how to use the R package CooRnet, along with installation instructions, can be found in the GitHub repository where it is currently hosted and on the dedicated website, which also offers some useful tutorials (a work in progress) introducing the functions of the package.
In the report we analyzed “coordinated inauthentic behavior”, a concept only briefly defined in Facebook’s public statements, which we nevertheless found useful for framing our research in the light of the existing scientific literature.
Based on a combination of the CrowdTangle API (a tool for accessing Facebook and other social media data) and two datasets of Italian political news stories published in the run-up to the 2018 Italian general election and the 2019 European election, we developed a method that led to the identification of several networks of pages/groups/verified public profiles (“entities”) that shared the same political news stories on Facebook within a very short period of time (10 networks composed of 28 entities in 2018, and 50 networks composed of 143 entities in 2019). We called this behavior “coordinated link sharing”. You can find the R script to implement the method here.
By analyzing the social media profiles of these coordinated entities, we observed that while some of them were clearly political, others presented themselves as entertainment venues, despite sharing political content too. Since the political news stories shared by these non-political entities can reach a broad audience that is largely unguarded against attempts to influence it, we describe their behavior as “inauthentic” (see the following tweet to get an idea of what we mean by “inauthentic”).
We identified a total of 60 networks (171 pages/public groups) that shared the same political news story in a very short period of time. Of those 171 my personal favorite is a page called “Professione”. Can you spot the outsider in this recent sample of their timeline 👇 pic.twitter.com/nuR1z09EUy
Our analyses showed that the news shared by the coordinated networks received higher Facebook engagement than other news included in our dataset. Further analyses are needed to understand the impact of coordinated activities on engagement and public opinion.
We also found that much of this news boosted anti-immigration and far-right propaganda (primarily League-friendly propaganda) and that several of the news outlets shared by these networks, as well as some of the Facebook pages involved in coordinated behavior, had already been blacklisted by fact-checkers.
In recent years technological beings have been entering our individual and social lives in ever increasing numbers. From virtual personal assistants like Microsoft Cortana, Apple Siri, Amazon Alexa and Google Home to robots working with us and for us, artificial creatures are leaving both the fictional world they inhabited for centuries and the industrial and aeronautical fields in which they used to be applied, to increasingly share our living space.
The study analyzed how, and how much, robot-related topics have been covered and represented in the headlines of Italian online newspapers throughout recent years, relying on text mining techniques for unsupervised text classification developed in R. The news stories were collected through Media Cloud and their Facebook engagement retrieved through the Facebook Graph API, using an approach directly inspired by the Mapping Italian News project.
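The usual first step of this kind of unsupervised classification is turning headlines into weighted term vectors, so that terms appearing in every headline carry no signal while distinctive terms do. A minimal TF-IDF sketch in Python with invented English headlines (the study worked on real Italian headlines in R):

```python
import math
from collections import Counter

# Hypothetical headlines standing in for the real Italian corpus.
headlines = [
    "robot steals jobs in factory",
    "robot jobs automation fears grow",
    "robot teaches children in school",
]

def tfidf(docs):
    """Per-document TF-IDF weights, the usual starting point for
    unsupervised classification of short texts such as headlines."""
    tokenized = [doc.split() for doc in docs]
    df = Counter(term for doc in tokenized for term in set(doc))
    n = len(docs)
    weights = []
    for doc in tokenized:
        tf = Counter(doc)
        weights.append({t: (c / len(doc)) * math.log(n / df[t]) for t, c in tf.items()})
    return weights

w = tfidf(headlines)
print(w[0]["robot"])       # 0.0: appears in every headline, so it carries no signal
print(w[0]["steals"] > 0)  # rare terms get positive weight
```

Clustering or topic modeling is then run on these vectors, which is how themes such as the work skills of robots surface from the headlines.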
The results support the idea that online media have been increasingly covering robot-related news stories, and that the online public has been increasingly affected by this coverage. The text analysis revealed that the most relevant topic in online news media concerned the work skills of robots, arousing partly astonishment and partly concern about job losses.
The news story with the highest engagement concerns the experiment in which two robots started “talking to each other” in an unknown language (Facebook engagement: 67,819). It is followed by news on issues such as the future, the human-robot relationship, and work-related controversies and policies.
Although fears that robots might steal human jobs and become autonomous and uncontrollable seem to persist, news representing robots as a threat is less common than expected. This might support the idea that threatening representations of robots (Mori, 1970) are not so widespread or engaging.
This was not a specific area of inquiry of the current study, and further research is needed to assess attitudes toward robots and how and why they have changed through the years. However, some observations are possible. For example, there might be a lack of awareness regarding risks associated with the use of robots – for example war robots – due to scarce media coverage of the topic. However, a stronger explanation has to do with socialization practices promoting human-robot coexistence: a lot of news revolves around the use of robots in teaching activities, the entertainment industry, festivals, exhibits, and the personal and familial sphere. These activities promote a gradual, positive integration of robots in everyday life. Considering that many people still have limited direct experience with robots, media play a central role in promoting a positive representation of robots. Finally, a significant role is played, and will be played in the future, by marketing activities aimed at promoting positive attitudes toward consumer robotics products.
From a general perspective, the results show that online news stories on robots have increased over time, doubling over the five years considered in the study. Facebook engagement follows the same path, validating the idea of an increasing interest in robots among the Italian online public and suggesting that robots no longer appear to be a topic people perceive as far from their lives. In turn, familiarity with robots is reinforced by their increasing presence in online news stories.
On June 24, 2019 the University of Urbino hosted the first AoIR Flashpoint Symposium. I am really happy to have contributed to the success of the event with the other members of the organizing committee.
The Flashpoint Symposium is a new format of academic meeting that, as the president of the Association of Internet Researchers Axel Bruns said, aims to respond “more rapidly to the key issues of the day than conventional conferences, journals, and books are able to do”.
The title of the Flashpoint Symposium was “Below the Radar: Private Groups, Locked Platforms and Ephemeral Contents”. The event focused on the problems researchers face in accessing social media data and on the issues of studying social media content in an environment marked by an increasing amount of ephemeral user-generated content.
The AoIR Flashpoint Symposium was kicked off with the keynote speech of the digital anthropologist Crystal Abidin, and closed by Rebekah Tromble.
Crystal Abidin presented a wealth of engaging research materials and an interesting perspective on how danah boyd’s concept of networked publics could be revisited in the light of the recent transformations of the Internet.
Key slides from my #aoirfps19 keynote on 'visibility labour' & 'refraction publics'. Still work-in-progress!
Cite: Abidin, Crystal. 2019. “Public shaming, Vigilante trolling, and Genealogies of Transgression on the Singaporean Internet.” AoIR Flashpoint Symposia, Urbino. 24 Jun. pic.twitter.com/zHBu9QapzX
The closing keynote speech was delivered by Rebekah Tromble, who addressed the issue of research ethics in a scenario where social media data are increasingly difficult for researchers to access, urging scholars to think critically about the social importance of research questions and about the ethics of data treatment and conservation.
On Twitter, sources mainly shared by supporters of populist parties (the Five Star Movement and the League) are characterized by higher levels of insularity compared to those shared by supporters of other parties.
On Facebook, news items published by highly insular sources receive a higher number of shares per comment.
News stories presenting a positive framing of the ‘cyber party’ Five Star Movement received a higher number of shares per comment compared to items presenting the Movement in a negative light, while the opposite is true for stories on all other political parties (see the figure below).
Extending the computational method first introduced by Benkler, Faris, Roberts and others (see here and here), the paper makes use of Twitter data to measure partisan attention to news media sources in a multi-party political system.
To validate the method we compared our results with those obtained through a survey (ITANES), finding remarkable similarity (see figure below).
Furthermore, we analyzed the degree of polarization of the Italian online news media system in the lead-up to the 2018 Italian election, finding a moderate level of polarization.
We also found that populist parties’ online communities relied on news sources characterized by a higher level of insularity (i.e. mainly shared on Twitter by their own partisan community) than non-populist ones.
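The insularity of a source can be summarized very simply: the share of its tweets that come from its single largest partisan community. A minimal sketch with hypothetical community labels (the paper's actual measurement and data are in the replication materials linked below):

```python
from collections import Counter

def insularity(community_labels):
    """Share of a source's tweets coming from its single largest
    partisan community; 1.0 means the source is shared by one
    community only. `community_labels` lists the community of
    each user who shared the source."""
    counts = Counter(community_labels)
    return max(counts.values()) / sum(counts.values())

# Hypothetical sources: one shared almost only by a single community,
# one shared evenly across communities.
print(insularity(["M5S"] * 9 + ["PD"]))           # 0.9
print(insularity(["M5S", "PD", "League", "FI"]))  # 0.25
```

A source close to 1.0 circulates almost exclusively inside one partisan community, which is the pattern we found to be stronger for sources shared by populist parties' communities.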
Replication data and R code used in the study can be found here, while the paper can be read here.