A project on coordinated behavior at the Digital Methods Summer School and Data Sprint 2024

This week (July 1–5, 2024) I facilitated, together with Richard Rogers, a project on coordinated inauthentic behavior on Facebook at the Digital Methods Summer School and Data Sprint 2024.

In recent years, an increasing number of studies have focused on analyzing coordinated communication networks across social media platforms. These networks use coordination among various social media accounts to influence and manipulate the user base and the platforms themselves (Chan, 2022). This phenomenon is linked with several problematic activities, including the propagation of disinformation (Giglietto et al., 2020) and more extensive state-backed information operations.

Coordinated behaviour can be undertaken by both covert and less covert actors with varying objectives, ranging from mimicking organic engagement and support for content to distributing content across social media pages through multi-actor broadcasting and re-posting. In this project, we aim to analyse this diverse array of coordination forms, highlighting the analytical ambiguity of approaches that measure inauthenticity through platform research affordances such as (timed) link-sharing and reposting.

Coordinated inauthentic behaviour campaigns on social media are driven by actors (and perhaps bots) pushing the same or related content in synchrony so that it gains virality, or at least reaches a threshold of interactions and impressions that signals popularity. The primary purpose is ambient: to ‘flood the space’, thereby exerting, or appearing to exert, a large measure of influence. The broader aim, for state and other political actors, could well be to develop a full-fledged counter-program to accrue symbolic power and assert political dominance (McIntyre 2018).

Research into coordinated inauthentic behaviour has demonstrated its operational as well as geographical breadth, but also its platform dependence and orientation toward single-platform studies (Thiele et al. 2023). It has been tied to corona-politics (Magelinski & Carley 2020), election misinformation (Nizzoli et al. 2021), protest repression in authoritarian regimes (Kulichkina et al. 2024), and cryptocurrency manipulation (Terenzi 2023). It is far-flung geographically but rather platform-specific in its targeting. Research has described coordinated networks on social media in Australia (Graham et al., 2021), Nigeria (Giglietto et al. 2022), South Korea (Keller et al., 2020), the Philippines (Yu 2022), Brazil and France (Gruzd et al. 2022).

The algorithmic architecture of each platform suggests coordination efforts around certain digital objects. For example, coordination on Twitter typically aims to push a hashtag into the trending topics, occasionally with ‘weaponised bots’ (Graham et al. 2021). On Facebook, website URLs are typically placed in posts on Pages and Groups, where the idea is to elicit emotive reactions, long comment threads and further sharing, which is the platform formula for algorithmic amplification (Merrill & Oremus 2021). The Facebook Feed is thereby persuaded to elevate these shared links, now charged with emotive currency.

Through a thick description of a set of international coordinated networks on Facebook, this project advances the discourse on coordinated inauthentic behaviour on social media platforms by extending its study beyond influence operations. Based on techniques developed for its technical analysis – namely CooRnet (Giglietto, Righetti, Rossi, 2020) – we undertake an empirical study on Facebook that surfaces not only such operations but also activist networking, viral marketing, fan support, analytics-driven publishing and others in the service of mobilising attention.
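To make the underlying logic concrete, here is a minimal sketch, in plain R, of the time-based co-sharing test this family of tools builds on: flagging pairs of accounts that repeatedly share the same URL within a short interval. It is an illustration of the general approach, not CooRnet’s actual code; the toy data, the 60-second interval, and the repetition threshold are assumptions for the example.

```r
library(dplyr)

# Hypothetical input: one row per share, with the sharing account,
# the shared URL, and the time of sharing.
shares <- tibble::tribble(
  ~account, ~url,                   ~shared_at,
  "page_a", "http://example.org/1", as.POSIXct("2024-07-01 10:00:05", tz = "UTC"),
  "page_b", "http://example.org/1", as.POSIXct("2024-07-01 10:00:20", tz = "UTC"),
  "page_c", "http://example.org/1", as.POSIXct("2024-07-01 12:30:00", tz = "UTC"),
  "page_a", "http://example.org/2", as.POSIXct("2024-07-02 09:00:00", tz = "UTC"),
  "page_b", "http://example.org/2", as.POSIXct("2024-07-02 09:00:30", tz = "UTC")
)

coordination_interval <- 60  # seconds within which two shares count as coordinated (assumed)
min_repetition        <- 2   # minimum number of near-simultaneous co-shares per pair (assumed)

coordinated_pairs <- shares %>%
  inner_join(shares, by = "url", suffix = c("_x", "_y"),
             relationship = "many-to-many") %>%
  filter(account_x < account_y,
         abs(as.numeric(difftime(shared_at_x, shared_at_y, units = "secs"))) <= coordination_interval) %>%
  count(account_x, account_y, name = "co_shares") %>%
  filter(co_shares >= min_repetition)

coordinated_pairs
# the surviving pairs can be read as edges of a coordination network
```

The pairs that survive the filter can be treated as edges of a coordination network and then examined qualitatively, which is exactly where the distinction between influence operations and, say, activist or fan coordination has to be drawn.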

Coordinated Inauthentic Behaviour on Facebook, Richard Rogers (UvA), Nicola Righetti (Univ Urbino Carlo Bo), Digital Methods Summer School and Data Sprint 2024

Rethinking Veganism in the Digital Age

The paper Rethinking Veganism in the Digital Age. Innovating Methodology and Typology to Explore a Decade of Facebook Discourses has now been published in Sociological Research Online, a journal of the British Sociological Association. The paper can be read on the journal website or, if you don’t have access, you can access and download the accepted version at this link.

In this paper we analyze 200,000 posts published on Facebook pages and groups mentioning veganism, articulating a typology of social media functions for veganism.

In addition, we develop a critical methodological reflection on the limitations and potential of “big data” for studying the phenomenon.

Mediation and Moderation Analysis with R

I have uploaded the materials for my Advanced Data Analysis course on Mediation and Moderation online. The instructional material can be found at this link. It is based on the excellent book by Andrew F. Hayes, Introduction to Mediation, Moderation, and Conditional Process Analysis, an invaluable resource for learning these techniques, written with rare clarity. The material includes a brief introduction to R and RStudio, a short summary of exploratory data analysis and linear regression, and sections dedicated to mediation, moderation, and mediated moderation using Hayes’ PROCESS software.
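As a small taste of what the material covers, below is a minimal base-R sketch of a simple mediation model (the “model 4” setup in Hayes’ terminology), fitted to simulated data with a percentile bootstrap for the indirect effect. PROCESS automates and extends these steps, so the data and coefficients here are purely illustrative.

```r
# Simulated data: x -> m -> y plus a direct x -> y path.
set.seed(123)
n <- 500
x <- rnorm(n)                      # predictor
m <- 0.5 * x + rnorm(n)            # mediator (path a)
y <- 0.4 * m + 0.2 * x + rnorm(n)  # outcome (paths b and c')

fit_m <- lm(m ~ x)      # mediator model: estimates path a
fit_y <- lm(y ~ x + m)  # outcome model: estimates c' (x) and b (m)

a <- coef(fit_m)["x"]
b <- coef(fit_y)["m"]
indirect <- a * b       # indirect effect of x on y through m

# Percentile bootstrap confidence interval for the indirect effect.
boot_ab <- replicate(2000, {
  i <- sample(n, replace = TRUE)
  coef(lm(m[i] ~ x[i]))[2] * coef(lm(y[i] ~ x[i] + m[i]))[3]
})
c(indirect = unname(indirect), quantile(boot_ab, c(0.025, 0.975)))
```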

The impact of Facebook’s vaccine misinformation policy on endorsements of problematic vaccine content in the Italian context

I quickly checked the impact of Facebook’s misinformation policy (announced on March 7, 2019) on the endorsements of vaccine-related posts by Italian misinformation actors, employing interrupted time series analysis (you can read about this technique in the didactic material I have made available online at https://nicolarighetti.github.io/Time-Series-Analysis-With-R/).
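For readers unfamiliar with the technique, the snippet below shows the basic form of such a model as a segmented regression in R, fitted to simulated weekly like counts; the variable names, the intervention week, and the data are illustrative and not the actual analysis.

```r
# Simulated weekly like counts with a policy introduced at week 52.
set.seed(42)
week   <- 1:104
policy <- as.integer(week > 52)   # 0 before the intervention, 1 after
since  <- pmax(0, week - 52)      # weeks elapsed since the intervention
likes  <- 500 + 2 * week - 80 * policy - 3 * since + rnorm(104, sd = 30)

its_fit <- lm(likes ~ week + policy + since)
summary(its_fit)
# `policy` captures the immediate level change at the intervention and
# `since` the change in slope afterwards; residual autocorrelation can be
# handled with gls() from the nlme package or with ARIMA-type models.
```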

There is a small instantaneous drop in likes, a slight inversion in the overall trend, and a persistent decrease in the average likes from before to after the policy implementation. The analysis is just exploratory and has several limitations (e.g., small sample size), but the effect looks clear.

The analysis is directly inspired by the recent paper by Gu et al. (2022). It is, in fact, a quick check of their findings in the Italian context: Gu, J., Dor, A., Li, K., Broniatowski, D. A., Hatheway, M., Fritz, L., & Abroms, L. C. (2022). The impact of Facebook’s vaccine misinformation policy on user endorsements of vaccine content: An interrupted time series analysis. Vaccine, 40(14), 2209-2214.

The Facebook accounts spreading misinformation that I employed in the analysis come from the MINE project: Giglietto, F., Farci, M., Marino, G., Mottola, S., Radicioni, T., & Terenzi, M. (2022, January 7). Mapping Nefarious Social Media Actors to Speed-up Covid-19 Fact-checking. https://lnkd.in/eRMzrrvP

PolarVis: Visual Persuasion in a Transforming Europe

As part of the CHANSE initiative, the Austrian Science Fund (FWF) has funded, with about €300,000, the project “PolarVis: Visual Persuasion in a Transforming Europe” (2023-2026), led by Annie Waldherr (PI) and Nicola Righetti (co-PI).

Climate change has been called the defining crisis of our time. In the last few years, millions of people have taken to the streets to demand urgent action on the escalating ecological emergency. Social media have been central to the development of the movement. For example, the virality of posts on Twitter and Instagram quickly transformed the activist Greta Thunberg into an iconic figure, attracting supportive but also openly hostile reactions. The importance of images in the movement’s online communication, and the emotions animating both the activists and those who attack them online, draw attention to the symbolic and emotional role of images for social movements. The PolarVis project will examine the role of visual content in processes of political polarization and belonging in the digital age by focusing on the intergenerational issue of climate change and the green transition.

PolarVis: Visual persuasion in a transforming Europe: The affective and polarizing power of visual content in online political discourse will be led by Annie Waldherr (PI) and Nicola Righetti (Co-PI), and supported by a postdoctoral researcher as well as a student research assistant. The funding provided by the Austrian Science Fund (FWF) totals around €300,000 over the course of the next three years. The Viennese PolarVis project is part of a large and interdisciplinary international consortium within the CHANSE initiative. The consortium is led by Alexandra Segerberg of the Department of Government at Uppsala Universitet, Sweden.

Further information on the project can be accessed here.

CooRTweet: an R package to detect coordinated behavior on Twitter

Update: this post refers to the first versions of the CooRTweet package, which is now a multi-platform package for coordinated behavior analysis. Read more here.

I have just released the beta version of CooRTweet, an R package that I developed to help detect coordinated networks on Twitter.

The CooRTweet package builds on the existing literature on coordinated behavior and the experience of previous software, particularly CooRnet, to provide R users with an easy-to-use tool for coordinated action detection.

Coordinated behavior is a relevant social media strategy employed for political astroturfing (Keller et al., 2020), the spread of inappropriate content online (Giglietto et al., 2020), and activism. Software for academic research and investigative journalism has been developed in the last few years to detect coordinated behavior, such as the CooRnet R package (Giglietto, Righetti, Rossi, 2020), which detects Coordinated Link Sharing Behavior (CLSB) and Coordinated Image Sharing on Facebook and Instagram (CooRnet website), and the Coordination Network Toolkit by Timothy Graham (Graham, QUT Digital Observatory, 2020), a command line tool for studying coordination networks in Twitter and other social media data. CooRTweet adds to this set of tools with an accessible option for R users.

Further details and the instructions for installing and using the package are available on GitHub: https://github.com/nicolarighetti/CooRTweet
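As a quick orientation, here is a minimal usage sketch. It assumes the interface of more recent versions of the package (detect_groups() and generate_coordinated_network()); the beta release described in this post may use different function names and arguments, so the GitHub README remains the authoritative reference, and the toy data below are invented for illustration.

```r
# install.packages("remotes")
# remotes::install_github("nicolarighetti/CooRTweet")
library(CooRTweet)

# Toy input: one row per post, identifying the shared object
# (e.g., a retweeted status or URL), the account, the post,
# and the share time as a UNIX timestamp.
tweets <- data.table::data.table(
  object_id       = c("rt_1", "rt_1", "rt_1", "rt_2", "rt_2"),
  account_id      = c("acc_a", "acc_b", "acc_c", "acc_a", "acc_b"),
  content_id      = c("p1", "p2", "p3", "p4", "p5"),
  timestamp_share = c(1000, 1030, 5000, 9000, 9040)
)

# Accounts sharing the same object within 60 seconds, at least twice.
groups <- detect_groups(tweets, time_window = 60, min_participation = 2)

# Project the detected co-shares into an account-to-account network.
network <- generate_coordinated_network(groups, edge_weight = 0.5)
```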

Political Advertisement and Coordinated Behavior on Social Media in the Lead-Up to the 2021 German Federal Elections

The report of the research project MINE-GE: Mapping Coordinated Inauthentic Behavior in the Lead Up to the 2021 German Federal Election has been released. During the project, which was funded by Landesanstalt für Medien Nordrhein Westfalen, we collected over 13,000 Facebook Ads, 2.5 million political posts, and 1.8 million URLs shared on Facebook, Twitter, and Instagram by parties, candidates, and other social media users in the six weeks leading up to election day, in order to monitor political social media communication, detect coordinated networks, and analyze possible micro-targeting strategies.

The report is available in English and German on the website of Landesanstalt für Medien Nordrhein Westfalen.

The Anti-Gender Debate on Social Media

The Anti-Gender Debate on Social Media. A Computational Communication Science Analysis of Networks, Activism, and Misinformation (which can be freely accessed at this link) takes into account 10 years of anti-gender communication on Facebook in Italy, and proposes a multifaceted analysis of different aspects of the debate, including activism and misinformation.

It shows that both right-wing, populist, and religious actors and pro-LGBTQI+ actors were involved in the debate, but the former received more engagement. Notably, religious accounts received even more engagement than the right-wing ones, while posts from left-wing parties’ accounts were few.

The most engaging posts against gender came from Radio Maria, a popular (and sometimes controversial) Catholic radio station. The conversation peaked in 2015, around the time of the conservative “Family Day” demonstration, but religious actors have kept paying attention to the issue.

Time series analysis suggested that Facebook posts mostly amplified an agenda set by news media following offline events. Similarly, Facebook has been used to amplify “traditional” types of activism, like petitions “against gender”.

However, an analysis through CooRnet also revealed the presence of coordinated Facebook networks spreading news stories on gender ideology, including some from websites known for spreading misinformation and low-quality, click-bait news stories.

Still on the subject of misinformation, the analysis shows that 2% of the roughly 20,000 Facebook posts analyzed associated LGBTQI+ people and organizations with paedophilia through the notion of “gender ideology”.