It is well known that bad actors lurk on the internet, seeking to harm others through misinformation, bullying, and hate speech. What is less understood, however, is how online abuse and harassment spread through misinformation can lead to real, physical violence in communities.
To understand and counter this phenomenon, faculty in Georgia Tech’s College of Computing are leading a two-year study on the impact of online violence-provoking misinformation and hate speech directed at minority populations.
Recognizing the implications for public health and safety, the Centers for Disease Control and Prevention (CDC) is funding the study through a $678,000 grant. Georgia Tech’s Srijan Kumar is the principal investigator on the CDC grant, and Munmun De Choudhury is co-PI.
“The conversations we have and the information we consume on the web really shape our behavior not just online, but in the real-world,” said Kumar, an assistant professor with the School of Computational Science and Engineering. “We need systematic evaluations and an all-hands-on-deck approach to study what sort of societal impact harmful online content has on health, equity, integrity, and safety.”
Inspiration for this study comes from the researchers’ observations of prevalent hate speech and calls for racially motivated violence on the internet and social media. This issue became more prominent during the Covid-19 pandemic.
In their grant proposal, the researchers say more than 11,000 incidents of physical violence and online aggression against Asians were reported during the pandemic. They also state that misinformation about Covid-19 led to the destruction of property and more than 800 deaths.
To understand the connection between virtual misinformation and physical violence, the study has four stated goals:
- Identify health misinformation that promotes community violence
- Map and measure prevalence of violence-provoking health misinformation across social media platforms
- Establish the causal impact of such misinformation on consumers’ reactions and intention to engage in harm
- Design mitigation and intervention strategies to reduce the prevalence of such misinformation
According to the group, social media is often “ground-zero” for health misinformation where it spreads at exceptional speed and scale. That is why the team intends to study hate speech and misinformation on social media platforms Twitter and Reddit.
“The very use of these platforms is impacting us in different ways. Sometimes these impacts are good, sometimes they are bad,” said De Choudhury, an associate professor with the School of Interactive Computing. “As we think about our wellbeing and the role of these online platforms, we cannot ignore the very fact that misinformation on those platforms is affecting our wellbeing in negative ways.”
The group selected Twitter and Reddit specifically for their prominence in online networking and their demographically diverse user bases, a fact supported by Pew Research Center statistics.
The diversity of the two social media platforms makes them the ideal ecosystem for what the group intends to observe: anti-Asian and anti-Black violence-provoking misinformation.
Given the correlation between the spread of Covid-19 misinformation and the rise in discrimination and violence toward Asian and Black communities during the pandemic, the researchers believe their findings will make a meaningful impact for people affected by this public health issue.
For example, the group points out that misinformation exploiting histories of medical racism further targeted Black communities. As a result, this both degraded trust in institutions and diminished vaccine confidence during the Covid-19 pandemic.
Purdue University Assistant Professor Laura Schwab Reese, an expert in community and behavioral health, joins Kumar and De Choudhury in the research. Together, they will collaborate with the non-profit Anti-Defamation League during the study. Backed by CDC funding, the team will develop tools to study and find solutions to violence-provoking health misinformation.
For example, the researchers will start by developing algorithms to detect health misinformation and violence-provoking hate speech targeting minorities. These algorithms will rely on the latest machine learning methodologies and social media data sets.
From there, the group can build an interactive dashboard that maps the spread of violence-provoking misinformation online and offers analytic capabilities on the visualized data for end users. This will provide quantitative insight into the causal relationship between misinformation exposure and violent attitudes toward targeted communities.
Finally, the team will provide an intervention design plan to mitigate the impact of the misinformation. This will include presenting exploratory evaluations of interventions and even creating new social media-based intervention tools that can interrupt the spread of misinformation.
By leveraging computing methods and interdisciplinary collaboration, the group is poised to make online and physical communities safer places for all.
“The reason we are partnering with the Anti-Defamation League and public health researchers is because community engagement is paramount in this work,” De Choudhury said. “One of the unique things about this study is that the computational techniques and interventions we will develop will be informed by the communities who are targeted by these incidents.”