Algorithmic extremism: Examining YouTube's rabbit hole of radicalization

Keywords: YouTube, recommendation algorithm, radicalization

Abstract

The role that YouTube and its behind-the-scenes recommendation algorithm play in encouraging online radicalization has been suggested by journalists and academics alike. This study quantifies these claims directly by examining the role that YouTube’s algorithm plays in suggesting radicalized content. After categorizing nearly 800 political channels, we were able to differentiate between political schemas and analyze the recommendation traffic flowing out of and between each group. A detailed analysis of the recommendations received by each channel type leads us to refute the popular radicalization claims. On the contrary, these data suggest that YouTube’s recommendation algorithm actively discourages viewers from visiting radicalizing or extremist content. Instead, the algorithm is shown to favor mainstream media and cable news content over independent YouTube channels, with a slant towards left-leaning or politically neutral channels. Our study thus suggests that, contrary to the claims of several outlets, YouTube’s recommendation algorithm does not promote inflammatory or radicalized content.
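To make the traffic-flow analysis described above concrete, the following is a minimal sketch (in Python) of how recommendation impressions between categorized channels can be aggregated into a category-to-category flow matrix and summarized as a net flow per category. The channel names, category labels, and counts here are illustrative placeholders, not data or code from the study itself.

```python
from collections import defaultdict

# Hypothetical channel-to-category labels (the study categorized ~800 channels).
category = {
    "channel_a": "mainstream_news",
    "channel_b": "partisan_left",
    "channel_c": "partisan_right",
}

# Hypothetical recommendation impressions: (source channel, recommended channel, count).
recommendations = [
    ("channel_b", "channel_a", 120),
    ("channel_c", "channel_a", 90),
    ("channel_c", "channel_c", 30),
]

# Aggregate recommendation traffic into a category-to-category flow matrix.
flows = defaultdict(int)
for src, dst, count in recommendations:
    flows[(category[src], category[dst])] += count

def net_flow(cat):
    """Recommendations a category receives from other categories,
    minus those it sends to other categories."""
    received = sum(c for (s, d), c in flows.items() if d == cat and s != cat)
    sent = sum(c for (s, d), c in flows.items() if s == cat and d != cat)
    return received - sent

for cat in sorted(set(category.values())):
    print(f"{cat}: net flow {net_flow(cat):+d}")
```

Under this toy input, the mainstream news category shows a positive net flow, which is the kind of pattern the abstract describes: recommendation traffic draining toward mainstream and cable news channels rather than toward fringe content.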

Author Biographies

Mark Ledwich
Mark Ledwich is a software engineer and data-visualization expert.
Anna Zaitsev, School of Information, University of California, Berkeley
Anna Zaitsev is a postdoctoral scholar at the School of Information at the University of California, Berkeley.
Published
2020-02-26
How to Cite
Ledwich, M., & Zaitsev, A. (2020). Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. First Monday, 25(3). https://doi.org/10.5210/fm.v25i3.10419