Algorithmic radicalization

Short description: Radicalization via social media algorithms


Algorithmic radicalization is the concept that recommender algorithms on popular social media sites such as YouTube and Facebook drive users toward progressively more extreme content over time, leading them to develop radicalized extremist political views. Algorithms record user interactions, from likes and dislikes to the amount of time spent on posts, to generate an endless stream of media aimed at keeping users engaged. Through echo chamber channels, the consumer is driven to become more polarized through media preferences and self-confirmation.[1][2][3][4]

Algorithmic radicalization remains a controversial phenomenon, as removing echo chamber channels is often not in social media companies' best interest.[5][6] Though social media companies have acknowledged that algorithmic radicalization exists, it remains unclear how each will manage this growing threat.

Social media echo chambers and filter bubbles

Social media platforms learn the interests and likes of each user in order to tailor their feed and keep them engaged and scrolling. An echo chamber is formed when users encounter beliefs that magnify or reinforce their own and form a group of like-minded users in a closed system.[7] The problem with echo chambers is that they spread information without exposure to opposing beliefs, which can lead to confirmation bias. According to group polarization theory, an echo chamber can potentially push users and groups toward more extreme, radicalized positions.[8] According to the National Library of Medicine, "Users online tend to prefer information adhering to their worldviews, ignore dissenting information, and form polarized groups around shared narratives. Furthermore, when polarization is high, misinformation quickly proliferates."[9]
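
As an illustration only (no platform publishes its ranking code), the following Python sketch models the filtering step that produces an echo chamber. Content items and the user's viewpoint are reduced to points on a one-dimensional opinion axis, and the hypothetical opinion_axis_feed function simply serves the items closest to the user's position. Comparing the spread of the served feed with the spread of the full catalog shows how such a filter delivers agreement and screens out dissenting views.

    import random
    import statistics

    def opinion_axis_feed(catalog, user_stance, feed_size=10):
        """Serve the items whose stance is closest to the user's stance.

        This is the homophily filter at the heart of an echo chamber: it
        maximizes agreement and, as a side effect, excludes opposing views.
        """
        return sorted(catalog, key=lambda stance: abs(stance - user_stance))[:feed_size]

    if __name__ == "__main__":
        rng = random.Random(42)
        catalog = [rng.uniform(-1.0, 1.0) for _ in range(5000)]  # stances from -1 to +1
        feed = opinion_axis_feed(catalog, user_stance=0.6)

        print(f"catalog spread (std dev): {statistics.pstdev(catalog):.2f}")  # roughly 0.58
        print(f"feed spread (std dev):    {statistics.pstdev(feed):.3f}")     # close to 0
        print(f"feed items opposing the user: {sum(s < 0 for s in feed)}")    # typically 0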

Facebook's algorithms

Facebook's algorithm focuses on recommending content that makes the user want to interact. It ranks content by prioritizing popular posts from friends, viral content, and sometimes divisive content. Each feed is personalized to the user's specific interests, which can sometimes lead users toward an echo chamber of troublesome content.[10] Users can find the list of interests the algorithm uses by going to the "Your ad Preferences" page. According to a Pew Research study, 74% of Facebook users did not know that this list existed until they were directed to the page during the study.[11] It is also relatively common for Facebook to assign political labels to its users. In recent years,[when?] Facebook has started using artificial intelligence to change the content users see in their feed and what is recommended to them. A set of leaked internal documents known as The Facebook Files revealed that the company's AI systems prioritize user engagement over everything else. The Facebook Files also showed that controlling these AI systems has proven difficult.[12]
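
Facebook has described its ranking system only at a high level, so the sketch below is a generic engagement-prediction ranker of the kind this section describes, not Facebook's actual code; the Post fields, the weights, and the example posts are all hypothetical. It illustrates the structural point: when the feed score is a weighted sum of predicted reactions, comments, reshares, and viewing time, the content that provokes the strongest responses, including divisive content, rises to the top.

    from dataclasses import dataclass

    @dataclass
    class Post:
        author_is_friend: bool
        predicted_reactions: float      # expected reactions per impression
        predicted_comments: float
        predicted_reshares: float
        predicted_dwell_seconds: float

    # Hypothetical weights: heavier forms of interaction count for more,
    # and posts from friends get a boost.
    WEIGHTS = {"reactions": 1.0, "comments": 4.0, "reshares": 8.0, "dwell": 0.1}

    def predict_engagement(post: Post) -> float:
        """Score a candidate post by its expected engagement value."""
        score = (WEIGHTS["reactions"] * post.predicted_reactions
                 + WEIGHTS["comments"] * post.predicted_comments
                 + WEIGHTS["reshares"] * post.predicted_reshares
                 + WEIGHTS["dwell"] * post.predicted_dwell_seconds)
        return score * (1.5 if post.author_is_friend else 1.0)

    def rank_feed(candidates):
        """Order the feed purely by predicted engagement, highest first."""
        return sorted(candidates, key=predict_engagement, reverse=True)

    if __name__ == "__main__":
        calm_update = Post(True, 0.05, 0.01, 0.001, 8)
        divisive_rant = Post(False, 0.30, 0.20, 0.050, 45)
        viral_meme = Post(False, 0.40, 0.05, 0.100, 12)
        for post in rank_feed([calm_update, divisive_rant, viral_meme]):
            print(round(predict_engagement(post), 2), post)
        # The divisive post outranks both the meme and the friend's calm update.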

Facebook's allegations

In an August 2019 internal memo leaked in 2021, Facebook admitted that "the mechanics of our platforms are not neutral",[13][14] concluding that optimization for engagement is necessary to reach maximum profits. To increase engagement, algorithms have found that hate, misinformation, and politics are instrumental for app activity.[15] As referenced in the memo, "The more incendiary the material, the more it keeps users engaged, the more it is boosted by the algorithm."[13] According to a 2018 study, "false rumors spread faster and wider than true information... They found falsehoods are 70% more likely to be retweeted on Twitter than the truth, and reach their first 1,500 people six times faster. This effect is more pronounced with political news than other categories."[16]

YouTube's algorithm

YouTube has been around since 2005 and has more than 2.5 billion monthly users. Its content discovery system draws on each user's personal activity (watch history, favorites, likes) to direct them to recommended content. YouTube's algorithm accounts for roughly 70% of the videos users watch and is what drives people to certain content.[17] According to a 2022 study, users have little power to keep unsolicited videos out of their recommendations, including hate speech, livestreams, and similar content.[17]
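
YouTube's production system is a large, non-public machine-learning model, so the sketch below stands in for the general approach described above: represent each video and the viewer as topic vectors, build the viewer's profile from their watch history, and recommend the unwatched videos most similar to that profile. The video titles and topic vectors are invented for illustration. Because every recommendation is anchored to what was already watched, a history concentrated on one theme produces suggestions that stay on, or intensify, that theme.

    import math

    # Hypothetical catalog: video id -> topic vector (politics, gaming, fitness).
    CATALOG = {
        "news_recap": [0.90, 0.10, 0.0],
        "political_rant": [1.00, 0.00, 0.0],
        "conspiracy_deep_dive": [0.95, 0.05, 0.0],
        "speedrun_highlights": [0.00, 1.00, 0.0],
        "home_workout": [0.00, 0.00, 1.0],
    }

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm if norm else 0.0

    def user_profile(watch_history):
        """Average the topic vectors of everything the user has watched."""
        vectors = [CATALOG[video] for video in watch_history]
        return [sum(column) / len(vectors) for column in zip(*vectors)]

    def recommend(watch_history, k=2):
        """Return the k unwatched videos most similar to the user's profile."""
        profile = user_profile(watch_history)
        unwatched = {vid: vec for vid, vec in CATALOG.items() if vid not in watch_history}
        return sorted(unwatched, key=lambda vid: cosine(unwatched[vid], profile),
                      reverse=True)[:k]

    if __name__ == "__main__":
        # A history of political videos yields more, and more pointed, political videos.
        print(recommend(["news_recap", "political_rant"]))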

YouTube's allegations

YouTube has been identified as an influential platform for spreading radicalized content. Al-Qaeda and similar extremist groups have been linked to using YouTube for recruitment videos and for engaging with international media outlets. In a study published in American Behavioral Scientist, researchers examined "whether it is possible to identify a set of attributes that may help explain part of the YouTube algorithm's decision-making process".[18] The study found that the presence of radical keywords in a video's title factors into YouTube's recommendations of extremist content. In February 2023, in the case of Gonzalez v. Google, the question before the Supreme Court was whether Google, YouTube's parent company, is protected from lawsuits claiming that the site's algorithms aided terrorists by recommending ISIS videos to users. Section 230 generally protects online platforms from civil liability for content posted by their users.[19]

TikTok algorithms

TikTok is an app that recommends videos to a user's 'For You Page' (FYP), making every user's page different. Because of the nature of the algorithm behind the app, TikTok's FYP has been linked to showing more explicit and radical videos over time based on users' previous interactions on the app.[20] Since TikTok's inception, the app has been scrutinized for misinformation and hate speech, as those forms of media usually generate more interactions for the algorithm.[21]
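
The claim that the FYP surfaces more extreme material over time describes a feedback loop, which can be sketched abstractly. The toy model below is not TikTok's recommender; the engagement curve, the habituation rate, and all parameters are assumptions chosen for illustration. A simulated user is slightly more likely to engage with content a bit more intense than what they are used to, the recommender greedily serves whatever maximizes expected engagement, and the served intensity ratchets upward step by step.

    import math
    import random

    def simulate_fyp(steps=30, seed=1):
        """Toy feedback loop: always serve the clip with the highest expected
        engagement, where engagement peaks slightly above the intensity the
        user is already accustomed to."""
        rng = random.Random(seed)
        tolerance = 0.1   # intensity the user is used to (0 = mild, 1 = extreme)
        lift = 0.15       # assumed: engagement peaks this far above the current tolerance

        def expected_engagement(intensity):
            # Highest for content a bit more intense than the user's habit.
            return math.exp(-((intensity - (tolerance + lift)) ** 2) / 0.02)

        served = []
        for _ in range(steps):
            candidates = [rng.random() for _ in range(200)]    # intensities of candidate clips
            clip = max(candidates, key=expected_engagement)    # greedy, engagement-maximizing pick
            tolerance += 0.5 * (clip - tolerance)              # the user habituates to what is served
            served.append(clip)
        return served

    if __name__ == "__main__":
        served = simulate_fyp()
        # Intensity of the served clips ratchets upward before plateauing near the maximum.
        print([round(x, 2) for x in served[::5]])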

As of 2022, TikTok's head of U.S. Security had stated that "81,518,334 videos were removed globally between April - June for violating our Community Guidelines or Terms of Service" as part of efforts to cut back on hate speech, harassment, and misinformation.[22]

Alt-right pipeline

Graphic of interactions between mostly right-wing personalities on YouTube from January 2017 to April 2018. Each line indicates a shared appearance in a YouTube video, allowing audiences of one personality to discover another.[23]

The alt-right pipeline (also called the alt-right rabbit hole) is a proposed conceptual model of internet radicalization toward the alt-right movement. It describes a phenomenon in which consuming provocative right-wing political content, such as antifeminist or anti-SJW ideas, gradually increases exposure to the alt-right or similar far-right politics. It posits that this interaction takes place due to the interconnected nature of political commentators and online communities, allowing members of one audience or community to discover more extreme groups.[23][24] This process is most commonly associated with, and has been documented on, the video platform YouTube, and it is largely driven by the way algorithms on various social media platforms function: they recommend content similar to what users engage with, but can quickly lead users down rabbit holes.[24][25][26]

Many political movements have been associated with the pipeline concept. The intellectual dark web,[24] libertarianism,[27] the men's rights movement,[28] and the alt-lite movement[24] have all been identified as possibly introducing audiences to alt-right ideas. Audiences that seek out and are willing to accept extreme content in this fashion typically consist of young men, commonly those who experience significant loneliness and seek belonging or meaning.[29] Message boards such as 4chan and 8chan, which are rife with hard-right social commentary and attract users searching for community and belonging, are well documented as important in the radicalization process.[30]

The alt-right pipeline may be a contributing factor to domestic terrorism.[31][32] Many social media platforms have acknowledged this path of radicalization and have taken measures to prevent it, including the removal of extremist figures and rules against hate speech and misinformation.[25][29] Left-wing movements, such as BreadTube, also oppose the alt-right pipeline and "seek to create a 'leftist pipeline' as a counterforce to the alt-right pipeline."[33]

The effect of YouTube's algorithmic bias in radicalizing users has been replicated by one study,[24][34][35][36] although two other studies found little or no evidence of a radicalization process.[25][37][38]

Self-radicalization

An infographic from the United States Department of Homeland Security's "If You See Something, Say Something" campaign, a national initiative to raise awareness of homegrown terrorism and terrorism-related crime.

The U.S. Department of Justice defines lone-wolf (self-radicalized) terrorism as "someone who acts alone in a terrorist attack without the help or encouragement of a government or a terrorist organization".[39] Lone-wolf terrorism has been on the rise through social media outlets on the internet and has been linked to algorithmic radicalization.[40] In online echo chambers, viewpoints typically seen as radical are accepted and quickly adopted by other extremists.[41] These viewpoints are then reinforced by forums, group chats, and social media.[42]

References in media

The Social Dilemma

Main page: The Social Dilemma

The Social Dilemma is a 2020 docudrama about how the algorithms behind social media enable addiction and can manipulate people's views, emotions, and behavior to spread conspiracy theories and disinformation. The film repeatedly uses terms such as 'echo chambers' and 'fake news' to show how psychological manipulation on social media leads to political manipulation. In the film, the character Ben falls deeper into a social media addiction after the algorithm determines that his profile has a 62.3% chance of long-term engagement. This drives more videos onto his recommended feed, and he becomes increasingly immersed in propaganda and conspiracy theories, growing more polarized with each video.

Possible solutions

Section 230

In the Communications Decency Act of 1996, Section 230 states that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider".[43] Section 230 protects platforms from liability and lawsuits over third-party content, such as illegal activity by a user.[43] However, this approach reduces a company's incentive to remove harmful content or misinformation, and the loophole has allowed social media companies to maximize profits by pushing radical content without legal risk.[44]

References

  1. "What is a Social Media Echo Chamber? | Stan Richards School of Advertising" (in en). https://advertising.utexas.edu/news/what-social-media-echo-chamber. 
  2. "The Websites Sustaining Britain's Far-Right Influencers" (in en-GB). 2021-02-24. https://www.bellingcat.com/news/uk-and-europe/2021/02/24/the-websites-sustaining-britains-far-right-influencers/. 
  3. Camargo, Chico Q. (January 21, 2020). "YouTube's algorithms might radicalise people – but the real problem is we've no idea how they work" (in en). http://theconversation.com/youtubes-algorithms-might-radicalise-people-but-the-real-problem-is-weve-no-idea-how-they-work-129955. 
  4. E&T editorial staff (2020-05-27). "Facebook did not act on own evidence of algorithm-driven extremism" (in en-US). https://eandt.theiet.org/content/articles/2020/05/facebook-did-not-act-on-own-evidence-of-algorithm-driven-extremism/. 
  5. "How Can Social Media Firms Tackle Hate Speech?" (in en-US). https://knowledge.wharton.upenn.edu/article/can-social-media-firms-tackle-hate-speech/. 
  6. "Internet Association - We Are The Voice Of The Internet Economy. | Internet Association". 2021-12-17. https://internetassociation.org/. 
  7. "What is a Social Media Echo Chamber? | Stan Richards School of Advertising" (in en). https://advertising.utexas.edu/news/what-social-media-echo-chamber. 
  8. Cinelli, Matteo; De Francisci Morales, Gianmarco; Galeazzi, Alessandro; Quattrociocchi, Walter; Starnini, Michele (2021-03-02). "The echo chamber effect on social media". Proceedings of the National Academy of Sciences of the United States of America 118 (9): e2023301118. doi:10.1073/pnas.2023301118. ISSN 0027-8424. PMID 33622786. Bibcode2021PNAS..11823301C. 
  9. Cinelli, Matteo; De Francisci Morales, Gianmarco; Starnini, Michele; Galeazzi, Alessandro; Quattrociocchi, Walter (January 14, 2021). "The echo chamber effect on social media". Proceedings of the National Academy of Sciences of the United States of America 118 (9): e2023301118. doi:10.1073/pnas.2023301118. ISSN 0027-8424. PMID 33622786. Bibcode2021PNAS..11823301C. 
  10. Oremus, Will; Alcantara, Chris; Merrill, Jeremy; Galocha, Artur (October 26, 2021). "How Facebook shapes your feed". The Washington Post. https://www.washingtonpost.com/technology/interactive/2021/how-facebook-algorithm-works/. 
  11. Atske, Sara (2019-01-16). "Facebook Algorithms and Personal Data" (in en-US). https://www.pewresearch.org/internet/2019/01/16/facebook-algorithms-and-personal-data/. 
  12. Korinek, Anton (2021-12-08). "Why we need a new agency to regulate advanced artificial intelligence: Lessons on AI control from the Facebook Files" (in en-US). https://www.brookings.edu/research/why-we-need-a-new-agency-to-regulate-advanced-artificial-intelligence-lessons-on-ai-control-from-the-facebook-files/. 
  13. "Disinformation, Radicalization, and Algorithmic Amplification: What Steps Can Congress Take?" (in en-US). 2022-02-07. https://www.justsecurity.org/79995/disinformation-radicalization-and-algorithmic-amplification-what-steps-can-congress-take/. 
  14. Isaac, Mike (2021-10-25). "Facebook Wrestles With the Features It Used to Define Social Networking" (in en-US). The New York Times. ISSN 0362-4331. https://www.nytimes.com/2021/10/25/technology/facebook-like-share-buttons.html. 
  15. Little, Olivia (March 26, 2021). "TikTok is prompting users to follow far-right extremist accounts" (in en). https://www.mediamatters.org/tiktok/tiktok-prompting-users-follow-far-right-extremist-accounts. 
  16. "Study: False news spreads faster than the truth" (in en). https://mitsloan.mit.edu/ideas-made-to-matter/study-false-news-spreads-faster-truth. 
  17. "Hated that video? YouTube's algorithm might push you another just like it." (in en). https://www.technologyreview.com/2022/09/20/1059709/youtube-algorithm-recommendations/. 
  18. Murthy, Dhiraj (2021-05-01). "Evaluating Platform Accountability: Terrorist Content on YouTube". American Behavioral Scientist 65 (6): 800–824. doi:10.1177/0002764221989774. https://doi.org/10.1177/0002764221989774. 
  19. Root, Damon (April 2023). "Scotus Considers Section 230's Scope". Reason 54 (11): 8. ISSN 0048-6906. http://ezproxy.uky.edu/login?url=https://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=161782688&site=ehost-live&scope=site. 
  20. "TikTok's algorithm leads users from transphobic videos to far-right rabbit holes" (in en). October 5, 2021. https://www.mediamatters.org/tiktok/tiktoks-algorithm-leads-users-transphobic-videos-far-right-rabbit-holes. 
  21. Little, Olivia (April 2, 2021). "Seemingly harmless conspiracy theory accounts on TikTok are pushing far-right propaganda and TikTok is prompting users to follow them" (in en). https://www.mediamatters.org/tiktok/seemingly-harmless-conspiracy-theory-accounts-tiktok-are-pushing-far-right-propaganda-and. 
  22. "Our continued fight against hate and harassment" (in en-us). 2019-08-16. https://newsroom.tiktok.com/en-us/our-continued-fight-against-hate-and-harassment. 
  23. Lewis, Rebecca (2018). Alternative Influence: Broadcasting the Reactionary Right on YouTube. Data & Society Research Institute. 
  24. Ribeiro, Manoel Horta; Ottoni, Raphael; West, Robert; Almeida, Virgílio A. F.; Meira, Wagner (2020-01-27). "Auditing radicalization pathways on YouTube" (in en). Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. pp. 131–141. doi:10.1145/3351095.3372879. ISBN 9781450369367. 
  25. Ledwich, Mark; Zaitsev, Anna (2020-02-26). "Algorithmic extremism: Examining YouTube's rabbit hole of radicalization" (in en). First Monday. doi:10.5210/fm.v25i3.10419. ISSN 1396-0466. https://firstmonday.org/ojs/index.php/fm/article/view/10419. Retrieved 2022-10-28. 
  26. "Mozilla Investigation: YouTube Algorithm Recommends Videos that Violate the Platform's Very Own Policies". 7 July 2021. https://foundation.mozilla.org/en/blog/mozilla-investigation-youtube-algorithm-recommends-videos-that-violate-the-platforms-very-own-policies/. 
  27. Hermansson, Patrik; Lawrence, David; Mulhall, Joe; Murdoch, Simon (2020). The International Alt-Right: Fascism for the 21st Century?. Routledge. 
  28. Mamié, Robin; Horta Ribeiro, Manoel; West, Robert (2021). "Are Anti-feminist Communities Gateways to the Far Right? Evidence from Reddit and YouTube". Proceedings of the ACM Web Science Conference (WebSci '21). 
  29. Roose, Kevin (2019-06-08). "The Making of a YouTube Radical" (in en-US). The New York Times. ISSN 0362-4331. 
  30. Hughes, Terwyn (26 January 2021). "Canada's alt-right pipeline". https://the-pigeon.ca/2021/01/26/canadas-alt-right-pipeline/. 
  31. Piazza, James A. (2022-01-02). "Fake news: the effects of social media disinformation on domestic terrorism". Dynamics of Asymmetric Conflict 15 (1): 55–77. doi:10.1080/17467586.2021.1895263. ISSN 1746-7586. https://doi.org/10.1080/17467586.2021.1895263. Retrieved 2022-11-04. 
  32. Munn, Luke (2019). "Alt-right pipeline: Individual journeys to extremism online". First Monday 24 (6). 
  33. Cotter, Kelley (2022). "Practical knowledge of algorithms: The case of BreadTube". New Media & Society. 
  34. Lomas, Natasha (January 28, 2020). "Study of YouTube comments finds evidence of radicalization effect" (in en-US). https://social.techcrunch.com/2020/01/28/study-of-youtube-comments-finds-evidence-of-radicalization-effect/. 
  35. Newton, Casey (2019-08-28). "YouTube may push users to more radical views over time, a new paper argues" (in en). https://www.theverge.com/interface/2019/8/28/20836019/youtube-ceo-quarterly-letter-radicalization-pipeline. 
  36. Ribeiro, Manoel Horta; Ottoni, Raphael; West, Robert; Almeida, Virgílio A. F.; Meira, Wagner (2019-08-22). "Auditing Radicalization Pathways on YouTube". arXiv:1908.08313 [cs.CY].
  37. Hosseinmardi, Homa; Ghasemian, Amir; Clauset, Aaron; Mobius, Markus; Rothschild, David M.; Watts, Duncan J. (2021-08-02). "Examining the consumption of radical content on YouTube". Proceedings of the National Academy of Sciences 118 (32). doi:10.1073/pnas.2101967118. PMID 34341121. Bibcode2021PNAS..11801967H.
  38. Chen, Annie Y.; Nyhan, Brendan; Reifler, Jason; Robertson, Ronald E.; Wilson, Christo (22 April 2022). "Subscriptions and external links help drive resentful users to alternative and extremist YouTube videos". arXiv:2204.10921 [cs.SI].
  39. "Lone Wolf Terrorism in America | Office of Justice Programs". https://www.ojp.gov/ncjrs/virtual-library/abstracts/lone-wolf-terrorism-america. 
  40. Alfano, Mark; Carter, J. Adam; Cheong, Marc (2018). "Technological Seduction and Self-Radicalization" (in en). Journal of the American Philosophical Association 4 (3): 298–322. doi:10.1017/apa.2018.27. ISSN 2053-4477. https://www.cambridge.org/core/journals/journal-of-the-american-philosophical-association/article/abs/technological-seduction-and-selfradicalization/47CADB240E6141F9C6160C40BC9A6ECF. 
  41. Dubois, Elizabeth; Blank, Grant (2018-05-04). "The echo chamber is overstated: the moderating effect of political interest and diverse media". Information, Communication & Society 21 (5): 729–745. doi:10.1080/1369118X.2018.1428656. ISSN 1369-118X. 
  42. Sunstein, Cass R. (2009-05-13) (in en). Going to Extremes: How Like Minds Unite and Divide. Oxford University Press. ISBN 978-0-19-979314-3. https://books.google.com/books?id=jEWplxVkEEEC&pg=PP9. 
  43. 43.0 43.1 "47 U.S. Code § 230 - Protection for private blocking and screening of offensive material" (in en). https://www.law.cornell.edu/uscode/text/47/230. 
  44. Smith, Michael D.; Alstyne, Marshall Van (2021-08-12). "It's Time to Update Section 230". Harvard Business Review. ISSN 0017-8012. https://hbr.org/2021/08/its-time-to-update-section-230.