No Ounce of Nuance: Why We Can’t Get Enough of Ragebait

Cool title, huh? I have to say, however, that this title is part of the very problem it aims to address. Lack of nuance has been an issue since the inception of media; it is certainly not new. Still, a case can be made that it has intensified within the current media landscape. In recent years, the question of nuance in digital media has become more pressing as the forms of information distribution have diversified and expanded to include short-form content. Social platforms such as TikTok and Instagram (Reels) seem close to becoming the “newspaper” of today’s generation, with a nearly six-fold increase in the number of people accessing news through such apps in the last five years alone. Convenient as that is, the media’s recent trajectory raises questions about how it is affecting our media literacy and independent thought (at least for those of us still holding on to our critical thinking skills). Not only does the media offer a compelling picture of how information broadcasting has changed; our reactions to it also reveal something important about our present political and economic climate. Paired with the outsourcing of critical thought and effortful deliberation to artificial intelligence, the lack of nuance we have been experiencing in online discourse starts to make more sense.

In the 2020s, the algorithm is king. Swapping chronological order for user engagement, tech giants such as Instagram and Twitter had adopted algorithm-driven timelines by 2016, following in the footsteps of even larger platforms like Facebook and YouTube, which had implemented them in the early 2010s. Sensationalist, emotion-provoking media has long been a trusted tool in the kit of many industries, from marketing all the way to news channels. Catering to our inherent attraction to divisiveness, algorithms amplify polarization by clustering partisan-congenial information and spreading misinformation (Beer, 2019). A concrete example is the 2016 US presidential election, during which algorithmically facilitated false information was spread through “bot” advocacy, accounts made for that very purpose (Howard et al., 2018). Humans tend not to actively seek out disconfirmation (Pearson & Knobloch-Westerwick, 2019) and have been shown to develop a pattern of opinion reinforcement after being shown algorithm-recommended content (Whittaker et al., 2021). Thus, many users end up finding themselves in echo chambers.
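To make that swap concrete, here is a toy sketch in Python. Everything in it is made up for illustration (the posts, the `engagement_score` weights, all of it; no platform publishes its real ranking code), but it shows the basic move: same posts, different sort key.

```python
# Toy illustration (not any platform's real code): the shift from a
# chronological feed to an engagement-ranked one. All data and the
# scoring weights here are hypothetical.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    timestamp: int  # higher = newer
    likes: int
    comments: int   # arguments in the replies count as comments, too
    shares: int

def engagement_score(p: Post) -> float:
    # Made-up weighting: comments (often fights) and shares (often
    # outrage) count for more than quiet likes.
    return p.likes + 3 * p.comments + 5 * p.shares

posts = [
    Post("Measured, sourced explainer", timestamp=3, likes=40, comments=2, shares=1),
    Post("Cute dog", timestamp=2, likes=90, comments=5, shares=10),
    Post("US VS THEM ragebait", timestamp=1, likes=60, comments=80, shares=40),
]

chronological = sorted(posts, key=lambda p: p.timestamp, reverse=True)
algorithmic = sorted(posts, key=engagement_score, reverse=True)

print([p.text for p in chronological])  # newest first: the explainer leads
print([p.text for p in algorithmic])    # engagement first: the ragebait leads
```

The whole point of the toy is the sort key: nothing about the content changes, but whatever provokes the most reactions wins the top slot.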

OK, echo chamber this, lack of nuance that… as if there weren’t enough buzzwords. What do I mean by all this? I think what’s going on in the media right now is nothing new: attention-grabbing, headline-focused, reaction-inducing. At most it is an exacerbated form of what has been happening all along. What is evident, though, is how blatant the baiting has become online, and it’s quite funny how well it’s doing. YouTube channels like The CUT and Jubilee have amassed millions of subscribers by tickling a very specific part of our brains. “1 pro-lifer vs 25 pro-choicers.” “Fit women vs plus-size women.” It makes me wonder whether we really need people from the extreme opposite ends of a spectrum just to showcase a problem. But there is something about this sort of content that is just so… irresistible. Like tonguing a mouth sore, like picking at a hangnail. The truth is we love ragebait. We can’t get enough of it. Why? I don’t know. Probably something to do with our amygdala and dopamine. But who cares? Wouldn’t it be much more interesting if it were pinned on us “being inherently flawed”? Isn’t this part of the article that much better without all those god-awful citations (which, by the way, I won’t be using anymore; this is a Slim Radio article, for Christ’s sake)? Throughout this article I’ve been fighting with myself through every sentence, struggling to give a somewhat objective overview of the problem instead of my feeling-fueled opinion. I think people are in an everlasting war between what they know they should do and the easy way out. It really doesn’t help that our current media dangles extremism, ragebait, and “us vs. them” content in front of us when we’re hanging on by a thread anyway.

The reality is that nuance doesn’t sell, or feel as good, as emotion. Nuance takes time. The issue with objectivity is that it’s boring. Set against the speed of AI-enabled content creation and generated responses, reading through a multi-sided, well-sourced text feels slow and unnecessary. It is that much easier to pick an issue (preferably a false dilemma) with (seemingly) only two outcomes and cast representatives who clearly (clearly) belong to one side or the other, thus encouraging readers and watchers to identify with either one.

I don’t think my article is all that different from the ones I’m criticizing. Catchy title. Lots of “I think”s. However, I think (here we go again) that being aware of what you’re doing is a first step toward changing and improving yourself. Recognizing the ways we are persuaded by the media, and our own ways of partaking in it, giving in to it, and opposing it, is valuable in the permanent push and pull between objectivity and sensationalism. A population trained on headlines and conclusions will find it much harder to reach resolutions of its own through independent thinking, defaulting instead to reactionary responses to emotion-evoking retellings of reality. Feeling is fun… but it is not forever. I believe it is in each person’s interest to develop and protect one of the most precious qualities that has long defined us as humans: reasoning.

*

References

Beer, D. (2019). The social power of algorithms. In The Social Power of Algorithms (pp. 1–13). Routledge.

Howard, P. N., Woolley, S., & Calo, R. (2018). Algorithms, bots, and political communication in the US 2016 election: The challenge of automated political communication for election law and administration. Journal of Information Technology & Politics, 15(2), 81–93. https://doi.org/10.1080/19331681.2018.1448735

Whittaker, J., Looney, S., Reed, A., & Votta, F. (2021). Recommender systems and the amplification of extremist content. Internet Policy Review, 10(2). https://policyreview.info/articles/analysis/recommender-systems-and-amplification-extremist-content

Pearson, G. D. H., & Knobloch-Westerwick, S. (2019). Is the confirmation bias bubble larger online? Pre-election confirmation bias in selective exposure to online versus print political information. Mass Communication and Society, 22(4), 466–486. https://doi.org/10.1080/15205436.2019.1599956
