Identifying and Mitigating Online Bias in Social Media

Part 3 in a series of research perspectives in human-computer interaction.

Nessa Kim
11 min read · Jan 18, 2021

The online environment mediates rapid communication and dynamic social activity through online forums, comments, reviews, and platforms such as Facebook and Twitter. Not only can these spaces that constitute social media quickly transmit biased or misleading content, but they are also where users exercise their biases directly against other people or societal entities. Whereas I previously explored the biases of individual users during information search, this article explores biases elicited in online social interactions and across a range of social media activities.

Online Ratings and Reviews

User-generated content for online business and product reviews has become a dominant application of social computing, as in the review systems of Amazon, Yelp, and Apple’s App Store. Online reviews play an integral role in the daily purchasing decisions of consumers as well as in the enduring reputation of businesses and retailers. Their ease of use and integration with search results allow users to make efficient decisions based on the opinions of their peers; however, the visual heuristic of a few stars, along with other design elements of review systems, can amplify biases for or against certain businesses or brands. As a consequence, individual consumers can be led to make suboptimal decisions and, at a larger scale, fair competition can be undermined. The presence of average user ratings can overpower the objective quality of the product or service [1], especially for low-motivation users who rely more on heuristic cues [2]. Prior ratings also influence how users give their own ratings, through a tendency to imitate social norms, observed in online reviews as “social influence bias” [3], herding behavior [4], or the bandwagon effect [5]. A recent review on the economic and behavioral consequences of this topic concludes that “these biases create inefficient market outcomes and low rating variability” since individual users conform to group norms at either extremity [6].
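
To make the herding mechanism concrete, below is a minimal, purely illustrative simulation (my own toy model, not one drawn from the cited studies): each simulated reviewer forms a private judgment of the product’s quality but reports a rating pulled toward the running average already displayed on the page.

```python
import random
import statistics

def simulate_ratings(true_quality=3.2, n_reviewers=500,
                     conformity=0.6, noise=0.8, seed=42):
    """Toy model of social influence bias ("herding") in star ratings.

    Each simulated reviewer forms a private rating around the product's true
    quality, then reports a value pulled toward the running average they see
    on the page. `conformity` (the weight given to the visible average) is an
    assumed parameter for illustration, not an empirical estimate.
    """
    rng = random.Random(seed)
    reported = []
    for _ in range(n_reviewers):
        private = true_quality + rng.gauss(0, noise)        # independent judgment
        if reported:
            visible_avg = sum(reported) / len(reported)     # what the reviewer sees
            rating = (1 - conformity) * private + conformity * visible_avg
        else:
            rating = private                                # first reviewer anchors the page
        reported.append(min(5.0, max(1.0, rating)))         # clamp to a 1-5 star scale
    return reported

if __name__ == "__main__":
    for c in (0.0, 0.6):
        ratings = simulate_ratings(conformity=c)
        print(f"conformity={c}: mean={statistics.mean(ratings):.2f}, "
              f"spread={statistics.pstdev(ratings):.2f}")
```

Raising the conformity weight compresses the spread of reported ratings around whatever the earliest reviewers happened to post, echoing the low rating variability described in [6]; setting it to zero recovers independent judgments.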

Understanding the relationship between ratings and user biases can help the designers and advertisers of online review platforms present information in ways that encourage users to make well-founded evaluations and decisions. Judgments are often made quickly and heuristically, and a user’s perception of a product depends critically on whether they encounter review information before or after product information. For this reason, research suggests that marketers place rating information below the product description [7]. As in information search, users exhibit confirmation bias when reading reviews [8]. They additionally show self-selection bias, an increased likelihood to review products or services they are disposed to strongly favor or disfavor, which has been summarized and evaluated with modeling [9]. To mitigate the effects of such biases, platforms can promote both positive and negative reviews that other users have voted helpful and informative. For example, Amazon customer reviews display both a “Top positive review” and a “Top critical review,” each determined by the number of users who voted it “helpful,” before showing reviews ordered by recency.
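
As a sketch of how such a mitigation could be implemented (the Review fields and the 4-to-5-star versus 1-to-3-star split below are my own assumptions, not Amazon’s actual schema or criteria), a platform might surface the most helpful review from each side of the scale before the recency-ordered list:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Review:
    rating: int          # 1-5 stars
    text: str
    helpful_votes: int   # count of "this was helpful" votes

def top_positive_and_critical(reviews: List[Review]) -> Tuple[Optional[Review], Optional[Review]]:
    """Pick the most helpful positive (4-5 star) and critical (1-3 star) review,
    mirroring the 'Top positive review' / 'Top critical review' pattern above."""
    def most_helpful(subset: List[Review]) -> Optional[Review]:
        return max(subset, key=lambda r: r.helpful_votes) if subset else None

    positive = [r for r in reviews if r.rating >= 4]
    critical = [r for r in reviews if r.rating <= 3]
    return most_helpful(positive), most_helpful(critical)

# Example with made-up reviews
reviews = [
    Review(5, "Works exactly as described.", 118),
    Review(4, "Good value, a few quirks.", 37),
    Review(2, "Stopped working after a month.", 94),
    Review(1, "Arrived damaged.", 12),
]
top_pos, top_crit = top_positive_and_critical(reviews)
print(top_pos.text, "|", top_crit.text)
```

Surfacing one strong review from each side gives low-motivation users a balanced pair of cues before the star average can dominate their impression.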

Comments and News Feeds

To a much greater extent than online reviews, comments form the structure of user-to-user interaction on social media and websites. Behavior in comment sections often escalates into antisocial, uncivil, and prejudiced conduct that may be unique to the online environment and rarely appears in face-to-face interaction [10], a phenomenon described as the “nasty effect” [11] or the (toxic) online disinhibition effect [12, 13]. Online incivility pushes perceptions of an issue toward heightened polarization [14]. Whereas social media spaces could allow varying viewpoints and opinions to converge, aspects of their design instead foster stronger partisanship.

Biased opinions and prejudices may show more strongly on social media than in person, but social information has real effects on attitudes and behavior that carry over outside of online environments. Hsueh, Yogeeswaran, and Malinen (2015) summarized existing research demonstrating that both implicit and explicit prejudice are malleable under online social influence. In their own investigation, they found that exposure to comments shifted respondents’ own comments toward the established social norm: participants exposed to prejudiced online comments, which expressed dissent or suspicion toward an ethnic group, responded with prejudiced comments themselves. Moreover, this socially influenced prejudice generalized beyond the task to participants’ conscious and unconscious attitudes toward social groups, as measured by the Implicit Association Test (IAT) [15].

Biased and prejudiced comments also reduce the perceived credibility of news articles [16, 17, 18]. Gearhart, Moe, and Zhang (2020) took a distinctive approach in examining hostile media bias, a reader’s perception that neutral media coverage is in fact biased against their own view. Whereas previous studies examined perceptions of bias among strong partisans after viewing news, or the influence of comments seen after the news content, Gearhart et al. (2020) controlled for partisanship and studied how comments appearing before the news story in Facebook posts influenced audience perceptions of neutral news content. Their results showed that exposure to like-minded comments before reading the story enhanced perceptions of overall news quality. The opposite was also true: exposure to comments that disagreed with a participant’s opinions led to a generalized perception that the news source was biased against their own view [19]. These findings build on previous research indicating that concurrent exposure to user-generated comments influences audience perceptions of online news content, with implications for whether and how comments should be displayed in the distribution of news.

Although aspects of social media diminish the perceived integrity of information, this does not deter people from relying on it to form opinions on current issues. In 2018, the Pew Research Center reported that 14% of U.S. adults said they had changed their views on a social or political issue because of something they saw on social media, such as Facebook, Twitter, or Instagram [20]; the Center’s 2020 report shows this rate had risen to 23% [21]. The issues cited included the Black Lives Matter movement; police brutality and the need for police reform; political parties, ideologies, and politicians; the coronavirus; and race relations or social issues in general. Biased attitudes and incivility in comment sections degrade online interactions and the way information on pertinent issues is perceived. Individual attitudes shaped by group norms can create serious societal divisions online, but the level of engagement and the possibility of changing views through social media offer a more hopeful prospect. Other design features of social media can help mitigate bias and promote positive effects, including civic engagement, social cohesion, and exposure to multiple perspectives.

Mitigation in Social Media

Despite these negative impacts on civility and credibility, it isn’t necessary to eliminate comments altogether from news articles and content shared through social media. Studies have shown that pre-moderation of comments, whether through required user registration or crowdsourced efforts to identify quality comments, encourages more civil discussion in comment sections [22]. To this effect, The New York Times displays the option to show comments as a button placed at the very end of an article’s web page; once opened, comments are ordered according to the viewer’s selection: by “NYT Picks” through the newspaper’s moderation, by “Reader Picks” through votes from registered readers, or by “All” in chronological order. The involvement of journalists is associated with less incivility and greater use of evidence by commenters [23]. Online comment moderation has also been shown to diminish the effect of uncivil comments on perceptions of bias in online news content [24]. Comments can promote prosocial behavior and universal orientation (feeling connected to others) and improve a user’s perception of public opinion [25, 26]. Both users and news sources can benefit from comments on online news articles or on social media, but the positive effects depend on features that don’t detract from the main news content.
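
A simplified sketch of that kind of reader-selectable ordering is shown below (the field names and the three modes loosely mirror the NYT example above; this is not its actual implementation):

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class Comment:
    text: str
    posted_at: datetime
    reader_votes: int        # "recommend" votes from registered readers
    moderator_pick: bool     # selected by the newsroom's moderation team

def order_comments(comments: List[Comment], mode: str = "all") -> List[Comment]:
    """Order comments by the reader's chosen view: editorially curated picks,
    reader-voted picks, or everything in chronological order."""
    if mode == "picks":
        return sorted((c for c in comments if c.moderator_pick),
                      key=lambda c: c.posted_at)
    if mode == "reader_picks":
        return sorted(comments, key=lambda c: c.reader_votes, reverse=True)
    return sorted(comments, key=lambda c: c.posted_at)  # "all": chronological
```

A production system would also handle reply threading, pagination, and abuse filtering, which are omitted here; the point is simply that surfacing moderated or reader-endorsed comments first is a lightweight design lever against incivility.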

Comments are avenues of communication where diverse viewpoints can be expressed and then supported or debated. Rather than perceiving information through implicit bias alone, users have the opportunity to express their views and to qualify their arguments in thoughtful discussion. Research by Karlsen et al. (2017) showed that attitudes can be reinforced both by like-minded support and by debate with opposing views, suggesting that even friction between camps can contribute to collective learning and the qualification of arguments. It is important to note that mitigating biased perceptions doesn’t necessarily lead to consensus, and consensus itself may be of limited value if it merely reflects homogeneous opinion. The authors instead emphasize an alternative value: that “different opinions are expressed and exchanged, despite continuing differences in points of view” [27]. Researchers and developers may gain important insights into online communication by extending their explanations to these more complex social dynamics.

Online echo chambers can be countered by features that encourage users to engage with large social issues. Hashtags take advantage of the rapid communication style of social media to spread awareness of important issues, giving each individual user a digital door handle that opens onto a public space. For instance, the #MeToo hashtag that went viral in 2017 coalesced with and dramatically accelerated a movement that had begun in 1997 [28, 29], resulting in public accusations and a surge of solidarity against sexual harassment and assault. The #BlackLivesMatter hashtag, first coined in a Facebook post in 2013 after the acquittal of George Zimmerman, who killed unarmed 17-year-old Trayvon Martin, helped bring global awareness to racist police brutality. From an analysis of publicly available tweets, the Pew Research Center reported that use of the #BlackLivesMatter hashtag peaked at nearly 8.8 million tweets in a single day, on May 28, 2020 [30].

In the realm of social media, biases formed through social heuristics or incivility can lead users toward prejudice, suboptimal decisions, or poor behavior. These effects can be mitigated by features that emphasize higher-quality information and dialogue and that support access to multiple perspectives. Diversifying the options and representations users see can change initial expectations by surfacing content that is thoughtfully curated or socially relevant. The speed and breadth at which information spreads on social media make it well suited to increasing exposure to issues that demand broad attention.

However, attention to public issues on social media is fleeting, and sustaining support requires further effort [31, 32]. Research has explored the utility of social media for fostering the online civic engagement of youth through education [33, 34] and of citizens through trust in institutions [35]. For people to be accurately informed and to balance multiple perspectives on a variety of issues, the ability to identify biases needs to be built into curricula and into healthy citizen-institution relationships, and social media has potential to support that goal. Nevertheless, challenges remain in mitigating bias on social media and other online spaces, where algorithms make predictions and recommendations based on user behavior. The effect is that biased judgments and decisions are further reinforced and social perspectives are tipped drastically off balance into distorted polarization. I explore the mechanisms of this effect in the next part of this series.

References

  1. De Langhe, B., Fernbach, P.M. and Lichtenstein, D.R. (2016) Navigating by the stars: investigating the actual and perceived validity of online user ratings. Journal of Consumer Research 42(6): 817–833. https://doi.org/10.1093/jcr/ucv047
  2. Gupta, P. and Harris, J. (2010). How e‐WOM recommendations influence product consideration and quality of choice: a motivation to process information perspective. Journal of Business Research 63(9): 1041–1049. https://doi.org/10.1016/j.jbusres.2009.01.015
  3. Cicognani, S., Figini, P., & Magnani, M. (2016). Social Influence Bias in Online Ratings: A Field Experiment. Quaderni — Working Paper DSE N° 1060, Available at SSRN: https://ssrn.com/abstract=2737992 or http://dx.doi.org/10.2139/ssrn.2737992
  4. Sunder, S., Kim, K. H., Yorkston, E. A. (2019). What Drives Herding Behavior in Online Ratings? The Role of Rater Experience, Product Portfolio, and Diverging Opinions. Journal of Marketing. 83(6): 93–112. https://doi.org/10.1177/0022242919875688
  5. Moe, W. W. & Schweidel, D. A. (2012) Online product opinions: incidence, evaluation, and evolution. Marketing Science 31(3): 372–386. https://doi.org/10.1287/mksc.1110.0662
  6. Magnani, M. (2020). The economic and behavioral consequences of online user reviews. Journal of Economic Surveys, 34: 263–292. https://doi.org/10.1111/joes.12357
  7. Camilleri, A. R. (2020). The importance of online reviews depends on when they are presented. Decision Support Systems, 133. https://doi.org/10.1016/j.dss.2020.113307
  8. Yin, D., Mitra, S., & Zhang, H. (2016). Research Note — When Do Consumers Value Positive vs. Negative Reviews? An Empirical Investigation of Confirmation Bias in Online Word of Mouth. Information Systems Research, 27:1, 131–144. https://doi.org/10.1287/isre.2015.0617
  9. Bhole, B., & Hanna, B. (2017). The effectiveness of online reviews in the presence of self-selection bias. Simulation Modelling Practice and Theory, 77: 108–123. https://doi.org/10.1016/j.simpat.2017.05.005
  10. Joinson, A. N. (2007). Disinhibition and the Internet. In J. Gackenbach (Ed.), Psychology and the Internet: Intrapersonal, interpersonal, and transpersonal implications (p. 75–92). Academic Press. https://doi.org/10.1016/B978-012369425-6/50023-0
  11. Anderson, A. A., Brossard, D., Scheufele, D. A., Xenos, M. A., & Ladwig, P. (2013). The “Nasty Effect:” Online incivility and risk perceptions of emerging technologies. Journal of Computer-Mediated Communication, 19, 373–387. https://doi.org/10.1111/jcc4.12009
  12. Lapidot-Lefler, N., & Barak, A. (2012). Effects of anonymity, invisibility, and lack of eye-contact on toxic online disinhibition. Computers in Human Behavior. 28 (2): 434–443. https://doi.org/10.1016/j.chb.2011.10.014
  13. Suler, J. (2004). The Online Disinhibition Effect. CyberPsychology & Behavior. 7 (3): 321–326. https://doi.org/10.1089/1094931041291295
  14. Anderson, A. A., Yeo, S. K., Brossard, D., Scheufele, D. A., Xenos, M. A. (2018). Toxic Talk: How Online Incivility Can Undermine Perceptions of Media. International Journal of Public Opinion Research, 30(1): 156–168. https://doi.org/10.1093/ijpor/edw022
  15. Hsueh, M., Yogeeswaran, K. & Malinen, S. (2015). “Leave Your Comment Below”: Can Biased Online Comments Influence Our Own Prejudicial Attitudes and Behaviors?. Hum Commun Res, 41: 557–576. https://doi.org/10.1111/hcre.12059
  16. Lee, E-J. (2012). That’s Not the Way It Is: How User-Generated Comments on the News Affect Perceived Media Bias, Journal of Computer-Mediated Communication, 18(1): 32–45. https://doi.org/10.1111/j.1083-6101.2012.01597.x
  17. Naab, T. K., Heinbach, D., Ziegele, M., & Grasberger, M-T. (2020). Comments and Credibility: How Critical User Comments Decrease Perceived News Article Credibility. Journalism Studies, 21:6, 783–801. https://doi.org/10.1080/1461670X.2020.1724181
  18. Prochazka, F., Weber, P., & Schweiger, W. (2018). Effects of Civility and Reasoning in User Comments on Perceived Journalistic Quality. Journalism Studies, 19:1, 62–78. https://doi.org/10.1080/1461670X.2016.1161497
  19. Gearhart, S., Moe, A., Zhang, B. (2020). Hostile media bias on social media: Testing the effect of user comments on perceptions of news bias and credibility. Hum Behav & Emerg Tech, 2: 140–148. https://doi.org/10.1002/hbe2.185
  20. Bialik, K. (2018, August 15). 14% of Americans have changed their mind about an issue because of something they saw on social media. Pew Research Center. https://www.pewresearch.org/fact-tank/2018/08/15/14-of-americans-have-changed-their-mind-about-an-issue-because-of-something-they-saw-on-social-media/
  21. Perrin, A. (2020, October 15). 23% of users in U.S. say social media led them to change views on an issue; some cite Black Lives Matter. Pew Research Center. https://www.pewresearch.org/fact-tank/2020/10/15/23-of-users-in-us-say-social-media-led-them-to-change-views-on-issue-some-cite-black-lives-matter/
  22. Ksiazek T. B. (2015). Civil interactivity: How news organizations’ commenting policies explain civility and hostility in user comments. Journal of Broadcasting & Electronic Media, 59(4), 556–573. http://doi.org/10.1080/08838151.2015.1093487
  23. Stroud N. J., Scacco J. M., Muddiman A., Curry A. L. (2015). Changing deliberative norms on news organizations’ Facebook sites. Journal of Computer-Mediated Communication, 20(2), 188–203. https://doi.org/10.1111/jcc4.12104
  24. Yeo, S. K., Su, L. Y-F., Scheufele, D. A., Brossard, D., Xenos, M. A., & Corley, E. A. (2019). The effect of comment moderation on perceived bias in science news, Information, Communication & Society, 22:1, 129–146. https://doi.org/10.1080/1369118X.2017.1356861
  25. Soffer, O., & Gordoni, G. (2020). The Role of User Comments in Estimation of the Public Opinion Climate and Perceived Support for One’s Opinion, International Journal of Public Opinion Research, 32(3): 569–585. https://doi.org/10.1093/ijpor/edz036
  26. Waddell, T. F. & Bailey, A. (2017). Inspired by the crowd: The effect of online comments on elevation and universal orientation, Communication Monographs, 84:4, 534–550. https://doi.org/10.1080/03637751.2017.1369137
  27. Karlsen, R., Steen-Johnsen, K., Wollebæk, D., & Enjolras, B. (2017). Echo chamber and trench warfare dynamics in online debates. European journal of communication, 32(3), 257–273. https://doi.org/10.1177/0267323117695734
  28. Stevens, H. (2017, October 16). #MeToo campaign proves scope of sexual harassment, flaw in Mayim Bialik’s op-ed. Chicago Tribune. https://www.chicagotribune.com/columns/heidi-stevens/ct-life-stevens-monday-me-too-mayim-bialik-1016-story.html
  29. Garcia, S. E. (2017, October 20). The Woman Who Created #MeToo Long Before Hashtags. The New York Times. https://www.nytimes.com/2017/10/20/us/me-too-movement-tarana-burke.html
  30. Anderson, M., Barthel, M., Perrin, A., & Vogels, E. A. (2020, June 10). #BlackLivesMatter surges on Twitter after George Floyd’s death. Pew Research Center. https://www.pewresearch.org/fact-tank/2020/06/10/blacklivesmatter-surges-on-twitter-after-george-floyds-death/
  31. Tesler, M. (2020, August 19). Support For Black Lives Matter Surged During Protests, But Is Waning Among White Americans. FiveThirtyEight. https://fivethirtyeight.com/features/support-for-black-lives-matter-surged-during-protests-but-is-waning-among-white-americans/
  32. Anderson, M., Toor, S., Rainie, L., & Smith, A. (2018, July 11). Activism in the Social Media Age. Pew Research Center. https://www.pewresearch.org/internet/2018/07/11/activism-in-the-social-media-age/
  33. Kahne, J., Hodgin, E., & Eidman-Aadahl, E. (2016). Redesigning Civic Education for the Digital Age: Participatory Politics and the Pursuit of Democratic Engagement, Theory & Research in Social Education, 44:1, 1–35. https://doi.org/10.1080/00933104.2015.1132646
  34. Kenna, J. L. & Hensley, M. A. (2019). Utilizing Social Media to Promote Civic Engagement in the Social Studies Classroom, The Social Studies, 110:2, 86–94. https://doi.org/10.1080/00377996.2018.1524360
  35. Warren, A. M., Sulaiman, A., Jaafar, N. I. (2014). Social media effects on fostering online civic engagement and building citizen trust and trust in institutions. Government Information Quarterly, 31(2), 291–301. https://doi.org/10.1016/j.giq.2013.11.007
