
The Scope of Misinformation

The ability to find information within seconds has been hailed as one of the greatest inventions of this generation. That invention is, of course, the Internet: a network of interconnected devices across the globe, creating and sharing information to be consumed. With advancing technology, cheaper computers, and faster Internet service, access to this universe of interconnectedness has grown, and with it the amount of content available for consumption. The sheer volume of content, the sources it comes from, the tools used to look it up, and the decision-making skills of the average adult have together led to this age of increased misinformation.

The democratization of information is a relatively new form of spreading knowledge, and it has led to copious amounts of content on the Internet. The sheer amount of information, however, is only part of the problem. Tewksbury and Rittenberg define information democratization as “the increasing involvement of private citizens in the creation, distribution, exhibition, and curation of civically relevant information” (147). Breaking the definition into its component parts, creating information is not the only way to partake in this democratization. When an Internet user shares a link or reposts it on their own blog or webpage, they are distributing that content. When they post a story to their Facebook feed, they are participating by keeping that information relevant and on display for others to see. A high percentage of the American population takes part in this creation and spread of information: 68% of Americans use Facebook, up from 66% last year (Shearer and Gottfried). The definition also includes the word curation, the process of organizing and displaying information just as the curator of a museum would collect and display pieces of art. Where the term curator was once reserved for those with a keen eye, it now rests on the shoulders of the average Internet user. “In an age when bloggers are perceived as more credible sources than traditional media outlets,” the user becomes a self-proclaimed curator who controls the content displayed on his or her webpage (Johnson and Kaye qtd. in Savelli). The average Internet user is becoming the new source of information, and this new wave of content creation has led to enormous amounts of data being posted online.

The amount of information available on the Internet has grown exponentially in the past few years, and the rate at which these new content creators produce it far outpaces the rate at which consumers can absorb it. A recent IBM study claims that 2.5 quintillion bytes of data are created every day and that 90% of the world's information has been created in the past two years (Helfand qtd. in Tyson et al.). With so much information created in a single day, it is very improbable that an individual could sort through it all. To put that number in perspective, if all 2.5 quintillion bytes were printed out, they would amount to 185,000 pages of information created per day per person (Helfand qtd. in Tyson et al.). This growth is linked to the falling cost of owning a computer and expanding access to the World Wide Web, as the 2012 United States Census found: “as of 2012, 74.8% of all households have Internet use at home” (Savelli). These numbers show not only the dramatic increase in the amount of content being created for consumption but also a correlation between rising Internet usage and the amount of content available on the Web. Most Internet users do not sit in front of their computers and consume this content straight from the source; they seek answers to specific questions. With so much content available to be found, simply navigating through it is no easy task.
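As a rough sanity check on that figure, a back-of-the-envelope sketch is possible. The population and page-size values below are my assumptions, not numbers from the cited sources, but with them the arithmetic does land near 185,000 pages per person per day:

```python
# Back-of-the-envelope check of the "185,000 pages per person per day" figure.
# Assumed values (not from the cited sources): ~7.5 billion people and
# ~1,800 bytes of plain text per printed page.
bytes_per_day = 2.5e18      # 2.5 quintillion bytes created daily (IBM estimate)
world_population = 7.5e9    # assumed world population
bytes_per_page = 1800       # assumed plain-text bytes per printed page

bytes_per_person = bytes_per_day / world_population    # ~3.3e8 bytes per person
pages_per_person = bytes_per_person / bytes_per_page   # ~185,000 pages per person
print(f"{pages_per_person:,.0f} pages per person per day")  # prints ~185,185
```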

The tool most Internet users rely on to search through this vast information plays a pivotal role in misinformation as well: the Google search engine. The algorithm Google uses to rank search results influences what information a user sees. Harald Holone sheds light on how the algorithm orders its results: “[T]he results are sorted not only by objective relevance, but rather is heavily influenced by your search history, your social network, when you are searching and where you are searching from” (Holone). When searching on Google, the user expects the information received to be correct, accurate, and credible. As Holone shows, however, the results returned are not as objective as once believed. This should not be a problem if users thought critically about the source of information, yet “research shows that many people believe that just being in the top five hits of a Google search is a sign that a piece of information is credible” (Fox qtd. in Stebbins 2). The way the search engine works promotes a subliminal confirmation bias: it takes what one has already searched for or liked and feeds it back. This bubble of reinforcement that the average Internet user lives in was coined the “filter bubble” by Eli Pariser in his book The Filter Bubble: What the Internet Is Hiding From You. In short, the filter bubble is the idea that people are exposed only to information that passes their personalized selection criteria, leaving consumers with a narrow outlook as they are fed the same information in a constant feedback loop (Holone). The user inside a filter bubble is left oblivious to information that does not pass the selection criteria and becomes misinformed through this narrowing of the information received.
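Google's actual ranking system is proprietary, but the feedback loop Holone and Pariser describe can be shown with a toy model. In this sketch, every name, field, and weight is invented for illustration: results are scored by base relevance plus agreement with the user's profile, and the profile drifts toward whatever gets clicked.

```python
# Toy sketch of a personalization feedback loop (illustrative only --
# not Google's actual, proprietary ranking algorithm).
# Each result has a topical "leaning" in [-1, 1]; ranking blends base
# relevance with agreement between the result and the user's profile.

def rank(results, profile, personalization=0.7):
    def score(r):
        # Reward results whose leaning points the same way as the profile.
        return r["relevance"] + personalization * r["leaning"] * profile
    return sorted(results, key=score, reverse=True)

results = [
    {"title": "Source A", "leaning": -0.8, "relevance": 0.9},
    {"title": "Source B", "leaning":  0.0, "relevance": 0.9},
    {"title": "Source C", "leaning":  0.8, "relevance": 0.9},
]

profile = 0.1  # a single earlier click gives a slight initial tilt
for step in range(5):
    top = rank(results, profile)[0]                 # user clicks the top hit
    profile = 0.5 * profile + 0.5 * top["leaning"]  # profile drifts toward it
    print(step, top["title"], round(profile, 2))
```

After the first click, Source C tops every subsequent ranking: the tiny initial tilt is amplified rather than corrected, which is precisely the narrowing Holone describes.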

This tailoring of information by the Google search engine fails to promote knowledge and instead promotes misinformation through knowledge fragmentation. The ability of users to seek out and select only content that matches their views, together with the creation of specialized information outlets, is dubbed fragmentation (Tewksbury and Rittenberg 119-121). The Google search engine fragments results through its search algorithm: it uses what it knows about the searcher, including search history, to tailor the results each individual sees. The lack of objectivity in Google's results, combined with users' willingness to accept the first few hits as accurate, demonstrates the role fragmentation plays in misinformation. Google is not the only culprit, either. Google is a passive form of fragmentation, whereas Facebook is built on user choice. Facebook excels at fragmenting groups by interest: people choose what appears on their news feed, accept or deny friend requests, and block pages they do not wish to see. Facebook then takes it one step further with its news feed algorithm, curating information for the user based on certain signals. For example, it looks at the friends an individual interacts with most and the posts they have recently liked in order to surface content its algorithm deems relevant to them specifically (Bessi et al.). This can be seen in the advertisements on a user's home screen, which reflect their recent Internet activity. Both Google and Facebook use personalized algorithms to fragment information in their own way, creating a specific view for each individual user.
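Facebook's news feed ranking is likewise proprietary, but the description above (weighting the friends a user interacts with most and the posts they have recently liked) suggests a simple affinity-scoring sketch. The weights, fields, and names below are invented for illustration, not taken from Facebook:

```python
# Illustrative affinity-based feed ranking (a sketch, not Facebook's algorithm).
# Posts from friends the user engages with most, on topics the user has
# recently liked, float to the top; everything else sinks out of view.

def feed(posts, interactions, liked_topics, limit=2):
    def affinity(post):
        friend_weight = interactions.get(post["author"], 0)  # engagement count
        topic_weight = 2 if post["topic"] in liked_topics else 0
        return friend_weight + topic_weight
    return sorted(posts, key=affinity, reverse=True)[:limit]

posts = [
    {"author": "Alice", "topic": "politics"},
    {"author": "Bob",   "topic": "science"},
    {"author": "Carol", "topic": "politics"},
    {"author": "Dave",  "topic": "travel"},
]
interactions = {"Alice": 5, "Carol": 3}  # we comment on Alice and Carol often
liked_topics = {"politics"}              # and recently liked political posts

for post in feed(posts, interactions, liked_topics):
    print(post["author"], "-", post["topic"])
# Only Alice's and Carol's political posts appear;
# Bob's science post and Dave's travel post are never shown.
```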

Fragmentation of content does not affect just a small share of the population. Looking at how many Americans get their news from Facebook, it is evident that the platform is being used to spread misinformation and enforce fragmentation. According to a Pew Research Center poll published September 7, 2017, 67% of Americans reported getting some of their news from social media, with 45% of the adult population getting news from Facebook (Shearer and Gottfried). Those 45% are subject to an effect called “selective exposure” (Savelli). Because users choose the information they would like to see, they end up viewing more information from others with similar interests and values (Savelli). This sharing of information among like-minded individuals creates closed-minded consumers, since they never receive information that conflicts with what they already hold. This tailored presentation of information tells a one-sided story.

These groups of closed-minded individuals, sharing and consuming information tailored to their tastes by the Google and Facebook algorithms, are becoming less informed. Such groups have been dubbed echo chambers; Stebbins notes that they form when the same information is created, consumed, and shared, reinforcing or magnifying those ideas. She cites as an example a liberal news outlet that shares information from a like-minded source (Stebbins 3). Without ever hearing a conflicting point of view, these groups become biased, seeing a given issue only one way and creating a gap in each member's knowledge. Knowledge gaps not only leave different segments of society with different information about the same topic; they can also influence the way people process information, and they could play a role in the socio-economic inequalities that already face humanity today (Tewksbury and Rittenberg 166). If everyone is not receiving the same information, or if some of that information is skewed, there is a lack of equality in exposure to knowledge. As echo chambers multiply, that gap in equality may widen as well. These biased information groups leave people misinformed through the knowledge gaps they have created.

The high percentage of users who get their information from social media sites such as Facebook raises another issue with the democratization of information: the lack of edited content. In most cases, when one publishes a book, an editor must approve the manuscript prior to print; most scientific journals require a peer-review process before a piece of work is published. The use of Facebook in the creation, distribution, and consumption of information (democratization) drops the editor out of the process entirely. “[T]raditional gatekeepers such as journalist and editors have little place in [social networking sites] since people are able to publish their content with few (if any) barriers” (Nah and Chung qtd. in Savelli). Without an editor to monitor them, curators can create and disseminate whatever information appeals to them. The editing of content is left to the self-righting principle, the idea that bad information will eventually be seen for what it is and only accurate information will remain (Savelli). The problem is that this is not working. These curators of information have not been able to clear the hurdles that the Google search engine, echo chambers, and the Facebook news feed put in front of them. As a result, people cannot discriminate good information from bad, and they continue sharing it regardless of its accuracy. “In fact, the [marketplace of ideas] does not discriminate factual from fictitious information when publishing ideas” (Carter and Freist qtd. in Savelli). Facebook itself is one big marketplace filled with millions of users' ideas, making it just another tool in the curation of information, even though the information it spreads may not always be accurate.

The Google search engine and the Facebook news feed are only the tools society uses to look up information; the prevalence of misinformation is also attributable to the way humans analyze information to make decisions. The quick rules people use to make decisions are called heuristics, and two heuristics that can lead to misinformation are satisficing and confirmation bias. When a person looks up a small amount of content to compile information that is merely good enough to make a decision, they are satisficing (Stebbins 63). Satisficing shows that people are willing to trade correct information for information that is just good enough. Related to satisficing is the amount of effort used to locate information: Moody argues that “the convenience of the information access can supersede the evaluation of a low credible source” (Savelli). Satisficing, together with the notion that convenience overpowers humanity's need to seek out a genuinely credible source, is a powerful obstacle in the search for accurate information. Confirmation bias also contributes to the spread of misinformation; it “has been shown to play a pivotal role in the diffusion of rumors online” (Bessi et al.). Confirmation bias is the act of accepting information as correct if it supports the individual's current beliefs and rejecting it as wrong if it goes against them (Stebbins 63). An example of this bias with disastrous results came in the 1960s, when the United States administration declined to accept accurate intelligence reports on the situation in Southeast Asia, preferring to keep its worldview because its held beliefs mattered more than accuracy (Arendt qtd. in Marshall). It is evident that biases and time constraints create a barrier to accurate information selection.
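Both heuristics are simple enough to state as rules. The sketch below (a minimal illustration with invented thresholds and data, not from Stebbins) renders satisficing as "stop at the first result that clears a low bar" and confirmation bias as "accept a claim only if it matches prior beliefs":

```python
# Minimal sketches of the two heuristics (illustrative values throughout).

def satisfice(results, good_enough=0.5):
    """Return the first result whose quality clears a low bar --
    convenience wins over a full search for the most credible source."""
    for r in results:
        if r["quality"] >= good_enough:
            return r  # stop searching, even if better sources come later
    return None

def confirmation_biased(claim, prior_beliefs):
    """Accept a claim if it supports existing beliefs, reject it otherwise,
    regardless of the claim's actual accuracy."""
    return claim["position"] in prior_beliefs

results = [
    {"source": "random blog", "quality": 0.6},
    {"source": "peer-reviewed study", "quality": 0.95},
]
print(satisfice(results)["source"])  # "random blog": good enough, search stops

beliefs = {"vaccines are harmful"}   # an inaccurate prior belief
claim = {"position": "vaccines are harmful", "accurate": False}
print(confirmation_biased(claim, beliefs))  # True: accepted despite being false
```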

Some may argue that society is better off when people have access to more information from multiple sources as a result of information democratization. However, humans must contend with social pressures as well as internal and external moral obligations, both to themselves and to the consumer, in the challenge to curate accurate information. In a 2004 study, social psychologists documented that 78% of their test subjects admitted to lying to a stranger in the course of a ten-minute conversation out of the need to fit in and seem welcome (Taylor and Feldman qtd. in Marshall). These findings show that, faced with being accepted or shunned by another person, people will explicitly choose to spread misinformation rather than be left out, disregarding any moral obligation they may have toward that stranger. The study also points out how easy it becomes to lie to someone solely because that person is a stranger. The Internet is nothing but a massive exchange of information among strangers, and the anonymity offered by social media sites and the Internet opens the door to more deceit and misinformation. Savelli calls on the dissenting opinion of a circuit judge to argue this point: the judge commented that the use of the computer in the creation of information can lead to incorrect or deceptive information (Savelli). This may seem like the stray opinion of a dissenting judge unhappy with the outcome of a suit. However, “[t]he Anti-Defamation League also noted that internet users use deceptive information as a way of persuading readers” (Braun qtd. in Savelli). The claim by the circuit judge, together with the Anti-Defamation League's observation, shows that creators of information feel no moral obligation to the consumers of their content; the urge to spread information, regardless of its accuracy, outweighs any potential harm that may befall the consumer.

The Internet was meant to be a place to share information, and that sharing is certainly happening on a very large scale; yet in many ways this scale is flawed. The massive amount of information has been made possible by the rise of consumer-created content. With the help of Google and Facebook, content editors have been replaced by content creators who carry biases, cognitive hurdles, and a disregard for any moral obligation toward their fellow readers. Misinformation will only get worse as content becomes more and more fragmented, motivated by deceit, bias, and even naivety, while legitimate editors are continually left out of the process.
