Fake News

Last updated on: Mar 8, 2022



Fake news is the manipulation and transmission of information to a wide audience with the intent of deceiving individuals into believing false propositions. It may be transmitted for financial incentives (‘impure fake news’) or for loftier epistemic goals (‘pure fake news’). While fake news is not confined to social media platforms for its transmission, social media’s prevalence as a main artery and authority for information transmission gives it a contingent relationship with fake news. Because of the manner in which fake news proliferates, it is very hard to fact-check, and damage control is nearly impossible or futile.

            In “Fake News and Partisan Epistemology,” Regina Rini offers an institutional approach to the problem of fake news transmission on social media platforms. I think she misses the mark, but her topic and approach certainly warrant further attention. To that end, I offer a combined approach that empowers individual social media users while regulating the potentially limitless powers of authority and influence that social media companies are fast growing to wield. I’ll begin by highlighting the crux of Rini’s analysis of fake news and her argument for institutional regulation. Next, I’ll expose the dangers in her approach, especially with respect to individual freedom of speech and the potential monopolization of media. I’ll conclude with my own solution, which centers on the agential rights and responsibilities of individuals and institutions, and address some of its limitations.


Bent Testimony

Rini thinks that fake news is problematic insofar as it is often used for manipulation and misinformation; however, she argues that the testimonial process by which fake news is transmitted is individually reasonable. It is reasonable because it involves the transmission of information from fellow partisans who, by definition of their partisanship, hold beliefs, values, and virtues similar to one’s own. Their testimony to such-and-such information, and one’s subsequent uptake of it, is therefore epistemically virtuous even when the information is fake news. This, coupled with the lack of established norms regulating social media sharing, is what makes this a form of bent testimony. It is bent because no standard yet exists for judging an individual’s social media actions (e.g., is a retweet an endorsement of the information, and should individuals be held accountable for their retweets?).

            Rini concludes that we exist and act in a moral and epistemic gray area when we use social media platforms. From this position she argues that fake news exploits otherwise reasonable practices of communication and therefore requires institutional change rather than individual change. She takes an institutional approach to solving fake news because she deems fact-checking all the content one might encounter on social media an unreasonable endeavor for each person to reliably complete, given our cognitive and temporal limitations.

            So she posits that we should focus on establishing an institutional norm of individual accountability: one that accepts “a retweet as an endorsement” and holds individual users accountable for their epistemic actions on social media, but that also accounts for our limitations. With this she arrives at her most surprising suggestion: social media institutions ought to offload some of the fact-checking demands from the individual. That is, social media platforms could keep track of who testified to what and who retweeted whom, and could even do the fact-checking themselves. She thinks that platforms could “flag” the propositional content of information or give each user a reputation score based on their usage. She sees this as an unburdening of the individual and a responsibility of the platform to protect the veracity of information.

            To visualize how this memory offload might work, she gives the example of busy pathways shared by cyclists and pedestrians. Since each location differs, it would be unrealistic to expect people to remember which side to walk or bike on in every location. Institutions can offload this individual epistemic demand by dividing the pathway with a painted line and marking each half with the appropriate pedestrian or cyclist icon. This way, both parties know which side of the path they belong on.

            In summary, Rini wants to offload the epistemic burden of fact-checking each piece of testimony presented on social media platforms. She maintains that the testimony of our peers (in the form of retweets or shares), particularly politically charged testimony, is epistemically virtuous, but that fake news can exploit this process because we currently have no clear rules for judging social media interactions. To solve this problem, Rini suggests that institutions should be the arbiters of information on their platforms: they could label information as suspect, false, or true, or rate users by their social media actions, giving each a social score by which other users can judge them. This solves the problem of fake news insofar as information flow is regulated and individual users are held accountable for their social media usage.


The Dangers of Offloading Individual Epistemic Practices to Institutions

Rini’s analysis and approach are reasonable, and she acknowledges that they are far from perfect; however, she seems to scoff at the idea that social media companies could or would ever do anything malicious. Moreover, she appears to discount the individual’s epistemic capacity to sort through fake news. This is evident in her proposal to have social media institutions digest information on our behalf and then tell us what is what, or assign us a reputation score. She wrote this article in 2017, so we can excuse some of her judgments, but it is this naivety that worries me and that will be the sticking point for my own combined approach to dealing with the problem of fake news. In what follows I will address some of the obvious dangers of her approach and then offer a solution that emphasizes the need for individuals to maintain a skeptical mentality, especially when using social media platforms.

            The first obvious danger in Rini’s suggestion is her assumption that institutions share the goal of eliminating fake news with their users or with outside parties (politics, economics, media, etc.). I think this assumption is false because social media platforms are social platforms, not media outlets or official government entities. The content shared on their platforms should not concern them so long as the platform is achieving its purpose: creating an entertaining and dynamic space for users to interact with each other. Freedom of speech looms large in this arena. If a user or group of users can’t express themselves or their beliefs due to a conflict with the platform’s rules and regulations, the platform may quickly lose followers and cease to exist. Conversely, the more latitude a platform gives its users to express and share content, the more users it will attract. So there is an obvious incentive for social media companies to protect free speech: this most basic principle attracts more opinions and beliefs, which translates directly into new users.

            Why, then, might social media platforms elect to censor and regulate information, or rate their own users? To claim to be the most epistemically pure and least ignorant platform? To have the lowest fake news score? To have the greatest concentration of users boasting the best epistemic ratings? Clearly these reasons are ludicrous. Social media companies are driven by profit. Even if we grant that some companies might be guided by morals and values, this doesn’t necessarily lead us to Rini’s solution. So we are left with the bottom line as the prime motivator.

            The second danger arises when we consider the subjectivity and biases of algorithms and a company’s bottom line. Algorithms are developed with certain tasks, conditions, standards, and end states in mind, all of which are created by and for the company, which necessarily makes them biased to some degree. Couple this with the fact that social media companies are for-profit institutions influenced by individuals inside and outside the organization (workers vs. shareholders vs. investors), and it becomes clear that the user’s epistemic protection is not the platform’s concern. Nor is fake news the bane of its existence. In fact, insofar as fake news consists of lies mixed with truths, it would be very hard for social media algorithms to validate any informational content as true without first having a database or standard by which to judge it. So the social media company must somehow profit from censoring information and alienating (or deleting) some of its users; otherwise, this would be a counterproductive, self-defeating operation.

            This leads us to the third danger. If social media companies are actively engaged in this otherwise counterproductive practice, there must be a financial incentive to do so. Here we extend beyond the social media platform as a social entity and into the realm of politics. It is my contention that social media institutions not only benefit from political contributions but also, as they grow more powerful through their control of information, gain reciprocal power over politics. Social media platforms are then no longer social entities; they are a media arm of their political position of choice (or funder). This is the most concerning realization for individuals across the political spectrum.

            The combination of media and state is the bread and butter of totalitarian regimes. Of course, every form of government and politics involves some censorship, propaganda, and media influence. But when this influence extends to individual epistemic judgments about the veracity of information, how that information can be spread, and who is allowed to participate in communication, we reach a new level in the erosion of free speech. Through the deliberate limitation and elimination of information (especially information that contradicts the established narrative), a political entity can narrow reality for its polis for its own purposes. You will see what it wants you to see, read what it wants you to read, and think, feel, and act in ways conducive to the maintenance and sustenance of the powerful. Contradictory thoughts, books, actions, and words are dangerous to the status quo because they lead individuals to think. People ask more questions and grow restless with the infinite regress of absurd answers. Pent-up frustration ultimately leads to revolt and regime change. This process is discussed at length in the literature on the epistemology of ignorance and oppression.

            These three reasons are only the most obvious grounds for rejecting Rini’s solution: enacting her policy (which I argue has been in place for some time now) will ultimately lead to the atrophy of individuals’ critical-thinking skills. Not only does her suggestion threaten freedom of speech; it also questions and eliminates individual epistemic authority and competency. Big Brother knows what’s best for you.


Embracing Skepticism to Combat the Erosion of Critical Thinking

It might seem that my analysis of Rini’s suggestion is far-fetched, or that I’ve watched too many dystopian sci-fi movies. However, the question I pose in response to critics concerns weighing the costs and benefits of offloading epistemic reasoning skills to institutions. I assume that Rini wants to solve the problem of fake news insofar as it erodes individual epistemic practices and trust in other individuals and institutions, not because she thinks it harms institutions or is a problem in and of itself. If the protection of the individual is paramount, then we can ask which option poses the greater threat to the individual: having social media companies regulate information and assign social scores, or leaving individuals to sort through the mire and muck of today’s media environment themselves?

            I think the latter is clearly the lesser of two evils because it empowers individuals rather than assuming they are incapable of making rational decisions from scant or contradictory data. Empowering individuals places choice and control firmly in their corner, whereas management of information encourages control of the individual. Any approach that attempts to limit the effects of fake news must focus on strengthening the epistemic skill and integrity of the individual. I think the safest and surest way to do this is to cultivate a healthy skeptical mentality and to put regulatory measures in place that require institutions to safeguard individual rights.

            This combined (top-down, bottom-up) approach is already used in many other facets of daily social life. We don’t allow minors to gamble, buy tobacco or alcohol, or drive until they have reached a certain age and met certain requirements. This limits individuals’ freedom while protecting them and others from potential collateral damage (and from themselves). At the same time, we hold individuals accountable for their actions even when they haven’t passed certain legal thresholds; ignorance is rarely accepted as a justification for one’s actions. I think we can develop a similar framework for social media use once a defining desideratum is identified. If that desideratum is the protection of freedom of speech and individual rights, then a simple policy might look like the following: 1) individuals are responsible for their actions on social media, and 2) social media entities are responsible for maintaining an environment in which the individual and the platform are free from hacking, exploitation, and abuse.

            Does this solve the problem of fake news? Certainly not, but I think fake news is an irrevocable part of our everyday epistemic and social environment anyway. This approach does, however, make clear that individuals are responsible for their actions on social media platforms. A retweet is an endorsement, in my view. Since the “pressure” is on the individual, the platform is free to focus on deleting fake accounts, identifying bots, and protecting itself from hackers. Expectation management is achieved through this delineation of responsibilities. But is fake news really the problem we need to be discussing? No, but it serves as an easy distraction from the real issue of government and corporate oversight and control. Hopefully this won’t be labeled as fake news.



Rini, R. (2017). “Fake News and Partisan Epistemology.” Kennedy Institute of Ethics Journal.

Written by Matt Sandoval

Hi, I’m Matt. I am an active duty Soldier in the United States Army. I hold a Bachelor’s degree in NeuroExperimental Psychology and a Master’s degree in Philosophy. I am highly competitive in all activities (work and recreation), but my personality is a strange concoction of Seinfeld and Stoicism. I am interested in a wide variety of philosophical and military topics and gravitate towards discussions of human experience. I have a penchant for planning, organization, and building teams. If you’re looking for information, advice, or dialogue on leadership, metaphysics, or a bit of wit infused with some healthy cynicism, then I’m your guy.