
What is cognitive bias?

Cognitive bias, defined

Cognitive bias refers to systematic errors, flawed reasoning and common misinterpretations frequently observed during the human decision-making process. All human beings are susceptible to cognitive bias, which can impair their ability to interpret data and draw accurate conclusions.

Even the most analytical and rational people can fall victim to cognitive bias, which is often unconscious. Many biases are both hidden and pervasive, potentially influencing both data analysis and the very structure of data collection. 

With recent major advancements in machine learning (ML) and artificial intelligence (AI), interest in and awareness of cognitive bias have surged. As investment in these smart systems grows exponentially, efforts to mitigate AI bias and data bias are becoming critical to both AI ethics and AI governance.  

Cognitive bias vs. logical fallacies

Though the terms logical fallacy and cognitive bias are sometimes used interchangeably, they are distinct. Logical fallacies stem from errors made in a logical argument. Cognitive bias errors go deeper, arising during thought processing itself. Cognitive biases are often rooted in problems associated with memory, recall, attention, attribution and other forms of meta-analysis.

Additionally, many types of logical fallacy stem directly from examples of cognitive bias. Cognitive bias affects critical thinking, and when logical arguments are built on biased reasoning, these systematic errors commonly create related logical fallacies.

Take, for example, the sunk cost fallacy, a term that can describe both a cognitive bias and an informal logical fallacy. Comparatively:

As a logical fallacy: The sunk cost fallacy refers to a logically flawed argument for continuing to invest resources (time, money, effort and so on) in a project based solely on the amount of unrecoverable resources already invested. A gambler playing a game with unfavorable odds can’t logically justify continuing to play just because they’ve already lost a large sum. Logically, the lost money is irrelevant to the current decision to bet again or walk away.

As a cognitive bias: The common saying, “throwing good money after bad,” captures the essence of the sunk cost fallacy well. From a cognitive perspective, emotional factors such as loss aversion, social pressures, shame and panic can lead the gambler to think irrationally. They might conclude that, despite past evidence demonstrating their inability to win money, the next hand will be different. 

Why cognitive bias matters 

Cognitive bias is insidious because the decisions it produces are not always glaringly wrong. They might instead be only subtly suboptimal, or even coincidentally accurate. This lack of a clear feedback loop is one reason cognitive biases persist.

If not caught early, the downstream effects of cognitive biases during planning phases can create systemic issues that might lead to erroneous, inequitable or dangerous results. But by studying and understanding cognitive bias, decision makers and those designing or securing decision-making systems can achieve better results.

Cognitive bias and cybersecurity

Understanding cognitive bias can help security teams build stronger and more secure systems.

By taking note of common errors in human thinking, cybersecurity professionals can better recognize weaknesses in security tools and solutions. An awareness of cognitive biases can also help developers and security pros alike avoid potentially critical flaws when designing, building and implementing security platforms, apps and other systems.

In situations where security is essential—such as securing sensitive business communications, protecting state secrets or safeguarding consumer financial information—cognitive bias can lead to systemic vulnerabilities that might allow bad actors to punch through otherwise rigorous security measures. 

For example, the false consensus cognitive bias is the tendency to assume that one’s own opinions are widely considered correct, even if there is no data to support that conclusion. This bias could lead a developer to believe that certain types of security features aren’t necessary for the type of app they’re building. As a result, they might inadvertently leave their system vulnerable to cyberattacks they never anticipated.

Cognitive bias and artificial intelligence 

Cognitive biases impair the ability to interpret data and make informed decisions.

For minor decisions, cognitive bias might result in poor choices, but the stakes are small. For bigger decisions, cognitive bias becomes a greater threat.

Perhaps the greatest threat posed by cognitive bias comes from the impact these errors can have on the design of systems that will themselves make many decisions. Thus, as AI systems grow increasingly prevalent, the need for safeguards against AI bias becomes urgent.

AI models often have flaws stemming from both the cognitive biases inherent in the human developers who design these systems and the biases contained in the systems’ training data. The resulting systems then have biases of their own, which can result in discrimination and other deleterious effects. 

Cognitive bias and AI governance

AI governance helps address and prevent issues such as algorithmic bias, in which ML algorithms produce unfair or discriminatory outcomes that can reinforce preexisting socioeconomic, racial or gender inequalities. 
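
One way to make “unfair or discriminatory outcomes” concrete is a demographic parity check, which compares the rate of favorable outcomes across groups. The following is a minimal sketch, assuming hypothetical binary predictions and group labels; a real fairness audit would examine many more metrics.

```python
# Minimal sketch of a demographic parity check; the predictions, group
# labels and example data are hypothetical illustrations.
def demographic_parity_difference(predictions: list[int], groups: list[str]) -> float:
    """Largest gap in favorable-outcome rate between any two groups."""
    rates = {}
    for group in set(groups):
        outcomes = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())

# A hypothetical model that approves group "A" 80% of the time but group "B" only 40%:
preds = [1, 1, 1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(f"{demographic_parity_difference(preds, groups):.2f}")  # 0.40
```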

AI governance processes, standards and guardrails help AI system architects create tools that are both safe and ethical. AI governance informs frameworks for AI research, development and applications that help ensure safety, fairness and respect for human rights.

One way that AI governance addresses bias is by promoting the inclusion of diverse sets of training data. In areas such as health care, it’s critical to draw data from a wide range of populations that might have unique contextual circumstances.

For instance, consider a health care AI designed to detect signs of lung cancer. If this model is trained only on data from nonsmokers living in rural areas, it might be less effective when analyzing subjects who smoke and live in high-pollution areas. To make a more effective AI, it would be vital to include training data that represents as many potential scenarios as possible. 
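
As a rough illustration of what representative training data can mean in practice, a team might audit subgroup coverage before training begins. The sketch below assumes a pandas DataFrame with hypothetical smoker and region columns and an arbitrary 5% threshold; real audits are considerably more involved.

```python
# Minimal sketch: auditing a training set for subgroup coverage.
# The columns, threshold and skewed example data are hypothetical.
import pandas as pd

def audit_subgroup_coverage(df: pd.DataFrame, columns: list[str], min_share: float = 0.05) -> list[str]:
    """Flag any subgroup that makes up less than min_share of the data."""
    flags = []
    for col in columns:
        for group, share in df[col].value_counts(normalize=True).items():
            if share < min_share:
                flags.append(f"{col}={group}: only {share:.1%} of training data")
    return flags

# A hypothetical training set skewed toward rural nonsmokers
train = pd.DataFrame({
    "smoker": ["no"] * 970 + ["yes"] * 30,
    "region": ["rural"] * 960 + ["high_pollution_urban"] * 40,
})
for flag in audit_subgroup_coverage(train, ["smoker", "region"]):
    print("Underrepresented:", flag)
```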

Another example of AI governance in action would be efforts to institute and maintain human-in-the-loop (HITL) systems, which require human oversight of AI outputs. HITL frameworks provide an additional check: if a human reviewer detects bias that the AI system missed, they can override the AI’s decision.
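
As a minimal sketch, a HITL gate can be expressed in a few lines of code: low-confidence outputs are routed to a human reviewer instead of being acted on automatically. The confidence threshold and reviewer callback here are hypothetical illustrations, not a reference implementation.

```python
# Minimal sketch of a human-in-the-loop (HITL) gate; the threshold and
# reviewer callback are hypothetical illustrations.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str  # "model" or "human"

def hitl_decide(model_label: str, model_confidence: float,
                review: Callable[[str, float], str],
                threshold: float = 0.9) -> Decision:
    """Accept the model's output only when confidence is high; otherwise
    route it to a human reviewer, who may confirm or override it."""
    if model_confidence >= threshold:
        return Decision(model_label, model_confidence, decided_by="model")
    human_label = review(model_label, model_confidence)
    return Decision(human_label, model_confidence, decided_by="human")

# Usage: a reviewer overrides a low-confidence "approve" with "deny"
decision = hitl_decide("approve", 0.62, review=lambda label, conf: "deny")
print(decision)  # Decision(label='deny', confidence=0.62, decided_by='human')
```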

AI Academy

Uniting security and governance for the future of AI

While grounding the conversation in today’s newest trend, agentic AI, this AI Academy episode explores the tug-of-war that risk and assurance leaders experience between governance and security. It’s critical to establish a balance between the two and prioritize a working relationship, so that your organization can scale better, more trustworthy data and AI.

History of cognitive bias

While cognitive bias has been observed for centuries, the concept was first formally introduced in the 1970s by research psychologists Amos Tversky and Daniel Kahneman. In 1974, the pair published their paper, “Judgment Under Uncertainty: Heuristics and Biases,” which first outlined the ways in which people tend to rely on mental shortcuts, known as heuristics, when making decisions or drawing conclusions.

In the fields of social psychology and behavioral economics, researchers have theorized that humans evolved to use heuristics to alleviate cognitive load, especially when perfect accuracy isn’t as important as efficiency. Tversky and Kahneman found that heuristics—common rules of thumb or “tricks of the trade”—were especially prevalent in situations where judgments must be made quickly, without full context or under general uncertainty. 

Although some heuristics can effectively help people, organizations and machines make fast and effective decisions, Tversky and Kahneman identified how cognitive bias can frequently lead to undesirable outcomes. 

Unfortunately, even people who are aware of cognitive biases are still susceptible to them. However, by studying the different types of cognitive bias, one can reduce their negative impacts on the decision-making process. It can be especially helpful to have analyses and decisions reviewed by outside teams or systems specially trained to recognize cognitive biases.

Types of cognitive bias

The first step in combating bias is to become aware of the various ways it manifests. Since the publication of Tversky and Kahneman’s breakthrough paper, researchers have discovered many types of cognitive bias.

Some of the most common types of cognitive bias are:

Confirmation bias

Confirmation bias is the tendency to over-value information that confirms existing beliefs, while devaluing information that would contradict them. For example, someone who believes that most cyberattacks are the result of outsider threats might brush off growing reports of insider attacks as outliers.

Actor-observer bias

The tendency to assume that one’s own conditions, experiences or circumstances can be attributed to outside sources, while other people’s circumstances are the result of their own actions. For example, someone who falls for a phishing email might believe that the cybercriminals were exceptionally clever. Yet the same person might see a coworker fall for a similar scam and conclude that their coworker is gullible and easily fooled.

Self-serving bias

Like the actor-observer bias, the self-serving bias attributes negative personal results to external forces while giving oneself undue credit for positive outcomes. The self-serving bias could lead a person to assume that a lucky poker hand is the result of their own skill, while interpreting a losing hand to be an unavoidable twist of fate. 

Fundamental attribution error

The fundamental attribution error is the tendency to judge others’ behavior as a product of their character or internal motivations while overlooking external circumstances. For example, a frustrated person stuck behind a slow driver might assume that said driver is thoughtless and rude, without considering potential external factors such as poor driving conditions or engine trouble.

Anchoring bias

The tendency to place outsized importance on the first piece of information learned about a new subject. Vendors can leverage the anchoring bias when they advertise a product’s former, higher price. By anchoring the customer with a larger number, the slashed price seems a better deal.

Halo effect

The halo effect refers to how one’s impression of a person influences one’s ability to judge their character or abilities. This type of cognitive bias can manifest in many ways, but one particularly unfortunate example is the ability of conventionally attractive or personable people to quickly gain the trust of strangers. Cybercriminals often abuse the halo effect when attempting social engineering attacks by, for example, using attractive avatars. 

Attention bias

The tendency during decision making to pay more attention to certain factors while ignoring others. For example, when shopping for cybersecurity solutions, someone laboring under an attention bias might fixate on the overall cost of a product without considering whether it meets all their needs.

Naïve realism

The tendency to perceive one’s own subjective experience of reality as objective and expect others to perceive the world the same way. Ironically, someone suffering from naïve realism might dismiss those who hold differing views as irrational or biased.

Misinformation effect

The tendency for outside analysis or information received after an event to distort one’s own perception or memory of it. This type of bias has led even eyewitness observers to doubt their own eyes based on the analysis of biased third parties.

Functional fixedness

The tendency to assume that because something isn’t designed to fulfill a certain function, it cannot possibly fulfill that function. For example, because a wrench is not designed to hammer a nail, one might assume that it cannot be used as a hammer. However, in a pinch, a wrench can in fact be used like a hammer.

Optimism bias

The tendency to believe one is less likely to fail and more likely to succeed. While a bias toward optimism can be valuable in certain situations, an unjustified assumption of success can lead to ruin. Decisions influenced by an optimism bias are more likely to discount serious issues and leave decision-makers exposed to potentially devastating outcomes. 

Survivorship bias

A type of sampling bias, survivorship bias is the tendency to mistake a subset of a larger group for the entire group. Survivorship bias leads people to incorrectly focus all their attention on only data points that “survive” some number of qualifying tests.

One of the most famous examples of survivorship bias comes from World War II.1 When consulted as to where best to reinforce airplane armor, Abraham Wald made his recommendations based on a statistical analysis of where planes returning from combat showed the most damage. However, instead of reinforcing these areas, Wald recommended reinforcing the areas that showed the least damage. Wald understood that his data represented only planes that had survived combat, so he rightly inferred that the damage these planes incurred was not as critical as damage done elsewhere. Planes that did suffer damage in the unrepresented areas were destroyed and didn’t survive to be catalogued in the damage distribution analysis. 
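
The statistical heart of the story is easy to simulate. In the toy sketch below, where the section names and loss probabilities are invented for illustration, hits are distributed uniformly across the aircraft, yet engine hits are rare among survivors, exactly the distortion Wald corrected for.

```python
# Toy simulation of survivorship bias; section names and loss
# probabilities are invented for illustration.
import random

random.seed(7)
SECTIONS = ["fuselage", "wings", "engine", "tail"]
P_LOST = {"fuselage": 0.1, "wings": 0.1, "engine": 0.8, "tail": 0.2}

hits_all = {s: 0 for s in SECTIONS}
hits_survivors = {s: 0 for s in SECTIONS}
for _ in range(100_000):
    section = random.choice(SECTIONS)      # each plane takes one hit, uniformly
    hits_all[section] += 1
    if random.random() > P_LOST[section]:  # plane survives to be inspected
        hits_survivors[section] += 1

for s in SECTIONS:
    print(f"{s:>8}  all hits: {hits_all[s]:6}  seen on survivors: {hits_survivors[s]:6}")
# Engine hits are as common as any other, yet rare among returning planes,
# so inspecting only survivors would wrongly suggest engines are seldom hit.
```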

Bandwagon effect

The tendency to make a decision, come to a conclusion or adopt a belief based on the number of other people who hold similar positions. Essentially, the bandwagon effect describes the influence that popularity and social pressures can have on one’s psychology. 

The framing effect

The framing effect describes how the way information is presented can lead to different conclusions. For example, a political survey asking respondents to rate their approval of “the bad things the governor has done” will yield lower scores than a more objective description of “the governor’s policies.”

Availability bias

The tendency to over-value information that is readily available or easy to recall. This bias might lead a doctor to misdiagnose a patient with the flu based on the presence of flu-like symptoms, without testing for other diseases with similar symptoms. While assuming the flu might be easier, decisions made with such limited information are often shaped by the availability bias.

Base rate fallacy

Also known as base rate neglect, the base rate fallacy describes the tendency to overlook or ignore general, statistical or population-level data (the base rate) in favor of specific or anecdotal evidence. In analysis, it’s critical to first establish a base rate to properly measure any statistical deviations.
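
A quick worked example, using Bayes’ theorem with hypothetical numbers, shows why the base rate matters: even a highly accurate test produces mostly false positives when the underlying condition is rare.

```python
# Worked example of the base rate fallacy via Bayes' theorem; the
# prevalence and test accuracy figures are hypothetical.
def posterior(base_rate: float, sensitivity: float, false_positive_rate: float) -> float:
    """P(condition | positive test), accounting for the base rate."""
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# A test that is 99% sensitive with a 5% false positive rate, for a
# condition affecting 1 in 1,000 people:
p = posterior(base_rate=0.001, sensitivity=0.99, false_positive_rate=0.05)
print(f"P(condition | positive) = {p:.1%}")  # roughly 1.9%, not 99%
```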

Hindsight bias

Also called the knew-it-all-along phenomenon or creeping determinism, hindsight bias is the tendency to perceive past events as having been more predictable than they actually were. In hindsight, it can seem easy to draw causal connections as though the present were a foregone conclusion. In the moment, though, many outcomes were possible, making the future less clear.

Representativeness heuristic

This cognitive bias is the tendency to rely on assumptions rooted in stereotypes rather than objective information. For example, a person might assume that a musician with tattoos and spiked hair plays rock and roll, when they might in fact be a classical violinist. The representativeness heuristic describes the tendency to “judge a book by its cover.” 

The Dunning-Kruger effect

Named for researchers David Dunning and Justin Kruger, this cognitive bias describes the tendency for individuals who are unskilled in a particular domain to overestimate their performance or capabilities within that domain. One explanation for this overconfidence is that as a person learns new information about a subject, they become more aware of how deep that subject is. Neophytes have yet to acquire this awareness.

Ironically, a recent study found something of a reverse Dunning-Kruger effect when it comes to using AI. Researchers asked two groups of people to solve logical problems. One group was permitted to use AI to help them draw conclusions; the other wasn’t. After taking the test, participants who used AI were also asked to estimate how well they thought they performed and how experienced they were with AI tools.

Typically, the Dunning-Kruger effect would predict an inverse correlation between confidence in using AI and a subject’s actual ability to use AI effectively. However, the opposite was observed: subjects with higher levels of AI experience also overestimated their ability to use AI effectively.

While further study is required before drawing any meaningful conclusions, this report does underscore the crucial importance of examining cognitive bias. When it comes to decision-making, cognitive bias can dramatically affect outcomes and conclusions in unexpected ways. When cognitive bias leads to errors made during the decision-making process, even the best, most insightful and impactful data can be grossly misinterpreted. 

Author

Josh Schneider

Staff Writer

IBM Think

Related solutions
IBM® watsonx.governance®

Govern generative AI models from anywhere and deploy on the cloud or on premises with IBM watsonx.governance.

Discover watsonx.governance
AI governance solutions

See how AI governance can help increase your employees’ confidence in AI, accelerate adoption and innovation and improve customer trust.

Discover AI governance solutions
AI governance consulting services

Prepare for the EU AI Act and establish a responsible AI governance approach with the help of IBM Consulting®.

Discover AI governance services
Take the next step

Direct, manage and monitor your AI with a single portfolio to speed responsible, transparent and explainable AI.

  1. Explore watsonx.governance
  2. Book a live demo
Footnotes

1. “Abraham Wald’s Work on Aircraft Survivability,” Journal of the American Statistical Association, June 1984.