Deep thinkers, this message is for you. Many of us engage with new ideas like personal jigsaw puzzles, piecing them together in the quiet corners of our minds, away from external distractions. We delight in the intellectual pursuit, convinced that through concentration and persistence, the lightbulb of insight will inevitably switch on. The problem is, this approach rests on the premise that solitary reasoning is objective and reliable. Spoiler alert: it isn’t.
Mirage of solitary reasoning
This view of solitary reasoning overlooks the inherent limitations and biases that come with trying to think through complex problems alone. Human reasoning isn’t just occasionally off—it’s systematically flawed. We cling to comforting intuitions, cherry-pick information that fits our preconceptions, and rarely subject our beliefs to the same scrutiny we reserve for others’. Contrary to the promise of reason as a means to think better on one’s own, in practice, it steers us towards overconfidence and stubbornness. If reasoning is supposed to guide us to make logical judgments and decisions, why does it routinely result in cognitive biases? The answer lies in its evolutionary origins.
Our capacity for thought is limited by the amount of time and energy we have. To manage these limitations, brains developed mental shortcuts known as heuristics. These shortcuts help us make decisions quickly and with little effort, but because they’re optimized for speed over accuracy, they can backfire spectacularly when we're left to our own devices.
One notorious example of how heuristics can lead us astray is confirmation bias—our tendency to prefer information that confirms our preexisting beliefs while ignoring contradictory evidence. This bias affects how we gather, interpret, and remember information, especially when it comes to deeply held, emotionally charged beliefs. Left unchecked, confirmation bias can turn reasoning into a self-reinforcing loop, distorting our understanding and decision-making. As we become more entrenched in our beliefs, we miss broader perspectives and viable solutions, hindering personal growth and fueling societal division—cool, right?!
It’s no wonder that, at the level of the individual, cognitive biases seem like mental flaws—errors we tolerate for the efficiency they bring. But perhaps this is the wrong perspective. If cognitive biases were genuinely harmful, natural selection would’ve stripped them away by now. Their persistence suggests these biases serve a purpose. It’s possible that they aren’t failures of reasoning but rather the result of applying reasoning outside its natural context. Historically, humans developed reasoning skills through group discussions and social interactions, which provided immediate feedback and alternative perspectives. Outside these social and dialogic environments, there’s no guarantee that reasoning will benefit a solitary thinker. Just as we wouldn’t expect our lungs to function underwater, we shouldn’t expect reason to work properly in isolation.
Why reason evolved
According to cognitive scientists Dan Sperber and Hugo Mercier’s argumentative theory, reasoning didn’t evolve to help us uncover the truth. Instead, reasoning has two main functions: justifying oneself and convincing others. Reason, they argue, isn’t about individual enlightenment; it’s about social survival. Humans are unique not just in our cognitive capabilities but in our extensive cooperation—not just with kin but with strangers, not just for immediate gains but for long-term goals. This cooperation demands sophisticated coordination and trust. Reason provides the tools for us to work together, enabling us to explain and justify our actions, signal our intentions, and set expectations. On the flip side, evaluating others’ reasons is essential for deciding whom to trust and how to collaborate effectively.
We display a curious imbalance in how we deploy reason. When we defend our beliefs, we’re often lenient and biased. However, when evaluating the arguments presented by others, we become notably more critical and rigorous. This asymmetry is a key reason why group discussions, when balanced, lead to better judgments than those made by individuals alone. Within a diverse social context, the pitfalls of confirmation bias are lessened, and a new advantage emerges. During discussions, individuals zero in on their own viewpoints. While this may seem self-serving, it actually speeds up the generation of ideas. Each person brings a unique perspective, quickly highlighting different aspects that need consideration. Although confirmation bias can mislead us when we are alone, pursuing a strongly held belief, even if biased, can be beneficial at the group level, enabling the group to cover more ground than any single individual could. As ideas are pooled together and subjected to collective scrutiny, they’re refined or rejected, resulting in robust conclusions.
David Moshman and Molly Geil conducted a series of controlled experiments using the Wason four-card selection task, a classic problem in logical hypothesis testing. Their research involved 143 college undergraduates, with 32 working alone and 20 groups composed of 5 or 6 students each. The findings were striking: only 9% of the individuals arrived at the correct solution, while a remarkable 75% of the groups did.
What made the difference? During their discussions, group members frequently challenged each other's choices, prompting one another to justify their selections and consider various consequences and alternatives. This interactive process led to final decisions that were based on a collective and genuine understanding of the task's logic, rather than simply conforming to the majority opinion or deferring to an authority figure. By sharing insights, reasons, doubts, and possibilities, groups were able to achieve a true consensus, vastly outperforming individuals. Sperber and Mercier liken the dramatic improvement in group performance to “the equivalent of getting sprinters to run a 100-meter race in five seconds by making them run together.”
The beauty of biased argumentation
When we focus on reducing cognitive errors, we often end up stifling insight. This paradox arises because systems designed to minimize mistakes can inadvertently diminish creativity and discovery. Social reasoning is an anti-fragile system, one that thrives on variability and the ability to adapt. For such systems, the capacity to mutate—sometimes successfully, sometimes not—is crucial for resilience. Consider biological examples like natural selection, which allows species to adapt and become stronger in response to environmental pressures, and the immune system, which responds to pathogens by strengthening its defenses, making the body more capable of fighting off future infections. These systems flourish through a process of trial and error, with individual successes and failures contributing to the robustness of the whole. Similarly, group reasoning leverages the diverse inputs of its members to achieve a form of Mandevillian intelligence, where the bias of individuals can lead to superior outcomes for the group.
Bias is beneficial, provided there’s a wide variety of perspectives in play. When biases skew too heavily in one direction due to a lack of counterbalancing viewpoints, groupthink sets in and the echo chamber forms. In addition to engaging with a diversity of perspectives, individuals must be willing to participate in real debate. There can be no passengers in this process! Without sincere argument, social reasoning’s effectiveness is compromised by phenomena like social loafing, where individuals exert less effort in a group, expecting others to pick up the slack. For instance, brainstorming sessions often yield fewer and poorer ideas compared to individual efforts due to a lack of immediate feedback and the tendency to withhold criticism. However, when groups engage in organized skepticism, they can overcome these challenges.
Genius → scenius
Let’s face it: trying to eliminate cognitive biases is a fool’s errand. Instead, we should harness them to enhance our reasoning. I believe the key to leveraging cognitive biases is “scenius.” Brian Eno coined the term “scenius” to describe the collective form of intelligence and intuition that arises within a cultural scene, as opposed to the traditional notion of lone genius. Eno argues that significant cultural and artistic developments are often the result of a fertile environment where many individuals contribute to an “ecology of talent” that fosters creativity and innovation.
When I was a student at an arts magnet school, art critiques were an essential part of our education. During those formative years, we learned to communicate complex ideas while still shaping our identities in relation to them. Subjecting our nascent thoughts to scrutiny was daunting, but immensely beneficial. I feel fortunate to have learned early on the value of sharing my thoughts with others, who could then help me express them more clearly. Critique not only strengthened my ideas but also helped me understand myself more deeply.
Now, more than 20 years later, I reflect on those early experiences and appreciate how they wired my brain to be open to exploring ideas with others. Lively discussions, where generous skeptics challenged my assumptions, made me more willing to expose my ignorance and trust others to help me improve. The older I get, the more I view this willingness to debate as a competitive advantage for anyone courageous enough to embrace it.
Critiques and discussions can significantly enhance our reasoning, but they are not universally effective. If we start with closely aligned beliefs, discussions can lead to polarization, reinforcing extreme positions. Conversely, when we have conflicting ideas without a shared goal, dialogue can spiral into deadlock. The most productive exchanges occur when people channel different viewpoints towards a shared objective.
To achieve this, you need the right social setting. It doesn’t necessarily require a group gathering—it can be as simple as having one-on-one conversations with a diverse network of people. I’ve been fortunate to find friends and colleagues who engage in deep, thoughtful discussions with me, and it’s transformed the way I approach problems and decisions. While these interactions can be informal, there are some guidelines I find helpful to effectively leverage social reasoning.
First, engage with people who disagree with you. Otherwise, you’ll fall into the echo chamber effect, where only similar viewpoints are discussed and reinforced. This limits your perspective and growth. Seek out individuals who bring distinct biases and life experiences. It’s likely that you’ll need to push beyond your default social circles to find people who truly differ from you. If you agree on anything, it should be the need for a thorough process to achieve clear thinking. This alignment in purpose allows your differences to work together, driving both of you towards greater clarity.
Second, evaluate arguments based on their merit, regardless of who proposed them. Good ideas can come from anywhere, so be open. On the other hand, bad ideas can come from people you trust, so be vigilant. Often, we avoid challenging ideas to spare feelings, but balancing openness with vigilance is crucial. You must disentangle the people (psychological dimension) from the problem (substantive dimension). A useful rule of thumb is: Be hard on the problem and soft on the people.
Lastly, cultivate an environment where challenging each other’s ideas is encouraged. Make it clear that everything is up for debate—nothing is off-limits. The only thing that’s sacred is our shared commitment to truth-seeking. The goal should be to dig deep and question assumptions, even core ones. If you can’t hold each other accountable to argue with rigor and depth, what’s the point? Push beyond surface-level thinking and engage in meaningful, sometimes challenging, discourse to achieve better judgments and decisions. Through this process, your reasoning will become sharper, your ideas clearer, and your understanding deeper.
Surprisingly, multiple studies suggest that people who engage in more deliberative thinking about a particular issue, such as gun control or the existence of God, display a stronger confirmation bias. Evidence that confirmation bias can thrive in more deliberative contexts sets it apart from other cognitive biases that tend to weaken under scrutiny.
This is the first of two clever analogies I will borrow from The Enigma of Reason by Dan Sperber and Hugo Mercier. They also have some inventive comparisons to the “sodium lighting” of an underground parking garage and a bomb exploding inside a plane.
Dan Sperber, an anthropologist and cognitive scientist, and Hugo Mercier, a cognitive psychologist, proposed the argumentative theory of reason in a 2010 paper titled “Why do humans reason? Arguments for an argumentative theory.” Their perspective, rooted in evolutionary psychology, sparked extensive debate and spurred further research across the cognitive sciences. I became familiar with this theory through their book, The Enigma of Reason, which explores the social and communicative functions of reasoning in depth.
In the Wason selection task, participants are shown four cards and told that each has a letter on one side and a number on the other. The visible faces are 'E,' 'K,' '4,' and '7.' The task is to decide which cards must be flipped to test the rule: "If a card has a vowel on one side, then it has an even number on the other side." Participants should flip the 'E' card, because a vowel must be checked for an even number on the reverse. They should also flip the '7' card, because if a vowel is on the other side of this odd number, the rule is falsified. The 'K' card displays a consonant, to which the rule doesn't apply, so it needn't be flipped. Similarly, the '4' card can't falsify the rule, since the rule says nothing about what must appear on the back of an even number. Thus, only the 'E' and '7' cards should be selected to test the hypothesis effectively.
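For readers who think in code, the selection logic above can be derived mechanically. This is my own illustrative sketch, not part of the original experiment: a card must be flipped exactly when its hidden side could falsify the rule, and only a vowel–odd pairing falsifies "if vowel, then even."

```python
VOWELS = set("AEIOU")

def is_vowel(face):
    return face in VOWELS

def is_odd_number(face):
    return face.isdigit() and int(face) % 2 == 1

def must_flip(face):
    """A card must be flipped iff some hidden side could falsify the rule.

    The rule is falsified only by a card with a vowel on one side and an
    odd number on the other. A visible vowel might hide an odd number; a
    visible odd number might hide a vowel. A consonant or an even number
    can never complete a falsifying pair, so flipping those cards tells
    us nothing about the rule.
    """
    return is_vowel(face) or is_odd_number(face)

cards = ["E", "K", "4", "7"]
print([c for c in cards if must_flip(c)])  # prints ['E', '7']
```

Notice that the intuitive answers most individuals give ('E' alone, or 'E' and '4') amount to looking for confirming instances, whereas the correct strategy is to hunt for potential falsifiers.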
Yep, another amazing analogy from The Enigma of Reason. I’m not ashamed to say that I highlighted about a third of this book while reading it. If this interests you, definitely check it out and let me know what you think!
Mandevillian intelligence, inspired by philosopher Bernard Mandeville, describes how individual shortcomings can enhance collective outcomes. This concept, which intersects cybernetics, psychology, and biology, illustrates that superior group conclusions often depend on diverse, imperfect individual contributions, showcasing a paradox in social reasoning and group dynamics.
Isaac Newton, often romanticized as a lone genius, famously wrote to his rival Robert Hooke, “if I have seen further, it is by standing on the shoulders of giants.” Clearly, he understood the limitations of reasoning alone.
I learned this concept from negotiation expert William Ury during my time working with him. It’s an insight I draw on frequently.