Thursday morning, March 5th. A hiring manager is choosing between two candidates. She feels a clear preference for one but has been trained to "examine her biases," so she spends thirty minutes articulating her reasons. She identifies that the preferred candidate went to a better school, has a neater resume layout, and gave a more polished answer to the opening question. Those reasons feel legitimate. She hires based on them. Six months later the hire is struggling. The other candidate has thrived in a similar role elsewhere. The thirty minutes of reflection didn't remove her bias. It gave her bias a vocabulary.

Carefully examining your reasons for a decision reliably makes that decision worse, not better. This is not a paradox or a caveat—it is one of the more replicated findings in decision-science research, and it runs directly against the cultural consensus that more self-reflection produces better judgment. The consensus is wrong, and the error is expensive.

The Experiment That Should Have Changed Everything

In 1991, psychologists Timothy Wilson and Jonathan Schooler published a study that has since been reproduced in various forms but remains surprisingly little known outside academic circles.

Participants were asked to taste and rank strawberry jams. One group just tasted and ranked. Another group was asked to write down the reasons for their preferences before ranking. The researchers then compared participants' rankings against rankings from a panel of food experts.

The group that simply tasted made choices that aligned closely with expert judgment. The group that analyzed their reasons made choices that correlated significantly less well with expert judgment.

Thinking carefully about why you preferred a jam made you worse at ranking jams.

Schooler had earlier given a closely related effect the name "verbal overshadowing"—the process of translating a judgment into words interferes with the underlying perceptual or evaluative process. When you analyze a preference, you shift attention to features that are easy to articulate. "This one is sweeter" or "that one has a more even texture" — these are things you can say. The gestalt quality that actually predicts whether you'll enjoy the jam over time is harder to verbalize, so it gets crowded out by the verbalizable features, which are often less predictive.

The effect generalizes. It has been replicated with face recognition (describing a face before trying to identify it in a lineup makes identification worse), with aesthetic judgments, with consumer choices. The specific domain where analyzing reasons hurts you is not jams or faces—it is anywhere that your first-pass judgment is tracking something real that language cannot fully capture.

You Don't Know Why You Do Things

The deeper problem is that introspection is not a reliable channel to your actual motivations. It feels like direct access to your inner states, but the research suggests it's mostly confabulation—plausible narrative construction that happens after the fact.

The foundational paper here is Nisbett and Wilson's 1977 "Telling More Than We Can Know." In a series of experiments, participants were influenced by factors they were completely unaware of—position effects in product selection, subtle priming, incidental environmental details—and then confidently gave reasons for their choices that had nothing to do with the actual causes. The reasons were not lies. They were the most plausible explanations people could generate. They were also wrong.

The split-brain research from Michael Gazzaniga's lab reinforces this. Patients whose left and right brain hemispheres have been surgically disconnected will perform an action controlled by the right hemisphere (which the verbal left hemisphere has no access to), and then, when asked why they did it, immediately generate a confident, coherent explanation. The explanation cannot be accurate—the left hemisphere literally does not have access to the information that drove the action. But it doesn't generate "I don't know." It generates a story. This storytelling reflex is not limited to split-brain patients. It appears to be what verbal self-report is, at baseline.

This means the reasons you give for your preferences and decisions are largely reverse-engineered narratives. They feel like causes, but they're mostly effects—constructed after the decision was already reached through processes you don't have conscious access to.

When Deliberation Makes You Worse

The interference is most pronounced in skill-based performance and preference judgment. Expert practitioners of physical skills—athletes, musicians, surgeons—perform worse when asked to think consciously about what they're doing. This is well-documented in sports psychology: asking a golfer to focus on the mechanics of their swing reliably degrades the swing. The phenomenon has a name: "paralysis by analysis," though that name undersells how structural the effect is. It's not that overthinking makes you anxious; it's that conscious deliberation competes with and disrupts the automatic processing that skilled performance runs on.

Something parallel happens with preference decisions. The part of you that evaluates whether a person, a job, a neighborhood, a relationship is right for you is not primarily linguistic. It integrates signals across many dimensions simultaneously, tracking patterns you couldn't enumerate if asked. When you translate that evaluation into language—when you make yourself list pros and cons, articulate your values, examine your assumptions—you're not improving that evaluation. You're replacing it with something coarser: a list of the things you can think of to say.

Gerd Gigerenzer's research on fast-and-frugal heuristics shows that simple, quick judgment rules often outperform elaborate deliberative models in complex real-world domains. Not because deliberation is always inferior, but because in domains where the predictive features are numerous, partly tacit, and difficult to weight correctly, a calibrated gut reaction may be integrating more relevant information than a careful verbal analysis—which can only include what you can articulate.
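One member of that fast-and-frugal family, Gigerenzer and Goldstein's take-the-best heuristic, is simple enough to sketch in a few lines. The cue names, values, and validities below are invented for illustration; the structure of the heuristic is the point.

```python
def take_the_best(option_a, option_b, cues):
    """Take-the-best: check cues in descending order of validity; the
    first cue that discriminates between the options decides. Every
    less-valid cue is ignored entirely rather than averaged in."""
    # cues: list of (cue_function, validity) pairs
    for cue, _validity in sorted(cues, key=lambda c: c[1], reverse=True):
        a, b = cue(option_a), cue(option_b)
        if a != b:
            return option_a if a > b else option_b
    return None  # no cue discriminates: guess or fall back

# Hypothetical example: which of two cities is larger?
# Cue values are binary (1 = has the feature); validities are made up.
cities = {
    "A": {"capital": 1, "airport": 1, "team": 0},
    "B": {"capital": 0, "airport": 1, "team": 1},
}
cues = [
    (lambda c: cities[c]["capital"], 0.9),  # most valid cue, checked first
    (lambda c: cities[c]["airport"], 0.8),
    (lambda c: cities[c]["team"],    0.7),
]
print(take_the_best("A", "B", cues))  # "A": the capital cue decides alone
```

The structural point survives the toy setup: one well-chosen cue can carry the whole decision, and stacking on more articulable features is not automatically an improvement.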

The Asymmetry: When to Think and When to Trust

This is not an argument against thinking. It's an argument for a more precise theory of when thinking helps.

Deliberation is useful for several things. Checking for overlooked constraints: is there a practical consideration you haven't factored in? Is there information you need but don't have? These are questions deliberation can answer. It's also useful for coordinating decisions with other people, since shared deliberation has to happen in language, even when translating the judgment into words costs some accuracy.

Deliberation is least useful—and often actively harmful—for preference judgments about complex, multidimensional options where you have prior relevant experience. In these cases, your first response is probably tracking more than your analysis will. The jam study participants who just tasted were using more information than the participants who explained why.

The practical split: use deliberation to check logistics and constraints. Trust your first judgment on preference and evaluation questions in domains where you have real experience. The person who agonizes over every decision is not being careful—they're substituting a less accurate process for a more accurate one.

The Real Purpose of Reflection

What is introspection actually useful for, if not finding your real reasons?

It is useful for understanding the story you tell yourself. That story matters—it shapes what actions feel available, what risks feel acceptable, what identity you're maintaining. But the story is not the cause; it's the narrative layer. If you want to know what actually drives your behavior, don't ask yourself. Watch your behavior. Where do you spend time you didn't intend to spend? What do you consistently avoid despite stated intentions? What choices have you made repeatedly across different framings? Behavioral patterns are more honest than explanations because they don't go through the storytelling reflex.

This is why therapies that focus on changing behavior—rather than insight into why you behave that way—often outperform insight-focused approaches for behavior change. You don't need to correctly identify your reasons; you need to change the inputs and outputs. The internal explanatory mechanism doesn't have to be accurate for behavior to change.

The hiring manager who wants to avoid bias is better served by structured rubrics evaluated before the interview, or blind resume review, than by thirty minutes of introspecting on her possible biases. The rubric doesn't require her internal story to be accurate. It routes around it.
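A rubric of this kind is mechanically trivial; what does the work is committing to the criteria and weights before seeing any candidate, so the post-hoc story can't reshuffle what counts. The criteria, weights, and ratings below are hypothetical.

```python
# Hypothetical rubric: criteria and weights fixed *before* interviews.
RUBRIC = {
    "relevant_experience": 0.4,
    "work_sample_quality": 0.4,
    "structured_interview": 0.2,
}

def score(candidate_ratings):
    """Weighted sum of the pre-committed criteria, each rated 1-5.
    Indexing by the rubric's keys raises KeyError if a criterion is
    missing, so nothing is silently skipped after the fact."""
    return sum(RUBRIC[c] * candidate_ratings[c] for c in RUBRIC)

a = score({"relevant_experience": 4, "work_sample_quality": 5,
           "structured_interview": 3})
b = score({"relevant_experience": 5, "work_sample_quality": 3,
           "structured_interview": 4})
print(round(a, 2), round(b, 2))  # 4.2 4.0 -- the rubric decides, not the narrative
```

Nothing in the procedure asks the interviewer what her reasons are; it routes around the storytelling reflex entirely.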

The Practical Adjustment

Stop using introspection as a decision-making tool for preference questions. You already have a response to most preference questions—trust it, particularly in domains where you have real accumulated experience. The analysis you perform afterward is mostly a rationalization machine, not a clarity engine.

Use reflection for two specific things: checking constraints you might have missed (practical deliberation), and understanding what story you're living in (narrative awareness). The first improves decisions by adding missing information. The second helps you see which frames are shaping your perception. Neither requires the belief that your stated reasons are your actual reasons.

When you catch yourself analyzing at length before making a preference decision you already viscerally resolved—the job offer, the relationship, the project—ask what the analysis is actually for. It is probably not improving the decision. It is probably either (a) looking for permission to do what you already want, or (b) building a case for what you've already decided. Neither requires more time.

Your first answer is usually your best answer. The elaboration is mostly performance.

Today's Sketch

March 05, 2026