Science of Money

The seductive allure of neuroscience: Why brain talk feels so satisfying, even when it explains nothing

by Eric W. Dolan
May 11, 2026

Pick up a magazine article about willpower, memory, or mental illness, and chances are you’ll encounter a reference to the prefrontal cortex, a colorful brain scan, or some mention of dopamine. Advertisers know this too, slapping “brain-based” onto everything from toddler toys to energy drinks. The question is whether sprinkling neuroscience into an explanation actually changes what people think, or whether it just feels impressive without doing much real work.

A 2023 analysis in Public Understanding of Science pulls together 15 years of experiments on this question and finds that the answer depends heavily on what you’re asking people to judge. Irrelevant neuroscience genuinely makes explanations feel better, but it rarely changes what people actually believe or decide.

The puzzle of conflicting results

The phenomenon has a name: the seductive allure of neuroscience explanations, or SANE effect. Ever since a pair of 2008 studies showed that adding brain images or neural jargon boosted ratings of scientific arguments, researchers have tried to pin down how strong and reliable the effect really is. The results have been all over the map. Some studies found dramatic effects, with mock jurors swayed by brain scans and readers rating sloppy explanations as satisfying once neurons were mentioned. Others found essentially nothing.


Elizabeth M. Bennett and Peter J. McLaughlin of Pennsylvania Western University set out to make sense of the scattered evidence. They combined 60 individual experiments from 28 publications, covering more than 13,000 participants, and applied a statistical technique called meta-analysis that pools results across studies to estimate a single average effect. Their aim was to determine whether the SANE effect is a real phenomenon, how large it is, and why different studies keep producing different answers.
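The pooling step at the heart of a meta-analysis can be sketched with a toy example. The (effect size, variance) pairs below are invented for illustration, and the DerSimonian-Laird estimator shown is one common choice for random-effects pooling, not necessarily the one the authors used:

```python
# Toy random-effects meta-analysis. Each study contributes an effect size
# and its sampling variance; precise studies get more weight.
studies = [(0.60, 0.01), (0.05, 0.01), (0.45, 0.02), (0.02, 0.02), (0.35, 0.015)]

# Fixed-effect (inverse-variance) pooled estimate
w = [1.0 / v for _, v in studies]
d_fixed = sum(wi * d for wi, (d, _) in zip(w, studies)) / sum(w)

# Cochran's Q measures excess spread across studies; DerSimonian-Laird
# converts it into tau^2, an estimate of true between-study variance.
q = sum(wi * (d - d_fixed) ** 2 for wi, (d, _) in zip(w, studies))
df = len(studies) - 1
c = sum(w) - sum(wi * wi for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights add tau^2 to each study's own variance, so no
# single large study dominates when studies genuinely disagree.
w_re = [1.0 / (v + tau2) for _, v in studies]
d_random = sum(wi * d for wi, (d, _) in zip(w_re, studies)) / sum(w_re)

print(f"pooled effect (random effects): {d_random:.2f}, tau^2 = {tau2:.3f}")
```

When tau² comes out near zero, the random-effects estimate collapses to the fixed-effect one; when studies disagree, it widens the uncertainty instead of averaging the disagreement away.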

Gathering the evidence

The researchers searched major psychology and medical databases for experiments that compared how laypeople judged material with and without added neuroscience content. They limited the sample to nonexperts, since prior work shows that people with a relevant college degree or more tend to be immune to the effect. They also set aside studies involving actual neuroscience methods, philosophical essays, and any experiments that didn’t include a proper comparison condition.

Outcomes fell into a few categories: ratings of how satisfying, high-quality, or convincing an explanation seemed; agreement with the main claim; and, for courtroom studies, verdicts or sentence lengths. Where authors hadn’t reported enough numbers to calculate an effect size, Bennett emailed them to request the data. A handful of studies had to be dropped because authors didn’t respond or couldn’t supply the raw numbers.

A real effect, but a small one

Averaged across all 60 experiments, adding irrelevant neuroscience produced a small but statistically reliable bump in how favorably people responded to material. In the language the researchers use, the effect size was 0.25, which is the kind of nudge you’d barely notice in any individual instance but which is unlikely to be a fluke when pooled across thousands of participants.
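One way to build intuition for how small 0.25 is, assuming the figure is a standardized mean difference such as Cohen's d, is to convert it to a "common language" probability: the chance that a randomly chosen person from the neuroscience condition rates the material higher than a randomly chosen person from the control condition.

```python
import math

def prob_superiority(d: float) -> float:
    """Common-language effect size: P(random treated score > random control
    score) for a standardized mean difference d, assuming normally
    distributed scores with equal variances. Equals Phi(d / sqrt(2))."""
    return 0.5 * (1.0 + math.erf(d / 2.0))  # erf form of Phi(d / sqrt(2))

# d = 0.25 is only slightly better than the 50/50 coin flip of no effect
print(f"{prob_superiority(0.25):.2f}")
```

A value around 0.57 means that if you paired people at random, the person who saw the neuroscience version would rate the material higher only a little more often than half the time.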

The more interesting finding was how much the results varied from one study to the next. A standard measure of that variability suggested most of the differences across studies weren’t random noise but reflected real differences in what was being tested. This pushed the researchers to split their dataset into subgroups to figure out where the effect is strong, where it’s weak, and where it barely exists.
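The "standard measure" of study-to-study variability is typically the I² statistic, which expresses what share of the total spread reflects genuine between-study differences rather than sampling noise. A minimal sketch, with Q and df invented for illustration:

```python
# I^2 from Cochran's Q: the fraction of total variability across studies
# attributable to real differences rather than chance. The values of Q and
# df here are made up; in practice they come from the pooled analysis.
q, df = 20.0, 4

i_squared = max(0.0, (q - df) / q)
print(f"I^2 = {i_squared:.0%}")  # prints "I^2 = 80%"
```

High I² values are the statistical signal that pushed the researchers toward subgroup analyses rather than trusting a single average.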

Satisfaction is easy to sway. Belief is not.

The single clearest pattern involves what participants were asked to judge. When studies measured whether people found an explanation satisfying, high-quality, or easy to understand, adding neuroscience produced a noticeably stronger effect. When studies measured whether people actually agreed with the claim being made, or found it convincing, the effect shrank to almost nothing.

Bennett and McLaughlin interpret this as a split between gut reaction and considered judgment. Neuroscience language and imagery appear to change how people feel about material, giving them a warm sense that they’ve grasped something deep about the brain. But that feeling doesn’t reliably translate into being persuaded of a factual claim. Earlier work has shown that this inflated sense of understanding doesn’t come with any actual improvement in comprehension, something one group of researchers labeled “an illusion of explanatory depth.”

This distinction matters for interpreting everyday encounters with brain talk. A news story framed around a brain scan may feel more illuminating than the same story without one, even when readers don’t ultimately change their opinion on the underlying issue.

Text beats pictures

The second pattern involves the form of the neuroscience content. Studies that added neuroscience-flavored text to an explanation produced stronger effects than studies that added brain images. Combining both didn’t deliver the additive punch that some earlier researchers had predicted; in fact, the combination looked roughly similar to images alone.

This is something of a reversal of the original framing of the phenomenon, which leaned heavily on the visual appeal of brain scans. Several attempts to replicate the seminal 2008 brain-image study have failed, and the meta-analysis suggests the written neuroscience jargon may be doing more of the persuasive work than the pictures that often accompany it.

The courtroom is complicated

Courtroom studies, where mock jurors weigh brain-based evidence in criminal cases, showed the most tangled results. Sometimes neuroscience mitigated sentences; sometimes it didn’t change verdicts at all; occasionally it cut both ways within the same case.

The authors point to what other researchers have called a “double-edged sword.” Telling jurors a defendant has a brain abnormality may lower perceptions of moral responsibility while also raising worries that the person is dangerous and hard to rehabilitate. Depending on which concern dominates, the same evidence can produce opposite outcomes. Recent work suggests that whether neuroscience mitigates or aggravates punishment depends on whether jurors are thinking about incarceration as retribution or as public safety.

Within-subjects designs amplify the effect

The researchers also found that studies where each participant saw both neuroscience and non-neuroscience versions of the same material produced stronger effects than studies where each participant saw only one version. This echoes an earlier finding that brain images only nudged ratings when they appeared after a comparison stimulus that served as a reference point.

The practical meaning is that the seductive pull of neuroscience may be sharpest in situations where people can directly compare two versions of the same claim, one with brain content and one without. That’s less common in everyday media consumption, though it does map onto courtroom settings where opposing sides present competing evidence.

What it means for readers and marketers

For anyone trying to evaluate science writing, advertising, or expert testimony, the findings offer a specific cautionary note rather than a blanket dismissal of brain talk. Extra neuroscience content tends to make material feel more coherent and more understandable, regardless of whether it actually adds explanatory value. That feeling operates on subjective impressions more than on the conclusions people draw.

The researchers also note some limits on their analysis. There are signs of publication bias in the overall literature, meaning studies with null results may be sitting unpublished. The heterogeneity of the studies also means the single average effect size masks real differences between types of material and outcomes. And laboratory experiments using trivial scientific content may not capture how neuroscience messaging operates in high-stakes real-world settings, such as when brain claims align with someone’s political views or parenting anxieties, where prior work suggests the pull can be stronger or weaker depending on motivated reasoning.

Bennett and McLaughlin suggest that future research should move beyond rating tasks in the lab and examine whether consumers actually prefer “brain-based” products over alternatives, and whether neuroscience framing shapes health or political behavior in ways the existing literature hasn’t yet measured.



Science of Money is part of the PsyPost Media Inc. network.
