Nullius in Verba

Nullius in verba means “take no one’s word for it.”

It’s the motto of the Royal Society, a truly remarkable institution whose members contributed more than any others to the formation of the distinctive, and distinctively penetrating, mode of ascertaining knowledge that is the signature of science.

The Society’s motto—“take no one’s word for it!”; i.e., figure out what is true empirically, not on the basis of authority—is charming, even inspiring, but also utterly absurd.

“DON’T tell me about Newton and his Principia,” you say, “I’m going to do my own experiments to determine the Law of Gravitation.”

“Shut up already about Einstein! I’ll point my own telescope at the sun during the next solar eclipse, place my own atomic clocks inside of airplanes, and create my own GPS system to ‘see for myself’ what sense there is in this relativity business!”

“Fsssssss—I don’t want to hear anything about some Heisenberg’s uncertainty principle. Let me see if it is possible to determine the precise position and precise momentum of a particle simultaneously.”

After 500 years of this, you’ll be up to this week’s Nature, which will at that point be only 500 years out of date.

But, of course, if you “refuse to take anyone’s word for it,” it’s not just your knowledge of scientific discovery that will suffer. Indeed, you’ll likely be dead long before you figure out that the earth goes around the sun rather than vice versa.

If you think you know that antibiotics kill bacteria, say, or that smoking causes lung cancer because you have confirmed these things for yourself, then take my word for it, you don’t really get how science works. Or better still, take Popper’s word for it; many of his most entertaining essays were devoted to punching holes in popular sensory empiricism—the attitude that one has warrant for crediting only what one “sees” with one’s own eyes.

The amount of information it is useful for any individual to accept as true is gazillions of times larger than the amount she can herself establish as true by valid and reliable methods (even if she cheats and takes the Royal Society’s word for it that science’s methods for ascertaining what’s true are the only valid and reliable ones).

This point is true, moreover, not just for “ordinary members of the public.” It goes for scientists, too.

In 2011, three physicists won the Nobel Prize “for the discovery of the accelerating expansion of the Universe through observations of distant supernovae.” But the only reason they knew that what they (with the help of dozens of others who helped collect and analyze their data) were “observing” in their experiments even counted as evidence of the Universe expanding was that they accepted as true the scientific discoveries of countless previous scientists whose experiments they could never hope to replicate—indeed, whose understanding of why their experiments signified anything at all these three didn’t have time to acquire and thus simply took as given.

Scientists, like everyone else, are able to know what is known to science only by taking others’ words for it.  There’s no way around this. It is a consequence of our being individuals, each with his or her own separate brain.

What’s important, if one wants to know more than a pitiful amount, is not to avoid taking anyone’s word for it. It’s to be sure to “take it on the word” of  only those people who truly know what they are talking about.

Once this point is settled, we can see what made the early members of the Royal Society, along with various of their contemporaries on the Continent, so truly remarkable. They were not epistemic alchemists (although some of them, including Newton, were alchemists) who figured out some magical way for human beings to participate in collective knowledge without the mediation of trust and authority.

Rather their achievement was establishing that the way of knowing one should deem authoritative and worthy of trust is the empirical one distinctive of science and at odds with those characteristic of its many rivals, including divine revelation, philosophical rationalism, and one or another species of faux empiricism.

Instead of refusing to take anyone's word for it, the early members of the Royal Society retrained their faculties for recognizing "who knows what they are talking about" to discern those of their number whose insights had been corroborated by science’s signature way of knowing.

Indeed, as Steven Shapin has brilliantly chronicled, a critical resource in this retraining was the early scientists’ shared cultural identity.  Their comfortable envelopment in a set of common conventions helped them to recognize among their own number those of them who genuinely knew what they were talking about and who could be trusted—because of their good character—not to abuse the confidence reposed in them (usually; reliable instruments still have measurement error).

There’s no remotely plausible account of human rationality—of our ability to accumulate genuine knowledge about how the world works—that doesn’t treat as central individuals’ amazing capacity to reliably identify and put themselves in intimate contact with others who can transmit to them what is known collectively as a result of science.

Now we are ready to return to why I say cultural cognition is not a bias but actually an indispensable ingredient of our intelligence.

This is post no. 2 on the question “Is cultural cognition a bias?”, to which the answer is, “nope—it’s not even a heuristic; it’s an integral component of human rationality.”

Cultural cognition refers to the tendency of people to conform their perceptions of risk and other policy-consequential facts to those that predominate in groups central to their identities. It’s a dynamic that generates intense conflict on issues like climate change, the HPV vaccine, and gun control.

Those conflicts, I agree, aren’t good for our collective well-being. I believe it’s possible and desirable to design science communication strategies that help to counteract the contribution that cultural cognition makes to such disputes.

I’m sure I have, for expositional convenience, characterized cultural cognition as a “bias” in that context. But the truth is more complicated, and it’s important to see that—important, for one thing, because a view that treats cultural cognition as simply a bias is unlikely to appreciate what sorts of communication strategies are likely to offset the conditions that pit cultural cognition against enlightened self-government.

In part 1, I bashed the notion—captured in the Royal Society motto nullius in verba, “take no one’s word for it”—that scientific knowledge is inimical to, or even possible without, assent to authoritative certification of what’s known.

No one is in a position to corroborate through meaningful personal engagement with evidence more than a tiny fraction of the propositions about how the world works that are collectively known to be true. Or even a tiny fraction of the elements of collective knowledge that are absolutely essential for one to accept, whether one is a scientist trying to add increments to the repository of scientific insight, or an ordinary person just trying to live.

What’s distinctive of scientific knowledge is not that it dispenses with the need to “take it on the word of” those who know what they are talking about, but that it identifies as worthy of such deference only those who are relating knowledge acquired by the empirical methods distinctive of science.

But for collective knowledge (scientific and otherwise) to advance under these circumstances, it is necessary that people—of all varieties—be capable of reliably identifying who really does know what he or she is talking about.

People—of all varieties—are remarkably good at doing that. Put 100 people in a room and give them, say, a calculus problem to solve, and likely one will genuinely be able to solve it while four mistakenly believe they can. Let the people out 15 mins later, however, and it’s pretty likely that all 100 will know the answer. Not because the one who knew will have taught the other 99 how to do calculus. But because that’s the amount of time it will take the other 99 to figure out that she (and none of the other four) was the one who actually knew what she was talking about.

But obviously, this ability to recognize who knows what they are talking about is imperfect. Like any other faculty, too, it will work better or worse depending on whether it is being exercised in conditions that are congenial or uncongenial to its reliable functioning.

One condition that affects the quality of this ability is cultural affinity. People are likely to be better at “reading” people—at figuring out who really knows what about what—when they are interacting with others with whom they share values and related social understandings. They are, sadly, more likely to experience conflict with those whose values and understandings differ from theirs, a condition that will interfere with transmission of knowledge.

As I pointed out in the last post, cultural affinity was part of what enabled the 17th and early 18th Century intellectuals who founded the Royal Society to overturn the authority of the prevailing, nonempirical ways of knowing and to establish in their stead science’s way. Their shared values and understandings underwrote both their willingness to repose their trust in one another and (for the most part!) their disposition not to abuse that trust. They were thus able to pool, and thus efficiently build on and extend, the knowledge they derived through their common use of scientific modes of inquiry.

I don’t by any means think that people can’t learn from people who aren’t like them. Indeed, I’m convinced they can learn much more when they are able to reproduce within diverse groups the understandings and conventions that they routinely use inside more homogeneous ones to discern who knows what about what. But evidence suggests that the processes useful to accomplish this widening of the bonds of authoritative certification of truth are time consuming and effortful; people sensibly take the time and make the effort in various settings (in innovative workplaces, e.g., and in professions, which use training to endow their otherwise diverse members with shared habits of mind). But we should anticipate that the default source of "who knows what about what" will for most people most of the time be communities whose members share their basic outlooks.

The dynamics of cultural cognition are most convincingly explained, I believe, as specific manifestations of the general contribution that cultural affinity makes to the reliable, everyday exercise of the ability of individuals to discern what is collectively known. The scales we use to measure cultural worldviews likely overlap with a large range of more particular, local ties that systematically connect individuals to others with whom they are most comfortable and most adept at exercising their “who knows what they are talking about” capacities.

Normally, too, the preference of people to use this capacity within particular cultural affinity groups works just fine.

People in liberal democratic societies are culturally diverse; and so people of different values will understandably tend to acquire access to collective knowledge within a large number of discrete networks or systems of certification. But for the most part, those discrete cultural certification systems can be expected to converge on the best available information known to science. This has to be so; for no cultural group that consistently misled its members on information of such vital importance to their well-being could be expected to last very long!

The work we have done to show how cultural cognition can polarize people on risks and other policy-relevant facts involves pathological cases. Disputes over matters like climate change, nuclear power, the HPV vaccine, and the like are pathological both in the sense of being bad for people—they make it less likely that popularly accountable institutions will adopt policies informed by the best available information—and in the sense of being rare: the number of issues that admit of scientific investigation and that generate persistent divisions across the diverse networks of cultural certification of truth is tiny in relation to the number that reflect the convergence of those same networks.

An important aim of the science of science communication is to understand this pathology. CCP studies suggest that such pathologies arise in cases in which facts that admit of scientific investigation become entangled in antagonistic cultural meanings—a condition that creates pressures (incentives, really) for people selectively to seek out and credit information conditional on its supporting rather than undermining the position that predominates in their own group.

It is possible, I believe, to use scientific methods to identify when such entanglements are likely to occur, to structure procedures for averting such conditions, and to formulate strategies for treating the pathology of culturally antagonistic meanings when preventive medicine fails. Integrating such knowledge with the practice of science and science-informed policymaking, in my opinion, is vital to the well-being of liberal democratic societies.

But for the reasons that I’ve tried to suggest in the last two posts, this understanding of what the science of science communication can and should be used to do does not reflect the premise that cultural cognition is a bias. The discernment of “who knows what about what” that it enables is essential to the ability of our species to generate scientific knowledge and to the ability of individuals to participate in what is known to science.

Indeed, as I said at the outset, it is not correct even to describe cultural cognition as a heuristic. A heuristic is a mental “shortcut”—an alternative to a more effortful, and more intricate, mental operation that might well exceed the time and capacity of most people to exercise in most circumstances.

But there is no substitute for relying on the authority of those who know what they are talking about as a means of building and transmitting collective knowledge. Cultural cognition is no shortcut; it is an integral component in the machinery of human rationality.

Unsurprisingly, the faculties that we use in exercising this feature of our rationality can be compromised by influences that undermine their reliability. One of those influences is the binding of antagonistic cultural meanings to risk and other policy-relevant facts. But it makes about as much sense to treat the disorienting impact of antagonistic meanings as evidence that cultural cognition is a bias as it does to describe the toxicity of lead paint as evidence that human intelligence is a “bias.”

We need to use science to protect the science communication environment from toxins that disable us from using faculties integral to our rationality. An essential step in the advance of this science is to overcome simplistic pictures of what our rationality consists in. 

-Dan Kahan