Should I trust experts?
Experts are more often right - but also more often wrong. Better to trust near-experts. Even in the arts.
Dan Greco recently wrote a post addressing the question, “should I trust economists?” He primarily operationalizes it with another question: “where are the airplanes?” That is, he mainly concerns himself with whether trusting economists leads to more practically successful interventions in the world, the way that trusting physicists, mechanical engineers, etc. leads to airplanes.
As he notes, the successful applications of economics (primarily the design of auctions) are fewer than those of these other fields. But he also spends some time discussing the way that economists (particularly since the “credibility revolution”) often make testable predictions that enable us to see that their theories are more accurate than the alternatives. He concludes: “while the limitations I’ve discussed are real, I haven’t dwelled on the various counterintuitive phenomena that economics gets right; none of these successes are as impressive as airplanes, but they’re also nothing to sneeze at.”
I think something like his explanation applies to experts in nearly every field (how do I define a “field” or “experts”? I’ll get to that…), so from here on I’ll replace “should I trust economists?” with “should I trust experts in X?”, where “X” can stand for any field.
But I think more is worth saying. Although Dan is quite right that experts in every field know a lot of things that other people don’t, I think there’s another phenomenon that he leaves out (like most other people who address this question). In particular, although an expert is likely to answer a question with much more truth than anyone else, it is also true that a randomly chosen expert is likely to answer a question with much more falsehood than anyone else. In every field, there are controversies that few people outside the field know anything about, and different experts take different views on these controversies. Most of these views will be wrong, and so there is a kind of wrong answer that you will get from many experts that you won’t get from anyone else. (See these lists of controversies in economics - economics in particular might have more fake controversies than other fields, because non-economists often have an incentive to pretend to be experts. But, for instance, physicists are more likely than anyone else to give you incorrect answers about quantum gravity or cosmic inflation or dark matter, because few people other than physicists are likely to believe false theories about these things.) Even in a field like mathematics, where experts are unlikely to make any statement about matters of fact that haven’t yet been settled by formal proof, if you ask which methods are promising for addressing currently unresolved questions, the experts will disagree, and some will be more wrong than anyone outside the field could be.
And even without controversy, experts often give confidently wrong answers of a kind no one else can, because of their always-incomplete state of evidence. My favorite example of this sort is the fact that in the late 19th century, physicists like Lord Kelvin were able to use information about the temperature of the Earth and Sun, and the physics of radiative cooling, to “prove” that the Earth and Sun could be no more than a few tens of millions of years old, thus “refuting” Darwinian theory. (Kelvin would have been right if it weren’t for the excess heat generated by radioactive decay inside the Earth and fusion inside the Sun, but neither of these processes was discovered until decades later; a rough version of the energy-budget reasoning is sketched below.) But this sort of thing happens in every field. Consider the early claims by experts that masks weren’t relevant for protecting against covid, because most diseases are spread by solid particles or liquid droplets and not usually through the air - a claim that was itself an over-correction against the miasma theory, on which 19th century doctors were confident that diseases were spread only through the air, and not through solid or liquid particles.
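To make the style of reasoning concrete, here is a minimal back-of-the-envelope sketch (in Python, with modern constants) of the standard energy-budget argument for the Sun’s age, the Kelvin-Helmholtz timescale. It is only an illustration of the kind of calculation involved, under my own simplifications; Kelvin’s actual arguments, especially about the cooling of the Earth, were more elaborate.

```python
# A rough illustration (not Kelvin's own calculation): if the Sun shone only
# by converting gravitational energy into radiation, how long could it last?
# The available energy is roughly G*M^2/R, spent at the rate of the Sun's
# luminosity L. Constants are modern values.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
R_SUN = 6.957e8        # solar radius, m
L_SUN = 3.828e26       # solar luminosity, W
SECONDS_PER_YEAR = 3.156e7

t_kh_seconds = G * M_SUN**2 / (R_SUN * L_SUN)
t_kh_years = t_kh_seconds / SECONDS_PER_YEAR

print(f"Kelvin-Helmholtz timescale: roughly {t_kh_years / 1e6:.0f} million years")
# Roughly 30 million years - far short of what Darwinian evolution requires.
# The missing ingredient, unknown at the time, was nuclear fusion.
```

The point is not the particular numbers but that the inference was perfectly competent given the physics then known; it was the physics itself that was incomplete.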
So I start with the observation that experts can lead you more right than anyone else, but can also lead you more wrong than anyone else, and return to the question, “should I trust experts in X?”
The ethics of belief
Looking literally at the question, the “should” makes it a question in ethics, and “trust” is about belief, so this is a question in the ethics of belief. There are many ways to think about what makes beliefs right or wrong. One Enlightenment idea is that you should always think for yourself, because doing so is the appropriate use of your mental faculties. Versions of this idea can be found in Descartes and Kant, but also in other philosophers.
In his 1877 paper, “The Ethics of Belief”, the mathematician and philosopher William Kingdon Clifford comes to a conclusion like this by thinking not about our faculty of reason, but rather about the purpose of belief. He suggests that the point of belief is to lead to action, that this action will be more successful if the belief is true, and that the belief is more likely to be true if it is based on evidence. Even for beliefs that aren’t practically significant, getting in the habit of basing them on evidence makes you better at handling the ones that are, and helps shape the habits of others as well. He comes to the conclusion that “it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence.”
He never comes to a criterion of what it takes for evidence to actually be sufficient, but much of the paper is dedicated to arguing that both scientific and religious authorities might have good intentions, but sometimes make claims going beyond what anyone could possibly have evidence for (for instance, claims that a particular vision was truly an angel rather than a hallucination, or claims that fundamental particles have existed eternally and unchanging). For Clifford, basing our beliefs on evidence means thinking for ourselves, even if only to evaluate what sort of evidence a supposed expert might be able to have, and thus when to reject what they have to say.
Evidence vs reason to believe
Clifford’s criterion works if you have a clear sense of what sorts of evidence are possible, and can identify when someone is speaking beyond any possible evidence. But as John Hardwig points out in two important papers (“Epistemic Dependence” from 1984, and “The Role of Trust in Knowledge” from 1991), just what sorts of evidence are possible, and what it is possible to know on the basis of that evidence, is often precisely what non-experts don’t know. He argues that, in fact, we often should believe things without having any of the relevant evidence, and should refuse to think for ourselves - and that this applies not just to the layperson but even to the expert.
Hardwig argues for all this by distinguishing “evidence” from a more general concept of “reasons to believe”. On his view, “evidence” is whatever objectively serves to establish the truth of a claim, while “reason to believe” is what a particular person has. Importantly, for Hardwig, testimony from experts can constitute a reason for you to believe, but doesn’t usually constitute evidence - if we want to total up all the evidence for and against a claim, we might include the results of experiments and reasoning that various experts have done, but we wouldn’t include the number of times the experts told other people about it. Furthermore, if someone misunderstood what an expert said to them, they might have reason to believe something for which there is no evidence.
But Hardwig still preserves an important role for evidence. “Reason to believe” can come from having first-order evidence directly for a claim oneself, but it can also come from having second-order evidence that someone else has first-order evidence, or even third-order evidence that someone else has second-order evidence that someone else has first-order evidence, or further iterations. This is because most of us only have the skills to understand some types of evidence, and not others. Most of us can’t get direct access to the best evidence for a claim - but we can sometimes get indirect access, if we use the evidence that we can understand about who has the best evidence (or who has the best second-order evidence about who has the best first-order evidence, or whatever), and then just trust them. And as Hardwig notes, the rise of massive collaborative projects in science (“big science”) shows that even people we might think of as experts usually need to rely on others for much of the knowledge relevant to their expertise. Insisting on thinking for yourself means cutting yourself off from much of the best evidence, even if you are an expert on some of the relevant topics yourself.
The value of diversity
But Hardwig says little more than Clifford about what it is for something to be good evidence, or what it means for some evidence to be better than other evidence. Both presume that getting access to better evidence makes you more likely to get at the truth, that some things are objectively better evidence than others, and that there are objective facts about how evidence should be interpreted. Clifford thinks there are some things we can know about what evidence might support, but Hardwig thinks that some people are better able than others to access these objective facts for some types of evidence.
However, I’m skeptical that there is such a fact about what the evidence objectively supports. And more importantly, none of this can explain why experts so often go more wrong than anyone else, and why we should trust the experts if they can go wrong in this sort of way.
There are various models showing why bounded agents who follow objective evidence as they get it might reasonably end up at odds with one another. But as I said, I’m skeptical about there being such objective evidence, and in any case I think it’s better to consider the question not by thinking about individual experts and their individual beliefs, but rather by thinking about the community of experts - and in everything I’m willing to call a “field”, there really is a community of experts, and not just a bunch of individuals.
A classic 1990 article by Philip Kitcher argues that it is important for the scientific community that experts believe different things, so that all hypotheses get adequately investigated. There need to be professional incentives to believe things that are wrong, in order for the field to adequately investigate hypotheses that currently seem unpromising, but may later turn out to be right.
A more recent classic paper by Kevin Zollman demonstrates this with a formal model, showing that the community of researchers as a whole is more likely to converge to the truth if there is some mechanism ensuring diversity (whether it involves researchers ignoring different subsets of the evidence, or researchers starting out with different biases - I suspect it would also work if we used some of the models from above where imperfectly rational individuals get polarized by looking at the same evidence) than if they get stuck in a premature consensus.
In Zollman’s model, the specific mechanism is that researchers only generate evidence regarding hypotheses that they believe, so it is important that some researchers believe different theories, in order that evidence relevant to all of them continue getting generated. (William James’s own pragmatist response to Clifford makes the same point less formally, pointing to “the sagacity that Herbert Spencer and August Weismann now display in their famous controversy over the inheritance of acquired characteristics”, and arguing that a researcher who didn’t believe one side of a controversy would be unlikely to generate any relevant evidence.)
My former grad student Sean Conte has generalized this result (in his dissertation, and in further not-yet-published work), showing that even if everyone always generates evidence regarding every theory, the community as a whole is more reliable if it is more diverse.
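To give a concrete feel for the mechanism in these models, here is a minimal simulation sketch in Python, in the spirit of Zollman-style bandit models of inquiry. The parameters, the updating rule, and the assumption that everyone sees everyone else’s results are my own simplifications for illustration; this is not Zollman’s (or Conte’s) actual model.

```python
import random

# A toy, Zollman-style bandit model of a research community (a simplification
# for illustration, not Zollman's actual model). Two rival theories are arms
# of a bandit; theory B is in fact slightly better. Each researcher runs
# experiments only on the theory they currently favor, everyone sees
# everyone's results, and beliefs are updated by accumulating success and
# failure counts (beta-binomial style).

P_TRUE = {"A": 0.50, "B": 0.55}   # true success rates; B is the better theory
TRIALS_PER_ROUND = 10
ROUNDS = 200

def run_community(n_agents, prior_spread, seed):
    """Return True if every agent ends up favoring the better theory, B."""
    rng = random.Random(seed)
    # Each agent's opinion about each theory is a pair of pseudo-counts
    # [successes, failures]; prior_spread controls how different the starting
    # opinions are across agents.
    agents = [{arm: [1 + rng.uniform(0, prior_spread),
                     1 + rng.uniform(0, prior_spread)]
               for arm in ("A", "B")}
              for _ in range(n_agents)]

    for _ in range(ROUNDS):
        results = []
        for priors in agents:
            # Experiment only on the currently favored theory.
            arm = max(("A", "B"), key=lambda a: priors[a][0] / sum(priors[a]))
            wins = sum(rng.random() < P_TRUE[arm] for _ in range(TRIALS_PER_ROUND))
            results.append((arm, wins, TRIALS_PER_ROUND - wins))
        # Everyone sees everyone's results (a complete network).
        for priors in agents:
            for arm, wins, losses in results:
                priors[arm][0] += wins
                priors[arm][1] += losses

    return all(p["B"][0] / sum(p["B"]) > p["A"][0] / sum(p["A"]) for p in agents)

def success_rate(prior_spread, runs=100):
    """Fraction of simulated communities that settle on the better theory."""
    return sum(run_community(10, prior_spread, seed) for seed in range(runs)) / runs

# Compare a community with nearly uniform starting opinions to one with very
# diverse (some strongly biased) starting opinions.
print("homogeneous starting opinions:", success_rate(prior_spread=1))
print("diverse starting opinions:    ", success_rate(prior_spread=20))
```

The pattern this kind of model is meant to exhibit: when everyone starts with similar opinions, an early unlucky run of results can push the whole community onto the worse theory, which then never gets tested again; with more diverse starting opinions, someone keeps generating evidence about the neglected theory long enough for the community to recover.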
Who to believe?
Given that a well-functioning field will contain researchers with a diversity of beliefs, the question arises of who to trust, if you want to get the most accurate answers. Even though the experts themselves have the best access to the best evidence, they also have a professional incentive to get things wrong, in order for the field to function effectively.
I suspect that experts in nearby fields are likely the right people to ask, if you want to maximize accuracy. Because they are working nearby, they have some incentive to keep up with the state of the arguments for and against the controversial hypotheses. But because they aren’t in the field themselves, they have no incentive to adopt a false theory.
How far away is a “nearby” field? That seems like a hard question to answer, very much in line with the difficulty for Clifford of identifying good evidence, or for Hardwig of identifying the right experts. (Perhaps this is even the same question as the one for Hardwig - though now “expert” means something slightly different: the person who is most likely to have the most accurate beliefs, rather than the person who has access to the most specialized evidence.) But my guess is that you want someone working as close as possible to the question you are trying to answer, but not specifically on that question.
Philip Tetlock has made similar observations in his work on expert predictions. When he asked political scientists to make predictions about which countries would have coups and which would have successful elections, and about which countries would have economic growth and which would have stagnation, and various other questions, he noticed that no one was very good at predicting. Conservatives and progressives got things just as wrong, as did realists and idealists. People got things more wrong about the region of the world that they specialized in. But he did notice one dimension of personality type that made a significant difference.
Tetlock referred back to a classic essay by Isaiah Berlin, which itself refers back to a parable by Archilochus - “the fox knows many things, but the hedgehog knows one big thing”. The idea is that some experts, the “hedgehogs”, tried to fit everything into one big theory (the way the animal itself rolls up into a ball and uses its spines for protection, no matter what threat it faces), while others, the “foxes”, were eclectic and had no single strategy for answering questions (the way the animal itself uses different tricks to hunt different prey). Tetlock noticed that the foxes were far more accurate than the hedgehogs. (Thanks to Doug Bates for catching the typo!)
Many people since then, including Nate Silver, have used this as a reason to be more fox-like, at least if you want your predictions to be more accurate. But I think we should think of this more equivocally - the foxes are probably more accurate themselves, but the hedgehogs do more to contribute to the intellectual diversity of their field, and provide the different interpretations that foxes need to make their improvements.
Experts themselves should be hedgehogs. You should trust them if you want to have a detailed and interesting view, and have access to the very best evidence. But if you want your beliefs to be accurate, you should find the foxes, who might themselves be the hedgehogs of nearby fields, with enough distance to be foxlike on the question you are asking.
Beyond accuracy
I promised a bit of discussion of what a “field” is. So far, I basically mean it to be any subject matter for which there is a community dedicated to collectively getting at the truth. The members of this community each form different views on the subject matter of the field, paying attention to different evidence, and giving each other feedback on their work. They have significant disagreements, because that is important to help the field progress, though they also share many methodological similarities, which bind them together as a community.
Through practice, they learn to identify new kinds of evidence that those of us outside the field don’t know how to interpret. It is important that they are primarily subject to peer review, rather than the review of outsiders. (Thi Nguyen argues that requiring them to use evidence that is interpretable by outsiders will cause them to lose their expertise.) But because humans are clever and creative, and they are aiming at a common goal, they will get better at finding ways of getting themselves collectively closer to the truth, even though they disagree at every point.
But I think a lot of this generalizes beyond the kinds of “fields” that aim at the truth. I think this is true in communities of the arts, or crafts, or fashion, or game design, or puzzle construction. If they create these things for each other, and are subject primarily to peer review, rather than creating for the general public, they will develop their own internal standards that those of us outside the field don’t necessarily understand. The specialists within the field will likely embrace all kinds of weirdness that those of us outside the field have no appreciation of. But the things that they all agree on, and particularly the things that appeal to experts in nearby fields, will often be the creations that we eventually find most rewarding.

Yeah, good essay - agree on foxes versus hedgehogs. I’ve made the exact same point before: while a fox may be individually more likely to be correct, a hedgehog, by developing specific theories and approaches, might be making a bigger contribution to society as a whole being correct.
One can use an analogy from the literature on index funds and think of those who passively predict on the basis of existing consensus as, in some sense, engaged in epistemic free-riding. My view is that “double-booking” - having two sets of beliefs, a personal set and a consensus-informed set, with different uses on different occasions - is necessary.
https://philosophybear.substack.com/p/rationalism-and-social-rationalism
As a layperson trying to operationalize this, could you help with a handful of concrete examples of “find adjacent field → determine some consensus view that practitioners hold about your original field of interest”? Say: 1) Is it possible for a person/team/algorithm to predict the value of a stock of a particular company on a future date, and if so, what is it going to be? 2) Is it the case that accepting a particular faith tradition is going to guarantee salvation? 3) Should I use phonics or whole language to teach my child reading? Addendum: “any subject matter for which there is a community dedicated to collectively getting at the truth” - supposedly we have a range from mathematics to astrology, say, to go from real to fake disciplines. Is there a meta-framework for assessing fields themselves?