Saturday, April 11, 2015

On the question of trusting experts

Over the years, journalist Chris Mooney has made a name for himself as a chronicler of what he has called The Republican War on Science: the numerous battles being fought across America's political landscape over issues like global warming, pollution and regulation.  As time has gone by, Mooney has also started to draw on social psychological and brain imaging research on political bias: how people interpret scientific findings and facts in the light of their ideological convictions, or as Mooney's article for Mother Jones was titled, "The Science of Why We Don't Believe Science: How our brains fool us on climate, creationism, and the vaccine-autism link."  From an STS perspective, these findings, even though couched in the slightly problematic scientistic idiom of social psychology, make perfect sense: they suggest that "data" is always interpreted in the light of previously held beliefs; that facts and values are not easily separable in practice. (Mooney's second book is titled "The Republican Brain," which does sound problematic.  But since I haven't read it, I'm not going to comment on it.  See this discussion between him and Mother Jones' Kevin Drum: here, here and here.)

In a new article in the Washington Post, Mooney reports on a recent experiment by social psychologist Dan Kahan to argue that you should, yes, trust experts more than ordinary people.  Kahan and his collaborators asked the subjects in their pool (judges, lawyers, lawyers-in-training, and laypeople, statistically representative of Americans) to decide whether a given law applied to a hypothetical incident; the question was: would they apply the rules of statutory interpretation correctly?  First, subjects were informed about a law that bans littering in a wildlife preserve.  Next they were told that a group of people had left litter behind, in this case, reusable water containers.  But there was a catch: some were told that these were left behind by immigration workers helping illegal immigrants cross the US-Mexico border safely.  Others were told that the litter belonged to a construction crew building a border fence.  All were polled to determine their political and ideological affiliations.  Predictably, depending on their ideological beliefs, people came down on different sides of the issue: Republicans tended to be more forgiving of the construction workers, and so on.  What was different was that judges and trained lawyers tended, more than laypeople, to avoid this bias.  They interpreted the law correctly (the correct answer here was that the incident didn't constitute littering because the water bottles were reusable) despite their ideological convictions.  Well, so far so good.  As an anthropologist, I interpret the experiment to be saying that lawyers are subjected to special institutional training, unlike the rest of us, and that this habitus lets them reach the "correct" result far more often than we do.  Experts are different from the rest of us, in some way.

But what's interesting is the conclusion that Mooney draws from this experiment: that while experts are biased, they are less biased than the rest of us, and that therefore experts should be trusted more often than not.  Well.  Scientific American's John Horgan has a pragmatic take on this: it leaves open the question of which experts to trust, and scientists, like other experts, have been known to be wrong many, many times.  To trust experts simply because they are experts seems, well, against the spirit of a democratic society.  (Another Horgan response to Mooney here.)

But I think there's something here about the particular result that Mooney is using to make his point.  Something that I think can help to show that yes, experts matter, but no, that doesn't mean there's a blanket case for trusting experts more than others.  Lawyers and judges do come to different conclusions than the rest of us when it comes to statutory interpretation.  But there is one huge elephant in the room: the US Supreme Court, at this very moment, is considering King v. Burwell, a challenge to the Affordable Care Act that hinges precisely on questions of statutory interpretation.  How do you interpret a law that says that the federal government will provide subsidies to insurance exchanges created by "States"?  Does "States" mean the states that constitute the United States, or does it mean the state in the abstract, whether that's the federal government or the states?  How difficult can that be?  It seemed very clear in the long battle over the ACA that the subsidies were meant for everyone.  But no, the question was contentious enough that courts disagreed with each other and the Supreme Court took it up.  With a good chance that they might rule in a way that destroys the very foundations of the Affordable Care Act.

How might one reconcile the findings of the Kahan study with what's happening at the Supreme Court?  The Supreme Court justices are certainly experts: elite, well-trained, and at the top of their respective games.  And yet, here they are, right at the center of a storm over what journalist David Leonhardt has called the "federal government's biggest attack on inequality."  I think there's a way.  Experts are conditioned to think in certain ways, by virtue of their institutional training and practice, and when the stakes are fairly low, they do.  But once the stakes are high enough, things change.  What might seem like a fairly regular problem in ordinary times, a mere question of technicality, may not look like one in times of crisis.  At this point, regular expert-thinking breaks down and things become a little more contested.

And we do live in a polarized time.  As political scientists have shown time and time again, the polity of the United States experienced a realignment after the Civil Rights movement.  The two major parties had substantial overlaps before, but now they don't.  They cater to entirely different constituencies: the Republicans being the party of managers, evangelicals and the white working class, the Democrats being the party of organized labor, affluent professionals and minorities.  Political polarization has meant that even institutions that are supposed to be non-political (but really have never been so) start to look more and more political, because there are now basic disagreements over things that may once have seemed simple and technical.  This explains the spate of decisions from the Supreme Court in which the conservatives and liberals on the bench have split neatly along ideological lines.

But does that mean that judges are just politicians in robes?  (Which is the thesis that Dan Kahan and others set out to debunk.)  Not really.  The US Supreme Court actually resolves many, many cases with fairly clear majorities; more than a third of them through unanimous decisions.  These cases hardly ever make it into the public eye, and they involve, to us at least, what seem like arcane questions of regulation and jurisdiction.  Another way to interpret this is to say that these cases are "technical" because they are not in the public eye; no great "stakes" attach to these decisions, except for the parties in question.  When stakes are high -- Obamacare, campaign finance, abortion, gay marriage -- the Supreme Court, just like the rest of the country, is hopelessly polarized.  And a good thing too, because fundamental crises in values are best addressed through Politics (with a capital P) rather than left to bodies of experts.

Does this sound at all familiar to you?  STS has come a long way since The Structure of Scientific Revolutions, but this is not unlike what Kuhn calls a time of crisis.  Scientists and others have (often in self-serving ways) taken the message of the book to be that science moves in cycles of innovation: a time of normal science, then a time of revolutionary science (ergo, starting a new paradigm is the best way to do science).  But the point of the book that's usually missed is that the crises Kuhn talks about (which happen through a buildup of anomalies) are organizational crises; at such times, fundamentally taken-for-granted understandings of what is right and wrong break down.  New taken-for-granted understandings emerge, but they emerge alongside a different organizational order.

Social psychological experiments on political bias in the "public understanding of science" need to be understood not as grand truths about how people interpret, but as historically contingent findings.  Yes, judges will vote more "correctly" than laypeople, but a toy case presented as part of a social psychological study is not the same as an actual case.  Real cases have audiences, as do the judges and the lawyers.  I remember when the first challenge to Obamacare was floated, many liberals (including me) found the question of the constitutionality of the individual mandate ridiculous.  Of course the federal government could mandate that everyone purchase health insurance; that's what governments do!  (Not to mention that it provided subsidies to those who couldn't afford it, so it was hardly unjust.)  But the case just barely squeaked through the Supreme Court in our favor, and it could well have gone the other way.  King v. Burwell is, if anything, an even sillier case, but no one is underestimating it anymore.

It seems like social psychological studies of "bias" might be doing in this day and age what the social studies of scientific expertise did many years ago.  Although Mooney seems to misunderstand the point of science studies.  These studies weren't meant to show that experts are "biased."  They were meant to show that expert practices and discourse are designed to construct certain questions as "technical."  This is not necessarily a bad thing, but it does, at certain points, drown out other voices who disagree with the experts' conclusions (again, this is true of most political choices).  What is more, once experts are framed as objective and arguing only from facts rather than values, opposing voices, who have no recourse to the language of facts, get delegitimized even further.  STS's recommendation was not that you need to trust experts more or less, but that questions about competing values needed to be brought to the fore in public debates involving science and technology (along with technical expertise, of course).  And while scientific and legal experts work in different institutional ecologies, all experts work within institutional ecologies, which means their work is shaped by their collective understandings of what is technical and what is not.  The solution is not to trust experts more but to find better ways to debate differences in fundamental values, while still using what we can from experts.

Wednesday, April 8, 2015

New blog-posts around the web: Crowdsourcing and Alan Turing

This blog has been quiet for a while, but I had two posts at the CASTAC blog in the past two months.

The first one, titled Crowdsourcing the Expert, points out that computer scientists have now turned their attention to more sophisticated forms of crowdsourcing: not just crowds of uniform, homogeneous click-workers, but also crowds of experts.  The crowdsourcing platform is now seen as a manager, not just for the unskilled worker, but also for the creative classes.  And what about the expertise of computer scientists themselves, which is left fairly undefined?  Anyway, read the whole thing if it piques your interest.

The second one is about Alan Turing.  I summarize some recent articles on Alan Turing and computer science published by historians Thomas Haigh and Edgar Daylight, in which they suggest that some of the recent commemorations of Alan Turing are not quite historically accurate.  The commemorations are fascinating nonetheless, because they show us how computer science, as a discipline, was constituted.