Wednesday, April 20, 2011

Yes, yes, but the social sciences have been here before ...

Chris Mooney's latest piece in Mother Jones, nicely titled "The Science of Why We Don't Believe in Science," is pretty good and refers to some interesting studies I hadn't heard of.  The article's main point is that modern neuroscience and psychology have shown that our emotions color our capacity to reason, even when we are looking at scientific evidence.

But may I just point out that sociology reached this conclusion a century ago?  You don't really need neuroscience or fancy brain mechanisms to understand this; looking closely at social practices will get us there faster.  I don't have any bone to pick with the article -- if neuroscience is what the public needs in order to understand that there is no such thing as pure, unadulterated rationality, then I'll take that any day.

But there are still a couple of points that I'd like to make -- because they are implicit in Mooney's article and in the arguments of some of my friends as well.

It's nice that psychologists and neuroscientists think that values color the way we think of facts.  The problem is that even after knowing this, we still like to think of "facts" and "values" as useful terms.  Even more problematic is that we continue to think that people change their minds because of arguments.  This, to me, is almost entirely wrong, and yet it remains the guiding assumption of Mooney's piece.  No one ever became an atheist because someone presented him with an irrefutable proof of God's non-existence (which is why I find the New Atheists a bit boring).  The secularization of Europe did not happen because of Voltaire's diatribes against God -- it happened because a Church-State separation was put into place after the bloody wars of religion that people were so tired of.  This separation, the rise of industrial capitalism, and the separation of spheres that forms such a big part of the classical liberal political system -- all of these were instrumental in the rise of secularism (and Voltaire must be understood as a part of this current, rather than as someone who made arguments that presented certain "facts" to the reader).

When I say this to my friends, the answer is always: "But that's not true.  We did change our minds because of someone's arguments."  Or: "That's not true.  I know plenty of people who became atheists after reading Dawkins."  Now, there's no way I can offer a mathematical proof of what I am saying.  My point is simply that the so-called argument that changed someone's mind was, to use a tennis metaphor, the last point of a match.  Of course the winner wins the last point -- but it's even more crucial to know the events that led up to it.  If we want to know why someone changes his mind about something important, we need to look at the wider narrative of practices and that person's history, rather than just at some fact that convinced him (even if he himself attributes his change of beliefs to the presentation of certain facts).

Monday, April 18, 2011

A question about Hawk-Eye statistics: who owns them?

I've been working on a course project that looks at various aspects of the Hawk-Eye system used in tennis for adjudicating line calls.  For those who don't follow tennis: Hawk-Eye is a computer vision-based system that infers the trajectory of the ball on the tennis court and calculates where it hit the ground (inside, outside, on the line, etc.).  The idea is to eliminate the human sources of error in calling the ball "in" or "out" -- errors that range from "the ball is too fast for the human eye" to "the linesman hates this player."
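(To make the idea concrete, here is a toy sketch of what such a line call might look like in code.  This is emphatically not Hawk-Eye's actual method -- their algorithms are proprietary -- just an illustration of the basic principle, assuming we already have triangulated 3D ball positions from the cameras.  The function names and court constants below are my own.)

    # A toy sketch of a Hawk-Eye-style line call, NOT the real system.
    # Assumes the cameras' views have already been triangulated into 3D
    # ball positions (an (N, 3) array of x, y, z in metres) at known times.
    import numpy as np

    def estimate_bounce(times, positions):
        """Fit x(t), y(t), z(t) as quadratics; return (x, y) where z = 0."""
        coeffs = [np.polyfit(times, positions[:, i], deg=2) for i in range(3)]
        # Roots of the quadratic z(t) = 0; take the first real root after
        # the last observation -- that is the (extrapolated) bounce time.
        roots = np.roots(coeffs[2])
        t_bounce = min(r.real for r in roots
                       if abs(r.imag) < 1e-9 and r.real > times[-1])
        return np.polyval(coeffs[0], t_bounce), np.polyval(coeffs[1], t_bounce)

    def is_in(x, y, half_length=11.885, half_width=4.115, ball_radius=0.033):
        """Call the ball 'in' if any part of it touches the singles court."""
        return (abs(x) <= half_length + ball_radius
                and abs(y) <= half_width + ball_radius)

A real system would, of course, also have to model the ball's compression and skid during the bounce, camera calibration error, and so on; the sketch above only shows why a fitted trajectory gives you a bounce point at all.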

But there's one place where I'm getting stuck.  The Hawk-Eye system, once deployed on a court, generates a bunch of statistics about ball placement.  Who owns these statistics?  The tournament organizers, who have presumably purchased the Hawk-Eye system?  The television stations, like Tennis Channel, that use the Hawk-Eye-generated statistics (and the awesome visualizations) in their broadcasts?  Or is it some combination of the two?  And if someone wants to use the Hawk-Eye statistics for coaching or strategic purposes (say, for Andy Roddick to figure out how to beat Federer in their next match), how do they get them?

Any help would be much appreciated.  If you have any suggestions, answers or tips (including books or links I could look up), please use the comments.  Thanks!

Sunday, April 10, 2011

The Republican War on Science and David Bloor's Symmetry Principle

In Knowledge and Social Imagery, his manifesto for the Strong Program in the sociology of science, David Bloor lays out his four principles for the discipline of Science Studies.  One of these is the Symmetry principle which he expresses as follows:
[The Strong Program] would be symmetrical in its style of explanation.  The same types of cause would explain, say, true and false beliefs.
There is a certain aesthetic reasonableness to this principle that I like very much: after all, why should there be different explanations for true and false beliefs?  The principle is intended to oppose the traditional conception of scientific knowledge: that true beliefs need no explanation, but false beliefs do -- false beliefs being explained by distorting factors like personal beliefs, commitments, and ideology.

But I've always found it hard to explain the utility of the symmetry principle to people outside the field.  "What's the use of it?" is usually the question.  As a principle for understanding any kind of knowledge (including scientific knowledge), the symmetry principle has always seemed to me an indispensable tool.  What it could be used for outside of the sociology of knowledge, I couldn't really say.

Well, until today, that is.

I've been reading Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming by Naomi Oreskes and Erik M. Conway.  The book is a meticulously researched piece of political journalism.  The story is galvanizing, if a little wearying in its repetitiveness: how a handful of scientists, financed by corporations, helped create doubt about the scientific consensus on topics ranging from the risks of smoking and acid rain to the ozone hole and global warming, leading to considerable delay in the enactment of regulatory policies (and, for global warming, no policies at all).  It's astonishing how the same scientists' names keep popping up in every debate: these guys really were deep-pocketed merchants of doubt.  They had a clear objective, which they shared with the rest of the American conservative movement: to oppose any possible government regulation of corporations and to dismantle the regulations that already existed.

So far so good.  Unfortunately, despite all the wonderful data the authors have combed through, the story they tell is frustratingly traditional and asymmetric.  It's true that the road to the regulation of tobacco (and now, it seems, global warming) was long, arduous, and disturbing.  But the road to regulation for acid rain and the ozone hole was arguably much shorter.  We were able to regulate emissions of sulfur and CFCs, despite the doubts created by the right-wing machine, using both straightforward bans and cap-and-trade mechanisms.  Why were regulators quick to respond in these cases but not in the others?  The authors, it seems to me, don't think the question is especially relevant.  For instance, on page 124, they write:
The combined results of the Ozone Trends Panel and the field expeditions caused the Montreal Protocol to be renegotiated. The results also convinced the industry that their products really were doing harm, and opposition began to fade. CFCs would now be regulated based on what had already happened, not on what might happen in the future. Because the chemicals had lifetimes measured in decades, there was no longer any doubt that more damage would happen.  [My emphasis]
But didn't they spend the previous three chapters describing how industry leaders almost never accept scientific findings that go against their own interests (e.g., Big Tobacco on the risks of smoking)?  So why was the industry convinced in this case?  It seems to me that the authors don't really care.  When scientific findings lead to the appropriate regulations, it's because they were true.  When they don't, it's because of the right-wing doubting machine and its near-fanatical free-market ideology.

This is where the symmetry principle would have been useful.  If we assume that there is one process that leads from scientific findings to the appropriate regulations, then the same process holds irrespective of whether the regulation was enacted or not.  (It's not as if the doubting Thomases didn't beat their drums during the ozone hole controversy; it's just that they were not successful in blocking regulation.)  So knowing what we did right in the ozone hole and acid rain cases will arguably be important if we want to enact global warming regulation.

I don't mean to suggest that if the authors had treated these cases symmetrically, we would know exactly what to do to enact emissions reductions in the US.  No.  It's possible that the difference is just that the right-wing machine threw less money at the problems where we were able to enact regulation.  Or maybe, because the consequences of acid rain and ozone depletion were so close to home (skin cancer, etc.) whereas the consequences of global warming are strikingly diffuse (what exactly does it mean for average temperatures to rise by 2 degrees?), the American public was simply more supportive of regulation in those cases.  But whatever the reason, it would be useful to treat the cases of successful regulation and delayed (or impossible) regulation symmetrically.

The symmetry principle is often derided for its relativism towards science.  Here is one case where it could be used (albeit in a political-economic analysis) for science, and not against it.

[I don't mean to knock the book: it's very detailed and a rich source of data for anyone who wants to understand the political economy of scientific findings and their relation to regulation.  I highly recommend it.]

Saturday, April 2, 2011

Why did the prestige of science and engineering decline in the US?

I just finished reading The Great Stagnation a couple of days ago and enjoyed it very much.  There's a lot there to think about -- especially about technological change -- even for an STS-er.  Plus, the book is only about 20,000 words -- go for it!

In the chapter "Can we fix things?" Cowen offers the following policy prescription: "Raise the social status of scientists."  And by social status he seems to mean something like: "make doing science something more young people aspire to."  (And remember, this book is specifically about the United States.)

I think that is exactly right.  But I'm curious: does anyone have any idea why the social status of science (and, I presume, engineering and technology) declined in the US?  And when it started to decline?  I am getting particularly interested in this question.  Do you know of any books or articles -- or have any hypotheses of your own -- that bear on it?

Please leave your response in the comments.