Sunday, July 27, 2014

Science vs. Politics: A pragmatic argument for why this distinction doesn't work

[X-posted on the HASTS blog.]

Recently, I talked to a doctor and public health professional about the relationship between science and policy; he told me, in a vivid metaphor, how things work, and should work, in the regulatory process.  The science produces the facts, which then get funneled through our values by the process of politics.  What comes out of this machine, he said, are policies.

It was quite a beguiling vision, but as an STS person, I couldn't help asking: did he really believe in it? Yes, he said.  I pressed on.  How, I asked, would he explain the controversy over global warming? Why was it difficult to implement policy when the scientists had a decent agreement over the facts?  His answer was that it was Fox News, fed by the big bad industry, which had fooled certain people into not believing the scientists.  I asked if it might not be more useful to wonder whether this disagreement over what to do about climate change (or about whether anthropogenic climate change even exists) might be an indication of something deeper: perhaps a reflection on the particular ways in which American society is now polarized rather than about Fox News brainwashing susceptible viewers.  He didn't think so, he said.  (He objected strenuously to my use of the word "brainwashing"; I took it back, but I maintain that it was an accurate descriptor of what he was saying.) I asked at the end what he thought should be done about all of this.  He said it was a long-term project; but it began with education; scientific literacy had to begin at a very early age.  Only then would people stop listening to Fox News. At that point, I gave up.

I admit that there is something really alluring about this picture of a science that produces facts which are then funneled through our values by the process of politics, all of which combines to produce rational public policy.  Even if we admit that this isn't really how it works in practice, perhaps this is how it should work.

But even holding on to this vision as a normative ideal may not be in our best interests.  As Sheila Jasanoff and Brian Wynne have shown, this is because the process of science is shot through and through with values.  Wynne suggests that scientific models to measure risk (e.g. risk analysis, cost-benefit calculations) often contain hidden assumptions and prescriptions: about what it means to be social and human, and what an ideal social order should be.  These visions of the human and the social are often found wanting by different publics.  E.g., the language of risk analysis comes coded with assumptions about what a risk is or is not, and what things humans should worry about, points about which different publics disagreed but a) did not have the tools to express their disagreement, and b) were not taken seriously by experts, who saw them as merely lacking an understanding of the science.  One of Jasanoff's suggestions is that rather than trying to cure science of its values, or create a politics that is based on "facts," we accept the value-riddenness of science and use that to think about how expert advice fits into the political process.  (Needless to say, I agree.)

All of which brings me to the real reason why I'm writing this: this Scientific American blog-post, which is the worst combination, in my mind, of two overlapping tendencies: the plague-on-both-houses bipartisan strategy of journalism (something that journalist James Fallows calls "false equivalence"), and the dichotomous conception of "science" and "politics" as two mutually opposing entities.

The post details the ways in which the EPA's efforts to establish a new regulatory standard for drinking water, with an even smaller permitted amount of arsenic in it, were stymied by a Republican Congress.  The contours of the story itself will not surprise anyone.  Surveying some of the research that had been conducted, the EPA was on the verge of making official its stance that arsenic was a more dangerous carcinogen than it had originally thought.  This would be a prelude to a tougher drinking water standard. Naturally, this meant that corporations that produced arsenic or used arsenic in their products lobbied hard to make sure this didn't happen.  In these polarized days of American politics, it made sense to turn to the Republican party.  And the Republicans delivered by delaying the process.  Essentially, they got the National Academy of Sciences (NAS) to do an independent review.  Read the whole piece; it's detailed and precise to the point where it can exhaust the reader.

And here my problems begin.  Take the headline:
Politics Derail Science on Arsenic, Endangering Public Health
Why "Politics" and "Science"?  Why not say "Republicans Derail Science on Arsenic"?  Or even better and my personal preference: "Republicans derail EPA on Arsenic"?

Then take the leading line after the headline:
A ban on arsenic-containing pesticides was lifted after a lawmaker disrupted a scientific assessment by the EPA.
Again, why this coyness about the identity of this "lawmaker"?  Why not mention upfront that this is a Republican congressman?  Why does it take until halfway into the article to identify the offending Congressman: Mike Simpson of Idaho?

Why, for example, is this sentence worded in this particular way when we know we're talking about the Bush White House?
The White House at that point had become a nemesis of EPA scientists, requiring them to clear their science through OMB starting in 2004.
The piece, for all its commendable whistle-blower reporting, contains the worst tendencies of what journalist James Fallows has called "false equivalence" in journalism, which is the plague-on-both-houses stance (see Fallows' copious collection of examples).  Essentially, newspaper reporting has a tendency to blame both political parties, or politics in the abstract, when things reach a bad state.  Here the newspaper is seen as above politics, which is what grubby politicians do.  A contrast is therefore drawn between the policy that the newspaper is advocating (which is not politics but merely good, moral, sensible stuff) and what the politicians are doing, which is bad, i.e. politics.  E.g., the tendency to see the US Congress itself as dysfunctional, rather than the threats of Republican Congressmen to filibuster pretty much any legislation.

The same forces are at work in the Scientific American piece.  Notice that the piece is not explicitly portrayed as a Republicans-vs.-the-EPA piece but rather as a Politics-vs.-Science piece.  If I had to caricature it, the main point is: science good, politics bad.  The problem is that this often serves to paint politics itself as grubby and, well, dishonest.

This also leads to a manifest lack of curiosity about certain topics.  You might wonder why the makers of the arsenic-containing herbicide chose to work through the Republican Party and not the Democratic Party.  There's no way you could answer this question without looking at the broader trends in American politics over the last 50 years.  The two parties now occupy non-overlapping spaces on the political spectrum: the Democrats are a hodge-podge of interest groups (minorities, relatively affluent social liberals, unions, etc.); the Republicans, on the other hand, have only two constituencies: evangelicals and big business.  Perhaps, 50 years ago, a business that wanted to fight a piece of regulation would have had to think harder before deciding which political channel to use; today, it doesn't take more than a minute to decide what to do.

I understand that this is perhaps unfair criticism.  The piece is long enough, and talking about the realignment of American politics will only make it longer.  But that's exactly the point: if you black-box both science and politics, and paint the regulatory battle in question as a contest between them, then you don't need to think deeply about either.  Framing the article as the Republicans' battle against the EPA would have required the writers to ask the question of why these two actors are arrayed against each other. Editorial choices matter.

But the worst thing about the article is what's NOT even in it.  What, one wonders, is a citizen to do after reading it?  The article doesn't say, but I have an answer: call or write to your Congressman (especially the Republican ones, though it doesn't really matter).  Tell him or her that you don't agree with the gutting of the EPA's power.  That you believe in a robust regulatory structure with teeth.  That perhaps you believe in a more take-precautions-first European style of regulation rather than a do-it-first-deal-with-consequences-later American style.  Why couldn't the SciAm article include a link for us to call or email our Congressmen?  Because that would have been too political, that's why.  And why should we bother with grubby politics when the science is in our favor?

One of the recent revelations for me has been how easy the Web can make it for us to call or email our legislators to inform them about our opinions on particular issues.  The techies did it really effectively with their "blackout" in protest of SOPA and PIPA.  Recently, in protest against the FCC's proposal to gut net neutrality, we were able to flood the FCC's comment-solicitation notice board with some good arguments for net neutrality.  At heart, this is just good old-fashioned politics, trying to convince our fellow-citizens about the rightness and wrongness of certain causes, sometimes celebrating victory, at other times, accepting defeat and vowing to fight another day [1].

Now I understand that explicitly political action might not be feasible for the organizations responsible for the article, a collaboration between the Center for Public Integrity and the Center for Investigative Reporting, both of which may have explicit prohibitions (because of their funding model, for example) against participating explicitly in politics.  But that's part of what's got to change, because that's the most important shortcoming of the science-vs.-politics narrative.  It precludes avenues of action for citizens.  What do you do?  Trust science, which is what the SciAm investigative piece seems to suggest?  Despair that your representatives are morally and politically corrupt [2]?  Or do the hard work of politics and convince your fellow-citizens that they're better off having a robust EPA?  I vote for the latter.


[1] Certainly, citizens are starting to participate in science-politics in other ways, most importantly through the practices of citizen science. Citizen science is perhaps the most interesting way of making science "impure." But making phone calls to your legislators, voting, and giving money to causes you deem fit are equally good ways of participating in the political process.

[2] And that, perhaps, explains why the show of our times is Netflix's House of Cards.  More on that another time.  

Sunday, July 13, 2014

Recent blog-posts around the Web

In the past few months, I've been blogging at multiple places and as a result, have completely neglected this blog.  In the future, when I post somewhere else, I will cross-post it here, or at least, post a link.  In the meantime, though, here are some of the posts I wrote recently:

For the CASTAC blog, a post on the history of artificial intelligence and the new field of machine learning.

Also, for the CASTAC blog, a revised post on the phenomenon called "data science" where I speculate that the proliferation of claims about "big data" is more about a crisis in professional identities (who has the expertise to work on particular problems: those with domain knowledge or those with data manipulation skills?) rather than an epistemological crisis (can we analyze phenomena without pre-existing theory?).

Finally, a post on the HASTS blog about how one might use the game of tennis as a way of understanding what the history of technology is all about.

I've also started posting interesting articles I see to my Tumblr.


Thursday, November 14, 2013

Big Data, Boundary Work and Computer Science

A Google Data Center.

The Annual Meeting of the Society for Social Studies of Science this year (i.e. 4S 2013) was full of "big data" panels (Tom Boellstorff has convinced me to not capitalize the term). Many of these talks were critiques; the authors saw big data as a new form of positivism, and the rhetoric of big data as a sort of false consciousness that was sweeping the sciences*.

But what do scientists think of big data?

In a blog-post titled "The Big Data Brain Drain: Why Science is in Trouble," physicist Jake VanderPlas (his CV lists his interests as "Astronomy" and "Machine Learning") makes the argument that the real reason big data is dangerous is that it moves scientists from the academy to corporations.
But where scientific research is concerned, this recently accelerated shift to data-centric science has a dark side, which boils down to this: the skills required to be a successful scientific researcher are increasingly indistinguishable from the skills required to be successful in industry. While academia, with typical inertia, gradually shifts to accommodate this, the rest of the world has already begun to embrace and reward these skills to a much greater degree. The unfortunate result is that some of the most promising upcoming researchers are finding no place for themselves in the academic community, while the for-profit world of industry stands by with deep pockets and open arms. [all emphasis in the original]
His argument proceeds in four steps: first, he argues that yes, new data is indeed being produced, and in stupendously large quantities. Second, processing this data (whether it's in biology or physics) requires a certain kind of scientist who is skilled in both statistics and software. Third, because of this, "scientific software" which can be used to clean, process, and visualize data becomes a key part of the research process. And finally, this scientific software needs to be built and maintained; and because the academy evaluates its scientists not for the software they build but for the papers they publish, many of these talented scientists are now moving to corporate research jobs (where they are appreciated not just for their results but also for their software). That, the author argues, is not good for science.

Clearly, to those familiar with the history of 20th century science, this argument has the ring of deja vu. In The Scientific Life, for example, Steven Shapin argued that the fear that corporate research labs would cause a tear in the prevailing (Mertonian) norms of science, by attracting the best scientists away from the academy, was a big part of the scientific (and social scientific) landscape of the middle of the 20th century. And these fears were largely unfounded (partly because they were based on a picture of science that never existed, and partly because, as Shapin finds, scientific virtue remained nearly intact in its move from the academy to the corporate research lab). [And indeed, Lee Vinsel makes a similar point in his comment on a Scientific American blog-post that links to VanderPlas' post.]

But there's more here, I think, for STS to think about. First, notice the description of the new scientist in the world of big data:
In short, the new breed of scientist must be a broadly-trained expert in statistics, in computing, in algorithm-building, in software design, and (perhaps as an afterthought) in domain knowledge as well. [emphasis in the original].
This is an interesting description on so many levels. But the reason it's most interesting to me is that it fits exactly with the description of what a computer scientist does. I admit this is a bit of a speculation, so feel free to disagree. But in the last few years, computer scientists have increasingly turned their attention to a variety of domains: for example, biology, romance, learning. And in each of these cases, their work looks exactly like the work that VanderPlas' "new breed of scientist" does. [Exactly? Probably not. But you get the idea.] Some of the computer scientists I observe who design software to help students learn work exactly in this way: they need some domain knowledge, but mostly they need the ability to code, and they need to know statistics, both to create machine learning algorithms and to validate their argument to other practitioners.

In other words, what VanderPlas is saying is that practitioners of the sciences are starting to look more and more like computer scientists. His own CV, which I alluded to above, is a case in point: he lists his interests as both astronomy and machine learning. [Again, my point is not so much to argue that he is right or wrong, but that his blog-post is an indication of changes that are afoot.]

His solution to solving the "brain drain" is even more interesting, from an STS perspective. He suggests that the institutional structure of science should recognize and reward software-building so that the most talented people stay in academia and do not migrate to industry. In other words, become even more like computer science institutionally so that the best people stay in academia. Interesting, no?

Computer science is an interesting field. The digital computer's development went hand-in-hand with the development of cybernetics and “systems theory”—theories that saw themselves as generalizable to any kind of human activity. Not surprisingly, the emerging discipline of computer science made it clear that it was not about computers per se; rather, computers were the tools that it would use to understand computation—which potentially applied to any kind of intelligent human activity that could be described as symbol processing (see, e.g., Artificial Intelligence pioneers Newell and Simon’s Turing Award speech). This has meant that computer science has had a wayward existence: it has typically flowed where the wind (meaning funding!) took it. In that sense, its path has been the polar opposite of that of mathematics, whose practitioners, as Alma's dissertation shows, have consciously policed the boundaries of mathematics.  (Proving theorems was seen to be the essence of math; anything else was moved to adjoining disciplines.)

X-posted on Tumblr and the HASTS blog.  


*The only exception to this that I found was Stuart Geiger's talk which was titled "Hadoop as Grounded Theory: Is an STS Approach to Big Data Possible?," the abstract of which is worth citing in full:
In this paper, I challenge the monolithic critical narratives which have emerged in response to “big data,” particularly from STS scholars. I argue that in critiquing “big data” as if it was a stable entity capable of being discussed in the abstract, we are at risk of reifying the very phenomenon we seek to interrogate. There are instead many approaches to the study of large data sets, some quite deserving of critique, but others which deserve a different response from STS. Based on participant-observation with one data science team and case studies of other data science projects, I relate the many ways in which data science is practiced on the ground. There are a diverse array of approaches to the study of large data sets, some of which are implicitly based on the same kinds of iterative, inductive, non-positivist, relational, and theory building (versus theory testing) principles that guide ethnography, grounded theory, and other methodologies used in STS. Furthermore, I argue that many of the software packages most closely associated with the big data movement, like Hadoop, are built in a way that affords many “qualitative” ontological practices. These emergent practices in the fields around data science lead us towards a much different vision of “big data” than what has been imagined by proponents and critics alike. I conclude by introducing an STS manifesto to the study of large data sets, based on cases of successful collaborations between groups who are often improperly referred to as quantitative and qualitative researchers.

Monday, September 30, 2013

The Breaking Bad finale was ...


Frankly, that's the only word that I can think of.  [****SPOILERS FOLLOW****].  The show ended with what can only be called a bang for Walt.  He found a way to give his family his money without them knowing, found a way to see them, he killed the evil Nazis, he set Jesse free and then, well, and then he died.  Or something.  The whole thing was a wish-fulfillment fantasy from start to finish.

Why do I care?  Not so much because I think that characters that are bad need to be punished.  But because there is a coherence and an atmosphere to any show or movie, and this finale violated both.

Take Elliott and Gretchen for instance.  Breaking Bad has been very coy about what exactly transpired between Walt and the Schwartzes or why he and Gretchen broke up.  But the show took the characters seriously.  It made it seem as if the story behind Grey Matter Inc. had substance.  Walt and Skyler's visit to Elliott's birthday party had a pathos to it, and Walt's last confrontation with Gretchen had bite.

None of that mattered yesterday.  The Schwartzes were completely transformed--they were cartoons: rich and pampered people who had robbed Walt of what was rightfully his.  I laughed when Gretchen screamed as Walt did his "Boo" thing to them.  But it only subtracted from what the show has spent the last five seasons doing: carefully, assiduously building even its peripheral characters.

And my beef isn't that the episode was wildly unrealistic and implausible.  (Walt not only gets out of New Hampshire, but manages to drive all the way to New Mexico, threaten Elliott and Gretchen, talk to Skyler (slipping through a police dragnet), and then kill all the villains.)  No.  For all its virtues, Breaking Bad has never been what one might call a "realistic" show.  In a sense, the Season 5 finale was similar to the Season 4 finale where Walt, improbably, vanquishes Gus Fring, the drug king ("I won," he declares at the end of that season).   I should confess that I enjoyed that ending (and I wish the show had ended without a fifth season).  But the show's tone was different then.  It was unquestionably a thriller, even as its characters suffered and made ambiguous choices.  In the second half of Season 5, it had tipped from being a thriller into full-fledged tragedy.  Jesse and Walt were irrevocably estranged, as were Walt and Hank Schrader; Hank was killed, and Jesse had been subjected to every humiliating situation one could think of (meth slavery seemed like a fitting climax).  In the face of such tragedy (Hank dies in "Ozymandias," and Jesse's ex-girlfriend Andrea is brutally executed in "Granite State"), the last episode's almost upbeat tone came as a bit of a shock.  This is how thrillers end, not tragedies, and at this point, I don't think Breaking Bad was a thriller.  I confess I have no idea how the show should have ended -- but this particular ending was just unseemly.

Saturday, June 22, 2013

N+1 is wrong about sociology

The hoity-toity magazine N+1 has a long, rambling editorial  about sociology [1].  The editorial is long -- far too long -- and I'm inclined to think that it is facetious and tongue-in-cheek.  Still, I think it contains within it an important misconception about what sociology is and does.

The argument, if I have it right, goes something like this:  sociology, the Editors think, has gone too far in taking a calculative, demystifying stance on human affairs.  And as sociology never stays within the academy but leaks out, this has made the public (or at least the public that the N+1 editors have dinner parties with) far too calculative as well.

Naturally, the Editors aren't really worried about other things that sociology has demystified, say, religion or technology.  They are mostly concerned about art and the novel.   They worry that far too many works of art are discussed in terms of the gain and loss of cultural capital by the artist.  People interpret various moves that artists make as merely strategies or positioning.  We are not earnest anymore, not passionate; we are merely cold and calculating analysts, never more so than with respect to art.  Sociology's analysis of art -- the works of Bourdieu or Becker, say, and many others -- often makes it seem as if "art mostly expresses class and status hierarchies, and only secondarily might have snippets of aesthetic value."  The Editors are worried about aesthetics.  "There is still," they suggest, "a space where the aesthetic may be encountered immediately and give pleasure and joy uninhibited by surrounding frameworks and networks of rules and class habits."

I get the argument, as far as it goes [2].  And let's also grant that in certain circles frequented by the Editors, this does happen.  There's far too much sociologizing, far too much analyzing of the moves that people make as an expression of their effort to conserve their cultural position, and far too little discussion of aesthetics.  How new is this?  And how much should we blame cultural sociology for this, especially the post-structuralist variety?

Let's take one example that the editorial cites, the case of Jeff Bezos:
We’ve reached the point at which the CEO of Amazon, a giant corporation, in his attempt to integrate bookselling and book production, has perfectly adapted the language of a critique of the cultural sphere that views any claim to “expertise” as a mere mask of prejudice, class, and cultural privilege. Writing in praise of his self-publishing initiative, Jeff Bezos notes that “even well-meaning gatekeepers slow innovation. . . . Authors that might have been rejected by establishment publishing channels now get their chance in the marketplace. Take a look at the Kindle bestseller list and compare it to the New York Times bestseller list — which is more diverse?” Bezos isn’t talking about Samuel Delany; he’s adopting the sociological analysis of cultural capital and appeals to diversity to validate the commercial success of books like Fifty Shades of Grey, a badly written fantasy of a young woman liberated from her modern freedom through erotic domination by a rich, powerful male. Publishers have responded by reducing the number of their own “well-meaning gatekeepers,” actual editors actually editing books, since quality or standards are deemed less important than a work’s potential appeal to various communities of readers.  [my emphasis.]
The Editors seem to imply that Bezos read Bourdieu and then came up with his strategy of how to attack those who opposed Amazon's self-publishing initiatives on aesthetic grounds (exhibit one: the N+1 Editors themselves), i.e., characterize them as gatekeepers trying to protect their fiefdom.

How true is this?  My guess is not at all.  Bezos may well have read Bourdieu, but there is nothing new whatsoever about his strategy; it's hundreds of years old and definitely older than the term "cultural capital" itself. Take a look, for example, at Andrew Abbott's brilliant sociological history of the professions.  When different groups have warred over a task (doctors and nurses over medical care, accountants and lawyers over certain kinds of corporate money management, psychiatrists and psychologists over how mental problems should be treated), they have always resorted to some version of this language of gate-keeping to characterize the other side.  As Wendy Espeland says: "our tendency [is] to see others as having interests where we have commitments."  Sociologists have often taken this as their fundamental problem, asking: how does this come to be?  What does this say about the production of the social order?  And so on.

In other words, sociologists did NOT invent demystification in the academy from where it supposedly diffused across society so that now even Jeff Bezos adopts the language of cultural capital, interests and gate-keeping.  It was already there, has always been there, and usually comes to the fore when controversies arise [3].

Take the passing of the Affordable Care Act, for instance.  Throughout the debate, Republicans alleged that the ACA was a thinly veiled attempt at the redistribution of income, and an effort to take control (of medical decisions) away from families and into the hands of the federal government.  They portrayed themselves as standing for seniors, and Obama as a socialist.  Democrats, for their part, suggested that Republicans did not care about the uninsured, and only cared about protecting the interests of the insurance companies.  Insurance companies, often working behind the scenes, were demonized by everyone but doctors were usually not.  Seniors were quite sure that the ACA was an effort to take away the health-care that they rightfully deserved.  Throughout the debate, you saw actors imputing tawdry "interests" to their opponents and portraying themselves as being committed to certain values.  You might say that they had all gone and read Bourdieu.  Or you might say that this is how social controversies are fought and settled. 

Or, take MOOCs (Massive Open Online Courses, for those who haven't heard of them), for instance.  MOOCs have been the topic of great dispute in the public sphere.  And in the debate, you see the same confluence of imputed interests and personal commitments.  The famous open letter that the philosophy faculty at San Jose State University wrote to Michael Sandel suggested that MOOCs might be part of a neoliberal transformation of the university and an ongoing commodification of education (classes produced at a factory at Harvard, then distributed to community colleges and state universities, which then only need to hire TAs, and so on), and not so much about improving access for students.  On the other hand, MOOC inventor Sebastian Thrun emphasizes the kind of easy access that made his Artificial Intelligence course at Stanford so famous:
Yet there is one project he's happy to talk about. Frustrated that his (and fellow Googler Peter Norvig's) Stanford artificial intelligence class only reached 200 students, they put up a website offering an online version. They got few takers. Then he mentioned the online course at a conference with 80 attendees and 80 people signed up. On a Friday, he sent an offer to the mailing list of a top AI association. On Saturday morning he had 3,000 sign-ups—by Monday morning, 14,000.

In the midst of this, there was a slight hitch, Mr. Thrun says. "I had forgotten to tell Stanford about it. There was my authority problem. Stanford said 'If you give the same exams and the same certificate of completion [as Stanford does], then you are really messing with what certificates really are. People are going to go out with the certificates and ask for admission [at the university] and how do we even know who they really are?' And I said: I. Don't. Care."

Aaron Bady, a graduate student at Berkeley and one of the hottest voices in the blogosphere, says: "not so fast!"  Bady plays up Thrun's tenure at Google, suggesting that Thrun is interested not so much in improving access as in increasing Google's bottom line:
The MOOC that debuted in IHE in December 2011 was Sebastian Thrun’s “Artificial Intelligence” MOOC, a course that was offered at Stanford but opened up to anyone with a broadband. The way this story is usually told is that his incredible success—160,000 students, from 190 countries—encouraged Thrun to leave Stanford to try the new mode of pedagogy that he had stumbled upon. He had seen a TED talk given by Salman Khan, the founder of Khan Academy, and when he decided to give it a whirl and it was a huge success, the rest is history. In January, 2012, he would found the startup Udacity.

However, another way to tell the story would be that Thrun was a Google executive—who was already well known for his work on Google’s driverless car project—and that he had already resigned his tenure at Stanford in April 2011, before he even offered that Artifical Intelligence class. Ending his affiliation with Stanford could be described as completing his transition to Silicon Valley proper. In fact, despite IHE’s singular “a Stanford University professor,” Thrun co-taught the famous course with Google’s Director of Research, Peter Norvig.

It’s important to tell the story this way, too, because the first story makes us imagine a groundswell of market forces and unmet need, a world of students begging to be taught by a Stanford professor and Google, and the technological marvels that suddenly make it possible. But it’s not education that’s driving this shifting conversation; as the MOOC became something very different in migrating to Silicon Valley, it’s in stories told by the New York Times, the WSJ, and TIME magazine that the MOOC comes to seem like an immanent revolution, whose pace is set by necessity and inevitability.
You might say that this actually proves the Editors' point because Bady is a graduate student and has definitely read his Bourdieu.  I would suggest that that would be missing the big picture.  The point is: this kind of debate, with imputations of nefarious interests and declarations of personal commitment, is routine, especially in the midst of social controversies.  Blaming cultural sociology for this is giving academics too much credit [4].

In fact, sociologists (and historians, and anthropologists, and literary studies scholars, and most scholars of the humanities) have always grappled with a strange paradox.  Sociologists study "society"--an object that is itself a can of worms.  Society is more than the sum of the people who constitute it.  Yet, when one starts investigating the social world, one discovers that most people are themselves lay-sociologists.  Or to put it a different way, sociologists are trying to come up with a more systematic version of what people do routinely in their lives.  People analyze their own social world, their "society," and strategize.  Parents spend a great deal of time managing their children's spare time because they know this will serve the child well later in life.  Teenagers routinely think about what they want to do for a living; they know that going to a good college is a big part of achieving it.  People spend a great deal of time picking spouses and friends.  They may not always succeed in getting what they want, but they think about it nevertheless.

Academic sociology, then, is built on the foundation of everyday reasoning [this, in fact, is the central insight of Harold Garfinkel's ethnomethodology].  All sociological concepts--power, prestige, cultural capital, class, race, gender--are based on everyday versions of these categories.  And in fact, one of the divides in sociology--between qualitative and quantitative sociologists--is based precisely on different understandings of the relationship between scholarly and lay sociology.  At the risk of over-simplifying, most quantitative sociologists will acknowledge this relationship but suggest that a reliance on large numbers, aggregates, and the methods of statistics can be a useful way of differentiating specialist sociology from its lay variant.  Qualitative sociologists, on the other hand, believe that because actors are always theorizing about their own circumstances, this understanding needs to be part of any theory about society.

Let me end on a tongue-in-cheek note (which will probably drive the Editors up a wall).  A while ago, A. O. Scott wrote an essay on a number of smart young men and women who had teamed up to start two little magazines:
"You'd better mean something enough to live by it," Kunkel told me, echoing both his fictional creation and, as it happens, one of his comrades in another literary enterprise. On the last page of the first issue of n+1, a little magazine that made its debut last year, the reader learns that "it is time to say what you mean." The author of that declaration, a forceful variation on some of Dwight Wilmerding's more tentative complaints, is Keith Gessen, who edits n+1 along with Kunkel, Mark Greif and Marco Roth. All four editors are around Dwight's age - he's 28 when the main action in the book takes place; they're 30 or a little older. Like him, they often glance anxiously and a bit nostalgically backward to a pre-9/11, pre-Florida-recount moment that seems freer and more irresponsible than the present. You wouldn't, however, call any of them any kind of idiot. Nor, based on their pointed, closely argued and often brilliantly original critiques of contemporary life and letters, would you accuse them of indecision, though they do sometimes display a certain pained 21st-century ambivalence about the culture they inhabit.

N+1 is not the first small magazine to come out of this ambivalence or the first to have its mission encapsulated by a memoiristic account of the attempt to figure out one's life. Consider the following scrap of dialogue from Dave Eggers's "Heartbreaking Work of Staggering Genius," famously hailed as the manifesto of a slightly earlier generational moment:

"And how will you do this?" she wants to know. "A political party? A march? A revolution? A coup?"

"A magazine."

Eggers is talking about an old (in fact, a defunct) magazine called Might, but never mind. Even with a bit of historical distance - five years after the book's publication, a decade and more after the events it describes - these lines capture both a moment and the general spirit of the magazine-starting enterprise. A bunch of ambitious, like-minded young friends get together to assemble pictures and words into a sensibility - a voice, a look, an attitude - that they hope will resonate beyond their immediate circle.

And yet, look at how these nice young men met:
The four editors of n+1 are also connected by shared sensibilities and school ties. Kunkel, who grew up in Colorado, went from Deep Springs College, a tiny, all-male school in the California desert devoted to the classical ideal of rigorous study in a pastoral setting, to Harvard, where he met Greif, though not Gessen, who was also there at the time. (Actually, they later discovered that they did have one brief encounter as undergraduates, about which Kunkel would say only that at least one of them was drunk and that one suggested the other should get a lobotomy.) Gessen, who lived in the Soviet Union until he was 6, was a football player at Harvard and went on to get an M.F.A. in fiction from Syracuse. Greif entered the Ph.D. program in American studies at Yale, where he met Roth, who had arrived via Oberlin and Columbia to pursue his doctorate in comparative literature. After talking about it for years - another friend from Harvard, Chad Harbach, who edits the n+1 Web site, thought of the name back in 1998 - they decided the moment was right to put their ideas and aspirations into print.
Harvard and Yale.  Hmmm.  I'm dying to use the term "cultural privilege," but I won't.

I don't mean to doubt the Editors' sincerity or commitment to producing a certain kind of literature.  But the fact remains that in order to fulfill any high-minded goals, you need to descend to the ground, to use existing resources.  The Editors all met through social networks that were spawned by attending elite universities.  They decided to start a magazine--not a blog, and not an online-only publication.  They made--or were constrained to make--certain kinds of choices to reach their goals.  And finally, they were the subject of a piece in--of all places!--the New York Times Magazine that vastly improved their magazine's visibility (I, certainly, had not heard of n+1 until I read Scott's piece).  In Scott's article, the Editors go out of their way to assert what makes their magazine different: their commitment to a certain style of writing, of seeing the world.  Other magazines, they suggest, are moribund, caught in a rut; n+1 is fresh and young.

To me, what the Editors are doing in that piece is a version of the "impute interests to others, values to self" rhetoric that they criticize in an editorial published many years later and blame on academic cultural sociology.  Which only goes to show that the phenomenon itself has been around a long, long time.



[1]  And of course, written in its characteristic style with the imperial "we," that, to me at least, often feels like a reference to the small segment of the cultural elite they feel an ineffable bond with.

[2] I am not sure who these people are who analyze art at dinner parties using cultural sociology.  In the circles I hang out with, art is still discussed with reference to aesthetics.  And nothing that I read in high culture magazines like the New York Review of Books or the New Republic convinces me that we now discuss art in terms of art-makers managing cultural capital rather than its deep aesthetic value.

[3] "Always" may be an overstatement.  But certainly one sees examples of this from the early modern period.  

[4] That said, it's always flattering when someone credits the humanities with that much influence. 

Tuesday, May 28, 2013

Postcolonial Theory and Its Discontents

Ian Hacking, in one of his articles, praises the uniquely French form of the interview as a great way to understand an author's thoughts.  He's talking about Foucault--and indeed, some of Foucault's interviews are far easier to understand than his books.  In that same spirit--i.e., it lays out the terrain on which these debates are staged--I liked this interview with Vivek Chibber in Jacobin on his new book "Postcolonial Theory and the Specter of Capital," which criticizes postcolonial theory and urges a return to good old-fashioned Marxism.
The argument goes like this: the universalizing categories associated with Enlightenment thought are only as legitimate as the universalizing tendency of capital. And postcolonial theorists deny that capital has in fact universalized — or more importantly, that it ever could universalize around the globe. Since capitalism has not and cannot universalize, the categories that people like Marx developed for understanding capitalism also cannot be universalized.
What this means for postcolonial theory is that the parts of the globe where the universalization of capital has failed need to generate their own local categories. And more importantly, it means that theories like Marxism, which try to utilize the categories of political economy, are not only wrong, but they’re Eurocentric, and not only Eurocentric, but they’re part of the colonial and imperial drive of the West. And so they’re implicated in imperialism. Again, this is a pretty novel argument on the Left.
This is probably cartoonish--as is probably the rest of the interview--but if I were teaching a class, I'd use it as a text for setting out the background arguments.

A much more rigorous response to Chibber's book by Chris Taylor is also great--although far more abstract.
It’s kind of hard to say. Chibber does not expend anything like the same amount of time unpacking—much less justifying—his own Marxist normative and epistemological presuppositions as he does in showing that Guha, Chatterjee, and Chakrabarty are anti-Marxist. In broad outlines, Chibber’s Marxism depends on “a defense of two universalisms, one pertaining to capital and the other to labor.” More specifically, Chibber’s Marxism is bound to the idea that ”the modern epoch is driven by the twin forces of, on the one side, capital’s unrelenting drive to expand, to conquer new markets, and to impose its domination on the laboring classes [the first universalism], and, on the other side, the unceasing struggle by these classes to defend themselves, their well-being, against this onslaught [the second universalism] (208).” So far, nothing objectionable: welcome to the Communist Manifesto. The problem emerges, however, when Chibber attempts moving from the universal to the particular, from the universality of capitalism’s antagonism to the particular social zoning of its enactment. If postcolonial theorists want to hold onto the particularity of the particular, and engage the universal through it, Chibber uses these “two universalisms” to denude the particular, to remove the peculiarity of the particular in order to reduce it to the universal. Methodologically, Chibber’s Marxism is pre-Hegelian. Indeed, his Marxism is the kind of “monochrome formalism” derided by Hegel, an epistemology for which the universal dominates the particular, one through which “the living essence of the matter [is] stripped away or boxed up dead.”
And then later:
In part, I think that “Marxism versus postcolonial theory” is simply running interference for a set of disciplinary battles over methodological and theoretical orientation. The antinomy that Chibber continually establishes is one between a realist sociology (with an investment in abstract structures that prime and cause human action) and hermeneutically inclined fields of anthropology, history, and literary studies. (Don’t mention literary studies to Chibber. He doesn’t seem to like it very much.) In each of Chibber’s chapters, the explanatory triumph of universalist accounts over particularist accounts can be read as the triumph of a certain form of sociological reason over its others.
More importantly, I think that Chibber is desperate for the resurgence of a particular kind of Marxism, one that was displaced not by postcolonial theorists but by anticolonial Marxists like Fanon, James, and so on. That’s why he can’t incorporate them into his account of postcolonial theory: they are Marxists who mount critiques of formalist universalisms by keeping close to the particular, by maintaining the tension that obtains between economic structure and lived phenomenology, between structuralist accounts of the world and hermeneutic investigations into worlds. I have no idea why one would wish to return to the days of CP sloganeering. (I can’t be the only one who heard echoes of “black and white, unite and fight!” in his book.) But the desire is there, and it shapes the way he constructs postcolonial theory. Chibber’s fantasy that an anti-Marxist postcolonial theory reigns hegemonic in the academy enables him to maintain the fantasy that the once and future king of Marxism might some day be restored to rule. But, in order to elaborate this fantasy, he needs to transform a tension internal to postcolonial theory (between Marxist accounts of structure and hermeneutic approaches to the particular—which can still be, of course, Marxist) into a struggle exterior to it. 

Saturday, March 30, 2013

The Construction of Disability

Everyone should listen to this latest This American Life episode on what the reporter of the piece, Chana Joffe-Walt, calls the "disability industrial complex."  The simple factoid with which it begins?  The rise in the number of people all over America on disability.  Joffe-Walt starts with this and burrows in deeper.  She finds that disability is a slippery concept: how does it get defined in practice?  When she meets the doctor in Hale County, Alabama, where 1 out of every 4 people is on disability (and he's responsible for many of these diagnoses), he tells her some of the criteria he uses.  One among them is education level.  Why, she wonders, is education level a criterion for disability?  The answer is, of course, that he's trying to think about the kinds of jobs his patients will be working in, and if he estimates that they can't work those jobs successfully, well, then for all practical purposes, they are disabled. (See transcript.)

But Joffe-Walt doesn't stop there.  She wants to explore this whole ecosystem of disability.  So she looks at lawyers.  What role have lawyers played in getting people on disability?  (And lawyers here come off surprisingly well, I think--crass, yes, money-minded, definitely, but also fulfilling a deep need.)  The answer: a big one.  And what of the political economy?  Aside from the problem of inequality--that the number of good jobs that don't require college degrees is steadily decreasing--she also points to federal and state regulations.  States, she finds, have an active interest in moving people off their welfare rolls and onto the federally funded disability program.  This work, naturally, is done by consultants who charge a fee for every successful transfer.

It's all deeply fascinating stuff that moves fluidly on a number of different levels.  Sometimes Joffe-Walt is down on the ground, talking to people, seeking their opinions, wondering what they think.  At other times, she takes a bird's-eye view of the scene, talking to economists and regulators.  [The website has a number of interesting graphs that are worth checking out.]