Thursday, November 14, 2013

Big Data, Boundary Work and Computer Science

A Google Data Center.

The Annual Meeting of the Society for Social Studies of Science this year (i.e. 4S 2013) was full of "big data" panels (Tom Boellstorff has convinced me not to capitalize the term). Many of these talks were critiques; the authors saw big data as a new form of positivism, and the rhetoric of big data as a sort of false consciousness sweeping the sciences*.

But what do scientists think of big data?

In a blog-post titled "The Big Data Brain Drain: Why Science is in Trouble," physicist Jake VanderPlas (his CV lists his interests as "Astronomy" and "Machine Learning") argues that the real danger of big data is that it moves scientists from the academy to corporations.
But where scientific research is concerned, this recently accelerated shift to data-centric science has a dark side, which boils down to this: the skills required to be a successful scientific researcher are increasingly indistinguishable from the skills required to be successful in industry. While academia, with typical inertia, gradually shifts to accommodate this, the rest of the world has already begun to embrace and reward these skills to a much greater degree. The unfortunate result is that some of the most promising upcoming researchers are finding no place for themselves in the academic community, while the for-profit world of industry stands by with deep pockets and open arms. [all emphasis in the original]
His argument proceeds in four steps: first, he argues that yes, new data is indeed being produced, and in stupendously large quantities. Second, processing this data (whether in biology or physics) requires a certain kind of scientist who is skilled in both statistics and software. Third, because of this, "scientific software" that can be used to clean, process, and visualize data becomes a key part of the research process. And finally, this scientific software needs to be built and maintained, and because the academy evaluates its scientists not for the software they build but for the papers they publish, all of these talented scientists are moving to corporate research jobs (where they are appreciated not just for their results but also for their software). That, the author argues, is not good for science.
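
[To make this concrete, here is a minimal sketch, in Python (VanderPlas' own language of choice), of the clean-process-visualize cycle described above. The data and column names are invented for illustration; this is my gloss, not VanderPlas' code.]

```python
# A minimal sketch of the clean-process-visualize cycle of "scientific
# software." The data here is invented; a real pipeline would start
# from instrument or survey files.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Raw "observations": some missing values and one impossible reading.
df = pd.DataFrame({
    "night":      [1, 1, 2, 2, 3, 3, 4],
    "brightness": [10.1, np.nan, 9.8, 10.0, -99.0, 9.7, 9.5],
})

# Clean: drop missing values and obvious sensor errors.
df = df.dropna(subset=["brightness"])
df = df[df["brightness"] > 0]

# Process: aggregate to a mean brightness per night.
nightly = df.groupby("night")["brightness"].mean()

# Visualize: plot the resulting "light curve."
nightly.plot(marker="o")
plt.xlabel("night")
plt.ylabel("mean brightness")
plt.show()
```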

Clearly, to those familiar with the history of 20th-century science, this argument has the ring of déjà vu. In The Scientific Life, for example, Steven Shapin argued that the fear that corporate research labs would cause a tear in the prevailing (Mertonian) norms of science, by attracting the best scientists away from the academy, was a big part of the scientific (and social-scientific) landscape of the mid-20th century. And these fears were largely unfounded (partly because they were based on a picture of science that never existed, and partly because, as Shapin finds, scientific virtue remained nearly intact in its move from the academy to the corporate research lab). [And indeed, Lee Vinsel makes a similar point in his comment on a Scientific American blog-post that links to VanderPlas' post.]

But there's more here, I think, for STS to think about. First, notice the description of the new scientist in the world of big data:
In short, the new breed of scientist must be a broadly-trained expert in statistics, in computing, in algorithm-building, in software design, and (perhaps as an afterthought) in domain knowledge as well. [emphasis in the original].
This is an interesting description on so many levels. But the reason it's most interesting to me is that it fits exactly with the description of what a computer scientist does. I admit this is a bit of speculation, so feel free to disagree. But in the last few years, computer scientists have increasingly turned their attention to a variety of domains: for example, biology, romance, learning. And in each of these cases, their work looks exactly like the work of VanderPlas' "new breed of scientist." [Exactly? Probably not. But you get the idea.] Some of the computer scientists I observe who design software to help students learn work exactly in this way: they need some domain knowledge, but mostly they need the ability to code, and they need to know statistics, both to create machine-learning algorithms and to validate their arguments to other practitioners.
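
[Again, to make this concrete: here is a minimal sketch of that dual use of statistics--fitting a model, then validating it--using Python and scikit-learn. The "student data" is randomly generated stand-in data; the whole thing is my illustration, not the actual software of the researchers I observe.]

```python
# A sketch of the dual role of statistics: "create" a model, then
# "validate" it for other practitioners. All data is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Stand-in "student log data": e.g., time-on-task and hint requests.
X = rng.normal(size=(200, 2))
# Stand-in outcome: did the student master the skill?
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200)) > 0

# "Create": fit the model that might drive, say, a tutoring system.
model = LogisticRegression().fit(X, y)

# "Validate": report a cross-validated score to convince other
# practitioners that the model actually predicts something.
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```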

In other words, what VanderPlas is saying is that practitioners of the sciences are starting to look more and more like computer scientists. His own CV, which I alluded to above, is a case in point: he lists his interests as both astronomy and machine learning. [Again, my point is not so much to argue that he is right or wrong, but that his blog-post is an indication of changes that are afoot.]

His solution to the "brain drain" is even more interesting from an STS perspective. He suggests that the institutional structure of science should recognize and reward software-building so that the most talented people stay in academia and do not migrate to industry. In other words: become even more like computer science institutionally so that the best people stay in academia. Interesting, no?

Computer science is an interesting field. The digital computer's development went hand-in-hand with the development of cybernetics and “systems theory”—theories that saw themselves as generalizable to any kind of human activity. Not surprisingly, the emerging discipline of computer science made it clear that it was not about computers per se; rather, computers were the tools it would use to understand computation—which potentially applied to any kind of intelligent human activity that could be described as symbol processing (see, e.g., Artificial Intelligence pioneers Newell and Simon’s Turing Award speech). This has meant that computer science has had a wayward existence: it has typically flowed where the wind (meaning funding!) took it. In that sense, its path has been the polar opposite of that of mathematics, whose practitioners, as Alma's dissertation shows, have consciously policed the boundaries of their discipline. (Proving theorems was seen to be the essence of math; anything else was moved to adjoining disciplines.)

X-posted on Tumblr and the HASTS blog.  

--------------------

*The only exception to this that I found was Stuart Geiger's talk, titled "Hadoop as Grounded Theory: Is an STS Approach to Big Data Possible?," the abstract of which is worth citing in full:
In this paper, I challenge the monolithic critical narratives which have emerged in response to “big data,” particularly from STS scholars. I argue that in critiquing “big data” as if it was a stable entity capable of being discussed in the abstract, we are at risk of reifying the very phenomenon we seek to interrogate. There are instead many approaches to the study of large data sets, some quite deserving of critique, but others which deserve a different response from STS. Based on participant-observation with one data science team and case studies of other data science projects, I relate the many ways in which data science is practiced on the ground. There are a diverse array of approaches to the study of large data sets, some of which are implicitly based on the same kinds of iterative, inductive, non-positivist, relational, and theory building (versus theory testing) principles that guide ethnography, grounded theory, and other methodologies used in STS. Furthermore, I argue that many of the software packages most closely associated with the big data movement, like Hadoop, are built in a way that affords many “qualitative” ontological practices. These emergent practices in the fields around data science lead us towards a much different vision of “big data” than what has been imagined by proponents and critics alike. I conclude by introducing an STS manifesto to the study of large data sets, based on cases of successful collaborations between groups who are often improperly referred to as quantitative and qualitative researchers.

Monday, September 30, 2013

The Breaking Bad finale was ...

...unseemly. 

Frankly, that's the only word I can think of.  [****SPOILERS FOLLOW****]  The show ended with what can only be called a bang for Walt.  He found a way to give his family his money without their knowing, found a way to see them, killed the evil Nazis, set Jesse free, and then, well, and then he died.  Or something.  The whole thing was a wish-fulfillment fantasy from start to finish.

Why do I care?  Not so much because I think that bad characters need to be punished.  But because any show or movie has a coherence and an atmosphere, and this finale violated both.



Take Elliott and Gretchen, for instance.  Breaking Bad has been very coy about what exactly transpired between Walt and the Schwartzes, or why he and Gretchen broke up.  But the show took the characters seriously.  It made it seem as if the story behind Grey Matter Inc. had substance.  Walt and Skyler's visit to Elliott's birthday party had a pathos to it, and Walt's last confrontation with Gretchen had bite.



None of that mattered yesterday.  The Schwartzes were completely transformed--they were cartoons: rich and pampered people who had robbed Walt of what was rightfully his.  I laughed when Gretchen screamed as Walt did his "Boo" thing to them.  But it only subtracted from what the show had spent five seasons doing: carefully, assiduously building even its peripheral characters.



And my beef isn't that the episode was wildly unrealistic and implausible.  (Walt not only gets out of New Hampshire, but manages to drive all the way to New Mexico, threaten Elliott and Gretchen, talk to Skyler (slipping through a police dragnet) and then kill all the villains.)  No.  For all its virtues, Breaking Bad has never been what one might call a "realistic" show.  In a sense, the Season 5 finale was similar to the Season 4 finale, where Walt, improbably, vanquishes Gus Fring, the drug king ("I won," he declares at the end of that season).  I should confess that I enjoyed that ending (and I wish the show had ended without a fifth season).  But the show's tone was different then.  It was unquestionably a thriller, even as its characters suffered and made ambiguous choices.  In the second half of Season 5, it had tipped from being a thriller into full-fledged tragedy.  Jesse and Walt were irrevocably estranged, as were Walt and Hank Schrader; Hank was killed; and Jesse had been subjected to nearly every humiliating situation one could think of (meth slavery seemed like a fitting climax).  In the face of such tragedy (Hank dies in "Ozymandias," and Jesse's ex-girlfriend Andrea is brutally executed in "Granite State"), the last episode's almost upbeat tone came as a bit of a shock.  This is how thrillers end, not tragedies, and by this point, I don't think Breaking Bad was a thriller.  I confess I have no idea how the show should have ended -- but this particular ending was, just, unseemly. 

Saturday, June 22, 2013

N+1 is wrong about sociology

The hoity-toity magazine N+1 has a long, rambling editorial about sociology [1] -- far too long -- and I'm inclined to think it is facetious and tongue-in-cheek.  Still, I think it contains within it an important misconception about what sociology is and does.

The argument, if I have it right, goes something like this:  sociology, the Editors think, has gone too far in taking a calculative, demystifying stance on human affairs.  And as sociology never stays within the academy but leaks out, this has made the public (or at least the public that the N+1 editors have dinner parties with) far too calculative as well.

Naturally, the Editors aren't really worried about other things that sociology has demystified, say, religion or technology.  They are mostly concerned about art and the novel.   They worry that far too many works of art are discussed in terms of the gain and loss of cultural capital by the artist.  People interpret various moves that artists make as merely strategies or positioning.  We are not earnest anymore, not passionate; we are merely cold and calculating analysts, never more so than with respect to art.  Sociology's analysis of art -- the works of Bourdieu or Becker, say, and many others -- often makes it seem as if "art mostly expresses class and status hierarchies, and only secondarily might have snippets of aesthetic value."  The Editors are worried about aesthetics.  "There is still," they suggest, "a space where the aesthetic may be encountered immediately and give pleasure and joy uninhibited by surrounding frameworks and networks of rules and class habits."

I get the argument, as far as it goes [2].  And let's also grant that in certain circles frequented by the Editors, this does happen.  There's far too much sociologizing, far too much analyzing of the moves that people make as an expression of their effort to conserve their cultural position, and far too little discussion of aesthetics.  How new is this?  And how much should we blame cultural sociology for this, especially the post-structuralist variety?

Let's take one example that the editorial cites, the case of Jeff Bezos:
We’ve reached the point at which the CEO of Amazon, a giant corporation, in his attempt to integrate bookselling and book production, has perfectly adapted the language of a critique of the cultural sphere that views any claim to “expertise” as a mere mask of prejudice, class, and cultural privilege. Writing in praise of his self-publishing initiative, Jeff Bezos notes that “even well-meaning gatekeepers slow innovation. . . . Authors that might have been rejected by establishment publishing channels now get their chance in the marketplace. Take a look at the Kindle bestseller list and compare it to the New York Times bestseller list — which is more diverse?” Bezos isn’t talking about Samuel Delany; he’s adopting the sociological analysis of cultural capital and appeals to diversity to validate the commercial success of books like Fifty Shades of Grey, a badly written fantasy of a young woman liberated from her modern freedom through erotic domination by a rich, powerful male. Publishers have responded by reducing the number of their own “well-meaning gatekeepers,” actual editors actually editing books, since quality or standards are deemed less important than a work’s potential appeal to various communities of readers.  [my emphasis.]
The Editors seem to imply that Bezos read Bourdieu and then came up with his strategy for attacking those who opposed Amazon's self-publishing initiatives on aesthetic grounds (exhibit one: the N+1 Editors themselves): characterize them as gatekeepers trying to protect their fiefdom.

How true is this?  My guess is: not at all.  Bezos may well have read Bourdieu, but there is nothing new whatsoever about his strategy; it's hundreds of years old, and certainly older than the term "cultural capital."  Take a look, for example, at Andrew Abbott's brilliant sociological history of the professions.  When different groups have warred over a task (doctors and nurses over medical care, accountants and lawyers over certain kinds of corporate money management, psychiatrists and psychologists over how mental problems should be treated), they have always resorted to some version of this language of gate-keeping to characterize the other side.  As Wendy Espeland says: "our tendency [is] to see others as having interests where we have commitments."  Sociologists have often taken this as their fundamental problem, asking: how does this come to be?  What does this say about the production of the social order?  And so on.

In other words, sociologists did NOT invent demystification in the academy from where it supposedly diffused across society so that now even Jeff Bezos adopts the language of cultural capital, interests and gate-keeping.  It was already there, has always been there, and usually comes to the fore when controversies arise [3].

Take the passing of the Affordable Care Act, for instance.  Throughout the debate, Republicans alleged that the ACA was a thinly veiled attempt at the redistribution of income, and an effort to take control of medical decisions away from families and put it into the hands of the federal government.  They portrayed themselves as standing for seniors, and Obama as a socialist.  Democrats, for their part, suggested that Republicans did not care about the uninsured, and only cared about protecting the interests of the insurance companies.  Insurance companies, often working behind the scenes, were demonized by everyone, but doctors usually were not.  Seniors were quite sure that the ACA was an effort to take away the health-care that they rightfully deserved.  Throughout the debate, you saw actors imputing tawdry "interests" to their opponents and portraying themselves as being committed to certain values.  You might say that they had all gone and read Bourdieu.  Or you might say that this is how social controversies are fought and settled. 

Or take MOOCs (Massive Open Online Courses, for those who haven't heard of them).  MOOCs have been the topic of great dispute in the public sphere.  And in the debate, you see the same confluence of imputed interests and personal commitments.  The much-discussed open letter that the philosophy faculty at San Jose State University wrote to Michael Sandel suggested that MOOCs might be part of a neoliberal transformation of the university and an ongoing commodification of education (classes produced at a factory at Harvard, then distributed to community colleges and state universities, which then only need to hire TAs, and so on), and not so much about improving access for students.  On the other hand, MOOC inventor Sebastian Thrun emphasizes the kind of easy access that made his Artificial Intelligence course at Stanford so famous:
Yet there is one project he's happy to talk about. Frustrated that his (and fellow Googler Peter Norvig's) Stanford artificial intelligence class only reached 200 students, they put up a website offering an online version. They got few takers. Then he mentioned the online course at a conference with 80 attendees and 80 people signed up. On a Friday, he sent an offer to the mailing list of a top AI association. On Saturday morning he had 3,000 sign-ups—by Monday morning, 14,000.

In the midst of this, there was a slight hitch, Mr. Thrun says. "I had forgotten to tell Stanford about it. There was my authority problem. Stanford said 'If you give the same exams and the same certificate of completion [as Stanford does], then you are really messing with what certificates really are. People are going to go out with the certificates and ask for admission [at the university] and how do we even know who they really are?' And I said: I. Don't. Care."

Aaron Bady, a graduate student at Berkeley and one of the hottest voices in the blogosphere, says: "not so fast!"  Bady plays up Thrun's tenure at Google, suggesting that Thrun is interested not so much in improving access as in increasing Google's bottom line:
The MOOC that debuted in IHE in December 2011 was Sebastian Thrun’s “Artificial Intelligence” MOOC, a course that was offered at Stanford but opened up to anyone with a broadband. The way this story is usually told is that his incredible success—160,000 students, from 190 countries—encouraged Thrun to leave Stanford to try the new mode of pedagogy that he had stumbled upon. He had seen a TED talk given by Salman Khan, the founder of Khan Academy, and when he decided to give it a whirl and it was a huge success, the rest is history. In January, 2012, he would found the startup Udacity.

However, another way to tell the story would be that Thrun was a Google executive—who was already well known for his work on Google’s driverless car project—and that he had already resigned his tenure at Stanford in April 2011, before he even offered that Artifical Intelligence class. Ending his affiliation with Stanford could be described as completing his transition to Silicon Valley proper. In fact, despite IHE’s singular “a Stanford University professor,” Thrun co-taught the famous course with Google’s Director of Research, Peter Norvig.

It’s important to tell the story this way, too, because the first story makes us imagine a groundswell of market forces and unmet need, a world of students begging to be taught by a Stanford professor and Google, and the technological marvels that suddenly make it possible. But it’s not education that’s driving this shifting conversation; as the MOOC became something very different in migrating to Silicon Valley, it’s in stories told by the New York Times, the WSJ, and TIME magazine that the MOOC comes to seem like an immanent revolution, whose pace is set by necessity and inevitability.
You might say that this actually proves the Editors' point because Bady is a graduate student and has definitely read his Bourdieu.  I would suggest that that would be missing the big picture.  The point is: this kind of debate, with imputations of nefarious interests and declarations of personal commitment, is routine, especially in the midst of social controversies.  Blaming cultural sociology for this is giving academics too much credit [4].

In fact, sociologists (and historians, and anthropologists, and literary studies scholars, and most scholars of the humanities) have always grappled with a strange paradox.  Sociologists study "society"--an object that is itself a can of worms.  Society is more than the sum of the people who constitute it.  Yet, when one starts investigating the social world, one discovers that most people are themselves lay sociologists.  Or to put it differently, sociologists are trying to come up with a more systematic version of what people do routinely in their lives.  People analyze their own social world, their "society," and strategize.  Parents spend a great deal of time managing their children's spare time because they know this will serve the child well later in life.  Teenagers routinely think about what they want to do for a living; they know that going to a good college is a big part of achieving it.  People spend a great deal of time picking spouses and friends.  They may not always succeed in getting what they want, but they think about it nevertheless.

Academic sociology, then, is built on the foundation of everyday reasoning [this, in fact, is the central insight of Harold Garfinkel's ethnomethodology].  All sociological concepts--power, prestige, cultural capital, class, race, gender--are based on everyday versions of these categories.  And in fact, one of the divides in sociology--between qualitative and quantitative sociologists--is based precisely on different understandings of the relationship between scholarly and lay sociology.  At the risk of over-simplifying: most quantitative sociologists will acknowledge this relationship but suggest that a reliance on large numbers, aggregates, and the methods of statistics can be a useful way of differentiating specialist sociology from its lay variant, while qualitative sociologists believe that because actors are always theorizing about their own circumstances, this understanding needs to be part of any theory about society.

Let me end on a tongue-in-cheek note (which will probably drive the Editors up a wall).  A while ago, A. O. Scott wrote an essay about a number of smart young men and women who had teamed up to start two little magazines:
"You'd better mean something enough to live by it," Kunkel told me, echoing both his fictional creation and, as it happens, one of his comrades in another literary enterprise. On the last page of the first issue of n+1, a little magazine that made its debut last year, the reader learns that "it is time to say what you mean." The author of that declaration, a forceful variation on some of Dwight Wilmerding's more tentative complaints, is Keith Gessen, who edits n+1 along with Kunkel, Mark Greif and Marco Roth. All four editors are around Dwight's age - he's 28 when the main action in the book takes place; they're 30 or a little older. Like him, they often glance anxiously and a bit nostalgically backward to a pre-9/11, pre-Florida-recount moment that seems freer and more irresponsible than the present. You wouldn't, however, call any of them any kind of idiot. Nor, based on their pointed, closely argued and often brilliantly original critiques of contemporary life and letters, would you accuse them of indecision, though they do sometimes display a certain pained 21st-century ambivalence about the culture they inhabit.

N+1 is not the first small magazine to come out of this ambivalence or the first to have its mission encapsulated by a memoiristic account of the attempt to figure out one's life. Consider the following scrap of dialogue from Dave Eggers's "Heartbreaking Work of Staggering Genius," famously hailed as the manifesto of a slightly earlier generational moment:

"And how will you do this?" she wants to know. "A political party? A march? A revolution? A coup?"

"A magazine."

Eggers is talking about an old (in fact, a defunct) magazine called Might, but never mind. Even with a bit of historical distance - five years after the book's publication, a decade and more after the events it describes - these lines capture both a moment and the general spirit of the magazine-starting enterprise. A bunch of ambitious, like-minded young friends get together to assemble pictures and words into a sensibility - a voice, a look, an attitude - that they hope will resonate beyond their immediate circle.

And yet, look at how these nice young men met:
The four editors of n+1 are also connected by shared sensibilities and school ties. Kunkel, who grew up in Colorado, went from Deep Springs College, a tiny, all-male school in the California desert devoted to the classical ideal of rigorous study in a pastoral setting, to Harvard, where he met Greif, though not Gessen, who was also there at the time. (Actually, they later discovered that they did have one brief encounter as undergraduates, about which Kunkel would say only that at least one of them was drunk and that one suggested the other should get a lobotomy.) Gessen, who lived in the Soviet Union until he was 6, was a football player at Harvard and went on to get an M.F.A. in fiction from Syracuse. Greif entered the Ph.D. program in American studies at Yale, where he met Roth, who had arrived via Oberlin and Columbia to pursue his doctorate in comparative literature. After talking about it for years - another friend from Harvard, Chad Harbach, who edits the n+1 Web site, thought of the name back in 1998 - they decided the moment was right to put their ideas and aspirations into print.
Harvard and Yale.  Hmmm.  I'm dying to use the term "cultural privilege" but I won't. 

I don't mean to doubt the Editors' sincerity or commitment to producing a certain kind of literature.  But the fact remains that in order to fulfill any high-minded goals, you need to descend to the ground, to use existing resources.  The Editors all met through social networks spawned by attending elite universities.  They decided to start a magazine--not a blog, and not an online-only publication.  They made -- or were constrained to make -- certain kinds of choices to reach their goals.  And finally, they were the object of a piece in -- of all places! -- the New York Times Magazine that vastly improved their magazine's visibility (I, certainly, had not heard of N+1 until I read Scott's piece).  In Scott's article, the Editors go out of their way to assert what makes their magazine different: their commitment to a certain style of writing, of seeing the world.  Other magazines, they suggest, are moribund, caught in a rut; N+1 is fresh and young.

To me, what the Editors are doing in that piece is a version of the very "impute interests to others, values to self" rhetoric that they criticize in an editorial published many years later and blame on academic cultural sociology.  Which only goes to show that the phenomenon itself has been around a long, long time.

--------------------------------------------------------------------------------------------------

Endnotes:

[1]  And of course, it is written in the magazine's characteristic style, with the imperial "we" that, to me at least, often feels like a reference to the small segment of the cultural elite they feel an ineffable bond with.

[2] I am not sure who these people are who analyze art at dinner parties using cultural sociology.  In the circles I hang out with, art is still discussed with reference to aesthetics.  And nothing that I read in high culture magazines like the New York Review of Books or the New Republic convinces me that we now discuss art in terms of art-makers managing cultural capital rather than its deep aesthetic value.

[3] "Always" may be an overstatement.  But certainly one sees examples of this from the early modern period.  

[4] That said, it's always flattering when someone credits the humanities with that much influence. 

Tuesday, May 28, 2013

Postcolonial Theory and Its Discontents

Ian Hacking, in one of his articles, praises the uniquely French form of the interview as a great way to understand an author's thought.  He's talking about Foucault--and indeed, some of Foucault's interviews are far easier to understand than his books.  In that same spirit--i.e. it maps the lay of the land on which these debates are staged--I liked this interview with Vivek Chibber in Jacobin on his new book "Postcolonial Theory and the Specter of Capital," which criticizes postcolonial theory and urges a return to good old-fashioned Marxism.
The argument goes like this: the universalizing categories associated with Enlightenment thought are only as legitimate as the universalizing tendency of capital. And postcolonial theorists deny that capital has in fact universalized — or more importantly, that it ever could universalize around the globe. Since capitalism has not and cannot universalize, the categories that people like Marx developed for understanding capitalism also cannot be universalized.
What this means for postcolonial theory is that the parts of the globe where the universalization of capital has failed need to generate their own local categories. And more importantly, it means that theories like Marxism, which try to utilize the categories of political economy, are not only wrong, but they’re Eurocentric, and not only Eurocentric, but they’re part of the colonial and imperial drive of the West. And so they’re implicated in imperialism. Again, this is a pretty novel argument on the Left.
This is probably cartoonish--as is, probably, the rest of the interview--but if I were teaching a class, I'd use it as a text for setting out the background arguments.

A much more rigorous response to Chibber's book by Chris Taylor is also great--although far more abstract.
It’s kind of hard to say. Chibber does not expend anything like the same amount of time unpacking—much less justifying—his own Marxist normative and epistemological presuppositions as he does in showing that Guha, Chatterjee, and Chakrabarty are anti-Marxist. In broad outlines, Chibber’s Marxism depends on “a defense of two universalisms, one pertaining to capital and the other to labor.” More specifically, Chibber’s Marxism is bound to the idea that ”the modern epoch is driven by the twin forces of, on the one side, capital’s unrelenting drive to expand, to conquer new markets, and to impose its domination on the laboring classes [the first universalism], and, on the other side, the unceasing struggle by these classes to defend themselves, their well-being, against this onslaught [the second universalism] (208).” So far, nothing objectionable: welcome to the Communist Manifesto. The problem emerges, however, when Chibber attempts moving from the universal to the particular, from the universality of capitalism’s antagonism to the particular social zoning of its enactment. If postcolonial theorists want to hold onto the particularity of the particular, and engage the universal through it, Chibber uses these “two universalisms” to denude the particular, to remove the peculiarity of the particular in order to reduce it to the universal. Methodologically, Chibber’s Marxism is pre-Hegelian. Indeed, his Marxism is the kind of “monochrome formalism” derided by Hegel, an epistemology for which the universal dominates the particular, one through which “the living essence of the matter [is] stripped away or boxed up dead.”
And then later:
In part, I think that “Marxism versus postcolonial theory” is simply running interference for a set of disciplinary battles over methodological and theoretical orientation. The antinomy that Chibber continually establishes is one between a realist sociology (with an investment in abstract structures that prime and cause human action) and hermeneutically inclined fields of anthropology, history, and literary studies. (Don’t mention literary studies to Chibber. He doesn’t seem to like it very much.) In each of Chibber’s chapters, the explanatory triumph of universalist accounts over particularist accounts can be read as the triumph of a certain form of sociological reason over its others.
 
More importantly, I think that Chibber is desperate for the resurgence of a particular kind of Marxism, one that was displaced not by postcolonial theorists but by anticolonial Marxists like Fanon, James, and so on. That’s why he can’t incorporate them into his account of postcolonial theory: they are Marxists who mount critiques of formalist universalisms by keeping close to the particular, by maintaining the tension that obtains between economic structure and lived phenomenology, between structuralist accounts of the world and hermeneutic investigations into worlds. I have no idea why one would wish to return to the days of CP sloganeering. (I can’t be the only one who heard echoes of “black and white, unite and fight!” in his book.) But the desire is there, and it shapes the way he constructs postcolonial theory. Chibber’s fantasy that an anti-Marxist postcolonial theory reigns hegemonic in the academy enables him to maintain the fantasy that the once and future king of Marxism might some day be restored to rule. But, in order to elaborate this fantasy, he needs to transform a tension internal to postcolonial theory (between Marxist accounts of structure and hermeneutic approaches to the particular—which can still be, of course, Marxist) into a struggle exterior to it. 

Saturday, March 30, 2013

The Construction of Disability


Everyone should listen to this latest This American Life episode on what the reporter of the piece, Chana Joffe-Walt, calls the "disability industrial complex."  The simple factoid with which it begins?  The rise in the number of people all over America on disability.  Joffe-Walt starts with this and begins to burrow in deeper.  She finds that disability is a slippery concept: how does it get defined in practice?  When she meets the doctor in Hale County, Alabama, where 1 out of every 4 people is on disability (and he's responsible for many of these diagnoses), he tells her some of the criteria he uses.  One among them is education level.  Why, she wonders, is education level a criterion for disability?  The answer is, of course, that he's trying to think about the kinds of jobs his patients could actually work, and if he estimates that they can't work those jobs successfully, well, then for all practical purposes, they are disabled. (See transcript.)

But Joffe-Walt doesn't stop there.  She wants to explore the whole ecosystem of disability.  So she looks at lawyers.  What role have lawyers played in getting people on disability?  (And lawyers here come off surprisingly well, I think--crass, yes, money-minded, definitely, but also fulfilling a deep need.)  The answer: a lot.  And what of the political economy?  Aside from the problem of inequality--that the number of good jobs that don't require college degrees is steadily decreasing--she also points to federal and state regulations.  States, she finds, have an active interest in moving people off their welfare rolls and onto the federally funded disability program.  This work, naturally, is done by consultants who charge a fee for every successful transfer. 

It's all deeply fascinating stuff that moves fluidly on a number of different levels.  Sometimes Joffe-Walt is down on the ground, talking to people, seeking their opinions, wondering what they think.  At other times, she takes an eagle-eyed view of the scene, talking to economists and regulators.  [The website has a number of interesting graphs that are worth checking out.]  

Sunday, March 17, 2013

The Problem with Critique: On Evgeny Morozov's new book

[Update: Okay, perhaps I should say this upfront.  This is not a review of Morozov's book; rather it's a set of reflections on what we do as STS scholars based on two really outstanding reviews of Morozov's book.  I've made some minor changes to reflect this.] 

Evgeny Morozov's new book, "To Save Everything, Click Here: The Folly of Technological Solutionism," is out.  There are a bunch of reviews out there, and of those I'd suggest two: Tom Slee's on his eponymous website, and Alexis Madrigal's at The Atlantic.  The two are very different in tone and content, yet I think they capture the essence of Morozov's argument (I haven't read the book yet!), both in terms of its strengths and its problems.**

Reflecting on what Slee and Madrigal say about the book, I found myself thinking about STS scholarship in general.  Morozov is particularly against Internet-centric solutionism, which usually ends up using an approach that, as Slee rightly observes, is often an application of "engineering, neuroscience, [and] an understanding of incentives (in the narrowly utilitarian sense)."  But what ends up happening in this criticism of solutionism is that, as both Slee and Madrigal point out, Morozov falls back on tropes that are usually used by conservatives--and worse, by reactionaries. 

And then there is the idea of critique itself.  It was illuminating to read that Morozov is actually inspired by what historians of science have done to their topic: he wants to destroy "the Internet" the same way STS scholars have destroyed "science" as a natural category.  As Madrigal (using Paul Rabinow) rightly points out, this destruction of science has gone all but unnoticed outside the human sciences.  Actual working scientists are hardly aware of it, and if they were, they would just shrug and carry on with their work.  It isn't that science studies hasn't been revolutionary--but it has been revolutionary within the humanities and social sciences.  It's almost as if, freed from the cultural authority that science enjoyed, we, the human sciences--sociology, history, anthropology, literary studies--can now discover, analyze, and understand on our own terms.  But our influence on science itself, and even more importantly, on public life, has been minimal.

And I'm afraid something similar might happen with Morozov.  Some people will read Morozov's book, it might even change some people's minds but Silicon Valley solutionism will carry on as it did before.

The more I think about it, the more I realize that the late Richard Rorty had it right. He consistently upheld the poet, the novelist, and the politician as roles higher than the philosopher's--higher, he said, because they are the ones who expand or change ideas about humanness. The problem with Morozov (and with science studies) is that both are stuck at the level of philosophy, or critique.  Critique is good, but critique is not the same as doing things.  Even Thomas Kuhn's The Structure of Scientific Revolutions, though dated as a science studies text, points out that a scientific paradigm is never discarded unless an alternative is available; old paradigms fall only because new ones appear, and until a new one does appear, an old paradigm can carry on with infinite ad-hoc additions to itself.  Morozov doesn't provide that paradigm; and even if he does, he provides it in the spirit of critique, which may not work because the people he is arguing with are not in the business of critique.  They are in the business of doing things, and while theirs may be a Silicon-Valley-corporate-profit-driven business, it still manages to shift people's ideas and experiences in a way that critique does not.  STS scholarship has the same problem. 

Can critique change things?   Again, it's useful to go back to Rorty, who points out that something certainly came of the attack on the canon in the '60s and '70s.  Attuned to ideas about race, class, and gender, literary theorists went back into the past and re-discovered books that had been neglected because they had not been written by dead white men.  Today, these books, like Zora Neale Hurston's Their Eyes Were Watching God, are no longer just texts in graduate seminars; they are now on school syllabi and increasingly read by school-children.  In that sense, the critique of the canon has indeed borne fruit.  Will critiques like Morozov's and other STS-type critiques yield something similar in the future?  And what will that be?  Only time will tell.

___________________________________________________________________
End-Notes: 

**Slee is good at describing the intellectual moves Morozov makes in his effort to take down Internet Triumphalism.
Morozov undertakes two projects, one successfully and one less so. The first is to provide a framework in which to think about the new inventions that are being sold to us, and the patterns of thought behind them. [...] Morozov identifies a twin-tracked ideology behind the inventions and inventiveness of the digital world. One track is “Internet-centrism” – the practice of “taking a model of how the Internet works and applying it to other endeavours”. Writers have imbued the Internet with “a way of working”; it has a “grain” to which we must adapt; it has a culture, a “way it is meant to be used”, and it comes with a mythology in which iTunes and Wikipedia become models to think about the future of politics, and Zynga is a model for civic engagement (15). The second track is “solutionism”: the recasting of social situations as problems with definite solutions; processes to be optimized (23).
Morozov does a fine job of articulating Internet-centrism and solutionism as two facets of a single Silicon Valley ideology, [...] The common assumptions, shared biases, and individualistic predilictions give a cohesiveness and homogeneity to the new ideas and inventions, actively constructing and shaping the digital environment from which they claim to draw their inspiration. The insistence on “disrupting” our social and environmental lives; the idea that the solutions inspired by and enabled by the Internet mark a clean break from historical patterns, a never-before-seen opportunity – these mean that the only lessons to learn from history are those of previous technological disruptions. The view of society as an institution-free network of autonomous individuals practicing free exchange makes the social sciences, with the exception of economics, irrelevant. What’s left is engineering, neuroscience, an understanding of incentives (in the narrowly utilitarian sense): just right for those whose intellectual predispositions are to algorithms, design, and data structures.
Slee thinks that Morozov's analysis of the "solutionism" that he sees coming from the Valley is less satisfying.
Morozov’s approach to unpicking the hidden assumptions of solutionism, and the unpalatable consequences of its application, is impressive but less successful. In order to avoid a blanket technopessimism he makes two moves. The first is to adopt a broadly social constructionist approach to the world of digital technologies. The Internet does not shape us, it is shaped by the society in which it is growing. He is with Raymond Williams, against Marshall McLuhan. His stance here is blunt: he refuses to see “the Internet” as an agent of change, for good or bad. “The Internet” is not a cause; it does not explain things, it is the thing that needs to be explained. Chapter 2 is titled The Internet Tells Us Nothing (Because It Doesn’t Actually Exist).

The second, more surprising move, is to adopt a critique that was first described in a pejorative sense by Albert Hirschmann. “In his influential book The Rhetoric of Reaction, Hirschmann argued that all progressive reforms usually attract conservative criticisms that build on one of the following three themes: perversity (whereby the proposed intervention only worsens the problem at hand), futility (whereby the intervention yields no results whatsoever), and jeopardy (whereby the intervention threatens to undermine some previous, hard-earned accomplishment)” (6). Morozov does not see himself as a conservative, but instead places himself in the tradition of other thinkers who have stood against programs of organized efficiency; “Jane Jacobs... Michael Oakeshott [and] ... James Scott "
In his Atlantic review, Madrigal does a great close reading of passages of the book to show that Morozov's arguments are often high-ideology.  That is, he often counters the ideological set-pieces that Silicon Valley types routinely use--visions of a future where a certain technology seems to solve all our problems--with set-pieces of his own that paint a completely opposite picture.  And as Madrigal goes on to note, he's really good at it, except that at some point he loses sight of real people doing real things.  This analysis is worth quoting because it is an example of how one can write a fine, principled, rigorous piece of criticism while still basically agreeing with the author on the important things:
Morozov's book is an innovation- and product-centered account of the deployment of technology. It focuses on marketing rhetoric, on the stories Silicon Valley tells about itself. And it refutes these stories with all the withering contempt that a brilliant person can muster over the course of a few years of dedicated reading and writing. But it does not devote any time to the stories the bulk of technology users tell themselves. It relies on wild anecdotes from newspaper accounts as if they were an adequate representation of the user base of these technologies. In fact, the sample is obviously biased by reporters writing about the people who sound the most out there.

"Celebrating quantification in the abstract, away from the context of its use, is a pointless exercise," Morozov writes, and yet he ends up doing excoriating quantification in the abstract. When he does apply his thinking to the specific case of nutrition aids, it is with some serious handwaving. Calories are not an adequate measure of overall nutrition content, he writes, and thinking narrowly about nutritional content is a boon for food companies, and maybe calories aren't even really the problem. All fine and valid ideas, but knowing how many calories you eat is a good starting point for good health, no? This has been well-established by the medical and public-health literature. And, in any case, tracking one's caloric intake is not a search for a "core and stable self." And if your calorie counter doesn't share your data, it could be a private practice. What if you write it in a book as has been done for decades, or in the iPhone's notes, rather than an official app? Is that OK? What about non-tweeting scales, are those anathema as well? Should the ethical concerns Morozov presents really prevent actual human beings from trying to understand the basics of their food intake?

Or take the use of pedometers, gussied up into packages like the Nike Fuel Band, Jawbone Up, or Fitbit. There are literally hundreds of thousands of pedometers and other activity monitors out there in America, but Morozov does not try to investigate how such devices are used. Are the people buying FitBits and Nike Fuel Bands trying to reveal deep inner truths about themselves? Are they sharing every bit and bite with friends? Or are they trying to lose a few pounds in private?

Look at what Amazon can tell you about the market for these devices: people who bought FitBits recently also bought diet books, scales, and multivitamins. While Morozov locates self-tracking "against the modern narcissistic quest for uniqueness and exceptionalism," it strikes me that I've yet to meet someone wearing a fitness tracker who wasn't engaged in that least unique American activity: weight management.

Monday, March 11, 2013

A Theory of Key Points: What tennis can tell us about technological change


Coming into the 2011 US Open having won all but one of the Grand Slam matches he had played that year, Novak Djokovic faced Roger Federer in the semi-finals--the very man who had handed him his only Grand Slam loss of 2011.  And ominously, he lost the first two sets, 6-7(7), 4-6, before rallying to take the next two 6-3, 6-2.  It was now the final set, and Federer, having just broken Djokovic's serve to go up 5-3, was serving at 40-15, with two match-points on his own serve.  Upset at the crowd, which was cheering Federer on wildly, Djokovic seemed out of sorts, angry at himself, perhaps, for being in this position despite playing a flawless third and fourth set.  



[See the video from the first minute.]  The interpretation of what happened next remains a matter of dispute, hotly debated in tennis forums, YouTube comments, and the blogosphere.  Serving from the ad-court, Federer served out wide to Djokovic's forehand.  It was not a bad serve, but Djokovic swung at it hard and smashed it cross-court for a clean winner.  There was shocked silence for a second before cheering erupted.  Djokovic walked to the other side of the court, raised his hands and looked at the crowd.  Appreciate me, he seemed to be saying.  The crowd obliged, even as a bemused Federer stood waiting to serve on the other side of the court. 

It was still match-point.  Federer threw a good serve straight at Djokovic's body, and a rally ensued, which ended, heartbreakingly for Federer, with his shot striking the net-cord and then dropping back on his own side.  Deuce.  Djokovic went on to win the game, breaking Federer in the process.  He then won the next three games as well, taking the final set 7-5 to defeat Federer and reach the final. 

What was going on in Djokovic's mind when he hit that screaming forehand winner off Federer's serve?  Was it hit in anger or was it a calculated risk?  How much did Djokovic's gamesmanship – seeking the crowd’s approval – affect Federer on his next serve?  Tennis fans and analysts continue to debate this.  My own thought, as I was watching the match, was that Djokovic, who can often be peevish and irritable on court, was angry with himself and swung at the ball, more out of pique than anything else.  But the shot went in, and Djokovic used it to rally the crowd to his own side.  On the other side of the net, Federer suffered a dent in his own confidence, and this allowed Djokovic (who is undoubtedly the best and fittest player on the tour today) to put himself back into the match. 

The two players offered contradictory interpretations of the return.  “It’s a risk you have to take,” Djokovic told Mary Joe Fernandez in the on-court interview. “It’s in, you have a second chance. If it’s out, you are gone. So it’s a little bit of gambling.” Federer, on the other hand, was having none of it.  “Confidence, are you kidding me?” he scoffed in his post-match interview. “I never played that way. For me, this is very hard to understand how you can play a shot like that on match point.” Djokovic acknowledged that he needed to "get some energy from the crowd."  “Look, I was a little bit lucky in that moment because he was playing tremendously well with the inside-out forehand throughout the whole match. This is what happens at this level. You know, a couple of points can really decide the winner.” 

That first match point in the Federer-Djokovic semi-final is what both tennis players and tennis analysts refer to as a "key point."  These key points, as Djokovic notes in his post-match interview, are often the ones that "decide the winner."  In the rest of this essay, I hope to show that this idea of "key points" as relevant to the outcome of a tennis match may also be of interest to historians of technology. 

What is a "key point"?  A key point is a point (possibly one among a set) that the players or the analysts (or both) see as having determined the outcome of the match.  Players often sense during the match itself that a point will be key and go all out in their effort to win it, perhaps by hitting extra hard, taking a risk, or running down a ball they would rather have left alone to conserve their energy.  Analysts too, as interested observers of a match, can sense whether a point will be key to the outcome, although unlike the players they have no agency in it. 

But while an upcoming key point can be sensed by the players and the spectators, key points can be definitively identified only after the match is over.  In other words, the identification of key points is contingent on the outcome.  In the Federer-Djokovic match we saw above, the courageous (or reckless) Djokovic return at 15-40 is a key point only because Djokovic won the next four games to win the match.  If Djokovic had lost the next match-point, this point would no longer be talked about as a key point but as a fluke.  Instead the game in which Federer broke Djokovic at 4-3 in the final set would have turned out to be the key to the outcome of the match.  To restate this point, the key to winning a match is to win the key points, but the points that are key to winning a match can only be determined after the match is won (or lost).

It is worth discussing an alternative explanation of match outcomes: that the more talented, or better, player wins the match.  I quoted part of Djokovic's post-match interview above.  When I actually watched the interview, it turned out that the quote had left out a crucial part.  Djokovic actually said: "This is what happens at this level – when two top players meet.   You know, a couple of points can really decide the winner."  [Italics mine.]   The implication is that it is only when players are evenly matched in terms of "talent" that the outcome hinges on a few key points.  When players have wildly different talents, the outcome hinges on, say, the "talent" they possess (which will not be the same) and not on the key points.

How might the key point analytic relate to what historians – especially historians of technology – do to understand the past?  As I see it, the topic of historians of technology is technological change.  Our aim is to understand the past and to answer the question: why do certain things change while others remain the same?  One might see this question as similar to those that tennis analysts pose to themselves: why did player X win against player Y?  Why has player X consistently beaten player Y in their previous 5 matches? 

Somewhat analogous to the two theories to explain the outcome of a tennis match – the "key point" theory vs. the "more talent" theory – one could oversimplify theories about technological change into two kinds.  One theory might be that technological change happens because a certain technology is better at producing certain desirable outcomes (more profits, more efficiency, better living conditions, progress and so on).  This theory would go under the name of "technological determinism" and would be similar to the "more talent" theory of tennis match outcomes.  The other theory would postulate that technological change happens because certain groups of people – I will call them “interest groups” – are able to defeat, or persuade, their opponents through the channels available to them at certain crucial junctures.   This theory would be similar to the "key point" theory.

How would the "key point" theory of technological change help avoid the pitfalls of technological determinism?  As I see it, the main dilemma of any social science is the issue of predictability.  Unlike the natural sciences, which can predict the future behavior of their "actors" (the trajectory of a missile, the motion of the planets, the quantum states of atoms), the social sciences cannot (and with good reason) predict the changes of the future.  They cannot because assemblages of human actors are unpredictable.  They have agency.  Harry Collins has shown how even the behavior of natural scientists – who produce natural science, the most “rational” of all the disciplines – is unpredictable, and is better understood as the application of certain tacit skills than as the brute application of some rule-bound "scientific method." 

The social sciences thus face two different questions.  On the one hand, social scientists need to account for the sense of contingency and unpredictability that their actors often feel while thinking about the future; they also need to account for why their actors feel that certain actions are the key to changing that future.  On the other hand, they (and here I speak of historians in particular) need to account for why the events of the past seem so inevitable, the way they seem to lead to the present so unproblematically.  Clearly, actors in the past who experienced these "same" events did not know how things would turn out.  How can historians account for the inevitability of the past for us and its contingency for the actors who experienced it?

A theory of technological change that looked at "key points" as determining certain (technological/social) outcomes could be one solution to this.  Key points in history would need to have the following characteristics.  First, historical actors themselves should have some dim awareness that something important was happening and that different visions of the future were at stake.  Second, the outcomes of these key points should result in the victory of one set of interest groups over others, thereby setting in motion a certain kind of future.  Third, these key points can be determined only retrospectively, once the outcome is known (which is, after all, how historians work).  Fourth, key points preserve the agency of historical actors.  Finally, key points in history can change as new outcomes arise.  For example, historians now agree that Barry Goldwater's defeat by Lyndon Johnson in the 1964 presidential election, and the subsequent rise of grass-roots conservatism, is a key to understanding American politics today, even if no one seemed to be paying attention to it back then.  It was a key point for certain actors who were mobilizing to achieve their vision of the future, even if their ideological opponents were largely unaware of them.

Tennis key points are heuristics, of course, and they have their limitations, even in sports.  It is much more difficult to locate key points in soccer, for instance, where the notion of discrete points does not exist.  Soccer is, for lack of a better word, continuous, while tennis is more discrete, with precisely demarcated "points."  And even in tennis, determining key points is difficult, because one point seemingly leads to the next: if Djokovic's screaming forehand winner was a key point, what about the points before that one?  What about those that decided the first four sets?  Would it have mattered if Djokovic had won the first set--which he lost narrowly in a tie-breaker (9-7)?

But I do think that determining the key points of a tennis match is like doing history.  Boiling down a match outcome to a series of key points shows us how contingent events are.  And yet, at the end of the day, match outcomes are predictable to some extent: a match between Federer and David Ferrer is far more likely to lead to a Federer victory (although not always).  Those are the kinds of explanations/narratives of technological change that the key point theory would ask us to look for: highly contingent, built out of specific events, but with patterns that are by no means law-like.

Thursday, March 7, 2013

Algorithms and Rape T-shirts

Startled by the title?  You should definitely go read this blog-post.

Long story short: there was a Twitter-storm over some offensive T-shirts sold by a vendor on Amazon.com that seemed to encourage rape ("Keep Calm and Rape A Lot" went one, etc.).  Well--it turns out that the T-shirts don't exist.  Or rather, these T-shirts are made on the fly when someone orders them.  So how did they come to be on Amazon?  This is the fun part--they were generated by algorithms: an algorithm that probably looked at the most popular Google searches and then arranged the search words in a template, made an image out of the result, and put it up on Amazon.  If someone buys the shirt, it gets made (literally printed out) and shipped.
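To make the mechanism concrete, here is a minimal sketch, in Python, of the kind of combinatorial listing generator described above.  Everything in it is my guess at the logic rather than the vendor's actual code: the template, the word lists, and the render_shirt_image stub are all hypothetical.

import itertools

# Hypothetical slogan template and word lists; the speculation above is
# that the words came from popular search queries.  Note the absence of
# any filtering step--presumably how the offensive combinations slipped
# through.
TEMPLATE = "KEEP CALM AND {verb} {tail}"
VERBS = ["DANCE", "READ", "CODE"]
TAILS = ["ON", "A LOT", "SOME MORE"]

def render_shirt_image(slogan):
    # Stand-in for the real image-rendering step (compositing the slogan
    # onto a shirt graphic); here it just returns a file name.
    return slogan.lower().replace(" ", "-") + ".png"

def make_listing(verb, tail):
    # One slogan becomes one listing, sight unseen.
    slogan = TEMPLATE.format(verb=verb, tail=tail)
    return {"title": slogan.title() + " T-Shirt",
            "image": render_shirt_image(slogan),
            "price": 16.99}

# Every combination becomes a live listing; nothing is printed until
# someone actually orders.
catalog = [make_listing(v, t) for v, t in itertools.product(VERBS, TAILS)]

The three-by-three word lists here yield nine listings; a few thousand words would yield millions, with no human ever looking at any of them.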

Exciting, isn't it? 

Henry Farrell on Crooked Timber compares the scenario to the singularity science fiction of Charlie Stross**.  And quotes this great line:
Amazon isn’t a store, not really. Not in any sense that we can regularly think about stores. It’s a strange pulsing network of potential goods, global supply chains, and alien associative algorithms with the skin of a store stretched over it, so we don’t lose our minds.  
I think we're entering a brave new world of content farms and search engine optimization.  Exciting times.

**  Charlie Stross is one of those science fiction writers who gets raved about at Crooked Timber but whose writing style just doesn't work for me.  I did get through three-and-a-half of his Merchant Princes books before giving up.  Even worse was Accelerando (ebook available for free), his singularity book, which I gave up on after a few pages--again, the writing was just not to my taste, which meant that all the rich ideas in there were inaccessible to me.  But I still hope to read him sometime.

Saturday, March 2, 2013

"Good Smart" and "bad Smart": What Smart Technologies Do and Don't

Everyone should read Evgeny Morozov's latest op-ed in the Wall Street Journal (via Alan Jacobs).  Morozov elaborates on the latest smart social technologies and gadgets: technologies that, by virtue of cheap hardware, AI or crowd-sourced pattern recognition, and the possibility of making your activity visible to your friends and acquaintances, serve to change your behavior in some personally or socially optimal way.  Examples: going regularly to the gym, eating healthier foods, or even (his chosen example) recycling the waste generated by a household.

Morozov, as one might expect, is not happy with this.  He suggests an analytic distinction between "good smart" and "bad smart" technologies, which I think is really useful in thinking about the recent spate of products that use the ubiquitous-computing paradigm for social ends.
How can we avoid completely surrendering to the new technology? The key is learning to differentiate between "good smart" and "bad smart."

Devices that are "good smart" leave us in complete control of the situation and seek to enhance our decision-making by providing more information. For example: An Internet-jacked kettle that alerts us when the national power grid is overloaded (a prototype has been developed by U.K. engineer Chris Adams) doesn't prevent us from boiling yet another cup of tea, but it does add an extra ethical dimension to that choice. Likewise, a grocery cart that can scan the bar codes of products we put into it, informing us of their nutritional benefits and country of origin, enhances—rather than impoverishes—our autonomy (a prototype has been developed by a group of designers at the Open University, also in the U.K.).

Technologies that are "bad smart," by contrast, make certain choices and behaviors impossible. Smart gadgets in the latest generation of cars—breathalyzers that can check if we are sober, steering sensors that verify if we are drowsy, facial recognition technologies that confirm we are who we say we are—seek to limit, not to expand, what we can do. This may be an acceptable price to pay in situations where lives are at stake, such as driving, but we must resist any attempt to universalize this logic. The "smart bench"—an art project by designers JooYoun Paek and David Jimison that aims to illustrate the dangers of living in a city that is too smart—cleverly makes this point. Equipped with a timer and sensors, the bench starts tilting after a set time, creating an incline that eventually dumps its occupant. This might appeal to some American mayors, but it is the kind of smart technology that degrades the culture of urbanism—and our dignity.
Image taken from here.  It shows the wired trash bin with the camera attached to its lid. 

What about BinCam, the product he opens his essay with?  BinCam is a trash bin whose lid has a camera attached.  The camera takes a picture of the bin's contents when the lid is shut; the picture is uploaded to Amazon Mechanical Turk, where a Turker determines whether you've been putting recyclables into your trash; and the photo, along with the Turker's assessment, is then published to the user's Facebook or Twitter profile.  The idea is that peer pressure, and perhaps some mild social censure, will make you better behaved--"better" in the sense of being socially and ecologically optimal.
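Schematically, the loop looks something like the following sketch (in Python).  All the function names here are hypothetical stand-ins of my own; neither BinCam's actual implementation nor Amazon's Mechanical Turk API is anywhere near this simple.

def capture_photo():
    # Stand-in for the camera in the lid, which fires when the lid shuts.
    return "bin-contents.jpg"

def ask_turker(photo, question):
    # Stand-in for posting a Human Intelligence Task to Mechanical Turk
    # and waiting for a worker's answer about the photo.
    return "recyclables found in the trash"

def post_to_feed(user, photo, verdict):
    # Stand-in for publishing the photo and verdict to the user's
    # Facebook or Twitter profile.
    print("Posting to %s's feed: %s (%s)" % (user, photo, verdict))

def on_lid_closed(user):
    photo = capture_photo()
    verdict = ask_turker(photo, "Does this trash contain recyclables?")
    post_to_feed(user, photo, verdict)  # peer pressure does the rest

on_lid_closed("some_user")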


You would think BinCam falls into the "good smart" category, but no; Morozov says that it falls "somewhere between good smart and bad smart."
The bin doesn't force us to recycle, but by appealing to our base instincts—Must earn gold bars and rewards! Must compete with other households! Must win and impress friends!—it fails to treat us as autonomous human beings, capable of weighing the options by ourselves. It allows the Mechanical Turk or Facebook to do our thinking for us.
I think Morozov's concerns about surveillance are really useful.  But he lost me with this paragraph.  Since when did it become a "base instinct" to want to win and impress friends?  If someone buys BinCam with the intention of helping him or her adhere to certain recycling conventions, how is it different from someone who uses her friends to police her diet?  I think the key to understanding the paragraph is the reference to Facebook and Mechanical Turk; those are the two technologies that make Morozov uncomfortable.  There is also the fact that the behavior in question here is useful less individually than collectively.  Whether I recycle my trash or not has fewer consequences for me than it does for the society I live in (unlike, say, dieting or exercise, although one might argue that even these two activities have a "social" dimension: they help bring down the high cost of health care).  But recycling has another aspect too, more so than dieting: it is a behavior whose template is created by experts.  And it is precisely this--aligning my behavior to a template decreed by experts and monitored by my friends--that is, for Morozov, an unacceptable loss of autonomy.

I am not sure I buy this.  And it highlights, I think, one of the interesting points of similarity between critics like Morozov and Nicholas Carr: the normative use of the Cartesian subject.  For Carr, humans have a deep need for solitude; in fact, solitary reflection (exemplified by deep reading) is what makes us most deeply human.  And the Web, by its very constitution, forces us away from this; it forces us into multi-tasking, into skimming, and into a form of constant sociality through Facebook and Twitter.

Morozov's concerns are different from, and I think far more politically salient than, Carr's.  But for him too, the most deeply human thing about us is our freedom and our autonomy--not just from state surveillance (a form of "negative liberty"), but also from certain forms of "base" sociality.  And so, while I find the "good smart"/"bad smart" distinction really useful, I suspect the devil is in the details.

Tuesday, February 26, 2013

Paragraph of the day!


Image Credit: PC Museum
Complementers, however, always run the risk that Microsoft will incorporate the functions contained in their software into its own products, either by internal development or by acquiring the technology in a takeover.  Merger talks between Novell and Microsoft in 1990 fell through.  Microsoft subsequently introduced networking capabilities into its operating systems in the early 1990s, thereby entering into intense competition with Novell.  On the other hand, in the case of Norton Utilities, Microsoft has shown the tolerance of an elephant for the tikka bird on its back, allowing Peter Norton Computing "deep into the innards of the operating system" and fostering "tremendous personal relationships between their development teams."  This is probably because Norton Utilities complement Microsoft's operating systems, adding to their value--by providing anti-virus facilities, for example--in a way that Microsoft's relatively bureaucratic development processes would find difficult or uneconomical.

--Martin Campbell-Kelly, From Airline Reservations to Sonic the Hedgehog: A History of the Software Industry (pg 260).

Friday, February 22, 2013

Integrated Circuits, City Planning and Book Printing

CC Image courtesy of james4765 on Flickr.

Modern integrated circuits, when examined under a microscope, look like the plan of a large, futuristic metropolis. The analogy with architectural design or city planning is appropriate when describing chip design and layout. Chips manage the flow of power, signals and heat just as cities handle the flow of people, goods, and energy. A more illuminating analogy is with printing, especially printing by photographic methods. Modern integrated circuits are inexpensive for the same reason that a paperback book is inexpensive--the material is cheap and they can be mass produced. They store a lot of information in a small volume just as microfilm does. Historically, the relationship between printing, photography and microelectronics has been a close one.
--Paul Ceruzzi, A History of Modern Computing

Saturday, February 16, 2013

Policy vs. Politics: A Close Reading of Timothy Geithner's Tightrope-Walking Exit Interview as a Case of "Boundary Work"

In Science and Technology Studies (STS), the sociologist Thomas Gieryn has a concept that he calls boundary work.  Boundary work is the work of defining what constitutes science, of deciding on the boundary between science and pseudo-science (or non-science), or that between science and politics, or between science and religion, among others.  This work is done not in laboratories but in public debates, and often not by scientists, but by others: "journalists, bureaucrats and lawyers" (not to mention philosophers!).   In other words, boundary work is about the consumption and circulation of science.  Gieryn's is explicitly a constructivist stance.  He takes for granted that the boundary between science and non-science is made rather than found; is pragmatic rather than conceptual; and takes a good deal of work to police and maintain--a stance that is the foundation of STS.

In this blog-post, I am interested in a similar kind of boundary work w.r.t. "politics" that happens in American public culture (which builds on the pre-existing boundary work w.r.t. science).  The boundary work in public life then consists of establishing that something--a course of action, a guiding principle--is "policy" and not just "politics."  "Policy," here, is seen as the opposite of politics: a rational technocratic exercise where a group of skilled people sit down and solve problems.   Policy-making obviously gains its legitimacy from its similarity to doing science and from science's cultural authority, in particular, its image as rational, skeptical, and as being about facts rather than values. Since it is governments that do policy, and governments are almost always seen as political,  how might one separate policy from politics--and that too, using the cultural authority of science?  I would say that it is done by seeing policy-making as "solving problems" rather than, say, resolving conflicts between opposing interest-groups.

An almost perfect illustration of this is Timothy Geithner's "exit" interview with the New Republic's Liaquat Ahamed.  Like everyone else, I was excited to read it.  What would Geithner say about the financial crisis, which he inherited from Hank Paulson (although he was already a player in it as the head of the New York Fed), and which he--at least in some sense--brought to a close?  That he chose not to talk about the juicy times of his tenure will surprise no one [1].  What was interesting about the interview, though, was the way he used the occasion to perform boundary work: his own work, he insisted, was about policy-making, not "politics"; his job was to "fix the problem" at hand (the financial crisis, that is).  He was lucky, he claims, that President Obama is a policy-maker President, one who sees his advisers as offering policy advice that is not based on political expediency.

In the next few paragraphs, I'm going to do a Wittgensteinian analysis of how Geithner uses the terms "policy" and "politics" to show the kind of boundary work that is being done.  In response to a question from Ahamed on whether he "saved the economy, [and] lost the public," Geithner says:
I think that’s a great question. I think it’s hard for any of us to know. My own view was that it was going to be very hard, if not impossible to design a financial rescue that was going to be effective in protecting all the innocent victims hit by the crisis and still satisfy the completely understandable public desire for justice and accountability. Those things were in direct and tragic tension, never resolvable at that time. I always felt that the only preoccupation for people in policy at the time should be to fix the problem as quickly as we could, as effectively as we could, and only after that would other things be possible, including how to figure out not just how to clean up the mess, but reform the financial system.  [my emphasis.]
Public desire for justice and accountability is "understandable," says Geithner, but "the only preoccupation for people in policy at the time should be to fix the problem as quickly as we could."  Immediately, a distinction is made between "people in policy" whose task is seen as "fix[ing] problem[s]," and the "public" that wants "accountability."

In answering the next question, which was about his working relationship with President Obama, Geithner makes his point even more explicitly.  The President, he says approvingly, is interested in policy, not politics.
I always had the sense that he was going to put policy ahead of politics. And he always made it clear to people working for him that our job was to inform him of what the relative merits were of the policy choices, not to try to do the politics for him, or to limit our prescriptions by what was politically expedient. I think the country was very lucky because as you know, there have been lots of other examples in other countries, and certainly in our history, where the leaders of the country were much more reluctant to put politics aside and take the sting of a more decisive resolution. I’ve talked about this before, but I think that what really distinguishes countries in crisis are those that are lucky enough to have political leaders who are willing to take the brutal political cost of doing what’s necessary and those countries that waited and let the populist fires burn, or decided they were going to try to teach people a lesson and put populism ahead of other things. Those countries had far worse experiences in crisis than we had.    [my emphasis]. 
Politics here is identified with the messy, grubby business of getting votes.  And political costs are just that: electoral defeats.  Good "policy" can lead to electoral defeats, and President Obama, Geithner says, did not put politics first (meaning electoral victory), unlike a lot of other world leaders.   Or, to put it in a rather crude pictorial version:



The interview moves on to other things.  When asked what was the most frustrating part of being the Treasury Secretary presiding over the financial crisis, Geithner says:
The most frustrating part of this work, but in some ways it’s the most consequential, is how effective you can be in relaxing the political constraints that exist on policy. You can see that most compellingly now in the fiscal debate. Paulson before us and the President were very successful during the crisis in getting a very substantial amount of essential authority essential to resolving the crisis. But it has been very hard since then to get out of the American political system more room for maneuver both on near-term support for the economy, as well as reforms that would lock in a sustainable fiscal path. That is the most frustrating thing, to get the political system to embrace better policies for the country.    [my emphasis]. 

On close analysis, we see that the politics/policy distinction is now no longer about winning elections versus crafting solutions.  The "fiscal debate" is about whether the Federal Government needs to boost spending in order to counteract the effects of the recession and boost employment.  But funding appropriations are done through Congress, which has no stomach for more spending (preferring instead to concentrate on deficits).  Politics and policy now become separated by the different arms of government.  Congress, playing "politics"--that is, concerned with re-election--denies the President the policy levers (fiscal stimulus, monetary expansion) that he could use to combat the recession.



This is only bolstered when Geithner includes President Bush's team as part of an all-inclusive "we" who dealt with the crisis as policy-makers should.  As opposed to Congress, that is, which is clearly not part of the "we."  This also helps Geithner use the word "bipartisan"--which carries great currency in American political debate today.
I really believe that given the choices we had at the time, with the authority we had and the options available to us, that we did a very effective job. And by “we,” I mean in many ways this was a bipartisan response across two administrations that will look good against the comparison of what we know about other crises of this magnitude.
Then things get interesting.  Ahamed asks Geithner what his views on austerity are; was he, Geithner, responsible for the shift in the President's focus from fiscal stimulus to debt reduction?
TG: It was definitely my view, and it still is, that our ability to get more growth-promoting policies out of the Congress is contingent on our ability to put in place long-term fiscal reforms that restore sustainability. That’s true for lots of different reasons. It’s true not just because, without action, the natural dynamics of demography and healthcare costs would crowd out a whole range of investments over time. But it’s also true the average person, facing deficits this large, is just uneasy supporting substantial additional growth-relevant fiscal policy without that framework. So that’s the main reason why I was a supporter of trying to make a more credible commitment to some gradually phased-in, sensibly designed restraints over time. I think without that, there was no way we were going to be able to make the case for a big long-term infrastructure program.  [my emphasis.]
And then, suddenly, everything gets unscrambled (on close reading, that is).  The President's policy-making--his ability to bring the economy back from high unemployment and low growth--is seen as constrained, not just because Congress is a grubby political machine intent on re-election, but because Congress needs to be convinced of "our ability to put in place long-term fiscal reforms that restore sustainability."  And then that staple of politics proper is brought in: "the average person."  "It’s also true," says Geithner, that "the average person, facing deficits this large, is just uneasy supporting substantial additional growth-relevant fiscal policy without that framework."  Who is this average person?  Does he stand for a member of Congress?  Or does he stand for the public that each Congressman acts as a spokesman for?

It's unclear, which, I think, is the point.  And note that the word "politics" doesn't appear at all in this quote, precisely (I think) because Geithner is talking about politics--not politics as grubby vote-getting, but politics as a mechanism for reconciling different sets of values and priorities.  And while that idea is present in his remarks ("without that ... no way we were ... able to make the case for"), he does not call it politics.  With his references to the "average person," Geithner brings in the imagined public in whose name the US Government acts.  He suggests that policies of fiscal expansion need a stamp of approval from this public, and that setting out a plan for the deficit is one way of securing it.  The refusal to use the word "politics" to describe this, and the use of the imagined public of "average person(s)" to legitimate his own policy, suggests, again, that this is an instance of boundary work.

Why is this at all important?  As I suggested before, in public life it is often easier to argue over facts than over values.  The use of numbers--risk analysis, what have you--in public life, as Theodore Porter suggests, accomplishes precisely this purpose.  The US Army Corps of Engineers used the language of cost-benefit analysis (of public projects) to make their decisions seem more rational, and less political.  And while it has its advantages, and permits a certain kind of discussion, the use of mathematical cost-benefit analysis doesn't make a political decision less political; it simply pushes the values into the background.

Sometimes--many times--it's useful to put values in the background and talk about facts and methods.  At other times--and the financial crisis, it seems to me, is a key candidate for this--it's probably better to bring the values into the foreground, alongside the facts and the methods.  And that's why Tim Geithner's boundary work is important.


------

[1]  The interview itself was guarded.  Geithner didn't say much about the conflicts within the Administration over the appropriate response to the Great Recession (fiscal stimulus vs. austerity?  bank nationalization vs. recapitalization?  and so on).  He seemed to indicate that the route the Administration had taken--TARP, recapitalization, Fed lending, stress tests--had worked pretty well, under the circumstances.  And he ended on this chilling (but quite appropriate, I should think, for a regulator) note about the inevitability of financial crises:
I think there’s something about human beings, and something about financial systems, where people tend to give less weight to the risk of an extreme event. So after a long period of relative stability, like we had in the U.S. and the world economy in the decade before this, that leads people to take on more risk than they should, borrow more than they should, and that’s what creates the vulnerability to crisis.

The things we did in this crisis, and certainly the things we did in financial reform, will significantly reduce the probability and the intensity of crises for a long period of time. Because there’s much more capital in the financial system. We did a pretty brutal restructuring of our financial system as a part of the crisis response. I know that markets over time will find their way around those things, and memories will fade. But if we’re lucky that will take a long time.   
And Geithner appears surprised at the finance community's rather fierce response to the teeny-weeny bit of class-war rhetoric that President Obama used in his public addresses.
I’m biased but I felt that in the basic strategy that the President embraced and that we put into effect, we did something that was incredibly effective for the broad interest of the economy and the financial system. I feel the President’s rhetoric over that period of time was very moderate relative to the populist rage sweeping across the country. And I never quite understood why the financial community took such offense at what was such moderate rhetoric relative to what we have seen in other periods in history.