Monday, November 21, 2011
Sunday, November 6, 2011
I wrote a long post on how the disabling of Google's "Note in Reader" bookmarklet (which went hand-in-hand with disabling Google Reader's sharing features) had taken the wind out of my sails, turning it into a lengthy point about how designed products end up working out for reasons other than what their designers envisioned.
That was an academic point. But what next? I am really confused. On the one hand, there is the Greasemonkey script that re-installs the sharing features in Google Reader; on the other is Tumblr which I have started to use as a sort of scrapbook of excerpts of interesting articles that I have read.
The Greasemonkey script is great - and it sort of suggests that Google has left the infrastructure for sharing in Reader intact while disabling the outer manifestations of it (the "Share" and "Share with note" buttons, the list of followed people, etc.)***. But how long will this infrastructure stay the way it is? Google Buzz is going to be phased out soon, and many people used Buzz to see shared items rather than going to Google Reader for them - so will this feature ever be the way it was? And on and on - the uncertainty seems too much.
Which is why I've pretty much decided to take the leap into using Tumblr. Of course, this means that I have to start building my network from scratch - but at least there's no uncertainty. And one of the great things about Tumblr - as I've discovered from using it over the past few days - is that it really makes you read the piece in question. That's because Tumblr is set up as a kind of commonplace book, which means you need to pick out a paragraph or so from the piece that really intrigues you; I've found myself reading pieces with that in mind and it's a big help. Plus the commentary format - where you take an extract from the piece and offer commentary on it - is great for expressing quick thoughts on the piece. In Google Reader, I would often share a piece without reading it with great care; it only needed to pass a certain minimum standard of "interestingness," and Reader made sharing much easier than Tumblr does. But all in all, there may be some advantages to the Tumblr model. (Again, the whole thing goes to show how technical systems structure our practice in interesting ways.)
That is all. Tumblr link here.
*** This seems like a true instance of the "Don't be evil" motto that Googlers spout. It's also an interesting facet of information systems in general. You can leave the infrastructure intact while still disabling a feature, thus allowing users to restore it for themselves if they are enterprising enough. And only some users need to be enterprising (the guys who wrote the Greasemonkey script, in this case); the rest of us can reap the benefits if the workaround is shared widely enough (and it seems to be, in this case).
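To make the point concrete, here is a minimal sketch of the kind of logic such a userscript can rely on. This is purely illustrative - the function name, element shape, and selector below are my own inventions, not the actual Greasemonkey script - but it captures the idea: if a feature was disabled only by hiding its UI, while the backend stayed intact, then restoring it is mostly a matter of un-hiding elements.

```javascript
// Hypothetical sketch: a feature "disabled" via inline display:none can be
// restored by un-hiding the relevant elements. Selector and element shape
// are assumptions for illustration, not Reader's real DOM.
function revealHidden(elements) {
  let revealed = 0;
  for (const el of elements) {
    if (el.style.display === "none") {
      el.style.display = ""; // undo the inline display:none
      revealed += 1;
    }
  }
  return revealed;
}

// In a real userscript this would run over live DOM nodes, e.g.:
// revealHidden(document.querySelectorAll(".share-button-hidden"));
```

The interesting design fact is that none of this would work if the server-side sharing endpoints had been removed; the script only has to restore the front-end.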
Wednesday, November 2, 2011
The unintended consequences of technical tools: the demise of sharing in Google Reader and why I will miss it
All of this is a long way to say: I am bummed because Google Reader has disabled its "Sharing" features. I'm still going to continue to use Reader: I still like its interface, and I love its "tagging" feature and its keyboard shortcuts. I'll miss the "Sharing" aspects because I'd built up a group of like-minded contacts (most of whom I'd never met or talked to - my Reader buddies, so to speak) whose links I often found most interesting and most relevant to my own interests. But even more than that, the "Sharing" feature of Google Reader allowed me to do what I'd always wanted to do: have one receptacle for everything on the Web that I found interesting, which I could then search through as and when I wanted to. I don't think the Reader designers imagined that their "Sharing" feature could have other uses, but that's what hurts most right now. I had a workflow which I thought I had perfected, and it was all through Reader - and now I have to start all over again and come up with another workflow. In a way, this goes to show the danger of relying too heavily on one particular technology.
Let me start from the beginning. The discovery of RSS feeds was the best thing that happened to my web reading habits. Back then I read randomly, and one had to actually go to a website to read it - which was difficult to keep up with; I used bookmarks to remember all the sites I found interesting, yet it never quite worked out. Then came RSS feeds and suddenly I didn't have to go to a website anymore; instead the website came to me. I experimented with a number of desktop RSS readers, then shifted to Bloglines (remember Bloglines??!!!) for a long time, before finally taking the plunge into Google Reader. It was great - I loved it. I could tag items that I thought were interesting and worth storing, and it had an excellent "Search" feature, which meant that I could look through all my feeds using keywords. This is most useful when you are blogging or writing: you can look through all your read-and-tagged articles and find the ones that most relate to the argument you are making.
There was one problem. But to explain that, I have to talk about my long-standing obsession with information. One of the things about reading on the Web is that you get access to lots of material that you wouldn't have before. And you don't actually get to read most of these pieces: some of them you skim; some you skim, find interesting, and then read deeply; some you don't even skim, but you think they could be potentially useful at a later time. Which means that you want to store everything that you think is useful - even remotely. And the great thing about the internet is that storing things is inexpensive and easy, although there are infinite ways to do it. I tried a variety of options: Google Bookmarks, Evernote, Delicious - but it never worked. There were just too many options, and working across applications was tiring, inefficient and, frankly, not very useful. Storing links and text is only useful if you re-read them and use them; and I found that I was rarely going back to what I had stored.
So I settled on Google Reader's tagging: I would tag anything I thought remotely useful (the story of what tags I came to use is something I'll tell another time). Thus I had a nice folder of items that I thought might come in handy later.
But there were other articles that I would read outside Reader (usually links that came from the blogs that I read in Reader). And I wanted to store these too - but there was never a way to put them into Reader. So for the longest time, I had two places where I stored interesting things I'd read: in Google Reader and in Evernote - and needless to say, it got pretty unwieldy.
Then Google introduced the "Note in Reader" bookmarklet, which was designed so that you could share interesting things with your Reader friends even if what you were reading did not actually come through Reader.
That was the breakthrough. But not in the way the Google designers imagined. True, I used the bookmarklet to share more links. But once I clicked on "Note in Reader," I had the option of not just sharing the article, but also tagging it so that it would be accessible from within Google Reader. So there were a lot of pieces that I used the bookmarklet to just save and tag, and not necessarily to share with others. I had my one application to store everything on the Web that I found interesting: it was Google Reader. At some point, I stopped using Evernote.
Which is why the loss of the Sharing functions in Google Reader is depressing. Now when I click on "Note in Reader" here is what I get:
So forget sharing - I can't even tag a web-page and move it into my Google Reader folders. Which means I have to start my search for one storage application all over again - or content myself with two. I'm hoping that even as Google has disabled Sharing through Reader, they will at least allow us to import web-content into Reader folders. But we'll see.
As designers learn all the time, when you take away a feature, you take away practices that users have come to rely on - practices that may not be what you intended. It's a good lesson to learn as a user, and hopefully it's something I'll remember when I do any kind of design work.
Saturday, September 3, 2011
Although there still are no rules regarding the performance of tennis court surfaces — the game can be played on anything — in January 2008 the ITF began regulating the speed of courts used in Davis Cup tournaments. Because the host country gets to choose the tournament location, the new rules prevent the use of extremely slow or fast surfaces. So far, only one court — a clay surface in Croatia, used in a 2008 match against Brazil — has violated the regulations.

But Lehrer - a committed enthusiast of neuroscience - goes too far in reinforcing the distinction between propositional and tacit knowledge. He points out - correctly - that the laws of tennis are ultimately the laws of physics, but the speed of the game means that no player actually computes the trajectory of the ball using Newtonian mechanics while playing. Instead the knowledge is displayed tacitly: in the way their bodies move, in the way they adjust their footwork and their racket motion, and so on. In Michael Polanyi's terms, this is tacit knowledge - knowledge that is expressed in action but is hard to express propositionally. Lehrer chooses to illustrate this with the following example, which I found a little silly:
I met with the Caltech tennis team, arguably the smartest collegiate athletes in the country. (The average grade point average on the men's team is 3.73, which is one of the highest team GPAs in the NCAA. And these players are taking Caltech classes.) Despite this intellectual pedigree, the Caltech tennis players have struggled to win games: Last season, the men's tennis team went 1-16. Although many of the players can rattle off abstruse physics equations with ease, they all insisted that their textbook knowledge was not an advantage. "To be honest, it doesn't help at all," says Devashish Joshi, a freshman on the team. "I never think about science while playing."

Well, okay, if you say so. But:
"The top-ranked guys are all intuitive physicists," Hofmann says. "They know how the ball will bounce even if they can't explain why. This is what allows them to change their strategy based on the surface."

I don't want to downplay how talented the top tennis players are. But this makes it seem as though the only way of bringing propositional knowledge of physics into the game is if the players start calculating in their heads. If you look at the role of knowledge in tennis as simply something that gets displayed on courts, then, sure, there's only tacit knowledge. But if you look at the world of tennis as a network (channeling Edwin Hutchins and Bruno Latour), then the propositional knowledge of physics comes into it at a number of different points:
Racket technology: There are actual physicists and materials scientists who work on rackets. They design rackets for different types of playing styles and different types of surfaces. This is propositional knowledge encoded into artifacts (in this case, the tennis racket), which the players then use.
Coaching: Coaches help get a lot of propositional knowledge onto the courts. What's a "good" service action? How much back-swing should you have while playing a stroke? Is a long back-swing bad for grass? A lot of this is backed up by actually thinking about physics, and it gets incorporated into a player's game. Novak Djokovic recently improved his serve by making a "minor" adjustment - but this may have been key to his recent success because he is able to get some free points on his serve (69 more aces, according to the article).
Playing strategy: Recordings of previous matches are now easily available, and they allow players and their coaches to construct what is called a game-plan. Game-plans are products of conscious reasoning and pattern recognition about an opponent's weaknesses and strengths - propositional knowledge and judgement at its best. Of course, this is, in Lucy Suchman's famous formulation, like planning how to take your canoe into the rapids: useful only as a resource once you start playing. But it is propositional knowledge nevertheless, and it is often the product of a whole network of people thinking - coaches, practice partners, managers, consultants, etc. - as well as technologies like statistics, video recordings and such.
My point is that the propositional/tacit knowledge distinction is very useful. But there are ways in which the two interact that are only visible at the level of networks. In other words, knowledge, both propositional and tacit, is distributed - and while the best tennis players are definitely intuitive physicists, they are also beneficiaries of a lot of careful, propositional thinking, which is encoded into the artifacts they use and the practices of coaching and training in their day-to-day lives.
Friday, September 2, 2011
Perhaps the most surprising thing I learnt was that Sholay had an ending where the Thakur kills Gabbar - savagely - and then breaks down into tears, because revenge doesn't really get him back what he has lost. Of course, since this was India in 1975 - and, more to the point, the Emergency was in effect - the Censor Board found this piece of vigilante justice disturbing and insisted that the Sippys change the ending: the police would step in and stop the Thakur from killing Gabbar. Already over budget, and desperate to release the film, the Sippys agreed.
It's Chopra's final lines that chilled me:
But apparently, somewhere in this world, rumoured to be impossible to trace now, a few prints survive of the original untouched film, with all its final bleakness intact. Occasionally, videotapes and DVDs of this original film surface, copied from copies of copies. Those who have seen these nth-generation copies say that despite the fuzziness and the bad sound, the Thakur's hopeless weeping is chilling, and it becomes clear to the viewer that all the visceral attractions of power and violence lead inevitably to this agony, this loss.

I wondered: had ALL copies disappeared? Why hadn't the Sippys kept a few prints? What was wrong with people?
But a few YouTube searches helped. Here, then, are the original excised scenes: the ending, and the killing of Ahmed, which was also cut (not by the censors but by Sippy himself).
Wednesday, August 31, 2011
Should we be worried about this? My sense is: not really. In my opinion, the main problem with Carr's analysis is that he relies on cognitive and neurological studies and hopes to prove that skimming is bad for us because it changes the neural structure of the brain. This seems to me both wrong-headed and unnecessary. Certainly, studying what happens inside the head when we read (or write) is interesting for its own sake. But by understanding reading as a mental or neurological process, we inevitably strip the activity of reading of its context and its entanglement with our day-to-day life. It makes it seem as though reading and writing are activities that occur by themselves, independently, which is clearly not the case. In this post, I want to try and map out what reading will look like in the age of the internet. Obviously, these are mostly informed speculations -- but I hope to show that by looking at reading in all its different contexts, we will come to realize that the future of reading is not as dire as some people imagine it to be.
Here is a first stab at delineating the contexts of reading. Reading and writing occur in a variety of places. We read at home, we read at work, we read when we travel, we read on the way to work (in buses and trains and airplanes). Reading can be classified in terms of what we read for, or in terms of what it means to us. We read for pleasure, we read for edification, we read for work. We read so that we can produce content of our own (scholarly papers, articles, reports, presentations, blog-posts, etc.). It can be thought of in terms of what we read. We read genre fiction, we read literary fiction, we read non-fiction, we read updates on Facebook, Twitter and email. We read newspapers, magazine articles and blog-posts. We read scholarly papers, white papers and consultant reports. Or we could think of it in terms of the media (or the instruments) we use: the web browser, the e-book reader, the codex, the pdf file-reader, the mobile browser, or the RSS feed-reader. Finally, we could think about it in terms of time: reading in our leisure time, reading during work hours, reading on weekends and so on. Note that these are meant to be overlapping categories (and are certainly not exhaustive). We may read for edification during our leisure time, or we may read fiction for pleasure in our leisure time. We may read fiction or non-fiction for work. We read email for work as well as for our personal lives. We use the web for work as well as for our day-to-day shopping.
These contexts of reading are social structures. They structure and shape how reading and writing get done. Moreover, even a cursory examination of these contexts of reading will show that most of them predate the internet. Fears of the demise of deep reading, like Carr's, often take for granted that the only form of reading that matters is the reading of literary texts. Other forms of reading and writing in other contexts (technical reading, reading for work, scholarly writing etc.), are often not even mentioned. Which is why once we start to look at contemporary reading and writing practices as embedded in structured contexts, and as more than just the reading of literary fiction, the idea that in the age of the internet we don't read as deeply as we used to stops making sense. Instead we can start thinking about how each of these structuring frameworks have changed in the recent past and how that has transformed the practice of reading and writing.
Let's take the first and most important category: reading for pleasure, clearly the most common form of reading. When most lay-people say that they "like to read," this is what they mean: that they like to come home from work and settle in with a good novel, or that they read a novel while commuting to and fro from work every day, or that they usually go to the park on weekends and read for an hour or two. It need not be novels, of course: it could be biographies, self-help books, or books on history and politics. It could also be literary fiction although this is probably rare – genre novels like romances, science fiction and thrillers are far more likely to be read for leisure (and pleasure!) than Tolstoy or Proust. How will practices of leisure reading change with the Internet?
The amount of leisure reading that we do is already substantially less than it was before, thanks to television. Will the Web reduce it further? On the contrary, the Web might make us read more, rather than less. However, it is likely that the reading of magazine articles, blog-posts and other forms of web-content will occupy a larger share of our reading time than it did before. We are also more likely to skim this web-content than deep-read it (more on this below). However, the core of our leisure reading - novels and some forms of non-fiction - will be read the way it is now: from start to finish, in one continuous sweep, or in other words, "deep reading." Other, more peripheral changes are likely to occur, though. Perhaps we may increasingly start to use e-readers like the Kindle, rather than the codex, for our leisure reading. Perhaps the length of printed books (especially non-fiction) might be substantially reduced. Typically, the size of a book (200 pages, 300 pages, etc.) is determined by a publisher's economies of scale. If e-readers replace printed books (a very big "if"), and the cost of publishing plummets, then the length of a book could become less standardized. To summarize: some of our novel-reading will be replaced by the skimming of shorter-form web content, but most of our leisure reading will still involve long-form content, and we will consume this content by "deep reading."
However, the share of newspapers, magazines, and blog-posts in our leisure reading is likely to increase. And day-to-day practices around the reading of newspapers and magazines will, and have, almost certainly changed with the rise of the World Wide Web. It is here – in the context of reading newspapers and magazines – that it seems to me that changes in long-established practices are occurring and where deep reading may well be on the decline. But it's not clear if this is a cause for worry. Let's examine this in more detail.
When newspapers were available only in print, the standard practice was to subscribe to one or two, usually locally available, newspapers and read them as fully as possible every day. A standard newspaper tries to cover all possible topics in the limited space available: national politics, international news, sports, art and culture, and so on. The printed daily newspaper was also sold as one single commodity, with all the articles bundled together. The assumption was that the reader would read only a newspaper or two in a day, and the idea was therefore to provide a little bit of everything. The combination of these two factors - the reader's ability to read only one or two newspapers (or magazines) in a day (or month), and the publication's tendency to cover as many fields as possible - meant that the reading practices around printed periodicals emphasized "breadth."
With the Web, it is no longer necessary that the day's newspaper or the monthly issue of the magazine be one packaged entity; instead individual articles or sections can be sold separately. As newspapers and magazines move to the Web, readers too do not have to limit themselves to only one newspaper or magazine. This suggests that an interested person could opt to read, say, only the economics section of many different magazines rather than reading any one magazine in full. Which, in turn, suggests that newspapers and magazines could choose to focus on certain topics in depth, while leaving others out completely, since their readers probably prefer to read about them in other places. That is, publications might become more specialized, catering to a certain niche of topics and readers. Both these trends together -- that readers can sample more publications, as well as the possibility that publications themselves might focus on only certain topics -- might result in a transition from a reading practice that emphasized “breadth” to one that emphasizes “depth”.
This brings us back to the problem of skimming. One of the central cultural anxieties today is that skimming a text is an inferior way of engaging with it. If indeed there is a shift towards reading more newspaper and magazine articles and blog-posts online in our leisure time, and if this shift results in more depth-oriented reading that involves skimming large amounts of text, should we be worried about it? I am not convinced. Here's why.
First, it's worth keeping in mind that skimming newspaper and magazine articles was something we always did, even before we started reading these on the Web. In fact, newspaper articles are structured precisely so that they can be skimmed. Second, even if one skims most articles that one reads, the sheer amount of reading one does means that it is far from shallow. But most important, this anxiety about skimming and deep reading ignores the central issue by focusing attention on the skimming itself. Instead the key issue is: what is this skimming for? How does it fit into the context of our other activities? This, it seems to me, is what needs to be investigated empirically. How do people decide what to skim and what to read in detail? And how do they skim? And how do the artifacts of reading like the blog-post, the web article, or a Wikipedia essay structure these reading practices? Brain-scans and mental models will not help here; what we need is concrete observations of what people do and what it means to them.
Deciding what to skim and what to read deeply is always a difficult question. I would argue that this was never a problem in leisure reading before because people never had that kind of content at their fingertips. On the other hand, the problem of what to skim and what to deep-read has always been faced by those who read for work (graduate students, academics, researchers, journalists, some other categories of white-collar workers). Let me call this category of people "discourse producers." What may be new is that the decision-making problem of what to skim and what to deep-read, usually faced only by discourse producers (who are reading for work), is now also faced by lay-people (those who do not read and write for a living) -- and it is faced while reading for leisure. Note that discourse producers are also used to skimming long-form (usually non-fiction) books; lay-people will be less interested in skimming books as such, especially works of fiction. However, skimming will play a big role in the short-form content that they read (blog-posts, articles, emails, etc.). To read well, then, it becomes necessary not just to skim but to skim well - and this now applies to everyone, not just discourse producers. One way to understand Ann Blair's work on note-taking in early modern Europe is to think of it this way: in early modern Europe, discourse producers started to face the problem of what to skim and what to read deeply, which is precisely the problem of "information overload." In our age, this problem has stopped being solely the problem of discourse producers; hence we are experiencing "information overload" all over again.
This background therefore allows me to restate the worries about the demise of deep reading as follows: given that there is a possibility that the Web will involve a slight shift in our leisure reading from long-form books to more short-form content (blog-posts, magazine articles, etc.) and that this engagement with short-form content will be more depth-oriented rather than breadth-oriented, should we be concerned about the demise of the so-called practice of deep reading?
Again, it seems to me that the increasing focus on the neurological aspects of reading leads researchers and cultural critics to fetishize the act of reading itself rather than to focus on the question of what this reading is for. For instance, discourse producers have always faced the question of what to skim and what to read deeply. If the rise of the Web has increased the magnitude of this problem for them, then has their output (journalistic and scholarly articles, reports, essays, etc.) significantly diminished in quality? Even though this question is too broad to be answered in any meaningful way, it could be broken up into smaller, more empirically tractable parts (say, by looking at only journalistic output or the scholarly output of historians). For lay-people, the research questions are even harder to frame. If there is a shift in leisure reading from a deep-reading of long-form books to a depth-focused reading of short-form web-pieces, what are its implications exactly? Concerns that this may make us less "thoughtful" are too broad and, frankly, too elitist to mean much. A better research question could be: does this shift from long-form books to more short-form web content focusing on politics and current affairs make us more politically conscious? Cass Sunstein and others have speculated that the internet, with its tendency to exacerbate homophily (i.e. the tendency of people to talk to people who are similar to them in some respects), may increase the political polarization of the electorate. However, empirical work on this topic is still inconclusive. We need to develop further research questions on similar lines rather than simply thinking about the deep-reading/skimming dichotomy in isolation.
The brief segue into political polarization allows me to bring up another closely-related issue: the problem of gate-keeping on the Web. Let's assume that the key effect of the Web has been that the traditional problem of what to deep-read and what to skim, faced only by discourse producers, is now also faced by lay-people. Clearly, discourse producers have institutionalized ways of deciding what to skim and what to deep-read. This depends on their training, their disciplinary identity as well as the task at hand. What kinds of practices will lay-people evolve to solve the same problem and how will it affect them? On one hand, this is an empirical question. But on the other hand, it could also be posed as a normative question. Will the practices of lay-people ever measure up to those that discourse producers have evolved over centuries (which Blair brings out)?
Or we could think of this in yet another way. Before the internet, there was a set of mechanisms which ensured that what people read was of a certain standard. These mechanisms operated mostly on the side of content-producers. For example, newspapers have editors and reporters, each of whom is trained to judge the quality of a publication. Publishing houses have teams of editors who decide what gets published. With the internet, these mechanisms can (apparently) be bypassed, because on the internet anyone can get published (more on this below). The burden of discerning the wheat from the chaff, the "good" content from the "bad," has instead shifted to the consumers of this content - in this case, the lay-public. How do we make sure that the lay-public is able to separate the "good" content from the bad, the wheat from the chaff?
It's worth clarifying a few points here. First off, it just isn't true that anyone can get published on the internet. Well, this is true in a trivial sense that anyone can create a Blogspot account and start publishing their writing to the Web. But this is not the same as saying anyone can get read on the Web. The best way to be read on the Web is to make sure that another web-publisher links to you. And because linking is so important, the blogosphere has evolved its own informal norms around the practice. Bloggers always reciprocate with links: if X links to Y, then Y links back to X. They are very careful in attributing where they found any piece of writing. And even when they excerpt from another blog-post or news article or a Brookings Institution study, they link to it so that their readers can go ahead and sample the article itself. An article that gets linked to by many people gets read more and will also rank higher in an internet search. Being read on the internet then is a matter of convincing the right people that an author is worth reading. And in that sense therefore, the blogosphere or the internet is no different from the rest of society or the publishing industry. In particular, the norms of the blogosphere are startlingly similar to those of the scholarly community: scrupulous citation, an emphasis on linking to the source of a claim so that readers can judge for themselves if it is true, and so on. What I'm trying to say here is that there is a quality-control process operating on the internet as well. But this quality-control happens through an informal economy of links rather than through a formal, organizational process. This is why it makes people uneasy. And yet, this is precisely the reason for its relative “openness”: newcomers have a better chance of being noticed than in the formal world of publishing.
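This "informal economy of links" is, roughly, what link-based search ranking formalizes: pages that many others link to accumulate authority, which is why being linked to raises both readership and search rank. Here is a toy sketch of the idea - a simplified PageRank-style iteration over an invented three-page web; the page names, link structure, and damping factor are all made up for illustration and are not how any real search engine is configured:

```javascript
// Toy link graph: page -> list of pages it links to (invented for illustration).
// Repeatedly redistribute each page's "authority" along its outgoing links,
// with a damping term so every page keeps a small baseline score.
function rankPages(links, iterations = 50, damping = 0.85) {
  const pages = Object.keys(links);
  const n = pages.length;
  let rank = Object.fromEntries(pages.map(p => [p, 1 / n]));
  for (let i = 0; i < iterations; i++) {
    // Every page starts each round with the baseline (teleport) share.
    const next = Object.fromEntries(pages.map(p => [p, (1 - damping) / n]));
    for (const p of pages) {
      const outs = links[p];
      for (const q of outs) {
        // p passes an equal share of its damped rank to each page it links to.
        next[q] += damping * rank[p] / outs.length;
      }
    }
    rank = next;
  }
  return rank;
}

// A and B both link to C; C links back only to A.
const toyWeb = { A: ["C"], B: ["C"], C: ["A"] };
const r = rankPages(toyWeb);
// The heavily linked-to page (C) ends up with the highest score.
```

The point of the sketch is only that ranking emerges from the aggregate linking behavior of many independent authors - which is exactly why the blogosphere's norms around linking and attribution function as a quality-control process.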
That still doesn't tell us what we can do to help lay-people make "good" judgements about what to skim and what to deep-read - or in other words, how to establish the quality of the content they are reading. It strikes me that we are barking up the wrong tree here. These practices will evolve, just as they did for discourse producers. Perhaps we could start teaching school-children how to skim as well so that they have a repertoire of techniques which they can employ when they read on the internet. But our central concern should be with making sure that the online world remains, to some extent, "open." The internet is not some magical medium that is innately free. It's only what it is because of the practices around its use. We need to make sure that the open linking practices of the blogosphere persist and are not eroded (by, say, increasing corporatization and the tendency to link only to one's own content). A second way may be to ensure that linking practices retain the element of serendipity. This means making sure that the barrier to entry remains low enough, so that content from newer people can still be read by many. It means making sure that the mechanisms of sharing (say, Facebook and Twitter) do not lead to stifling homogeneity. In other words, we need to make sure that the depth-oriented reading that the internet facilitates is not too depth-oriented, that some amount of breadth does creep in. How this is to be done, I am not sure, but someone will think of something.
Saturday, July 9, 2011
Unfortunately, as Paul Krugman never tires of pointing out, this comparison is wrong - governments are NOT like families: they can print money, they can raise taxes, they can intervene in the economy in a hundred productive ways. So while governments can't sustain deficits forever, especially when economic growth is sluggish, cutting the deficit is not going to put the economy back on the path to recovery - unlike spending cuts in a family, which can indeed keep the family from going bankrupt.
Unfortunately, this argument never cuts any ice with deficit worriers (I'm talking about non-economists here, who, like me, have only a rudimentary understanding of how the whole system works). This matters because one of the reasons politicians - even Democratic ones - seem so perversely obsessed with the deficit is that the American public in general cares about it deeply.
Why doesn't the assertion that governments and families are different make deficit worriers change their minds? The reason is not hard to find. There is something counterintuitive about it - and also something that offends what I call the "protestant ethic*" of the middle class. Michael Kinsley expressed it well in an article he wrote:
My fear is not the result of economic analysis. It’s more from the realm of psychology. I mean mine. The last time I wrote about this subject, The Atlantic’s own Clive Crook called me a “fiscal sado-conservative.” I would put it differently (you won’t be surprised to hear). Maybe, at least on economic matters, I’m a puritan. The recession we’ve been going through did not occur for no reason. Even though serious misbehavior by the finance industry triggered it, sooner or later it was bound to happen. For a generation—since shortly after Volcker saved the country, and except for a brief period of surpluses under Bill Clinton—we partied on borrowed money. We watched a real-estate bubble get larger and larger, knowing but not acknowledging that it had to burst. Then it did burst, and George W. Bush slunk off to Texas, leaving Barack Obama to clean up the mess. Obama has done the right things, mostly, pushing through a huge stimulus package and bailing out a few big corporations and banks. Krugman says we need yet another dose of stimulus, and maybe he’s right.
But this cure has been one ice-cream sundae after another. It can’t be that easy, can it? The puritan in me says that there has to be some pain. That’s not to say that there hasn’t been plenty of economic pain. But that pain has come from the recession itself, not the cure. [Emphasis mine.]
Some of this resonates with me - and, I suspect, it is one of the things that animates the deficit worriers, even if subconsciously. Think of it as an internalization of the Protestant Ethic, especially its "there is no free lunch" feature.
Which is why I like David Leonhardt's blog-post today - it offers us reasonable people another way of convincing deficit worriers - a large percentage of the United States voting population - about why deficit-cutting is not a good idea at the moment:
Ms. Snowe and Mr. DeMint compare the federal government to a family that supposedly has to balance its budget, and the comparison is actually a useful one. If a middle-class family had to run a balanced budget every year, it would never be able to buy a house or send a child to college.
I think this is a great argument - especially since it has the potential to appeal to the middle class's innate Protestant Ethic. It makes the case that a deficit, or an unbalanced budget, can be a productive thing - just as taking out a loan for a car, a house or college is considered a device for upward mobility and progress.
I'll let you know how it works.
*While I use the word Protestant Ethic loosely - and certainly not in the exact sense in which Max Weber used it - I use it especially to gesture towards the middle-class tendency for delayed gratification. The whole idea of "save now, use later" or "work hard now, have fun later" is deeply inscribed in middle-class habits and I suspect is the reason why social disapproval of families who live beyond their means is so common and well-entrenched.
Monday, June 20, 2011
My first thought was: what a coup for these Aquashine guys - to get a bona fide superstar like Dixit to endorse their product! The advertisement is not badly done - but it does go significantly against the grain to stand out - sociologically speaking, of course. Since I started reading the Sociological Images blog recently, I've been more than impressed with how much images can be used to illustrate certain social facts. So consider this my attempt to apply the principles of visual culture to this advertisement.
First off, here's a comment on Youtube, always a good source for analysis:
feel kinda sad seeing madhuri jump on the endorsment bandwagon,that too for a dish washing liquid...she´s an artist par excellence,trully an asset to the indian film industry...movies and shows should just be created for her ´cause that´s the kind of talent she has,lil disappointed seeing her in such an ad..
The commenter here expresses some disappointment with the ad ("such an ad") but doesn't really say why she doesn't like it - perhaps because she realizes that it wouldn't be politically correct to say it explicitly. Or perhaps she isn't really able to articulate why she doesn't like it.
Another commenter (on a different site) has no such qualms:
No doubt it was cute and nice, and mads looked young and charming, but acting like a kamwali bai in an add? Really have madhuri 's days become so bad that she accepts an add to act as a bai ? Its below the level of a top actress. Why couldn't they just make her a marathi housewife doing the advertisment?
So there are at least two key dissonances here:
- That Dixit is endorsing a dish washing liquid - not something a star of her level generally does.
- That Dixit plays a kaamwali bai - in other words, a domestic help - a significant fall in status, especially to help sell something. [It wouldn't probably matter if she played a domestic helper in a movie - but to help sell a dish washing liquid? No.]
What products do film stars usually endorse? It strikes me that in India, at least, endorsements are heavily gendered. High-profile male stars like Shah Rukh Khan will usually endorse cars - and these advertisements will play on their masculinity. Coke and Pepsi, in their ongoing war, have used pretty much every high-profile actor in their advertisements: Shah Rukh Khan, Aamir Khan, Salman Khan. The cola companies usually target young people, so the star must have at least some youth appeal. Actresses endorse soft drinks as well - but again, they need to be young and hip. Kareena Kapoor, Aishwarya Rai, and Rani Mukherjee have all appeared in cola ads - and for the life of me, I can't imagine even a pre-retirement Madhuri Dixit doing one.
What do actresses endorse? Soft drinks, yes - although one has to be cool and hip and young (see above). Mostly though, it strikes me that actresses endorse beauty products: cosmetics, shampoos, soaps. What about more "domestic" products? Washing powder, dish washing soaps, and so on? Those too, although then the actress needs to be older, at least semi-retired and perhaps married.
Which is what makes the Aquashine "Gangubai" ad so much more interesting. The ad-makers could have shown Madhuri as a housewife - instead they show her as a domestic help - certainly a way of making people sit up and take notice, even if in a negative way.
The first thing to notice, of course, is the complete absence of men in the ad - although this is hardly new: dish washing in India is a woman's job.
Notice that the three other women in the ad - who I assume are a frumpy middle-aged Maharashtrian housewife and her two daughters (or bahus?) - are shot from so far off that it's hardly even possible to see their faces. Dixit wears a bright-green nine-yard saree while the other three wear pale colors that make them blend almost into the background (the older woman wears a six-yard saree, showing that she's from an older generation; the younger women, her daughters presumably, wear chaste salwar-kameezes). In fact, the three women appear in the same shot with Dixit fewer than five times (and the cuts are pretty rapid). And they are placed really far from Dixit and the camera - almost awkwardly so - an interesting choice, considering that in a real situation, they'd have been looking over her shoulder.
The key reason for keeping the women so far apart seems to me to be the fact that Dixit plays a domestic helper, a position that entails a substantial drop in status. At least in Mumbai, a domestic helper lives in a slum, speaks a certain kind of Marathi and perhaps a little bit of Marathi-accented Gujarati and Hindi, is usually illiterate. By keeping the middle-class women as far away from her as possible, and by making them dress in pale colors, the ad-makers hoped to pull off the feat of having Madhuri play a domestic (while endorsing a dish washing liquid!) while still trying not to juxtapose her character with the more high-status ones.
Madhuri, of course, is a native Marathi speaker - who has never acted in a Marathi production - but has a striking image of being a middle-class girl at heart, despite her stardom, at least among her adoring Marathi fans. This is precisely the image the ad uses - downgrading her status but still keeping her glamorous. It uses another image - that of an empowered domestic - who has a cell-phone and does an "een-ees-pection" of the dirty dishes while her hapless employers can only watch.
What I would really like to know, I guess, is on which channels the ad was shown, so that it's possible to figure out the demographic the ad is aimed at. It's aimed at women, of course, who usually make the decision in India about which dish washing detergent to buy. But I'd be curious to see if it also worked with women outside Maharashtra, for whom the domestic is not usually Marathi-speaking.
I'm sure I've missed tons of things in the advertisement - feel free to let me know what I've missed and where I'm wrong.
Dixit seems to have appeared in a lot of advertisements recently!
For instance, here she is again, endorsing a different product: Comfort fabric conditioner.
The ad is strikingly different. Now Dixit is a housewife - and clearly, from the looks of it, an upper-middle class housewife, mother of a child - all of it pretty much in keeping with her current status. It's not what you would expect of a movie-star but it's in keeping with what a semi-retired middle-aged married movie star would endorse.
Here's another one that's similar - this time she's endorsing a certain brand of Basmati rice. Notice how similar the ad is visually to the one before. She is dressed similarly, in chaste salwar-kameez with no jewellery and the camera mostly captures her in mid-shots. The only reminder of her movie-star status is that she dances - but the dance itself is again classical and chaste - hardly the kind of thing she was known for.
Friday, June 10, 2011
Me, the first thing I thought after reading it was: why are well-researched and well-presented solid pieces like this not printed in the Indian media? Why is it that it is the New York Times that publishes a story like this - which we all (Indians, that is) then email to each other saying "Have you checked out this nice story about Gurgaon?"
Of course, the answer to that came pretty immediately. The story wouldn't really make sense in an Indian context because most Indians (middle-class Indians that is, who would be reading such a story) would know exactly what it was talking about and greet it with a yawn. Yes, shabby or non-existent government services are the order of the day in India.
As Yglesias points out (contra Drum), the one "ideological" conclusion you can definitely draw from the piece is:
The first takeaway point from Gurgaon’s success in the face of the lack of municipal government is to underscore the incredible value of good government.
True enough - although this is hardly an ideological point. Ideology comes in when we debate what services the government should provide - and everyone pretty much agrees that whatever services it provides need to be good.
And of course, the next question becomes: what exactly is the way towards good government services? And the answer - at least right now for India - is to encourage neo-liberal reforms and hope in turn that the rise in the standard of living and competition from private services leads to a citizenry that expects better government services. It's by no means clear that this will work - but it seems to me the only possible way.
Sunday, May 22, 2011
A teacher of mine named Leo Rockas had a brilliant way of characterizing Chekhov: The author, he said, began by writing conventional narratives with twist endings and then, over time, lopped off the beginnings and twists, leaving only the suggestive essence—the model for the modern short story.
Tuesday, May 10, 2011
Q. Yes, you argue that the research shows all children — including ill-prepared ones — can learn and that even modest differences in outcomes — say, finishing fifth grade instead of second grade — have positive effects. But obviously many, many schools, from Mumbai to Lagos to Houston, do a bad job of educating poor children. What distinguishes the schools that get impressive (and rigorously evaluated) results?
Ms. Duflo: That’s indeed a vexing puzzle: experiences in the developing countries (the very successful remedial education programs run by Pratham, in India, for example) but also in the U.S. (the “no excuses” charter schools in Boston, or schools in the Harlem Child Zone in New York City) suggest that it is possible, perhaps even not that difficult, to significantly improve the quality of education. Yet most schools completely fail their students: why is that? It would be too easy to blame a lackadaisical public school system, but even the private schools that are attended by many poor kids around the world could do much better. In the U.S., not all charter schools deliver quality education.
Our sense is that what is going on is that schools have forgotten, or perhaps never knew, that teaching fundamental skills to everyone should be their prime objective. In Kenya, India or Ghana, teachers still try to teach an absurdly demanding curriculum to a very diverse set of pupils, many of whom are first-generation literates and get little or no help at home. Covering the entire curriculum is the priority, even though the majority of children may be lost by the end of the first week.
Why aren’t parents revolting, one might wonder. Why are they not demanding that their children be taught at the appropriate level, instead of sitting through day after day of teaching that mean nothing to them? In part this is because they do not know how badly schools are doing: they are not in a position to evaluate what their children are learning, and no one tells them that they are not. In part it is because they have bought into the elite bias that plagues the entire system: parents often seem to believe that education is worth it only if the child can reach the highest level.
Making sure that schools deliver may be in part a matter of defining what “deliver” means: not preparing the top of the class for some difficult public exam while ignoring the rest, but ensuring that every child learns core skills, and learns them well.
Sunday, May 8, 2011
I first heard of Garfinkel when I read Paul Dourish's "Where the Action Is" and then Lucy Suchman's "Plans and Situated Actions." The field of ethnomethodology had a big impact on me - and was one of the reasons why I decided I wanted to study social science rather than design technology. In a way, the basic assumption shared by Garfinkel and Harvey Sacks fed into the reductionist part of me. I understood them to be saying: Fine, you want to investigate social life? Then don't start with the big things: class, gender, conflict, revolutions. Start small; let's first understand how day-to-day interaction is structured and constituted. Then - once we have a better understanding of this - only then, let's go on to understanding the big things. This fit in well with the part of me that liked physics and mathematics: in physics, you start off by understanding the motion of bodies in space - but that helps you explain planetary motion as well.
It's debatable whether this reductionist stance can be applied to social life - do we need to understand micro-social interactions in order to understand macro-social phenomena? I don't know - but I don't believe in this as strongly as I did before, when I first heard of ethnomethodology***. But let me get back to where I originally wanted this post to go: a novice's reading guide to Garfinkel for other novices.
First, don't read Garfinkel at all at the beginning. (Ha!) I suggest reading the following texts in this order:
John Heritage's essay on Garfinkel in "Key Sociological Thinkers." This is a simple, easy-to-read, and accessible introduction to Garfinkel's key sociological insights. [Email me if you want a pdf.]
Then read John Heritage's (again!) review essay on ethnomethodology in "Social Theory Today." [pdf] These two should be enough.
But at this point, it might be good to dip into Heritage's book-length exposition of Garfinkel: "Garfinkel and Ethnomethodology." This is, in its own way, a difficult text to read - and it's a whole book. But I'd say concentrate on chapters 4 and 5.
- Chapter 4 is titled "The Morality of Cognition" and, among other things, has an account of Garfinkel's famous "breaching experiments." My own favorite example is the transcript, on pages 94-95, of a conversation between a husband and wife - which shows how much is taken for granted, and therefore left unsaid, in seemingly ordinary conversations.
- Chapter 5 is titled "Actions, Rules and Contexts" and is pretty dense. However, it is a good introduction to how Garfinkel goes against the typical notion of "rules" as driving action.
- When you finally turn to Garfinkel's own "Studies in Ethnomethodology," read Chapter 3 first - titled "Common sense knowledge of social structures: the documentary method of interpretation in lay and professional fact finding." This chapter gives - with vivid examples - a nice account of the work that goes on in day-to-day social interactions.
- Then read chapter 8 titled "The rational properties of scientific and common sense activities."
- Then read chapter 2 titled "Studies of the routine grounds of everyday activities" - again, there are some funny breaching experiments here - but more importantly, these experiments serve to ground what Garfinkel is trying to point out.
- And then - yes! - read chapter 1, "What is Ethnomethodology?" I can't guarantee that all the preceding reading will make this chapter easy to understand, but it does help to place it in context.
- And finally, after all this, read the remaining chapters in whatever order you like. Don't miss Chapter 5, "Passing and the managed achievement of sex status in an 'intersexed' person, part 1," which concludes with an almost fantastic revelation that would rival the sting-in-the-tail stories of O. Henry.
** For example, here is how Garfinkel outlines his key notion of "account-ability":
The following studies seek to treat practical activities, practical circumstances, and practical sociological reasoning as topics of empirical study, and by paying to the most commonplace activities of daily life the attention usually accorded extraordinary events, seek to learn about them as phenomena in their own right. Their central recommendation is that the activities whereby members produce and manage settings of organized everyday affairs are identical with members’ procedures for making those settings “account-able.” The “reflexive,” or “incarnate” character of accounting practices and accounts makes up the crux of that recommendation. When I speak of accountable my interests are to such matters as the following. I mean the observable-and-reportable, i.e. available to members as situated practices of looking-and-telling. I mean, too, that such practices consist of an endless, ongoing, contingent accomplishment; that they are carried on under the auspices of, and are made to happen as events in, the same ordinary affairs that in organizing they describe; that the practices are done by parties to those settings whose skill with, knowledge of, and entitlement to the detailed work of that accomplishment – whose competence – they obstinately depend upon, recognize, use, and take for granted; and that they take their competence for granted itself furnishes parties with a setting’s distinguishing and particular features, and of course it furnishes them as well as resources, troubles, projects, and the rest.
John Heritage, whose book on ethnomethodology is really really good, has this to say about Garfinkel's writing style:
These studies are discussed in a difficult prose style in which dense thickets of words seem to resist the reader's best endeavours, only to yield, at the last, forceful and unexpected insights which somehow remain obstinately open-ended and difficult to place.
To be fair, Heritage also mentions that Garfinkel's short story was anthologized in the 1941 "Best American Short Stories" volume -- so the man could clearly write!
***That said, my sense is that the guiding principle of ethnomethodology is also the guiding principle of Science and Technology Studies. In Science Studies, we like to show the work that goes on, even in the most routine, taken-for-granted situations - for example, the notion of "transmitting" knowledge, especially scientific knowledge is not as simple as it seems. There are pedagogic practices (for example, the instructor solving problems and then setting examples for students to solve on their own), exam structures (what kinds of questions are students tested for?), and most importantly, the background understandings these create when scientists schooled in one discipline interpret the work of others. Garfinkel's analysis of the work that goes on even in the most day-to-day social encounters is tremendously useful here.
Wednesday, April 20, 2011
But may I just point out that sociology reached this conclusion a century ago? You don't really need neuroscience or fancy brain mechanisms to understand this; looking closely at social practices will get us there faster. I don't really have any bone to pick with the article -- if neuroscience is what the public needs to understand that there is no such thing as pure unadulterated rationality, then I'll take that any day.
But there are still a couple of points that I'd like to make -- because they are implicit in Mooney's article and in the arguments of some of my friends as well.
It's nice that psychologists and neuroscientists think that values color the way we think of facts. The problem is that even after knowing this, we still like to think of "facts" and "values" as useful terms. Even more problematic is that we continue to think that people change their minds because of arguments. This, to me, is almost entirely wrong, and it is still the guiding assumption of Mooney's piece. No one ever became an atheist because someone presented him with an irrefutable proof of God's non-existence (which is why I find the New Atheists a bit boring). The secularization of Europe did not happen because of Voltaire's diatribes against God -- it happened because a Church-State separation was put into place after the bloody wars of religion that people were so tired of. This separation, the rise of industrial capitalism, and the separation of spheres that forms such a big part of the classical liberal political system -- all of these were instrumental in the rise of secularism (and Voltaire must be understood as part of this current, rather than as someone who made arguments that presented certain "facts" to the reader).
When I say this to my friends, the answer always is: "But that's not true. We did change our minds because of someone's arguments." Or: "That's not true. I know plenty of people who became atheists after reading Dawkins." Now, there's no way I can offer a mathematical proof of what I am saying. My point is simply that this so-called argument that changed someone's mind was, to use a tennis metaphor, the last point of a match. Of course, the winner wins the last point -- but it's even more crucial to know the events that led up to it. If we want to know why someone changes his mind about something important, we need to look at the wider narrative of practices and that person's history, rather than just at some fact that convinced him (even if he himself attributes his change of beliefs to the presentation of certain facts).
Monday, April 18, 2011
I've been working on a course project that looks at the various aspects of the Hawk-Eye system used in tennis for adjudicating line calls. For those who don't follow tennis: Hawk-Eye is a computer-vision-based system that infers the trajectory of the ball on the tennis court and calculates where it hit the ground (inside, outside, on the line, etc.). The idea is to eliminate the human sources of error in calling the ball "in" or "out" - errors that range from "the ball is too fast for the human eye" to "the linesman hates this player."
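Hawk-Eye's actual reconstruction pipeline is proprietary, but the basic geometry behind a line call can be sketched in a few lines. The toy version below is my own illustration, not Hawk-Eye's algorithm - it ignores spin, drag, and the multi-camera triangulation that produces the trajectory in the first place, and all the numbers are illustrative assumptions. It treats the ball as a simple projectile, extrapolates where it lands, and compares that point with the line:

```python
import math

def bounce_point(x0, z0, vx, vz, g=9.81):
    """Extrapolate where a ball lands, treating it as a drag-free projectile.

    x0, z0: horizontal position and height (meters) at the last observation;
    vx, vz: horizontal and vertical velocity (m/s) at that moment.
    Returns the horizontal position at which the height reaches zero.
    """
    # Solve z0 + vz*t - 0.5*g*t^2 = 0 and take the positive root.
    t = (vz + math.sqrt(vz * vz + 2 * g * z0)) / g
    return x0 + vx * t

def line_call(x_bounce, line_x, ball_radius=0.033):
    """Call 'in' if any part of the ball's footprint touches the line.

    line_x is the outer edge of the line; a tennis ball is roughly
    6.6 cm in diameter, hence the assumed 0.033 m radius.
    """
    return "in" if x_bounce <= line_x + ball_radius else "out"
```

For example, a ball observed 1 m above the court, moving at 10 m/s horizontally with no vertical velocity, lands about 4.52 m further on; whether that is "in" depends on where the line is. The real controversies, of course, are about measurement error in the reconstructed trajectory, not about this final bit of arithmetic.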
But there's one place where I'm getting stuck. The Hawk-Eye system, once deployed on a court, generates a bunch of statistics about the ball placement. Who owns these statistics? The tournament organizers, who have presumably purchased the Hawk-Eye system? The television stations, like Tennis Channel, who use the Hawk-Eye generated statistics (and the awesome visualization) in their broadcasts? Or is it a combination of the two? And if someone wants to use the Hawk-Eye statistics for coaching or strategic purposes (say, for Andy Roddick to figure out how to beat Federer in the next match), how do they get those statistics?
Any help would be much appreciated. If you have any suggestions, answers or tips (including books or links I could look up), please use the comments. Thanks!
Sunday, April 10, 2011
[The Strong Program] would be symmetrical in its style of explanation. The same types of cause would explain, say, true and false beliefs.
There is a certain aesthetic reasonableness to this principle that I like very much: after all, why should there be different explanations for true and false beliefs? The principle is intended to oppose the traditional conception of scientific knowledge: that true beliefs need no explanation, but false beliefs do - false beliefs usually being explained by distorting factors like personal beliefs, commitments and ideology.
But I've always found it hard to explain the utility of the symmetry principle to others. What's the use of it? is usually the question. And I must admit it was always hard to explain its utility outside the field. As a principle in understanding any kind of knowledge (including scientific knowledge), the symmetry principle has always seemed to me an indispensable tool. What it could be used for -- outside of the sociology of knowledge -- I couldn't really say.
Well, until today, that is.
I've been reading Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming by Naomi Oreskes and Erik M. Conway. The book is a meticulously researched piece of political journalism. The story is galvanizing, if a little wearying in its repetitiveness: how a handful of scientists, financed by corporations, helped to create doubt about the scientific consensus on topics ranging from the risks of smoking and acid rain to the ozone hole and global warming, leading to considerable delay in the enactment of regulatory policies (and, for global warming, no policies at all). It's astonishing how the same names keep popping up in every debate: these guys really were deep-pocketed merchants of doubt. They had a clear objective, which they shared with the rest of the American conservative movement: to oppose any new government regulation of corporations and to dismantle the regulations that already existed.
So far so good. Unfortunately, despite all the wonderful data that the authors have combed through, the story the authors tell is frustratingly traditional and asymmetric. It's true that the road to the regulation of tobacco (and now it seems global warming) was long and arduous and disturbing. But the road to regulation for acid rain and the ozone hole was arguably much shorter. We were able to regulate the emissions of sulfur and CFCs despite the doubts created by the right-wing machine using both straight-forward bans and cap-and-trade mechanisms. Why were regulators quick to respond in these cases but not in the others despite the right-wing noise machine? The authors, it seems to me, don't think it is especially relevant. For instance on page 124, they say:
The combined results of the Ozone Trends Panel and the field expeditions caused the Montreal Protocol to be renegotiated. The results also convinced the industry that their products really were doing harm, and opposition began to fade. CFCs would now be regulated based on what had already happened, not on what might happen in the future. Because the chemicals had lifetimes measured in decades, there was no longer any doubt that more damage would happen. [My emphasis]
But didn't they spend the previous three chapters describing how industry leaders almost never accept scientific findings when they go against their own interests (e.g. Big Tobacco on the risks of smoking)? So why should the industry be convinced in this case? It seems to me that the authors don't really care. When scientific findings lead to the appropriate regulations, it's because they were true. When they don't, it's because of the right-wing doubting machine and its near-fanatical free market ideology.
This is where the symmetry principle would have been useful. If we assume that there is one process that leads from scientific findings to the appropriate regulations, then the same process holds irrespective of whether a given regulation was enacted or not. (It's not as if the doubting Thomases didn't start beating their drums during the ozone hole controversy; it's just that they were not successful in blocking regulation.) So knowing what we did right in the ozone hole and acid rain cases will arguably be important if we want to enact global warming regulation.
I don't mean to suggest that if the authors had treated these cases symmetrically, we would know what to do to enact emissions reductions in the US. No. It's possible that the difference is just that the right-wing machine threw less money at the problems where we were able to enact regulation. Or maybe, because the consequences of acid rain or ozone depletion were so close to home (skin cancer, etc.) whereas the consequences of global warming are strikingly diffuse (what exactly does it mean for average temperatures to rise by 2 degrees?), the American public was just more supportive of regulation in those cases. But whatever it is, it would be useful to treat the cases of successful regulation and delayed (or impossible) regulation symmetrically.
The symmetry principle is often derided for its relativism towards science. Here is one case where it could be used (albeit in a political-economic analysis) for science, and not against it.
[I don't mean to be hard on the book: it's rich, very detailed, and a valuable source of data for anyone who wants to understand the political economy of scientific findings and their relation to regulation. I highly recommend it.]
Saturday, April 2, 2011
In the chapter "Can we fix things?" Cowen offers the following policy prescription: "Raise the social status of scientists." And by social status he seems to mean something like: "do something that makes doing science something more young people aspire to." (And remember this book is specifically about the United States.)
I think that is exactly right. But I'm curious: does anyone have any ideas why the social status of science (and I presume, engineering and technology) declined in the US? And when it started to decline? I am starting to get particularly interested in this question. Any books, articles, or your own hypotheses that you think are relevant to this?
Please leave your response in the comments.
Thursday, March 31, 2011
But as the story progressed (parts two, three, four, five), my enjoyment started to wane. It wasn't the flowing, free-associative style, which was still fun, although I really didn't see what the digressions into the Pythagorean society had to do with the concept of incommensurability. No, it was the intellectual portrayal of Thomas Kuhn. If one reads Morris without reading Kuhn, Kuhn comes across as an idiot. And all of us who found The Structure of Scientific Revolutions useful come across as either idiots or fashionable post-modernists who like Structure because it fits the relativistic fashions of the day.
What is Morris' problem with Kuhn (other than that he clearly didn't like him)? Two things. First, he argues that the concept of incommensurability is incoherent. And second, that it opens the door to all sorts of relativism. The second point is old hat. Much ink has been spilled on how, if we give up the idea of Truth, of an unmediated reality out there that we are accountable to, then we are on a slippery slope to ethical relativism. That it leaves us without a reply to totalitarian dictatorships, that it leaves the door open to the O'Briens of the world who want us to believe that 2+2=5. Etc. The first point is old hat too. As John Holbo points out, incommensurability was criticized in just this way even when Structure was first published. The idea here is that if two paradigms are incommensurable, then we have no way of doing the history of science; so the fact that Kuhn himself could understand the older paradigms proves that paradigms are not really incommensurable.
As fields, the history of science and Science and Technology Studies (STS) have moved on: the idea of a linguistic conceptual framework that undergirds scientific theories has been replaced by a search for the material practices that constitute science. Kuhn is studied less as someone who offered fresh new insights than as one of the oldies: routinely grouped together with Popper, Feyerabend and Lakatos. (I also think that while Kuhn makes several missteps in Structure -- he tries to define incommensurability linguistically, he starts to talk about using computer programs to understand incommensurability -- the idea of science as constituted by material practice is there in the book. As are other things that we now look for as STS scholars: the practices of pedagogy and training in science, the incentive structure, etc.)
Still, I'd like to offer a defense of Kuhn and of Structure, and to explain why the criticisms Morris offers are mistaken. Reading Structure was a transformative experience for me -- and I'd like to bring out why.
So without any further ado, here goes.
- That scientific advances happen not cumulatively, but in bursts -- conceptual revolutions alternating with periods of "normal" science
- That underlying all scientific theories is something called a "paradigm" (think of it as some kind of underlying conceptual scheme) and that one paradigm gets replaced by another during a scientific revolution.
- And finally, when paradigms change, when a new paradigm replaces the old one, the two are incommensurable, so that when scientists argue for the merits of either paradigm, they are essentially talking past each other. The paradigm that wins out wins not because it is true.
Much of what Kuhn says about great theoretical shifts, and the inertial role of long-established scientific paradigms and their cultural entrenchment in resisting recalcitrant evidence until it becomes overwhelming, is entirely reasonable, but it is also entirely compatible with the conception of science as seeking, and sometimes finding, objective truth about the world. What has made him a relativist hero is the addition of provocative remarks to the effect that Newton and Einstein, or Ptolemy and Galileo, live in "different worlds," that the paradigms of different scientific periods are "incommensurable," and that it is a mistake to think of the progress of science over time as bringing us closer to the truth about how the world really is.
A phenomenon familiar to both students of science and historians of science provides a clue. The former regularly report that they have read through a chapter of their text, understood it perfectly, but nonetheless had difficulty solving a number of the problems at the chapter's end. Ordinarily, also, those difficulties dissolve in the same way. The student discovers, with or without the assistance of his instructor, a way to see a problem as like a problem he has already encountered. Having seen the resemblance, grasped the analogy between two or more distinct problems, he can interrelate symbols and attach them to nature in the ways that have proved effective before. The law-sketch, say f = ma, has functioned as a tool, informing the student what similarities to look for, signaling the gestalt in which the situation is to be seen. The resultant ability to see a variety of situations as like each other, as subjects for f = ma or some other symbolic generalization, is, I think, the main thing a student acquires by doing exemplary problems, whether with a pencil and paper or in a well-designed laboratory. After he has completed a certain number, which may vary widely from one individual to the next, he views the situations that confront him as a scientist in the same gestalt as other members of his specialists' group. For him they are no longer the same situations he had encountered when his training began. He has meanwhile assimilated a time-tested and group-licensed way of seeing. (Structure, pg 189) [emphasis mine]

Believe it or not, this is exactly how it happened for me. I studied how to draw free-body diagrams on my own, just after 10th grade. So I had no teacher to take me step-by-step through a few solved examples, which is generally the case. Instead, I had to read the solved examples in my textbook -- many of which struck me as incomprehensible.
I remember staring at diagrams of bodies moving on inclined planes in frustration, almost ready to cry because I didn't know what to do. My frustration was accentuated, I think, because we had moved to a new town and I was just starting to adapt to it.
And then one day -- I am not sure how -- it went away. All I remember is that one fine day I found I could do free-body diagrams just fine, that in fact, I even enjoyed doing them. Gestalt switch is probably a really good way of describing the change. A diagram of a body on an inclined plane now meant something to me that it had not before. To use some of Kuhn's own expressions, it was as if, for me, the world had changed, and while I could recall my frustration with how the world had looked before, I could never recapture my previous world-view; it was irretrievably lost. Kuhn's description of paradigm shifts as gestalt switches and his talk of scientists with different paradigms living in "different worlds" often leads people like Morris and Nagel to call him an idealist and a relativist, but I know exactly what he is talking about.
Personal reminiscences aside, it's worth unpacking the paragraph in detail and making explicit all the points Kuhn is making in it:
- A paradigm is not a set of rules. It's more like a practice, a skill, like knowing how to apply the equation f=ma to different types of scenarios (a body moving on an inclined plane, an oscillating pendulum, a body attached to a spring, etc.). There is no way to describe in the form of rules what it means to solve problems using f=ma; you just have to learn to do it. It's tacit knowledge, sort of like riding a bicycle. At the same time, being able to solve problems is the only way to become a scientist.
- One has to literally go through hell to be initiated into a paradigm. And as a matter of pedagogy, scientists have down pat what it takes to create a competent member of their community: you teach him or her the concepts, and then you make them solve a bunch of problems. Somewhere along the way, the student gains competence. This authoritarian way of doing things turns many people away from science, but it works wonderfully well for those who take to it.
- Finally, it provides an explanation of incommensurability, of what Kuhn means when he asserts that when scientists argue over competing paradigms, they are essentially talking past each other. Yes, if they wanted to, if they took the time and effort, they could possibly understand each other. Philosophically, incommensurability can be refuted (as Morris tries to do). But practically, in practice, it exists. Because learning a paradigm is a long, back-breaking process, usually undertaken when one is a student, established, practicing scientists have neither the time nor the inclination to imbibe and learn a new paradigm. So they keep on arguing without truly understanding each other.
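To make Kuhn's f = ma example concrete (this is standard textbook mechanics, not anything from Morris or Kuhn beyond the inclined-plane scenario mentioned above), here is what "seeing a body on an inclined plane as a subject for f = ma" amounts to once the gestalt is in place:

```latex
% Block of mass m on a frictionless plane inclined at angle \theta.
% "Seeing" the situation correctly means resolving the weight mg
% along and perpendicular to the plane before applying f = ma.
\begin{align*}
  \text{along the plane:} \quad & ma = mg\sin\theta \\
  \text{perpendicular:}   \quad & 0  = N - mg\cos\theta \\
  \text{hence:}           \quad & a  = g\sin\theta, \qquad N = mg\cos\theta
\end{align*}
```

The algebra is trivial once the decomposition is seen; the tacit skill Kuhn describes is precisely the ability to see which decomposition to make, and no list of rules tells you that in advance.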
"But doesn't that open the door to relativism?" Morris might say. Kuhn's answer is that the paradigm that wins out is the one that the community perceives as more productive. The key word here is productive. Scientists come to feel that a certain paradigm allows them to generate a rich set of problems (puzzles) that they can then solve -- which makes them prefer that paradigm. So there is some sort of progress. But what about the role of nature, you might ask. Does nature play a role in the choice of a paradigm during a revolution? The answer is yes. And again, it goes back to the learning. Paradigms need to be learned: by solving problems and by practicing doing experiments; nature plays a role in both of these.
All in all, Structure taught me three things. First, to pay attention to material practices and craft-work that are often the building blocks of any kind of scientific or engineering work. And second, contrary to rhetoric, to pay attention to the values that often underlie scientific work. By values, I mean things like what counts as a "good" problem, what counts as an "elegant" solution etc. To be able to use these words successfully is to have mastered scientific practice. Scientific knowledge can't be studied without a close understanding of the practices and values that produce it. And finally, pedagogy is incredibly important. Pedagogy is how a community licenses its practitioners. How scientific practitioners are made is important if we want to understand scientific knowledge.
I am not sure Morris will buy this defense of Kuhn. He is an admirer of Saul Kripke, after all.