Friday, November 14, 2008

From Edge, a talk on consciousness by the Gibsonian philosopher Alva Noe. I particularly liked this metaphor he uses:
A much better image is that of the dancer. A dancer is locked into an environment, responsive to music, responsive to a partner. The idea that the dance is a state of us, inside of us, or something that happens in us is crazy. Our ability to dance depends on all sorts of things going on inside of us, but that we are dancing is fundamentally an attunement to the world around us.

Thursday, November 6, 2008

Why you shouldn't trust articles that say "Brain scans have proved this... or that"

Great piece from Scientific American that cautions against reading too much into the results of brain-scan experiments. My favorite part?
I visited neuroscientist Russell Poldrack’s laboratory at the University of California, Los Angeles, and arranged to get my brain scanned inside its MRI machine. Scanners typically weigh around 12 tons and cost about $2.5 million (not including installation, training and maintenance, which can drive the typical bill up by another $1 million). Right off the bat I realized how unnatural an environment it is inside that coffinesque tube. In fact, I had to bail out of the experiment before it even started. I had suddenly developed claustrophobia, a problem I had never experienced earlier. I’m not alone. Poldrack says that as many as 20 percent of subjects are similarly affected. Because not everyone can remain relatively relaxed while squeezed inside the tube, fMRI studies are afflicted with a selection bias; the subject sample cannot be completely random, so it cannot be said to represent all brains fairly.

A person jammed into the narrow tube also has his or her head locked firmly in place with foam wedges inside the head coil—nicknamed “the cage”—to reduce head motion (which can blur the images) before the experiment begins. The MRI scanner snaps a picture of the brain every two seconds while the subject watches images or makes choices (by pushing buttons on a keypad) presented through goggles featuring tiny screens.

So when you read popular accounts of subjects who had their brains scanned while they were shopping, for example, remember that they were not walking around a Wal-Mart with headgear on. Far from it.

Via Pure Pedantry. See also here.

The best experiments, I think, would be precisely those where people could walk around Walmart with headsets on while we measured some of their brain activity. Sadly, that's not an easy thing to do. What we do instead is strap subjects into scanners or run simulated experiments in the laboratory. While these are very valuable, the point is -- we cannot read too much into them.

Thursday, August 21, 2008

Is there social science research on the habits of movie-renters?

I've just started exploring the Netflix data -- and want to use it to look into the habits of people who rent movies regularly. Computer science departments have produced a lot of work on collaborative filtering algorithms and other "recommendation" systems.
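To give a concrete sense of the kind of first pass I have in mind, here is a minimal Python sketch for poking at the data. It assumes the published Netflix Prize layout (one text file per movie, a first line containing the movie id followed by a colon, then one "customer_id,rating,date" line per rating); the directory name and the "heavy renter" cutoff are illustrative choices of mine, not anything from the dataset's documentation.

    import glob
    from collections import defaultdict

    ratings_per_user = defaultdict(int)       # customer_id -> number of ratings
    rating_sum_per_user = defaultdict(float)  # customer_id -> sum of ratings

    for path in glob.glob("training_set/mv_*.txt"):
        with open(path) as f:
            movie_id = f.readline().strip().rstrip(":")
            for line in f:
                customer_id, rating, date = line.strip().split(",")
                ratings_per_user[customer_id] += 1
                rating_sum_per_user[customer_id] += float(rating)

    # A crude first look at "habits": how many movies does an account rate,
    # and how generous is it on average?
    heavy = [u for u, n in ratings_per_user.items() if n >= 200]
    print(len(ratings_per_user), "accounts,", len(heavy), "heavy renters")
    for u in heavy[:5]:
        avg = rating_sum_per_user[u] / ratings_per_user[u]
        print(u, ratings_per_user[u], round(avg, 2))

Even something this crude begins to separate heavy accounts from occasional ones, which is where questions like the ones below would start.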

But does anyone know of any academic work documenting the behavior patterns of movie-renters? Questions such as:
  • Is the renting behavior exhibited by families different from that of individuals?
  • How does liking or disliking a movie affect the next movie that person or family rents?
  • What role does the social network play? What about advertisements?
Anything that anyone knows will be much appreciated.

Friday, August 15, 2008

In praise of skimming: A response to Nicholas Carr's "Is Google making us stupid?"

Nicholas Carr's essay in the Atlantic -- provocatively titled "Is Google making us stupid?" -- has, not surprisingly, aroused a lot of passionate reactions.

Channeling Maryanne Wolf's Proust and the Squid, Carr makes the reasonable assertion that the way we read shapes the way we think. And increasingly, we do most of our reading on the internet: skipping, skimming and scanning our way through articles and blog posts. He contrasts this style of reading with the kind of deep contemplation that is necessary for reading, say, War and Peace. If reading on the internet discourages "deep" reading, Carr worries, does that mean we are all starting to think differently -- becoming more "transient" and less reflective? Is deep thinking on the way out? Is Google making us stupid? (I know, I know, this sounds like one of those voiceovers from Sex and the City).

This is an interesting and important question. But Carr's article, at least as it is framed, manages to conflate a number of related issues. Its fundamental flaw is the implicit assumption that there is only one (correct) way to read, and that it involves "strolling through long stretches of prose" -- whether we're reading fiction or non-fiction, for work or for leisure. The result is an essay whose underlying anxiety resonates but whose point isn't at all clear. I actually agree with Carr. I certainly think that the internet is changing something in the way discourse producers work (and by this term I mean academics, journalists, writers and others who read and write for a living). Notice that this is a much narrower claim -- how they think is another, more nebulous issue, and I am not at all sure that that's going to change.

Consider the disconnect in Carr's first anecdote.
Bruce Friedman, who blogs regularly about the use of computers in medicine, also has described how the Internet has altered his mental habits. “I now have almost totally lost the ability to read and absorb a longish article on the web or in print,” he wrote earlier this year. A pathologist who has long been on the faculty of the University of Michigan Medical School, Friedman elaborated on his comment in a telephone conversation with me. His thinking, he said, has taken on a “staccato” quality, reflecting the way he quickly scans short passages of text from many sources online. “I can’t read War and Peace anymore,” he admitted. “I’ve lost the ability to do that. Even a blog post of more than three or four paragraphs is too much to absorb. I skim it.”
Notice that there are two arguably unrelated things here:
  • (a) Friedman's inability to "read and absorb a longish article on the web or in print": This presumably has to do with his work as a blogger.
  • (b) Friedman's inability to read War and Peace in his leisure time.
Carr clearly seems to be implying that (a) and (b) are related, and that both are caused by the amount of time that Friedman, a blogger, spends on the internet for work.

I suspect that reading War and Peace after a hard day's work is ... difficult, even for those whose work doesn't involve reading tons of things on the internet. Indeed, as Friedman admits here (scroll down for his comment), he reads a lot of novels (mainly mysteries and thrillers) for recreation. We can therefore safely eliminate Friedman's high volume of internet reading as the cause of (b).

(a) is much more interesting, but Carr has next to nothing to say about it. Presumably, as a discourse producer, Friedman has to look at various books, articles, blog posts and papers, and then synthesize and summarize them in blog posts of his own. But notice that when Friedman says he cannot read a blog post of more than four paragraphs, he means that he does not read it word for word; instead he skims it. In other words, when something slightly longish presents itself, Friedman slips, almost by default, into "skim mode".

Carr clearly despairs at this, but my own reaction is: so what? Carr seems to think that all reading takes place in isolation -- that we read for the sake of reading. That may be true of our leisure reading, but it is almost certainly not true when we read for work. The real issue is this: assuming Friedman skims more now than he did pre-internet, has it affected his output? Is his writing better or worse? Are his arguments sharper, deeper, shallower? Does he write more? Or less? These are all very interesting questions, but it doesn't seem to me that Carr is really interested in them. (Of course, these questions might get you published in an academic journal, but they certainly wouldn't be cover-story material for The Atlantic Monthly, no?)

So let me restate Carr's point -- a restatement he may or may not buy. His thesis is that the internet and Google will change the way these producers of discourse go about their work: they will skim more than they used to and -- more worryingly for Carr, I think -- they will not think as deeply, because deep reading is deep thinking.

I am not convinced by the second part of this at all. Discourse producers don't just read aimlessly; they read for a purpose (an article, an essay, a review), and that purpose itself requires rigorous thinking. In the course of a project, they may go through a variety of reading material (blog posts, journal papers, books). They may skim most of it and deep-read a few. But in the end, they have to synthesize their arguments and marshal facts and sources to back them up -- and to do all this well, deep thinking is essential.

However:
  • The weak part of Carr's thesis is certainly true -- we do skim more. Reading on the internet may well have lowered our "skimming threshold" -- like Friedman, who starts skimming if an article is more than four paragraphs. But the reason for this is that the internet presents us with ten times the reading material we would otherwise have had.
In my own case, to write this blog post, I read Carr's original piece in the Atlantic the "traditional" way, word for word. I skimmed through most of the reactions the article generated, concentrating on the few that seemed most relevant. (Clay Shirky's is the most penetrating, if a bit short. Jaron Lanier's is bizarrely short and off-topic.) I knew the New Yorker had reviewed Proust and the Squid, but after looking it up on Google I ended up skimming four reviews: times.co.uk, Bookforum, The New Yorker and the Guardian. Google also led me to Bruce Friedman's comment about his reading habits. And it took me to the study by University College, London, that Carr quotes from.

This is perhaps the key problem that discourse producers face -- what to skim and what to read deeply. And the internet has exacerbated it, because now everything is available at our fingertips. To be successful today, it is not only necessary to skim; it is essential to skim well.
  • We may end up writing in shorter bites -- almost certainly a good thing in my opinion.
As Kevin Drum points out, most non-fiction articles are grossly padded. Knowing that readers have plenty of other articles to browse, will writers try to be more concise in their arguments? Briefer, more to the point? I am sure this will not be an unwelcome development.

Conclusion? The internet certainly encourages skimming -- but skimming isn't easy! It may well mean less deep reading in the long run, but deep thinking will almost certainly remain. Finally, the very fact that there is so much to skim will compensate, to some extent, for the fact that we have less time to read anything word by word.

Monday, July 28, 2008

The Kindle

James Fallows has a good post on the deficiencies of the Kindle, and by extension, most online reading applications:
3) And about the process of reading:

Spent six or seven hours of the flight reading on the Kindle. Perfectly pleasant and legible. Only one inconvenience relative to "real" books -- harder to flip ahead or back several pages at a time. (You scroll page by page, or else go to the table of contents.) And a kind of mental-picture adjustment: it's easier to insert bookmarks or placeholders, or search for a specific word in the text; harder to have a remembered visual image of a certain passage as it fits on a certain place on a page. Not good for books where pictures, illustrations, maps, production quality matter a lot. Very, very good for reading Word .DOC files or .PDFs that I would otherwise have to read on the computer.
In the same vein:
One added observation, however, would be that the Kindle actually suffers from several ridiculous flaws. James refers to the inability to "flip" multiple pages at a time. It also doesn't let you cross-reference Kindle "locations" with brick-and-mortar page numbers. And you can only highlight whole lines at a time rather than starting with specific words. There are various other things like that. They're annoying. But at the same time, these are problems that I'm sure have solutions. When the basic technology of the Kindle Reader and Kindle Store are married to a design team (either at Amazon or at a competing firm like Apple) that's somewhat better at thinking this stuff through then I think you'll have a product a lot of people want to buy.

Wednesday, February 6, 2008

Review: Dreaming in Code by Scott Rosenberg

Scott Rosenberg's "Dreaming in Code", despite its rather evocative title, is the mundane story of an ongoing software project called "Chandler", a tool for Personal Information Management (PIM). Wait, that's not right. It is a vividly written but altogether-familiar story of software development: a story of requirements ("specs") and disagreements, of delays and deadlines, of plans, of changes in plans and of more changes in plans, of ideals and of pragmatism, and of course, of bugs that make you tear your hair out in agony. The sort of things that are all too familiar when it comes to coding.

Why then, was it written? Over to Rosenberg:
Chandler offered a look at the technical, cultural and psychological dimension of making software, liberated from the exigencies of the business world. [pg 54]
But I am getting ahead of my tale.

In the post-war era, one project took on a special meaning with the development of computing machines: the augmentation of human intellect. Here, many people thought, was a tool to rival language and writing, a tool perhaps to re-invent man himself. The vision has been expressed eloquently, most notably by Vannevar Bush ("As We May Think") and Douglas Engelbart ("Augmenting Human Intellect"). Chandler, an open-ended open-source project, follows in that same tradition.

Mitchell Kapor, the founder of OSAF and the hero of our story, was the designer behind Lotus 1-2-3, the first widely successful spreadsheet in the business world. Lotus 1-2-3 made Kapor a millionaire several times over. His next product, "Agenda", was supposed to take personal information management to the next level -- and, from the reviews, it did! -- but it never really took off, and Lotus dropped it like a hot potato (see this review by James Fallows, which praises Agenda but also gives a good idea of its difficulties). But Agenda remained on Kapor's mind, and with Chandler he tried to go back to the spirit that had inspired him to build it.

What was this spirit? PIMs like Microsoft Outlook separate their content into silos: there's email, there are tasks, there are lists, there are action items, and so on. But of course there are no such neat categories in human activity, where everything is also something else. My email is also a task (not least because I actually have to type a reply to it), a project involves emails, and emails are a way to store and access files and documents. In other words, the decomposition of the artifacts of human activity, while convenient, is also just that: an analytic convenience. Agenda aimed to go beyond silos -- as does Chandler: a personal tool that would not be boxed into silos, something that could truly capture the way humans actually work, and thereby help them do their tasks better.

Of course, all this is easier said than done. Programming itself is all about silos. Formalization, which is after all what programs are about, requires us to logically decompose processes and methods into categories. The better the categories are defined, the better a program will work. Overlapping categories and amorphous boundaries, while not unimplementable, are almost guaranteed to break down in some scenario or other. Still, Chandler was almost a romantic project, one that followed in the tradition of Bush and Engelbart, so it had an open-ended vision, something most software projects can't really afford.
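To make the contrast concrete, here is a small illustrative sketch; it is my own, not Chandler's actual code, though the word "stamp" loosely echoes Chandler's vocabulary. In a silo-style model an item must be exactly one kind of thing; in the second model a single item can carry several overlapping facets, so the same message can appear in the inbox and on the to-do list.

    # Silo-style model: an item is exactly one kind of thing.
    class Email:
        def __init__(self, sender, body):
            self.sender, self.body = sender, body

    class Task:
        def __init__(self, description, due):
            self.description, self.due = description, due

    # Facet-style model: one underlying item, several overlapping roles.
    class Item:
        def __init__(self, title):
            self.title = title
            self.facets = {}                 # facet name -> facet data

        def stamp(self, facet, **data):
            self.facets[facet] = data

        def has(self, facet):
            return facet in self.facets

    msg = Item("Re: quarterly report")
    msg.stamp("email", sender="alice@example.com")
    msg.stamp("task", due="2008-02-15")

    items = [msg]
    inbox = [i for i in items if i.has("email")]   # shows up in the mail view...
    todo  = [i for i in items if i.has("task")]    # ...and on the to-do list

The catch, of course, is the one described above: the moment facets overlap, every view of the data has to cope with items that are partly one thing and partly another.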

Chandler could afford that vision because it was administered by Kapor's OSAF, the Open Source Applications Foundation. This was to be a non-profit project, a project instigated solely by what is called "the programmer's itch", the urge to get something done that starts off with a personal wish or frustration ("I wish I had software that did this" or "Damn, why can't this calendar software do that?"). The non-profit part (which of course depended heavily on Mitch Kapor's deep pockets) meant that the Chandler team was liberated from the constraints that come with programming for profit.

But there was a third component that made Chandler special. Chandler was to be open-source, and therefore it was meant to harness the forces of peer production. This meant that a core Chandler team would work on the code while, at least in theory, also relying on a vast array of programmers all over the world. In short, it would take the best of the two modes of production made famous by Eric Raymond's essay "The Cathedral and the Bazaar". One mode, embodied by the cathedral, is a top-down, command-and-control approach, in which a plan is drawn up and then systematically carried out. The other, embodied by the bazaar, is bottom-up: a group of people find each other and self-organize, without any command-and-control structure. The development of Linux followed the bazaar model -- and the main reason for its success was that the internet offered a wide range of tools for a lot of people, all over the world, to collaborate. Raymond's conclusion is what he called Linus's Law: "Given enough eyeballs, all bugs are shallow".

So, did Chandler's combination of the cathedral and the bazaar work? It's still too early to tell: Rosenberg's book ends with the release of Chandler 0.5 (the most recent version is 0.7; the fully functional 1.0 release is still a long way off). But the book does illustrate the problems of an open-ended software project. Because the Chandler team could not agree on a set of requirements, they could not get a batch of workable code out quickly -- and releasing workable code is a crucial step in harnessing the forces of peer production. Chandler never quite got the participation that a peer-produced project generally needs, and its team slogged on, making compromises, slowly and steadily.

I could go on and on, but I'll stop now. Try Chandler out; I liked it, even if I can't quite see myself using it just yet. What did I take away from Dreaming in Code? Perhaps, if nothing else, this quote from Linus Torvalds -- his advice for people starting large open-source projects -- burned into my brain (pg 174):

"Nobody should start to undertake a large project, " Torvalds snapped. "You start with a small trivial project, and you should never expect it to get large. If you do, you'll just overdesign and generally think it is more important than it likely is at that stage. Or, worse, you might be scared away by the sheer size of the work you envision. So start small and think about the details. Don't think about some big picture and fancy design. If it doesn't solve some fairly immediate need, it's almost certainly overdesigned."

Words from the wise, indeed.

[X-posted on "Crack a book"]

Thursday, January 10, 2008

Wikinomics: a review

Wikinomics: How Mass Collaboration Changes Everything is a full-throated call to action for senior managers. The authors, Don Tapscott and Anthony D. Williams, argue that the new technologies of the web, and the new forms of production they engender, will change the nature of the corporation, at least as it has traditionally been conceived. This is not really a book for researchers: its tone is evangelistic, and the authors end almost every chapter with a bullet-point list of guidelines for the senior manager intent on changing the way her company works. (An example, from page 176: “Use industry-university partnerships to shake up product road maps”, “Make sure the collaboration is win-win”, etc.)

Nevertheless, the book is interesting, and a more-than-brief summary may be in order. The modern corporation has a strongly demarcated boundary, a clear line that separates the people (and materials and assets) “within” from those “without”. Of course, the modern corporation is not a monolith — there is a vast, confusing network of business partners, subsidiaries, and the like — but the boundary is still clear, at least for its employees (sometimes, for an employee, the boundary may even be that of her own group). The book’s central thesis is that this boundary needs to be made porous; that, in fact, this change is inevitable, and companies must make it or perish. The point is made in starkly economic terms for corporations: the authors are not arguing for the adoption of “peer production” because it enhances human freedom or decentralizes the production of information (as Yochai Benkler does in his “The Wealth of Networks”, although his book is not pitched to corporations) but because it is necessary for innovation and growth, and perhaps even survival. Whether or not this is true, it is definitely a good rhetorical strategy, particularly for those who wish to bring new methods of information production into the modern enterprise and run into the usual barriers (resistance to change, inertia, etc.).

The authors spend seven chapters of the book detailing seven ways in which companies can evolve and how they can harness the new technologies of communication and production now at their disposal.

(1) The first is to harness the power of what Yochai Benkler has called “peer production” (also see here). Peer production here refers to the way in which the Linux operating system and Wikipedia were created: by distributed users who self-organized (i.e. without any command-and-control structure) via some communication channel (in this case, the web). The authors point to the example of IBM, which let its employees wade into the open-source community (as developers for Linux and other open-source programs) and integrated several open-source applications into its own proprietary products.

(2) The second is what the authors call “Ideagoras”: a marketplace for ideas. Using the example of Goldcorp CEO Rob McEwen, who opened up the company’s geological data to the public, which then led to the discovery of more gold deposits (the company’s in-house experts had been unable to pinpoint anything), the authors propose that by tapping expertise outside their boundaries, corporations stand to gain more than they think. “Idea markets” are platforms such as Innocentive and yet2, where corporations (or people) can post “questions in search of answers” or “answers in search of questions”. In a world with a vast number of under-utilized ideas, this seems to make perfect sense.

(3) The third is to view consumers as “prosumers” (the authors have a thing for inventing bad names) and to sell, instead of finished products, something more like “hackable” products. This means providing things like APIs and manuals (systematically, as a core part of the product) so that users can pro-actively modify their products and then (perhaps) share them with other users, creating, in the process, a rich community of interacting users. Here the example is Lego Mindstorms, with its hive of users/hackers who are forever tinkering with the product and sharing the results with the community at large.

(4) The fourth, which they call “The New Alexandrians”, looks at the story of the sequencing of the human genome. The sequencing could have been done by each company staking a claim to some stretch of the genome (a court ruling had effectively made sequences patentable). Instead, companies began releasing the sequences they discovered into the public domain, creating a huge database that could be used by researchers anywhere in the world. This was, and is, the key to the advances in the field and could arguably lead to significant advances in the treatment of disease. The lesson for corporations, the authors argue, is that research can often be done better in collaboration with those outside their own boundaries. They point to business-university research collaborations as something most companies should do more of — an example being Intel’s Open University Network, which gives IP rights to all parties involved in the research.

(5) Fifth: platforms for participation. This is the most interesting chapter of the book and, I think, the most relevant for our work. The authors take up what we today call “mashups”. Mashups are possible because companies like Google and Amazon offer “services” instead of portals, services that can then be used by other programmers and companies. By opening up their services (in the form of open APIs), corporations such as Google effectively harness a wide range of programmers who are not technically Google employees. This gives Google and Amazon visibility, increases the value of their services, and increases the chances that those services will be used. More corporations, the authors argue, need to follow this “open API” model.
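For a sense of what “services instead of portals” means in practice, here is a minimal sketch of calling such a service from Python. The endpoint, parameters and field names are hypothetical, not a real Google or Amazon API; the point is only that a plain HTTP request returning structured data is all a mashup needs to build on.

    import json
    import urllib.parse
    import urllib.request

    def lookup_place(query):
        # A hypothetical open API: a GET request with query parameters,
        # returning JSON that any third-party programmer can reuse.
        params = urllib.parse.urlencode({"q": query, "format": "json"})
        url = "https://api.example.com/places?" + params
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    # A mashup is then just glue code combining two such services, e.g.:
    #   place = lookup_place("coffee near campus")
    #   plot_on_map(place["lat"], place["lon"])   # second, equally hypothetical API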

(6) Sixth, “the collaborative shop floor” gives the authors’ take on how peer production can influence manufacturing. The examples are Boeing and car manufacturers like BMW, which rely on a surprising amount of distributed production and allow their suppliers to make significant changes to the actual product.

(7) Seventh and last, the workplace of the future, which the authors call “the wiki workplace”. How will the adoption of new forms of web-based communication (IM, chat, VoIP, email, wikis, blogs, etc.) affect the nature of collaboration in the workplace? A truly interesting example is Geek Squad, now a part of Best Buy. One of the ways in which squad members kept in touch with each other was through massively multiplayer online games. This was not by design; it just turned out that way. The significant thing is that once this was discovered, the company actually encouraged it, with Geek Squad founder Robert Stephens joining in from time to time. Best Buy itself is engaged in some organizational changes: for example, allowing employees who actually come into contact with customers to contribute to the company’s strategy, and giving individual stores significant responsibility for designing their displays. All of these involve the use of new forms of communication such as wikis and blogs — hence the “wiki workplace”. The authors encourage more self-organization within the enterprise, with employees forming groups and disbanding when the task is done, rather than the usual arrangement of rigid teams and sub-teams. This is easier said than done, and it can only be accomplished with workplace communication mechanisms that encourage this form of social self-organization.

In the next post I will try and list some points that could be potentially useful in the design of collaborative applications for the enterprise.