Wednesday, July 28, 2010

What's good for me is good for you

Yes, what's good for the publishing industry is also good for us. How could we be so stupid and not know this?
Many would argue that the efflorescence of new publishing that Amazon has encouraged can only be a good thing, that it enriches cultural diversity and expands choice. But that picture is not so clear: a number of studies have shown that when people are offered a narrower range of options, their selections are likely to be more diverse than if they are presented with a number of choices so vast as to be overwhelming. In this situation people often respond by retreating into the security of what they already know.


At the Book Expo in New York City, Jonathan Galassi, head of Farrar, Straus and Giroux, spoke for many in the business when he said there is something "radically wrong" with the way market determinations have caused the value of books to plummet. He's right: a healthy publishing industry would ensure that skilled authors are recompensed fairly for their work, that selection by trusted and well-resourced editors reduces endless variety to meaningful choice and that ideas and artistry are as important as algorithms and price points in deciding what is sold.
I always love how a "we are under attack and need to survive" argument is never made; instead, the argument becomes -- without any self-consciousness whatsoever -- "we are under attack, and our demise will result in the decline of civilization, and will therefore be bad for EVERYONE."

Friday, July 23, 2010

Algorithmic culture and the bias of algorithms

Via Alan Jacobs, I came across a thought-provoking blog post by Ted Striphas on "algorithmic culture."  At issue is the algorithm behind Amazon's "Popular Highlights" feature.  (In short, Amazon collects all the passages in its Kindle books that have been marked, collates this information and displays it on its website and/or on the Kindle.  So you can now see what other people have found interesting in a book and compare it with what you found interesting.)
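Amazon's actual aggregation algorithm is proprietary (which is precisely Striphas's complaint below), but the basic idea can be sketched. Here is a toy stand-in -- all names are mine, not Amazon's -- that counts how many readers marked each passage and surfaces the most popular ones:

```python
from collections import Counter

def popular_highlights(highlights, top_n=3):
    """Toy stand-in for a 'Popular Highlights' aggregator.

    highlights: list of (start, end) character ranges that readers
    have marked in one book.
    Returns the top_n ranges marked by the most readers.

    The real algorithm presumably also merges overlapping-but-not-
    identical ranges; this sketch requires exact matches.
    """
    counts = Counter(highlights)
    return counts.most_common(top_n)

# Five highlights from four hypothetical readers:
marks = [(10, 55), (10, 55), (200, 240), (10, 55), (300, 320)]
print(popular_highlights(marks, top_n=2))
# -> [((10, 55), 3), ((200, 240), 1)]
```

Even this toy version embeds design decisions -- exact match vs. overlap, how many to show -- which is the sense in which no such algorithm is neutral.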

Striphas brings up two problems, one minor, one major.  The minor one:
When Amazon uploads your passages and begins aggregating them with those of other readers, this sense of context is lost. What this means is that algorithmic culture, in its obsession with metrics and quantification, exists at least one level of abstraction beyond the acts of reading that first produced the data.
This is true, but it could easily be remedied.  Kindle readers can also annotate passages in the text, and if they feel like it, they could upload their annotations along with the passages they have marked.  That should supply the context of why the passages were highlighted. (Of course, this would bring up another thorny question: what algorithm to use to aggregate these annotations.)

But he brings up another far more important point:
What I do fear, though, is the black box of algorithmic culture. We have virtually no idea of how Amazon’s Popular Highlights algorithm works, let alone who made it. All that information is proprietary, and given Amazon’s penchant for secrecy, the company is unlikely to open up about it anytime soon.
This is a very good point and it brings up what I often call the "bias" of algorithms.  Algorithms, after all, are made by people, and they carry all the biases that their designers put into them.  In fact, it's wrong to call them "biases," since these assumptions are precisely what make the algorithm work!  Consider Google's search engine.  You type in a query and Google claims to return the links that you will find most "relevant."  But "relevant" here means something different from the way you use it in your day-to-day life.  "Relevant" here means "relevant in the context of Google's algorithm" (a.k.a. PageRank).
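The published core of PageRank (Google's production ranking is far more elaborate and secret) makes this concrete: "relevance" is *defined* as a recursive link-popularity score. A minimal power-iteration sketch:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Basic PageRank by power iteration.

    links: dict mapping each page to the list of pages it links to.
    The 'bias' is built right in: a page matters if pages that
    matter link to it.  That definition IS the algorithm.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Everyone starts each round with the 'random jump' share.
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:  # dangling page: spread its rank over all pages
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# Tiny web: A links to B and C, B links to C, C links back to A.
graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # C, which collects links from both A and B
```

Whether this notion of importance matches *your* notion of relevance is exactly the question the "bias" framing raises.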

The problem is that this distinction is lost on people who just don't use Google all that much.  I spend a lot of time programming and Google is indispensable to me when I run into bugs.  So it is fair to say that I am something of an "expert" when it comes to using Google.  I understand that to use Google optimally, I need to use the right keywords, often the right combination of keywords along with the various operators that Google provides.  I am able to do this because:
  1. I am in the computer science business and have some idea of how the PageRank algorithm works (although I suspect not all that much), and 
  2. I use Google a lot in my day-to-day life.  

I suspect that (1) isn't at all important but (2) is.  

But (2) also has a silver lining.  In his post, Striphas comments:
In the old paradigm of culture — you might call it “elite culture,” although I find the term “elite” to be so overused these days as to be almost meaningless — a small group of well-trained, trusted authorities determined not only what was worth reading, but also what within a given reading selection were the most important aspects to focus on. The basic principle is similar with algorithmic culture, which is also concerned with sorting, classifying, and hierarchizing cultural artifacts. [...] 

In the old cultural paradigm, you could question authorities about their reasons for selecting particular cultural artifacts as worthy, while dismissing or neglecting others. Not so with algorithmic culture, which wraps abstraction inside of secrecy and sells it back to you as, “the people have spoken.”
Well, yes and no.  There's a big difference between the black box of algorithms and the black box of elite preferences. Algorithms may be opaque, but they are still rule-based.  You can still figure out how to use Google to your own advantage by playing with it.  For any query you give it, Google will return the exact same response (well, for a certain period of time at least).  So you can experiment and find out what works for you and what doesn't.  The longer you play with it, the more familiar you become with its features, the less opaque it seems.

Not so with what Striphas calls "elite culture," which, if anything, is far more opaque and far less amenable to this kind of trial-and-error practice.  (That's because the actions of experts aren't really rule-based.)

I am not sure where I am going with this, and I am certainly not sure whether Amazon's Kindle aggregation mechanism will become as transparent as Google's search algorithm through trial and error, but my point is that it's too soon to give up on algorithmic culture.

Postscript: My deeper worry is that when we reach the point where algorithms are used far more than they are now, the world will be divided into two kinds of people: those who can exploit the biases of an algorithm to make it work well for them (as I do with PageRank), and those who can't.  It's a scary thought, although since I have no clue what such a world would look like, this is still an empty worry.

Monday, July 19, 2010

The problem with the problem with behavioral economics

There's a lot to like in George Loewenstein and Peter Ubel's op-ed in the New York Times on the limits of behavioral economics, and it's possible to draw various conclusions from it.  But the piece is, at heart, just a good old-fashioned moral criticism of the government (and the Democratic Party) for not doing the "right" thing, and indirectly, of democratic politics in general.

Loewenstein and Ubel’s op-ed is mostly aimed at warning people that behavioral solutions are no panacea. I am only a modest consumer of this research so I cannot evaluate all their claims. Yet, it strikes me that if they are right, their argument is really quite damning for the behavioral economics revolution. Essentially, they assert that traditional economic analysis has ultimately much more relevance for the analysis of major social problems and for finding solutions to them. Behavioral economics can complement this but cannot be a viable alternative. Within political science and other social sciences the insights of behavioral economics are sometimes interpreted as undermining the very foundations of classical economic analysis and warranting an entirely different approach to social problems. At the very least, the op-ed is a useful reminder that careful scrutiny of effect sizes matters greatly. 
I suspect this is right.  But this has far more to do with the disciplinary matrix of economics than it has to do with the insights of the behavioral revolution.  I am no economist, but as I understand it, the idea that human beings use a certain form of cost-benefit analysis to make economic decisions (homo economicus) forms the cornerstone of traditional economics.  According to this theory, people make decisions that maximize benefits and minimize costs.  But this, of course, all depends on what counts as a cost and what counts as a benefit, about which the homo economicus model says nothing.

Initially, at least, costs and benefits were thought to be monetary.  Increasingly, we are beginning to realize that we need to factor culture into these costs and benefits.  And while behavioral economics has helped us understand that there are costs and benefits that are not monetary in nature, it has not changed the cost-benefit model itself.  In that respect, it has in no way undermined the foundations of economic theory.
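The point can be made mechanically. In the sketch below (a toy illustration, not anything from the op-ed), the decision rule -- pick the option with maximum utility -- never changes; only the terms that enter the utility function do. The names and numbers are invented:

```python
def choose(options, utility):
    """The rational-actor skeleton: pick whatever maximizes utility.

    What counts as 'utility' is deliberately left open -- that is
    the model's flexibility.
    """
    return max(options, key=utility)

# Purely monetary utility (hypothetical salaries):
salary = {"banker": 90_000, "teacher": 40_000}
print(choose(salary, lambda job: salary[job]))  # banker

# Add a non-monetary 'cultural' term, expressed in the same units,
# and the unchanged decision rule picks differently:
meaning = {"banker": 0, "teacher": 60_000}
print(choose(salary, lambda job: salary[job] + meaning[job]))  # teacher
```

Behavioral economics, on this reading, changes what goes into `utility`, not the `max` itself.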

As one reviewer puts it (scroll down to read):
At root, the rational actor model says "people act to maximize utility." Utility is left more or less undefined.
Thus, the model really says "people act to maximize something." Or put differently, "There exists a way to interpret human behavior as a maximization of something."  Although it does not look like it, this is a statement about our ability to simulate a human using an algorithm. It is on par with the statement that humans are Turing machines.
This model will never lose, because it is very flexible. A challenge to the model will always take the form of a systematic or nonsystematic deviation between some specific "rational actor" model and the true actions of a human. But the challenge will always fail:
- A systematic deviation will by definition always be combatted by enriching the rational actor model to eliminate the deviation.
- A nonsystematic deviation is explainable as "noise" or "something we don't understand yet."

[...] There is really no way to beat the rational actor model, because it is really the outline of a research program, it is not a fleshed-out model. And the research program is the correct one, by the way.
Being a Kuhnian, I wouldn't say that this research program is the "correct" one.  But it's an extraordinarily productive one, one that gives its practitioners the ability to construct a variety of problems and solve them.  In that respect, behavioral economics cannot be -- and never was -- an alternative to it.