Tuesday, January 29, 2013

Immigration, economic numbers and the politics of culture

I usually don't write about US politics on this blog, but sometimes it helps illustrate some interesting points about the role of science in public life.

Immigration reform is now on the agenda of the US Congress.  In response, Matt Yglesias (using a chart that illustrates the results of two econometric analyses, see above) writes today that:

Unfortunately, immigration scolds seem to be excessively afraid to voice their real concerns about this, which makes it difficult to address them with either evidence or policy concessions. Instead, we're stuck in a mostly phony argument about wages that does nothing to ease people's real fears about nationalism and identity.
A set of interesting claims is being put forward here and, if I understand it right, a rather strange use of Marx's ideas about the base and superstructure.  For Marx, the base was economic relations and the relationship of different groups of people to the means of production.  The superstructure was culture; this was seen as deriving from the economic base, and often served to reproduce it (through what Marx called false consciousness and ideology).  Implicit in Yglesias's assertion is the idea that the effect of immigration on wages uncovered by econometricians corresponds, in some sense, to what a certain set of Americans feels.  A cumulative economic effect of immigration on wages that can be uncovered only through the efforts of econometricians is treated as an objective measure (in Theodore Porter's sense) that maps onto the subjective states of people who oppose immigration.  Unfortunately, the econometricians find a low or negligible effect; therefore people who oppose immigration must be lying: their real reasons are cultural--about nationalism and identity--rather than economic.  We would have a better, more honest discussion, he suggests, if this sham argument about economic impact (expressed through numbers) were dropped and the conversation concentrated on the cultural fears instead. 

Something similar happened a few days ago, but in a different context.  Blogger wunderkind Ross Douthat wrote an op-ed in the Times arguing that the United States had low fertility rates, which he argued was a problem the Government needed to think about and perhaps mitigate with policy measures (primarily by creating a "more secure economic foundation" for working-class Americans).  He concluded--ominously--by saying that policy measures could only be effective up to a point; low fertility rates were a symptom of "decadence":
The retreat from child rearing is, at some level, a symptom of late-modern exhaustion — a decadence that first arose in the West but now haunts rich societies around the globe. It’s a spirit that privileges the present over the future, chooses stagnation over innovation, prefers what already exists over what might be. It embraces the comforts and pleasures of modernity, while shrugging off the basic sacrifices that built our civilization in the first place.

Such decadence need not be permanent, but neither can it be undone by political willpower alone. It can only be reversed by the slow accumulation of individual choices, which is how all social and cultural recoveries are ultimately made.
To which Matt responded (calling this last paragraph "nutty"):
It'd be a much better country if social conservatives would stop writing things like that second paragraph and focus instead on what's in the first paragraph.
There was some predictable back-and-forth (see also this).  But his point was clear: it was far better to talk about policy levers--about which a debate can be had--than about notions of "decadence," about which debate is never possible, particularly if you don't subscribe to such notions.

All of which makes me think that Porter is right on this.  Numbers are indeed "technologies of trust."  At least in the American context, they allow arguments about public life to be made; they make possible arguments that are about objective "facts" rather than subjective "values."  Not that arguments about facts are guaranteed to be settled, but arguments can at least be made.  Whereas arguments over whether the late modern age is "decadent" or not, between people with incommensurate values, are guaranteed to go nowhere.

Which brings me back to the original post I began this discussion with, where Yglesias suggests that it would be better to have a national American conversation about the cultural anxieties around immigration (I suspect it would turn out to be equally "nutty").  Kevin Drum responds: 
Cultural insecurity and language angst are the key issues here. It doesn't matter if they're rational or not. Anything we can do to relieve those anxieties helps the cause of comprehensive immigration reform.
A public conversation to relieve cultural anxieties--would it have to take the form of numbers?  What other forms could it take--and where would it lead? 

Thursday, January 24, 2013

Gillian Tett, Meet William Cronon

On my way back from India, I read Gillian Tett's "Fool's Gold: The Inside Story of J.P. Morgan and How Wall St Greed Corrupted Its Bold Dream and Created a Financial Catastrophe."  [The book's changing subtitles are a topic in themselves!]  That awkward subtitle is perfect for the book's quite strange structure.  Tett wants to tell the story of how credit derivatives began (with J.P. Morgan's invention of the BISTRO instrument), but that's not the story most of her readers will be interested in (that would be the financial crisis).  So she tells the story of the invention of credit derivatives, and then quickly segues into the story of the financial crisis, the prime culprit of which was the application of credit derivative instruments to home mortgages.  The problem is that J.P. Morgan was not a part of this second trend; other banks, however, plunged in with relish, with consequences that we all know about.  Which puts Tett in the strange position of telling the story from J.P. Morgan's point of view when all the real action was happening somewhere else.
But no matter.  All that said, Fool's Gold is the best book I've read about the crisis. (To be fair, I've only read three others.)  First, it's about the invention of credit default swaps (CDS), rather than simply mortgage-backed securities (which, if I understand Tett right, were benign instruments dating back to the 1970s).  Second, she gives a really good account of why credit derivatives were invented, and consequently what the "shadow banking" sector is all about.  Moreover, her account convinced me that contrary to what the J.P. Morgan bankers say, the invention of the credit derivative was the key to the financial crisis, not so much because of the instrument itself, but for the reasons for which it was invented.    

Let me explain.  Essentially, the J.P. Morgan bankers invented the CDS to circumvent Basel regulations about capital requirements.  Basel rules required that banks keep a certain amount of capital on hand to insulate them from the possibility that their clients default on their loans; this requirement is in direct proportion to the "risk" that the bank has taken on.  J.P. Morgan's clients tended to be blue-chip corporations, and the risk of default was therefore minimal.  Basel regulations were hindrances that prevented Morgan from making more profits; therefore they had to be subverted.  Morgan accomplished this by creating the swap: it insured the riskiest part of its loans and thereby shifted the risk off its books.  This meant it was required to hold less capital, and could put the freed-up money to more profitable use.  (A CDS is a contract the bank makes with another party: the bank pays the party a steady fee, in return for which the party agrees to compensate the bank in the case of default.)
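To make the incentive concrete, here is a toy calculation.  All the numbers are invented; the 8% ratio is roughly the Basel I benchmark, and the risk weights are purely illustrative, not the actual regulatory figures:

```python
# Toy illustration (invented numbers) of why shifting risk off the books
# frees up capital under a Basel-style rule.

CAPITAL_RATIO = 0.08      # roughly the Basel I benchmark: 8% of risk-weighted assets

def required_capital(loans, risk_weight):
    """Capital the bank must hold against a loan portfolio."""
    return loans * risk_weight * CAPITAL_RATIO

loans = 10_000_000_000    # $10bn of corporate loans (hypothetical)

# Without a CDS: the corporate loans carry their full risk weight.
before = required_capital(loans, risk_weight=1.0)

# With a CDS: the default risk is insured by a counterparty, so a much
# lower risk weight applies (20% here, purely for illustration).
after = required_capital(loans, risk_weight=0.2)

print(f"Capital tied up before: ${before:,.0f}")   # $800,000,000
print(f"Capital tied up after:  ${after:,.0f}")    # $160,000,000
print(f"Freed for other uses:   ${before - after:,.0f}")
```

The point of the sketch is only the shape of the incentive: the more risk the bank can move off its books on paper, the more of its capital it can redeploy for profit.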

Morgan's key innovation, as Tett documents, was that it was able to standardize the credit default swap.  Rather than the slow process of finding the two parties for each CDS contract one at a time, the Morgan bankers found a way to mass-produce these instruments: their solution was "tranching."  All the loans on the books were bundled together and separated into portions with different risks.  A key part of this was that the amount of risk needed to be standardized, so that the parties to the CDS contract would know exactly the kind of debt that was being insured.  Enter the ratings agencies, which were only too happy to do this: for a nice juicy fee, they looked at the debt, applied their models to it, and estimated the amount of risk, which they then standardized into levels or tranches.  AAA was the least likely to default, BB was a little more likely, and so on.  This worked out well for corporate loans, which could be diversified, but not so well for home mortgages.

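The mechanics of tranching can be sketched in a few lines.  This is a deliberately stylized model (the tranche names and dollar figures are invented): losses in the loan pool are absorbed from the most junior slice upward, which is what lets the senior slice earn the "least likely to default" label:

```python
# Stylized sketch of tranching: losses in a loan pool are absorbed
# bottom-up, so senior tranches are hit only after junior ones are
# wiped out. All figures are invented for illustration.

def allocate_losses(tranches, total_loss):
    """Distribute pool losses to tranches from most junior to most senior.

    `tranches` is a list of (name, size) pairs ordered senior-first.
    Returns a dict mapping each tranche name to the loss it absorbs.
    """
    losses = {}
    for name, size in reversed(tranches):  # junior tranches take losses first
        hit = min(size, total_loss)
        losses[name] = hit
        total_loss -= hit
    return losses

# A hypothetical $100m pool, senior-first: the AAA slice is largest and safest.
pool = [("AAA", 80_000_000), ("BB", 15_000_000), ("equity", 5_000_000)]

# A 6% pool loss wipes out the equity slice and dents BB, but leaves AAA
# untouched -- which is why the senior tranche can be rated so highly.
print(allocate_losses(pool, 6_000_000))
# {'equity': 5000000, 'BB': 1000000, 'AAA': 0}
```

The same structure also hints at the failure mode: the ratings only hold if the model of how large and how correlated the pool's losses can get is right, which is exactly where home mortgages broke the scheme.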
Reading Tett's account of the invention of BISTRO took me back to William Cronon's story of the grain trade in Nature's Metropolis.  Cronon begins by describing how the grain trade worked prior to the railroads.  The grain would be stored in sacks and transported manually (across the river and across Lake Michigan) by the traders (who were usually small shop-keepers), who would then sell it in other cities.  The sack in which the grain was stored was crucial to its transportation: the grain changed boats multiple times during its journey, and the sacks of each seller were kept separate.  The sack was also the key to the grain's exchange value: the grain was examined by a buyer, a price mutually agreed on based on its quality, and the lot then sold.

The building of the railroads changed this.  Because railroads were built privately, and at great capital cost, they concentrated on maximizing the shipment of goods from the hinterland to the city (and they could even operate in the winter!).  This meant a rapid turnover – quickly emptying a carriage so that it could be used again for a different trip – and the sack of grain became an obstacle to this.  The railroads solved the problem by inventing the steam-powered grain elevator, which made the loading of grain from warehouses into railway cars easy and efficient.  The problem was that there was no room for sacks of grain in this scheme; to maximize profit, all the bins in the elevator needed to be filled with grain, and therefore grain from different farmers had to be mixed.  Thus the first step in the chain of standardization took place: the local merchant (or even farmer) was separated from the grain he produced.

Yet mixing the grains of different farmers together needed one further step: the creation of different "grades" of grain so that, even if mixed together, its price could still be determined.  The Chicago Board of Trade, formed in 1848 and whose membership consisted of Chicago grain traders, came up with such a classification, and over time this classification started to be widely used.  Thus a further distancing of the grain from its trade took place.  Traders no longer had to sell grain physically.  Instead an elevator receipt – which showed that a certain quantity of a certain grade of grain had been deposited – could itself be traded amongst traders.  The receipt could be used to buy grain from a warehouse, but this was not the grain that was actually sold; instead, it was a functionally equivalent substitute.

A final step was the role of the telegraph in setting the prices of grain.  The telegraph allowed communication between, say, the Chicago and New York markets, which meant that the price of grain in New York could affect Chicago's.  The standardization of the grain also helped here: it was no longer necessary for a buyer in New York to inspect the grain he was being sold; instead he would know its quality because it had been "rated" as being of a certain grade.

All of these trends – the standardization of the grain into grades, its effective separation from the particulars of its production (i.e., its commoditization), and the fact that it was now easy for buyers and sellers to make contracts over long distances – culminated in a large volume of futures trading in grain, and sometimes led to "corners," artificial shortages created by speculators.  Cronon sees the increase in the grain trade due to standardization as responsible for the change in the landscape around Chicago.

I will stop here, but the parallels between the grain trade and the derivatives trade are clear.  Just as the grain became effectively separated from its producer, the loan issued by the bank became separated from the party the loan was given to.  And just as the grain of different farmers was now mixed together in standardized bins of different "grades," so also debt from different parties was mixed together and labeled along a standardized spectrum of risks.  And this standardization helped along a further trade in these instruments themselves... Cronon calls this the "logic of capital."  And so it is.  But the commoditization of the grain trade grew from the railroads' insistence on maximizing turnover: this was the only way they saw of making profits to offset their high capital costs.  The banks too wanted higher profits, but their obstacle was not the movement of goods but capital regulations.  Credit derivatives, it is very clear from Tett's story, were created to do an end run around regulations.

All of which is not to say that standardization is a bad thing, per se.  But rather to say that standards need regulators.  When the standardization doesn't work, when the standards stop reflecting what's "inside," then we all pay a price. 

Friday, January 18, 2013

Is the neo-Darwinian synthesis intuitive?

In a sober [1], concise, careful and clear review of Thomas Nagel's Mind and Cosmos, H. Allen Orr quotes a lengthy passage from the book which serves to ground Nagel's subsequent arguments against neo-Darwinism in favor of a "teleological" model: 

I would like to defend the untutored reaction of incredulity to the reductionist neo-Darwinian account of the origin and evolution of life. It is prima facie highly implausible that life as we know it is the result of a sequence of physical accidents together with the mechanism of natural selection. We are expected to abandon this naïve response, not in favor of a fully worked out physical/chemical explanation but in favor of an alternative that is really a schema for explanation, supported by some examples. What is lacking, to my knowledge, is a credible argument that the story has a nonnegligible probability of being true. There are two questions. First, given what is known about the chemical basis of biology and genetics, what is the likelihood that self-reproducing life forms should have come into existence spontaneously on the early earth, solely through the operation of the laws of physics and chemistry? The second question is about the sources of variation in the evolutionary process that was set in motion once life began: In the available geological time since the first life forms appeared on earth, what is the likelihood that, as a result of physical accident, a sequence of viable genetic mutations should have occurred that was sufficient to permit natural selection to produce the organisms that actually exist?  [my emphasis]. 
I think there is something to this.  Not because I think Nagel's right; Orr's review disposes of his objections quite convincingly.  But there is something non-intuitive about the neo-Darwinian synthesis.  A simple mixture of random mutations and natural selection that leads to mammals?  Creatures with complicated mechanisms like eyes, an immune system, a circulatory system (and elaborate processes of clotting and repair!) and on and on -- how on earth could all of that arise randomly?

I felt like this for a while.  Obviously not badly enough that I became a creationist or anything.  But it was puzzling.  What solved it for me was reading the first few chapters of Daniel Dennett's book Darwin's Dangerous Idea: Evolution and the Meanings of Life.  Dennett thinks that natural selection actually is an algorithm (or is like one; sometimes the distinction is unclear).  "Darwin had discovered the power of the algorithm" (p. 50).  Evolution was not designed to produce us per se, he says, but there's no reason to doubt that it is an algorithmic process that has in fact ended up producing us.

To illustrate his point, Dennett gives us this nifty little diagram.  Think of what natural selection does, he says, as an example of a tennis tournament draw.  Whatever happens, a tournament has to have a winner!  And natural selection picks winners; there is nothing inevitable about these winners -- they are contingent -- but the process of picking winners is inexorable.  And this happens over long, long, periods of time -- millions and millions of years. 
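Dennett's tournament metaphor is easy to state as actual code.  Here is a minimal sketch (assuming, for simplicity, a field whose size is a power of two): whatever decides the individual matches, even a coin flip, the algorithm grinds out exactly one winner.

```python
import random

# Sketch of Dennett's point: a single-elimination tournament always
# yields a winner, even though *which* competitor wins is contingent
# on the match-ups. Assumes the field size is a power of two.

def tournament(players, beats):
    """Run a single-elimination draw; `beats(a, b)` decides each match."""
    while len(players) > 1:
        # Pair off adjacent players and keep each match's winner.
        players = [a if beats(a, b) else b
                   for a, b in zip(players[::2], players[1::2])]
    return players[0]

# Even with coin-flip matches, the process inexorably produces a champion.
field = [f"player{i}" for i in range(16)]
champion = tournament(field, beats=lambda a, b: random.random() < 0.5)
print(champion)  # always exactly one winner; which one is contingent
```

The winner is contingent; the existence of a winner is not.  That is the distinction the tournament draw is meant to carry.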

So first, you have an algorithm that picks a winner.  And then you have an algorithm that picks winners over large time-scales.  That nailed it for me.  As a programmer, you are constantly faced with time and space constraints when you program. Think about writing a program to play chess where the program essentially tries to look as far ahead as it can.  How far down the tree of moves should the computer look?  Ideally -- all the way down!  But wait - then it'll take forever for the program to make the next move, so we need to compromise.  Or maybe we can store everything in the memory all at once so that it won't take forever?  No luck again because memory is limited. 
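The time constraint is brutal in practice.  A back-of-the-envelope sketch (35 is a commonly cited average number of legal moves per chess position) shows why "all the way down" is hopeless:

```python
# Why a chess program can't look "all the way down": with roughly 35
# legal moves per position, the game tree explodes exponentially.

def nodes_searched(branching, depth):
    """Total positions examined by a full search to the given depth."""
    return sum(branching ** d for d in range(depth + 1))

for depth in (2, 4, 6, 8):
    print(f"depth {depth}: {nodes_searched(35, depth):,} positions")
# By depth 8 the count is already in the trillions -- and a full game
# can run well past 80 plies, so storing it all is equally hopeless.
```

Evolution, on Dennett's picture, is the one search that never has to make this compromise.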

When you work with programs, the power of algorithms is evident.  And the constraints on algorithms are all too visible.  Evolution then is like an algorithm with no constraints; it gets infinite time and infinite space to do its work.  And for something like that, anything is possible -- mammals, conscious mammals, insects, plants, whatever.  It doesn't seem non-intuitive at all.

Now obviously, there's a lot of flimflammery to Dennett's thesis.  Is evolution actually an algorithm?  Or is it like an algorithm?  And Dennett clearly has a lot more up his sleeve: his point is not to make evolution intuitive (that was a byproduct for me; not everyone works with algorithms), but rather to show that evolution is indeed something like "universal acid" -- a concept that can explain the deepest philosophical mysteries: the mind-body problem, consciousness and so on.  Critics, quite rightly, beg to differ.

No matter.  My point was to show that there are ways in which natural selection's workings can be made to seem entirely intuitive.  The trick is to find the metaphor that works for you. 


  1. I use the word sober for a special reason.  You would think Orr would have more in common with Daniel Dennett, who at least is not rejecting the neo-Darwinian synthesis the way Nagel is (by starting with doubts from intuition).  (In fact, one might argue that Dennett assigns it far too much significance.)  But while Orr's review of Nagel is scrupulously respectful, his review of Darwin's Dangerous Idea, the book that's the subject of this blog-post, is quite scathing.  His correspondence with Dennett is even more so.  (Dennett doesn't get much love at the New York Review of Books.)

Monday, January 14, 2013

Market Socialism Vs. Market Capitalism (and the Centrally Planned Economy)

Seth Ackerman’s Jacobin essay (via Yglesias) is well worth your time, although I have to say that I don’t buy it at all as a wholesale alternative to our current system; it seems to me, though, that it could well be a model for certain parts of our system.  Ackerman starts off by suggesting that the problem with Communist or centrally planned economies was not that they were inefficient (by standard economic measures of efficiency/equilibrium) but that their firms lacked autonomy.  He suggests instead a system where the management of firms is autonomous but the financial system is socialized, so that the “public” owns the firms collectively but does not really manage them.  He suggests (a) that this system is superior to the traditional social democratic apparatus, where the role of Government is seen to be regulatory, i.e., erecting checks and balances on the system of capitalist profit-seeking while still treating profit-seeking as the engine of development and innovation.  That arrangement, he argues, is never really sustainable because it ignores politics and the relative power of capital.  And (b) that this is the system we’ve been moving toward all along.  Capitalism started off from a place where the management of the firm and the owners of capital were the same, and moved to one where the two were different (a.k.a. the managerial corporation).  The logical outcome, he suggests, is one where the shareholders themselves are the “public” and are publicly accountable.  His “market socialism” is thus a logical culmination of capitalism itself (a la Marx).
What is needed is a structure that allows autonomous firms to produce and trade goods for the market, aiming to generate a surplus of output over input — while keeping those firms public and preventing their surplus from being appropriated by a narrow class of capitalists. Under this type of system, workers can assume any degree of control they like over the management of their firms, and any “profits” can be socialized– that is, they can truly function as a signal, rather than as a motive force. But the precondition of such a system is the socialization of the means of production — structured in a way that preserves the existence of a capital market. How can all this be done?

Start with the basics. Private control over society’s productive infrastructure is ultimately a financial phenomenon. It is by financing the means of production that capitalists exercise control, as a class or as individuals. What’s needed, then, is a socialization of finance — that is, a system of common, collective financing of the means of production and credit. But what does that mean in practice?
You should all just read the whole thing.  But let me point out what I think is the main flaw in this model (although admittedly the devil is always in the details).  His proposal is essentially one where the economy consists of a huge, dominant public sector, but where the management of that sector is radically separated from its ownership (which is “public”).  I think this is not as easy as he thinks it is.  In particular, the relationship between the management (and workers) and the shareholders (the public) will now be mediated through the political process.  Ackerman thinks that by eliminating the profit-mechanism his system will make prices more rational or efficient, but I suspect that the political process will step in and provide its own irrationality.  In other words, Ackerman’s system, which he sees as the golden middle between profit-seeking capitalism and centrally planned socialism, may actually end up looking like one or the other eventually.

Monday, January 7, 2013

Performativity, Realism and Social Construction

[This post is going to be a little abstract but I hope, not too jargon-ridden.  I wrote it mostly to clear my own head.  But hopefully, it's useful to others too.]

A much-debated concept in contemporary Science and Technology Studies (STS) is performativity.  Performativity was explored first by Robert Merton in his discussion of what is sometimes called a "self-fulfilling prophecy."  The classic example is the bank run.  A line forms outside the bank, made up of people who think the bank is going bust and who'd like their money back.  These people are seen by others, who join the queue, and so on and on until the bank actually goes bankrupt because all its depositors demand their money back.  Whether the bank was really going to go bankrupt before that initial group of people queued up to demand their money is largely beside the point.  A bank run is a self-fulfilling prophecy: a belief that brings its referent into existence. 
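Merton's bank run can even be simulated directly.  The sketch below uses a threshold model in the spirit of Granovetter's work on collective behavior (the threshold values are invented): each depositor joins the queue once the queue is already long enough to scare them, and the same bank either survives or collapses depending purely on the distribution of beliefs.

```python
# A self-fulfilling bank run as a threshold cascade: each depositor
# withdraws once the number already in line reaches their personal
# panic threshold. All thresholds here are invented for illustration.

def run_size(thresholds):
    """Return how many depositors end up in line once the cascade settles."""
    in_line = 0
    while True:
        # Everyone whose threshold the current queue meets is now scared.
        scared = sum(1 for t in thresholds if t <= in_line)
        if scared == in_line:   # fixed point: no one new joins the queue
            return in_line
        in_line = scared

# One zero-threshold depositor (someone who panics unprompted) can empty
# the bank if everyone else's thresholds line up in a chain...
print(run_size([0, 1, 2, 3, 4]))   # 5 -- a total bank run

# ...while a slightly different distribution of beliefs fizzles out,
# even though the bank itself is identical in both cases.
print(run_size([0, 2, 2, 3, 4]))   # 1 -- the run never gets going
```

Nothing about the bank's balance sheet appears anywhere in the model, which is exactly Merton's point: the beliefs alone bring the bankruptcy into existence.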

Merton used the phrase "self-fulfilling prophecy" for cases where the prophecy itself was false.  And self-fulfilling prophecies are considered, on the whole, to be bad science, at least according to normative theories of science like Karl Popper's.  Take, for instance, Popper's notion of falsifiability as the key idea that distinguishes science from mere pseudo-science.  Psychoanalysis was, according to Popper, a pseudo-science because it was not falsifiable.  The analyst could diagnose his patient as suffering from a certain psycho-sexual malaise; if the patient denied that he suffered this way, the analyst could attribute that to denial.  Thus there was no way in psychoanalysis to prove the analyst wrong.  Here the psychoanalyst's diagnosis is a self-fulfilling prophecy; by denying what the analyst has posited, I am indirectly confirming his hypothesis.  Marxism too, Popper thought, was something similar, despite its pronouncements about being scientific.  Instead of denial, Marxism used the trope of ideology and false consciousness to refute its detractors -- thereby rendering itself impervious to falsification. 

Self-fulfilling prophecies, then, are bad science by Popperian standards.  Or at least, they are not science.  Are scientific theories self-fulfilling prophecies?  This is a question one could ask, philosophically speaking, but not, I think, a very interesting one [1].  [These numbers stand for the end-notes at the end of this post.]  It is in the social sciences like economics that ideas about performativity (understood as a self-fulfilling theory performing itself into existence) start to become really interesting.  One issue explored by economic sociologists is the performativity of prices.  Are prices "real"?  Is there something out there in the world that prices correspond to -- stable, unchanging features that are outside human intervention?  Or are prices self-fulfilling prophecies?  If prices are self-fulfilling prophecies, how do we know that "bubbles" -- scenarios where prices have shot through the roof and are clearly out of sync with "fundamentals" -- exist?  As the title of Donald Mackenzie's famous book suggests, financial models are "an engine, not a camera."  That is, financial models are self-fulfilling prophecies; they bring the realities they describe into existence.

In a long, thoughtful, meditative blog-post, Kieran Healy points out the different senses in which the word performativity is used and, in particular, how one can understand the word in order to make sense of Mackenzie's argument -- without concluding that financial models are complete hoaxes.  Because, clearly, one could interpret performativity trivially -- as saying that economic models have no correspondence with reality.  (Which would mean that economists are fraudsters and economic models are a hoax.)  This is clearly a possibility, but it only turns out to be the start of the problem.  For if economic models have no correspondence with reality, whatever that is, then what really constrains these models?  Could economists just say whatever they like, and their scholarly ideas would perform the economy into existence?  Just intuitively, this seems untenable.  It is not so easy to change the world.  I may have any number of ideas about how to do things -- but there is no way for me to just "perform" them into existence.  Changing the world is hard.

In what sense, then, can economic models be said to perform, or enact, economic realities?  Here Healy draws on Wittgenstein's ideas about rules, language games and forms of life.  Briefly, in his analysis of rules, Wittgenstein shows that rules only make sense when considered against a form of life.  That is, rules take for granted a certain social form.  The rule "if the light is green, then start driving the car" can only be understood in a world of cars, traffic lights, roads and junctions.  One could modify the rule to say "if the light is green but there is a car at the intersection, start driving only after that car has left the intersection," or "if the light has just turned green, then decide depending on the traffic on the road if you can just stay at the intersection for a bit to decide which is the right way to go."  And so on.  In ordinary life, these are routinely the ways in which this rule is "followed," and there could be infinite variations on them.  Social life should not be understood as a following of rules; rather, rules need to be understood as reifications of conventions that draw on the largely taken-for-granted background of social life. 

A further distinction made by analytic philosophers is that between regulative rules and constitutive rules.  Regulative rules are rules like the ones I used as examples in the last paragraph: "if the light turns red, stop the car" or "if the light turns green, start driving."  These rules are supposed to regulate activities.  Constitutive rules, on the other hand, do not merely regulate, but constitute activities.  Thus the rules of chess, for example -- "the bishop only moves diagonally," "the rook moves in straight lines" -- actually make up the game of chess.  [A rule like "if the player's hand is on the chess-piece, then he has not yet completed the move" can be seen as a regulative rule.]  What Wittgenstein shows is that the difference between regulative and constitutive rules is a difference of degree, not of kind.  All rules are at heart constitutive rules.  They constitute a form of life, and the form of life constitutes them. 

Healy explains the performativity of economic models through the notion of transformative rules.  Some rules merely constitute a form of life -- but other, game-changing rules can actually transform the form of life itself radically.  This is like finding a "trick," say, in a card game.  If the trick is trivial, then the game remains the same.  If the trick is truly radical, then others start imitating it, and in time it transforms the card game completely. 

Economic models (like the Black-Scholes formula) are like truly game-changing tricks (or rules).  They are performative in the sense that, as they are increasingly adopted (primarily because they seem to confer an advantage on those who use them), they change the game itself -- in this case, finance.  And it is in that sense that financial models *have* truly been transformative: an engine, not a camera.

Is Healy's argument a form of social construction, or is it not?  I think it is, although arguments about whether a theory is constructionist or realist are often debates over the values and postures that theory-makers should adopt.  Wittgenstein, as I see him, was a social constructionist: he argued that all rules ground themselves in social conventions; that, at bottom, most structured activities are a matter of how we agree or disagree with each other.  Our methods for agreeing or disagreeing with each other, of checking to see whether we believe an assertion or not, differ across different fields of activity.  The BSM formula, in Mackenzie's case, "worked," in some sense -- and the fact of its working was what allowed it to be a game-changer, performing a different kind of economic reality into existence. 

Orgtheory had a separate post (part 1, part 2) from Ezra Zuckerman, who argued against "pure" social construction.  Again, his blog-posts are about the performativity of prices: how are prices constrained by more than just the beliefs of market participants?  Zuckerman suggests that a sociological theory of prices must take insights from both a realist and a constructivist perspective.  The constructivist perspective, he argues, has taught us that beliefs play an important role in price determination (why else would people talk about a "bubble"?).  But constructivists make it seem as if prices are constrained only by what he calls "subjective" factors.  He argues that there are "objective" factors that also determine or constrain prices -- and it is not enough here to say that prices are conventions, albeit not easily changeable ones.  Rather, a theory must specify what factors constrain prices -- and the list of these factors must include both subjective and objective ones. 

However, his definition of "objective" is cultural.  Objective factors, he suggests, drawing on Andrew Abbott's work, are those that resist and cannot be changed by short-term cultural work.  Thus our beliefs that Manhattan apartments and GE stock are worthwhile investments are objective: short of an earthquake destroying both Manhattan and all of GE's plants, this is a reality that is not going to change soon.  (One could probably undertake a long-term project, a social movement, to change Manhattan's status, but that might take hundreds of years, if not more.)  Prices, he therefore argues, have lower bounds (say, for an apartment in Manhattan) that are not amenable to short-term cultural work.  So there are objective features that determine/constrain prices.

This also explains, for Zuckerman, why bubbles arise and proliferate despite the existence of skeptics who warn that a bubble is under way.  These skeptics were not able to express their skepticism in practice -- which is why the prices of homes rose and rose (until they fell, in his view, because they violated the objective constraints).  A bubble happens because of a self-fulfilling prophecy that violates the objective reality (which is not amenable to short-term cultural work).  This also accounts for the presence of skeptics who doubt the bubble but are unable to do anything about it in substantive terms -- for why a bubble happens even when so many people think that there is a bubble.
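The mechanism can be caricatured in a toy simulation.  This is my own illustration, not Zuckerman's model: beliefs in continued appreciation are partly self-fulfilling, while an "objective" ceiling (the kind of constraint not movable by short-term cultural work) eventually forces a collapse:

```python
# Toy sketch (an illustration of the argument, not Zuckerman's model):
# belief-driven price growth feeds on itself until it violates an
# objective constraint, at which point the bubble pops.

def toy_bubble(p0=100.0, feedback=0.08, ceiling=200.0, steps=30):
    """Each period, participants expect recent appreciation to continue,
    and their buying makes that expectation partly self-fulfilling --
    until the price exceeds the objective ceiling and collapses."""
    prices = [p0]
    for _ in range(steps):
        p = prices[-1]
        if p > ceiling:           # the objective constraint binds
            p = ceiling * 0.5     # crash
        else:                     # self-fulfilling appreciation
            p = p * (1 + feedback)
        prices.append(p)
    return prices

path = toy_bubble()
print(max(path) > 200.0, path[-1] < max(path))  # overshoots the bound, then falls
```

Note that the skeptics have no variable in this sketch at all: knowing the ceiling exists does nothing to change the path, which is exactly the point about being unable to "express themselves in practice."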

I find Zuckerman's analysis largely persuasive [2]. But here's the rub: he would call it a realist (or at least semi-realist) account, while I think it is a through-and-through constructionist account.  Why is this?  Does this difference -- between what I see as social construction and what he sees as a form of realism -- tell us something about how these terms are used, and where the substantive difference between them lies?  I think so -- more on this below.

First, I think the really real realists would disagree with Zuckerman over his definition of objective factors as those that are not amenable to short-term cultural work.  This definition implies that objective factors are amenable to long-term cultural work (even if the success of that work can never be predicted), something with which true realists would disagree.  For them, the truly objective never changes; it is timeless.

But I think there is a second, and perhaps even more important, point on which even semi-realists like Zuckerman would differ from social constructionists like me.  That point, I think, is the variability of social worlds.  Let me explain.

Let's take the lower bound on the price of a Manhattan apartment as an example (Zuckerman's objective factor).  Suppose I am able to wind the clock back two thousand years and start again.  Or, to put it differently, assume the world is a videocassette, which I rewind 2000 years and then play again.  Will we reach the same place we are in today?  In particular, in this alternate world, will a Manhattan apartment still have that lower bound?  Will Manhattan still be Manhattan?

I suspect that the answer to this question will separate the realists from the constructivists [3].  A committed realist would argue that this alternate world will not be significantly different from the world today, because the world is full of objective, unchanging, and timeless factors.  As a constructivist, I would argue that there is a good chance this alternate world will be significantly different, because the world is shaped by human actions and social institutions just as much as it is shaped by objective realities.  And the social world is, as Barry Barnes has argued, largely a self-fulfilling prophecy: actors act based on how they are expected to act, thereby reinforcing those very expectations.  If the world is shaped by actions and by expectations of actions (i.e., beliefs about those actions), then the alternate world produced by my videocassette time machine could well be a very different one.

But a semi-realist might reply that even if Manhattan is not Manhattan, there would still be lower bounds on prices -- perhaps Chicago real-estate prices would set the real lower bound.  There would still be lower bounds on some prices, and there would still be objective factors (in Abbott's sense) that constrain them.  I agree.  The difference, though, is that as a social constructionist I can offer no predictions about what this world will look like.  The so-called objective factors in this alternate world will be different -- but the beliefs of market participants (the subjective factors) will still influence prices.  Once we see that the objective factors are not predictable in this alternate world, the subjective factors (people's beliefs, the way they act on those beliefs, and so on) start to seem more fundamental.

To put it yet another way, the difference between constructivists and realists is over the issue of prediction, and in particular long-term prediction.  Short-term predictions are possible for both the realist and the constructivist.  But long-term predictions -- say, about housing prices or computer prices 50 years from now -- will be more difficult for constructivists to make than for realists.  They are difficult because even the objective factors that determine prices can be changed by long-term cultural work, and this cultural work is impossible to predict.  The more confident you are about prediction, the further you shift toward the realist side of the spectrum; the less confident you are, the more of a constructivist you become.

And this explains, finally, some of the arguments that have been happening in the Orgtheory comment threads.  Would you like your regulator to be a realist or a constructivist?  Realists argue that the very existence of regulators is premised on realism: if there were no objective factors constraining social facts (like prices), how would one even begin to regulate anything in the first place?  I would disagree.  I think it depends on the time frame the regulator is supposed to regulate.  A regulator thinking about the future 50 years from now is simply deceiving himself or herself.  For a regulator thinking 5 or 10 years down the line, it simply doesn't matter whether he or she is a realist or a constructivist.

1.  Clearly, in one key sense, they are not.  Because one is able to do things with scientific theories: predict the behavior of projectiles with reasonable accuracy, build rockets, make aeroplanes fly, manufacture cars and plastics, and so on.  But in another, perhaps minor, sense they are.  A theory is a piece of representation that can "fit" many different worlds and situations.  I put fit in quotes because the process of "fitting" takes a lot of work and labor (until it gets routinized, when that labor simply becomes taken for granted).  Galileo's key insight is said to be that he saw the motion of a body on an inclined plane as similar to the motion of an oscillating pendulum.  When students learn mechanics in high school and early college, they are taught to see (to "apply") the problem-sets as examples of the canonical practice problems that they study (Kuhn 1969).  In this way, one might say, a theory is a self-fulfilling prophecy: more and more things can be seen as examples of the theory; the theory essentially fulfills itself.

Naturally, the skeptic might object: but the theory actually works.  In some sense or other, one can verify or check whether it works.  This is true, with one qualification, which is of course how the word "works" is understood.  The sense in which the theory is a self-fulfilling prophecy is that there might be some alternate theory that "works better," or that "works" in a different sense.  And a dominant theory, by working reasonably well, might be letting us overlook other possible theories, because we only care about a theory "working" in one particular sense.  As long as there is only one sense of "working" that we are preoccupied with, that one dominant theory is a self-fulfilling prophecy.

All that said, the performativity of scientific theories is not really an interesting issue.  [Because the way a theory is taken to "work" is pretty much unproblematic and taken for granted in science.  Unless, in Kuhnian terms, there's a scientific revolution brewing, in which case this becomes a fiercely debated issue.]

2.  There's also an element of cheating in Zuckerman's model.  It is easy to see that there is a lower bound for Manhattan real-estate prices.  Could one think of a lower bound for, say, Phoenix real estate?

3.  For another take on this, see Chapter 3 of Ian Hacking's "The Social Construction of What?", titled "What about the Natural Sciences?", where Hacking offers three factors that separate realists from constructivists: the contingency of scientific theories (or, in our case, social facts like prices), the nominalism of the analyst, and explanations of the staying power of theories and social facts.