Sean A. Munson & Paul Resnick (2010). Presenting diverse political opinions: how and how much. Proceedings of the 28th International Conference on Human Factors in Computing Systems (CHI 2010). http://doi.acm.org/10.1145/1753326.1753543
Can we ever be convinced by someone we usually disagree with completely? Can we even manage to regularly read people whose views are antithetical to our own? These are fascinating questions, I think. First, because they are political questions: conversations and debates matter very much for any open, democratic society. But they also bring up questions about the nature of knowledge: is knowledge just a matter of true and false propositions? Or is it something different, a mangle of practices, propositions and institutions, and in some way inherently inarticulable?
I bring all this up because of a talk I attended at CHI 2010 -- a presentation of a paper by Sean Munson and Paul Resnick of the University of Michigan -- that explored their very preliminary results on getting people to read articles of the opposite political persuasion. Here's the abstract:
Is a polarized society inevitable, where people choose to be exposed to only political news and commentary that reinforces their existing viewpoints? We examine the relationship between the numbers of supporting and challenging items in a collection of political opinion items and readers' satisfaction, and then evaluate whether simple presentation techniques such as highlighting agreeable items or showing them first can increase satisfaction when fewer agreeable items are present. We find individual differences: some people are diversity-seeking while others are challenge-averse. For challenge-averse readers, highlighting appears to make satisfaction with sets of mostly agreeable items more extreme, but does not increase satisfaction overall, and sorting agreeable content first appears to decrease satisfaction rather than increasing it. These findings have important implications for builders of websites that aggregate content reflecting different positions. [pdf]

Remember, this was a CHI paper, so there was a lot of emphasis on how to "present" diverse views so as to make people read them. The results were disappointing -- people don't really seem to want to read the opposite side -- but since not everything has been tried yet, and the web still has a lot of evolving to do, we shouldn't lose hope.
I want to speculate on a different sort of model in this blog post. I am going to take for granted that people of opposite political persuasions need to talk to each other in a liberal democracy*. But if it is important for Democratic-leaning and Republican-leaning (or, as they are called in the US, liberal and conservative) voters to talk to each other, is the best approach really to rank news articles and people on a sliding scale from liberal to conservative and then mix and match them? Or do we need to classify people (and articles) on some other, orthogonal parameter, i.e. one that doesn't correlate with being Democratic- or Republican-leaning?
In what follows, I am going to propose two such parameters. This is by no means a systematic analysis, just some thoughts I've been playing around with, based on my own personal experience. Most important, I have absolutely no idea how I would go about implementing such a system computationally, and frankly, it may be wrong and not work in practice. With all those caveats in mind, here goes.
The first parameter maps the content of an article, which spans a spectrum from "uncertain" to "certain". An "uncertain" article sounds unsure of itself, has many caveats, and perhaps has a respectful tone, even if it does reach definite conclusions. A "certain" article is more sure of itself, perhaps more dogmatic, even snarky. One caveat, though: if an article is uncertain, does that mean its author is non-ideological? Not at all. All it means is that the author chooses a style of expression that sounds uncertain, even though the article itself may end up endorsing a very specific ideological point.
The second parameter maps the disposition of the reader, which spans from "prefers interesting" to "prefers true". This is not a straightforward spectrum, and its terms need some explanation.
The terms are based on the expression "I prefer saying something interesting to something true". Most philosophers**, for example, would rather write something interesting and therefore be read for generations than solve a problem definitively and thus not be discussed at all in a few decades. In the same vein, a reader who prefers the interesting likes intricate arguments, even if they sometimes lead to conclusions he does not agree with. Note that this does NOT mean that this sort of reader has no ideology or that he is an "independent." Nothing of the sort is required, just that he likes to read clever things.
Since no ideology has a monopoly on clever things to say, this reader probably reads a lot of clever things that come out of his own camp. A reader who prefers "true" things is the opposite: he prefers "truth" to "play", has no use for cleverness for its own sake, and would rather say it as it is. When I say truth, I don't mean truth-as-it-exists; I mean things that the reader believes to be true. Note that most readers will fall somewhere in between these two positions.
If indeed, in an ideal world, these metrics -- the certainty of an article and the disposition of the reader -- could be computed with some degree of accuracy, and if we also knew the ideology of the reader as well as that of the article, this is what I would do to get people to read articles with opposite points of view:
The flowchart is admittedly a little vague and is not meant to represent a definite algorithm. The heuristic it depends on is that readers with a taste for the interesting will find at least some uncertain articles from the opposite ideological camp thought-provoking to read.
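To make the heuristic a little more concrete, here is a toy sketch in Python. Everything in it is hypothetical: I have said I don't know how to actually compute certainty or disposition scores, so the fields, labels, and thresholds below are placeholders of my own, not part of any real system.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Article:
    title: str
    ideology: str       # e.g. "liberal" or "conservative" -- hypothetical labels
    certainty: float    # 0.0 = very hedged/uncertain, 1.0 = very certain/dogmatic

@dataclass
class Reader:
    ideology: str
    disposition: float  # 0.0 = "prefers true", 1.0 = "prefers interesting"

def cross_camp_picks(reader: Reader, articles: List[Article],
                     interest_cutoff: float = 0.6,
                     certainty_cutoff: float = 0.4) -> List[Article]:
    """Pick opposite-camp articles a reader might actually tolerate.

    The heuristic: only readers with a taste for the interesting get
    cross-camp recommendations, and only the hedged, "uncertain" ones.
    """
    if reader.disposition < interest_cutoff:
        # A "prefers true" reader: don't push challenging content at all.
        return []
    return [a for a in articles
            if a.ideology != reader.ideology       # opposite camp only
            and a.certainty <= certainty_cutoff]   # hedged, non-dogmatic tone

# Toy usage with made-up scores:
articles = [
    Article("A tentative case for X", "conservative", 0.3),
    Article("Why X is obviously right", "conservative", 0.9),
]
reader = Reader(ideology="liberal", disposition=0.8)
print([a.title for a in cross_camp_picks(reader, articles)])
# -> ['A tentative case for X']
```

Note the deliberate asymmetry: readers near the "prefers true" end get nothing pushed at them at all, which is at least roughly consistent with Munson and Resnick's finding that presentation tricks did not make challenge-averse readers any happier.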
At this point, it also seems appropriate to explain the theory of knowledge that underwrites this model:
- I assume that a person's ideology gets fixed pretty early in life: a person has an ideology by the time he or she reaches the mid-twenties. Ideology, however, does not mean a voting preference. It means a way of looking at the world: a preference for certain types of people (who become your friends), a certain set of issues that become important, and a certain stance towards them.
- Ideologies do not change because someone offers "rational" arguments for the opposite side. People only change their minds about small, technical things; world-views do not change much. When they do, and this is rare, the closest analogy for it is religious conversion.
Let me make one final point before I end this (pretty unsatisfactory) post. What makes me think that people don't change their minds all that much? Well, for one, my own interactions with people. Arguments about politics or policies are rarely closed the way that arguments about the validity of a mathematical proof are, for instance. So there is no control on what counts as an argument: people usually have an infinite variety of arguments to choose from, and many discussions are a circle of question-begging assertions.
In his book Knowledge and Social Imagery, David Bloor provides a very good example of how world-views are woven together. In Azande society, an oracle is asked to identify the witches residing in the people's midst. Being a witch is taken to be a hard physical fact, and it is commonly believed that a male witch passes on this "substance" to his sons, a female witch to her daughters. One would therefore think that when the oracle says someone is a witch, that person's whole line of descent will have been, or will be, witches. In practice, however, the Azande don't act this way: only the close paternal kinsmen of a known witch are considered witches.
At this point, one could simply say that the Azande are being illogical and irrational. But to counter this, Bloor gives another example, one that's closer to our own culture. We commonly believe that a murderer is one who deliberately kills people. But in that case, are pilots and soldiers also murderers? As citizens who rely on the armed forces to protect us, we will immediately resist this conclusion. But aren't we being illogical or irrational here? After all, if one takes the laws of logic seriously, they lead us inexorably to this conclusion: anyone who deliberately kills people is a murderer; soldiers deliberately kill people; so all soldiers are murderers.
But no, someone will say, it all depends on what you mean by "deliberately." Soldiers don't kill "deliberately"; they kill to protect us, or they kill because some of us have been killed. Or it all depends on what you mean by "kill." And so on and on. The point here is that our ideologies are "informal knowledge": we believe them because they are common cultural practices. When we reason about them ("formal knowledge"), the laws of logic and reasoning are flexible enough that we can use them to justify what we only know informally (i.e. our ideologies). That's why rational argument rarely succeeds in converting someone to a different ideology; arguments like that tend to go round and round in circles. But that doesn't mean they are unimportant or that one shouldn't be having them. One should just remember what they can and can't accomplish.
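To make explicit where the flexibility enters, here is one way to render the syllogism formally; the notation is mine, not Bloor's:

```latex
% D(x): x deliberately kills people; M(x): x is a murderer; S(x): x is a soldier.
\[
\frac{\forall x\,\big(D(x) \rightarrow M(x)\big) \qquad \forall x\,\big(S(x) \rightarrow D(x)\big)}
     {\forall x\,\big(S(x) \rightarrow M(x)\big)}
\]
```

The inference itself is perfectly valid, so the only way to resist the conclusion is to reject a premise, and in practice we do that by splitting D into two senses of "deliberately" -- which is exactly the move described above. The flexibility lives in the predicates, not in the logic.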
*The discussion above is heavily US-centric. In India, for example, it is very hard to isolate "opposite" political persuasions, but that's a topic for another day.
** E.g. here is Daniel Dennett's variant of it:
In an informal survey, I have been asking philosophers a slightly different question recently, and will be pleased to field further answers in response to this review: Which would you choose, if Mephistopheles offered you the following options?
(1) simply solving an outstanding philosophical problem so definitively that after a few years, only historians ever mentioned it (or your work) again, or
(2) writing a book that was so tantalizingly equivocal and problematic that it would be required reading for philosophy students for centuries to come.
The history of science offers many instances of the first sort, and none, really, of the second, but I find that many of my philosophical colleagues admit to being at least torn by the choice. They would rather be read than right. Perhaps it is of the "essence" of philosophical problems to admit of no permanent solutions, though I doubt it, but in either case it is no wonder we make so little progress.