Tuesday, February 2, 2016

What is "social contruction"? Well, take the Iowa caucuses

We often struggle to teach our students what "social construction" actually means.  (For the record, I am with Woolgar and Latour in taking out the "social" and just saying "construction.")  Well, here is a Voxsplainer article by Andrew Prokop explaining the outsized importance of the Iowa caucuses that could serve as a great introduction for undergrads to what we STS-types actually mean by social construction.  For my money, it makes a number of points:

  1. "Social construction" doesn't mean that something is not real.  
  2. Socially constructed things take other things for granted. These things, while real enough, tend to be also socially constructed and on and on. 
  3. Socially constructed things can be really hard to change (see point 2).  They require real long-term work to create cultural change, itself a very unpredictable thing



   

Saturday, April 11, 2015

On the question of trusting experts

Over the years, journalist Chris Mooney has made a name for himself as a chronicler of what he has called The Republican War on Science: the numerous battles being fought across America's political landscape over issues like global warming, pollution and regulation.  As time has gone by, Mooney has also started to draw on social psychological and brain imaging research on political bias: how people interpret scientific findings and facts in the light of their ideological convictions, or as Mooney's article for Mother Jones was titled, "The Science of Why We Don't Believe Science: How our brains fool us on climate, creationism, and the vaccine-autism link."  From an STS perspective, these findings, even though couched in the slightly problematic scientistic idiom of social psychology, make perfect sense: they suggest that "data" is always interpreted in the light of previously held beliefs; that facts and values are not easily separable in practice.  (Mooney's second book is titled "The Republican Brain," which does sound problematic.  But since I haven't read it, I'm not going to comment on it.  See this discussion between him and Mother Jones' Kevin Drum: here, here and here.)

In a new article in the Washington Post, Mooney reports on a recent experiment by social psychologist Dan Kahan to argue that you should, yes, trust experts more than ordinary people.  Kahan and his collaborators asked the subjects in their pool (judges, lawyers, lawyers-in-training, and laypeople statistically representative of Americans) to decide whether a given law applied to a hypothetical incident; the question was: would they apply the rules of statutory interpretation correctly?  First, they were informed about a law that bans littering in a wildlife preserve.  Next they were told that a group of people had left litter behind, in this case, reusable water containers.  But there was a catch: some were told that these were left behind by immigration workers helping illegal immigrants cross the US-Mexico border safely.  Others were told that the litter belonged to a construction crew building a border fence.  All were polled to determine their political and ideological affiliations.  Predictably, depending on their ideological beliefs, people came down on different sides of the issue: Republicans tended to be more forgiving of the construction workers, and so on.  What was different was that judges and trained lawyers tended, more than laypeople, to avoid this bias.  They interpreted the law correctly (the correct answer here was that this didn't constitute littering because the water containers were reusable) despite their ideological convictions.  Well, so far so good.  As an anthropologist, I interpret the experiment to be saying that lawyers are subjected to special institutional training, unlike the rest of us, and that this habitus lets them reach the "correct" result far more often than we do.  Experts are different from the rest of us, in some way.

But what's interesting is the conclusion that Mooney draws from this experiment: that while experts are biased, they are less biased than the rest of us, and that therefore experts should be trusted more often than not.  Well.  Scientific American's John Horgan has a pragmatic take on this: it leaves open the question of which experts to trust, and scientists, like other experts, have been known to be wrong many, many times.  To trust experts simply because they are experts seems, well, against the spirit of a democratic society.  (Another Horgan response to Mooney here.)

But I think there's something here about the particular result that Mooney is using to make his point.  Something that I think can help to show that yes, experts matter, but no, that doesn't mean there's a blanket case for trusting experts more than others.  Lawyers and judges do come to different conclusions than the rest of us when it comes to statutory interpretation.  But there is one huge elephant in the room: the US Supreme Court, at this very moment, is considering King vs. Burwell, a challenge to the Affordable Care Act that hinges precisely on questions of statutory interpretation.  How do you interpret a law that says that the federal government will provide subsidies to insurance exchanges created by "States"?  Does "States" mean the states that constitute the United States, or does it mean the state in the abstract, whether federal or state government?  How difficult can that be?  It seemed very clear in the long battle over the ACA that the subsidies were meant for everyone.  But no, the question was contentious enough that courts disagreed with each other and the Supreme Court took it up.  With a good chance that they might rule in a way that destroys the very foundations of the Affordable Care Act.

How might one reconcile the findings of the Kahan study with what's happening with the Supreme Court?  The Supreme Court judges are certainly experts, elite, well-trained, and at the top of their respective games.  And yet, here they are, right at the center of a storm over what journalist David Leonhardt has called the "federal government's biggest attack on inequality."   I think there's a way.  Experts are conditioned to think in certain ways, by virtue of their institutional training and practice, and when stakes are fairly low, they do.  But once stakes are high enough, things change.  What might seem like a fairly regular problem in ordinary times, a mere question of technicality, may not look like one in times of crisis.  At this point, regular expert-thinking breaks down and things become a little more contested.

And we do live in a polarized time.  As political scientists have shown time and time again, the polity of the United States experienced a realignment after the Civil Rights movement.  The two major parties had substantial overlaps before, but now they don't.  They cater to entirely different constituencies: the Republicans being the party of managers, evangelicals and the white working class; the Democrats being the party of organized labor, affluent professionals and minorities.  Political polarization has meant that even institutions which are supposed to be non-political (but really have never been so) start to look more and more political, because there are now basic disagreements over things that may once have seemed simple and technical.  This explains the spate of decisions from the Supreme Court in which the conservatives and liberals on the bench have split neatly along ideological lines.  

But does that mean that judges are just politicians in robes?  (Which is the thesis that Dan Kahan and others set out to debunk.)  Not really.  The US Supreme Court actually resolves many cases with fairly clear majorities, more than a third of them through unanimous decisions.  These cases hardly ever make it into the public eye, and they involve, to us at least, what seem like arcane questions of regulation and jurisdiction.  Another way to interpret this is to say that these cases are "technical" because they are not in the public eye; no great "stakes" attach to these decisions, except to the parties in question.  When stakes are high -- Obamacare, campaign finance, abortion, gay marriage -- the Supreme Court, just like the rest of the country, is hopelessly polarized.  And a good thing too, because fundamental crises in values are best addressed through Politics (with a capital P) rather than left to bodies of experts.

Does this sound at all familiar to you?  STS has come a long way since The Structure of Scientific Revolutions, but this is not unlike what Kuhn calls a time of crisis.  Scientists and others have (often in self-serving ways) taken the message of the book to be that science moves in cycles of innovation: a time of normal science, and then a time of revolutionary science (ergo, starting a new paradigm is the best way to do science).  But the point of the book that's missed is that the crises Kuhn talks about (which happen through a buildup of anomalies) are organizational crises; at such times, fundamentally taken-for-granted understandings of what is right and wrong break down.  New taken-for-granted understandings emerge, but they emerge alongside a different organizational order.

Social psychological experiments on political bias in the "public understanding of science" need to be understood not as grand truths about how people interpret, but as historically contingent findings.  Yes, judges will vote more "correctly" than laypeople, but a toy case presented as part of a social psychological study is not the same as an actual case.  Real cases have audiences, as do the judges and the lawyers.  I remember when the original challenge to Obamacare was floated, many liberals (including me) found the question of the constitutionality of the individual mandate ridiculous.  Of course the federal government could mandate that everyone purchase health insurance; that's what governments do!  (Not to mention that it provided subsidies to those who couldn't afford it, so it was hardly unjust.)  But the case just about squeaked through the Supreme Court in our favor, and it could well have gone the other way.  King vs. Burwell is, if anything, an even sillier case, but no one is underestimating it anymore.

It seems like social psychological studies of "bias" might be doing in this day and age what the social studies of scientific expertise did many years ago.  But Mooney seems to misunderstand the point of science studies.  These studies weren't meant to show that experts are "biased."  They were meant to show that expert practices and discourse are designed to construct certain questions as "technical."  This is not necessarily a bad thing, but it does, at certain points, drown out other voices who disagree with the experts' conclusions (as is true of most political choices).  What is more, once experts are framed as objective and arguing only from facts rather than values, opposing voices, who have no recourse to the language of facts, get delegitimized even further.  The STS recommendation was not that you need to trust experts more or less, but that questions about competing values needed to be brought to the fore in public debates that involved science and technology (along with technical expertise, of course).  And while scientific and legal experts work in different institutional ecologies, all experts work within institutional ecologies, which means their work is shaped by their collective understandings of what is technical and what is not.  The solution is not to trust experts more but to find better ways to debate differences in fundamental values, while still using what we can from experts.  

Wednesday, April 8, 2015

New blog-posts around the web: Crowdsourcing and Alan Turing

This blog has been quiet for a while, but I had two posts at the CASTAC blog in the past two months.

The first one, titled Crowdsourcing the Expert, points out that computer scientists have now turned their attention to more sophisticated forms of crowdsourcing: not just crowds of uniform, homogeneous click-workers, but also crowds of experts.  Is the crowdsourcing platform now a manager not just for the unskilled worker but also for the creative classes?  And what about the expertise of computer scientists themselves, which is left fairly undefined?  Anyway, read the whole thing if it strikes your interest.

The second one is about Alan Turing.  I summarize some recent articles on Alan Turing and computer science published by historians Thomas Haigh and Edgar Daylight, in which they suggest that some of the recent commemorations of Alan Turing are not quite historically accurate.  The articles are fascinating nonetheless because they show us how computer science, as a discipline, was constituted.


Wednesday, October 15, 2014

Speed-bump, meet Knee Defender

I wrote up a post for the CASTAC blog suggesting that the recent debates about airline seat space (do we have a right to recline?  do we have a right to have more knee-space?) might be good fodder for teaching undergraduates about the relationship between technology and politics.  Particularly, the device known as the Knee Defender.
I submit that the Knee Defender might be a great test case for an introductory STS class (right there with the speed-bump) to teach undergraduates about the relationship between technology and politics. Three reasons: there is a big-picture story about the airline industry that undergraduates might enjoy parsing; there is a concrete material environment–the inside of an aircraft–that the Knee Defender operates in; and finally, debates over this device can be a great introduction to the vexed concept of ideology.
You can read the whole thing here.

Sunday, July 27, 2014

Science vs. Politics: A pragmatic argument for why this distinction doesn't work


[X-posted on the HASTS blog.]

Recently, I talked to a doctor and public health professional about the relationship between science and policy; he told me, in a vivid metaphor, how things work, and should work, in the regulatory process.  Science produces the facts, which then get funneled through our values by the process of politics.  What comes out of this machine, he said, are policies.

It was quite a beguiling vision, but as an STS person, I couldn't help asking: did he really believe in it?  Yes, he said.  I pressed on.  How, I asked, would he explain the controversy over global warming?  Why was it difficult to implement policy when the scientists had a decent agreement over the facts?  His answer was that it was Fox News, fed by the big bad industry, which had fooled certain people into not believing the scientists.  I asked if it might not be more useful to wonder whether this disagreement over what to do about climate change (or about whether anthropogenic climate change even exists) might be an indication of something deeper: perhaps a reflection of the particular ways in which American society is now polarized, rather than of Fox News brainwashing susceptible viewers.  He didn't think so, he said.  (He objected strenuously to my use of the word "brainwashing"; I took it back, but I maintain that it was an accurate descriptor of what he was saying.)  I asked at the end what he thought should be done about all of this.  He said it was a long-term project, but it began with education: scientific literacy had to begin at a very early age.  Only then would people stop listening to Fox News.  At that point, I gave up.


I admit that there is something really alluring about this picture of a science that produces facts which are then funneled through our values by the process of politics, all of which combines to produce rational public policy.  Even if we admit that this isn't really how it works in practice, perhaps this is how it should work.

But even holding on to this vision as a normative ideal may not be in our best interests.  As Sheila Jasanoff and Brian Wynne have shown, this is because the process of science is shot through and through with values.  Wynne suggests that scientific models for measuring risk (e.g. risk analysis, cost-benefit calculations) often contain hidden assumptions and prescriptions: about what it means to be social and human, and what an ideal social order should be.  These visions of the human and the social are often found wanting by different publics.  The language of risk analysis, for example, comes coded with assumptions about what a risk is or is not, and about what things humans should worry about, points on which different publics disagreed but (a) did not have the tools to express their disagreement, and (b) were not taken seriously by experts, who understood them as merely lacking an understanding of the science.  One of Jasanoff's suggestions is that rather than trying to cure science of its values, or create a politics that is based on "facts," we accept the value-riddenness of science and use that to think about how expert advice fits into the political process.  (Needless to say, I agree.)

All of which brings me to the real reason why I'm writing this: this Scientific American blog-post, which is the worst combination, in my mind, of two overlapping tendencies: the plague-on-both-houses strategy of journalism (something that journalist James Fallows calls "false equivalence"), and the dichotomous conception of "science" and "politics" as two mutually opposing entities.

The post details the ways in which the EPA's efforts to establish a new regulatory standard for drinking water, with a lower permitted amount of arsenic, were stymied by a Republican Congress.  The contours of the story itself will not surprise anyone.  Surveying some of the research that had been conducted, the EPA was on the verge of making official its stance that arsenic was a more dangerous carcinogen than it had originally thought.  This would be a prelude to a tougher drinking water standard.  Naturally, this meant that corporations that produced arsenic or used arsenic in their products lobbied hard to make sure this didn't happen.  In these polarized days of American politics, it made sense to turn to the Republican Party.  And the Republicans delivered by delaying the process.  Essentially, they got the National Academy of Sciences (NAS) to do an independent review.  Read the whole piece; it's detailed and precise to the point where it can exhaust the reader.

And here my problems begin.  Take the headline:
Politics Derail Science on Arsenic, Endangering Public Health
Why "Politics" and "Science"?  Why not say "Republicans Derail Science on Arsenic"?  Or even better and my personal preference: "Republicans derail EPA on Arsenic"?

Then take the leading line after the headline:
A ban on arsenic-containing pesticides was lifted after a lawmaker disrupted a scientific assessment by the EPA.
Again, why this coyness about the identity of this "lawmaker"?  Why not mention upfront that this is a Republican congressman?  Why does it take until halfway into the article to identify the offending Congressman: Mike Simpson of Idaho?

Why, for example, is this sentence worded in this particular way when we know we're talking about the Bush White House?
The White House at that point had become a nemesis of EPA scientists, requiring them to clear their science through OMB starting in 2004.
The piece, for all its commendable whistle-blower reporting, displays the worst tendencies of what journalist James Fallows has called "false equivalence" in journalism, which is the plague-on-both-houses stance (see Fallows' copious collection of examples).  Essentially, newspaper reporting has a tendency to blame both political parties, or politics in the abstract, when things reach a bad state.  The newspaper sees itself as above politics, which is what grubby politicians do.  Hence the contrast between the policy that the newspaper is advocating (which is not politics but merely good, moral, sensible stuff) and what the politicians are doing, which is bad, i.e. politics.  Take, for instance, the tendency to blame the US Congress itself as dysfunctional, rather than the Republican Congressmen's threats to filibuster pretty much any legislation.

The same forces are at work in the Scientific American piece.  Notice that the piece is not explicitly framed as a Republicans-vs.-the-EPA piece but rather as a Politics-vs.-Science piece.  If I had to caricature it, the main point is: science good, politics bad.  The problem is that this often serves to paint politics itself as grubby and, well, dishonest.  

This also leads to a manifest lack of curiosity about certain topics.  You might wonder why the makers of the arsenic-containing herbicide chose to work through the Republican Party and not the Democratic Party.  There's no way you could answer this question without looking at the broader trends in American politics over the last 50 years.  The two parties now occupy non-overlapping spaces on the political spectrum: the Democrats are a hodge-podge of interest groups: minorities, relatively affluent social liberals, unions, etc.; the Republicans, on the other hand, have only two constituencies: evangelicals and big business.  Perhaps, 50 years ago, a business that wanted to fight a piece of regulation would have had to think harder before deciding which political channel to use; today, it doesn't take more than a minute to decide what to do.

I understand that this is perhaps unfair criticism.  The piece is long enough, and talking about the realignment of American politics would only make it longer.  But that's exactly the point: if you black-box both science and politics, and paint the regulatory battle in question as a contest between them, then you don't need to think deeply about either.  Framing the article as the Republicans' battle against the EPA would have required the writers to ask why these two actors are arrayed against each other.  Editorial choices matter.


But the worst thing about the article is what's NOT even in it.  What, one wonders, is a citizen to do after reading it?  The article doesn't say, but I have an answer: call or write to your Congressman (especially if yours is a Republican, but it doesn't really matter).  Tell him or her that you don't agree with the gutting of the EPA's power.  That you believe in a robust regulatory structure with teeth.  That perhaps you believe in a more take-precautions-first European style of regulation rather than a do-it-first-deal-with-consequences-later American style.  Why couldn't the SciAm article include a link for us to call or email our Congressmen?  Because that would have been too political, that's why.  And why should we bother with grubby politics when the science is in our favor?

One of the recent revelations for me has been how easy the Web can make it for us to call or email our legislators to inform them about our opinions on particular issues.  The techies did it really effectively with their "blackout" in protest of SOPA and PIPA.  Recently, in protest against the FCC's proposal to gut net neutrality, we were able to flood the FCC's comment-solicitation notice board with some good arguments for net neutrality.  At heart, this is just good old-fashioned politics, trying to convince our fellow-citizens about the rightness and wrongness of certain causes, sometimes celebrating victory, at other times, accepting defeat and vowing to fight another day [1].

Now, I understand that explicitly political action might not be feasible for the organizations responsible for the article, a collaboration between the Center for Public Integrity and the Center for Investigative Reporting, both of which may have explicit prohibitions (because of their funding model, for example) against participating explicitly in politics.  But that's part of what's got to change, because this is the most important shortcoming of the science-vs.-politics narrative: it precludes avenues of action for citizens.  What do you do?  Trust science, which is what the SciAm investigative piece seems to suggest?  Despair that your representatives are morally and politically corrupt [2]?  Or do the hard work of politics and convince your fellow citizens that they're better off having a robust EPA?  I vote for the latter.

---------------

Notes
[1] Certainly, citizens are starting to participate in science-politics in other ways, most importantly through the practices of citizen science. Citizen science is perhaps the most interesting way of making science "impure." But making phone calls to your legislators, voting, and giving money to causes you deem fit are equally good ways of participating in the political process.

[2] And that, perhaps, explains why the show of our times is Netflix's House of Cards.  More on that another time.  

Sunday, July 13, 2014

Recent blog-posts around the Web

In the past few months, I've been blogging in multiple places and, as a result, have completely neglected this blog.  In the future, when I post somewhere else, I will cross-post it here, or at least post a link.  In the meantime, though, here are some of the posts I wrote recently:

For the CASTAC blog, a post on the history of artificial intelligence and the new field of machine learning.

Also, for the CASTAC blog, a revised post on the phenomenon called "data science," where I speculate that the proliferation of claims about "big data" is more about a crisis in professional identities (who has the expertise to work on particular problems: those with domain knowledge or those with data manipulation skills?) than about an epistemological crisis (can we analyze phenomena without pre-existing theory?).

Finally, a post on the HASTS blog about how one might use the game of tennis as a way of understanding what the history of technology is all about.

I've also started posting interesting articles I see to my Tumblr.


  

Thursday, November 14, 2013

Big Data, Boundary Work and Computer Science

A Google Data Center. Image taken from here.

The Annual Meeting of the Society for Social Studies of Science this year (i.e. 4S 2013) was full of "big data" panels (Tom Boellstorff has convinced me to not capitalize the term). Many of these talks were critiques; the authors saw big data as a new form of positivism, and the rhetoric of big data as a sort of false consciousness that was sweeping the sciences*.

But what do scientists think of big data?

In a blog-post titled "The Big Data Brain Drain: Why Science is in Trouble," physicist Jake VanderPlas (his CV lists his interests as "Astronomy" and "Machine Learning") makes the argument that the real reason big data is dangerous is that it moves scientists from the academy to corporations.
But where scientific research is concerned, this recently accelerated shift to data-centric science has a dark side, which boils down to this: the skills required to be a successful scientific researcher are increasingly indistinguishable from the skills required to be successful in industry. While academia, with typical inertia, gradually shifts to accommodate this, the rest of the world has already begun to embrace and reward these skills to a much greater degree. The unfortunate result is that some of the most promising upcoming researchers are finding no place for themselves in the academic community, while the for-profit world of industry stands by with deep pockets and open arms. [all emphasis in the original]
His argument proceeds in four steps: first, he argues that yes, new data is indeed being produced, and in stupendously large quantities. Second, processing this data (whether it's in biology or physics) requires a certain kind of scientist who is skilled in both statistics and software. Third, because of this, "scientific software" that can be used to clean, process, and visualize data becomes a key part of the research process. And finally, this scientific software needs to be built and maintained, and because the academy evaluates its scientists not for the software they build but for the papers they publish, all of these talented scientists are now moving to corporate research jobs (where they are appreciated not just for their results but also for their software). That, the author argues, is not good for science.
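To make that third step concrete, here is a minimal, hypothetical sketch of the kind of "scientific software" workflow VanderPlas describes: load a dataset, clean it, fit a simple statistical model, and visualize the result. (This is my own illustration, not his code; the file and column names are placeholders, and it assumes the standard pandas, scikit-learn, and matplotlib libraries.)

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression

# 1. Load and clean the data: drop rows with missing measurements.
#    ("measurements.csv", "exposure", and "response" are placeholder names.)
df = pd.read_csv("measurements.csv").dropna(subset=["exposure", "response"])

# 2. Process: fit a simple statistical model, here ordinary least squares.
X = df[["exposure"]].values
y = df["response"].values
model = LinearRegression().fit(X, y)

# 3. Visualize: plot the observations along with the fitted trend.
grid = np.linspace(X.min(), X.max(), 100).reshape(-1, 1)
plt.scatter(X, y, alpha=0.5, label="observations")
plt.plot(grid, model.predict(grid), color="red", label="fitted trend")
plt.xlabel("exposure")
plt.ylabel("response")
plt.legend()
plt.show()

Trivial as it looks, this is the kind of glue work (cleaning, modeling, plotting) that, on VanderPlas' account, now sits at the center of data-centric research and goes unrewarded by the academy.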

Clearly, to those familiar with the history of 20th century science, this argument has the ring of deja vu. In The Scientific Life, for example, Steven Shapin argued that the fear that corporate research labs would cause a tear in the prevailing (Mertonian) norms of science, by attracting the best scientists away from the academy, was a big part of the scientific (and social scientific) landscape of the middle of the 20th century. And these fears were largely unfounded (partly because they were based on a picture of science that never existed, and partly because, as Shapin finds, scientific virtue remained nearly intact in its move from the academy to the corporate research lab). [And indeed, Lee Vinsel makes a similar point in his comment on a Scientific American blog-post that links to VanderPlas' post.]

But there's more here, I think, for STS to think about. First, notice the description of the new scientist in the world of big data:
In short, the new breed of scientist must be a broadly-trained expert in statistics, in computing, in algorithm-building, in software design, and (perhaps as an afterthought) in domain knowledge as well. [emphasis in the original].
This is an interesting description on so many levels. But the reason it's most interesting to me is that it fits exactly with the description of what a computer scientist does. I admit this is a bit of a speculation, so feel free to disagree. But in the last few years, computer scientists have increasingly turned their attention to a variety of domains: for example, biology, romance, learning. And in each of these cases, their work looks exactly like the work that VanderPlas' "new breed of scientist" does. [Exactly? Probably not. But you get the idea.] Some of the computer scientists I observe, who design software to help students learn, work exactly in this way: they need some domain knowledge, but mostly they need the ability to code, and they need to know statistics, both to create machine learning algorithms and to validate their arguments to other practitioners.

In other words, what VanderPlas is saying is that practitioners of the sciences are starting to look more and more like computer scientists. His own CV, which I alluded to above, is a case in point: he lists his interests as both astronomy and machine learning. [Again, my point is not so much to argue that he is right or wrong, but that his blog-post is an indication of changes that are afoot.]

His solution to the "brain drain" is even more interesting, from an STS perspective. He suggests that the institutional structure of science should recognize and reward software-building so that the most talented people stay in academia and do not migrate to industry. In other words, science should become even more like computer science institutionally so that the best people stay in academia. Interesting, no?

Computer science is an interesting field. The digital computer's development went hand-in-hand with the development of cybernetics and “systems theory”—theories that saw themselves as generalizable to any kind of human activity. Not surprisingly, the emerging discipline of computer science made it clear that it was not about computers per se; rather, computers were the tools that it would use to understand computation—which potentially applied to any kind of intelligent human activity that could be described as symbol processing (see, e.g., Artificial Intelligence pioneers Newell and Simon’s Turing Award speech). This has meant that computer science has had a wayward existence: it has typically flowed where the wind (meaning funding!) took it. In that sense, its path has been the polar opposite of that of mathematics, whose practitioners, as Alma's dissertation shows, have consciously policed the boundaries of their discipline.  (Proving theorems was seen to be the essence of math; anything else was moved to adjoining disciplines.)

X-posted on Tumblr and the HASTS blog.  

--------------------

*The only exception to this that I found was Stuart Geiger's talk, titled "Hadoop as Grounded Theory: Is an STS Approach to Big Data Possible?," the abstract of which is worth citing in full:
In this paper, I challenge the monolithic critical narratives which have emerged in response to “big data,” particularly from STS scholars. I argue that in critiquing “big data” as if it was a stable entity capable of being discussed in the abstract, we are at risk of reifying the very phenomenon we seek to interrogate. There are instead many approaches to the study of large data sets, some quite deserving of critique, but others which deserve a different response from STS. Based on participant-observation with one data science team and case studies of other data science projects, I relate the many ways in which data science is practiced on the ground. There are a diverse array of approaches to the study of large data sets, some of which are implicitly based on the same kinds of iterative, inductive, non-positivist, relational, and theory building (versus theory testing) principles that guide ethnography, grounded theory, and other methodologies used in STS. Furthermore, I argue that many of the software packages most closely associated with the big data movement, like Hadoop, are built in a way that affords many “qualitative” ontological practices. These emergent practices in the fields around data science lead us towards a much different vision of “big data” than what has been imagined by proponents and critics alike. I conclude by introducing an STS manifesto to the study of large data sets, based on cases of successful collaborations between groups who are often improperly referred to as quantitative and qualitative researchers.