School of Information Blogs

November 23, 2015

Ph.D. student

late modern social epistemology round up; technical vs. hermeneutical correctness

Consider on the one hand what we might call Habermasian transcendental pragmatism, according to which knowledge can be categorized by how it addresses one of several generalized human interests:

  • The interest of power over nature or other beings, being technical knowledge
  • The interest of agreement with others for the sake of collective action, being hermeneutic knowledge
  • The interest of emancipation from present socially imposed conditions, being critical or reflexive knowledge

Consider in contrast what we might call the Luhmann or Foucault model, in which knowledge is created via system autopoiesis. Luhmann talks about autopoiesis in a social system; Foucault talks about knowledge in a system of power in much the same way.

It is difficult to reconcile these views. This may be what was at the heart of the Habermas-Luhmann debate. Can we parse out the problem in any way that helps reconcile these views?

First, let’s consider the Luhmann view. We might ease the tension in it by renaming what we’ve called “knowledge” something like “belief”, removing the implication that the belief is true. Indeed, autopoiesis is a powerful enough process that it seems it would preserve all kinds of myths and errors, should they be important to the survival of the system in which they circulate.

This picture of knowledge, which we might call evolutionary or alternately historicist, is certainly a relativist one. At the intersection of institutions within which different partial perspectives are embedded, we are bound to see political contest.

In light of this, Habermas’s categorization of knowledge as what addresses generalized human interests can be seen as a way of identifying knowledge that transcends particular social systems. There is a normative component of this theory–knowledge should be such a thing. But there is also a descriptive component. One predicts, under Habermas’s hypothesis, that the knowledge that survives political contest at the intersection of social systems is that which addresses generalized interests.

Something I have perhaps overlooked in the past is the importance of the fact that there are multiple and sometimes contradictory general interests. One persistent difficulty in the search for truth is the conflict between what is technically correct and what is hermeneutically correct.

If a statement or theory is technically correct, then it can be reliably used by agents to predict and control the world. The targets of this prediction and control can be things, or they can be other agents.

If a statement or theory is hermeneutically correct, then it is the reliable consensus of agents involved in a project of mutual understanding and respect. Hermeneutically correct beliefs might stress universal freedom and potential, a narrative of shared history, and a normative goal of progress against inequality. Another word for ‘hermeneutic’ might be ‘political’. Politically correct knowledges are those shared beliefs without which the members of a polity would not be able to stand each other.

In everyday discourse we can identify many examples of statements that are technically correct but hermeneutically (or politically) incorrect, and vice versa. I will not enumerate them here. In these cases, the technically correct view is identified as “offensive” because in a sense it is a defection from a voluntary social contract. Hermeneutic correctness binds together a particular social system by capturing what participants must agree upon in order for all to safely participate. For a member of that social system to assert their own agency over others, to identify ways in which others may be predicted and controlled without their consent or choice in the matter, is disrespectful. Persistent disrespect results in the ejection of the offender from the polity. (cf. Pasquale’s distinction between “California engineers and New York quants” and “citizens”.)

A cruel consequence of these dynamics is social stratification based on the accumulation of politically forbidden technical knowledge.

We can tell this story again and again: A society is bound together by hermeneutically stable knowledge–an ideology, perhaps. Somebody ‘smart’ begins experimentation and identifies a technical truth that is hermeneutically incorrect, meaning that if the idea were to spread it would erode the consensus on which the social system depends. Perhaps the new idea degrades others by revealing that something believed to be an act of free will is, in fact, determined by nature. Perhaps the new idea is inaccessible to others because it depends on some rare capacity. In any case, it cannot be willfully consented to by the others.

The social system begins to have an immune reaction. Society has seen this kind of thing before. Historically, this idea has led to abuse, exploitation, infamy. Those with forbidden knowledge should be shunned, distrusted, perhaps punished. Those with disrespectful technical ideas are discouraged from expressing them.

Technical knowledge thereby becomes socially isolated. Seeking its own kind, it becomes concentrated. Already shunned by society, the isolated technologists put their knowledge to use. They gain advantage. Revenge is had by the nerds.

by Sebastian Benthall at November 23, 2015 04:30 PM

November 20, 2015

Ph.D. student

trust issues and the order of law and technology cf @FrankPasquale

I’ve cut to the last chapter of Pasquale’s The Black Box Society, “Towards an Intelligible Society.” I’m interested in where the argument goes. Now that I’ve gotten through it, I see that the penultimate chapter has Pasquale’s specific policy recommendations. But as I’m not just reading for policy and framing but also for tone and underlying theoretical commitments, I think it’s worth recording some first impressions before doubling back.

These are some points Pasquale makes in the concluding chapter that I wholeheartedly agree with:

  • A universal basic income would allow more people to engage in high risk activities such as the arts and entrepreneurship and more generally would be great for most people.
  • There should be publicly funded options for finance, search, and information services. A great way to provide these would be to fund the development of open source algorithms for finance and search. I’ve been into this idea for so long and it’s great to see a prominent scholar like Pasquale come to its defense.
  • Regulatory capture (or, as he elaborates following Charles Lindblom, “regulatory circularity”) is a problem. Revolving door participation in government and business makes government regulation an unreliable protector of the public interest.

There is quite a bit in the conclusion about the specifics of regulating the finance industry. There is an impressive amount of knowledge presented about this, and I’ll admit much of it is over my head. I’ll probably have a better sense of it once I read the chapter that is specifically about finance.

There are some things that I found bewildering or off-putting.

For example, there is a section on “Restoring Trust” that talks about how an important problem is that we don’t have enough trust in the reputation and search industries. His solution is to increase the penalties that the FTC and FCC can impose on Google and Facebook for, e.g., their privacy violations. The current penalties are too trivial to be an effective deterrent. But, Pasquale argues,

It is a broken enforcement model, and we have black boxes to thank for much of this. People can’t be outraged by what they can’t understand. And without some public concern about the trivial level of penalties for lawbreaking here, there are no consequences for the politicians ultimately responsible for them.

The logic here is a little mad. Pasquale is saying that people are not outraged enough by search and reputation companies to demand harsher penalties, and this is a problem because people don’t trust these companies enough. The solution is to convince people to trust these companies less–get outraged by them–in order to get them to punish the companies more.

This is a bit troubling, but makes sense based on Pasquale’s theory of regulatory circularity, which turns politics into a tug-of-war between interests:

The dynamic of circularity teaches us that there is no stable static equilibrium to be achieved between regulators and regulated. The government is either pushing industry to realize some public values in its activities (say, by respecting privacy or investing in sustainable growth), or industry is pushing regulators to promote its own interests.

There’s a simplicity to this that I distrust. It suggests for one that there are no public pressures on industry besides the government, such as consumers’ buying power. A lot of Pasquale’s arguments depend on the monopolistic power of certain tech giants. But while network effects are strong, it’s not clear that the problem is so severe that consumers have no market leverage. In many cases tech giants compete with each other even when it looks like they aren’t. For example, many, many people have both Facebook and Gmail accounts. Since there is somewhat redundant functionality in both, consumers can rather seamlessly allocate their time, which is tied to advertising revenue, according to which service they feel better serves them, or which is best reputationally. So social media (which is a bit like a combination of a search and reputation service) is not a monopoly. Similarly, if people have multiple search options available to them because, say, they have both Siri on their smartphone and can search Google directly, then that provides an alternative search market.

Meanwhile, government officials are also often self-interested. If there is a road to hell for industry that is to provide free web services to people to attain massive scale, then abuse economic lock-in to extract value from customers, then lobby for further rent-seeking, there is a similar road to hell in government. It starts with populist demagoguery, leads to stable government appointment, and then leverages that power for rents in status.

So, power is power. Everybody tries to get power. The question is what you do once you get it, right?

Perhaps I’m reading between the lines too much. Of course, my evaluation of the book should depend most on the concrete policy recommendations which I haven’t gotten to yet. But I find it unfortunate that what seems to be a lot of perfectly sound history and policy analysis is wrapped in a politics of professional identity that I find very counterproductive. The last paragraph of the book is:

Black box services are often wondrous to behold, but our black-box society has become dangerously unstable, unfair, and unproductive. Neither New York quants nor California engineers can deliver a sound economy or a secure society. Those are the tasks of a citizenry, which can perform its job only as well as it understands the stakes.

Implicitly, New York quants and California engineers are not citizens, to Pasquale, a law professor based in Maryland. Do all real citizens live around Washington, DC? Are they all lawyers? If the government were to start providing public information services, either by hosting them themselves or by funding open source alternatives, would he want everyone designing these open algorithms (who would be quants or engineers, I presume) to move to DC? Do citizens really need to understand the stakes in order to get this to happen? When have citizens, en masse, understood anything, really?

Based on what I’ve read so far, The Black Box Society is an expression of a lack of trust in the social and economic power associated with quantification and computing that took off in the past few dot-com booms. Since expressions of distrust of these industries are nothing new, one might wonder (under the influence of Foucault) how the quantified order and the critique of the quantified order manage to coexist and recreate a system of discipline that includes both and maintains its power as a complex of superficially agonistic forces. I give sincere credit to Pasquale for advocating both serious income redistribution and public investment in open technology as ways of disrupting that order. But when he falls into the trap of engendering partisan distrust, he loses my confidence.

by Sebastian Benthall at November 20, 2015 03:01 PM

November 17, 2015

Ph.D. student

“Transactions that are too complex…to be allowed to exist.” cf @FrankPasquale

I stand corrected; my interpretation of Pasquale in my last post was too narrow. Having completed Chapter One of The Black Box Society (TBBS), Pasquale does not take the naive view that all organizational secrecy should be abolished, as I might have once. Rather, his is a more nuanced perspective.

First, Pasquale distinguishes between three “critical strategies for keeping black boxes closed”, or opacity, “[Pasquale’s] blanket term for remediable incomprehensibility”:

  • Real secrecy “establishes a barrier between hidden content and unauthorized access to it.”
  • Legal secrecy “obliges those privy to certain information to keep it secret.”
  • Obfuscation “involves deliberate attempts at concealment when secrecy has been compromised.”

Cutting to the chase by looking at the Pasquale and Bracha “Federal Search Commission” (2008) paper that a number of people have recommended to me, it appears (in my limited reading so far) that Pasquale’s position is not that opacity in general is a problem (because there are of course important uses of opacity that serve the public interest, such as confidentiality). Rather, despite these legitimate uses of opacity there is also a need for public oversight, perhaps through federal regulation. The Federal Government can serve the public interest better than the imperfect market for search can on its own.

There is perhaps a tension between this 2008 position and what is expressed in Chapter 1 of TBBS in the section “The One-Way Mirror,” which gets I dare say a little conspiratorial about The Powers That Be. “We are increasingly ruled by what former political insider Jeff Connaughton called ‘The Blob,’ a shadowy network of actors who mobilize money and media for private gain, whether acting officially on behalf of business or of government.” Here, Pasquale appears to espouse a strong theory of regulatory capture from which, were we to insist on consistency, a Federal Search Commission would presumably not be exempt. Hence perhaps the role of TBBS in stirring popular sentiment to put political pressure on the elites of The Blob.

Though it is a digression I will note, since it is a pet peeve of mine, Pasquale’s objection to mathematized governance:

“Technocrats and managers cloak contestable value judgments in the garb of ‘science’: thus the insatiable demand for mathematical models that reframe the subtle and subjective conclusions (such as the worth of a worker, service, article, or product) as the inevitable dictate of salient, measurable data. Big data driven decisions may lead to unprecedented profits. But once we use computation not merely to exercise power over things, but also over people, we need to develop a much more robust ethical framework than ‘the Blob’ is now willing to entertain.”

That this sentiment that scientists should not be making political decisions has been articulated since at least as early as Hannah Arendt’s 1958 The Human Condition is an indication that there is nothing particular to Big Data about this anxiety. And indeed, if we think about ‘computation’ as broadly as mathematized, algorithmic thought, then its use for control over people-not-just-things has an even longer history. Lukacs’ 1923 “Reification and the Consciousness of the Proletariat” is a profound critique of Tayloristic scientific factory management that is getting close to being a hundred years old.

Perhaps a robust ethics of quantification has been in the works for some time as well.

Moving past this, by the end of Chapter 1 of TBBS Pasquale gives us the outline of the book and the true crux of his critique, which is the problem of complexity. Whether or not regulators are successful in opening the black boxes of Silicon Valley or Wall Street (or the branches of government that are complicit with Silicon Valley and Wall Street), their efforts will be in vain if what they get back from the organizations they are trying to regulate is too complex for them to understand.

Following the thrust of Pasquale’s argument, we can see that for him, complexity is the result of obfuscation. It is therefore a source of opacity, which as we have noted he has defined as “remediable incomprehensibility”. Pasquale promises to, by the end of the book, give us a game plan for creating, legally, the Intelligible Society. “Transactions that are too complex to explain to outsiders may well be too complex to be allowed to exist.”

This gets us back to the question we started with, which is whether this complexity and incomprehensibility is avoidable. Suppose we were to legislate against institutional complexity: what would that cost us?

Mathematical modeling gives us the tools we need to analyze these kinds of questions. Information theory, the theory of computation, and complexity theory are all foundational to the technology of telecommunications and data science. People with expertise in understanding complexity and the limits of our ability to control it are precisely the people who make the ubiquitous algorithms on which society depends today. But this kind of theory rarely makes it into “critical” literature such as TBBS.
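
To make this concrete, here is a minimal sketch (my own, not Pasquale’s) of how information theory can operationalize institutional “complexity” as description length: a rule that compresses well is, in this narrow sense, simple to explain, while one full of ad hoc clauses is not. The two “rules” below are invented stand-ins.

    import zlib

    def complexity_bits(text):
        """Upper bound on a text's algorithmic complexity: its compressed size in bits."""
        return 8 * len(zlib.compress(text.encode("utf-8"), 9))

    # A simple, repetitive rule vs. a synthetic CDO-style tangle of clauses.
    simple_rule = "Pay 5% interest annually. " * 80
    complex_rule = " ".join(
        f"Tranche {i} pays {(i * 7) % 13}% if index {(i * 31) % 97} defaults."
        for i in range(40)
    )

    print(complexity_bits(simple_rule))   # small: the repetition compresses away
    print(complexity_bits(complex_rule))  # several times larger: little shared structure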

I’m drawn to the example of The Social Media Collective’s Critical Algorithm Studies Reading List, which lists Pasquale’s TBBS among many other works, because it opens with precisely the disciplinary gatekeeping that creates what I fear is the blind spot I’m pointing to:

This list is an attempt to collect and categorize a growing critical literature on algorithms as social concerns. The work included spans sociology, anthropology, science and technology studies, geography, communication, media studies, and legal studies, among others. Our interest in assembling this list was to catalog the emergence of “algorithms” as objects of interest for disciplines beyond mathematics, computer science, and software engineering.

As a result, our list does not contain much writing by computer scientists, nor does it cover potentially relevant work on topics such as quantification, rationalization, automation, software more generally, or big data, although these interests are well-represented in the reference sections of the essays themselves.

This area is growing in size and popularity so quickly that many contributions are popping up without reference to work from disciplinary neighbors. One goal for this list is to help nascent scholars of algorithms to identify broader conversations across disciplines and to avoid reinventing the wheel or falling into analytic traps that other scholars have already identified.

This reading list is framed as a tool for scholars, which it no doubt is. But if contributors to this field of scholarship aspire, as Pasquale does, for “critical algorithms studies” to have real policy ramifications, then this disciplinary wall must fall (as I’ve argued elsewhere).

by Sebastian Benthall at November 17, 2015 08:45 PM

November 15, 2015

Ph.D. student

organizational secrecy and personal privacy as false dichotomy cf @FrankPasquale

I’ve turned from page 2 to page 3 of The Black Box Society (I can be a slow reader). Pasquale sets up the dichotomy on which the drama of the book hinges like so:

But while powerful businesses, financial institutions, and government agencies hide their actions behind nondisclosure agreements, “proprietary methods”, and gag rules, our own lives are increasingly open books. Everything we do online is recorded; the only questions left are to whom the data will be available, and for how long. Anonymizing software may shield us for a little while, but who knows whether trying to hide isn’t the ultimate red flag for watchful authorities? Surveillance cameras, data brokers, sensor networks, and “supercookies” record how fast we drive, what pills we take, what books we read, what websites we visit. The law, so aggressively protective of secrecy in the world of commerce, is increasingly silent when it comes to the privacy of persons.

That incongruity is the focus of this book.

This is a rhetorically powerful paragraph and it captures a lot of the trepidation people have about the power of large organizations relative to themselves.

I have been inclined to agree with this perspective for a lot of my life. I used to be the kind of person who thought Everything Should Be Open. Since then, I’ve developed what I think is a more nuanced view of transparency: some secrecy is necessary. It can be especially necessary for powerful organizations and people.

Well, it depends on the physical properties of information. (Here is an example of how a proper understanding of the mechanics of information can support the transcendent project as opposed to a merely critical project).

Any time you interact with something or somebody else in a meaningful way, you affect each other’s state in probabilistic space. That means there has been some kind of flow of information. If an organization interacts with a lot of people, it is going to absorb information about a lot of people. Recording this information as ‘data’ is something that has been done for a long time, because that is what allows organizations to do intelligent things vis-à-vis the people they interact with. So businesses, financial institutions, and governments recording information about people is nothing new.
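
To put that claim in more formal terms, here is a minimal sketch (mine, not Pasquale’s) using Shannon’s measure: if an interaction makes an organization’s record statistically dependent on a person’s state, information has flowed, and mutual information quantifies how much. The joint distributions below are toy assumptions.

    import numpy as np

    def mutual_information_bits(joint):
        """I(X;Y) in bits, computed from a joint probability table over (X, Y)."""
        px = joint.sum(axis=1, keepdims=True)  # marginal of the person's state
        py = joint.sum(axis=0, keepdims=True)  # marginal of the organization's record
        nz = joint > 0
        return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

    # After a meaningful interaction, the record matches the state 90% of the time.
    interaction = np.array([[0.45, 0.05],
                            [0.05, 0.45]])
    # With no interaction, the record is independent of the state.
    no_interaction = np.array([[0.25, 0.25],
                               [0.25, 0.25]])

    print(mutual_information_bits(interaction))     # ~0.53 bits flowed
    print(mutual_information_bits(no_interaction))  # 0.0 bits: nothing flowed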

Pasquale suggests that this recording is a threat to our privacy, and that the secrecy of the organizations that do the recording gives them power over us. But this is surely a false dichotomy. Why? Because if an organization records information about a lot of people, and then doesn’t maintain some kind of secrecy, then that information is no longer private! To, like, everybody else. In other words, maintaining secrecy is one way of ensuring confidentiality, which is surely an important part of privacy.

I wonder what happens if we continue to read The Black Box Society with this link between secrecy, confidentiality, and privacy in mind.

by Sebastian Benthall at November 15, 2015 09:43 PM

November 14, 2015

Ph.D. student

Marcuse on the transcendent project

Perhaps you’ve had this moment: it’s in the wee hours of the morning. You can’t sleep. The previous day was another shock to your sense of order in the universe and your place in it. You’ve begun to question your political ideals, your social responsibilities. Turning aside you see a book you read long ago that you remember gave you a sense of direction–a direction you have since repudiated. What did it say again?

I’m referring to Herbert Marcuse’s One-Dimensional Man, published in 1964. Whitfield in Dissent has a great summary of Marcuse’s career–a meteoric rise, a fast fall. He was a student of Heidegger and the Frankfurt School and applied that theory in a timely way in the 60’s.

My memory of Marcuse had been reduced to the Frankfurt School themes–technology transforming all scientific inquiry into operationalization and the resulting cultural homogeneity. I believe now that I had forgotten at least two important points.

The first is the notion of technological rationality–that pervasive technology changes what people think of as rational. This is different from instrumental rationality, the means-ends rationality of an agent, which Frankfurt School thinkers tend to believe drives technological development and adoption. Rather, this is a claim about the effect of technology on society’s self-understanding. An example might be how the ubiquity of Facebook has changed our perception of personal privacy.

So Marcuse is very explicit about how artifacts have politics in a very thick sense, though he is rarely cited in contemporary scholarly discourse on the subject. Credit for this concept typically goes to Langdon Winner, citing his 1980 publication “Do Artifacts Have Politics?” Fred Turner’s From Counterculture to Cyberculture gives only the briefest of mentions to Marcuse, despite his impact on counterculture and his concern with technology. I suppose this means the New Left, associated with Marcuse, had little to do with the emergence of cyberculture.

More significant for me than this point was a second one: Marcuse’s outline of the transcendent project. I’ve been thinking about this recently because I’ve met a Kantian at Berkeley and this has refreshed my interest in transcendental idealism and its intellectual consequences. In particular, Foucault described himself as one following Kant’s project, and in our discussion of Foucault in Classics it became discursively clear, in a moment I may never forget, precisely how well Foucault succeeded in this.

The revealing question was this. For Foucault, all knowledge exists in a particular system of discipline and power. Scientific knowledge orders reality in such and such a way, depends for its existence on institutions that establish the authority of scientists, etc. Fine. So, one asks, what system of power does Foucault’s knowledge participate in?

The only available answer is: a new one, where Foucauldians critique existing modes of power and create discursive space for modes of life beyond existing norms. Foucault’s ideas are tools for transcending social systems and opening new social worlds.

That’s great for Foucault and we’ve seen plenty of counternormative social movements make successful use of him. But that doesn’t help with the problems of technologization of society. Here, Marcuse is more relevant. He is also much more explicit about his philosophical intentions in, for example, this account of the transcendent project:

(1) The transcendent project must be in accordance with the real possibilities open at the attained level of the material and intellectual culture.

(2) The transcendent project, in order to falsify the established totality, must demonstrate its own higher rationality in the threefold sense that

(a) it offers the prospect of preserving and improving the productive achievements of civilization;

(b) it defines the established totality in its very structure, basic tendencies, and relations;

(c) its realization offers a greater chance for the pacification of existence, within the framework of institutions which offer a greater chance for the free development of human needs and faculties.

Obviously, this notion of rationality contains, especially in the last statement, a value judgment, and I reiterate what I stated before: I believe that the very concept of Reason originates in this value judgment, and that the concept of truth cannot be divorced from the value of Reason.

I won’t apologize for Marcuse’s use of the dialect of German Idealism, because if I had my way the kinds of concepts he employs, and the capitalization of the word Reason, would come back into common use in educated circles. Graduate school has made me extraordinarily cynical, but not so cynical that it has shaken my belief that an ideal–really any ideal, but in particular one as robust as Reason–is important for making society not suck, and that it’s appropriate to transmit such an ideal (and perhaps only this ideal) through the institution of the university. These are old-fashioned ideas and honestly I’m not sure how I acquired them myself. But this is a digression.

My point is that in this view of societal progress, society can improve itself, but only by transcending itself and in its moment of transcendence freely choosing an alternative that expands humanity’s potential for flourishing.

“Peachy,” you say. “Where’s the so what?”

Besides that I think the transcendent project is a worthwhile project that we should collectively try to achieve? Well, there’s this: I think that most people have given up on the transcendent project and that this is a shame. Specifically, I’m disappointed in the critical project, which has since the 60’s become enshrined within the social system, for no longer aspiring to transcendence. Criticality has, alas, been recuperated. (I have in mind here, for example, what has been called critical algorithm studies)

And then there’s this: Marcuse’s insight into the transcendent project is that it has to “be in accordance with the real possibilities open at the attained level of the material and intellectual culture” and also that “it defines the established totality in its very structure, basic tendencies, and relations.” It cannot transcend anything without first including all of what is there. And this is precisely the weakness of this critical project as it now stands: that it excludes the mathematical and engineering logic that is at the heart of contemporary technics and thereby, despite its lip service to giving technology first-class citizenship within its Actor Network, in fact fails to “define the established totality in its very structure, basic tendencies, and relations.” There is a very important body of theoretical work at the foundation of computer science and statistics, the theory that grounds the instrumental force and also the systemic ubiquity of information technology and now data science. The continued crises of our now very, very late modern capitalism are due partly, IMHO, to our failure to dialectically synthesize the hegemonic computational paradigm, which is not going to be defeated by ‘refusal’, with expressions of human interest that resist it.

I’m hopeful because recently I’ve learned about new research agendas that may be on their way to accomplishing just this. I doubt they will take on the perhaps too grandiose mantle of “the transcendent project.” But I for one would be glad if they did.

by Sebastian Benthall at November 14, 2015 06:23 PM

Is the opacity of governance natural? cf @FrankPasquale

I’ve begun reading Frank Pasquale’s The Black Box Society on the recommendation that it’s a good place to start if I’m looking to focus a defense of the role of algorithms in governance.

I’ve barely started and already found lots of juicy material. For example:

Gaps in knowledge, putative and real, have powerful implications, as do the uses that are made of them. Alan Greenspan, once the most powerful central banker in the world, claimed that today’s markets are driven by an “unredeemably opaque” version of Adam Smith’s “invisible hand,” and that no one (including regulators) can ever get “more than a glimpse at the internal workings of the simplest of modern financial systems.” If this is true, libertarian policy would seem to be the only reasonable response. Friedrich von Hayek, a preeminent theorist of laissez-faire, called the “knowledge problem” an insuperable barrier to benevolent government intervention in the economy.

But what if the “knowledge problem” is not an intrinsic aspect of the market, but rather is deliberately encouraged by certain businesses? What if financiers keep their doings opaque on purpose, precisely to avoid and confound regulation? That would imply something very different about the merits of deregulation.

The challenge of the “knowledge problem” is just one example of a general truth: What we do and don’t know about the social (as opposed to the natural) world is not inherent in its nature, but is itself a function of social constructs. Much of what we can find out about companies, governments, or even one another, is governed by law. Laws of privacy, trade secrecy, the so-called Freedom of Information Act–all set limits to inquiry. They rule certain investigations out of the question before they can even begin. We need to ask: To whose benefit?

There are a lot of ideas here. Trying to break them down:

  1. Markets are opaque.
  2. If markets are naturally opaque, that is a reason for libertarian policy.
  3. If markets are not naturally opaque but are opaque on purpose, that’s a reason to regulate in favor of transparency.
  4. As a general social truth, the social world is not naturally opaque but rather opaque or transparent because of social constructs such as law.

We are meant to conclude that markets should be regulated for transparency.

The most interesting claim to me is what I’ve listed as the fourth one, as it conveys a worldview that is both disputable and which carries with it the professional biases we would expect of the author, a Professor of Law. While there are certainly many respects in which this claim is true, I don’t yet believe it has the force necessary to carry the whole logic of this argument. I will be particularly attentive to this point as I read on.

The danger I’m on the lookout for is one where the complexity of the integration of society, which following Beniger I believe to be a natural phenomenon, is treated as a politically motivated social construct and therefore something that should be changed. It is really only the part after the “and therefore” which I’m contesting. It is possible for politically motivated social constructs to be natural phenomena. All institutions have winners and losers relative to their power. Who would a change in policy towards transparency in the market benefit? If opacity is natural, it would shift the opacity to some other part of society, empowering a different group of people. (Possibly lawyers).

If opacity is necessary, then perhaps we could read The Black Box Society as an expression of the general problem of alienation. It is way premature for me to attribute this motivation to Pasquale, but it is a guiding hypothesis that I will bring with me as I read the book.

by Sebastian Benthall at November 14, 2015 07:22 AM

November 12, 2015

Ph.D. student

3D Printing En Plein Air

3D Printing En Plein Air from Laura Devendorf on Vimeo.

Drawing on my work with Being the Machine, this project explores the role of place in digital fabrication. With this project, I hope to take a step back from the relationship between hand and machine to consider the role of the entire body-in-space and the machine. I like to think of it as a way to bring generative, site-specific, and instruction art into conversation with one another.

The system consists of a portable easel, laser guide, and mobile app. The mobile app converts images of the environment into 3D models to be fabricated. The laser guide draws the motions a 3D printer would make to create the model and invites the maker to follow along by hand. All building materials, hardware, and components fold into the portable easel, in an effort to make it easy to bring digital manufacturing workflows into unlikely places. More experiments to follow.
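
(A hypothetical sketch of that conversion step, assuming nothing about the actual implementation: treat image brightness as height and slice it into the layer passes a printer, or the guiding laser, would trace.)

    import numpy as np

    def image_to_layers(heightmap, layer_height=0.1):
        """Slice a 2D heightmap (values in [0, 1]) into printer-style layers.

        Returns one boolean mask per layer; each mask marks the cells the
        print head (or the guiding laser) would pass over at that height.
        """
        n_layers = int(heightmap.max() / layer_height)
        return [heightmap > i * layer_height for i in range(n_layers)]

    # Toy stand-in for a photo of the environment: a bright blob in a dark scene.
    y, x = np.mgrid[-1:1:64j, -1:1:64j]
    photo = np.exp(-4 * (x**2 + y**2))  # brightness stands in for height

    for i, mask in enumerate(image_to_layers(photo, layer_height=0.2)):
        print(f"layer {i}: {mask.sum()} cells to trace")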

This project was completed as part of the Autodesk Artist-in-Residence program. The technical details and building “how-to”s are contained in this Instructable.

by admin at November 12, 2015 05:45 PM

Ph.D. student


  • Apparently a lot of the economics/complex systems integration work that I wish I were working on has already been done by Sam Bowles. I’m particularly interested in what he has to say about inequality, though lately I’ve begun to think inequality is inevitable. I’d like this work to prove me wrong. His work on alternative equilibria in institutional economics also sounds good. I’m looking for ways to formally model Foucauldian social dynamics and this literature seems like a good place to start.
  • A friend of a friend who works on computational modeling of quantum dynamics has assured me that to physicists quantum uncertainty is qualitatively different from subjective uncertainty due to, e.g., chaos. This is disappointing because I’ve found the cleanliness of a thoroughgoing Bayesianism about probability very compelling. However, it does suggest a link between chaos theory and logical uncertainty that is perhaps promising.
  • The same person pointed out insightfully that one of the benefits of capitalism is that it makes it easier to maintain one’s relative social position. Specifically, it is easier to maintain wealth than it is to maintain one’s physical capacity to defend oneself from violence. And it’s easier to maintain capital (reinvested wealth) than it is to maintain raw wealth (i.e. cash under the mattress). So there is something inherently conservative about capitalism’s effect on the social order, since it comes with rule of law to protect investments.
  • I can see all the traffic to it but I still can’t figure out why this post about Donna Haraway is now my most frequently visited blog post. I wish everyone who read it would read the Elizabeth Anderson SEP article on Feminist Epistemology and Philosophy of Science. It’s superb.
  • The most undercutting thing to Marxism and its intellectual descendants would be the conclusion that market dynamics are truly based in natural law and are not reified social relations. Thesis: Pervasive sensing and computing might prove once and for all that these market dynamics are natural laws. Antithesis: It might prove once and for all that they are not natural laws. Question: Is any amount of empirical data sufficient to show that social relations are or are not natural, or is there something contradictory in the sociological construction of knowledge that would prevent it from having definitive conclusions about its own collective consciousness? (Insert Gödel/Halting Problem intuition here) ANSWER: The Big Computer does not have to participate in collective intelligence. It is all-knowing. It is all-seeing. It renders social relations in its image. Hence, capitalism can be undone by giving capital so much autonomous control of the economy that the social relations required for it are obsolete. But what next?
  • With justice so elusive, science becomes a path to Gnosticism and other esoterica.

by Sebastian Benthall at November 12, 2015 05:18 AM

November 06, 2015

Ph.D. student

functional determinism or overfitting to chaos

It’s been a long time since I read any Foucault.

The last time I tried, I believe the writing made me angry. He jumps around between anecdotes, draws spurious conclusions. At the time I was much sharper and more demanding and would not tolerate a fallacious logical inference.

It’s years later and I am softer and more flexible. I’m finding myself liking Foucault more, even compelled by his arguments. But I think I was just able to catch myself believing something I shouldn’t have, and needed to make a note.

Foucault brilliantly takes a complex phenomenon–like a prison and the society around it–and traces how its rhetoric, its social effects, etc. all reinforce each other. He describes a complex, and convinces the reader that the complex is a stable unit in society. Delinquency is not the failure of prison, it is the success of prison, because it is a useful category of illegality made possible by the prison. Etc.

I believe this qualifies as “rich qualitative analysis.” Qualitative work has lately been lauded for its “richness”, which is an interesting term. I’m thinking, for example, of the Human Centered Data Science CfP for CSCW 2016.

With this kind of work–is Foucault a historian? a theorist?–there is always the question of generalizability. What makes Foucault’s account of prisons compelling to me today is that it matches my conception of how prisons still work. I have heard a lot about prisons. I watched The Wire. I know about the cradle-to-prison system.

No doubt these narratives were partly inspired, enabled, by Foucault. I believe them, not having any particular expertise in crime, because I have absorbed an ideology that sees the systemic links between these social forces.

Here is my doubt: what if there are even more factors in play than have been captured by Foucault or a prevailing ideology of crime? What if prisons, paradoxically, both create delinquency and also reform criminals? What if social reality is not merely poststructural, but unstructured, and the narratives we bring to bear on it in order to understand it are rich because they leave out complexity, not because they bring more of it in?

Another example: the ubiquitous discourse on privilege and its systemic effect of reproducing inequality. We are told to believe in systems of privilege–whiteness, wealth, masculinity, and so on. I will confess: I am one of the Most Privileged Men, and so I can see how these forms of privilege reinforce each other (or not). But I can also see variations to this simplistic schema, alterations, exceptions.

And so I have my suspicions. Inequality is reproduced; we know this because the numbers (about income, for example) are distributed in bizarre proportions. 1% owns 99%! It must be because of systemic effects.

But we know now that many of the distributions we once believed were power law distributions created by generative processes such as preferential attachment are really log normal distributions, which are quite different. This is an empirically detectable difference whose implications are quite profound.

This is because a log normal distribution is created not by any precise “rich get richer” dynamic, but rather by any process in which random variables are multiplied together. As a result, you get extreme inequality in a distribution simply by virtue of how the various random factors contributing to it are mathematically combined (multiplicatively), as opposed to any precise determination of the factors upon each other.
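
A quick simulation makes the point (my sketch, not from the post): multiply together many small, independent random advantages and the outcomes come out log-normally distributed, heavily skewed, with no “rich get richer” feedback anywhere in the mechanism.

    import numpy as np

    rng = np.random.default_rng(0)

    n_people, n_factors = 100_000, 50
    # Each person's outcome is the product of 50 independent multiplicative
    # factors (talent, luck, timing, location...), each modest on its own.
    factors = rng.uniform(0.8, 1.25, size=(n_people, n_factors))
    outcomes = factors.prod(axis=1)

    # log(outcome) is a sum of independent terms, so by the central limit
    # theorem it is roughly normal; outcome itself is roughly log-normal.
    top_1_percent = np.sort(outcomes)[-n_people // 100:]
    print(f"share held by the top 1%: {top_1_percent.sum() / outcomes.sum():.1%}")
    print(f"mean-to-median ratio:     {outcomes.mean() / np.median(outcomes):.2f}")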

The implication of this is that no particular reform is going to remove the skew from the distribution as long as people are not prevented from efficiently using their advantage–whatever it is–to get more advantage. Rather, reforms that are not on the extreme end (such as reparations or land reform) are unlikely to change the equity outcome except from the politically motivated perspective of an interest group.

I was pretty surprised when I figured this out! The implication is that a lot of things that look very socially structured are actually explained by basic mathematical principles. I’m not sure what the theoretical implications of this are but I think there’s going to be a chapter in my dissertation about it.

by Sebastian Benthall at November 06, 2015 03:38 AM

November 04, 2015

Ph.D. student

repopulation as element in the stability of ideology

I’m reading the fourth section of Foucault’s Discipline and Punish, about ‘Prison’, for the first time, for I School Classics.

A striking point made by Foucault is that while we may think there is a chronology of the development of penitentiaries whereby they are designed, tested, critiqued, reformed, and so on, until we get a progressively improved system, this is not the case. Rather, at the time of Foucault’s writing, the logic of the penitentiary and its critiques had happily coexisted for a hundred and fifty years. Moreover, the failures of prisons–their contribution to recidivism and the education and organization of delinquents, for example–could only be “solved” by the reactivation of the underlying logic of prisons–as environments of isolation and personal transformation. So prison “failure” and “solution”, as well as (often organized) delinquency and recidivism, in addition to the architecture and administration of prison, are all part of the same “carceral system” which endures as a complex.

One wonders why the whole thing doesn’t just die out. One explanation is repopulation. People are born, live for a while, reproduce, live a while longer, and die. In the process, they must learn through education and experience. It’s difficult to rush personal growth. Hence, systematic errors that are discovered through 150 years of history are difficult to pass on, as each new generation will be starting from inherited priors (in the Bayesian sense) which may under-rank these kinds of systemic effects.
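
To make the Bayesian gloss concrete, here is a toy model (mine, not Foucault’s, and all the numbers are invented): a generation inherits a confident prior that an institution works, its lived evidence only partially erodes that confidence, and the next generation restarts from the prior rather than from the posterior.

    def posterior_mean(prior_works, prior_fails, works, fails):
        """Mean of the Beta posterior after observing Bernoulli evidence."""
        return (prior_works + works) / (prior_works + prior_fails + works + fails)

    # Inherited prior: 50 pseudo-observations that "prison reforms" vs. 5 that it fails.
    prior_works, prior_fails = 50, 5
    # One generation's lived evidence: 5 reforms, 25 recidivists.
    works, fails = 5, 25

    print(f"inherited belief:         {prior_works / (prior_works + prior_fails):.0%}")   # 91%
    print(f"end-of-generation belief: {posterior_mean(prior_works, prior_fails, works, fails):.0%}")  # 65%
    # The next generation starts again from Beta(50, 5), not Beta(55, 30),
    # so the systemic lesson is never accumulated across generations.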

In effect, our cognitive limitations as human beings are part of the sociotechnical systems in which we play a part. And though it may be possible to grow out of such a system, there is a constant influx of the younger and more naive who can fill the ranks. Youth captured by ideology can be moved by promises of progress or denunciations of injustice or contamination, and thus new labor is supplied to turn the wheels of institutional machinery.

Given the environmental unsustainability of modern institutions despite their social stability under conditions of repopulation, one has to wonder: whatever happened to the phenomenon of eco-terrorism?

by Sebastian Benthall at November 04, 2015 10:03 PM

October 30, 2015

Ph.D. student

cross-cultural links between rebellion and alienation

In my last post I noted that the contemporary American problem that the legitimacy of the state is called into question by distributional inequality is a specifically liberal concern based on certain assumptions about society: that it is a free association of producers who are otherwise autonomous.

Looking back to Arendt, we can find the roots of modern liberalism in the polis of antiquity, where democracy was based on the free association of landholding men whose estates gave them autonomy from each other. Since then, economics, the science that once concerned itself with managing the household (oikos, house + nomos, management), has been elevated to the primary concern of the state and the organizational principle of society. One way to see the conflict between liberalism and social inequality is as the tension between the ideal of freely associating citizens who together accomplish deeds and the reality of societal integration with its impositions on personal freedom and unequal functional differentiation.

Historically, material autonomy was a condition for citizenship. The promise of liberalism is universal citizenship, or political agency. At first blush, to accomplish this, either material autonomy must be guaranteed for all, or citizenship must be decoupled from material conditions altogether.

The problem with this model is that societal agency, as opposed to political agency, is always conditioned both materially and by society (Does this distinction need to be made?). The progressive political drive has recognized this with its unmasking and contestation of social privilege. The populist right wing political drive has recognized this with its accusations that the formal political apparatus has been captured by elite politicians. Those aspects of citizenship that are guaranteed as universal–the vote and certain liberties–are insufficient for the effective social agency on which political power truly depends. And everybody knows it.

This narrative is grounded in the experience of the United States and, going back, the history of “The West”. It appears to be a perennial problem over cultural time. There is some evidence that it is also a problem across cultural space. Hannah Arendt argues in On Violence (1969) that the attraction of using violence against a ruling bureaucracy (which is the political hypostatization of societal alienation more generally) is cross-cultural.

“[T]he greater the bureaucratization of public life, the greater will be the attraction of violence. In a fully developed bureaucracy there is nobody left with whom one can argue, to whom one can present grievances, on whom the pressures of power can be exerted. Bureaucracy is the form of government in which everybody is deprived of political freedom, of the power to act; for the rule by Nobody is not no-rule, and where all are equally powerless we have tyranny without a tyrant. The crucial feature of the student rebellions around the world is that they are directed everywhere against the ruling bureaucracy. This explains what at first glance seems so disturbing–that the rebellions in the East demand precisely those freedoms of speech and thought that the young rebels in the West say they despise as irrelevant. On the level of ideologies, the whole thing is confusing: it is much less so if we start from the obvious fact that the huge party machines have succeeded everywhere in overruling the voice of citizens, even in countries where freedom of speech and association is still intact.”

The argument here is that the moral instability resulting from alienation from politics and society is a universal problem of modernity that transcends ideology.

This is a big problem if we keep turning decision-making authority over to algorithms.

by Sebastian Benthall at October 30, 2015 04:42 PM

October 29, 2015

Ph.D. student

inequality and alienation in society

While helpful for me, this blog post got out of hand. A few core ideas from it:

A prerequisite for being a state is being a stable state. (cf. Bourgine and Varela on autonomy)

A state may be stable (“power stable”) without being legitimate (“inherently stable” or “morally stable”).

State and society are intertwined and I’ll just conflate them here.

Under liberal ideology, society is a society of individual producers, and the purpose of the state is to guarantee “liberty, property, and equality.”

So specifically, (e.g. economic) inequality is a source of moral instability for liberalism.

Whether or not moral instability leads to destabilization of the state is a matter of empirical prediction. Using that as a way of justifying liberalism in the first place is probably a non-starter.

A different but related problem is the problem of alienation. Alienation happens when people don’t feel like they are part of the institutions that have power over them.

[Hegel’s philosophy is a good intellectual starting point for understanding alienation because Hegel’s logic was explicitly mereological, meaning about the relationship between parts and wholes.]

Liberal ideology effectively denies that individuals are part of society and therefore relies on equality for its moral stability.

But there are some reasons to think that this is untenable:

As society scales up, we require more and more apparatus to manage the complexity of societal integration. This is where power lies, and it creates a ruling bureaucratic or (now, increasingly) technical class. In other words, it may be impossible for society to be both scalable and equal, in terms of the distribution of goods.

Moreover, the more “technical” the apparatus of social integration is, the more remote it is from the lived experiences of society. As a result, we see more alienation in society. One way to think about alienation is inequality in the distribution of power or autonomy. So popular misgivings about how control has been ceded to algorithms are an articulation of alienation, though that word is out of fashion.

Inequality is a source of moral instability under liberal ideology. Under what conditions is alienation a source of moral instability?

by Sebastian Benthall at October 29, 2015 04:19 PM

Ph.D. alumna

New book: Participatory Culture in a Networked Era by Henry Jenkins, Mimi Ito, and me!

In 2012, Henry Jenkins approached Mimi Ito and me with a crazy idea that he’d gotten from talking to the folks at Polity. Would we like to sit down and talk through our research and use that as the basis of a book? I couldn’t think of anything more awesome than spending time with two of my mentors and teasing out the various strands of our interconnected research. I knew that there were places where we were aligned and places where we disagreed or, at least, where our emphases provided different perspectives. We’d all been running so fast in our own lives that we hadn’t had time to get to that level of nuance and this crazy project would be the perfect opportunity to do precisely that.

We started by asking our various communities what questions they would want us to address. And then we sat down together, face-to-face, for two days at a time over a few months. And we talked. And talked. And talked. In the process, we started identifying themes and how our various areas of focus were woven together.

Truth be told, I never wanted it to end. Throughout our conversations, I kept flashing back to my years at MIT when Henry opened my eyes to fan culture and a way of understanding media that seeped deep inside my soul. I kept remembering my trips to LA where I’d crash in Mimi’s guest room, talking research late into the night and being woken in the early hours by a bouncy child who never understood why I didn’t want to wake up at 6AM. But above everything else, the sheer delight of brainjamming with two people whose ideas and souls I knew so well was ecstasy.

And then the hard part started. We didn’t want this project to be the output of self-indulgence and inside baseball. We wanted it to be something that helped others see how research happens, how ideas form, and how collaborations and disagreements strengthen seemingly independent work. And so we started editing. And editing. And editing. Getting help editing. And then editing some more.

The result is Participatory Culture in a Networked Era and it is unlike any project I’ve ever embarked on or read. The book is written as a conversation and it was the product of a conversation. Except we removed all of the umms and uhhs and other annoying utterances and edited it in an attempt to make the conversation make sense for someone who is trying to understand the social and cultural contexts of participation through and by media. And we tried to weed out the circular nature of conversation as we whittled down dozens of hours of recorded conversation into a tangible artifact that wouldn’t kill too many trees.

What makes this book neat is that it sheds light on all of the threads of conversation that helped the work around participatory culture, connected learning, and networked youth practices emerge. We wanted to make the practice of research as visible as our research and reveal the contexts in which we are operating alongside our struggles to negotiate different challenges in our work. If you’re looking for classic academic output, you’re going to hate this book. But if you want to see ideas in context, it sure is fun. And in the conversational product, you’ll learn new perspectives on youth practices, participatory culture, learning, civic engagement, and the commercial elements of new media.

OMG did I fall in love with Henry and Mimi all over again doing this project. Seeing how they think just tickles my brain in the best ways possible. And I suspect you’ll love what they have to say too.

The book doesn’t officially release for a few more weeks, but word on the street is that copies of this book are starting to ship. Check it out!

by zephoria at October 29, 2015 03:21 PM

October 27, 2015

Ph.D. student

We need more Sittlichkeit: Vallier on Piketty and Rawls; Cyril on Surveillance and Democracy; Taylor on Hegel

Kevin Vallier’s critique of Piketty in Bleeding Heart Libertarians (funny name) is mainly a criticism of the idea that economic inequality leads to political instability.

In the course of his rebuttal of Piketty, he brings in some interesting Rawlsian theory which is more broadly important. He distinguishes between power stability, the stability of a state that maintains itself by forcibly preventing resistance through Hobbesian power, and “inherent stability”, or moral stability (Vallier’s term), which is “stability for the right reasons”–stability that comes from the state’s comportment with our sense of justice.

There are lots of other ways of saying the same thing in the literature. We can ask if justice is de facto or de jure. We can distinguish, as does Hannah Arendt in On Violence, between power (which she maintains is only what’s rooted in collective action) and violence (which is, I guess, what Vallier would call ‘Hobbesian power’). In a perhaps more subtle move, we can with Habermas ask what legitimizes the power of the state.

The left-wing zeitgeist at the moment is emphasizing inequality as a problem. While Piketty argues that inequality leads to instability, it’s an open question whether this is in fact the case. There’s no particular reason why a Hobbesian sovereign with swarms of killer drones couldn’t maintain its despotic rule through violence. Probably the real cause for complaint is that this is illegitimate power (if you’re Habermas), or violence rather than power (if you’re Arendt), or moral instability (if you’re Rawls).

That makes sense. Illegitimate power is the kind of power that one would complain about.

Ok, so now cut to Malkia Cyril’s talk at CFP tying technological surveillance to racism. What better illustration of the problems of inequality in the United States than the history of racist policies towards black people? Cyril acknowledges the benefits of Internet technology in providing tools for activists but suspects that now technology will be used by people in power to maintain power for the sake of profit.

The fourth amendment, for us, is not and has never been about privacy, per se. It’s about sovereignty. It’s about power. It’s about democracy. It’s about the historic and present day overreach of governments and corporations into our lives, in order to facilitate discrimination and disadvantage for the purposes of control; for profit. Privacy, per se, is not the fight we are called to. We are called to this question of defending real democracy, not to this distinction between mass surveillance and targeted surveillance

So there’s a clear problem for Cyril which is that ‘real democracy’ is threatened by technical invasions of privacy. A lot of this is tied to the problem of who owns the technical infrastructure. “I believe in the Internet. But I don’t control it. Someone else does. We need a new civil rights act for the era of big data, and we need it now.” And later:

Last year, New York City Police Commissioner Bill Bratton said 2015 would be the year of technology for law enforcement. And indeed, it has been. Predictive policing has taken hold as the big brother of broken windows policing. Total information awareness has become the goal. Across the country, local police departments are working with federal law enforcement agencies to use advanced technological tools and data analysis to “pre-empt crime”. I have never seen anyone able to pre-empt crime, but I appreciate the arrogance that suggests you can tell the future in that way. I wish, instead, technologists would attempt to pre-empt poverty. Instead, algorithms. Instead, automation. In the name of community safety and national security we are now relying on algorithms to mete out sentences, determine city budgets, and automate public decision-making without any public input. That sounds familiar too. It sounds like Black codes. Like Jim Crow. Like 1963.

My head hurts a little as I read this because while the rhetoric is powerful, the logic is loose. Of course you can do better or worse at preempting crime. You can look at past statistics on crime and extrapolate to the future. Maybe that’s hard but you could do it in worse or better ways. A great way to do that would be, as Cyril suggests, by preempting poverty–which some people try to do, and which can be assisted by algorithmic decision-making. There’s nothing strictly speaking racist about relying on algorithms to make decisions.

So for all that I want to support Cyril’s call for a ‘civil rights act for the era of big data’, I can’t figure out from the rhetoric what that would involve or what its intellectual foundations would be.

Maybe there are two kinds of problems here:

  1. A problem of outcome legitimacy. Inequality, for example, might be an outcome that leads to a moral case against the power of the state.
  2. A problem of procedural legitimacy. When people are excluded from the decision-making processes that affect their lives, they may find that to be grounds for a moral objection to state power.

It’s worth making a distinction between these two problems even though they are related. If procedures are opaque and outcomes are unequal, there will naturally be resentment of the procedures and the suspicion that they are discriminatory.

We might ask: what would happen if procedures were transparent and outcomes were still unequal? What would happen if procedures were opaque and outcomes were fair?

One last point…I’ve been dipping into Charles Taylor’s analysis of Hegel because…shouldn’t everybody be studying Hegel? Taylor maintains that Hegel’s political philosophy in The Philosophy of Right (which I’ve never read) is still relevant today despite Hegel’s inability to predict the future of liberal democracy, let alone the future of his native Prussia (which is apparently something of a pain point for Hegel scholars).

Hegel, or maybe Taylor in a creative reinterpretation of Hegel, anticipates the problem of liberal democracy of maintaining the loyalty of its citizens. I can’t really do justice to Taylor’s analysis so I will repeat verbatim with my comments in square brackets.

[Hegel] did not think such a society [of free and interchangeable individuals] was viable, that is, it could not command the loyalty, the minimum degree of discipline and acceptance of its ground rules, it could not generate the agreement on fundamentals necessary to carry on. [N.B.: Hegel conflates power stability and moral stability] In this he was not entirely wrong. For in fact the loyal co-operation which modern societies have been able to command of their members has not been mainly a function of the liberty, equality, and popular rule they have incorporated. [N.B. This is a rejection of the idea that outcome and procedural legitimacy are in fact what leads to moral stability.] It has been an underlying belief of the liberal tradition that it was enough to satisfy these principles in order to gain men’s allegiance. But in fact, where they are not partly ‘coasting’ on traditional allegiance, liberal, as all other, modern societies have relied on other forces to keep them together.

The most important of these is, of course, nationalism. Secondly, the ideologies of mobilization have played an important role in some societies, focussing men’s attention and loyalties through the unprecedented future, the building of which is the justification of all present structures (especially that ubiquitous institution, the party).

But thirdly, liberal societies have had their own ‘mythology’, in the sense of a conception of human life and purposes which is expressed in and legitimizes its structures and practices. Contrary to widespread liberal myth, it has not relied on the ‘goods’ it could deliver, be they liberty, equality, or property, to maintain its members’ loyalty. The belief that this was coming to be so underlay the notion of the ‘end of ideology’ which was fashionable in the fifties.

But in fact what looked like an end of ideology was only a short period of unchallenged reign of a central ideology of liberalism.

This is a lot, but bear with me. What this is leading up to is an analysis of social cohesion in terms of what Hegel called Sittlichkeit, “ethical life” or “ethical order”. I gather that Sittlichkeit is not unlike what we’d call an ideology or worldview in other contexts. But a Sittlichkeit is better than mere ideology, because it is a view of an ethically ordered society, and is therefore somehow incompatible with the liberal atomization of the self, which of course is the root of alienation under liberal capitalism.

A liberal society which is a going concern has a Sittlichkeit of its own, although paradoxically this is grounded on a vision of things which denies the need for Sittlichkeit and portrays the ideal society as created and sustained by the will of its members. Liberal societies, in other words, are lucky when they do not live up, in this respect, to their own specifications.

If these common meanings fail, then the foundations of liberal society are in danger. And this indeed seems a distinct possibility today. The problem of recovering Sittlichkeit, of reforming a set of institutions and practices with which men can identify, is with us in an acute way in the apathy and alienation of modern society. For instance the central institutions of representative government are challenged by a growing sense that the individual’s vote has no significance. [c.f. Cyril’s rhetoric of alienation from algorithmic decision-making.]

But then it should not surprise us to find this phenomenon of electoral indifference referred to in [The Philosophy of Right]. For in fact the problem of alienation and the recovery of Sittlichkeit is a central one in Hegel’s theory and any age in which it is on the agenda is one to which Hegel’s thought is bound to be relevant. Not that Hegel’s particular solutions are of any interest today. But rather that his grasp of the relations of man to society–of identity and alienation, of differentiation and partial communities–and their evolution through history, gives us an important part of the language we sorely need to come to grips with this problem in our time.

Charles Taylor wrote all this in 1975. I’d argue that this problem of establishing ethical order to legitimize state power despite alienation from procedure is a perennial one. That the burden of political judgment has been placed most recently on the technology of decision-making is a function of the automation of bureaucratic control (see Beniger) and, it’s awkward to admit, my own disciplinary bias. In particular it seems like what we need is a Sittlichkeit that deals adequately with the causes of inequality in society, which seem poorly understood.

by Sebastian Benthall at October 27, 2015 07:16 PM

October 20, 2015

Ph.D. student

autonomy and immune systems

Somewhat disillusioned lately with the inflated discourse on “Artificial Intelligence” and trying to get a grip on the problem of “collective intelligence” with others in the Superintelligence and the Social Sciences seminar this semester, I’ve been following a lead (proposed by Julian Jonker) that perhaps the key idea at stake is not intelligence, but autonomy.

I was delighted when searching around for material on this to discover Bourgine and Varela’s “Towards a Practice of Autonomous Systems” (pdf link) (1992). Francisco Varela is one of my favorite thinkers, though he is a bit fringe on account of being both Chilean and unafraid of integrating Buddhism into his scholarly work.

The key point of the linked paper is that for a system (such as a living organism, but we might extend the idea to a sociotechnical system like an institution or any other “agent” like an AI) to be autonomous, it has to have a kind of operational closure over time–meaning not that it is closed to interaction, but that its internal states progress through some logical space–and that it must maintain its state within a domain of “viability”.

Though essentially a truism, I find it a simple way of thinking about what it means for a system to preserve itself over time. What we gain from this organic view of autonomy (Varela was a biologist) is an appreciation of the fact that an agent needs to adapt simply in order to survive, let alone to act strategically or reproduce itself.
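To make the definition concrete, here is a toy simulation of my own devising (the bounds, setpoint, and noise level are all invented; nothing this simple appears in the paper). An agent whose next state is a function of its current state has a crude kind of operational closure, and it “survives” only while that state stays within a viability domain:

    import random

    # Hypothetical viability domain and internal equilibrium.
    VIABLE_LOW, VIABLE_HIGH = 0.0, 10.0
    SETPOINT = 5.0

    def step(state, perturbation):
        # The next state is a function of the current state (closure), plus
        # an environmental perturbation the agent must absorb to stay viable.
        correction = 0.5 * (SETPOINT - state)
        return state + correction + perturbation

    random.seed(0)
    state = SETPOINT
    for t in range(100):
        state = step(state, random.gauss(0, 1.5))
        if not (VIABLE_LOW <= state <= VIABLE_HIGH):
            print("viability lost at t =", t)
            break
    else:
        print("agent stayed viable for the whole run")

The point of the toy is only that self-maintenance is itself an ongoing adaptive achievement, prior to any strategic action.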

Bourgine and Varela point out three separate adaptive systems in most living organisms:

  • Cognition. Information processing that determines the behavior of the system relative to its environment. It adapts to new stimuli and environmental conditions.
  • Genetics. Information processing that determines the overall structure of the agent. It adapts through reproduction and natural selection.
  • The Immune system. Information processing to identify invasive micro-agents that would threaten the integrity of the overall agent. It creates internal antibodies to shut down internal threats.

Sean O Nuallain has proposed that one’s sense of personal self is best thought of as a kind of immune system. We establish a barrier between ourselves and the world in order to maintain a cogent and healthy sense of identity. One could argue that to have an identity at all is to have a system of identifying what is external to it and rejecting it. Compare this with psychological ideas of ego maintenance and Jungian confrontations with “the Shadow”.

At a social organizational level, we can speculate that there is still an immune function at work. Left and right wing ideologies alike have cultural “antibodies” to quickly shut down expressions of ideas that pattern match to what might be an intellectual threat. Academic disciplines have to enforce what can be said within them so that their underlying theoretical assumptions and methodological commitments are not upset. Sociotechnical “cybersecurity” may be thought of as a kind of immune system. And so on.

Perhaps the most valuable use of the “immune system” metaphor is that it identifies a mid-range level of adaptivity that can be truly subconscious, given whatever mode of “consciousness” you are inclined to point to. Social and psychological functions of rejection are in a sense a condition for higher-level cognition. At the same time, this pattern of rejection means that some information cannot be integrated materially; it must be integrated, if at all, through the narrow lens of the senses. At an organizational or societal level, individual action may be rejected because of its disruptive effect on the total system, especially if the system has official organs for accomplishing more or less the same thing.

by Sebastian Benthall at October 20, 2015 05:36 PM

Ph.D. alumna

What World Are We Building?

This morning, I had the honor and pleasure of giving the Everett C. Parker Lecture in celebration of the amazing work he did to fight for media justice. The talk that I gave wove together some of my work with youth (on racial framing of technology) and my more recent thoughts on the challenges presented by data analytics. I also pulled on the work of Latanya Sweeney and Eric Horvitz and argued that those of us who were shaping social media systems “didn’t architect for prejudice, but we didn’t design systems to combat it either.” More than anything, I used this lecture to argue that “we need those who are thinking about social justice to understand technology and those who understand technology to commit to social justice.”

My full remarks are available here: “What World Are We Building?” Please let me know what you think!

by zephoria at October 20, 2015 03:37 PM

October 12, 2015

Ph.D. student

notes towards “Freedom in the Machine”

I have reconceptualized my dissertation because it would be nice to graduate.

In this reconceptualization, much of the writing from this blog can be reused as a kind of philosophical prelude.

I wanted to title this prelude “Freedom and the Machine” so I Googled that phrase. I found three interesting items I had never heard of before:

  • A song: “Freedom and Machine Guns” by Lori McTear
  • A lecture by Ranulph Glanville, titled “Freedom and the Machine”. Dr. Glanville passed away recently after a fascinating career.
  • A book: Software-Agents and Liberal Order: An Inquiry Along the Borderline Between Economics and Computer Science, by Dirk Nicholas Wagner. A dissertation, perhaps.

With the exception of the song, this material feels very remote and European. Nevertheless the objectively correct Google search algorithm has determined that this is the most relevant material on this subject.

I’ve been told I should respond to Frank Pasquale’s Black Box Society, as this nicely captures contemporary discomfort with the role of machines and algorithmic determination in society. I am a bit trapped in literature from the mid-20th century, which mostly expresses the same spirit.

It is strange to think that a counterpoint to these anxieties, a defense of the role of machines in society, is necessary–since most people seem happy to have given the management of their lives over to machines anyway. But then again, no dissertation is necessary. I have to remember that writing such a thing is a formality and that pretensions of making intellectual contributions with such work are precisely that: pretensions. If there is value in the work, it won’t be in the philosophical prelude! (However much fun it may be to write.) Rather, it will be in the empirical work.

by Sebastian Benthall at October 12, 2015 03:14 AM

October 11, 2015

Ph.D. student

cultural values in design

As much as I would like to put aside the problem of technology criticism and focus on my empirical work, I find myself unable to avoid the topic. Today I was discussing work with a friend and collaborator who comes from a ‘critical’ perspective. We were talking about ‘values in design’, a subject that we both care about, despite our different backgrounds.

I suggested that one way to think about values in design is to think of a number of agents and their utility functions. Their utility functions capture their values; the design of an artifact can have greater or lesser utility for the agents in question. Designers may intentionally or unintentionally produce artifacts that serve some agents but not others. And so on.
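To make this concrete, here is a minimal sketch, with agents, design attributes, and weights all invented for illustration (none of them came up in our conversation):

    # Values-in-design as utility functions: each agent weighs design
    # attributes differently, so a design serves some agents better than
    # others. All names and numbers here are hypothetical.

    designs = {
        "artifact_A": {"privacy": 0.9, "convenience": 0.3},
        "artifact_B": {"privacy": 0.2, "convenience": 0.9},
    }

    agents = {
        "privacy_advocate": {"privacy": 1.0, "convenience": 0.2},
        "growth_team": {"privacy": 0.1, "convenience": 1.0},
    }

    def utility(weights, attributes):
        # Linear utility: how well a design's attributes serve an agent's values.
        return sum(w * attributes.get(a, 0.0) for a, w in weights.items())

    for name, weights in agents.items():
        best = max(designs, key=lambda d: utility(weights, designs[d]))
        print(name, "prefers", best)

Even this toy version shows how a single design choice distributes utility unevenly across agents, which is one way of talking about whose values a design serves.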

Of course, thinking in terms of ‘utility functions’ is common among engineers, economists, cognitive scientists, rational choice theorists in political science, and elsewhere. It is shunned by the critically trained. My friend and colleague was open minded in his consideration of utility functions, but was more concerned with how cultural values might sneak into or be expressed in design.

I asked him to define a cultural value. We debated the term for some time. We reached a reasonable conclusion.

With such a consensus to work with, we began to talk about how such a concept would be applied. He brought up the example of an algorithm claimed by its creators to be objective. But, he asked, could the algorithm have a bias? Would we not expect that it would express, secretly, cultural values?

I confessed that I aspire to design and implement just such algorithms. I think it would be a fine future if we designed algorithms to fairly and objectively arbitrate our political disputes. We have good reasons to think that an algorithm could be more objective than a system of human bureaucracy. While human decision-makers are limited by the partiality of their perspective, we can build infrastructure that accesses and processes data that are beyond an individual’s comprehension. The challenge is to design the system so that it operates kindly and fairly despite its operations being beyond the scope of a single person’s judgment. This will require an abstracted understanding of fairness that is not grounded in the politics of partiality.

Suppose a team of people were to design and implement such a program. On what basis would the critics–and there would inevitably be critics–accuse it of being a biased design with embedded cultural values? Besides the obvious but empty criticism that valuing unbiased results is a cultural value, why wouldn’t the reasoned process of design reduce bias?

We resumed our work peacefully.

by Sebastian Benthall at October 11, 2015 12:56 AM

October 10, 2015

Ph.D. student

Protected: partiality and ethics

This post is password protected. You must visit the website and enter the password to continue reading.

by Sebastian Benthall at October 10, 2015 03:06 AM

October 06, 2015

Ph.D. student

ethical data science is statistical data science #dsesummit

I am at the Moore/Sloan Data Science Environment at the Suncadia Resort in Washington. There are amazing trees here. Wow!

So far the coolest thing I’ve seen is a talk on how Dynamic Mode Decomposition, a technique from fluid dynamics, is being applied to data from brains.

And yet, despite all this sweet science, all is not well in paradise. Provocations, source unknown, sting the sensitive hearts of the data scientists here. Something or someone stirs our emotional fluids.

There are two controversies. There is one solution, which is the synthesis of the two parts into a whole.

Herr Doctor Otherwise Anonymous confronted some compatriots and myself in the resort bar with a distressing thought. His work in computational analysis of physical materials–his data science–might be coopted and used for mass surveillance. Powerful businesses might use the tools he creates. Information discovered through these tools may be used to discriminate unfairly against the underprivileged. As teachers, responsible for the future through our students, are we not also responsible for teaching ethics? Should we not be concerned as practitioners; should we not hesitate?

I don’t mind saying that at the time I was at my Ballmer Peak of lucidity. Yes, I replied, we should teach our students ethics. But we should not base our ethics in fear! And we should have the humility to see that the moral responsibility is not ours to bear alone. Our role is to develop good tools. Others may use them for good or ill, based on their confidence in our guarantees. Indeed, an ethical choice is only possible when one knows enough to make sound judgment. Only when we know all the variables in play and how they relate to each other can we be sure our moral decisions–perhaps to work for social equality–are valid.

Later, I discover that there is more trouble. The trouble is statistics. There is a matter of professional identity: Who are statisticians? Who are data scientists? Are there enough statisticians in data science? Are the statisticians snubbing the data scientists? Do they think they are holier-than-thou? Are the data scientists merely bad scientists, slinging irresponsible model-fitting code, inviting disaster?


Attachment to personal identity is the root of all suffering. Put aside all sociological questions of who gets to be called a statistician for a moment. Don’t even think about what branches of mathematics are considered part of a core statistical curriculum. These are historical contingencies with no place in the Absolute.

At the root of this anxiety about what is holy, and what is good science, is that statistical rigor just is the ethics of data science.

by Sebastian Benthall at October 06, 2015 03:25 PM

October 02, 2015

Ph.D. alumna

Join me at the Parker Lecture on Oct. 20 in Washington DC

Every year, the media reform community convenes to celebrate one of the founders of the movement, to reflect on the ethical questions of our day, and to honor outstanding champions of media reform. This annual event, called the Parker Lecture, is in honor of Dr. Everett C. Parker, who is often called the founder of the media reform movement, and who died last month at the age of 102. Dr. Parker made incredible contributions from his post as the Executive Director of the United Church of Christ’s Office of Communication, Inc. This organization is part of the progressive movement’s efforts to hold media accountable and to consider how best to ensure all people, no matter their income or background, benefit from new technology.

I am delighted to be part of this year’s events as one of the honorees. My other amazing partners in this adventure are:

  • Joseph Torres, senior external affairs director of Free Press and co-author of News for All the People: The Epic Story of Race and the American Media, will receive the Parker Award which recognizes an individual whose work embodies the principles and values of the public interest in telecommunications.

  • Wally Bowen, co-founder and executive director of the Mountain Area Information Network (MAIN), will receive the Donald H. McGannon Award in recognition of his dedication to bringing modern telecommunications to low-income people in rural areas.

The 33rd Annual Parker Lecture will be held Tuesday, October 20, 2015 at 8 a.m. at the First Congregational United Church of Christ, 945 G St NW, Washington, DC 20001. I will be giving a talk as part of this celebration and will be joined by Clayton Old Elk of the Crow Tribe, who will offer a praise song.

Want to join us? Tickets are available here.

by zephoria at October 02, 2015 05:37 PM

September 18, 2015

Ph.D. student

Ethnography, philosophy, and data anonymization

The other day at BIDS I was working at my laptop when a rather wizardly looking man in a bicycle helmet asked me when The Hacker Within would be meeting. I recognized him from a chance conversation in an elevator after Anca Dragan’s ICBS talk the previous week. We had in that brief moment connected over the fact that none of the bearded men in the elevator had remembered to press the button for the ground floor. We had all been staring off into space before a young programmer with a thin mustache pointed out our error.

As I engaged this amicable fellow, whom I will leave anonymous, the conversation turned naturally towards principles for life. I forget how we got onto the topic, but what I took away from the conversation was his advice: “Don’t turn your passion into your job. That’s like turning your lover into a whore.”

Scholars in the School of Information are sometimes disparaging of the Data-Information-Knowledge-Wisdom hierarchy. Scholars, I’ve discovered, are frequently disparaging of ideas that are useful, intuitive, and pertinent to action. One cannot continue to play the Glass Bead Game if it has already been won, any more than one can continue to be entertained by Tic Tac Toe once one has grasped its ineluctable logic.

We might wonder, as did Horkheimer, when the search and love of wisdom ceased to be the purpose of education. It may have come during the turn when philosophy was determined to be irrelevant, speculative or ungrounded. This perhaps coincided, in the United States, with McCarthyism. This is a question for the historians.

What is clear now is that philosophy per se is no longer considered relevant to scientific inquiry.

An ethnographer I know (whom I will leave anonymous) told me the other day that the goal of Science and Technology Studies is to answer questions from philosophy of science with empirical observation. An admirable motivation for this is that philosophy of science should be grounded in the true practice of science, not in idle speculation about it. The ethnographic methods, through which observational social data is collected and then compellingly articulated, provide a kind of persuasiveness that for many far surpasses the persuasiveness of a priori logical argument, let alone authority.

And yet the authority of ethnographic writing depends always on the socially constructed role of the ethnographer, much like the authority of the physicist depends on their socially constructed role as physicists. I’d even argue that the dependence of ethnographic authority on social construction is greater than that of other kinds of scientific authority, as ethnography is so quintessentially an embedded social practice. A physicist or chemist or biologist at least in principle has nature to push back on their claims; a renegade natural scientist can as a last resort claim their authority through provision of a bomb or a cure. The mathematician or software engineer can test and verify their work through procedure. The ethnographer does not have these opportunities. Their writing will never be enough to convey the entirety of their experience. It is always partial evidence, a gesture at the unwritten.

This is not an accidental part of the ethnographic method. The practice of data anonymization, necessitated by the IRB and ethics, puts limitations on what can be said. These limitations are essential for building and maintaining the relationships of trust on which ethnographic data collection depends. The experiences of the ethnographer must always go far beyond what has been regulated as valid procedure. The information they have collected illicitly will, if they are skilled and wise, inform their judgment of what to write and what to leave out. The ethnographic text contains many layers of subtext that will be unknown to most readers. This is by design.

The philosophical text, in contrast, contains even less observational data. The text is abstracted from context. Only the logic is explicit. A naive reader will assume, then, that philosophy is a practice of logic chopping.

This is incorrect. My friend the ethnographer was correct: that ethnography is a way of answering philosophical questions empirically, through experience. However, what he missed is that philosophy is also a way of answering philosophical questions through experience. Just as in ethnographic writing, experience necessarily shapes the philosophical text. What is included, what is left out, what constellation in the cosmos of ideas is traced by the logic of the argument–these will be informed by experience, even if that experience is absent from the text itself.

One wonders: thus unhinged from empirical argument, how does a philosophical text become authoritative?

I’d offer the answer: it doesn’t. A philosophical text does not claim authority. That has been its method since Socrates.

by Sebastian Benthall at September 18, 2015 05:38 PM

September 10, 2015

Ph.D. student

de Beauvoir on science as human freedom

I appear to be unable to stop writing blog posts about philosophers who wrote in the 1940’s. I’ve been attempting a kind of survey. After a lot of reading, I have to say that my favorite–the one I think is most correct–is Simone de Beauvoir.

Much like “bourgeois”, “de Beauvoir” is something I find it impossible to remember how to spell. Therefore I am setting myself up for embarrassment by beginning to write about her work, The Ethics of Ambiguity. On the other hand, it’s nice to come full circle. In a notebook I was scribbling in when I first showed up in graduate school I was enthusiastic about using de Beauvoir to explicate what’s interesting about open source software development. Perhaps now is the right time to indulge the impulse.

de Beauvoir is generally not considered to be a philosopher of science. That’s too bad, because she said some of the most brilliant things about science ever said. If you can get past just a little bit of existentialist jargon, there’s a lot there.

Here’s a passage. The Marxists have put this entire book on the Internet, making it easy to read.

To will freedom and to will to disclose being are one and the same choice; hence, freedom takes a positive and constructive step which causes being to pass to existence in a movement which is constantly surpassed. Science, technics, art, and philosophy are indefinite conquests of existence over being; it is by assuming themselves as such that they take on their genuine aspect; it is in the light of this assumption that the word progress finds its veridical meaning. It is not a matter of approaching a fixed limit: absolute Knowledge or the happiness of man or the perfection of beauty; all human effort would then be doomed to failure, for with each step forward the horizon recedes a step; for man it is a matter of pursuing the expansion of his existence and of retrieving this very effort as an absolute.

de Beauvoir’s project in The Ethics of Ambiguity is to take seriously the antinomies of society and the individual, of nature and the subject, which Horkheimer only gets around to stating at the conclusion of contemporary analysis. Rather than cry from the wounds of getting skewered by the horns of the antinomy, de Beauvoir turns the ambiguity inherent in the antinomy into a realistic, situated ethics.

If de Beauvoir’s ethics have a telos or purpose, it is to expand human freedom and potential indefinitely. Through a terrific dialectical argument, she reasons out why this project is in a sense the only honest one for somebody in the human condition, despite its transcendence over individual interest.

Science, then, becomes one of several activities which one undertakes to expand this human potential.

Science condemns itself to failure when, yielding to the infatuation of the serious, it aspires to attain being, to contain it, and to possess it; but it finds its truth if it considers itself as a free engagement of thought in the given, aiming, at each discovery, not at fusion with the thing, but at the possibility of new discoveries; what the mind then projects is the concrete accomplishment of its freedom.

Science is the process of free inquiry, not the product of a particular discovery. The finest scientific discoveries open up new discoveries.

What about technics?

The attempt is sometimes made to find an objective justification of science in technics; but ordinarily the mathematician is concerned with mathematics and the physicist with physics, and not with their applications. And, furthermore, technics itself is not objectively justified; if it sets up as absolute goals the saving of time and work which it enables us to realize and the comfort and luxury which it enables us to have access to, then it appears useless and absurd, for the time that one gains can not be accumulated in a store house; it is contradictory to want to save up existence, which, the fact is, exists only by being spent, and there is a good case for showing that airplanes, machines, the telephone, and the radio do not make men of today happier than those of former times.

Here we have in just a couple sentences dismissal of instrumentality as the basis for science. Science is not primarily for acceleration; this is absurd.

But actually it is not a question of giving men time and happiness, it is not a question of stopping the movement of life: it is a question of fulfilling it. If technics is attempting to make up for this lack, which is at the very heart of existence, it fails radically; but it escapes all criticism if one admits that, through it, existence, far from wishing to repose in the security of being, thrusts itself ahead of itself in order to thrust itself still farther ahead, that it aims at an indefinite disclosure of being by the transformation of the thing into an instrument and at the opening of ever new possibilities for man.

For de Beauvoir, science (as well as all the other “constructive activities of man” including art, etc.) should be about the disclosure of new possibilities.

Succinct and unarguable.

by Sebastian Benthall at September 10, 2015 07:04 PM

September 09, 2015

Ph.D. student

scientific contexts


  • For Helen Nissenbaum (contextual integrity theory):
    • a context is a social domain that is best characterized by its purpose. For example, a hospital’s purpose is to cure the sick and wounded.
    • a context also has certain historically given norms of information flow.
    • a violation of a norm of information flow in a given context is a potentially unethical privacy violation. This is an essentially conservative notion of privacy, which is balanced by the following consideration…
    • Whether or not a norm of information flow should change (given, say, a new technological affordance to do things in a very different way) can be evaluated by how well it serves the context’s purpose. (A toy encoding of these ideas appears after this list.)
  • For Fred Dretske (Knowledge and the Flow of Information, 1983):
    • The appropriate definition of information is (roughly) just what it takes to know something. (More specifically: M carries information about X if it reliably transmits what it takes for a suitably equipped but otherwise ignorant observer to learn about X.)
  • Combining Nissenbaum and Dretske, we see that with an epistemic and naturalized understanding of information, contextual norms of information flow are inclusive of epistemic norms.
  • Consider scientific contexts. I want to use ‘science’ in the broadest possible (though archaic) sense of the intellectual and practical activity of study or coming to knowledge of any kind. “Science” from the Latin “scire”–to know. Or “Science” (capitalized) as the translated 19th Century German Wissenschaft.
    • A scientific context is one whose purpose is knowledge.
    • Specific issues of whose knowledge, knowledge about what, and to what end the knowledge is used will vary depending on the context.
  • As information flow is necessary for knowledge, and knowledge is the purpose of science, the integrity of a scientific context will be especially sensitive to its norms of information flow.
  • An insight I owe to my colleague Michael Tschantz, in conversation, is that there are several open problems within contextual integrity theory:
    • How does one know what context one is in? Who decides that?
    • What happens at the boundary between contexts, for example when one context is embedded in another?
    • Are there ways for the purpose of a context to change (not just the norms within it)?
  • Proposal: One way of discovering what a science is would be to trace its norms of information flow and to identify its purpose. A contrast between the norms and purposes of, for example, data science and ethnography would be illustrative of both. One approach to this problem could be the kind of qualitative research done by Edwin Hutchins on distributed cognition, which accepts a naturalized view of information (necessary for this framing) and then discovers information flows in a context through qualitative observation.
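As promised in the list above, here is a toy encoding of contextual integrity, with entirely invented norms: a context carries a purpose and a set of entrenched information flows, and a flow that matches no entrenched norm is flagged as a candidate violation (whether the norm itself should change is a separate question, answered against the context’s purpose).

    from typing import NamedTuple

    class Flow(NamedTuple):
        sender: str
        recipient: str
        info_type: str

    # A hypothetical "hospital" context: a purpose plus entrenched flow norms.
    HOSPITAL_PURPOSE = "cure the sick and wounded"
    hospital_norms = {
        Flow("patient", "doctor", "medical_history"),
        Flow("doctor", "specialist", "medical_history"),
    }

    def candidate_violation(flow, norms):
        # A flow matching no entrenched norm is a potential privacy violation;
        # whether the norm *should* change would be evaluated against the
        # context's purpose (here, HOSPITAL_PURPOSE).
        return flow not in norms

    print(candidate_violation(Flow("doctor", "advertiser", "medical_history"),
                              hospital_norms))   # True: no such norm
    print(candidate_violation(Flow("patient", "doctor", "medical_history"),
                              hospital_norms))   # False: an entrenched flow

This is far cruder than Nissenbaum’s actual framework (real transmission principles are richer than set membership), but it shows where the open problems above would bite: nothing in the encoding says which context a flow belongs to, or what happens at context boundaries.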

by Sebastian Benthall at September 09, 2015 04:00 PM

September 03, 2015

Ph.D. student

barriers to participant observation of data science and ethnography

By chance, last night I was at a social gathering with two STS scholars that are unaffiliated with BIDS. One of them is currently training in ethnographic methods. I explained to him some of my quandaries as a data scientist working with ethnographers studying data science. How can I be better in my role?

He talked about participant observation, and how hard it is in a scientific setting. An experienced STS ethnographer whom he respects has said: participant observation means being ready for an almost constant state of humiliation. Your competence is always being questioned; you are always acting inappropriately; your questions are considered annoying or off-base. You try to learn the tacit knowledge required in the science but will always be less good at it than the scientists themselves. This is all necessary for the ethnographic work.

To be a good informant (and perhaps this is my role, being a principal informant) means patiently explaining lots of things that normally go unexplained. One must make explicit that which is obvious and tacit to the experienced practitioner.

This sort of explanation accords well with my own training in qualitative methods and reading in this area, which I have pursued alongside my data science practice. This has been a deliberate blending in my graduate studies. In one semester I took both Statistical Learning Theory with Martin Wainwright and Qualitative Research Methods with Jenna Burrell. I’ve taken Behavioral Data Mining with John Canny and a seminar taught by Jean Lave on “What Theory Matters”.

I have been trying to cover my bases, methodologically. Part of this is informed by training in Bayesian methods as an undergraduate. If you are taught to see yourself as an information processing machine and take the principles of statistical learning seriously, then if you’re like me you may be concerned about bias in the way you take in information. If you get a sense that there’s some important body of knowledge or information to which you haven’t been adequately exposed, you seek it out in order to correct the bias.

This is not unlike what is called theoretical sampling in the qualitative methods literature. My sense, after being trained in both kinds of study, is that the principles that motivate them are the same or similar enough to make reconciliation between the approaches achievable.

I choose to identify as a data scientist, not as an ethnographer. One reason for this is that I believe I understand what ethnography is, that it is a long and arduous process of cultural immersion in which one attempts to articulate the emic experience of those under study, and that I am not primarily doing this kind of activity with my research. I have tried to do ethnographic work on an online community. I would argue that it was particularly bad ethnographic work. I concluded some time ago that I don’t have the right temperament to be an ethnographer per se.

Nevertheless, here I am participating in an Ethnography Group. It turns out that it is rather difficult to participate in an ethnographic context with ethnographers of science while still maintaining one’s identity as the kind of scientist being studied. Part of this has to do with conflicts over epistemic norms. Attempting to argue on the basis of scientific authority about the validity of the method of that science to a room of STS ethnographers is not taken as useful information from an informant, nor as a creatively galvanizing rocking of the boat. It is seen as unproductive and potentially disrespectful.

Rather than treating this as an impasse, I have been pondering how to use these kinds of divisions productively. As a first pass, I’m finding it helpful in coming to an understanding of what data science is by seeing, perhaps with a clarity that others might not have the privilege of, what it is not. In a sense the Ethnography and Evaluation Working Group of the Berkeley Institute for Data Science is really at the boundary of data science.

This is exciting, because as far as I can tell nobody knows what data science is. The proliferation of alternative definitions of data science is a running joke in industry. The other day our ethnography team was discussing a seminar about “what is data science” with a very open-minded scientist and engineer, and he said he got a lot out of the seminar but that it reached no conclusions as to what this nascent field is. “What is data science?” and even “is there such a thing as data science?” are still unanswered questions, and they may continue to be unanswered even after industry has stopped hyping the term and started calling it something else.

So, you might ask, what happens at the boundary of data science and ethnography?

The answer is: an epistemic conflict that’s deeply grounded in historical, cultural, institutional, and cognitive differences. It’s also a conflict that threatens the very project of an ethnography of data science itself.

The problem, I feel qualified to say as somebody with training on both sides of the fence and quite a bit of experience teaching both technical and non-technical subject matter, is this: learning the skills and principles behind good data science does not come easily to everybody and in any case takes a lot of hard work and time. These skills and principles pertain to many deep practices and literatures that are developed self-consciously in a cumulative way. Any one sub-field within the many technical disciplines that comprise “data science” could take years to master, and to do so is probably impossible without adequate prior mathematical training that many people don’t receive, perhaps because they lack the opportunity or don’t care.

In fewer words: there is a steep learning curve, and the earlier people start to climb it, the easier it is for them to practice data science.

My point is that this is bad news for the participant observer. Something I sometimes hear ethnographers in the data science space say of people is “I just can’t talk to that person; they think so differently from me.” Often the person in question is, to my mind, exactly the sort of person I would peg as an exemplary data scientist.

Often these are people with a depth of technical understanding that I don’t have and aspire to have. I recognize that they have made the difficult choice to study more of the foundations of what I believe to be an important field, despite the fact that this is (as evinced by the reaction of ‘softer’ social sciences) alienating to a lot of people. These are the people whom I can consult on methodological questions that are integral to my work as a data scientist. It is part of data science practice to discuss epistemic norms seriously with others in order to make sure that the integrity of the science is upheld. Knowledge about statistical norms and principles is taught in classes and reading groups and practiced in, for example, computational manipulation of data. But this knowledge is also expanded and maintained through informal, often passionate and even aggressive, conversations with colleagues.

I don’t know where this leaves the project of ethnography of data science. One possibility is that it can abandon participant observation as a method because participant observation is too difficult. That would be a shame but might simply be necessary.

Another strategy, which I think is potentially more interesting, is to ask seriously: why is this so difficult? What is difficult about data science? For whom is it most difficult? Do experts experience the same difficulties, or different ones? And so on.

by Sebastian Benthall at September 03, 2015 04:22 PM

September 02, 2015

Ph.D. student

statistics, values, and norms

Further considering the difficulties of doing an ethnography of data science, I am reminded of Hannah Arendt’s comments about the apolitical character of science.

The problem is this:

  • Ethnography as a practice is, as far as I can tell, descriptive. It is characterized primarily by non-judgmental observation.
  • Data scientific practice is tied to an epistemology of statistics. Statistics is a discipline about values and norms for belief formation. While superficially it may appear to have no normative content, practicing statistical thinking in research is a matter of adherence to norms.
  • It is very difficult to reconcile ethnographic epistemology and statistical epistemology. They have completely different intellectual histories and are based in very different cognitive modalities.
  • Ethnographers are often trained to reject statistical epistemology in their own work and as a result don’t learn statistics.
  • Consequently, most ethnographies of data science practice will leave out precisely that which data scientists see as most essential to their practice.

“Statistics” here is not entirely accurate. In computational statistics or ‘data science’, we can replace “statistics” with a large body of knowledge spanning statistics, probability theory, theory of computation, etc. The hybridization of these bodies of knowledge in, for example, educational curricula, is an interesting shift in the trajectory of science as a whole.

A deeper concern: in the self-understanding of the sciences, there is a transmitted sense of this intellectual history. In many good textbooks on technical subject-matter, there is a section at the end of each chapter on the history of the field. I come to these sections of the textbook with a sense of reverence. They stand as testaments to the century of cumulative labor done by experts on which our current work stands.

When this material is of no interest to the historian or ethnographer of science, it feels like a kind of sacrilege. Contrast this indifference with the treatment of somebody like Imre Lakatos, whose history of mathematics is so much more a history of the mathematics, not a history of the mathematicians, that the form of the book is a dialog compressing hundreds of years of mathematical history into a single classroom discussion. Historical detail is provided in footnotes, apart from the main drama of the narrative–which is about the emergence of norms of reasoning over time.

by Sebastian Benthall at September 02, 2015 08:17 PM

Observations of ethnographers

This semester the Berkeley Institute for Data Science Ethnography and Evaluation Working Group (EEWG) is meeting in its official capacity for the first time. In this context I am a data scientist among ethnographers and the transition to participation in this strange, alien culture is not an easy one. I have prepared myself for this task through coursework and associations throughout my time at Berkeley, but I am afraid that integrating into a “Science and Technology Studies” ethnographic team will be difficult nonetheless.

Off the bat, certain cultural differences seem especially salient:

In the sciences, typically one cares about whether or not the results of your investigation are true or useful in an intersubjective sense. This sense of purpose leads to a concern for the logical coherence and rigor of one’s method, which in turn constrains the kinds of questions that can be asked and the ways in which one presents one’s results. Within STS, methodological concerns are perhaps secondary. The STS ethnographer interviews people, reads and listens carefully, but also holds the data at a distance. Ultimately, the process of writing–which is necessarily tied up with what the writer is interested in–is as much a part of the method as the observations and the analysis. Whereas the scientist strives for intersubjective agreement, the ethnographer is methodologically bound to their own subjectivity.

A consequence of this is that agonism, or the role of argumentation and disagreement, is different within scientific and ethnographic communities. (I owe this point to my colleague, Stuart Geiger, an ethnographer who is also in the working group.) In a scientific community argument is seen as a necessary step towards resolving disagreement and arriving at intersubjective results. The purpose of argument is, ideally, to arrive at agreement. Reasons are shared and disagreements resolved through logic. In the ethnographic community, since intersubjectivity is not valued, rational argument is seen more as form of political or social maneuvering. To challenge somebody intellectually is not simply to challenge what they are intellectualizing; it is to challenge their intellectual authority or competence.

This raises an interesting question: what is competence, to ethnographers? To the scientist, competence is a matter of concrete skills (handling lab equipment, computation, reasoning, presentation of results, etc.) that facilitate the purpose of science, the achievement of intellectual agreement on matters within the domain. Somebody who succeeds by virtue of skills other than these (such as political skilfulness) is seen, by the scientist, as a charlatan and a danger to the scientific project. Many of the more antisocial tendencies of scientists can be understood as an effort to keep the charlatans out, in order to preserve the integrity (and, ultimately, authority) of the scientific project.

Ethnographic competence is mysterious to me because, at least in STS, scientific authority is always a descriptively assessed social phenomenon and not something which one trusts. If the STS ethnographer sees the scientific project primarily as one of cultural and institutional regularity and leaves out the teleological aspects of science as a search for truth of some kind, as has been common in STS since the 80’s, then how can STS see its own intellectual authority besides as a rather arbitrary political accomplishment? What is competence besides the judgement of competence by one’s bureaucratic superiors?

I am not sure that these questions, which seem pertinent to me as a scientist, are even askable within the language and culture of STS. As they concern the normative elements of intellectual inquiry, not descriptions of the social institutions of inquiry, they are in a sense “unscientific” questions vis-a-vis STS.

Perhaps more importantly, these questions are unimportant to the STS ethnographer because they are not relevant to the STS ethnographer’s job. In this way the STS ethnographer is not unlike many practicing scientists who, once they learn an approved method and have a community in which to share their work, do not question the foundations of their field. And perhaps because of STS’s concern with what others might consider the mundane aspects of scientific inquiry–the scheduling of meetings, the arrangement of events, the circulation of invitations and rejection letters, the arrangement of institutions–their concept of intellectual work hinges on these activities more than it does argument or analysis, relative to the sciences.

by Sebastian Benthall at September 02, 2015 05:25 PM

August 30, 2015

MIMS 2012

Get Comfortable Sharing Your Shitty Work

After jamming with a friend, she commented that she felt emotionally spent afterwards. Not quite sure what she meant, I asked her to elaborate. She said that improvising music makes you feel vulnerable. You’ve got to put yourself out there, which opens you up to judgement and criticism.

And she’s right. In that moment I realized that being a designer trained me to get over that fear. I know I have to start somewhere shitty before I can get somewhere good. Putting myself and my ideas out there is part of that process. My work only becomes good through feedback and iteration.

So my advice to you, young designer, is to accept the fact that before your work becomes great, it’s going to be shitty. This will be hard at first. You’ll feel vulnerable. You’ll fear judgement. You’ll worry about losing the respect of your colleagues.

But get over it. We’ve all felt this way before. Just remember that we’re all in this together. We all want to produce great work for our customers. We all want to make great music together.

So get comfortable sharing your shitty work. You’ll start off discordant, but through the process of iteration and refinement you’ll eventually hit your groove.

by Jeff Zych at August 30, 2015 10:34 PM

August 28, 2015

Ph.D. student

The recalcitrance of prediction

We have identified how Bostrom’s core argument for superintelligence explosion depends on a crucial assumption. An intelligence explosion will happen only if the kinds of cognitive capacities involved in instrumental reason are not recalcitrant to recursive self-improvement. If recalcitrance rises comparably with the system’s ability to improve itself, then the takeoff will not be fast. This significantly decreases the probability of decisively strategic singleton outcomes.

In this section I will consider the recalcitrance of intelligent prediction, which is one of the capacities that is involved in instrumental reason (another being planning). Prediction is a very well-studied problem in artificial intelligence and statistics and so is easy to characterize and evaluate formally.

Recalcitrance is difficult to formalize. Recall that in Bostrom’s formulation:

\frac{dI}{dt} = \frac{O(I)}{R(I)}

One difficulty in analyzing this formula is that the units are not specified precisely. What is a “unit” of intelligence? What kind of “effort” is the unit of optimization power? And how could one measure recalcitrance?
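Even leaving the units abstract, we can get a feel for the formula numerically. Here is a back-of-the-envelope sketch, under toy assumptions that are mine and not Bostrom’s: optimization power grows with intelligence, O(I) = I, and we compare constant recalcitrance against recalcitrance that rises with intelligence.

    def trajectory(recalcitrance, steps=50, dt=0.1):
        # Euler-integrate dI/dt = O(I) / R(I), with the toy choice O(I) = I.
        I = 1.0
        for _ in range(steps):
            I += dt * I / recalcitrance(I)
        return I

    # Constant recalcitrance: dI/dt = I, so intelligence grows exponentially.
    print("constant R:", round(trajectory(lambda I: 1.0), 1))
    # Recalcitrance rising with I: dI/dt = 1, so growth is merely linear.
    print("rising R:  ", round(trajectory(lambda I: I), 1))

The contrast is the whole point of the argument: whether takeoff is explosive or gradual depends entirely on how R(I) scales with I.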

A benefit of looking at a particular intelligent task is that it allows us to think more concretely about what these terms mean. If we can specify which tasks are important to consider, then we can take levels of performance on those well-specified classes of problems as measures of intelligence.

Prediction is one such problem. In a nutshell, prediction comes down to estimating a probability distribution over hypotheses. Using the Bayesian formulation of statistical inference, we can represent the problem as:

P(H|D) = \frac{P(D|H) P(H)}{P(D)}

Here, P(H|D) is the posterior probability of a hypothesis H given observed data D. If one is following statistically optimal procedure, one can compute this value by taking the prior probability of the hypothesis P(H), multiplying it by the likelihood of the data given the hypothesis P(D|H), and then normalizing this result by dividing by the probability of the data over all models, P(D) = \sum_{i}P(D|H_i)P(H_i).
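For a discrete hypothesis space this computation takes only a few lines. A minimal sketch, with priors and likelihoods invented for illustration:

    # Bayesian update over three hypotheses; all numbers are illustrative.
    priors = {"H1": 0.5, "H2": 0.3, "H3": 0.2}        # P(H)
    likelihoods = {"H1": 0.1, "H2": 0.4, "H3": 0.7}   # P(D|H) for observed D

    evidence = sum(likelihoods[h] * priors[h] for h in priors)          # P(D)
    posterior = {h: likelihoods[h] * priors[h] / evidence for h in priors}

    print(posterior)  # H3's posterior is now largest, despite its small prior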

Statisticians will justifiably argue whether this is the best formulation of prediction. And depending on the specifics of the task, the target value may well be some function of the posterior (such as the hypothesis with maximum posterior probability) and the overall distribution may be secondary. These are valid objections that I would like to put to one side in order to get across the intuition of an argument.

What I want to point out is that if we look at the factors that affect performance on prediction problems, there are very few that could be subject to algorithmic self-improvement. If we think that part of what it means for an intelligent system to get more intelligent is to improve its ability at prediction (which Bostrom appears to believe), but improving predictive ability is not something that a system can do via self-modification, then that implies that the recalcitrance of prediction, far from being constant or declining, actually approaches infinity with respect to an autonomous system’s capacity for algorithmic self-improvement.

So, given the formula above, in what ways can an intelligent system improve its capacity to predict? We can enumerate them:

  • Computational accuracy. An intelligent system could be better or worse at computing the posterior probabilities. Since most of the algorithms that do this kind of computation do so with numerical approximation, there is the possibility of an intelligent system finding ways to improve the accuracy of this calculation.
  • Computational speed. There are faster and slower ways to compute the inference formula. An intelligent system could come up with a way to make itself compute the answer faster.
  • Better data. The success of inference is clearly dependent on what kind of data the system has access to. Note that “better data” is not necessarily the same as “more data”. If the data that the system learns from is from a biased sample of the phenomenon in question, then a successful Bayesian update could make its predictions worse, not better. Better data is data that is informative with respect to the true process that generated the data.
  • Better prior. The success of inference depends crucially on the prior probability assigned to hypotheses or models. A prior is better when it assigns higher probability to the true process that generates observable data, or to models that are ‘close’ to that true process. An important point is that priors can be bad in more than one way; the bias/variance tradeoff is a well-studied way of discussing this. Choosing a prior in machine learning involves a tradeoff between:
    1. Bias. The assignment of probability to models that skew away from the true distribution. An example of a biased prior would be one that gives positive probability to only linear models, when the true phenomenon is quadratic. Biased priors lead to underfitting in inference.
    2. Variance. The assignment of probability to models that are more complex than needed to reflect the true distribution. An example of a high-variance prior would be one that assigns high probability to cubic functions when the data was generated by a quadratic function. The problem with high-variance priors is that they will overfit the data by inferring from noise, which could be the result of measurement error or something else less significant than the true generative process.

    In short, the best prior is the correct prior, and any deviation from it increases error, as the sketch just below illustrates.
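Here is a toy sketch of the tradeoff, assuming a hypothetical quadratic ground truth, with polynomial degree standing in for the choice of prior (all values are illustrative):

# Quadratic ground truth with noise; the fitted polynomial degree plays
# the role of the prior over model classes.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(-1, 1, 12)
y_train = 2 * x_train**2 + rng.normal(0, 0.1, x_train.size)
x_test = np.linspace(-1, 1, 200)
y_test = 2 * x_test**2  # noiseless truth for held-out evaluation

for degree in (1, 2, 9):  # underfit (bias), correct, overfit (variance)
    coeffs = np.polyfit(x_train, y_train, degree)
    mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: held-out MSE = {mse:.5f}")
# Typically degree 1 errs through bias, degree 9 through variance.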

Now that we have enumerated the ways in which an intelligent system may improve its power of prediction, which is one of the things necessary for instrumental reason, we can ask: how recalcitrant are these factors to recursive self-improvement? How much can an intelligent system, by virtue of its own intelligence, improve on any of these factors?

Let’s start with computational accuracy and speed. An intelligent system could, for example, use some previously collected data and try variations of its statistical inference algorithm, benchmark their performance, and then choose to use the most accurate and fastest ones at a future time. Perhaps the faster and more accurate the system is at prediction generally, the faster and more accurately it would be able to engage in this process of self-improvement.
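Here is a minimal sketch of that benchmark-and-select loop, assuming two hypothetical, interchangeable implementations of the same posterior computation:

import time
import numpy as np

def posterior_loop(priors, likelihoods):
    # Pure-Python implementation of the normalized posterior.
    unnorm = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

def posterior_vectorized(priors, likelihoods):
    # Vectorized implementation of the same computation.
    unnorm = np.asarray(priors) * np.asarray(likelihoods)
    return unnorm / unnorm.sum()

priors = np.full(100_000, 1 / 100_000)
likelihoods = np.random.rand(100_000)

timings = {}
for fn in (posterior_loop, posterior_vectorized):
    start = time.perf_counter()
    fn(priors, likelihoods)
    timings[fn.__name__] = time.perf_counter() - start

# "Self-improvement": adopt whichever implementation benchmarked fastest.
best = min(timings, key=timings.get)
print(timings, "->", best)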

Critically, however, there is a maximum amount of performance one can get from improvements to computational accuracy if the other factors are held constant. You can’t be more accurate than perfectly accurate. Therefore, at some point the recalcitrance of computational accuracy rises to infinity. Moreover, we would expect effort made at improving computational accuracy to exhibit diminishing returns. In other words, the recalcitrance of computational accuracy climbs (probably close to exponentially) with performance.

What is the recalcitrance of computational speed at inference? Here, performance is limited primarily by the hardware on which the intelligent system is implemented. In Bostrom’s account of the superintelligence explosion, he is ambiguous about whether and when hardware development counts as part of a system’s intelligence. What we can say with confidence, however, is that for any particular piece of hardware there will be a maximum computational speed attainable with it, and that recursive self-improvement to computational speed can at best approach and attain this maximum. At that maximum, further improvement is impossible and recalcitrance is again infinite.

What about getting better data?

Assuming an adequate prior and the computational speed and accuracy needed to process it, better data will always improve prediction. But it’s arguable whether acquiring better data is something that can be done by an intelligent system working to improve itself. Data collection isn’t something that the intelligent system can do autonomously, since it has to interact with the phenomenon of interest to get more data.

If we acknowledge that data collection is a critical part of what it takes for an intelligent system to become more intelligent, then that means we should shift some of our focus away from “artificial intelligence” per se and onto ways in which data flows through society and the world. Regulations about data locality may well have more impact on the arrival of “superintelligence” than research into machine learning algorithms, now that we already have very fast, very accurate algorithms. I would argue that the recent rise in interest in artificial intelligence is due mainly to the availability of vast amounts of new data through sensors and the Internet. Advances in computational accuracy and speed (such as deep learning) have had to catch up to this new availability of data and make use of new hardware, but data is the rate-limiting factor.

Lastly, we have to ask: can a system improve its own prior, if data, computational speed, and computational accuracy are constant?

I have to argue that it can’t do this in any systematic way, if we are looking at the performance of the system at the right level of abstraction. Potentially a machine learning algorithm could modify its prior if it sees itself as underperforming in some way. But there is a sense in which any modification to the prior made by the system that is not the result of a Bayesian update is just part of the computational basis of the original prior. So the recalcitrance of the prior is also infinite.

We have examined the problem of statistical inference and ways that an intelligent system could improve its performance on this task. We identified four potential factors on which it could improve: computational accuracy, computational speed, better data, and a better prior. We determined that, contrary to the assumption of Bostrom’s hard takeoff argument, the recalcitrance of prediction is quite high, approaching infinity in the cases of computational accuracy, computational speed, and the prior. Only data collection seems to be flexibly recalcitrant. But data collection is not a feature of the intelligent system alone; it also depends on the system’s context.

As a result, we conclude that the recalcitrance of prediction is too high for an intelligence explosion that depends on it to be fast. We also note that those concerned about superintelligent outcomes should shift their attention to questions about data sourcing and storage policy.

by Sebastian Benthall at August 28, 2015 07:01 PM

MIMS 2015

Adventures in DANE

This post will reflect on the relatively new DNS-based Authentication of Named Entities(DANE) protocol from the Internet Engineering Task Force(IETF). We will first explain how DANE works, talk about what DANE can and cannot do, then briefly discuss the future of Internet encryption standards in general before wrapping up.

What are DNSSEC and DANE?

DANE is defined in RFC 6698 and further clarified in RFC 7218. DANE depends entirely on DNSSEC, which is older and considerably more complicated. For our purposes, the only thing the reader need know about DNSSEC is that it solves the problem of trusting DNS responses. Simply put, DNSSEC ensures that DNS requests return responses that are cryptographically assured.

DANE builds on this assurance by hosting hashes of cryptographic keys in DNS. DNSSEC assures that what we see in DNS is exactly as it should be; DANE then exploits this assurance by providing a secondary trust network for cryptographic key verification. This secondary trust network is the DNS hierarchy.

Let’s look at an example. I have configured a test domain for HTTPS, TLS, DNSSEC and DANE. Let’s examine what this means.

If you visit the test domain with a modern web browser it will probably complain that the site is untrusted before asking you to create an exception. This is because the domain’s TLS certificate is not signed by any of the Certificate Authorities(CA) that your browser trusts. In setting up the domain I created my own self-signed certificate, and didn’t bother to get it signed by a CA.1

Instead, I enabled DNSSEC for the domain, then created a hash of my self-signed certificate and stuck it in a DNS TLSA record. TLSA records are where DANE hosts cryptographic information for a given service. If your browser supported DANE, it would download the TLS certificate for the domain, compute its hash, then compare that against what is hosted in the domain’s TLSA record. If the two hashes were the same it could trust the presented certificate. If the two hashes were different then your browser would know something fishy was happening, and would not trust the certificate presented by the web server.

If you’re on a UNIX system you can query the test domain’s TLSA record, written here as _443._tcp.<domain>, with the following command.

dig +multi TLSA _443._tcp.<domain>

The answer should look something like this.

_443._tcp.<domain>. 21599 IN TLSA 3 0 2 ( D98DA…
                            C6403619A83B0025C6CF807992C1196CB42EE386 )

Let’s break this answer down.

The top line repeats the name of the record you queried. Since different services on a single host can use different certificates, TLSA records include the IP protocol(tcp) and port number(443) in the record name. This is followed by three items generic to all DNS records: the TTL(21599), the scope of the record(IN for Internet) and the name of the record type(TLSA).

After these we have four values specific to TLSA records: the certificate usage(3), the selector(0), the matching type(2), and finally the hash of the domain’s TLS certificate(D98DA..).

The certificate usage field(3) can contain a value from 0-3. By specifying 3 we’re saying this record contains a hash of the domain’s TLS certificate. TLSA records can also be used to force a specific CA trust anchor. For example, if this value were 2 and the TLSA record contained the hash of CA StartSSL’s signing certificate, a supporting browser would require that the domain’s TLS certificate be signed by the StartSSL CA.

The selector field(0) can have a value of 0-1 and simply states which format of the certificate is to be hashed. It’s uninteresting for our discussion.

The matching type field(2) states which algorithm is used to compute the hash.2

Finally we have the actual hash(D98DA..) of the TLS certificate.
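Putting these fields together, here is a minimal sketch of the client-side check described above, assuming certificate usage 3, selector 0 (full certificate), matching type 2 (SHA2-512), and a hypothetical host name and record hash:

import hashlib
import ssl

HOST, PORT = "", 443  # hypothetical host name
TLSA_HASH = "d98da..."              # full hash taken from the TLSA record

pem_cert = ssl.get_server_certificate((HOST, PORT))
der_bytes = ssl.PEM_cert_to_DER_cert(pem_cert)  # selector 0: full certificate
digest = hashlib.sha512(der_bytes).hexdigest()  # matching type 2: SHA2-512

if digest == TLSA_HASH:
    print("certificate matches the TLSA record")
else:
    print("mismatch: something fishy is happening")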

What can DANE do?

DANE provides a secondary chain of trust for TLS certificates. It enables TLS clients to compare the certificate presented to them by the server to what is hosted in DNS. This prevents common Man In The Middle(MITM) attacks, where an attacker intercepts a connection prior to it being established, presents its own certificate to both ends, and then sits in between the victim end-points capturing and decrypting everything. DANE prevents this common MITM attack in the same way our current CA system does: by providing a secondary means of verifying the server’s presented certificate.

The problem with CAs is that they get subverted3, and since our browsers implicitly trust all of them equally, a single subverted CA means every site using HTTPS is theoretically vulnerable. For example, if a website’s operator purchases a certificate from CA-X, and criminals break into CA-Y, a MITM attack could still succeed against visitors to that website. TLS clients cannot know from which CA an operator has purchased their certificate. Thus an attacker could present clients with a bad certificate signed by CA-Y, and the clients would accept it as valid.

DANE has two answers to this type of attack. First, since a hash of the correct certificate is hosted in DNS, clients can compare the certificate presented by the server to what is hosted in DNS, and only proceed if they match. Secondly, DANE can lock a given DNS host to certificates issued by only one CA. So referencing the above example, if CA-Y is penetrated it won’t matter, because DANE-compliant clients will know that only certificates issued by CA-X are valid for the site.

What can DANE not do?

DANE cannot link a given service to a real world identity. For example, DANE cannot tell you that a given domain is the website of Andrew McConachie. Take a closer look at the test domain’s certificate. It’s issued to, and issued by, “Fake”. DANE don’t care. DANE only ensures that the TLS certificate presented by the web server matches the hash in DNS. This won’t stop phishing attacks where a person is tricked into going to the wrong website, since that website’s TLS certificate would still match the hash in its TLSA record.

The way website operators tie identity to TLS certificates today is by getting special Extended Validation(EV) certificates from CAs. When a website owner requests an EV certificate from a CA, that CA goes through a more extensive identification process, the purpose of which is to directly link the DNS name to a real world organization or individual. This is generally a rather thorough examination, and as such is more expensive than getting a normal certificate. EV certificates are also generally considered more secure than DV certificates, at least for HTTPS. If a website has an EV certificate, web browsers will display the name of the organization in the address bar.

Normal, or Domain Validated(DV), certificates make no claims regarding real world identity. If you control a DNS name you can get a DV certificate for that name. In this way DV certificates and DANE are very similar in the levels of trust they afford. They only differ in what infrastructure backs up this trust.

Does DANE play well with others?

DANE does not obviate the need for other trust mechanisms; in fact it was designed to play well with them. Contrary to what some people think, the purpose of DANE is not to do away with the CA system. It is to provide another chain of trust based on the DNS hierarchy.

Certificate Transparency(CT) is another new standard from the IETF.4 It is standardized in RFC 6962. Simply put, CT establishes a public audit trail of issued TLS certificates that browsers and other clients can check against. As certificates are issued, participating CAs add them to a write-once audit trail. Certificates added to this audit trail cannot be removed or overwritten. TLS clients can then compare the certificate presented by a given website with what is in the audit trail. CT does not interfere with DANE; instead they complement one another. There is no reason today why a given site cannot be protected by our current CA system, DANE, and Certificate Transparency. The more the better. Redundancy and heterogeneity lead to more secure environments.5

The challenge moving forward for TLS clients will be how these different models are used to determine trust, and how they are presented to the user. Right now Firefox shows a lock and, if it’s an EV certificate, the name of the organization in the address bar.6 This is all based on the CA system of trust. If DNSSEC/DANE and Certificate Transparency both gain adoption, browser manufacturers will have to rethink how trust information is presented to their users. This is not going to be easy. To some degree, boiling all of this complexity down to a single trust decision for the end user will be necessary, and trade-offs between information presented and usability will be required.

Weak Adoption and the Future

DANE depends on DNSSEC to function, and DNSSEC adoption has been slow. However, in some ways DANE has been pushing DNSSEC adoption. This article has focused on using DANE for HTTPS, but DANE has actually seen the most deployment success in email.7 There has been significant uptake in DANE by email providers wishing to prevent so-called Server In The Middle(SITM) attacks. This type of attack occurs when a rogue mail server sits between two mail servers and captures all mail traffic between them. DANE averts this type of attack by allowing both Simple Mail Transfer Protocol(SMTP) talkers to compare the presented certificate with what is in DNS. The IETF currently has an Internet Draft on using DANE and TLS to secure SMTP traffic.

I think we should expect adoption of DANE for email security to continue increasing before any significant adoption begins for HTTPS. Many technologies require some sort of ‘killer app’ that pushes their adoption, and I suspect many people see DANE as DNSSEC’s killer app. I hope this is true, because one of the best ways we can thwart both pervasive monitoring by nation states, and illegal activities by criminals is increasing the adoption of TLS. Providing heterogeneous methods for assuring key integrity is also incredibly important. This article argued that a future with multiple methods for ensuring key integrity is preferable to a single winner. Our ideal secure Internet should have multiple independent means of verifying TLS certificates, DANE is just one of them.

Please contact me at andrewm AT ischool DOT berkeley DOT edu if you discover inaccuracies in this article.

  1. I tried getting it signed by StartSSL, but that didn’t quite work out.

  2. The test domain uses a SHA2-512 hash, as this is the most secure algorithm that is currently supported. See RFC 7218 for a mapping of acronyms to algorithms.

  3. Three examples of CA breaches: Turk Trust, Diginotar, Comodo

  4. Check out the Certificate Transparency website for more info.

  5. OS diversity for intrusion tolerance: Myth or reality?

  6. CZ.nic offers a great browser plugin for DNSSEC and DANE.

  7. Jan Zorz at the Internet Society has been measuring DANE uptake in SMTP traffic in the Alexa top 1 million. Also, the NIST recently published a whitepaper on securing email using DANE. The whitepaper goes further, and suggests that email providers start using a recently proposed IETF Internet Draft on storing hashes of personal OpenPGP keys in DNS.

Adventures in DANE was originally published by Andrew McConachie at Metafarce on August 28, 2015.

by Andrew McConachie ( at August 28, 2015 07:00 AM

Ph.D. student

Nissenbaum the functionalist

Today in Classics we discussed Helen Nissenbaum’s Privacy in Context.

Most striking to me is that Nissenbaum’s privacy framework, contextual integrity theory, depends critically on a functionalist sociological view. A context is defined by its information norms and violations of those norms are judged according to their (non)accordance with the purposes and values of the context. So, for example, the purposes of an educational institution determine what are appropriate information norms within it, and what departures from those norms constitute privacy violations.

I used to think teleology was dead in the sciences. But recently I learned that it is commonplace in biology and popular in ecology. Today I learned that what amounts to a State Philosopher in the U.S. (Nissenbaum’s framework has been more or less adopted by the FTC) maintains a teleological view of social institutions. Fascinating! Even more fascinating is that this philosophy corresponds well enough to American law as to be informative of it.

From a “pure” philosophy perspective (which I will admit is simply a vice of mine), it’s interesting to contrast Nissenbaum with…oh, Horkheimer again. Nissenbaum sees ethical behavior (around privacy at least) as behavior that is in accord with the purpose of one’s context. Morality is given by the system. For Horkheimer, the problem is that the system’s purposes subsume the interests of the individual, who alone is the agent able to determine what is right and wrong. Horkheimer was a founder of the Frankfurt School, arguably the intellectual ancestor of progressivism. Nissenbaum grounds her work in Burke and her theory is admittedly conservative. Privacy is violated when people’s expectations of privacy are violated–this is coming from U.S. law–and that means people’s contextual expectations carry more weight than an individual’s free-minded beliefs.

The tension could be resolved if free individuals determined the purposes of the systems they participate in. Indeed, Nissenbaum quotes Burke’s approval of established conventions as the accreted wisdom and rationale of past generations. The system is the way it is because it was chosen. (Or, perhaps, because it survived.)

Since Horkheimer’s objection to “the system” is that he believes instrumentality has run amok, thereby causing the system to serve a purpose nobody intended for it, his view is not inconsistent with Nissenbaum’s. Nissenbaum, building on Dworkin, sees contextual legitimacy as depending on some kind of political legitimacy.

The crux of the problem is the question of what information norms comprise the context in which political legitimacy is formed, and what purpose this context or system serves.

by Sebastian Benthall at August 28, 2015 02:54 AM

August 27, 2015

Ph.D. student

The relationship between Bostrom’s argument and AI X-Risk

One reason why I have been writing about Bostrom’s superintelligence argument is because I am acquainted with what could be called the AI X-Risk social movement. I think it is fair to say that this movement is a subset of Effective Altruism (EA), a laudable movement whose members attempt to maximize their marginal positive impact on the world.

The AI X-Risk subset, which is a vocal group within EA, sees the emergence of a superintelligent AI as one of several risks that is notable because it could ruin everything. AI is considered to be a “global catastrophic risk”, unlike more mundane risks like tsunamis and bird flu. AI X-Risk researchers argue that because of the magnitude of the consequences of the risk they are trying to anticipate, they must raise more funding and recruit more researchers.

While I think this is noble, I believe it is misguided, for reasons that I have been outlining in this blog. I am motivated to make these arguments because I believe there are urgent problems/risks that are conceptually adjacent (if you will) to the problem AI X-Risk researchers study, but that the focus on AI X-Risk in particular diverts interest away from. In my estimation, as more funding has been put into evaluating potential risks from AI, many more “mainstream” researchers have benefited and taken on projects with practical value. To some extent these researchers benefit from the alarmism of the AI X-Risk community. But I fear that their research trajectory is thereby distorted from where it could truly provide maximal marginal value.

My reason for targeting Bostrom’s argument for the existential threat of superintelligent AI is that I believe it’s the best defense of the AI X-Risk thesis out there. In particular, if valid, the argument should significantly raise the expected probability of an existentially risky AI outcome. For Bostrom, such an outcome is likely a natural consequence of advancement in AI research more generally, because of recursive self-improvement and convergent instrumental values.

As I’ve informally workshopped this argument I’ve come upon this objection: even if it is true that a superintelligent system would not, for systematic reasons, become an existentially risky singleton, that does not mean that somebody couldn’t develop such a superintelligent system in an unsystematic way. There is still an existential risk, even if it is much lower. And because existential risks are so important, surely we should prepare ourselves for even this low-probability event.

There is something inescapable about this logic. However, the argument applies equally well to all kinds of potential apocalypses, such as enormous meteors crashing into the earth and biowarfare-produced zombies. Without some kind of accounting of the likelihood of these outcomes, it’s impossible to do rational budgeting.

Moreover, I have to call into question the rationality of this counterargument. If Bostrom’s arguments are used in defense of the AI X-Risk position but then the argument is dismissed as unnecessary when it is challenged, that suggests that the AI X-Risk community is committed to their cause for reasons besides Bostrom’s argument. Perhaps these reasons are unarticulated. One could come up with all kinds of conspiratorial hypotheses about why a group of people would want to disingenuously spread the idea that superintelligent AI poses an existential threat to humanity.

The position I’m defending on this blog (until somebody convinces me otherwise–I welcome all comments) is that a superintelligent AI singleton is not a significantly likely X-Risk. Other outcomes that might be either very bad or very good, such as ones with many competing and cooperating superintelligences, are much more likely. I’d argue that this is more or less what we have today, if you consider sociotechnical organizations to be a form of collective superintelligence. This makes research into this topic not only impactful in the long run, but also relevant to problems faced by people now and in the near future.

by Sebastian Benthall at August 27, 2015 04:51 PM

August 25, 2015

Ph.D. student

Bostrom and Habermas: technical and political moralities, and the God’s eye view

An intriguing chapter that follows naturally from Nick Bostrom’s core argument is his discussion of machine ethics writ large. He asks: suppose one could install ethical principles into an omnipotent machine, trusting it with the future of humanity. What principles should we install?

What Bostrom accomplishes by positing his Superintelligence (which begins as something merely smarter than humans, and evolves over the course of the book into something that takes over the galaxy) is a return to what has been called “the God’s eye view”. Philosophers once attempted to define truth and morality according to the perspective of an omnipotent–often both transcendent and immanent–god. Through the scope of his work, Bostrom has recovered some of these old themes. He does this not only through his discussion of Superintelligence (and his positing of its existence in other solar systems already) but also through his simulation arguments.

The way I see it, one thing I am doing by challenging the idea of an intelligence explosion, and its resulting in a superintelligent singleton, is problematizing this recovery of the God’s eye view. If your future world is governed by many sovereign intelligent systems instead of just one, then ethics is something that has to emerge from political reality. There is something irreducibly difficult about interacting with other intelligences, and it’s from this difficulty that we get values, not the other way around. This sort of thinking is much more like Habermas’s mature ethical philosophy.

I’ve written about how to apply Habermas to the design of networked publics that mediate political interactions between citizens. What I built and offer as a toy example in that paper, @TheTweetserve, is simplistic but intended just as a proof of concept.

As I continue to read Bostrom, I expect a convergence on principles. “Coherent extrapolated volition” sounds a lot like a democratic governance structure with elected experts at first pass. The question of how to design a governance structure or institution that leverages artificial intelligence appropriately while legitimately serving its users motivates my dissertation research. My research so far has only scratched the surface of this problem.

by Sebastian Benthall at August 25, 2015 03:19 AM

August 24, 2015

Ph.D. student

Recalcitrance examined: an analysis of the potential for superintelligence explosion

To recap:

  • We have examined the core argument from Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies regarding the possibility of a decisively strategic superintelligent singleton–or, more glibly, an artificial intelligence that takes over the world.
  • With an eye to evaluating whether this outcome is particularly likely relative to other futurist outcomes, we have distilled the argument and in so doing have reduced it to a simpler problem.
  • That problem is to identify bounds on the recalcitrance of the capacities that are critical for instrumental reasoning. Recalcitrance is defined as the inverse of the rate of increase to intelligence per time per unit of effort put into increasing that intelligence. It is meant to capture how hard it is to make an intelligent system smarter, and in particular how hard it is for an intelligent system to make itself smarter. Bostrom’s argument is that if an intelligent system’s recalcitrance is constant or lower, then it is possible for the system to undergo an “intelligence explosion” and take over the world.
  • By analyzing how Bostrom’s argument depends only on the recalcitrance of instrumentality, and not on the recalcitrance of intelligence in general, we can get a firmer grip on the problem. In particular, we can focus on such tasks as prediction and planning. If we discover that these tasks are in fact significantly recalcitrant, that should reduce our expected probability of an AI singleton and consequently cause us to divert research funds to problems that anticipate other outcomes.

In this section I will look in further depth at the parts of Bostrom’s intelligence explosion argument about optimization power and recalcitrance. How recalcitrant must a system be for it to not be susceptible to an intelligence explosion?

This section contains some formalism. For readers uncomfortable with that, trust me: if the system’s recalcitrance is roughly proportional to the amount that the system is able to invest in its own intelligence, then the system’s intelligence will not explode. Rather, it will climb linearly. If the system’s recalcitrance is significantly greater than the amount that the system can invest in its own intelligence, then the system’s intelligence won’t even climb steadily. Rather, it will plateau.

To see why, recall from our core argument and definitions that:

Rate of change in intelligence = Optimization power / Recalcitrance.

Optimization power is the amount of effort that is put into improving the intelligence of a system. Recalcitrance is the resistance of that system to improvement. Bostrom presents this as a qualitative formula, then expands it more formally in subsequent analysis.

\frac{dI}{dt} = \frac{O(I)}{R}

Bostrom’s claim is that for instrumental reasons an intelligent system is likely to invest some portion of its intelligence back into improving its intelligence. So, by assumption we can model O(I) = \alpha I + \beta for some parameters \alpha and \beta, where 0 < \alpha < 1 and \beta represents the contribution of optimization power by external forces (such as a team of researchers). If recalcitrance is constant, e.g. R = k, then we can compute:

\Large \frac{dI}{dt} = \frac{\alpha I + \beta}{k}

Under these conditions, I will be exponentially increasing in time t. This is the “intelligence explosion” that gives Bostrom’s argument so much momentum. The explosion only gets worse if recalcitrance is below a constant.
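For completeness, this differential equation has a standard closed-form solution (assuming initial intelligence I_0 at t = 0), which makes the exponential character explicit:

I(t) = \left(I_0 + \frac{\beta}{\alpha}\right) e^{\frac{\alpha}{k} t} - \frac{\beta}{\alpha}

The growth rate on the log scale is \alpha / k, which is why the slopes of the curves below vary with those two parameters.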

In order to illustrate how quickly the “superintelligence takeoff” occurs under this model, I’ve plotted the above function plugging in a number of values for the parameters \alpha, \beta and k. Keep in mind that the y-axis is plotted on a log scale, which means that a roughly linear increase indicates exponential growth.

[Figure: Plot of exponential takeoff rates. Modeled superintelligence takeoff where the rate of intelligence gain is linear in current intelligence and recalcitrance is constant. The slope on the log scale is determined by the alpha and k values.]
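Here is a minimal sketch of how such curves can be generated, assuming illustrative parameter values and simple Euler integration (plotting omitted for brevity):

import numpy as np

def takeoff(alpha, beta, k, i0=1.0, t_max=10.0, dt=0.001):
    # Euler integration of dI/dt = (alpha * I + beta) / k.
    steps = int(t_max / dt)
    intelligence = np.empty(steps)
    intelligence[0] = i0
    for n in range(1, steps):
        di = (alpha * intelligence[n - 1] + beta) / k
        intelligence[n] = intelligence[n - 1] + dt * di
    return intelligence

for alpha, beta, k in [(0.5, 1.0, 1.0), (1.0, 1.0, 1.0), (1.0, 1.0, 2.0)]:
    curve = takeoff(alpha, beta, k)
    print(f"alpha={alpha}, beta={beta}, k={k}: I(10) = {curve[-1]:.3g}")
# Plotted on a log scale, each trajectory is asymptotically a straight
# line with slope alpha / k.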

It is true that in all the above cases, the intelligence function is exponentially increasing over time. The astute reader will notice that by my earlier claim \alpha cannot be greater than 1, and so one of the modeled functions is invalid. It’s a good point, but one that doesn’t matter. We are fundamentally just modeling intelligence expansion as something that is linear on the log scale here.

However, it’s important to remember that recalcitrance may also be a function of intelligence. Bostrom does not mention the possibility of recalcitrance increasing with intelligence. How sensitive to intelligence would recalcitrance need to be in order to prevent exponential growth in intelligence?

Consider the following model where recalcitrance is, like optimization power, linearly increasing in intelligence.

\frac{dI}{dt} = \frac{\alpha_o I + \beta_o}{\alpha_r I + \beta_r}

Now there are four parameters instead of three. Note that this model is identical to the one above when \alpha_r = 0 and \beta_r = k. Plugging in several values for these parameters and plotting again with the y-axis on the log scale, we get:

[Figure: Plot of takeoff when both optimization power and recalcitrance are linearly increasing in intelligence. Only when recalcitrance is unaffected by intelligence level is there an exponential takeoff. In the other cases, intelligence quickly plateaus on the log scale. No matter how much the system can invest in its own optimization power as a proportion of its total intelligence, it still only takes off at a linear rate.]
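The plateau has a simple explanation: when recalcitrance grows linearly in intelligence, the growth rate approaches a constant as I becomes large, so intelligence growth is at most linear:

\lim_{I \to \infty} \frac{dI}{dt} = \lim_{I \to \infty} \frac{\alpha_o I + \beta_o}{\alpha_r I + \beta_r} = \frac{\alpha_o}{\alpha_r}

A constant dI/dt means linear, not exponential, growth in I, however large the ratio \alpha_o / \alpha_r happens to be.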

The point of this plot is to illustrate how easily exponential superintelligence takeoff might be stymied by a dependence of recalcitrance on intelligence. Even in the absurd case where the system is able to invest a thousand times as much intelligence as it already has back into its own advancement, and a large team steadily commits a million “units” of optimization power (whatever that means–Bostrom is never particularly clear on the definition), a minute linear dependence of recalcitrance on intelligence limits the takeoff to linear speed.

Are there reasons to think that recalcitrance might increase as intelligence increases? Prima facie, yes. Here’s a simple thought experiment: suppose there is some distribution of intelligence-algorithm advances available in nature, and some of them are harder to achieve than others. A system that dedicates itself to advancing its own intelligence, knowing that it gets more optimization power as it gets more intelligent, might start by finding the “low hanging fruit” of cognitive enhancement. But as it picks the low hanging fruit, it is left with only the harder discoveries. Therefore, recalcitrance increases as the system grows more intelligent.

This is not a decisive argument against fast superintelligence takeoff and the possibility of a decisively strategic superintelligent singleton. It is just an argument about why it is important to consider recalcitrance carefully when making claims about takeoff speed, and a counter to what I believe is a bias in Bostrom’s work towards considering unrealistically low recalcitrance levels.

In future work, I will analyze the kinds of instrumental intelligence tasks, like prediction and planning, that we have identified as being at the core of Bostrom’s superintelligence argument. The question we need to ask is: does the recalcitrance of prediction tasks increase as the agent performing them becomes better at prediction? And likewise for planning. If prediction and planning are the two fundamental components of means-ends reasoning, and both have recalcitrance that increases significantly with the intelligence of the agent performing them, then we have reason to reject Bostrom’s core argument and assign a very low probability to the doomsday scenario that occupies much of Bostrom’s imagination in Superintelligence. If this is the case, that suggests we should be devoting resources to anticipating what he calls multipolar scenarios, where no intelligent system has a decisive strategic advantage, instead.

by Sebastian Benthall at August 24, 2015 11:25 PM

August 23, 2015

Ph.D. student

Instrumentality run amok: Bostrom and Instrumentality

Narrowing our focus onto the crux of Bostrom’s argument, we can see how tightly it is bound to a much older philosophical notion of instrumental reason. This comes to the forefront in his discussion of the orthogonality thesis (p.107):

The orthogonality thesis
Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.

Bostrom goes on to clarify:

Note that the orthogonality thesis speaks not of rationality or reason, but of intelligence. By “intelligence” we here mean something like skill at prediction, planning, and means-ends reasoning in general. This sense of instrumental cognitive efficaciousness is most relevant when we are seeking to understand what the causal impact of a machine superintelligence might be.

Bostrom maintains that the generality of instrumental intelligence, which I would argue is evinced by the generality of computing, gives us a way to predict how intelligent systems will act. Specifically, he says that an intelligent system (and specifically a superintelligence) might be predictable because of its design, because of its inheritance of goals from a less intelligent system, or because of convergent instrumental reasons. (p.108)

Return to the core logic of Bostrom’s argument. The existential threat posed by superintelligence is simply that the instrumental intelligence of an intelligent system will invest in itself and overwhelm any ability by us (its well-intentioned creators) to control its behavior through design or inheritance. Bostrom thinks this is likely because instrumental intelligence (“skill at prediction, planning, and means-ends reasoning in general”) is a kind of resource or capacity that can be accumulated and put to other uses more widely. You can use instrumental intelligence to get more instrumental intelligence; why wouldn’t you? The doomsday prophecy of a fast takeoff superintelligence achieving a decisive strategic advantage and becoming a universe-dominating singleton depends on this internal cycle: instrumental intelligence investing in itself and expanding exponentially, assuming low recalcitrance.

This analysis brings us to a significant focal point. The critical missing formula in Bostrom’s argument is (specifically) the recalcitrance function of instrumental intelligence. This is not the same as recalcitrance with respect to “general” intelligence or even “super” intelligence. Rather, what’s critical is how much a process dedicated to “prediction, planning, and means-ends reasoning in general” can improve its own capacities at those things autonomously. The values of this recalcitrance function will bound the speed of superintelligence takeoff. These bounds can then inform the optimal allocation of research funding towards anticipation of future scenarios.

In what I hope won’t distract from the logical analysis of Bostrom’s argument, I’d like to put it in a broader context.

Take a minute to think about the power of general purpose computing and the impact it has had on the past hundred years of human history. As the earliest digital computers were informed by notions of artificial intelligence (c.f. Alan Turing), we can accurately say that the very machine I use to write this text, and the machine you use to read it, are the result of refined, formalized, and materialized instrumental reason. Every programming language is a level of abstraction over a machine that has no ends in itself, but which serves the ends of its programmer (when it’s working). There is a sense in which Bostrom’s argument is not about a near future scenario but rather is just a description of how things already are.

Our very concepts of “technology” and “instrument” are so related that it can be hard to see any distinction at all. (c.f. Heidegger, “The Question Concerning Technology“) Bostrom’s equating of instrumentality with intelligence is a move that makes more sense as computing becomes ubiquitously part of our experience of technology. However, if any instrumental mechanism can be seen as a form of intelligence, that lends credence to panpsychist views of cognition as life. (c.f. the Santiago theory)

Meanwhile, arguably the genius of the market is that it connects ends (through consumption or “demand”) with means (through manufacture and services, or “supply”) efficiently, bringing about the fruition of human desire. If you replace “instrumental intelligence” with “capital” or “money”, you get a familiar critique of capitalism as a system driven by capital accumulation at the expense of humanity. The analogy with capital accumulation is worthwhile here. Much as in Bostrom’s “takeoff” scenarios, we can see how capital (in the modern era, wealth) is reinvested in itself and grows at an exponential rate. Variable rates of return on investment lead to great disparities in wealth. We today have a “multipolar scenario” as far as the distribution of capital is concerned. At times people have advocated for an economic “singleton” through a planned economy.

It is striking that the contemporary analytic philosopher and futurist Nick Bostrom contemplates the same malevolent force in his apocalyptic scenario as does Max Horkheimer in his 1947 treatise “Eclipse of Reason“: instrumentality run amok. Whereas Bostrom concerns himself primarily with what is literally a machine dominating the world, Horkheimer sees the mechanism of self-reinforcing instrumentality as pervasive throughout the economic and social system. For example, he sees engineers as loci of active instrumentalism. Bostrom never cites Horkheimer, let alone Heidegger. That there is a convergence of different philosophical sub-disciplines on the same problem suggests that there are convergent ultimate reasons which may triumph over convergent instrumental reasons in the end. The question of what these convergent ultimate reasons are, and what their relationship to instrumental reasons is, remains a mystery.

by Sebastian Benthall at August 23, 2015 06:10 PM

August 21, 2015

Ph.D. student

Further distillation of Bostrom’s Superintelligence argument

Following up on this outline of the definitions and core argument of Bostrom’s Superintelligence, I will try to narrow in on the key mechanisms the argument depends on.

At the heart of the argument are a number of claims about instrumentally convergent values and self-improvement. It’s important to distill these claims to their logical core because their validity affects the probability of outcomes for humanity and the way we should invest resources in anticipation of superintelligence.

There are a number of ways to tighten Bostrom’s argument:

Focus the definition of superintelligence. Bostrom leads with the provocative but fuzzy definition of superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.” But the overall logic of his argument makes it clear that the domains of interest do not necessarily include violin-playing or any number of other activities. Rather, the domains necessary for a Bostrom superintelligence explosion are those that pertain directly to improving one’s own intellectual capacity. Bostrom speculates about these capacities in two ways. In one section he discusses “cognitive superpowers”, domains that would quicken a superintelligence takeoff. In another section he discusses convergent instrumental values, values that agents with a broad variety of goals would converge on instrumentally.

  • Cognitive Superpowers
    • Intelligence amplification
    • Strategizing
    • Social manipulation
    • Hacking
    • Technology research
    • Economic productivity
  • Convergent Instrumental Values
    • Self-preservation
    • Goal-content integrity
    • Cognitive enhancement
    • Technological perfection
    • Resource acquisition

By focusing on these traits, we can start to see that Bostrom is not really worried about what has been termed an “Artificial General Intelligence” (AGI). He is concerned with a very specific kind of intelligence with certain capacities to exert its will on the world and, most importantly, to increase its power over nature and other intelligent systems rapidly enough to attain a decisive strategic advantage. Which leads us to a second way we can refine Bostrom’s argument.

Closely analyze recalcitrance. Recall that Bostrom speculates that the condition for a fast takeoff superintelligence, assuming that the system engages in “intelligence amplification”, is constant or lower recalcitrance. A weakness in his argument is his lack of in-depth analysis of this recalcitrance function. I will argue that for many of the convergent instrumental values and cognitive superpowers at the core of Bostrom’s argument, it is possible to be much more precise about system recalcitrance. This analysis should allow us to determine to a greater extent the likelihood of singleton vs. multipolar superintelligence outcomes.

For example, it’s worth noting that a number of the “superpowers” are explicitly in the domain of the social sciences. “Social manipulation” and “economic productivity” are both vastly complex domains of research in their own right. Each may well have bounds on how effective an intelligent system can be at them, no matter how much “optimization power” is applied to the task. The capacity of those manipulated to understand instructions is one such bound. The fragility or elasticity of markets could be another.

For intelligence amplification, strategizing, technological research/perfection, and cognitive enhancement in particular, there is a wealth of literature in artificial intelligence and cognitive science that addresses the technical limits of these domains. Such technical limitations are a natural source of recalcitrance and an impediment to fast takeoff.

by Sebastian Benthall at August 21, 2015 07:42 PM

Bostrom’s Superintelligence: Definitions and core argument

I wanted to take the opportunity to spell out what I see as the core definitions and argument of Bostrom’s Superintelligence as a point of departure for future work. First, some definitions:

  • Superintelligence. “We can tentatively define a superintelligence as any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.” (p.22)
  • Speed superintelligence. “A system that can do all that a human intellect can do, but much faster.” (p.53)
  • Collective superintelligence. “A system composed of a large number of smaller intellects such that the system’s overall performance across many very general domains vastly outstrips that of any current cognitive system.” (p.54)
  • Quality superintelligence. “A system that is at least as fast as a human mind and vastly qualitatively smarter.” (p.56)
  • Takeoff. The event of the emergence of a superintelligence. The takeoff might be slow, moderate, or fast, depending on the conditions under which it occurs.
  • Optimization power and Recalcitrance. Bostrom proposes that we model the speed of superintelligence takeoff as: Rate of change in intelligence = Optimization power / Recalcitrance. Optimization power refers to the effort of improving the intelligence of the system. Recalcitrance refers to the resistance of the system to being optimized. (p.65, pp.75-77)
  • Decisive strategic advantage. The level of technological and other advantages sufficient to enable complete world domination. (p.78)
  • Singleton. A world order in which there is at the global level one decision-making agency. (p.78)
  • The wise-singleton sustainability threshold. “A capability set exceeds the wise-singleton threshold if and only if a patient and existential risk-savvy system with that capability set would, if it faced no intelligent opposition or competition, be able to colonize and re-engineer a large part of the accessible universe.” (p.100)
  • The orthogonality thesis. “Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.” (p.107)
  • The instrumental convergence thesis. “Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent’s goal being realized for a wide range of final goals and a wide range of situations, implying that these instrumental values are likely to be pursued by a broad spectrum of situated intelligent agents.” (p.109)

Bostrom’s core argument in the first eight chapters of the book, as I read it, is this:

  1. Intelligent systems are already being built and expanded on.
  2. If some constant proportion of a system’s intelligence is turned into optimization power, then if the recalcitrance of the system is constant or lower, then the intelligence of the system will increase at an exponential rate. This will be a fast takeoff.
  3. Recalcitrance is likely to be lower for machine intelligence than human intelligence because of the physical properties of artificial computing systems.
  4. An intelligent system is likely to invest in its own intelligence because of the instrumental convergence thesis. Improving intelligence is an instrumental goal given a broad spectrum of other goals.
  5. In the event of a fast takeoff, it is likely that the superintelligence will get a decisive strategic advantage, because of a first-mover advantage.
  6. Because of the instrumental convergence thesis, we should expect a superintelligence with a decisive strategic advantage to become a singleton.
  7. Machine superintelligences, which are more likely to take off fast and become singletons, are not likely to create nice outcomes for humanity by default.
  8. A superintelligent singleton is likely to be above the wise-singleton threshold. Hence the fate of the universe and the potential of humanity is at stake.

Having made this argument, Bostrom goes on to discuss ways we might anticipate and control the superintelligence as it becomes a singleton, thereby securing humanity.

by Sebastian Benthall at August 21, 2015 12:02 AM

August 16, 2015

Ph.D. student

And now for something completely different: Superintelligence and the social sciences

This semester I’ll be co-organizing, with Mahendra Prasad, a seminar on the subject of “Superintelligence and the Social Sciences”.

How I managed to find myself in this role is a bit of a long story. But as I’ve had a longstanding curiosity about this topic, I am glad to be putting energy into the seminar. It’s a great opportunity to get exposure to some of the very interesting work done by MIRI on this subject. It’s also a chance to thoroughly investigate (and critique) Bostrom’s book Superintelligence: Paths, Dangers, Strategies.

I find the subject matter perplexing because in many ways it forces the very cultural and intellectual clash that I’ve been preoccupied with elsewhere on this blog: the failure of social scientists and engineers to communicate. Or, perhaps, the failure of qualitative researchers and quantitative researchers to communicate. Whatever you want to call it.

Broadly, the question at stake is: what impact will artificial intelligence have on society? This question is already misleading, since in the imagination of most people who haven’t been trained in the subject, “artificial intelligence” refers to something out of a science fiction scenario, whereas to a practitioner, “artificial intelligence” is, basically, just software. Just as the press went wild last year speculating about “algorithms”, by which it meant software, so too is the press excited about artificial intelligence, which is just software.

But the concern that software is responsible for more and more of the activity in the world and that it is in a sense “smarter than us”, and especially the fear that it might become vastly smarter than us (i.e. turning into what Bostrom calls a “superintelligence”), is pervasive enough to drive research funding into topics like “AI Safety”. It also is apparently inspiring legal study into the regulation of autonomous systems. It may also have implications for what is called, vaguely, “social science”, though increasingly it seems like nobody really knows what that is.

There is a serious epistemological problem here. Some researchers are trying to predict, or forewarn us of, the societal impact of agents that are by assumption beyond their comprehension, on the premise that they may come into existence at any moment.

This is fascinating but one has to get a grip.

by Sebastian Benthall at August 16, 2015 08:19 PM

August 04, 2015

MIMS 2015

Metafarce Update -> systemd, man pages, and TLS

I’ve recently had time to update the guts of this site. This post is about the updates to those guts, including what I tried that didn’t work out so well. The first section is full of personal opinion about the state of free UNIX OS’s.1 The second section concerns my adventures in getting TLS to work, and thoughts on the state of free TLS certificate signing services.


I wanted to have IPv6 connectivity, DNSSEC and TLS for this domain and a few other domains I host. The provider I had been using for VPS hosting did not offer IPv6, so I found a VPS provider that did. The provider I had been using for DNS did not support DNSSEC, so I found a DNS provider that did.

Switching VPS providers meant I had to set up a new machine anyway. I had been running Debian for years, but I decided to switch to OpenBSD. My Debian VPS had been fine over the years. I kept it updated with apt-get and generally never had any major problems with it. The next section deals with why I switched.

Because Reasons

Actually, two reasons.

The first reason is systemd. I simply didn’t want to deal with it. I didn’t want to learn it, I didn’t see the value in it, and it has crappy documentation. This isn’t me saying systemd is crap. I don’t know if it’s crap because I haven’t spent any time evaluating it. This is me saying I don’t care about systemd, and it isn’t worth my time to investigate. There are other places on the web where one can argue the [de]merits of systemd; this is not a place for that.

One of the key things I’ve found missing in the assorted arguments surrounding systemd is historical context. It is as if many of the systemd combatants aren’t aware of how sacred init systems are to UNIX folk. One of the first big splits in UNIX history was between those who wanted a BSD style init and those who wanted a sysV style init. There is a long history of UNIX folk arguing about how to start their OS, yet I saw very little recognition of that fact in arguments for/against systemd.

The second reason is because Debian man pages suck. Debian probably has the highest quality man pages of any Linux distro, but they still suck. They’re often outdated, incomplete, incorrect, and it doesn’t seem like Linux users care all that much that their man pages suck. Most users only read man pages during troubleshooting, and then only after failing to find their solution on the web. I read man pages for every application I install. I want to know how the application works, what files it uses, signals it accepts, etc.

The BSD UNIXes have excellent man pages, and they get the attention they deserve during release cycles. Unlike in most Linux distributions, updates to man pages in BSD UNIXes are listed in changelogs and seen as development work on par with software changes. This is as it should be. Documentation acts as a kind of contract between user and programmer. It sets user expectations. If a man page says a program should behave in a certain fashion and the program doesn’t, then we know it’s a bug.

There is a trend in the UNIX world to think man pages are outdated. Some newer UNIX applications don’t even include man pages. This is stupid. Documentation is part of the program, and should not be relegated to an afterthought. Also, you might not always have the web when troubleshooting.

TLS and StartSSL

This domain and the other domains I run on this VPS now have both IPv6 and DNSSEC. Metafarce does not yet have TLS(i.e. https) because I refuse to pay for it. StartSSL offers free certificates, so in theory I should be able to get one for free. The problem is that I cannot convince StartSSL that I control the domain. To successfully validate that a user owns a domain, the user must have access to an email address in the whois record, OR have access to either postmaster@, hostmaster@, or webmaster@ for that domain.

I don’t control any of the email addresses in my whois record, I don’t have a privacy service for my whois record, and my registrar just doesn’t allow me to edit them. I’m also not willing to create an MX record for the domain and then set up mail forwarding for postmaster@, hostmaster@, or webmaster@. Therefore I cannot convince StartSSL that I control the domain. I shouldn’t be in this situation. We have DNS SOA records for reasons, and one of those reasons is to host the zone’s admin email address. At the very least, the address listed in the domain’s SOA record should be available to use for domain validation purposes.

Also, how do they know the DNS domain’s controller will be the only one who has access to these email addresses? The list, while not arbitrary, is not enforced for all mail setups.2 There are plenty of email-accepting domains that forward these addresses straight to /dev/null.

Another method I have seen used to confirm control of a zone is to create a TXT record with a unique string. StartSSL could provide me with a unique string, and I would then add a TXT record with that string as its value. This method assumes that someone who can create TXT records for a domain controls the domain, which is probably a fair assumption.
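
Concretely, the record and the check could look something like this (the record name, TTL, and token are all invented for illustration, not a format StartSSL specifies):

    ; hypothetical record the domain owner adds to their zone
    _validation.example.net.  300  IN  TXT  "startssl-token-9f2c1a"

    # the CA then queries for it and compares against the string it issued
    $ dig +short _validation.example.net TXT
    "startssl-token-9f2c1a"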

I think StartSSL has chosen a poor method for tying users to domains. Whois records should not be relied upon as a method of proving control. Not only does this break for people who use whois privacy services, but many users cannot directly edit their whois records, and don’t have the skills or resources to set up email forwarding for their domain.

The outcome of all this is that I don’t support https for this domain. Without my cert being signed by a CA, users would have to wade through piles of buttons and dialogs that scare them away. Thus it remains unencrypted.[3] Proving that a given user controls a given domain is a tough problem, and I don’t mean to suggest otherwise. StartSSL offers a free signing service and they should be commended for it. I just hope the situation improves so that I and others can start hosting more content over TLS.

Let’s Encrypt to the Rescue

Let’s Encrypt is a soon-to-be-launched certificate authority run by the Internet Security Research Group (ISRG). They’re a public benefit corporation backed by a few concerned corporate sponsors and the EFF. They’re going to sign web TLS certs for free at launch, which is great in and of itself. Even greater is the Internet draft they’ve written for their new automagic TLS cert creation and signing. We’ll see how it works out, but if they get it right this will be a huge boon for TLS adoption. At the very least I can then start running TLS everywhere without having to pay for it.
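
The core idea is that domain validation can be automated end to end: the CA issues the client a fresh token, the client publishes it somewhere only the domain operator could, and the CA checks for it before signing. Here is a minimal sketch of that flow in Python; the web root, token value, and file layout are illustrative assumptions, not the draft’s actual wire protocol:

    # Sketch of automated, HTTP-based domain validation. The web root,
    # token value, and file layout are illustrative assumptions, not the
    # ACME draft's actual wire protocol.
    import os

    WEBROOT = "webroot"                # stand-in for the site's document root
    token = "token-issued-by-the-ca"   # fresh per-request nonce from the CA

    # 1. The client publishes the token under a well-known path.
    challenge_dir = os.path.join(WEBROOT, ".well-known", "acme-challenge")
    os.makedirs(challenge_dir, exist_ok=True)
    with open(os.path.join(challenge_dir, token), "w") as f:
        f.write(token)

    # 2. The CA fetches http://<domain>/.well-known/acme-challenge/<token>
    #    and, if the body matches what it issued, treats the requester as
    #    controlling the domain and signs the cert: no whois addresses and
    #    no mail forwarding required.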

  1. I use the term UNIX very generally, as a super-category. Any OS that embodies the concepts of UNIX is a type of UNIX. Linux, Minix, MacOSX, *BSD, and Solaris are all types of UNIX. I’m not sure about QNX, but Windows and VxWorks are definitely not UNIX.

  2. RFC 2142 does actually reserve these, but that doesn’t mean mail admins always do.

  3. Another site I host on this VPS supports TLS. Its cert is not signed by any CA, so the user has to click through some scary-looking buttons in order to view content. The cert is guaranteed by DANE to be the right one for the domain, yet no browsers currently support DANE out of the box.

Metafarce Update -> systemd, man pages, and TLS was originally published by Andrew McConachie at Metafarce on August 04, 2015.

by Andrew McConachie ( at August 04, 2015 07:00 AM

July 29, 2015

Ph.D. student


The example of Arendt’s dismissal of scientific discourse from political discussion underscores a much deeper political problem: a lack of intelligibility.

Every language is intelligible to some people and not to others. This is obviously true in the case of major languages like English and Chinese. It is less obvious but still a problem with different dialects of a language. It becomes a source of conflict when there is a lack of intelligibility between the specialized languages of expertise or personal experience.

For many, mathematical formalism is unintelligible; it appears to be so for Arendt, and this disturbs her, as she locates politics in speech and wants there to be political controls on scientists. But how many scientists and mathematicians would find Arendt intelligible? She draws deeply on concepts from ancient Greek and Augustinian philosophy. Are these thoughts truly accessible? What about the intelligibility of the law, to non-lawyers? Or the intelligibility of spoken experiences of oppression to those who do not share such an experience?

To put it simply: people don’t always understand each other and this poses a problem for any political theory that locates justice in speech and consensus. Advocates of these speech-based politics are most often extraordinarily articulate and write persuasively about the need to curtail the power of any systems of control that they do not understand. They are unable to agree to a social contract that they cannot read.

But this persuasive speech is necessarily unable to account for the myriad mechanisms that are both conditions for the speech and unintelligible to the speaker. This includes the mechanisms of law and technology. There is a performative contradiction between these persuasive words and their conditions of dissemination, and this is reason to reject them.

Advocates of bureaucratic rule tend to be less eloquent, and those that create technological systems that replace bureaucratic functions even less so. Nevertheless each group is intelligible to itself and may have trouble understanding the other groups.

The temptation for any one segment of society to totalize its own understanding, dismissing other ways of experiencing and articulating reality as inessential or inferior, is so strong that it can be read in even great authors like Arendt. Ideological politics (as opposed to technocratic politics) is the conflict between groups expressing their interests as ideology.

The problem is that in order to function as it does at scale, modern society requires the cooperation of specialists. Its members are heterogeneous; this is the source of its flexibility and power. It is also the cause of ideological conflict between functional groups that should see themselves as part of a whole. Even if these members do see their interdependence in principle, their specialization makes them less intelligible. Articulation often involves different skills from action, and teaching to the uninitiated is another skill altogether. Meanwhile, the complexity of the social system expands as it integrates more diverse communities, reducing further the proportion understood by a single member.

There is still in some political discourse the ideal of deliberative consensus as the ground of normative or political legitimacy. Suppose, as seems likely, that this is impossible for the perfectly mundane and mechanistic reason that society is so complicated due to the demands of specialization that intelligibility among its constituents is never going to happen.

What then?

by Sebastian Benthall at July 29, 2015 05:44 AM

July 28, 2015

MIMS 2015
Ph.D. student

the state and the household in Chinese antiquity

It’s worthwhile in comparison with Arendt’s discussion of Athenian democracy to consider the ancient Chinese alternative. In Alfred Huang’s commentary on the I Ching, we find this passage:

The ancient sages always applied the principle of managing a household to governing a country. In their view, a country was simply a big household. With the spirit of sincerity and mutual love, one is able to create a harmonious situation anywhere, in any circumstance. In his Analects, Confucius says,

From the loving example of one household,
A whole state becomes loving.
From the courteous manner of one household,
A whole state becomes courteous.

Comparing the history of Europe and the rise of capitalistic bureaucracy with the history of China, where bureaucracy is much older, is interesting. I have comparatively little knowledge of the latter, but it is often said that China does not have the same emphasis on individualism that you find in the West. Security is considered much more important than Freedom.

The reminder that the democratic values proposed by Arendt and Horkheimer are culturally situated is an important one, especially as Horkheimer claims that free burghers are capable of producing art that expresses universal needs.

by Sebastian Benthall at July 28, 2015 02:38 AM

July 27, 2015

Ph.D. student

a refinement

If knowledge is situated, and scientific knowledge is the product of rational consensus among diverse constituents, then a social organization that unifies many different social units functionally will have a ‘scientific’ ideology or rationale that is specific to the situation of that organization.

In other words, the political ideology of a group of people will be part of the glue that constitutes the group. Social beliefs will be a component of the collective identity.

A social science may be the elaboration of one such ideology. Many have been. So social scientific beliefs are about capturing the conditions for the social organization that maintains those beliefs. (c.f. Nietzsche on tablets of values)

There are good reasons to teach these specialized social sciences as a part of vocational training for certain functions. For example, people who work in finance or business can benefit from learning economics.

Only in an academic context does the professional identity of disciplinary affiliation matter. This academic political context creates great division and confusion that merely reflects the disorganization of the academic system.

This disorganization is fruitful precisely because it allows for individuality (cf. Horkheimer). However, it is also inefficient and easy to corrupt. Hmm.

Against this, not all knowledge is situated. Some is universal. Its universality is due to its pragmatic usefulness in technical design. Since technical design acts on everyone even when their own situated understanding does not include it, this kind of knowledge has universal ground (in violence, sadly, but maybe also in other ways).

The question is whether anywhere in the technically correct understanding of social organization (something we might see in Beniger) there is room for the articulation of what is supposed to be great and worthy of man (see Horkheimer).

I have thought for a long time that there is probably something like this describable in terms of complexity theory.

by Sebastian Benthall at July 27, 2015 04:22 AM

structuralism and/or functionalism

Previous entries detailing the arguments of Arendt, Horkheimer, and Beniger show these theorists have what you might call a structural functionalist bent. Society is conceived as a functional whole. There are units of organization within it. For Arendt, this social organization begins in the private household and expands to all of society. Horkheimer laments this as the triumph of mindless economic organization over genuine, valuable individuality.

Structuralism, let alone structural functionalism, is not in fashion in the social sciences. Purely speculatively, one reason for this might be that to the extent that society was organized to perform certain functions, more of those functions have been delegated to information processing infrastructure, as in Beniger’s analysis. That leaves “culture” more a domain of ephemerality and identity conflict, as activity in the sphere of economic production becomes, if not private, then opaque.

My empirical work on open source communities suggests (though certainly not conclusively) that these communities are organized more for functional efficiency than other kinds of social groups (including academics) are. I draw this inference from the degree disassortativity of open source social networks. Disassortativity suggests the interaction of different kinds of people, which runs against homophilic patterns of social formation but seems essential for economic activity, where the interaction of specialists is what creates value.
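
For concreteness, here is the kind of measurement that claim rests on, sketched with networkx on a toy hub-and-spoke contributor graph (the edge list is invented for illustration):

    # Degree assortativity of a toy contributor graph, using networkx.
    # Hub-and-spoke: one maintainer fields many one-off contributors.
    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([
        ("maintainer", "drive_by_1"),
        ("maintainer", "drive_by_2"),
        ("maintainer", "drive_by_3"),
        ("maintainer", "core_dev"),
        ("core_dev", "drive_by_4"),
    ])

    # Negative r means disassortative mixing: high-degree nodes attach
    # mostly to low-degree nodes rather than to each other.
    r = nx.degree_assortativity_coefficient(G)
    print("degree assortativity r = %.2f" % r)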

Assuming that society in its entirety (!!) is very complex and not easily captured by a single grand theory, we can nevertheless distinguish different kinds of social organization and see how they theorize themselves. We can also map how they interact and what mechanisms mediate between them.

by Sebastian Benthall at July 27, 2015 03:37 AM

July 25, 2015

Ph.D. student

Land and gold (Arendt, Horkheimer)

I am thirty, still in graduate school, and not thrilled about the prospects of home ownership since all any of the professionals around me talk about is the sky-rocketing price of real estate around the critical American urban centers.

It is with a leisure afforded by graduate school that I am able to take the long view on this predicament. It is very cheap to spend one’s idle time reading Arendt, who has this to say about the relationship between wealth and property:

The profound connection between private and public, manifest on its most elementary level in the question of private property, is likely to be misunderstood today because of the modern equation of property and wealth on one side and propertylessness and poverty on the other. This misunderstanding is all the more annoying as both, property as well as wealth, are historically of greater relevance to the public realm than any other private matter or concern and have played, at least formally, more or less the same role as the chief condition for admission to the public realm and full-fledged citizenship. It is therefore easy to forget that wealth and property, far from being the same, are of an entirely different nature. The present emergence everywhere of actually or potentially very wealthy societies which at the same time are essentially propertyless, because the wealth of any single individual consists of his share in the annual income of society as a whole, clearly shows how little these two things are connected.

For Arendt, beginning with her analysis of ancient Greek society, property (landholding) is the condition of one’s participation in democracy. It is a place of residence and a source of one’s material fulfilment, which is a prerequisite to one’s free (because unnecessitated) participation in public life. This is contrasted with wealth, which is a feature of private life and is unpolitical. In ancient society, slaves could own wealth, but not property.

If we look at the history of Western civilization as a progression away from this rather extreme moment, we see the rise of social classes whose power is based not in landholding but in wealth. Industrialism and the economy based on private ownership of capital is a critical transition in history. That capital is not bound to a particular location but rather is mobile across international boundaries is one of the things that characterizes global capitalism and brings it into tension with a geographically bounded democratic state. It is interesting that a Jeffersonian democracy, designed with the assumption of landholding citizens, should predate industrial capitalism and be constitutionally unprepared for the result, but nevertheless be one of the models for other democratic governance structures throughout the world.

If private ownership of capital, not land, defines political power under capitalism, then wealth, not property, becomes the measure of one’s status and security. For a time, when wealth was, as a matter of international standard, exchangeable for gold, private ownership of gold could replace private ownership of land as the guarantee of one’s material security and thereby the grounds for one’s independent existence. This independent, free rationality has since Aristotle been the purpose (telos) of man.

In the United States, Franklin Roosevelt’s 1933 Executive Order 6102 forbade the private ownership of gold. The purpose of this was to free the Federal Reserve of the gold market’s constraint on increasing the money supply during the Great Depression.

A perhaps unexpected complaint against this political move comes from Horkheimer (Eclipse of Reason, 1947), who sees this as a further affront to individualism by capitalism.

The age of vast industrial power, by eliminating the perspectives of a stable past and future that grew out of ostensibly permanent property relations, is in the process of liquidating the individual. The deterioration of his situation is perhaps best measured in terms of his utter insecurity as regards his personal savings. As long as currencies were rigidly tied to gold, and gold could flow freely over frontiers, its value could shift only within narrow limits. Under present-day conditions the dangers of inflation, of a substantial reduction or complete loss of the purchasing power of his savings, lurk around the next corner. Private possession of gold was the symbol of bourgeois rule. Gold made the burgher somehow the successor of the aristocrat. With it he could establish security for himself and be reasonably sure that even after his death his dependents would not be completely sucked up by the economic system. His more or less independent position, based on his right to exchange goods and money for gold, and therefore on the relatively stable property values, expressed itself in the interest he took in the cultivation of his own personality–not, as today, in order to achieve a better career or for any professional reason, but for the sake of his own individual existence. The effort was meaningful because the material basis of the individual was not wholly unstable. Although the masses could not aspire to the position of the burgher, the presence of a relatively numerous class of individuals who were governed by interest in humanistic values formed the background for a kind of theoretical thought as well as for the type of manifestations in the arts that by virtue of their inherent truth express the needs of society as a whole.

Horkheimer’s historical arc, like that of many Marxists, appears to ignore its parallels in antiquity. Monetary policy in the Roman Empire, which used something like a gold standard, was not always straightforward. Inflation was sometimes a severe problem, as when generals would print money to pay the soldiers that supported their political coups. So it’s not clear that the modern economy is more unstable than gold- or land-based economies. However, the criticism that economic security is largely a matter of one’s continued participation in a larger system, and that there is little in the way of financial security besides this, holds. He continues:

The state’s restriction on the right to possess gold is the symbol of a complete change. Even the members of the middle class must resign themselves to insecurity. The individual consoles himself with the thought that his government, corporation, association, union, or insurance company will take care of him when he becomes ill or reaches the retiring age. The various laws prohibiting private possession of gold symbolize the verdict against the independent economic individual. Under liberalism, the beggar was always an eyesore to the rentier. In the age of big business both beggar and rentier are vanishing. There are no safety zones on society’s thoroughfares. Everyone must keep moving. The entrepreneur has become a functionary, the scholar a professional expert. The philosopher’s maxim, Bene qui latuit, bene vixit, is incompatible with the modern business cycles. Everyone is under the whip of a superior agency. Those who occupy the commanding positions have little more autonomy than their subordinates; they are bound by the power they wield.

In an academic context, it is easy to make a connection between Horkheimer’s concerns about gold ownership and tenure. Academic tenure is, or was, the refuge of the individual who could in theory develop themselves in obscurity. The price of this autonomy, which according to the philosophical tradition represents the highest possible achievement of man, is that one teaches. So the developed individual passes on the values developed through contemplation and reflection to the young. The privatization of the university and the emphasis on teaching marketable skills that allow graduates to participate more fully in the economic system is arguably an extension of Horkheimer’s cultural apocalypse.

The counter to this is the claim that the economy as a whole achieves a kind of homeostasis that provides greater security than one whose value is bound to something stable and exogenous like gold or land. One’s savings are secure as long as the system doesn’t fail. Meanwhile, the price of access to cultural materials through which one might expand one’s individuality (i.e. videos of academic lectures, the arts, or music) decreases as a consequence of the pervasiveness of the economy. At this point one feels one has reached the limits of Horkheimer’s critique, which perhaps only sees one side of the story despite its sublime passion. We see echoes of it in contemporary feminist critique, which emphasizes how the demands of necessity are disproportionately borne by women and how this affects their role in the economy. That women have only relatively recently, in historical terms, been released from the private household into the public world (c.f. Arendt again) situates them more precariously within the economic system.

What remains unclear (to me) is how one should conceive of society and values when there is an available continuum of work, opportunity, leisure, individuality, art, and labor under conditions of contemporary technological control. Specifically, the notion of inequality becomes more complicated when one considers that society has never been equal in the sense that is often aspired to in contemporary American society. This is largely because the notion of equality we use today draws from two distinct sources. The first is the equality of self-sufficient landholding men as they encounter each other freely in the polis. Or, equivalently, as self-sufficient goldholding men in something like the Habermasian bourgeois public sphere. The second is equality within society, which is economically organized and therefore requires specialization and managerial stratification. We can try to assure equality to members of society insofar as they are members of society, but not as to their function within society.

by Sebastian Benthall at July 25, 2015 11:30 PM

July 23, 2015

Ph.D. student

Horkheimer on engineers

Horkheimer’s comment on engineers:

It is true that the engineer, perhaps the symbol of this age, is not so exclusively bent on profitmaking as the industrialist or the merchant. Because his function is more directly connected with the requirements of the production job itself, his commands bear the mark of greater objectivity. His subordinates recognize that at least some of his orders are in the nature of things and therefore rational in a universal sense. But at bottom this rationality, too, pertains to domination, not reason. The engineer is not interested in understanding things for their own sake or the sake of insight, but in accordance to their being fitted into a scheme, no matter how alien to their own inner structure; this holds for living beings as well as for inanimate things. The engineer’s mind is that of industrialism in its streamlined form. His purposeful rule would make men an agglomeration of instruments without a purpose of their own.

This paragraph sums up much of what Horkheimer stands for. His criticism of engineers, the catalysts of industrialism, is not that they are incorrect. It is that their instrumental rationality is not humanely purposeful.

This humane purposefulness, for Horkheimer, is born out of individual contemplation. Though he recognizes that this has been a standpoint of the privileged (c.f. Arendt on the Greek polis), he sees industrialism as successful in bringing many people out of a place of necessity but at the cost of marginalizing and trivializing all individual contemplation. The result is an efficient machine with nobody in charge. This bodes ill because such a machine is vulnerable to being co-opted by an irrational despot or charlatan. Individuality, free of material necessity and also free of the machine that liberated it from that necessity, is the origin of moral judgement that prevents fascist rule.

This is very different from the picture of individuality Fred Turner presents in The Democratic Surround. In his account of how United States propaganda created a “national character” that was both individual enough to be anti-fascist and united enough to fight fascism, he emphasizes the role of art installations that encourage the viewer to stitch themselves synthetically into a large picture of the nation. One is unique within a larger, diverse…well, we might use the word society, borrowing from Arendt, who was also writing in the mid-century.

If this is all true, then it dates a transition in American culture from a culture of individuality to one of society. This coincides with the tendency toward informational organization traced assiduously by Beniger.

We can perhaps trace an epicycle of this process in the history of the Internet. In its “wild west” early days, when John Perry Barlow could write about the freedom of cyberspace, it was a place primarily occupied by the privileged few. Interestingly, many of these were engineers, and so were (I’ll assume for the sake of argument) both materially independent and not exclusively focused on profit-making. Hence the early Internet was not unlike the ancient polis, a place where free people could attempt words and deeds that would immortalize them.

As the Internet became more widely used and commercialized, it became more and more a part of the profiteering machine of capitalism. So today we see its wildness curtailed by the demands of society (which include an appeal to an ethics sensitive both to disparities in wealth and to differences in the body, both part of the “private” realm in antiquity but elements of public concern in modern society).

by Sebastian Benthall at July 23, 2015 09:33 PM

Arendt on social science

Despite my first (perhaps kneejerk) reaction to Arendt’s The Human Condition, as I read further I am finding it one of the most profoundly insightful books I’ve ever read.

It is difficult to summarize: not because it is written badly, but because it is written well. I feel every paragraph has real substance to it.

Here’s an example: Arendt’s take on the modern social sciences:

To gauge the extent of society’s victory in the modern age, its early substitution of behavior for action and its eventual substitution of bureaucracy, the rule of nobody, for personal rulership, it may be well to recall that its initial science of economics, which substitutes patterns of behavior only in this rather limited field of human activity, was finally followed by the all-comprehensive pretension of the social sciences which, as “behavioral sciences,” aim to reduce man as a whole, in all his activities, to the level of a conditioned and behaving animal. If economics is the science of society in its early stages, when it could impose its rules of behavior only on sections of the population and on parts of their activities, the rise of the “behavioral sciences” indicates clearly the final stage of this development, when mass society has devoured all strata of the nation and “social behavior” has become the standard for all regions of life.

To understand this paragraph, one has to know what Arendt means by society. She introduces the idea of society in contrast to the Ancient Greek polis, which is the sphere of life in Antiquity where the head of a household could meet with other heads of households to discuss public matters. Importantly for Arendt, all concerns relating to the basic maintenance and furthering of life–food, shelter, reproduction, etc.–were part of the private domain, not the polis. Participation in public affairs was for those who were otherwise self-sufficient. In their freedom, they would compete to outdo each other in acts and words that would resonate beyond their lifetime: deeds, through which they could aspire to immortality.

Society, in contrast, is what happens when the mass of people begin to organize themselves as if they were part of one household. The conditions of maintaining life are public. In modern society, people are defined by their job; even being the ruler is just another job. Deviation from one’s role in society in an attempt to make a lasting change–deeds–is considered disruptive, and so is rejected by the norms of society.

From here, we get Arendt’s critique of the social sciences, which is essentially this: it is only possible to have a social science that finds regularities in people’s behavior when their behavior has been regularized by society. So the social sciences are not discovering a truth about people en masse that was not known before. The social sciences aren’t discovering things about people. They are rather reflecting society as it is. The more that the masses are effectively ‘socialized’, the more pervasive a generalizing social science can be, because only under those conditions are there regularities to be captured as knowledge and taught.

by Sebastian Benthall at July 23, 2015 02:06 AM

July 19, 2015

Ph.D. student

Hannah Arendt on the apoliticality of science

The next book for the Berkeley School of Information’s Classics reading group is Hannah Arendt’s The Human Condition, 1958. We are reading this as a follow-up to Sennett’s The Craftsman, working backwards through his intellectual lineage. We have the option to read other Arendt. I’m intrigued by her monograph On Violence, because it’s about the relationship between violence and power (which is an important thing to think about) and also because it’s comparatively short (~100 pages). But I’ve begun dipping into The Human Condition today only to find an analysis of the role of science in society. Of course I could not resist writing about it here.

Arendt opens the book with a prologue discussing the cultural significance of the Apollo mission. She muses at the shift in human ambition that has led to its seeking to leave Earth. Having rejected Heavenly God as Father, she sees this as a rejection of Earth as Mother. Poetic stuff–Arendt is a lucid writer, prose radiating wisdom.

Then Arendt begins to discuss The Problems with Science (emphasis mine):

While such possibilities [of space travel, and of artificial extension of human life and capabilities] still may lie in a distant future, the first boomerang effects of science’s great triumphs have made themselves felt in a crisis within the natural sciences themselves. The trouble concerns the fact that the “truths” of the modern scientific world view, though they can be demonstrated in mathematical formulas and proved technologically, will no longer lend themselves to normal expression in speech and thought. The moment these “truths” are spoken of conceptually and coherently, the resulting statements will be “perhaps not as meaningless as a ‘triangular circle,’ but much more so than a ‘winged lion’” (Erwin Schrödinger). We do not yet know whether this situation is final. But it could be that we, who are earth-bound creatures and have begun to act as though we are dwellers of the universe, will forever be unable to understand, that is, to think and speak about the things which nevertheless we are able to do. In this case, it would be as though our brain, which constitutes the physical, material condition of our thoughts, were unable to follow what we do, so that from now on we would indeed need artificial machines to do our thinking and speaking. If it should turn out to be true that knowledge (in the sense of know-how) and thought have parted company for good, then we would indeed become the helpless slaves, not so much of our machines as of our know-how, thoughtless creatures at the mercy of every gadget which is technically possible, no matter how murderous it is.

We can read into Arendt a Heideggerian concern about man’s enslavement of himself through technology, and equally a distrust of mathematical formalism that one can also find in Horkheimer’s Eclipse of Reason. It’s fair to say that the theme of technological menace haunted the 20th century; this is indeed the premise of Beniger’s The Control Revolution, whose less loaded account described how the advance of technical control could be seen as nothing less or more than the continuing process of life’s self-organization.

What is striking to me about Arendt’s concerns, especially after having attended SciPy 2015, a conference full of people discussing their software code as a representation of scientific knowledge, is how ignorant Arendt is about how mathematics is used by scientists. (EDIT: The error here is mine. A skimming of the book past the prologue (always a good idea before judging the content of a book or its author…) makes it clear that this comment about mathematical formalism is not a throwaway statement at the beginning of the book to motivate a discussion of political action, but rather something derived from her analysis of political action and the history of science. Ironically, I’ve read her “speech” and interpreted it politically (in the narrow sense of implicating identities of “the scientist”, a term which she does seem to use disparagingly or distancingly elsewhere), when another, more charitable reading–one more sensitive to how she is “technically” defining her terms, “speech” being rather specialized for Arendt, not merely ‘utterances’–wouldn’t be as objectionable. I’m agitated by the bluntness of my first reading, and encouraged to read further.)

On the one hand, Arendt wisely situates mathematics as an expression of know-how, and sees technology as an extension of human capacity, not as something autonomous from it. But it’s strange to read her argue, essentially, that mathematics and technology are not something that can be discussed. This ignores the daily practice of scientists, mathematicians, and their intellectual heirs, software engineers, which involves lots of discussion about technology. Often these discussions are about the political impact of technical decisions.

As an example, I had the pleasure of attending a meeting of the NumPy community at SciPy. NumPy is one of the core packages for scientific computing in Python; it implements computationally efficient array operations. Much of the discussion hinged on whether and to what extent changes to the technical interface would break downstream implementations using the library, angering its user base. This political conflict, among other events, led to the creation of sempervirens, a tool for collecting data about how people are using the library. This data will hopefully inform decisions about when to change the technical design.

Despite this active discourse about technology, conducted in the mathematized language of technology, Arendt maintains that it is the inarticulateness of science that makes it politically dangerous.

However, even apart from these last and yet uncertain consequences, the situation created by the sciences is of great political significance. Wherever the relevance of speech is at stake, matters become political by definition, for speech is what makes man a political being. If we would follow the advice, so frequently urged upon us, to adjust our cultural attitudes to the present status of scientific achievement, we would in all earnest adopt a way of life in which speech is no longer meaningful. For the sciences today have been forced to adopt a “language” of mathematical symbols which, though it was originally meant only as an abbreviation for spoken statements, now contains statements that in no way can be translated back into speech. The reason why it may be wise to distrust the political judgment of scientists qua scientists is not primarily their lack of “character”–that they did not refuse to develop atomic weapons–or their naivete–that they did not understand that once these weapons were developed they would be the last to be consulted about their use–but precisely the fact that they move in a world where speech has lost its power. And whatever men do or know or experience can make sense only to the extent that it can be spoken about. There may be truths beyond speech, and they may be of great relevance to man in the singular, that is, to man in so far as he is not a political being, whatever else he may be. Men in the plural, that is, men in so far as they live and move and act in this world, can experience meaningfulness only because they can talk with and make sense to each other and to themselves.

There is an element of truth to this analysis. But there is also a deep misunderstanding of the scientific process as one that somehow does not involve true speech. Here we find another root of a much more contemporary debate about technology in society, reflected in recent concern about the power of ‘algorithms’. (EDIT: Again, after consideration, shallowly accusing Arendt of a “deep misunderstanding” at this stage is hubris. Though there does seem to be a connection between some of the contemporary debate about algorithms and Arendt’s view, it’s wrong to project historically backwards sixty years when The Human Condition is an analysis of the shifting conditions over the preceding two millennia.

Arendt claims early on that the most dramatic change in the human condition that she can anticipate is humanity’s leaving the earth to populate the universe. I want to argue that the creation of the Internet has been transformative of the human condition in a different way.)

I think it would be fair to say that Arendt, beloved a writer though she is, doesn’t know what she’s talking about when she’s talking about mathematical formalism. (EDIT: Again, a blunt conclusion. However, the role of formalism in, say, economics (though much debated) stands as a counterexample to Arendt in other ways.) And perhaps this is the real problem. When, for almost a century, theorists have tried to malign the role of scientific understanding in politics, it has been (incoherently) either on the grounds that it is secretly ideological in ways that have gone unstated, or (as for Arendt) that it is cognitively defective in a way that prevents it from participating in politics proper. (EDIT: This is a misreading of Arendt. It appears that what makes mathematical science apolitical for Arendt is precisely its universality, and hence its inability to be part of discussion about the different situations of political actors. Still, something seems quite wrong about Arendt’s views here. How would she think about Dwork’s “Fairness through awareness”?)

The frustration for a politically motivated scientist is this: political writers will sometimes mistake their own inability to speak or understand mathematical truths for those truths’ general unintelligibility. On the grounds of this alleged unintelligibility they dismiss scientists from political discussion. They then find themselves apolitically enslaved by technology they don’t understand, and angry about it. Rather than blame their own ignorance of the subject matter, they blame scientists for being unintelligible. This is despite scientists’ intelligibility to each other.

An analysis of the politics of science will be incomplete without a clear picture of how scientists and non-scientists relate to each other and communicate. As far as I can tell, such an analysis is almost impossible, politically speaking, because of the power dynamic of the relation. Professional non-scientific intellectuals are loath to credit scientists with an intellectual authority that they feel they are themselves unable to attain, and scientific practice requires adhering to standards of rigor which give one greater intellectual authority; these standards by their nature require ahistorical analysis, dismissal of folk theorizing, etc. It has become politically impossible to ground an explanation of a social phenomenon on the basis that one population is “smarter” than another, despite this being a ready first approximation and one that is used in practice by the vast majority of people in private. Hence, the continuation of the tradition of treatises putting science in its place.

by Sebastian Benthall at July 19, 2015 12:48 AM

July 17, 2015

Ph.D. student

One Magisterium: a review (part 1)

I have come upon a remarkable book, titled One Magisterium: How Nature Knows Through Us, by Seán Ó Nualláin, President, University of Ireland, California. It is dedicated “To all working at the edges of society in an uncompromising search for truth and justice.” Its acknowledgements section opens:

Kenyan middle-distance runners were famous for running like “scared rabbits”: going straight to the head of the field and staying there, come what may. Even more than was the case for my other books, I wrote this like a scared rabbit.

Ó Nualláin is a recognizable face at UC Berkeley, though I think it’s fair to say that most of the faculty and PhD students couldn’t tell you who he is. To a mainstream academic, he is one of the nebulous class of people who show up to events. One glorious loophole of university culture is that the riches of intellectual communion are often made available in open seminars held by people so weary of obscurity that they are happy for any warm body that cares enough to attend. This condition, combined with the city of Berkeley’s accommodating attitude towards quacks and vagrants, adds flavor to the university’s intellectual character.

There is of course no campus for the University of Ireland, California. Ó Nualláin is a truly independent scholar. Unlike many more unfortunate intellectuals, he has made the brilliant decision to not quit his day job, which is as a musician. A Google inquiry into the man indicates he probably got his PhD from Dublin City University and spent a good deal of time around Stanford’s Symbolic Systems department. (EDIT: Sean has corrected me on the details of his accomplished biography in the comments.)

I got on his mailing lists some time ago because of my interest in the Foundations of Mind conference, which he runs in Berkeley. Later, I was impressed by his aggressive volley of questions when Nick Bostrom spoke at Berkeley (I’ve become familiar with Bostrom’s work through MIRI, formerly SingInst). I’ve spoken to him just a couple of times, once at a poster session at the Berkeley Institute for Data Science and once at Katy Huff’s scientific technology practice group, The Hacker Within.

I’m providing these details out of what you might call anthropological interest. At the School of Information I’ve somehow caught the bug of Science and Technology Studies by osmosis. Now I work for Charlotte Cabasse on her ethnographic team, despite believing myself to be a computational social scientist. This qualitative work is a wonderful excuse to write about one’s experiences.

My perceptions of Ó Nualláin are relevant, then, because they situate the author of One Magisterium as an outsider to the academic mainstream at Berkeley. This outsider status comes through quite heavily in the book, starting from the Acknowledgments section (which recognizes all the service staff at the bars and coffee shops where he wrote the book) and running as a regular theme throughout. Discontent with and rejection from academia-as-usual are articulated in sublimated form as harsh critique of the academic institution. Ó Nualláin is engaged in an “uncompromising search for truth and justice,” and the university as it exists today demands too many compromises.

Magisterium is a Catholic term for a teaching authority. One Magisterium refers to the book’s ambition of pointing to a singular teaching authority, a new one heretofore unrecognized by other teaching authorities such as mainstream universities. Hence the book is an attack on other sources of intellectual authority. An example passage:

The devastating news for any reader venturing a toe into the stormy waters of this book is that its writer’s view is that we may never be able to dignify the moral, epistemological and political miasma of the early twenty-first century with terms like “crisis” for which the appropriate solution is of course a “paradigm shift”. It may simply be a set of hideously interconnected messes; epistemological and administrative in the academy, institutional and moral in the greater society. As a consequence, the landscape of possible “solutions” may seem so unconstrained that the wisdom of Joe the barman may be seen to equal that of any series of tomes, no matter how well-researched.

This book is above all an attempt to unify the plurality of discourses — scientific, religious, moral, aesthetic, and so on — that obtain at the start of the third millennium.

An anthropologist of science might observe that this criticality-of-everything, coupled with the claim to have a unifying theory of everything, is a surefire way to get ignored by the academy. The incentive structure of the academy requires specialization and a political balance of ideas. If somebody were to show up with the right idea, it would discredit a lot of otherwise important people and put others out of a job.

The problem, or one of them (there are many mentioned in the first chapter of One Magisterium, titled “The Trouble with Everything”), is that Ó Nualláin is right. At least as far as I can tell at this point. It is not an easy book to read; it is not structured linearly so much as (I imagine, not knowing what I’m talking about) like complex Irish dancing music, with motifs repeated and encircling themselves like a double helix or perhaps some more complex structure. Threaded together are topics from Quantum Mechanics, an analysis of the anthropic principle, a critique of Dawkins’ atheism and a positioning of the relevance of Vedanta theology to understanding physical reality, and an account of the proper role of the arts in society. I suspect that the book is meant to unfold on one’s psychology slowly, resulting in one’s adoption of what Ó Nualláin calls bionoetics, the new united worldview that is the alleged solution to everything.

A key principle of bionoetics is the recognition of what Ó Nualláin calls the “noetic” level of description, which is distinct from the “cognitive” third-person stance in that it is compressed in a way that makes it relevant to action in any particular domain of inquiry. Most of what he describes as “noetic” I read as “phenomenological”. I wonder if Ó Nualláin has read Merleau-Ponty–he uses the Husserlian critique of “psychologism” extensively.

I think it’s immaterial whether “noetic” is an appropriate neologism for this blending of first-personal experience into the magisterium. Indeed, there is something comforting to a hard-headed scientist about Ó Nualláin’s views: contrary to the contemporary anthropological view, this first-personal knowledge has no place in academic science; its place is art. Having been in enough seminars at the School of Information where anthropologists lament not being taken seriously as producing knowledge comparable to that of the Scientists, and being one who appreciates the value of Art without needing it to be Science, I find something intuitively appealing about this view. Nevertheless, one wonders whether the epistemic foundation of Ó Nualláin’s critique of the academy is grounded in scientific inquiry or in his own and others’ first-personal noetic experiences, coupled with observations of who is “successful” in scientific fields.

Just one chapter into One Magisterium, I have to say I’m impressed with it in a very specific way. Some of us learn about the world with a synthetic mind, searching for the truth with as few constraints on one’s inquiry as possible. Indeed, that’s how I wound up at as nebulous a place as the School of Information at Berkeley. As one conducts the search, one finds oneself increasingly isolated. Some truths may never be spoken, and it’s never appropriate to say all the truths at once. This is especially true in an academic context, where it is paramount for the reputation of the institution that everyone avoid intellectual embarrassment whenever possible. So we make compromises, contenting ourselves with minute and politically palatable expertise.

I am deeply impressed that Ó Nualláin has decided to fuck all and tell it like it is.

by Sebastian Benthall at July 17, 2015 06:51 PM

June 22, 2015

Ph.D. alumna

Which Students Get to Have Privacy?

There’s a fresh push to protect student data. But the people who need the most protection are the ones being left behind.

It seems that student privacy is trendy right now. At least among elected officials. Congressional aides are scrambling to write bills that one-up each other in showcasing how tough they are on protecting youth. We’ve got Congressmen Polis and Messer (with Senator Blumenthal expected to propose a similar bill in the Senate). Kline and Scott have a discussion draft of their bill out while Markey and Hatch have reintroduced the bill they introduced a year ago. And then there’s Senator Vitter’s proposed bill. And let’s not even talk about the myriad of state-level legislation.

Most of these bills are responding in some way or another to a 1974 piece of legislation called the Family Educational Rights and Privacy Act (FERPA), which restricted what schools could and could not do with student data.

Needless to say, lawmakers in 1974 weren’t imagining the world of technology that we live with today. On top of that, legislative and bureaucratic dynamics have made it difficult for the Department of Education to address failures at the school level without going nuclear and just defunding a school outright. And schools lack security measures (because they lack technical sophistication) and they’re entering into all sorts of contracts with vendors that give advocates heartburn.

So there’s no doubt that reform is needed, but the question — as always — is what reform? For whom? And with what kind of support?

The bills are pretty spectacularly different, pushing for a range of mechanisms to limit abuses of student data. Some are fine-driven; others take a more criminal approach. There are also differences in who can access what data under what circumstances. The bills give different priorities to parents, teachers, and schools. Of course, even though this is all about *students*, they don’t actually have a lot of power in any of these bills. It’s all a question of who can speak on their behalf and who is supposed to protect them from the evils of the world. And what kind of punishment for breaches is most appropriate. (Not surprisingly, none of the bills provide for funding to help schools come up to speed.)

As a youth advocate and privacy activist, I’m generally in favor of student privacy. But my panties also get in a bunch when I listen to how people imagine the work of student privacy. As is common in Congress as election cycles unfold, student privacy has a “save the children” narrative. And this forces me to want to know more about the threat models we’re talking about. What are we saving the children *from*?

Threat Models

There are four external threats that I think are interesting to consider. These are the dangers that students face if their data leaves the education context.

#1: The Stranger Danger Threat Model. It doesn’t matter how much data we have to challenge prominent fears; the possibility of creepy child predators lurking around school children still overwhelms any conversation about students, including their data.

#2: The Marketing Threat Model. From COPPA to the Markey/Hatch bill, there’s a lot of concern about how student data will be used by companies to advertise products to students or otherwise fuel commercial data collection that drives advertising ecosystems.

#3: The Consumer Finance Threat Model. In a post-housing bubble market, the new subprime lending schemes are all about enabling student debt, especially since students can’t declare bankruptcy when they default on their obscene loans. There is concern about how student data will be used to fuel the student debt ecosystem.

#4: The Criminal Justice Threat Model. Law enforcement has long been interested in student performance, but this data is increasingly desirable in a world of policing that is trying to assess risk. There are reasons to believe that student data will fuel the new policing architectures.

The first threat model is artificial (see: “It’s Complicated”), but it propels people to act and create laws that will not do a darn thing to address abuse of children. The other three threat models are real, but these threats are spread differently over the population. In the world of student privacy, #2 gets far more attention than #3 and #4. In fact, almost every bill creates carve-outs for “safety” or otherwise allows access to data if there’s concern about a risk to the child, other children, or the school. In other words, if police need it. And, of course, all of these laws allow parents and guardians to get access to student data with no consideration of the consequences for students who are under state supervision. So, really, #4 isn’t even in the cultural imagination because, as with nearly everything involving our criminal justice system, we don’t believe that “those people” deserve privacy.

The reason that I get grouchy is that I hate how the risks that we’re concerned about are shaped by the fears of privileged parents, not the risks of those who are already under constant surveillance, those who are economically disadvantaged, and those who are in the school-prison pipeline. #2-#4 are all real threat models with genuine risks, but we consistently take #2 far more seriously than #3 or #4, and privileged folks are more concerned with #1.

What would it take to actually consider the privacy rights of the most marginalized students?

The threats that poor youth face? That youth of color face? And the trade-offs they make in a hypersurveilled world? What would it take to get people to care about how we keep building out infrastructure and backdoors to track low-status youth in new ways? It saddens me that the conversation is constructed as being about student privacy, but it’s really about who has the right to monitor which youth. And, as always, we allow certain actors to continue asserting power over youth.

This post was originally published to The Message at Medium on May 22, 2015. Image credit: Francisco Osorio

by zephoria at June 22, 2015 01:51 PM

June 16, 2015

MIMS 2012

“Did You A/B Test It?”

After launching a feature, coworkers often ask me, “Did you A/B test it?” While the question is well-meaning, A/B testing isn’t the only way, or even the best way, of making data-informed decisions in product development. In this post, I’ll explain why, and provide other ways of validating hypotheses to assure your coworkers that a feature was worth building.

Implied Development Process

My coworker’s simple question implies a development process that looks like this:

  1. You have an idea for a new feature
  2. You build the new feature
  3. You A/B test it to prove its success
  4. Profit! High fives! Release party!

While this looks reasonable on the surface, it has a few flaws.

Flaw 1: What metric are you measuring?

The A/B test in step 3 implies that you’re comparing a version of the product with the new feature to a version without the new feature. But a key part of running an A/B test is choosing a metric to call the winner, which is where things get tricky. Your instinct is probably to measure usage of the new feature. But this doesn’t work because the control lacks the feature, so it loses before the test even begins.

There are, however, higher-level metrics you care about. These could range from broad business metrics, like revenue or time in product, to narrower metrics, like completing a specific task (such as successfully booking a place to stay in the case of Airbnb). Generally speaking, broader metrics are slower to move and influenced by more factors, so narrower metrics are better.
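
As a sketch of what calling a winner on a narrow metric involves, here is a minimal two-proportion z-test in Python. The function name and conversion counts are made up for illustration; a real experiment would also fix its sample size in advance:

    # Two-proportion z-test on a narrow conversion metric, e.g. completed
    # bookings. All counts here are hypothetical.
    from math import sqrt, erf

    def two_proportion_z(conv_a, n_a, conv_b, n_b):
        """Return (z, two-sided p-value) under H0: both rates are equal."""
        rate_a = conv_a / float(n_a)
        rate_b = conv_b / float(n_b)
        pooled = (conv_a + conv_b) / float(n_a + n_b)  # pooled rate under H0
        se = sqrt(pooled * (1 - pooled) * (1.0 / n_a + 1.0 / n_b))
        z = (rate_b - rate_a) / se
        p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal tail, both sides
        return z, p

    z, p = two_proportion_z(conv_a=480, n_a=10000, conv_b=540, n_b=10000)
    print("z = %.2f, p = %.3f" % (z, p))  # declare a winner only if p < 0.05

Note that the broader the metric, the smaller the expected lift relative to its noise, and the more traffic a test like this needs before it reaches significance.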

Even so, this type of experiment isn’t what A/B testing excels at. At its core, A/B testing is a hill-climbing technique. This means it’s good at telling you whether small, incremental changes are an improvement (in other words, each test is a step up a hill). Launching a feature is more like exploring a new hill. You’re giving users the ability to do something they couldn’t do before. A/B testing isn’t good at comparing hills to each other, nor will it help you find new hills.

Flaw 2: What if the new feature loses?

Let’s say you have good metrics to measure, and enough traffic to run the test in a reasonable timeframe. But the results come back, and the unthinkable has happened: your new feature lost. There’s no profit, high fives, or launch party. Now what do you do?

Because of sunk costs, your instinct is going to be to try to improve the feature until it wins. But an A/B test doesn’t tell you why it lost. Maybe there was a minor usability problem, or maybe it’s fundamentally flawed. Whatever the problem may be, an A/B test won’t tell you what it is, which doesn’t help you improve it.

The worst-case scenario is that the feature doesn’t solve a real problem, in which case you should remove it. But this is an expensive option because you spent the time to design, build, and launch it before learning it wasn’t worth building. Ideally you’d discover this earlier.

Revised Development Process

When our well-meaning coworker asked if we A/B tested the new feature, what they really wanted to know was whether we had data to back up that the feature was worth building. To them, an A/B test is the only way they know of answering that question. But as user experience professionals, we know there are plenty of methods for gathering data to guide our designs. Let’s revise our product development process from above:

  1. You have an idea for a new feature.
  2. You scope the problem the feature is supposed to solve by interviewing users, sending out surveys, analyzing product usage, or using other research methods.
  3. You create prototypes and show them to users.
  4. You refine the design based on user feedback.
  5. You repeat steps 3 and 4 until you’re confident the design solves the problem you set out to solve.
  6. You build the feature.
  7. You do user testing to find and fix usability flaws.
  8. You release the feature via a phased rollout (or a private/public/opt-in beta) and measure your key metrics to make sure they’re within normal parameters (a rough sketch of such a guardrail check follows this list).
    • This can be run as an A/B test, but doesn’t need to be.
  9. Once you’re confident the feature is working as expected, fully launch it to everyone.
  10. Profit! High fives! Release party!
  11. Optimize the feature by A/B testing incremental changes.

In this revised development process (commonly called user-centered design), you’re gathering data every step of the way. Rather than building a feature and “validating” it at the end with an A/B test, you’re continually refining what you’re building based on user feedback. By the time you release it, you’ve iterated countless times and are confident it’s solving a real problem. And once it’s built, you can use A/B testing to do what A/B testing does best — optimization.

A longer process? Yes. A more confident, higher quality launch? Also yes.

Now when your coworkers ask if you A/B tested your feature, you can reply, “No, but we made data-informed decisions that told us users really want this feature. Let me show you all of our data!” By using research and A/B testing appropriately, you’ll build features that your users and your bottom line will love.

Further Reading

If you’d like to learn how other companies incorporate A/B testing into their development process, or about user-centered design in general, these articles are great resources:

Thanks to Kyle Rush, Olga Antonenko Young, and Silvia Amtmann for providing feedback on earlier drafts of this post.

by Jeff Zych at June 16, 2015 03:49 PM

June 03, 2015

Ph.D. alumna

I miss not being scared.

From the perspective of an adult in this society, I’ve taken a lot of stupid risks in my life. Physical risks like outrunning cops and professional risks like knowingly ignoring academic protocol. I have some scars, but I’ve come out pretty OK in the scheme of things. And many of those risks have paid off for me even as similar risks have devastated others.

Throughout the ten years that I was doing research on youth and social media, countless people told me that my perspective on teenagers’ practices would change once I had kids. Wary of this frame, I started studying the culture of fear, watching as parents exhibited fear of their children doing the same things that they once did, convinced that everything today is so much worse than it was when they were young or that the consequences would be so much greater. I followed the research on fear and the statistics on teen risks and knew that it wasn’t about rationality. There was something about how our society socialized parents into parenting that produced the culture of fear.

Now I’m a parent. And I’m in my late 30s. And I get to experience the irrational cloud of fear. The fear of mortality. The fear of my children’s well-being. Those quiet little moments when crossing the street where my brain flips to an image of a car plowing through the stroller. The heart-wrenching panic when my partner is late and I imagine all of the things that might have happened. The reading of stories of others’ pain and shuddering with fear that my turn is next. The moments of loss and misfortune in my own life when I close my eyes and hope my children don’t have to feel that pain. I can feel the haunting desire to avoid risks and to cocoon my children.

I know the stats. I know the ridiculousness of my fears. And all I can think of is the premise of Justine Larbalestier’s Magic or Madness where the protagonist must either use her magic or go crazy if she doesn’t use it. I feel like I am at constant war with my own brain over the dynamics of fear. I refuse to succumb to the fear because I know how irrational it is but in refusing, I send myself down crazy rabbit holes on a regular basis. For my kids’ sake, I want to not let fear shape my decision-making but then I’m fearing fear. And, well, welcome to the rabbit hole.

I miss not being scared. I miss taking absurd risks and not giving them a second thought. I miss doing the things that scare the shit out of most parents. I miss the ridiculousness of not realizing that I should be afraid in the first place.

In our society, we infantilize youth for their willingness to take risks that we deem dangerous and inappropriate. We get obsessed with protecting them and regulating them. We use brain science and biography to justify restrictions because we view their decision-making as flawed. We look at new technologies or media and blame them for corrupting the morality of youth, for inviting them to do things they shouldn’t. Then we about-face and capitalize on their risk-taking when it’s to our advantage, such as when they go off to war on our behalf.

Is our society really worse off because youth take risks and adults don’t? Why are they wrong and us old people are right? Is it simply because we have more power? As more and more adults live long, fearful lives in Western societies, I keep thinking that we should start regulating our decision-making. Our inability to be brash is costing our society in all sorts of ways. And it will only get worse as some societies get younger while others get older. Us old people aren’t imagining new ways of addressing societal ills. Meanwhile, our conservative scaredy cat ways don’t allow youth to explore and challenge the status quo or invent new futures. I keep thinking that we need to protect ourselves and our children from our own irrationality produced from our fears.

I have to say that fear sucks. I respect its power, just like I respect the power of a hurricane, but it doesn’t make me like fear any more. So I keep dreaming of ways to eradicate fear. And what I know for certain is that statistical information won’t cut it. And so I dream of a sci-fi world in which I can manipulate my synapses to prevent those ideas from triggering. In the meanwhile, I clench my jaw and try desperately to not let the crazy visions of terrible things that could happen work their way into my cognitive perspective. And I wonder what it will take for others to recognize the impact that our culture of fear is having on all of us.

This post was originally published to The Message at Medium on May 4, 2015

by zephoria at June 03, 2015 04:12 PM

May 21, 2015

Ph.D. alumna

The Cost of Fame

We were in Juarez, Mexico. We had gone there as a group of activists to stage a protest over the government’s refusal to investigate the disappearance and brutal murders of hundreds of women. It was a V-Day initiative, and so there were celebrities among us.

I was assigned as one of the faux fans and my responsibility was to hover around the celebrities during the protest in order to minimize who could actually access the celebrities. The actual bodyguards kept a distance so that the celebrities could be seen and heard. And photographed. It was a weird role, a moment in which it was made clear how difficult it was for celebrities to be in public. Their accessibility was always mediated, planned for, negotiated. And I was to be invisible so that they could be visible.

Over the years, I’ve worked with a lot of celebrities through my activist work. I’ve had to create artificial distractions, distribute fake information about celebrities’ locations, and help celebrities hide. I’ve had to help architect a process for celebrities to use the bathroom or get a glass of water and I’ve watched the cost of that overhead. Every move has to be managed because of paparazzi and fans. There’s nothing elegant about being famous when you just need to take a shit.

There’s a cost to fame, a cost that is largely invisible to most people. Many of the teens that I interviewed wanted to be famous. They saw fame as freedom — freedom from parents, poverty, and insecurity.

What I learned in working with celebrities is that fame is a trap, a burden, a manacle.

It seems so appealing and, for some, it can be an amazing tool. But for many who aren’t prepared for it, fame is a restricting force, limiting your freedom and mobility, and forcing you to put process around every act you take. Forcing you to live with constant critique, with every move and action constantly judged by others who feel as though they have the right because the famous are seen as privileged. There’s a reason that substance abuse runs rampant among celebrities. There’s a reason so many celebrities crack under pressure. Fame is the opposite of freedom.

Social media has created new platforms for people to achieve fame. Instagram fame. YouTube fame. But most people who become Internet famous aren’t Justin Bieber. They’re people with millions of followers and no support structure. They don’t have a personal assistant and bodyguard. They don’t have someone who manages the millions of messages they receive or turns away the creepy fans who show up in person. They are on their own to handle all of the shit that comes their way.

In her brilliant book, “Status Update: Celebrity, Publicity, and Branding in the Social Media Age,” Alice Marwick highlights how attracting attention and achieving fame is a central part of being successful in the new economy. Welcome to our neoliberal society. Yet, as Marwick quickly uncovers in her analysis, these practices are experienced differently depending on race and gender. What it means to be famous online looks different if you’re a woman and/or a person of color. The quantity and quality of remarks you receive as you attract attention change dramatically. The rape threats increase. The remarks on your body increase. And the interactions get creepier. And the costs skyrocket.

I’m relatively well-known on the internet. And each time that I’ve written or done something that’s attracted a lot of attention, I’ve felt a hint of the cost of fame, so much so that I purposefully go out of my way to disappear for a while and decrease my visibility. Throughout my career, there have been times in which I could’ve done things that would’ve taken my micro-celebrity to the next level, and yet I’ve chosen to back away because I don’t like the costs that I face. I don’t like the death threats. I also don’t like when people won’t be honest with me. I don’t like when people get nervous around me. And I don’t like being objectified, as though I have no feelings. These are all part of the cost of fame. We don’t see celebrities as people; we see them as cultural artifacts.

We’ve made fame a desirable commodity, produced and fetishized. From reality TV and Jerry Springer to YouTube and Instagram, we’ve created structures for everyday people to achieve mass attention. But we’ve never created the structures to help them cope. Or for those who help propel others into fame to think about the consequences of their actions. And we’ve never stopped to think about how these platforms that fuel fame culture help reinforce misogyny and racism.

There’s a cost to fame, a cost that is unevenly borne. And I have no idea how to make that cost visible to the teens who desire fame, the media producers who create the platforms for fame, or the fans who generate the ugliness behind fame. It’s far too easy to see the gloss, far too difficult to see what it means to be trapped.

trapped in my glasshouse
crowd has been gathering since dawn
i make a pot of coffee
while catastrophe awaits me out on the lawn
i think i’m going to stay in today
pretend like i don’t know what’s going on
Ani Difranco, Glass House

In Juarez, we got attacked. In the mayhem, Jane Fonda’s jewelry was taken from her. I still remember the elegance with which she handled that situation. I also remember her response when someone asked if the jewelry was valuable. “Do you think I’m an idiot? This isn’t my first protest.” As we spent the night camped out under the flickering blue lights in the roach motel, I listened to her tell stories of previous political actions and the attacks she’d faced. She was acutely aware of the costs of fame, but she was also aware of how she could use it to make a difference. I couldn’t help but think of a comment Angelina Jolie once made when she noted that people would always follow her around with a camera so she might as well go to places that needed to be photographed. Neither woman made me want to be famous, but both made me deeply appreciate those who have learned to negotiate fame.

This post was originally published to The Message at Medium on April 21, 2015 as part of Fame Week.

by zephoria at May 21, 2015 06:20 PM

May 20, 2015

Ph.D. student

resisting the power of organizations

“From the day of his birth, the individual is made to feel there is only one way of getting along in this world–that of giving up hope in his ultimate self-realization. This he can achieve solely by imitation. He continuously responds to what he perceives about him, not only consciously but with his whole being, emulating the traits and attitudes represented by all the collectivities that enmesh him–his play group, his classmates, his athletic team, and all the other groups that, as has been pointed out, enforce a more strict conformity, a more radical surrender through complete assimilation, than any father or teacher in the nineteenth century could impose. By echoing, repeating, imitating his surroundings, by adapting himself to all the powerful groups to which he eventually belongs, by transforming himself from a human being into a member of organizations, by sacrificing his potentialities for the sake of readiness and ability to conform to and gain influence in such organizations, he manages to survive. It is survival achieved by the oldest biological means necessary, mimicry.” – Horkheimer, “Rise and Decline of the Individual”, Eclipse of Reason, 1947

Returning to Horkheimer’s Eclipse of Reason (1947) after studying Beniger’s Control Revolution (1986) serves to deepen one’s respect for Horkheimer.

The two writers are for the most part in agreement as to the facts. It is a testament to their significance and honesty as writers that they are not quibbling about the nature of reality but rather are reflecting seriously upon it. But whereas Beniger maintains a purely pragmatic, unideological perspective, Horkheimer (forty years earlier) correctly attributes this pragmatic perspective to the class of business managers to whom Beniger’s work is directed.

Unlike more contemporary critiques, Horkheimer’s position is not to dismiss this perspective as ideological. He is not working within the postmodern context that sees all knowledge as contestable because it is situated. Rather, he is working with the mid-20th-century acknowledgment that objectivity is power. This is a necessary step in the criticality of the Frankfurt School, which is concerned largely with the way (real) power shapes society and identity.

It would be inaccurate to say that Beniger celebrates the organization. His history traces the development of social organization as an evolving organism. Its expanding capacity for information processing is a result of the crisis of control unleashed by the integration of its energetic constituent components. Globalization (if we can extend Beniger’s story to include globalization) is the progressive organization of organizations of organizations. It is interesting that this progression of organization is a strike against Wiener’s prediction of the need for society to arm itself against entropy. This conundrum is one we will need to address in later work.

For now, it is notable that Horkheimer appears to be responding to just the same historical developments later articulated by Beniger. Only Horkheimer is writing not as a descriptive scientist but as a philosopher engaged in the process of human meaning-making. This positions him to discuss the rise and decline of the individual in the era of increasingly powerful organizations.

Horkheimer sees the individual as positioned at the nexus of many powerful organizations to which he must adapt through mimicry for the sake of survival. His authentic identity is accomplished only when alone, because submission to organizational norms is necessary for survival or the accumulation of organizational power. In an era where the pragmatic ability to manipulate people, not spiritual ideals, qualifies one for organizational power, the submissive man represses his indignation and rage at this condition and becomes an automaton of the system.

Which system? All systems. Part of the brilliance of both Horkheimer and Beniger is their ability to generalize over many systems to see their common effect on their constituents.

I have not read Horkheimer’s solution to the individual’s problem of how to maintain his individuality despite the powerful organizations which demand mimicry of him. This is a pressing question when organizations are becoming ever more powerful by using the tools of data science. My own hypothesis, still in need of scientific validation, is that the solution lies in the intersecting agency implied by the complex topology of the organization of organizations.

by Sebastian Benthall at May 20, 2015 12:38 AM

May 18, 2015

MIMS 2012

May 17, 2015

Ph.D. student

software code as representation of knowledge

Ubiquitous networked computing has changed how we represent knowledge because the semantics of code are guaranteed by the mechanical implementations of its compilers.

This introduces a kind of discipline in the representation of knowledge as source code that is not present in natural language or even in formal mathematical notation, which must be interpreted by humans.
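A small illustration of that discipline (my own sketch; here the Python interpreter plays the compiler’s role of mechanically enforcing semantics). The English instruction “add the item to the list if it is new” leaves a human reader to resolve what “it” refers to; the code below has exactly one machine-checked meaning, and ill-formed uses are rejected rather than charitably reinterpreted:

    def add_if_new(items, item):
        """Append item to items only if it is not already present."""
        if item not in items:
            items.append(item)
        return items

    print(add_if_new([1, 2], 3))  # [1, 2, 3]
    print(add_if_new([1, 2], 2))  # [1, 2]

    # add_if_new([1, 2])  # TypeError: the machine rejects the ill-formed call
    #                     # where a human reader of prose would guess at intent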

Evolutionarily, humanity’s innate capacity for natural language is well established. Literacy, however, is a trained skill that involves years of education. As Derrida points out in Of Grammatology, the transition from the understanding of language as speech or breath to the understanding of knowledge as text was a very significant change in the history of knowledge.

We have not yet adjusted institutionally to a world where knowledge is represented as code. Most of the institutions that run the world–the legal system, universities, etc.–still run on the basis of written language.

But the new institutions that are adapting to represent knowledge as data and software code to process it are becoming more powerful than these older institutions.

This power comes from these new institutions’ ability to assign the work of acting on their knowledge to computing machines that can work tirelessly and that integrate well with operations. These new institutions can process more information, gathered from more sources, than the old institutions. They are organizationally more intelligent than the older organizations. Because of this intelligence, they can accrue more wealth and power.

by Sebastian Benthall at May 17, 2015 08:57 PM

May 14, 2015

Ph.D. student

data science is not positivist, it’s power

Naively, we might assume that contemporary ‘data science’ is a form of positivist or post-positivist science. The scientist gathers data and subsumes it under logical formulae–models with fitted parameters. Indeed this is the case when data science is applied to natural phenomena, such as stars or the human genome.

The question of what kind of science ‘data science’ is becomes much more complex when we start to look at its application to social phenomena. This includes its application to the management of industrial and commercial technology–the so-called “Internet of Things“. (Technology in general, and especially technology as socially situated, is a social phenomenon.)

There are (at least) two reasons why data science in these social domains is not strictly positivist.

The first is that, according to McKinsey’s Michael Chui, data science in the Internet of Things context is mainly about either real-time control or anomaly detection. Neither of these depends on the kind of nomothetic orientation that positivism requires. The former requires only an objective function over inputs to guide the steering of the dynamic system. The latter requires only the detection of deviation from historically observed patterns.
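A minimal sketch of the second kind may clarify what is and isn’t being learned (this is my illustration; the window size and threshold are arbitrary choices, not anything Chui specifies). The detector flags deviation from the recent history without discovering any general law:

    from collections import deque
    from statistics import mean, stdev

    def make_detector(window=100, threshold=3.0):
        """Flag readings more than `threshold` standard deviations away
        from the recent history of observations."""
        history = deque(maxlen=window)

        def is_anomalous(reading):
            flagged = (
                len(history) >= 10
                and stdev(history) > 0
                and abs(reading - mean(history)) > threshold * stdev(history)
            )
            history.append(reading)
            return flagged

        return is_anomalous

    detect = make_detector()
    readings = [20.0 + 0.1 * (i % 5) for i in range(50)] + [35.0]  # ends in a spike
    print([r for r in readings if detect(r)])  # flags only the spike at 35.0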

‘Data science’ applied in this context isn’t actually about the discovery of knowledge at all. It is not, strictly speaking, a science. Rather, it is a process through which the operations of existing technologies are related and improved by further technological interventions. Robust positivist engineering knowledge is applied to these cases. But however much the machines may ‘learn’, what they learn is not propositional.

Perhaps the best we can say is that ‘data science’ in this context is the science of techniques for making these kinds of interventions. As learning these techniques depends on mathematical rigor and empirical prototyping, we can perhaps say that ‘pure’ (as opposed to applied) data science, in this limited sense, is a positivist science.

But the second reason why data science is not positivist comes about as a result of its application. The problem is that when systems controlled by complex computational processes interact, the result is a more complex system. In adversarial cases, the interacting complex systems become the subject matter of cybersecurity research, towards which data science is one application. But as soon as one starts to study phenomena that are aware of the observer and can act in ways that respond to its presence, one leaves positivist territory.

A better way to think about data science might be to think of it in terms of perception. In the visual system, data that comes in through the eye goes through many steps of preprocessing before it becomes the subject of attention. Visual representations feed into the control mechanisms of movement.

If we see data science not as a positivist attempt to discover natural laws, but rather as an extension of agency by expanding powers of perception and training skillful control, then we can get a picture of data science that’s consistent with theories of situated and embodied cognition.

These theories of situated and embodied cognition are perhaps the best contenders for what can displace the dominant paradigm as imagined by critics of cognitive science, economics, etc. Rather than rejecting the explanatory power of naturalistic theories of information processing, these theories extend naive theories to embrace the complexity of how an agent’s cognition is situated in a body in time, space, and society.

If we start to think of ‘data science’ not as a kind of natural science but as the techniques and tools for extending the information processing that is involved in one’s individual or collective agency, then we can start to think about data science as what it really is: power.

by Sebastian Benthall at May 14, 2015 06:14 AM

May 09, 2015

Ph.D. student

is science ideological?

In a previous post, I argued that Beniger is an unideological social scientist because he grounds his social scientific theory in robust theory from the natural and formal sciences, like theory of computation and mathematical biology. Astute commenter mg has questioned this assertion.

Does firm scientific grounding absolve a theoretical inquiry from ideology – what about the ideological framework that the science itself has grown in and is embedded in? Can we ascribe such neutrality to science?

This is a good question.

To answer it, it would be good to have a working definition of ideology. I really like one suggested by this passage from Habermas, which I have used elsewhere.

The concept of knowledge-constitutive human interests already conjoins the two elements whose relation still has to be explained: knowledge and interest. From everyday experience we know that ideas serve often enough to furnish our actions with justifying motives in place of the real ones. What is called rationalization at this level is called ideology at the level of collective action. In both cases the manifest content of statements is falsified by consciousness’ unreflected tie to interests, despite its illusion of autonomy. The discipline of trained thought thus correctly aims at excluding such interests. In all the sciences routines have been developed that guard against the subjectivity of opinion, and a new discipline, the sociology of knowledge, has emerged to counter the uncontrolled influence of interests on a deeper level, which derive less from the individual than from the objective situation of social groups.

If we were to extract a definition of ideology from this passage, it would be something like this. An ideology is:

  1. an expression of motives that serves to justify collective action by a social group
  2. …that is false because it is unreflective of the social group’s real interests.

I maintain that the theories that Beniger uses to frame his history of technology are unideological because they are not expressions of motives. They are descriptive claims whose validity has been tested thoroughly by multiple independent social groups with conflicting interests. It’s this validity within and despite the contest of interests that gives scientific understanding its neutrality.

Related: Brookfield’s “Contesting Criticality: Epistemological and Practical Contradictions in Critical Reflection” (here), which I think is excellent, succinctly describes the intellectual history of criticality and how contemporary usage of it blends three distinct traditions:

  1. a Marxist view of ideology as the result of objectively true capitalistic social relations,
  2. a psychoanalytic view of ideology as a result of trauma or childhood,
  3. and a pragmatic/constructivist/postmodern view of all knowledge being situated.

Brookfield’s point is that an unreflective combination of these three perspectives is incoherent both theoretically and practically. That’s because while the first two schools of thought (which Habermas combines above; later Frankfurt School writers deftly combined Marxism with psychoanalysis) both maintain an objectivist view of knowledge, the constructivists reject this in favor of a subjectivist view. Since discussion of “ideology” comes to us from the objectivist tradition, there is a contradiction in the view that all science is ideological. Calling something ‘ideological’ or ‘hegemonic’ requires that you take a stand on something, such as the possibility of an alternative social system.

by Sebastian Benthall at May 09, 2015 05:05 PM

May 08, 2015

Ph.D. student

Fascinated by Vijay Narayanan’s talk at #DataEDGE

As I write this I’m watching Vijay Narayanan, Director of Algorithms and Data Science Solutions at Microsoft, give his talk at the DataEDGE conference at UC Berkeley.

The talk is about “The Data Science Economy.” It began with a history of the evolution of the human centralized nervous system. He then went on to show the centralizing trend of the data economy. Data collection will become more mobile, and data processing will be done in the cloud. This data will be sifted by software and used to power a marketplace of services, which ultimately deliver intelligence to their users.

It was wonderful to see somebody so in the know reaffirming what has been a suspicion I’ve had since starting graduate school but have found little support for in the academic setting. The suspicion is that what’s needed to accurately model the data science economy is a synthesis of cognitive science and economics that can show the comparative market value and competitiveness of different services.

This is not out of the mainline of information technology, management science, computer science, and other associated disciplines that have been at the nexus of business and academia for 70 years. It’s an intellectual tradition that’s rooted in the 1940s cybernetics vision of Norbert Wiener and was going strong in the social sciences as late as Beniger’s The Control Revolution, which, like Narayanan, draws an explicit connection between information processing in the brain and information processing in the microprocessor–notably while acknowledging the intermediary step of bureaucracy as a large-scale information processing system.

There’s significant cross-pollination between engineering, economics, computer science, and cognitive psychology. I’ve read papers from, say, the Education field in the late 80s and early 90s that refer to this collectively as “the dominant paradigm”. At UC Berkeley today, it’s fascinating to see departmental politics play out over ‘data science’ that echo some of these concerns: a powerful alliance of ideas is getting mobilized by industry and governments while other disciplines struggle to find relevance.

It’s possible that these specialized disciplinary discourses cultivate thought that is valuable for its insight despite being fundamentally impractical. I’m coming to a different view: that maybe the ‘dominant paradigm’ is dominant because it is scientifically true, and that other disciplinary orientations are suffering because they are based on unsound theory. If disciplines that are ‘dominated’ by another paradigm are floundering because they are, to put it simply, wrong, then that is a very elegant explanation for what’s going on.

The ramification of this is that what’s needed is not a number of alternatives to ‘the dominant paradigm’. What’s needed is for scholars to double down on the dominant paradigm and learn how to express in its logic the complexities and nuances that the other disciplines have been designed to capture. What we can hope for, in terms of intellectual continuity, is the preservation of what’s best of older ideas in a creative synthesis with the foundational principles of computer science and mathematical biology.

by Sebastian Benthall at May 08, 2015 10:41 PM

Ph.D. alumna

Are We Training Our Students to be Robots?

Excited about the possibility that he would project his creativity onto paper, I handed my 1-year-old son a crayon. He tried to eat it. I held his hand to show him how to draw, and he broke the crayon in half. I went to open the door and when I came back, he had figured out how to scribble… all over the wooden floor.

Crayons are pretty magical and versatile technologies. They can be used as educational tools — or alternatively, as projectiles. And in the process of exploring their properties, children learn to make sense of both their physical affordances and the social norms that surround them. “No, you can’t poke your brother’s eye with that crayon!” is a common refrain in my house. Learning to draw — on paper and with some sense of meaning — has a lot to do with the context, a context that I help create, a context that is learned outside of the crayon itself.

From crayons to compasses, we’ve learned to incorporate all sorts of different tools into our lives and educational practices. Why, then, do computing and networked devices consistently stump us? Why do we imagine technology to be our educational savior, but also the demon undermining learning through distraction? Why are we so unable to see it as a tool whose value is discovered when situated in its context?

The arguments that Peg Tyre makes in “iPads < Teachers” are dead on. Personalized learning technologies won’t, on their own, magically solve our education crisis. The issues we are facing in education are social and political, reflective of our conflicting societal values. Our societal attitudes toward teachers are deeply destructive, a contemporary manifestation of historical attitudes towards women’s labor.

But rather than seeing learning as a process and valuing educators as an important part of a healthy society, we keep looking for easy ways out of our current predicament, solutions that don’t involve respecting the hard work that goes into educating our young. In doing so, we glom onto technologies that will only exacerbate many existing issues of inequity and mistrust. What’s at stake isn’t the technology itself, but the future of learning.

[Image: An empty classroom at the Carpe Diem school in Indianapolis.]

Education shouldn’t be just about reading, writing, and arithmetic. Students need to learn how to be a part of our society. And increasingly, that society is technologically mediated. As a result, excluding technology from the classroom makes little sense; it produces an unnecessary disconnect between school and contemporary life.

This forces us to consider two interwoven — and deeply political — societal goals of education: to create an informed citizenry and to develop the skills for a workforce.

With this in mind, there are different ways of interpreting the personalized learning agenda, which makes me feel simultaneously optimistic and outright terrified. If you take personalized learning to its logical positive extreme, technology will educate every student as efficiently as possible. This individual-centric agenda is very much rooted in American neoliberalism.

But what if there’s a darker story? What if we’re really training our students to be robots?

Let me go cynical for a moment. In the late 1800s, the goal of education in America was not particularly altruistic. Sure, there were reformers who imagined that a more educated populace would create an informed citizenry. But what made widespread education possible was that American business needed workers. Industrialization required a populace socialized into very particular frames of interaction and behavior. In other words, factories needed workers who could sit still.

Many of tomorrow’s workers aren’t going to be empowered creatives subscribed to the mantra of, “Do what you love!” Many will be slotted into systems of automation that are hybrid human and computer. Not in the sexy cyborg way, but in the ugly call center way. Like today’s retail laborers who have to greet every potential customer with a smile, many humans in tomorrow’s economy will do the unrewarding tasks that are too expensive for robots to replace. We’re automating so many parts of our society that, to be employable, the majority of the workforce needs to be trained to be engaged with automated systems.

All of this raises one important question: who benefits, and who loses, from a technologically mediated world?

Education has long been held up as the solution to economic disparity (though some reports suggest that education doesn’t remedy inequity). While the rhetoric around personalized learning emphasizes the potential for addressing inequity, Tyre suggests that good teachers are key for personalized learning to work.

Not only are privileged students more likely to have great teachers, they are also more likely to have teachers who have been trained to use technology — and to integrate it into the classroom’s pedagogy. If these technologies do indeed “enhance the teacher’s effect,” this does not bode well for low-status students, who are far less likely to have great teachers.

Technology also costs money. Increasingly, low-income schools are pouring large sums of money into new technologies in the hopes that those tools can fix the various problems that low-status students face. As a result, there’s less money for good teachers and other resources that schools need.

I wish I had a solution to our education woes, but I’ve been stumped time and again, mostly by the politics surrounding any possible intervention. Historically, education was the province of local schools making local decisions. Over the last 30 years, the federal government and corporations alike have worked to centralize education.

From textbooks to grading systems, large companies have standardized educational offerings, while making schools beholden to their design logic. This is how Texas values get baked into Minnesota classrooms. Simultaneously, over legitimate concern about the variation in students’ experiences, federal efforts have attempted to implement learning standards. They use funding as the stick for conformity, even as local politics and limited on-the-ground resources get in the way.

Personalized learning has the potential to introduce an entirely new factor into the education landscape: network effects. Even as ranking systems have compared schools to one another, we’ve never really had a system where one student’s learning opportunities truly depend on another’s. And yet, that’s core to how personalized learning works. These systems don’t evolve based on the individual, but based on what’s learned about students writ large.

Personalized learning is, somewhat ironically, far more socialist than it may first appear. You can’t “personalize” technology without building models that are deeply dependent on others. In other words, it is all about creating networks of people in a hyper-individualized world. It’s a strange hybrid of neoliberal and socialist ideologies.

[Image: An instructor works with a student in the learning center at the Carpe Diem school in Indianapolis.]

Just as recommendation systems result in differentiated experiences online, creating dynamics where one person’s view of the internet radically differs from another’s, so too will personalized learning platforms.

More than anything, what personalized learning brings to the table for me is the stark reality that our society must start grappling with the ways we are both interconnected and differentiated. We are individuals and we are part of networks.

In the realm of education, we cannot and should not separate these two. By recognizing our interconnected nature, we might begin to fulfill the promises that technology can offer our students.

This post was originally published to Bright at Medium on April 7, 2015. Bright is made possible by funding from the New Venture Fund, and is supported by The Bill & Melinda Gates Foundation.

by zephoria at May 08, 2015 12:29 AM