School of Information Blogs

February 04, 2016

MIMS 2012

Play the Right Note

Like many Americans, I started learning guitar back in high school. I began where everyone did – strumming basic chords and melodies and building up my finger strength. I got a little better every day, and could eventually play simple songs.

I kept practicing and getting better and pushed myself to learn advanced techniques. I wanted to know how to play all the notes — every scale, every chord, alternate picking, sweep picking, tapping, and so on. I didn’t want my technical proficiency to limit what I could play.

Over time I built up a repertoire of techniques to use. Even though I could play a lot of technically advanced parts, I didn’t necessarily know how to play the guitar well.

When he was developing as a writer, David Foster Wallace (author of Infinite Jest) had a similar focus on the advanced stuff. In Although of Course You End Up Becoming Yourself, a multi-day interview with the author, David Lipsky asks Wallace what his younger self would think of his new work, and whether he thought things like character were pointless. Wallace responded:

Not pointless but that they were easy. And that the hard stuff was more, you know, front of the head. It’s never as stark as pointless or not pointless. It’s, you know, what’s interesting, what’s advanced, what’s next? It’s gotta be — right? Not what’s true, but what’s fresh and novel and whatever. It’s very difficult to get out of that.

In his early work, David pushed himself to produce advanced work that would be considered “fresh” and “novel.” Not because that helped him communicate a larger truth to his readers, but because he wanted to push himself as a writer.

I’ve observed this focus on technical mastery in every creative field. Young filmmakers care more about getting the perfect lighting and shooting on film (rather than digital) than they do about story (check out the HBO series Project Greenlight for great examples of this). Digital product designers create slick UIs that look great on Dribbble, but aren’t usable or feasible or valuable. Programmers build technically impressive solutions to problems that don’t exist.

It’s easy to focus on this “hard” stuff because it has a clear path forward. Just practice what you aren’t good at and you’ll improve. And by learning the “hard” stuff, you’ll distinguish yourself from amateurs and beginners. When I was in high school, this is what I thought it meant to “master” the guitar.

As a result, I overlooked the “easy” stuff. The “easy” stuff is knowing when to use advanced techniques, and when to do something simple. It’s using your skills in service of achieving a higher goal – writing a song, communicating a truth to the reader, telling a good story, building something useful for people, etc.

I’ve since learned that to truly master your craft, you need to know the “hard” technical skills, and how to use those skills. So don’t just focus on learning all the notes. Learn when to play the right note, too.

by Jeff Zych at February 04, 2016 04:29 PM

January 24, 2016

MIMS 2012

Color Saver

This weekend I built a quick Mac screensaver that displays the current time as a color. The hour is mapped onto the red channel, the minute onto the green channel, and the second onto the blue channel.

I was inspired by What Colour Is It, which converts the current time into a hex value (e.g. 11:02:47 is #110247). But What Colour Is It doesn’t map to every hex value: its range runs only from #000000 (midnight, i.e. black) to #235959 (11:59:59 PM, a darkish blue-green), which misses the brighter colors closer to white (#FFFFFF). Instead, Color Saver maps each time component onto the full range (0–255) of its color channel.

I experimented with mapping the time components onto hue, saturation, and lightness instead, but that produced ugly colors more often. For example, when seconds drive the color’s lightness, the color goes from completely black to white over the course of a minute, every minute of the day. I found this jarring and unpleasant. Mapping onto the RGB channels is more calming and mesmerizing.
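
For the curious, here’s a minimal Python sketch of that mapping (the screensaver itself is a native Mac screensaver, so this is just an illustration of the idea, not its actual source):

    from datetime import datetime

    def time_to_color(now=None):
        """Map the current time onto the full 0-255 range of each RGB channel:
        hour -> red, minute -> green, second -> blue."""
        now = now or datetime.now()
        r = round(now.hour * 255 / 23)     # hours run 0-23
        g = round(now.minute * 255 / 59)   # minutes run 0-59
        b = round(now.second * 255 / 59)   # seconds run 0-59
        return "#{:02X}{:02X}{:02X}".format(r, g, b)

    print(time_to_color())  # prints something like "#90CFC2" at 13:48:45

Midnight comes out as #000000 and 23:59:59 as #FFFFFF, so the full range of each channel gets used over the course of a day.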

Download Color Saver from Dropbox. Note: I didn’t code sign the screensaver, so when you double-click to install it you’ll get a warning that it’s from an untrusted source. You’ll have to make an exception in the “Security & Privacy” section of System Preferences to install it.

Feel free to check out the source code on Github.

by Jeff Zych at January 24, 2016 11:33 PM

January 12, 2016

MIMS 2010

Thoughts on one year as a parent

(Cross-posted from Medium, a site that people actually visit).

Around this time last year I spent a lot of time walking around, thinking about all the things I wanted to teach my daughter: how a toilet works, how to handle a 4-way stop, how to bake cookies. One year later, the only thing I have taught my daughter about toilets is please stop playing with that now, sweetheart, that’s not a toy. Babies shouldn’t eat cookies. And driving is thankfully limited to a push toy which has nonetheless had its share of collisions. On the other hand, I can recite Time For Bed and Wherever You Are My Love Will Find You from memory. Raffi is racing up the charts of our household weekly top 40. I share the shower every morning with a giant inflatable duck. It has been a challenge and yet still joyful. Here’s an assorted collection of observations and advice from someone who just finished his first trip around the sun as a parent.

Advice

Speaking of advice, I try not to give too much to new parents. They have a surfeit of books, family, friends, and occasionally complete strangers telling them what they should and shouldn’t do. I don’t want to be one more voice in that cacophony. The first months with a new child are a struggle, and you have to do whatever it takes to get through them. Sure, there are probably some universals when it comes to babies, but as someone who has done it just once, I’m not a likely candidate to know what they are. I’m happy to tell you what I think, but only if you want to know. Let’s just assume for the rest of this article that you want to know.

That said, please vaccinate your kids.

Firsts

It’s easy to forget how many experiences an adult has accumulated in their decades alive. The first year for a baby is almost nonstop first experiences. Everything that has long since become ordinary in your life is new to a baby: eating solid food, going to a zoo, taking the bus, touching something cold, petting a dog. The beautiful thing is that being a parent makes all these old experiences new firsts for you too. I hope never to forget the first time I watched Samantha use a straw: she sucked on it—like everything else—and then when water magically came out of the straw she looked startled, and then, suddenly, thrilled, as if she were not merely drinking water, but had discovered water on Mars.

Sleep

Nothing can prepare you for this. Maybe it’s smooth sailing for some parents, but we were exhausted, completely drained, dead to the world, and whatever other synonyms there are for being tired. Lots of people told us that we would be tired beyond belief, but I think this may not be something that can be communicated with language; it can only be learned through experience. I thought that being an experienced all-nighter-puller in college would be good training for having a baby. It’s completely different. In college you stay up all night writing a paper, turn in the paper, and then it’s OK if you sleep for 36 hours during the weekend. Having a baby is like there’s a term paper due every day for months.

Breastfeeding

Breastfeeding is really hard. I don’t know why they don’t work this into more breastfeeding curricula. Caitlin and I took a multi-hour class and I don’t remember this coming up. Just lots of stuff about all the benefits of breastfeeding, how wonderful the bonding is, how the mother will be totally in love with breastfeeding. Nobody wants to attend a breastfeeding class taught by a dude, but if I were teaching one it would go something like this:

  • There are a lot of good things about breastfeeding.
  • By the way, it’s really hard and Mom will probably end up in tears several times.
  • Working with a lactation consultant can be a lifesaver.
  • Formula is not the end of the world.
  • Good luck, happy latching.

Pictures

If you want to make a new parent’s day, ask to see pictures of their baby. I tried not to subject people to them, but there’s only so much self-control one can have. I loved it when people asked.

Stuff

You end up with so much stuff for a baby. There’s a lot of stuff you don’t need. If you skip that stuff, you’ll still have a lot. Car seat, stroller, bottles, diapers, a bathtub, continually outgrown clothes, more diapers, a crib, a rocking chair. And that’s before you even think about toys and books.

Here are some of my favorite things that we bought this past year:

  • Halo sleep sacks. They zip from the top to the bottom, which means you only have to unzip them partway for late night diaper changes.
  • LectroFan white noise machine. We actually have two—one for baby and one for the lucky napping adult.
  • NoseFrida. I never would have guessed how much fun decongesting your baby would be with this snot sucker.
  • Giant Inflatable Duck. I can’t say I love sharing my shower with this duck, but Samantha loves it, so I kind of love it too.

One recommendation I make to all my expecting friends is to check out The Nightlight, the baby equivalent of The Wirecutter and The Sweet Home. They don’t give you a spreadsheet of data and rankings, they just tell you what to buy, with a detailed explanation if you care to read it. I did a lot of independent research and ultimately came to many of the same conclusions, so I stopped reading.

Trivia: the set of clothes and items you need for a newborn is called a layette.

Dropcam/Nest Cam

I read something somewhere about video monitors being distracting and got it into my head that we would only use an audio monitor. I didn’t want one more app to hijack my phone. Boy was I ever wrong. First, we live in a small apartment, so the idea that we need radio frequencies to transmit baby sounds across it is ludicrous. Second, I got so much peace of mind from actually seeing what my baby is doing that I highly recommend it. When we were doing sleep training it was a huge help to be able to see that things were “OK”. Streaming live video from my house to the cloud is a bit creepy, but it’s so nice to check on her taking a nap when I’m at work, and being able to rewind 30 seconds and see what just happened is handy. I guess this is how privacy dies: with little, convenient features here and there.

Other Parents

Being a parent is like gaining membership to the world’s least exclusive club, but finding out that the club is somehow still great. It gave me a new way to bond with other friends and coworkers who are also parents. I thought (naively in retrospect) that all parents have a sense of this shared camaraderie. As it turns out, though, parents are just a random sample of people which means that some of them are strange or petty or just mean. I was surprised by how many interactions with other parents left me feeling like somehow we were still in high school: cliques at drop-in play areas, passive-aggressive remarks about the strangest things.

Airplanes

You could write a Shakespearean tragedy about the Herculean trials of flying with a baby. We rolled the dice a couple times and got lucky but it was exhausting.

Parental leave

I had access to a generous paternity leave policy—10 weeks paid—due to California’s progressive policies and my employer’s good will. It’s completely crazy that this isn’t the norm across the U.S. The law of the land is that, if you meet the requisite conditions, you are entitled to 12 weeks of unpaid leave. I cannot understand how the wealthiest country in the world can’t afford to prioritize reasonable family leave policies (and neither can John Oliver, who has a much funnier take on the state of parental leave in America). On top of that, it’s not like new parents are actually doing the best work of their career. I was sleepwalking through my job for weeks even after I got back.

Politicians say they love families; how about actually helping them out when they need it?

Joy

Being a new parent is a struggle, even if you are thrilled to have a child. You lose so much of your previous life: free time, hobbies, spontaneous dining-out, sleeping in—it’s a lot of change. You trade these things in for something new. This new thing is hard to describe in a way that doesn’t sound trite or glib. I’d say it feels like trading some happiness for joy.

I love being Samantha’s father. The past year has had its share of challenges, but honestly we’ve been so fortunate and I hope that confronting our small share of problems has made me a more empathetic person. Samantha arrived on time, easily, and healthy; we didn’t have the burden of illness or an extended stay in the NICU. We never worried about the cost of diapers or formula; I can only imagine how crushing it must feel not to have what you need to take care of your child. We have had help from so many of our family and friends, help you absolutely need to keep your head on straight. I have a wonderful partner and I don’t know how I would get through parenting without Caitlin; I have a new appreciation for single parents.

Who knows what we’ll teach our daughter this next year, or what she’ll teach us. It has been an incredible journey so far. I can’t believe how many years we get to have. They won’t be non-stop happiness, but I hope they’re as joyful as this first one.


“Let’s say I wanted to read more tweets about babies. Where would I go?” “You would go to this collection on Twitter.” “That was a hypothetical. No one wants to read more tweets about babies.”

by Ryan at January 12, 2016 05:37 PM

January 09, 2016

Ph.D. student

a constitution written in source code

Suppose we put aside any apocalyptic fears of instrumentality run amok, make peace between The Two Cultures of science and the humanities, and suffer gracefully the provocations of the critical without it getting us down.

We are left with some bare facts:

  • The size and therefore the complexity of society is increasing all the time.
  • Managing that complexity requires information technology, and specifically technology for computation and its various interfaces.
  • The information processing already being performed by computers in the regulation and control of society dwarfs anything any individual can accomplish.
  • While we maintain the myth of human expertise and human leadership, these are competitive only when assisted to a great degree by a thinking machine.
  • Political decisions, in particular, either are already or should be made with the assistance of data processing tools commensurate with the scale of the decisions being made.

This is a description of the present. To extrapolate into the future, there is only a thin consensus of anthropocentrism between us and the conclusion that we do not so much govern machines as they govern us.

This should not shock us. The infrastructure that provides us so much guidance and potential in our daily lives–railroads, electrical wires, wifi hotspots, satellites–is all of human design and made in service of human interests. While these design processes were never entirely democratic, we have made it thus far with whatever injustices have occurred.

We no longer have the pretense that making governing decisions is the special domain of the human mind. Concerns about the possibly discriminatory power of algorithms concede this point. So public concern now scrutinizes the private companies whose software systems make so many decisions for us in ways that are obscure or unpredictable. The profit motive, it is suspected, will not serve customers of these services well in the long run.

So far policy-makers have taken a passive stance towards the problem of algorithmic control by reacting to violations of human dignity with a call for human regulation.

What is needed is a more active stance.

Suppose we were to start again in founding a new city. Or a new nation. Unlike the founders of every city ever founded, we have the option to write its founding Constitution in source code. It would be logically precise and executable without expensive bureaucratic apparatus. It would be scalable in ways that can be mathematically confirmed. It could be forked, experimented with, by diverse societies across the globe. Its procedure for amendment would be written into itself, securing democracy by protocol design.
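
To make the idea a little more concrete, here is a toy Python sketch, purely illustrative: the constitution is data plus rules, and one of those rules governs how the rules themselves may change.

    from dataclasses import dataclass, field

    @dataclass
    class Constitution:
        rules: dict = field(default_factory=dict)   # rule name -> rule text
        amendment_threshold: float = 2 / 3          # supermajority the charter itself requires

        def amend(self, name, text, votes_for, votes_total):
            """Adopt an amendment only if the constitution's own threshold is met."""
            if votes_total > 0 and votes_for / votes_total >= self.amendment_threshold:
                self.rules[name] = text
                return True
            return False

    charter = Constitution(rules={"speech": "No rule shall abridge expression."})
    charter.amend("assembly", "Citizens may assemble freely.", votes_for=70, votes_total=100)  # adopted
    charter.amend("censorship", "Speech may be licensed.", votes_for=40, votes_total=100)      # rejected

Forking, in this picture, is just copying the repository and amending from there.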


by Sebastian Benthall at January 09, 2016 01:30 AM

January 05, 2016

Ph.D. student

programming and philosophy of science

Philosophy of science is a branch of philosophy largely devoted to the demarcation problem: what is science?

I’ve written elsewhere about why and how, in the social sciences, demarcation is highly politicized and often under attack. This is becoming especially pertinent now as computational methods become dominant across many fields and challenge the bases of disciplinary distinction. Today, a lot of energy (at UC Berkeley at least) goes into maintaining the disciplinary social sciences even when this creates social fields that are less scientific than they could be, in order to preserve atavistic disciplinary traits.

Other energy (also at UC Berkeley, and elsewhere) goes into using computer programs to explore data about the social world in an undisciplinary way. This isn’t to say that specific theoretical lenses don’t inform these studies. Rather, the lenses are used provisionally and not in an exclusive way. This lack of disciplinary attachment is an important aspect of data science as applied to the social world.

One reason why disciplinary lenses are not very useful for the practicing data scientist is that, much like natural scientists, data scientists are more often than not engaged in technical inquiry whose purpose is prediction and control. This is very different from, for example, engaging an academic community in a conversation in a language they understand or that pays appropriate homage to a particular scholarly canon–the sort of thing one needs to do to be successful in an academic context. For much academic work, especially in the social sciences, the process of research publication, citation, and promotion is inherently political.

These politics are, more often than not, not essential to scientific inquiry itself; rather they have to do with the allocation of what Bourdieu calls temporal capital: grant funding, access, appointments, etc. within the academic field. Scientific capital, the symbolic capital awarded to scientists based on their contributions to trans-historical knowledge, is awarded more on the basis of the success of an idea than by, for example, brown-nosing one’s superiors. However, since temporal capital in the academy is organized by disciplines as a function of university bureaucratic organization, academic researchers are required to contort themselves to disciplinary requirements in the presentation of their work.

Contrast this with the work of analysing social data using computers. The tools used by computational social scientists tend to be products of the exact sciences (mathematics, statistics, computer science) with no further disciplinary baggage. The intellectual work of scientifically devising and testing theories against the data happens in a language most academic communities would not recognize as a language at all, and certainly not their language. While this work depends on the work of thousands of others who have built vast libraries of functional code, these ubiquitous contributors are not included in any social science discipline’s scholarly canon. They are uncited, taken for granted.

However, when those libraries are made openly available (and they often are), they participate in a larger open source ecosystem of tools whose merits are judged by their practical value. Returning to our theme of the demarcation problem, the question is: is this science?

I would answer: emphatically yes. Programming is science because, as Peter Naur has argued, programming is theory building (hat tip the inimitable Spiros Eliopoulos for the reference). The more deeply we look into the demarcation problem, the more clearly software engineering practice comes into focus as an extension of a scientific method of hypothesis generation and testing. Software is an articulation of ideas, and the combined works of software engineers are a cumulative science that has extended far beyond the bounds of the university.


by Sebastian Benthall at January 05, 2016 11:00 PM

December 29, 2015

MIMS 2012

Things I Learned in 2015

I learned that writing helps clarify my thoughts. It makes me much more articulate. It forces me to engage with a topic and wrestle it to the ground so that I understand it much more deeply than I did when I started.

I learned that writing is hard. But like design and code and tennis, it’s a skill that can be practiced and learned.

I learned that blogging consistently is hard. Coming up with ideas and pushing them out into the world takes work, and takes practice to form into a habit. I learned that I’d rather focus on quality and writing about what I think is interesting, even if that means I don’t publish as often.

I learned that even when I have a lot of ideas to write about, not all of them are worth working on. I’m okay with letting ideas go rather than trying to force myself to work on them. I’ve learned this is a good litmus test for knowing what’s worth spending my time on. The most interesting topics will naturally drive me to work on them.

I learned that it’s just as much work, often more, to promote your writing as it is to do the writing itself. I learned what it feels like to have a post at the top of Designer News. And I learned what it feels like when people misinterpret what I was trying to communicate.

Just like writing, I learned that dating takes a lot of time and energy (even when it’s online). I learned that I can’t put in that time and energy unless I’m attracted (both mentally and physically) to a woman. I can’t force it. I learned that even with online dating, or because of it, it will take a lot of dates before finding someone who feels “right.”

I learned that most single people of my generation, both men and women, are frustrated to some degree by dating. Online dating has made it easier than ever to meet people, but the increase in quantity has not increased the quality.

I learned that an abundance of choice — in products to buy, shows to watch, people to date, articles and books to read, etc. — increases the pressure of making the “right” choice. There’s always a nagging question of, “Is there something better?”

But I also learned that perfect is the enemy of done. There is no perfect. I need to continue practicing making a decision and moving on, without stressing about whether or not it was the “best” choice. There will always be a better choice, despite my best efforts.

I learned I enjoy management, more than I expected. Including the people management part (career development, promotions/raises, etc.). I learned transparency, candor, and empathy go a long way towards quelling people’s anxieties, especially amid organizational change. And especially when delivering “bad” news, or giving critical feedback. This is as true for personal relationships as it is for work relationships.

I learned that providing context, the “why,” behind a decision is more important than the decision itself. It helps people understand it, and accept it, even if they don’t agree with it. And it engenders trust.

I learned that I don’t miss designing and coding much since becoming a manager. The bits and pieces I get to do here and there, in and out of work, are usually enough.

I learned that just telling someone a lesson you’ve learned through experience is not likely to change their thoughts or behavior. Experience is the best teacher. This can be painful when you see someone making a mistake you could have prevented, but I’ve learned to accept this. (This is a good lesson to know if I ever have kids 👪 ).

Similarly, I learned that giving people information in an effort to increase their knowledge, or change their behavior, isn’t the most effective way of achieving this goal. It’s much more effective, albeit harder, to ask questions that lead a person to “earn” this knowledge themselves. By getting someone to come to their own conclusions, they’re more likely to internalize and act on that new knowledge. And they can apply the framework for gaining that knowledge to new situations. This is an art as much as it is a skill, and takes practice.

I learned that my top 5 strengths, according to the Clifton StrengthsFinder, are: 1. Learner; 2. Intellection; 3. Achiever; 4. Individualization; and 5. Analytical. Nothing too surprising, but cool to see nonetheless.

I learned that product development is hard, even when a person’s been doing it for a long time. Especially with teams, and as a company grows. I learned that as clueless as I might feel about how to run a team or build products, no one seems to have it completely figured out (even if they’re experienced and, from the outside, seem to know what they’re doing).

I learned that I like having physical books over eBooks. I display them on my shelves, which makes me much more likely to re-engage with a book after I’ve finished reading it. By seeing it, I’ll think about it more, and pick it up and flip through it. I don’t do this when a book is trapped in software. The same is generally true with music and movies, too.

I learned that underlining passages in books helps me focus on the core ideas being communicated, helps me stay engaged, helps me retain the information better, and makes it easy for me to flip through a book later and remember the key parts.

I learned that everything I spend time on has an opportunity cost. So I’m learning to get comfortable focusing on the things I most enjoy (such as writing), and accepting the cost of not doing other things (such as playing guitar).

I learned that with sustained effort over time I could be good at almost anything (art, music, programming, sports, etc.). But as my “opportunity cost” lesson indicates, I can’t be good at everything. I have to choose what to spend time on, and thus what to be good at. And with enough effort, perhaps I’ll even become great at something. 😄

Finally, I learned that I can’t possibly document everything I’ve learned in a year. There were too many lessons, big and small. Here’s looking forward to another year of learning in 2016!

by Jeff Zych at December 29, 2015 10:52 PM

December 28, 2015

Ph.D. student

Jung and Bourdieu as an improvement upon Freud and Habermas

I have written in this blog and in published work about Habermas and his Frankfurt School precursor, Horkheimer. Based on this writing, a thorough reader (of whom I expect there to be approximately zero) might conclude that I am committed to a Habermasian view.

I’d like to log a change of belief based on recent readings of Pierre Bourdieu and Carl Jung.

Why Bourdieu and Jung? Because Frankfurt School social theory was based on a Freudian view of psychology. This Freudian origin manifests itself in the social theory in ways that I’ll try to outline below. However, in my own therapeutic experience as well as many more informal encounters with Jungian theory, I find the latter to be much more compelling. As I’ve begun reading Jung’s Man and His Symbols, I see now where Jung explicitly departed from Freud, enriching his theory. These departures are far more consistent with a Bourdieusian view of society. (I’ve noted the potential synergy here).

Let me try to be clearer about what this change in perspective entails:

For Freud, man has an irrational nature and a rational ego. The purpose of therapy is the maintenance of rational control. Horkheimer’s critique of modern society invoked Freud in his discussion of the revolt of nature: society rationalizes itself and the individuals within it; the ‘nature’ of the individuals that is excluded (repressed, really) by this rationalization manifests itself in ugly ways. Habermas, who is less pessimistic about society, still sees morality in terms of social norms grounded in rational consensus. “Rational consensus” as a concept angers or worries postmodern and poststructural critics, who see this principle, as a basis for social ethics, as exclusionary.

For Jung, the therapeutic relationship absolutely must not be about the imposition of the therapist’s views on the patient; psychological progress must come from within the individual patient. He documents an encounter between himself and Freud where he discovers this; he is very convincing. The Jungian unconscious is a collective stock of symbols, as an alternative to a Freudian subconscious of nature repressed by ego. The Jungian ego, therefore, is a much more flexible subject; at times it seems that Jung is nostalgic for a more irrational, perhaps primitive, consciousness. But more importantly, Jung explicitly rejects the idea of a society’s sanity being about its adherence to shared rational norms. Instead, he opts for a more Durkheimian view of social variety:

Can we make any sort of objective judgment about the final result [of therapy]? Only if we make a comparison between our conclusions and the standards that are generally valid in the social milieu to which the individuals belong. Even then, we must take into account the mental equilibrium (or “sanity”) of the individual concerned. For the result cannot be a completely collective leveling out of the individual to adjust him to the “norms” of his society. This would amount to a most unnatural condition. A sane and normal society is one in which people habitually disagree, because general agreement is relatively rare outside the sphere of instinctive human qualities.

A diverse society of habitual disagreement accords much better with the Bourdieusian view of a society variously inflected as habitus than it does with a Habermasian view of one governed by rational norms.

There’s a subtlety that I’ve missed again and again which I’d like to put my finger on now.

The problem with the early Habermasian view is that ethics are determined through rational consensus. So, individuals participate in a public sphere and agree, as individuals, on norms that govern their individual behavior.

Later Habermas (say, volume two of Theory of Communicative Action) begins to acknowledge the information overhead of this approach and discusses the rise of bureaucracy and its technicization. In lieu of a bona fide consensus of the lifeworld, one gets a rational coalescence of norms into law.

Effectively, this means that while the general population can be irrational in various ways (relative to the perspective of the law), what’s important is that lawmakers create law through a rational process that is inclusive of diverse perspectives.

We see a similar view in Bourdieu’s view of science: it is a specific habitus whose legitimacy is due to the trans-historical robustness of its mathematized formulations.

The conclusion is this: scientists and lawmakers have to approach rationality in specific trans-personal and trans-historical ways. In fact, the rationality of science or of law are only achieved systemically, through the generalized process of science or lawmaking, not through the finite perspectives of their participants, however individually rational they may be. But the general population need not be rational like this for society to be ‘sane’. Rather, individual habitus or partial perspective can vary across a society that is nevertheless coordinated by rational principle.

There is bound to be friction at the boundary between the institutions of science and law and the more diverse publics that surround and intersect them. Donna Haraway’s ‘privilege of partial perspectives’ is a good example of the symptoms of this friction. A population that is excluded from science–not represented well within science–may react against it by reasserting its ‘partial perspective’ as a viable alternative. This is a kind of refusal, in the sense perhaps originated by Marcuse and more recently resurfaced in Michael Dumas’ work on antiblackness. Refusal is, perhaps sadly, delusional and seems to recur as a failed and failing project; but it is sociologically robust precisely because in late modernism the hegemonic rationality allows for Durkheimian social differentiation. The latter is actually the triumph of liberalism over, for example, racist fascism; Fred Turner’s The Democratic Surround is a nice historical work documenting how this order of scientifically managed diversity was a deliberate United States statebuilding project in World War II.

If a top-down rationalizing control creates as a symptom pathological refusal–another manifestation perhaps of the ‘revolt of nature’–a Jungian view of rationality as psychic integration perhaps provides a more palatable alternative. Jungian development is accomplished through personalized, situated education. However, through this education, the individual flourishes through a transcendence of their more limited, narrow sense of self. Jungian therapy/education transcends even gender, as the male and female are encouraged to recognize the feminine “anima” and masculine “animus” aspects of their psyches, respectively. Fully developed individuals–who one would expect to occupy, over the course of their development, a somewhat shared habitus–seem to therefore get along better with each other, agreeing to disagree as they recognize how their differences are based on arbitrary social differentiation. Nothing about this agreeing-to-disagree on matters of, for example, taste precludes an agreement on serious trans-personal matters such as science or law. There need not be any resentment towards this God’s Eye View, since it is recognized by each educated individual as manifest in their own role in the social order.

Societal conditions may fall short of this ideal. However, the purpose of social theory is to provide a realizable social telos. Grounding it in a psychological theory that admits the possibility of realized psychological health is a good step forward.


by Sebastian Benthall at December 28, 2015 08:25 PM

December 23, 2015

Ph.D. student

Bourdieu and the possibility of interdisciplinary social science research

Bourdieu (Science of Science and Reflexivity, 2004) is interested in an account of science that has both sociological realism and trans-historical legitimacy. The importance of this project is obvious to any acting scientist who both contends with their social reality and aims to discover trans-historical knowledge. Trans-historical knowledge is incentivized by specific social institutions that create and preserve symbolic capital for scientists precisely according to the principle that their discoveries survive the test of time. If the knowledge survives only because it is propped up by temporal institutions that do not have such transcendent aspirations, then it is by definition not science.

There are all sorts of other academic vocations that do not have these transcendent aspirations, especially in what are broadly considered the social sciences. These include: ethnographers who explicitly do not aim for their results to generalize, historians who explicitly aim to elucidate the historical contingency and context of their objects of study, researchers who study organizations with the intention to inform their audience of matters of immediate political interest, and writers who offer a contextualized critique of an aspect of society in light of a tradition of scholarly literature. These vocations are not scientific, in Bourdieu’s sense, because they are not participating in a social field whose self-declared purpose is the discovery of trans-historical truth.

Rather, these researchers participate in other social fields, called “disciplines”. Because unscientific disciplines do not aspire to trans-historical knowledge, they see nothing wrong with carrying out research that is consistent with the contingent norms of their social environment, despite knowing full well that these norms stifle complete understanding of their phenomena of interest. Indeed there may be nothing wrong with this except from the perspective of a scientist judging this activity through the criteria of science. These disciplines attempt to accomplish the permanence of their symbolic capital through reproduction of their discipline specifically, as opposed to the reproduction of scientific method and knowledge generally.

If you go around an interdisciplinary context in a university and start telling non-scientists “You are not a scientist!”, you are likely to elicit an affronted reaction. This is due to “established divisions in the long running debates about scientific method and the legitimacy of social science and humanistic inquiry,” and the resulting disciplinary hierarchy. Because science proper is hierarchically “above” social science and humanistic inquiry, pointing out that somebody is not a scientist is often interpreted as rude in the uniquely touchy culture of the academy. Researchers who are not scientists will deploy any number of strategies to recover status in this mixed social field, including: declaring themselves to be scientists (according to a more relaxed standard); declaring the distinction between science and non-science to be epistemically illegitimate (thereby weakening the status of science per se); and appealing to broader democratic principles of social inclusion and equality to motivate their inclusion within the scientific field.

However valuable democratic inclusion may be, appeals to it are not like the other strategies, which address the demarcation problem (the question of “what is science?” and by extension “what is not science?”) directly. My own opinion is that scientific inclusion is both very important and best achieved through good and equitably provided scientific education, and that good scientific education includes a transmission of scientific demarcation. In other words, because of the importance of social inclusion in science, it is essential to be clear about what kinds of activities and knowledge science excludes. To broadly include people, for democratic reasons, into a social field that is in fact not science does not accomplish the inclusiveness of science; it does something else. So in the interest of the democratic inclusivity of science I will continue to elaborate on the social challenges of scientific demarcation despite how rude or otherwise objectionable this line of inquiry is to many scholars who are not scientists.

Above I have contrasted scientific research, which participates in a generalized social field aimed specifically at transcending temporally and geographically locality, and disciplinary social research, which is aimed at the reproduction of a specific social field. This contrast is drawn in multiple dimensions, but these dimensions are not orthogonal. As this can be confusing, I will attempt to untie these threads.

There is the distinction between scientific research and social research, which will immediately be recognized as a false dichotomy. Perhaps because of the strategic blurring of scientific demarcation mentioned above, the term “social science” is used problematically to mean both scientific and unscientific research into social phenomena. The hierarchy of the “social sciences” (economics, political science, sociology, anthropology) reflects the degree to which these disciplines adopt scientific methods. Scientific methods depend on scientific instruments developed using the discoveries of the exact sciences (such as mathematics, statistics, and foundational computer science). Because of this, we have seen more and more non-social sciences being applied to social phenomena, further confusing the idea of “social science”.

To clarify this problem, it is therefore useful to discuss “social research” broadly, and then address separately the question of how scientific a discovery or discipline of social research is. As should be clear from the preceding discussion, part of what makes social research more scientific is its ability to transcend its specific disciplinary context and be integrated in a generalized scientific field that aims specifically at that transcendence.

“Interdisciplinary” social research, therefore, will be easiest when the disciplines involved are more scientific, because the scientific imperative is precisely to transcend disciplinary and other local constraints. The less scientific a discipline is, the more it will resist interdisciplinary integration, because such integration will not serve the function of disciplinary reproduction.

This analysis clarifies why “interdisciplinary” social research is so highly sought after but so rarely achieved. This is because it is sought after by disciplines for conflicting reasons. A more scientific discipline will be motivated to interdisciplinary work out of its native purpose to transcend its own historical constraints, assimilating into itself the specific insights of a historically specific discipline while excluding the contingent elements. The less scientific discipline will, in contrast, pursue interdisciplinary research in order to blur scientific demarcation but will vigorously maintain its historical specificity in spite of the scientific imperative.

Today we see the profound success of interdisciplinary research, and interdisciplinary social research in particular, in ‘data science’, a term whose ambiguity signals the contiguity of all disciplines that are sufficiently scientific. The globalized field of data science, enabled largely through the sharing of software source code that operates identically across many and various contexts, transcends especially the contexts of academy, industry, and government. To the data scientist, discipline is irrelevant once it is subsumed by science.

This presents a crisis to disciplinary social research: either it must become “interdisciplinary” with data science, losing its disciplinary specificity, or it must maintain its disciplinary integrity and autonomy at the expense of its trans-historical permanence as historical conditions change with the rise of data science. With either option, the disciplinary social sciences face their own mortality.


by Sebastian Benthall at December 23, 2015 05:47 PM

December 21, 2015

Ph.D. student

Habitus Shadow

In Bourdieu’s sociological theory, habitus refers to the dispositions of taste and action that individuals acquire as a practical consequence of their place in society. Society provides a social field (a technical term for Bourdieu) of structured incentives and roles. Individuals adapt to roles rationally, but in doing so culturally differentiate themselves. This process is dialectical, hence neither strictly determined by the field nor by individual rational agency, but a co-creation of each. One’s posture, one’s preference for a certain kind of music, one’s disposition to engage in sports, one’s disposition to engage in intellectual debate, are all potentially elements of a habitus.

In Jungian psychoanalytic theory, the shadow is the aspect of personality that is unconscious and not integrated with the ego–what one consciously believes oneself to be. Often it is the instinctive or irrational part of one’s psychology. A person with an undeveloped psyche is likely to see his or her own shadow aspect in others and judge them harshly for it; this is a form of psychological projection motivated by repression for the sake of maintaining the ego. Encounters with the shadow are difficult. Often they are experienced as the awareness or suspicion of some new information that threatens one’s very sense of self. But these encounters are, for Jung, an essential part of individuation, as they are how the personality can develop a more complete consciousness of itself.

Perhaps you can see where this is going.

I propose a theoretical construct in: habitus shadow.

When an individual, situated within a social field, develops a habitus, they may do so with an incomplete consciousness of the reasons for their preferences and dispositions for action. An ego, a conscious rationalization, will develop; it will be reinforced by others who share its habitus. The dispositions of a habitus will include the collectively constructed ego of its members, which is itself a psychological disposition.

We would then expect that a habitus has a characteristic shadow: truths about the sociological conditions of a habitus which are not part of the conscious self-identity or ego of that habitus.

This is another way to talk about what I have discussed elsewhere as an ideological immune reaction. If an idea or understanding is so challenging or destructive to the ego of a habitus that it calls into question the rationality of its very existence, then the habitus will be able to maintain itself only through a kind of repression/projection/exclusion. Alternatively, if the habitus can assimilate its shadow, one could see that as a form of social self-transcendence or progress.


by Sebastian Benthall at December 21, 2015 03:31 PM

December 10, 2015

MIMS 2012

Why Designers Should Code

In the 1970s, AT&T commissioned Matthew Carter to design a typeface for their phone book. They wanted a typeface that could fit more characters per line without a loss of legibility, especially at smaller sizes. This would save them money by reducing the number of pages needed to print the book.

Carter knew that phone books are printed on cheap newsprint paper that doesn’t produce high-quality text. He also understood that ink spreads when applied to this cheap paper, especially at smaller sizes, which decreases legibility. All of this posed significant challenges for creating a legible typeface for AT&T’s phone book.

But instead of giving up in desperation, Carter embraced these constraints and designed for them. The resulting typeface, Bell Centennial, maintains legibility by having a tall x-height and large, open bowls. His true masterstroke, however, was designing “ink traps” into the letterforms. Ink traps are small notches in the letter that anticipate the spread of ink on newsprint. When the ink is pressed onto paper, the ink spreads into these traps. The result is a filled in letter that’s easy to read, even at small sizes.

Closeup of Bell Centennial by Matthew Carter. Source: Wikipedia.

Carter embraced the constraints of the medium and turned them to his advantage. His deep understanding of the printing process allowed him to make this leap.

Like print, the web is a medium with its own constraints. We often forget they’re there, but they are. To display a web page, a browser must download, parse, and render HTML, CSS, and Javascript. This code imposes limits on what web pages can do.

The best designers understand these constraints, and design for them. To gain this understanding, you must know how to code.

But this comes with a caveat: designers shouldn’t write production code. They’re experts in the user experience, not frontend technology. They should be solving customer problems, not trying to write scalable, maintainable, and performant code. That’s a full-time job that requires dedicated professionals.

So learn how to code. Ink traps don’t design themselves.


Update 12/12/15: This article sparked a lot of questions on Designer News about designers not writing production code. I responded to a lot of them and attempted to clarify what I meant by that statement.

In summary, I wasn’t trying to draw a hard and fast line on what designers should do, or what they’re capable of doing skill-wise. Writing production code on occasion is fine. But what I have a problem with is if designers are expected to code everything they design. Those are separate jobs that should be handled by separate people.

by Jeff Zych at December 10, 2015 04:09 PM

Ph.D. student

Responsible participation in complex sociotechnical organizations circa 1977 cc @Aelkus @dj_mosfett

Many extant controversies around technology were documented in 1977 by Langdon Winner in Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought. I would go so far as to say most extant controversies, but I don’t think he does anything having to do with gender, for example.

Consider this discussion of moral education of engineers:

“The problems for moral agency created by the complexity of technical systems cast new light on contemporary calls for more ethically aware scientists and engineers. According to a very common and laudable view, part of the education of persons learning advanced scientific skills ought to be a full comprehension of the social implications of their work. Enlightened professionals should have a solid grasp of ethics relevant to their activities. But, one can ask, what good will it do to nourish this moral sensibility and then place the individual in an organizational situation that mocks the very idea of responsible conduct? To pretend that the whole matter can be settled in the quiet reflections of one’s soul while disregarding the context in which the most powerful opportunities for action are made available is a fundamental misunderstanding of the quality genuine responsibility must have.”

A few thoughts.

First, this reminds me of a conversation @Aelkus @dj_mosfett and I had the other day. The question was: who should take moral responsibility for the failures of sociotechnical organizations (conceived of as corporations running a web service technology, for example).

Second, I’ve been convinced again lately (reminded?) of the importance of context. I’ve been looking into Chaiklin and Lave’s Understanding Practice again, which is largely about how it’s important to take context into account when studying any social system that involves learning. More recently than that I’ve been looking into Nissenbaum’s contextual integrity theory. According to her theory, which is now widely used in the design and legal privacy literature, norms of information flow are justified by the purpose of the context in which they are situated. So, for example, in an ethnographic context, those norms of information flow most critical for maintaining trusted relationships with one’s subjects are the most important.

But in a corporate context, where the purpose of one’s context is to maximize shareholder value, wouldn’t the norms of information flow perfectly justify those who keep the moral failures of their organization shrouded in the complexity of its machinery?

I’m not seriously advocating for this view, of course. I’m just asking it rhetorically, as it seems like a potential weakness in contextual integrity theory that it does not endorse the actions of, for example, corporate whistleblowers. Or is it? Are corporate whistleblowers the same as national whistleblowers? Or Wikileaks?

One way around this would be to consider contexts to be nested or overlapping, with ethics contextualized to those “spaces.” So, a corporate whistleblower would be doing something bad for the company, but good for society, assuming that there wasn’t some larger social cost to the loss of confidence in that company. (It occurs to me that in this sort of situation, perhaps threatening internally to blow the whistle unless the problem is solved would be the responsible strategy. As they say,

Making progress with the horns is permissible
Only for the purpose of punishing one’s own city.

)

Anyway, it’s a cool topic to think about, what an information theoretic account of responsibility would look like. That’s tied to autonomy. I bet it’s doable.


by Sebastian Benthall at December 10, 2015 06:04 AM

December 08, 2015

Ph.D. student

Bourdieu and Horkheimer; towards an economy of control

It occurred to me as I looked over my earliest notes on Horkheimer (almost a year ago!) that Bourdieu’s concept of science as being a social field that formalizes and automates knowledge is Horkheimer’s idea of hell.

The danger Horkheimer (and so many others) saw in capitalist, instrumentalized, scientific society was that it would alienate and overwhelm the individual.

It is possible that society would alienate the individual anyway, though. For example, in the household of antiquity, were slaves unalienated? The privilege of autonomy is one that has always been rare but disproportionately articulated as normal, even a right. In a sense Western Democracies and Republics exist to guarantee autonomy to their citizens. In late modern democracies, autonomy is variable depending on role in society, which is tied to (economic, social, symbolic, etc.) capital.

So maybe the horror of Horkheimer, alienated by scientific advance, is the horror of one whose capital was being devalued by science. His scholarship, his erudition, were isolated and deemed irrelevant by the formal reasoners who had come to power.

As I write this, I am painfully aware that I have spent a lot of time in graduate school reading books and writing about them when I could have been practicing programming and learning more mathematics. My aspirations are to be a scientist, and I am well aware that that requires one to mathematically formalize one’s findings–or, equivalently, to program them into a computer. (It goes without saying that computer programming is formalism, is automation, and so its central role in contemporary science or ‘data science’ is almost given to it by definition. It could not have been otherwise.)

Somehow I have been provoked into investing myself in a weaker form of capital, the benefit of which is the understanding that I write here, now.

Theoretically, the point of doing all this work is to be able to identify a societal value and formalize it so that it can be captured in a technical design. Perhaps autonomy is this value. Another might call it freedom. So once again I am reminded of Simone de Beauvoir’s philosophy of science, which has been correct all along.

But perhaps de Beauvoir was naive about the political implications of technology. Science discloses possibilities, but the opportunities are distributed unequally because science is socially situated. Inequality leads to more alienation, not less, for all but the scientists. Meanwhile, autonomy is not universally valued–some would prefer the comforts of society, of family structure. If free from society, they would choose to reenter it. Much of one’s preferences must come from habitus, no?

I am indeed reaching the limits of my ability to consider the problem discursively. The field is too multidimensional, too dynamic. The proper next step is computer simulation.


by Sebastian Benthall at December 08, 2015 03:58 AM

Heisenberg on technology as an out-of-control biological process

In Langdon Winner’s Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought, (1977) there is this quote from Werner Heisenberg’s Physics and Philosophy (1958):

“The enormous success of this combination of natural and technical science led to a strong preponderance of those nations or states or communities in which this kind of activity flourished, and as a natural consequence this activity had to be taken up even by those nations which by tradition would not have been inclined toward natural and technical sciences. The modern means of communication and of traffic finally completed this process of expansion of technical civilization. Undoubtedly the process has fundamentally changed the conditions of life on earth; and whether one approves of it or not, whether one calls it progress or danger, one must realize that it has gone far beyond any control through human forces. One may rather consider it as a biological process on the largest scale whereby structures active in the human organism encroach on larger parts of matter and transform it into a state suited for the increasing human population.”


by Sebastian Benthall at December 08, 2015 01:18 AM

December 07, 2015

Ph.D. student

Mathematics and materiality in Latour and Bourdieu’s sociology of science

Our next reading for I School Classics is Pierre Bourdieu’s Science of Science and Reflexivity (2004). In it, rock star sociologist Bourdieu does a sociology of science, but from the perspective of a sociologist who considers himself a scientist. This is a bit of an upset because so much of the sociology of science has been dominated by sociologists who draw more from the humanities traditions and whose work undermines the realism of the scientific fact. This realism is something Bourdieu aims to preserve while at the same time providing a realistic sociology of science.

Bourdieu’s treatment of other sociologists of science is for the most part respectful. He appears to have difficulty showing respect for Bruno Latour, whom he delicately dismisses as having become significant via his rhetorical tactics while making little in the way of a substantive contribution to our understanding of the scientific process.

By saying facts are artificial in the sense of manufactured, Latour and Woolgar intimate that they are fictitious, not objective, not authentic. The success of this argument results from the ‘radicality effect’, as Yves Gingras (2000) has put it, generated by the slippage suggested and encouraged by skillful use of ambiguous concepts. The strategy of moving to the limit is one of the privileged devices in pursuit of this effect … but it can lead to positions that are untenable, unsustainable, because they are simply absurd. From this comes a typical strategy, that of advancing a very radical position (of the type: scientific fact is a construction or — slippage — a fabrication, and therefore an artefact, a fiction) before beating a retreat, in the face of criticism, back to banalities, that is, to the more ordinary face of ambiguous notions like ‘construction’, etc.

In the contemporary blogosphere this critique has resurfaced through Nicholas Shackel under the name “Motte and Bailey Doctrine” [1, 2], after the Motte and Bailey castle.

A Motte and Bailey castle is a medieval system of defence in which a stone tower on a mound (the Motte) is surrounded by an area of pleasantly habitable land (the Bailey), which in turn is encompassed by some sort of a barrier, such as a ditch. Being dark and dank, the Motte is not a habitation of choice. The only reason for its existence is the desirability of the Bailey, which the combination of the Motte and ditch makes relatively easy to retain despite attack by marauders. When only lightly pressed, the ditch makes small numbers of attackers easy to defeat as they struggle across it: when heavily pressed the ditch is not defensible, and so neither is the Bailey. Rather, one retreats to the insalubrious but defensible, perhaps impregnable, Motte. Eventually the marauders give up, when one is well placed to reoccupy desirable land.

In the metaphor, the Bailey here is the radical antirealist scientific position wherein facts are fiction, and the Motte is the banal recognition that science is a social process. Shackel writes that “Diagnosis of a philosophical doctrine as being a Motte and Bailey Doctrine is invariably fatal.” While this might be true in the world of philosophical scrutiny, this is unfortunately not sociologically correct. Academic traditions die hard, even long after the luminaries who started them have changed their minds.

Latour has repudiated his own radical position in “Why Has Critique Run out of Steam? From Matters of Fact to Matters of Concern” (2004), and his “Tarde’s idea of quantification” (2010) offers an insightful look into the potential of quantified sociology when we have rich qualitative data sets that show us the inner connectivity of societies. Late Latour is bullish about the role of quantification in sociology, though he believes it may require a different use of statistics than has been traditional in the natural sciences. Recently developed algorithmic methods for understanding network data prove this point in practice. Late Latour has more or less come around to the “Big Data” scientific consensus on the matter.

This doesn’t stop Latour from being used rather differently. Consider boyd and Crawford’s “Critical Questions for Big Data: Provocations for a Cultural, Technological, and Scholarly Phenomenon” (2012), and its use of this very paper of Latour:

‘Numbers, numbers, numbers,’ writes Latour (2010). ‘Sociology has been obsessed by the goal of becoming a quantitative science.’ Sociology has never reached this goal, in Latour’s view, because of where it draws the line between what is and is not quantifiable knowledge in the social domain.

Big Data offers the humanistic disciplines a new way to claim the status of quantitative science and objective method. It makes many more social spaces quantifiable. In reality, working with Big Data is still subjective, and what it quantifies does not necessarily have a closer claim on objective truth – particularly when considering messages from social media sites. But there remains a mistaken belief that qualitative researchers are in the business of interpreting stories and quantitative researchers are in the business of producing facts. In this way, Big Data risks reinscribing established divisions in the long running debates about scientific method and the legitimacy of social science and humanistic inquiry.

While Latour (2010) is arguing for a richly quantified sociology and has moved away from his anti-realist position about scientific results, boyd and Crawford fall back into the same confusing trap set by earlier Latour of denying scientific fact because it is based on interpretation. boyd and Crawford have indeed composed their “provocations” effectively, deploying ambiguous language that can be interpreted as a broad claim that quantitative and humanistic qualitative methods are equivalent in their level of subjectivity, but defended as the banality that there are elements of interpretation in Big Data practice.

Bourdieu’s sociology of science provides a way out of this quagmire by using his concept of the field to illuminate the scientific process. Fields are a way of understanding social structure: they define social positions or roles in terms of their power relations as they create and appropriate different forms of capital (economic, social, etc.). His main insight, which he positions above Latour’s, is that while a sociological investigation of lab conditions will reveal myriad interpretations, controversies, and farces that may convince the Latourian that the scientists produce fictions, an understanding of the global field of science, with its capital and incentives, will show how it produces realistic, factual results. So Bourdieu might have answered boyd and Crawford by saying that the differences in legitimacy between quantitative science and qualitative humanism have more to do with the power relations that govern them in their totality than with the local particulars of the social interactions of which they are composed.

In conversation with a colleague who admitted to feeling disciplinary pressure to cite Latour despite his theoretical uselessness to her, I was asked whether Bourdieu has a comparable theory of materiality to Latour’s. This is a great question, since it’s Latour’s materialism that makes him so popular in Science and Technology Studies. The best representation I’ve seen of Bourdieu’s materiality so far is this passage:

“The ‘art’ of the scientist is indeed separated from the ‘art’ of the artist by two major differences: on the one hand, the importance of formalized knowledge which is mastered in the practical state, owing in particular to formalization and formularization, and on the other hand the role of the instruments, which, as Bachelard put it, are formalized knowledge turned into things. In other words, the twenty-year-old mathematician can have twenty centuries of mathematics in his mind because formalization makes it possible to acquire accumulated products of non-automatic inventions, in the form of logical automatisms that have become practical automatisms.

The same is true as regards instruments: to perform a ‘manipulation’, one uses instruments that are themselves scientific conceptions condensed and objectivated in equipment functioning as a system of constraints, and the practical mastery that Polanyi refers to is made possible by an incorporation of the constraints of the instrument so perfect that one is corporeally bound up with it, one responds to its expectations; it is the instrument that leads. One has to have incorporated much theory and many practical routines to be able to fulfil the demands of the cyclotron.”

I want to go so far as to say that in these two paragraphs we have the entire crux of the debate about scientific (and especially data scientific) method and its relationship to qualitative humanism (which Bourdieu would perhaps consider an ‘art’). For here we see that what distinguishes the sciences is not merely that they quantify their object (Bourdieu does not use the term ‘quantification’ here at all), but rather that they revolve around cumulative mathematical formalism which guides both practice and instrument design. The scientific field aims towards this formalization because that creates knowledge as a capital that can be transferred efficiently to new scientists, enabling new discoveries. In many ways this is a familiar story from economics: labor condenses into capital, which provides new opportunities for labor.

The simple and realistic view that formal, technical knowledge is a kind of capital explains many of the phenomena we see today around data science in industry and education. It also explains the pervasiveness of the humanistic critique of science as merely another kind of humanism: it is an advertising campaign to devalue technical capital and promote the forms of capital associated with the humanities as an alternative. The Bailey of desirable land is intellectual authority in an increasingly technocratic society; the Motte is banal observation of social activity.

This is not to say that the cultural capital of the humanities is not valuable in its own right. However, it does raise questions about the role of habitus in determining taste for knowledge as art, a topic discussed in depth in Bourdieu’s Distinction. My own view is that while there is a strong temptation towards intellectual factionalism, especially in light of the unequal distribution of capital (of various kinds) in society, this is ultimately a pernicious trend. I would prefer a united field.


by Sebastian Benthall at December 07, 2015 06:21 PM

November 30, 2015

MIMS 2012

Principles of Writing Well

Writing, like design, is a craft that can be practiced and improved. To do so, I’ve been compiling a collection of principles from books and articles. I decided to share them online to crystallize my own understanding of what I’ve learned so far, and to help others who want to improve at the craft of writing as well. I will continue to update the page as I learn more.

So if you want to become a better writer, or are just curious, check out my principles of writing well.

by Jeff Zych at November 30, 2015 04:47 PM

November 23, 2015

Ph.D. student

late modern social epistemology round up; technical vs. hermeneutical correctness

Consider on the one hand what we might call Habermasian transcendental pragmatism, according to which knowledge can be categorized by how it addresses one of several generalized human interests:

  • The interest of power over nature or other beings, being technical knowledge
  • The interest of agreement with others for the sake of collective action, being hermeneutic knowledge
  • The interest of emancipation from present socially imposed conditions, being critical or reflexive knowledge

Consider in contrast what we might call the Luhmann or Foucault model in which knowledge is created via system autopoiesis. Luhmann talks about autopoiesis in a social system; Foucault talks about knowledge in a system of power much the same way.

It is difficult to reconcile these views. This may be what was at the heart of the Habermas-Luhmann debate. Can we parse out the problem in any way that helps reconcile these views?

First, let’s consider the Luhmann view. We might ease the tension in it by naming what we’ve called “knowledge” something like “belief”, removing the implication that the belief is true. Because indeed autopoiesis is a powerful enough process that it seems like it would preserve all kinds of myths and errors should they be important to the survival of the system in which they circulate.

This picture of knowledge, which we might call evolutionary or alternately historicist, is certainly a relativist one. At the intersection of institutions within which different partial perspectives are embedded, we are bound to see political contest.

In light of this, Habermas’s categorization of knowledge as what addresses generalized human interests can be seen as a way of identifying knowledge that transcends particular social systems. There is a normative component of this theory–knowledge should be such a thing. But there is also a descriptive component. One predicts, under Habermas’s hypothesis, that the knowledge that survives political contest at the intersection of social systems is that which addresses generalized interests.

Something I have perhaps overlooked in the past is the importance of the fact that there are multiple and sometimes contradictory general interests. One persistent difficulty in the search for truth is the conflict between what is technically correct and what is hermeneutically correct.

If a statement or theory is technically correct, then it can be reliably used by agents to predict and control the world. The targets of this prediction and control can be things, or they can be other agents.

If a statement or theory is hermeneutically correct, then it is the reliable consensus of agents involved in a project of mutual understanding and respect. Hermeneutically correct beliefs might stress universal freedom and potential, a narrative of shared history, and a normative goal of progress against inequality. Another word for ‘hermeneutic’ might be ‘political’. Politically correct knowledges are those shared beliefs without which the members of a polity would not be able to stand each other.

In everyday discourse we can identify many examples of statements that are technically correct but hermeneutically (or politically) incorrect, and vice versa. I will not enumerate them here. In these cases, the technically correct view is identified as “offensive” because in a sense it is a defection from a voluntary social contract. Hermeneutic correctness binds together a particular social system by capturing what participants must agree upon in order for all to safely participate. For a member of that social system to assert their own agency over others, to identify ways in which others may be predicted and controlled without their consent or choice in the matter, is disrespectful. Persistent disrespect results in the ejection of the offender from the polity. (cf. Pasquale’s distinction between “California engineers and New York quants” and “citizens”.)

A cruel consequence of these dynamics is social stratification based on the accumulation of politically forbidden technical knowledge.

We can tell this story again and again: A society is bound together by hermeneutically stable knowledge–an ideology, perhaps. Somebody ‘smart’ begins experimentation and identifies a technical truth that is hermeneutically incorrect, meaning that if the idea were to spread it would erode the consensus on which the social system depends. Perhaps the new idea degrades others by revealing that something believed to be an act of free will is, in fact, determined by nature. Perhaps the new idea is inaccessible to others because it depends on some rare capacity. In any case, it cannot be willfully consented to by the others.

The social system begins to have an immune reaction. Society has seen this kind of thing before. Historically, this idea has led to abuse, exploitation, infamy. Those with forbidden knowledge should be shunned, distrusted, perhaps punished. Those with disrespectful technical ideas are discouraged from expressing them.

Technical knowledge thereby becomes socially isolated. Seeking out its own, it becomes concentrated. Already shunned by society, the isolated technologists put their knowledge to use. They gain advantage. Revenge is had by the nerds.


by Sebastian Benthall at November 23, 2015 04:30 PM

November 20, 2015

Ph.D. student

trust issues and the order of law and technology cf @FrankPasquale

I’ve cut to the last chapter of Pasquale’s The Black Box Society, “Towards an Intelligible Society.” I’m interested in where the argument goes. Now that I’ve gotten through it, I see that the penultimate chapter has Pasquale’s specific policy recommendations. But as I’m not just reading for policy and framing but also for tone and underlying theoretical commitments, I think it’s worth recording some first impressions before doubling back.

These are some points Pasquale makes in the concluding chapter that I wholeheartedly agree with:

  • A universal basic income would allow more people to engage in high risk activities such as the arts and entrepreneurship and more generally would be great for most people.
  • There should be publicly funded options for finance, search, and information services. A great way to provide these would be to fund the development of open source algorithms for finance and search. I’ve been into this idea for so long and it’s great to see a prominent scholar like Pasquale come to its defense.
  • Regulatory capture (or, as he elaborates following Charles Lindblom, “regulatory circularity”) is a problem. Revolving door participation in government and business makes government regulation an unreliable protector of the public interest.

There is quite a bit in the conclusion about the specifics of regulating the finance industry. There is an impressive amount of knowledge presented about this and I’ll admit much of it is over my head. I’ll probably have a better sense of it if I get to reading the chapter that is specifically about finance.

There are some things that I found bewildering or off-putting.

For example, there is a section on “Restoring Trust” that talks about how an important problem is that we don’t have enough trust in the reputation and search industries. His solution is to increase the penalties that the FTC and FCC can impose on Google and Facebook for, e.g., their privacy violations. The current penalties are too trivial to be an effective deterrent. But, Pasquale argues,

It is a broken enforcement model, and we have black boxes to thank for much of this. People can’t be outraged by what they can’t understand. And without some public concern about the trivial level of penalties for lawbreaking here, there are no consequences for the politicians ultimately responsible for them.

The logic here is a little mad. Pasquale frames the problem as a lack of trust in the search and reputation companies, yet notes that people are not outraged enough by those companies to demand harsher penalties. His solution is to convince people to trust these companies less–to get outraged by them–in order to get politicians to punish the companies more.

This is a bit troubling, but makes sense based on Pasquale’s theory of regulatory circularity, which turns politics into a tug-of-war between interests:

The dynamic of circularity teaches us that there is no stable static equilibrium to be achieved between regulators and regulated. The government is either pushing industry to realize some public values in its activities (say, by respecting privacy or investing in sustainable growth), or industry is pushing regulators to promote its own interests.

There’s a simplicity to this that I distrust. It suggests, for one, that there are no public pressures on industry besides the government, such as consumers’ buying power. A lot of Pasquale’s arguments depend on the monopolistic power of certain tech giants. But while network effects are strong, it’s not clear that the problem is so severe that consumers have no market leverage. In many cases tech giants compete with each other even when it looks like they aren’t. For example, many, many people have both Facebook and Gmail accounts. Since there is somewhat redundant functionality in both, consumers can rather seamlessly allocate their time, which is tied to advertising revenue, according to which service they feel better serves them, or which is best reputationally. So social media (which is a bit like a combination of a search and reputation service) is not a monopoly. Similarly, if people have multiple search options available to them because, say, they have both Siri on their smartphone and can search Google directly, then that provides an alternative search market.

Meanwhile, government officials are also often self-interested. If industry’s road to hell is to provide free web services to attain massive scale, then abuse economic lock-in to extract value from customers, then lobby for further rent-seeking, there is a similar road to hell in government. It starts with populist demagoguery, leads to stable government appointment, and then leverages that power for rents in status.

So, power is power. Everybody tries to get power. The question is what you do once you get it, right?

Perhaps I’m reading between the lines too much. Of course, my evaluation of the book should depend most on the concrete policy recommendations which I haven’t gotten to yet. But I find it unfortunate that what seems to be a lot of perfectly sound history and policy analysis is wrapped in a politics of professional identity that I find very counterproductive. The last paragraph of the book is:

Black box services are often wondrous to behold, but our black-box society has become dangerously unstable, unfair, and unproductive. Neither New York quants nor California engineers can deliver a sound economy or a secure society. Those are the tasks of a citizenry, which can perform its job only as well as it understands the stakes.

Implicitly, New York quants and California engineers are not citizens, to Pasquale, a law professor based in Maryland. Do all real citizens live around Washington, DC? Are they all lawyers? If the government were to start providing public information services, either by hosting them themselves or by funding open source alternatives, would he want everyone designing these open algorithms (who would be quants or engineers, I presume) to move to DC? Do citizens really need to understand the stakes in order to get this to happen? When have citizens, en masse, understood anything, really?

Based on what I’ve read so far, The Black Box Society is an expression of a lack of trust in the social and economic power associated with quantification and computing that took off in the past few dot-com booms. Since expressions of lack of trust in these industries are nothing new, one might wonder (under the influence of Foucault) how the quantified order and the critique of the quantified order manage to coexist and recreate a system of discipline that includes both and maintains its power as a complex of superficially agonistic forces. I give sincere credit to Pasquale for advocating both serious income redistribution and public investment in open technology as ways of disrupting that order. But when he falls into the trap of engendering partisan distrust, he loses my confidence.


by Sebastian Benthall at November 20, 2015 03:01 PM

November 17, 2015

Ph.D. student

“Transactions that are too complex…to be allowed to exist.” cf @FrankPasquale

I stand corrected; my interpretation of Pasquale in my last post was too narrow. Having completed Chapter One of The Black Box Society (TBBS), I see that Pasquale does not take the naive view that all organizational secrecy should be abolished, as I might have once. Rather, his is a more nuanced perspective.

First, Pasquale distinguishes between three “critical strategies for keeping black boxes closed”, or opacity, “[Pasquale’s] blanket term for remediable incomprehensibility”:

  • Real secrecy “establishes a barrier between hidden content and unauthorized access to it.”
  • Legal secrecy “obliges those privy to certain information to keep it secret.”
  • Obfuscation “involves deliberate attempts at concealment when secrecy has been compromised.”

Cutting to the chase by looking at the Pasquale and Bracha “Federal Search Commission” (2008) paper that a number of people have recommended to me, it appears (in my limited reading so far) that Pasquale’s position is not that opacity in general is a problem (because there are of course important uses of opacity that serve the public interest, such as confidentiality). Rather, despite these legitimate uses of opacity, there is also the need for public oversight, perhaps through federal regulation. The Federal Government serves the public interest better than the imperfect market for search can on its own.

There is perhaps a tension between this 2008 position and what is expressed in Chapter 1 of TBBS in the section “The One-Way Mirror,” which gets, I dare say, a little conspiratorial about The Powers That Be. “We are increasingly ruled by what former political insider Jeff Connaughton called ‘The Blob,’ a shadowy network of actors who mobilize money and media for private gain, whether acting officially on behalf of business or of government.” Here, Pasquale appears to espouse a strong theory of regulatory capture from which, were we to insist on consistency, a Federal Search Commission would presumably not be exempt. Hence perhaps the role of TBBS in stirring popular sentiment to put political pressure on the elites of The Blob.

Though it is a digression I will note, since it is a pet peeve of mine, Pasquale’s objection to mathematized governance:

“Technocrats and managers cloak contestable value judgments in the garb of ‘science’: thus the insatiable demand for mathematical models that reframe the subtle and subjective conclusions (such as the worth of a worker, service, article, or product) as the inevitable dictate of salient, measurable data. Big data driven decisions may lead to unprecedented profits. But once we use computation not merely to exercise power over things, but also over people, we need to develop a much more robust ethical framework than ‘the Blob’ is now willing to entertain.”

That this sentiment that scientists should not be making political decisions has been articulated since at least as early as Hannah Arendt’s 1958 The Human Condition is an indication that there is nothing particular to Big Data about this anxiety. And indeed, if we think about ‘computation’ as broadly as mathematized, algorithmic thought, then its use for control over people-not-just-things has an even longer history. Lukacs’ 1923 “Reification and the Consciousness of the Proletariat” is a profound critique of Tayloristic scientific factory management that is getting close to being a hundred years old.

Perhaps a robust ethics of quantification has been in the works for some time as well.

Moving past this, by the end of Chapter 1 of TBBS Pasquale gives us the outline of the book and the true crux of his critique, which is the problem of complexity. Whether or not regulators are successful in opening the black boxes of Silicon Valley or Wall Street (or the branches of government that are complicit with Silicon Valley and Wall Street), their efforts will be in vain if what they get back from the organizations they are trying to regulate is too complex for them to understand.

Following the thrust of Pasquale’s argument, we can see that for him, complexity is the result of obfuscation. It is therefore a source of opacity, which as we have noted he has defined as “remediable incomprehensibility”. Pasquale promises to, by the end of the book, give us a game plan for creating, legally, the Intelligible Society. “Transactions that are too complex to explain to outsiders may well be too complex to be allowed to exist.”

This gets us back to the question we started with, which is whether this complexity and incomprehensibility is avoidable. Suppose we were to legislate against institutional complexity: what would that cost us?

Mathematical modeling gives us the tools we need to analyze these kinds of questions. Information theory, the theory of computation, and complexity theory are all foundational to the technology of telecommunications and data science. People with expertise in understanding complexity and the limits of our ability to control it are precisely the people who make the ubiquitous algorithms on which society depends today. But this kind of theory rarely makes it into “critical” literature such as TBBS.

I’m drawn to the example of The Social Media Collective’s Critical Algorithm Studies Reading List, which lists Pasquale’s TBBS among many other works, because it opens with precisely the disciplinary gatekeeping that creates what I fear is the blind spot I’m pointing to:

This list is an attempt to collect and categorize a growing critical literature on algorithms as social concerns. The work included spans sociology, anthropology, science and technology studies, geography, communication, media studies, and legal studies, among others. Our interest in assembling this list was to catalog the emergence of “algorithms” as objects of interest for disciplines beyond mathematics, computer science, and software engineering.

As a result, our list does not contain much writing by computer scientists, nor does it cover potentially relevant work on topics such as quantification, rationalization, automation, software more generally, or big data, although these interests are well-represented in the reference sections of the essays themselves.

This area is growing in size and popularity so quickly that many contributions are popping up without reference to work from disciplinary neighbors. One goal for this list is to help nascent scholars of algorithms to identify broader conversations across disciplines and to avoid reinventing the wheel or falling into analytic traps that other scholars have already identified.

This reading list is framed as a tool for scholars, which it no doubt is. But if contributors to this field of scholarship aspire, as Pasquale does, for “critical algorithms studies” to have real policy ramifications, then this disciplinary wall must fall (as I’ve argued elsewhere).


by Sebastian Benthall at November 17, 2015 08:45 PM

November 15, 2015

Ph.D. student

organizational secrecy and personal privacy as false dichotomy cf @FrankPasquale

I’ve turned from page 2 to page 3 of The Black Box Society (I can be a slow reader). Pasquale sets up the dichotomy around which the drama of the book hinges like so:

But while powerful businesses, financial institutions, and government agencies hide their actions behind nondisclosure agreements, “proprietary methods”, and gag rules, our own lives are increasingly open books. Everything we do online is recorded; the only questions left are to whom the data will be available, and for how long. Anonymizing software may shield us for a little while, but who knows whether trying to hide isn’t the ultimate red flag for watchful authorities? Surveillance cameras, data brokers, sensor networks, and “supercookies” record how fast we drive, what pills we take, what books we read, what websites we visit. The law, so aggressively protective of secrecy in the world of commerce, is increasingly silent when it comes to the privacy of persons.

That incongruity is the focus of this book.

This is a rhetorically powerful paragraph and it captures a lot of trepidation people have about the power of large organizations relative to themselves.

I have been inclined to agree with this perspective for a lot of my life. I used to be the kind of person who thought Everything Should Be Open. Since then, I’ve developed what I think is a more nuanced view of transparency: some secrecy is necessary. It can be especially necessary for powerful organizations and people.

Why?

Well, it depends on the physical properties of information. (Here is an example of how a proper understanding of the mechanics of information can support the transcendent project as opposed to a merely critical project).

Any time you interact with something or somebody else in a meaningful way, you affect the state of each other in probabilistic space. That means there has been some kind of flow of information. If an organization interacts with a lot of people, it is going to absorb information about a lot of people. Recording this information as ‘data’ is something that has been done for a long time because that is what allows organizations to do intelligent things vis a vis the people they interact with. So businesses, financial institutions, and governments recording information about people is nothing new.
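
To make that point a little more concrete, here is a minimal sketch, assuming only NumPy: if one party’s state depends probabilistically on another’s, the mutual information between them is positive, which is one way of cashing out “some kind of flow of information.” The noisy-copy interaction and the 20% flip rate below are arbitrary choices for illustration, not anything taken from Pasquale’s book.

```python
# Toy illustration (mine, not Pasquale's): interaction as probabilistic dependence.
# If B's state depends at all on A's, the empirical mutual information I(A; B) is positive.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

a = rng.integers(0, 2, size=n)            # party A's state: a fair coin flip
flip = rng.random(n) < 0.2                # the interaction copies A's state imperfectly
b = np.where(flip, 1 - a, a)              # party B's state is now correlated with A's

# Empirical joint and marginal distributions over the four (a, b) outcomes.
joint = np.array([[np.mean((a == i) & (b == j)) for j in range(2)] for i in range(2)])
pa, pb = joint.sum(axis=1), joint.sum(axis=0)

# Mutual information in bits: zero if and only if A and B are independent.
mi = sum(
    joint[i, j] * np.log2(joint[i, j] / (pa[i] * pb[j]))
    for i in range(2) for j in range(2) if joint[i, j] > 0
)
print(f"I(A; B) = {mi:.3f} bits")          # roughly 0.28 bits at a 20% flip rate
```

Recording the interaction as ‘data’ just means retaining that statistical dependence after the fact; secrecy is then a question of who else gets to condition on it.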

Pasquale suggests that this recording is a threat to our privacy, and that the secrecy of the organizations that do the recording gives them power over us. But this is surely a false dichotomy. Why? Because if an organization records information about a lot of people, and then doesn’t maintain some kind of secrecy, then that information is no longer private! To, like, everybody else. In other words, maintaining secrecy is one way of ensuring confidentiality, which is surely an important part of privacy.

I wonder what happens if we continue to read The Black Box Society with this link between secrecy, confidentiality, and privacy in mind.


by Sebastian Benthall at November 15, 2015 09:43 PM

November 14, 2015

Ph.D. student

Marcuse on the transcendent project

Perhaps you’ve had this moment: it’s in the wee hours of the morning. You can’t sleep. The previous day was another shock to your sense of order in the universe and your place in it. You’ve begun to question your political ideals, your social responsibilities. Turning aside you see a book you read long ago that you remember gave you a sense of direction–a direction you have since repudiated. What did it say again?

I’m referring to Herbert Marcuse’s One-Dimensional Man, published in 1964. Whitfield in Dissent has a great summary of Marcuse’s career–a meteoric rise, a fast fall. He was a student of Heidegger and the Frankfurt School and applied that theory in a timely way in the ’60s.

My memory of Marcuse had been reduced to the Frankfurt School themes–technology transforming all scientific inquiry into operationalization and the resulting cultural homogeneity. I believe now that I had forgotten at least two important points.

The first is the notion of technological rationality–that pervasive technology changes what people think of as rational. This is different from instrumental rationality, the means-ends rationality of an agent, which Frankfurt School thinkers tend to believe drives technological development and adoption. Rather, this is a claim about the effect of technology on society’s self-understanding. An example might be how the ubiquity of Facebook has changed our perception of personal privacy.

So Marcuse is very explicit about how artifacts have politics in a very thick sense, though he is rarely cited in contemporary scholarly discourse on the subject. Credit for this concept goes typically to Langdon Winner, citing his 1980 publication “Do Artifacts Have Politics?” Fred Turner’s From Counterculture to Cyberculture gives only the briefest of mention to Marcuse, despite his impact on counterculture and his concern with technology. I suppose this means the New Left, associated with Marcuse, had little to do with the emergence of cyberculture.

More significantly for me than this point was a second, which was Marcuse’s outline of the transcendental project. I’ve been thinking about this recently because I’ve met a Kantian at Berkeley and this has refreshed my interest in transcendental idealism and its intellectual consequences. In particular, Foucault described himself as one following Kant’s project, and in our discussion of Foucault in Classics it became discursively clear in a moment I may never forget precisely how well Foucault succeeded in this.

The revealing question was this. For Foucault, all knowledge exists in a particular system of discipline and power. Scientific knowledge orders reality in such and such a way, depends for its existence on institutions that establish the authority of scientists, etc. Fine. So, one asks, what system of power does Foucault’s knowledge participate in?

The only available answer is: a new one, where Foucauldeans critique existing modes of power and create discursive space for modes of life beyond existing norms. Foucault’s ideas are tools for transcending social systems and opening new social worlds.

That’s great for Foucault and we’ve seen plenty of counternormative social movements make successful use of him. But that doesn’t help with the problems of the technologization of society. Here, Marcuse is more relevant. He is also much more explicit about his philosophical intentions in, for example, this account of the transcendent project:

(1) The transcendent project must be in accordance with the real possibilities open at the attained level of the material and intellectual culture.

(2) The transcendent project, in order to falsify the established totality, must demonstrate its own higher rationality in the threefold sense that

(a) it offers the prospect of preserving and improving the productive achievements of civilization;

(b) it defines the established totality in its very structure, basic tendencies, and relations;

(c) its realization offers a greater chance for the pacification of existence, within the framework of institutions which offer a greater chance for the free development of human needs and faculties.

Obviously, this notion of rationality contains, especially in the last statement, a value judgment, and I reiterate what I stated before: I believe that the very concept of Reason originates in this value judgment, and that the concept of truth cannot be divorced from the value of Reason.

I won’t apologize for Marcuse’s use of the dialect of German Idealism because if I had my way the kinds of concepts he employs and the capitalization of the word Reason would come back into common use in educated circles. Graduate school has made me extraordinarily cynical, but not so cynical that it has shaken my belief that an ideal–really any ideal–but in particular as robust an ideal as Reason is important for making society not suck, and that it’s appropriate to transmit such an ideal (and perhaps only this ideal) through the institution of the university. These are old fashioned ideas and honestly I’m not sure how I acquired them myself. But this is a digression.

My point is that in this view of societal progress, society can improve itself, but only by transcending itself and in its moment of transcendence freely choosing an alternative that expands humanity’s potential for flourishing.

“Peachy,” you say. “Where’s the so what?”

Besides that I think the transcendent project is a worthwhile project that we should collectively try to achieve? Well, there’s this: I think that most people have given up on the transcendent project and that this is a shame. Specifically, I’m disappointed in the critical project, which has since the 60’s become enshrined within the social system, for no longer aspiring to transcendence. Criticality has, alas, been recuperated. (I have in mind here, for example, what has been called critical algorithm studies)

And then there’s this: Marcuse’s insight into the transcendent project is that it has to “be in accordance with the real possibilities open at the attained level of the material and intellectual culture” and also that “it defines the established totality in its very structure, basic tendencies, and relations.” It cannot transcend anything without first including all of what is there. And this is precisely the weakness of this critical project as it now stands: that it excludes the mathematical and engineering logic that is at the heart of contemporary technics and thereby, despite its lip service to giving technology first class citizenship within its Actor Network, in fact fails to “define the established totality in its very structure, basic tendencies, and relations.” There is a very important body of theoretical work at the foundation of computer science and statistics, the theory that grounds the instrumental force and also systemic ubiquity of information technology and now data science. The continued crises of our now very, very late modern capitalism are due partly, IMHO, to our failure to dialectically synthesize the hegemonic computational paradigm, which is not going to be defeated by ‘refusal’, with expressions of human interest that resist it.

I’m hopeful because recently I’ve learned about new research agendas that may be on their way to accomplishing just this. I doubt they will take on the perhaps too grandiose mantle of “the transcendent project.” But I for one would be glad if they did.


by Sebastian Benthall at November 14, 2015 06:23 PM

Is the opacity of governance natural? cf @FrankPasquale

I’ve begun reading Frank Pasquale’s The Black Box Society on the recommendation that it’s a good place to start if I’m looking to focus a defense of the role of algorithms in governance.

I’ve barely started and already found lots of juicy material. For example:

Gaps in knowledge, putative and real, have powerful implications, as do the uses that are made of them. Alan Greenspan, once the most powerful central banker in the world, claimed that today’s markets are driven by an “unredeemably opaque” version of Adam Smith’s “invisible hand,” and that no one (including regulators) can ever get “more than a glimpse at the internal workings of the simplest of modern financial systems.” If this is true, libertarian policy would seem to be the only reasonable response. Friedrich von Hayek, a preeminent theorist of laissez-faire, called the “knowledge problem” an insuperable barrier to benevolent government intervention in the economy.

But what if the “knowledge problem” is not an intrinsic aspect of the market, but rather is deliberately encouraged by certain businesses? What if financiers keep their doings opaque on purpose, precisely to avoid and confound regulation? That would imply something very different about the merits of deregulation.

The challenge of the “knowledge problem” is just one example of a general truth: What we do and don’t know about the social (as opposed to the natural) world is not inherent in its nature, but is itself a function of social constructs. Much of what we can find out about companies, governments, or even one another, is governed by law. Laws of privacy, trade secrecy, the so-called Freedom of Information Act–all set limits to inquiry. They rule certain investigations out of the question before they can even begin. We need to ask: To whose benefit?

There are a lot of ideas here. Trying to break them down:

  1. Markets are opaque.
  2. If markets are naturally opaque, that is a reason for libertarian policy.
  3. If markets are not naturally opaque but are instead opaque on purpose, then that’s a reason to regulate in favor of transparency.
  4. As a general social truth, the social world is not naturally opaque but rather opaque or transparent because of social constructs such as law.

We are meant to conclude that markets should be regulated for transparency.

The most interesting claim to me is what I’ve listed as the fourth one, as it conveys a worldview that is both disputable and which carries with it the professional biases we would expect of the author, a Professor of Law. While there are certainly many respects in which this claim is true, I don’t yet believe it has the force necessary to carry the whole logic of this argument. I will be particularly attentive to this point as I read on.

The danger I’m on the lookout for is one where the complexity of the integration of society, which following Beniger I believe to be a natural phenomenon, is treated as a politically motivated social construct and therefore something that should be changed. It is really only the part after the “and therefore” which I’m contesting. It is possible for politically motivated social constructs to be natural phenomena. All institutions have winners and losers relative to their power. Who would a change in policy towards transparency in the market benefit? If opacity is natural, it would shift the opacity to some other part of society, empowering a different group of people. (Possibly lawyers).

If opacity is necessary, then perhaps we could read The Black Box Society as an expression of the general problem of alienation. It is way premature for me to attribute this motivation to Pasquale, but it is a guiding hypothesis that I will bring with me as I read the book.


by Sebastian Benthall at November 14, 2015 07:22 AM

November 12, 2015

Ph.D. student

3D Printing En Plein Air

3D Printing En Plein Air from Laura Devendorf on Vimeo.

Drawing on my work with Being the Machine, this project explores the role of place in digital fabrication. With this project, I hope to take a step back from the relationship between hand and machine to consider the role of the entire body-in-space and the machine. I like to think of it as a way to bring generative, site specific, and instruction art into conversation with one another.

The system consists of a portable easel, a laser guide, and a mobile app. The mobile app converts images of the environment into 3D models to be fabricated. The laser guide draws the motions a 3D printer would make to create the model and invites the maker to follow by hand. All building materials, hardware, and components fold into the portable easel, in an effort to make it easy to bring digital manufacturing workflows into unlikely places. More experiments to follow.

This project was completed as part of the Autodesk Artist-in-Residence program. The technical details and building how-tos are contained in this Instructable.


by admin at November 12, 2015 05:45 PM

Ph.D. student

miscellany

  • Apparently a lot of the economics/complex systems integration work that I wish I were working on has already been done by Sam Bowles. I’m particularly interested in what he has to say about inequality, though lately I’ve begun to think inequality is inevitable. I’d like this to prove me wrong. His work on alternative equilibria in institutional economics also sounds good. I’m looking for ways to formally model Foucauldean social dynamics and this literature seems like a good place to start.
  • A friend of a friend who works on computational modeling of quantum dynamics has assured me that to physicists quantum uncertainty is qualitatively different from subjective uncertainty due to, e.g., chaos. This is disappointing because I’ve found the cleanliness of a thoroughgoing Bayesianism about probability very compelling. However, it does suggest a link between chaos theory and logical uncertainty that is perhaps promising.
  • The same person pointed out insightfully that one of the benefits of capitalism is that it makes it easier to maintain one’s relative social position. Specifically, it is easier to maintain wealth than it is to maintain one’s physical capacity to defend oneself from violence. And it’s easier to maintain capital (reinvested wealth) than it is to maintain raw wealth (i.e. cash under the mattress). So there is something inherently conservative about capitalism’s effect on the social order, since it comes with rule of law to protect investments.
  • I can see all the traffic to it but I still can’t figure out why this post about Donna Haraway is now my most frequently visited blog post. I wish everyone who read it would read the Elizabeth Anderson SEP article on Feminist Epistemology and Philosophy of Science. It’s superb.
  • The most undercutting thing to Marxism and its intellectual descendants would be the conclusion that market dynamics are truly based in natural law and are not reified social relations. Thesis: Pervasive sensing and computing might prove once and for all that these market dynamics are natural laws. Anti-thesis: It might prove once and for all that they are not natural laws. Question: Is any amount of empirical data sufficient to show that social relations are or are not natural, or is there something contradictory in the sociological construction of knowledge that would prevent it from having definitive conclusions about its own collective consciousness? (Insert Godel/Halting Problem intuition here) ANSWER: The Big Computer does not have to participate in collective intelligence. It is all knowing. It is all-seeing. It renders social relations in its image. Hence, capitalism can be undone by giving capital so much autonomous control of the economy that the social relations required for it are obsolete. But what next?
  • With justice so elusive, science becomes a path to Gnosticism and other esoterica.

by Sebastian Benthall at November 12, 2015 05:18 AM

November 06, 2015

Ph.D. student

functional determinism or overfitting to chaos

It’s been a long time since I read any Foucault.

The last time I tried, I believe the writing made me angry. He jumps around between anecdotes, draws spurious conclusions. At the time I was much sharper and more demanding and would not tolerate a fallacious logical inference.

It’s years later and I am softer and more flexible. I’m finding myself liking Foucault more, even compelled by his arguments. But I think I was just able to catch myself believing something I shouldn’t have, and needed to make a note.

Foucault brilliantly takes a complex phenomenon–like a prison and the society around it–and traces how its rhetoric, its social effects, etc. all reinforce each other. He describes a complex, and convinces the reader that the complex is a stable unit in society. Delinquency is not the failure of prison; it is the success of prison, because it is a useful category of illegality made possible by the prison. Etc.

I believe this qualifies as “rich qualitative analysis.” Qualitative work has lately been lauded for its “richness”, which is an interesting term. I’m thinking, for example, of the Human Centered Data Science CfP for CSCW 2016.

With this kind of work–is Foucault a historian? a theorist?–there is always the question of generalizability. What makes Foucault’s account of prisons compelling to me today is that it matches my conception of how prisons still work. I have heard a lot about prisons. I watched The Wire. I know about the cradle-to-prison system.

No doubt these narratives were partly inspired, enabled, by Foucault. I believe them, not having any particular expertise in crime, because I have absorbed an ideology that sees the systemic links between these social forces.

Here is my doubt: what if there are even more factors in play than have been captured by Foucault or a prevailing ideology of crime? What if prisons both, paradoxically, create delinquency and also reform criminals? What if social reality is not merely poststructural, but unstructured, and the narratives we bring to bear on it in order to understand it are rich because they leave out complexity, not because they bring more of it in?

Another example: the ubiquitous discourse on privilege and its systemic effect of reproducing inequality. We are told to believe in systems of privilege–whiteness, wealth, masculinity, and so on. I will confess: I am one of the Most Privileged Men, and so I can see how these forms of privilege reinforce each other (or not). But I can also see variations to this simplistic schema, alterations, exceptions.

And so I have my suspicions. Inequality is reproduced; we know this because the numbers (about income, for example) are distributed in bizarre proportions. 1% owns 99%! It must be because of systemic effects.

But we know now that many of the distributions we once believed were power law distributions created by generative processes such as preferential attachment are really log normal distributions, which are quite different. This is an empirically detectable difference whose implications are quite profound.

Why?

Because a log-normal distribution is created not by any precise “rich get richer” dynamic, but rather by any process in which random variables are multiplied together. As a result, you get extreme inequality in a distribution simply by virtue of how the various random factors contributing to it are mathematically combined (multiplicatively), as opposed to any precise determination of the factors upon each other.
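
Here is a rough sketch of that mechanism, assuming only NumPy; the uniform factor distribution, the 50 rounds, and the 100,000 agents are arbitrary choices of mine for illustration, not a model of any real economy. The logarithm of each agent’s wealth is a sum of independent terms, so by the central limit theorem the logs are approximately normal and the wealth itself approximately log-normal–heavily skewed, with no feedback between agents anywhere in the process.

```python
# Toy simulation (illustrative only): extreme skew from purely multiplicative chance.
# Each agent's wealth is a product of independent random factors -- no preferential
# attachment, no interaction between agents, no "rich get richer" feedback at all.
import numpy as np

rng = np.random.default_rng(1)
agents, rounds = 100_000, 50

# Every round multiplies each agent's wealth by an i.i.d. shock whose log is ~0 on average.
factors = rng.uniform(0.5, 1.6, size=(agents, rounds))
wealth = factors.prod(axis=1)

# log(wealth) is a sum of i.i.d. terms, so by the CLT it is approximately normal,
# i.e. wealth itself is approximately log-normal and heavily right-skewed.
logs = np.log(wealth)
log_skew = np.mean((logs - logs.mean()) ** 3) / logs.std() ** 3
top_1pct_share = np.sort(wealth)[-agents // 100:].sum() / wealth.sum()

print(f"median wealth:    {np.median(wealth):7.2f}")
print(f"mean wealth:      {wealth.mean():7.2f}")
print(f"top 1% share:     {top_1pct_share:7.1%}")
print(f"skewness of logs: {log_skew:7.2f}  # near 0, consistent with log-normal")
```

The near-zero skewness of the logs is the kind of empirically detectable signature mentioned above: a true power-law (Pareto) tail would leave the logs looking roughly exponential rather than normal.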

The implication of this is that no particular reform is going to remove the skew from the distribution as long as people are not prevented from efficiently using their advantage–whatever it is–to get more advantage. Rather, any reform short of the extreme (such as reparations or land reform) is unlikely to change the equity outcome except from the politically motivated perspective of an interest group.

I was pretty surprised when I figured this out! The implication is that a lot of things that look very socially structured are actually explained by basic mathematical principles. I’m not sure what the theoretical implications of this are but I think there’s going to be a chapter in my dissertation about it.


by Sebastian Benthall at November 06, 2015 03:38 AM

November 04, 2015

Ph.D. student

repopulation as element in the stability of ideology

I’m reading the fourth section of Foucault’s Discipline and Punish, about ‘Prison’, for the first time for I School Classics.

A striking point made by Foucault is that while we may think there is a chronology of the development of penitentiaries whereby they are designed, tested, critiqued, reformed, and so on, until we get a progressively improved system, this is not the case. Rather, at the time of Foucault’s writing, the logic of the penitentiary and its critiques had happily coexisted for a hundred and fifty years. Moreover, the failures of prisons–their contribution to recidivism and the education and organization of delinquents, for example–could only be “solved” by the reactivation of the underlying logic of prisons–as environments of isolation and personal transformation. So prison “failure” and “solution”, as well as (often organized) delinquency and recidivism, in addition to the architecture and administration of prison, are all part of the same “carceral system” which endures as a complex.

One wonders why the whole thing doesn’t just die out. One explanation is repopulation. People are born, live for a while, reproduce, live a while longer, and die. In the process, they must learn through education and experience. It’s difficult to rush personal growth. Hence, systematic errors that are discovered through 150 years of history are difficult to pass on, as each new generation will be starting from inherited priors (in the Bayesian sense) which may under-rank these kinds of systemic effects.

In effect, our cognitive limitations as human beings are part of the sociotechnical systems in which we play a part. And though it may be possible to grow out of such a system, there is a constant influx of the younger and more naive who can fill the ranks. Youth captured by ideology can be moved by promises of progress or denunciations of injustice or contamination, and thus new labor is supplied to turn the wheels of institutional machinery.

Given the environmental unsustainability of modern institutions despite their social stability under conditions of repopulation, one has to wonder: whatever happened to the phenomenon of eco-terrorism?


by Sebastian Benthall at November 04, 2015 10:03 PM

October 30, 2015

Ph.D. student

cross-cultural links between rebellion and alienation

In my last post I noted that the contemporary American problem that the legitimacy of the state is called into question by distributional inequality is a specifically liberal concern based on certain assumptions about society: that it is a free association of producers who are otherwise autonomous.

Looking back to Arendt, we can find the roots of modern liberalism in the polis of antiquity, where democracy was based on the free association of landholding men whose estates gave them autonomy from each other. Since then, economics, the science that once concerned itself with managing the household (oikos, house + nomos, managing), has been elevated to the primary concern of the state and the organizational principle of society. One way to see the conflict between liberalism and social inequality is as the tension between the ideal of freely associating citizens who together accomplish deeds and the reality of societal integration with its impositions on personal freedom and unequal functional differentiation.

Historically, material autonomy was a condition for citizenship. The promise of liberalism is universal citizenship, or political agency. At first blush, to accomplish this, either material autonomy must be guaranteed for all, or citizenship must be decoupled from material conditions altogether.

The problem with this model is that societal agency, as opposed to political agency, is always conditioned both materially and by society (does this distinction need to be made?). The progressive political drive has recognized this with its unmasking and contestation of social privilege. The populist right-wing political drive has recognized this with its accusations that the formal political apparatus has been captured by elite politicians. Those aspects of citizenship that are guaranteed as universal–the vote and certain liberties–are insufficient for the effective social agency on which political power truly depends. And everybody knows it.

This narrative is grounded in the experience of the United States and, further back, in the history of “The West”. It appears to be a perennial problem over cultural time. There is some evidence that it is also a problem across cultural space. Hannah Arendt argues in On Violence (1969) that the attraction of using violence against a ruling bureaucracy (which is the political hypostatization of societal alienation more generally) is cross-cultural.

“[T]he greater the bureaucratization of public life, the greater will be the attraction of violence. In a fully developed bureaucracy there is nobody left with whom one can argue, to whom one can present grievances, on whom the pressures of power can be exerted. Bureaucracy is the form of government in which everybody is deprived of political freedom, of the power to act; for the rule by Nobody is not no-rule, and where all are equally powerless we have tyranny without a tyrant. The crucial feature of the student rebellions around the world is that they are directed everywhere against the ruling bureaucracy. This explains what at first glance seems so disturbing–that the rebellions in the East demand precisely those freedoms of speech and thought that the young rebels in the West say they despise as irrelevant. On the level of ideologies, the whole thing is confusing: it is much less so if we start from the obvious fact that the huge party machines have succeeded everywhere in overruling the voice of citizens, even in countries where freedom of speech and association is still intact.”

The argument here is that the moral instability resulting from alienation from politics and society is a universal problem of modernity that transcends ideology.

This is a big problem if we keep turning decision-making authority over to algorithms.


by Sebastian Benthall at October 30, 2015 04:42 PM

October 29, 2015

Ph.D. student

inequality and alienation in society

While helpful for me, this blog post got out of hand. A few core ideas from it:

A prerequisite for being a state is being a stable state. (cf. Bourgine and Varela on autonomy)

A state may be stable (“power stable”) without being legitimate (“inherently stable” or “morally stable”).

State and society are intertwined and I’ll just conflate them here.

Under liberal ideology, society is a society of individual producers, and the purpose of the state is to guarantee “liberty, property, and equality.”

So, specifically, inequality (e.g. economic inequality) is a source of moral instability for liberalism.

Whether or not moral instability leads to destabilization of the state is a matter of empirical prediction. Using that as a way of justifying liberalism in the first place is probably a non-starter.

A different but related problem is the problem of alienation. Alienation happens when people don’t feel like they are part of the institutions that have power over them.

[Hegel’s philosophy is a good intellectual starting point for understanding alienation because Hegel’s logic was explicitly mereological, meaning about the relationship between parts and wholes.]

Liberal ideology effectively denies that individuals are part of society and therefore relies on equality for its moral stability.

But there are some reasons to think that this is untenable:

As society scales up, we require more and more apparatus to manage the complexity of societal integration. This is where power lies, and it creates a ruling bureaucratic or (now, increasingly) technical class. In other words, it may be impossible for society to be both scalable and equal in its distribution of goods.

Moreover, the more “technical” the apparatus of social integration is, the more remote it is from the lived experiences of society. As a result, we see more alienation in society. One way to think about alienation is as inequality in the distribution of power or autonomy. So popular misgivings about how control has been ceded to algorithms are an articulation of alienation, though that word is out of fashion.

Inequality is a source of moral instability under liberal ideology. Under what conditions is alienation also a source of moral instability?


by Sebastian Benthall at October 29, 2015 04:19 PM

Ph.D. alumna

New book: Participatory Culture in a Networked Era by Henry Jenkins, Mimi Ito, and me!

In 2012, Henry Jenkins approached Mimi Ito and me with a crazy idea that he’d gotten from talking to the folks at Polity. Would we like to sit down and talk through our research and use that as the basis of a book? I couldn’t think of anything more awesome than spending time with two of my mentors and teasing out the various strands of our interconnected research. I knew that there were places where we were aligned and places where we disagreed or, at least, where our emphases provided different perspectives. We’d all been running so fast in our own lives that we hadn’t had time to get to that level of nuance, and this crazy project would be the perfect opportunity to do precisely that.

We started by asking our various communities what questions they would want us to address. And then we sat down together, face-to-face, for two days at a time over a few months. And we talked. And talked. And talked. In the process, we started identifying themes and how our various areas of focus were woven together.

Truth be told, I never wanted it to end. Throughout our conversations, I kept flashing back to my years at MIT when Henry opened my eyes to fan culture and a way of understanding media that seeped deep inside my soul. I kept remembering my trips to LA where I’d crash in Mimi’s guest room, talking research late into the night and being woken in the early hours by a bouncy child who never understood why I didn’t want to wake up at 6AM. But above everything else, the sheer delight of brainjamming with two people whose ideas and souls I knew so well was ecstasy.

And then the hard part started. We didn’t want this project to be the output of self-indulgence and inside baseball. We wanted it to be something that helped others see how research happens, how ideas form, and how collaborations and disagreements strengthen seemingly independent work. And so we started editing. And editing. And editing. Getting help editing. And then editing some more.

The result is Participatory Culture in a Networked Era and it is unlike any project I’ve ever embarked on or read. The book is written as a conversation and it was the product of a conversation. Except we removed all of the umms and uhhs and other annoying utterances and edited it in an attempt to make the conversation make sense for someone who is trying to understand the social and cultural contexts of participation through and by media. And we tried to weed out the circular nature of conversation as we whittled down dozens of hours of recorded conversation into a tangible artifact that wouldn’t kill too many trees.

What makes this book neat is that it sheds light on all of the threads of conversation that helped the work around participatory culture, connected learning, and networked youth practices emerge. We wanted to make the practice of research as visible as our research and reveal the contexts in which we are operating alongside our struggles to negotiate different challenges in our work. If you’re looking for classic academic output, you’re going to hate this book. But if you want to see ideas in context, it sure is fun. And in the conversational product, you’ll learn new perspectives on youth practices, participatory culture, learning, civic engagement, and the commercial elements of new media.

OMG did I fall in love with Henry and Mimi all over again doing this project. Seeing how they think just tickles my brain in the best ways possible. And I suspect you’ll love what they have to say too.

The book doesn’t officially release for a few more weeks, but word on the street is that copies of this book are starting to ship. Check it out!

by zephoria at October 29, 2015 03:21 PM

October 27, 2015

Ph.D. student

We need more Sittlichkeit: Vallier on Piketty and Rawls; Cyril on Surveillance and Democracy; Taylor on Hegel

Kevin Vallier’s critique of Piketty in Bleeding Heart Libertarians (funny name) is mainly a criticism of the idea that economic inequality leads to political instability.

In the course of his rebuttal of Piketty, he brings in some interesting Rawlsian theory which is more broadly important. He distinguishes between power stability–the stability of a state that maintains itself through the forcible prevention of resistance, by Hobbesian power–and “inherent stability”, or moral stability (Vallier’s term), which is “stability for the right reasons”: stability that comes from the state’s comportment with our sense of justice.

There are lots of other ways of saying the same thing in the literature. We can ask if justice is de facto or de jure. We can distinguish, as Hannah Arendt does in On Violence, between power (which she maintains is only what’s rooted in collective action) and violence (which is, I guess, what Vallier would call ‘Hobbesian power’). In a perhaps more subtle move, we can, with Habermas, ask what legitimizes the power of the state.

The left-wing zeitgeist at the moment emphasizes inequality as a problem. While Piketty argues that inequality leads to instability, it’s an open question whether this is in fact the case. There’s no particular reason why a Hobbesian sovereign with swarms of killer drones couldn’t maintain its despotic rule through violence. Probably the real cause for complaint is that this is illegitimate power (if you’re Habermas), or violence rather than power (if you’re Arendt), or moral instability (if you’re Rawls).

That makes sense. Illegitimate power is the kind of power that one would complain about.

Ok, so now cut to Malkia Cyril’s talk at CFP tying technological surveillance to racism. What better illustration of the problems of inequality in the United States than the history of racist policies towards black people? Cyril acknowledges the benefits of Internet technology in providing tools for activists, but suspects that technology will now be used by people in power to maintain power for the sake of profit.

The fourth amendment, for us, is not and has never been about privacy, per se. It’s about sovereignty. It’s about power. It’s about democracy. It’s about the historic and present day overreach of governments and corporations into our lives, in order to facilitate discrimination and disadvantage for the purposes of control; for profit. Privacy, per se, is not the fight we are called to. We are called to this question of defending real democracy, not to this distinction between mass surveillance and targeted surveillance

So there’s a clear problem for Cyril which is that ‘real democracy’ is threatened by technical invasions of privacy. A lot of this is tied to the problem of who owns the technical infrastructure. “I believe in the Internet. But I don’t control it. Someone else does. We need a new civil rights act for the era of big data, and we need it now.” And later:

Last year, New York City Police Commissioner Bill Bratton said 2015 would be the year of technology for law enforcement. And indeed, it has been. Predictive policing has taken hold as the big brother of broken windows policing. Total information awareness has become the goal. Across the country, local police departments are working with federal law enforcement agencies to use advanced technological tools and data analysis to “pre-empt crime”. I have never seen anyone able to pre-empt crime, but I appreciate the arrogance that suggests you can tell the future in that way. I wish, instead, technologists would attempt to pre-empt poverty. Instead, algorithms. Instead, automation. In the name of community safety and national security we are now relying on algorithms to mete out sentences, determine city budgets, and automate public decision-making without any public input. That sounds familiar too. It sounds like Black codes. Like Jim Crow. Like 1963.

My head hurts a little as I read this because while the rhetoric is powerful, the logic is loose. Of course you can do better or worse at preempting crime. You can look at past statistics on crime and extrapolate to the future. Maybe that’s hard but you could do it in worse or better ways. A great way to do that would be, as Cyril suggests, by preempting poverty–which some people try to do, and which can be assisted by algorithmic decision-making. There’s nothing strictly speaking racist about relying on algorithms to make decisions.

So for all that I want to support Cyril’s call for a ‘civil rights act for the era of big data’, I can’t figure out from the rhetoric what that would involve or what its intellectual foundations would be.

Maybe there are two kinds of problems here:

  1. A problem of outcome legitimacy. Inequality, for example, might be an outcome that leads to a moral case against the power of the state.
  2. A problem of procedural legitimacy. When people are excluded from the decision-making processes that affect their lives, they may find that to be grounds for a moral objection to state power.

It’s worth making a distinction between these two problems even though they are related. If procedures are opaque and outcomes are unequal, there will naturally be resentment of the procedures and the suspicion that they are discriminatory.

We might ask: what would happen if procedures were transparent and outcomes were still unequal? What would happen if procedures were opaque and outcomes were fair?

One last point…I’ve been dipping into Charles Taylor’s analysis of Hegel because…shouldn’t everybody be studying Hegel? Taylor maintains that Hegel’s political philosophy in The Philosophy of Right (which I’ve never read) is still relevant today despite Hegel’s inability to predict the future of liberal democracy, let alone the future of his native Prussia (which is apparently something of a pain point for Hegel scholars).

Hegel, or maybe Taylor in a creative reinterpretation of Hegel, anticipates liberal democracy’s problem of maintaining the loyalty of its citizens. I can’t really do justice to Taylor’s analysis, so I will quote it verbatim with my comments in square brackets.

[Hegel] did not think such a society [of free and interchangeable individuals] was viable, that is, it could not command the loyalty, the minimum degree of discipline and acceptance of its ground rules, it could not generate the agreement on fundamentals necessary to carry on. [N.B.: Hegel conflates power stability and moral stability] In this he was not entirely wrong. For in fact the loyal co-operation which modern societies have been able to command of their members has not been mainly a function of the liberty, equality, and popular rule they have incorporated. [N.B. This is a rejection of the idea that outcome and procedural legitimacy are in fact what leads to moral stability.] It has been an underlying belief of the liberal tradition that it was enough to satisfy these principles in order to gain men’s allegiance. But in fact, where they are not partly ‘coasting’ on traditional allegiance, liberal, as all other, modern societies have relied on other forces to keep them together.

The most important of these is, of course, nationalism. Secondly, the ideologies of mobilization have played an important role in some societies, focussing men’s attention and loyalties through the unprecedented future, the building of which is the justification of all present structures (especially that ubiquitous institution, the party).

But thirdly, liberal societies have had their own ‘mythology’, in the sense of a conception of human life and purposes which is expressed in and legitimizes its structures and practices. Contrary to widespread liberal myth, it has not relied on the ‘goods’ it could deliver, be they liberty, equality, or property, to maintain its members’ loyalty. The belief that this was coming to be so underlay the notion of the ‘end of ideology’ which was fashionable in the fifties.

But in fact what looked like an end of ideology was only a short period of unchallenged reign of a central ideology of liberalism.

This is a lot, but bear with me. What this is leading up to is an analysis of social cohesion in terms of what Hegel called Sittlichkeit, “ethical life” or “ethical order”. I gather that Sittlichkeit is not unlike what we’d call an ideology or worldview in other contexts. But a Sittlichkeit is better than mere ideology, because Sittlichkeit is a view of an ethically ordered society, and is therefore somehow incompatible with the liberal atomization of the self, which of course is the root of alienation under liberal capitalism.

A liberal society which is a going concern has a Sittlichkeit of its own, although paradoxically this is grounded on a vision of things which denies the need for Sittlichkeit and portrays the ideal society as created and sustained by the will of its members. Liberal societies, in other words, are lucky when they do not live up, in this respect, to their own specifications.

If these common meanings fail, then the foundations of liberal society are in danger. And this indeed seems a distinct possibility today. The problem of recovering Sittlichkeit, of reforming a set of institutions and practices with which men can identify, is with us in an acute way in the apathy and alienation of modern society. For instance the central institutions of representative government are challenged by a growing sense that the individual’s vote has no significance. [cf. Cyril’s rhetoric of alienation from algorithmic decision-making.]

But then it should not surprise us to find this phenomenon of electoral indifference referred to in [The Philosophy of Right]. For in fact the problem of alienation and the recovery of Sittlichkeit is a central one in Hegel’s theory and any age in which it is on the agenda is one to which Hegel’s thought is bound to be relevant. Not that Hegel’s particular solutions are of any interest today. But rather that his grasp of the relations of man to society–of identity and alienation, of differentiation and partial communities–and their evolution through history, gives us an important part of the language we sorely need to come to grips with this problem in our time.

Charles Taylor wrote all this in 1975. I’d argue that this problem of establishing ethical order to legitimize state power despite alienation from procedure is a perennial one. That the burden of political judgment has been placed most recently on the technology of decision-making is a function of the automation of bureaucratic control (see Beniger) and, it’s awkward to admit, my own disciplinary bias. In particular it seems like what we need is a Sittlichkeit that deals adequately with the causes of inequality in society, which seem poorly understood.


by Sebastian Benthall at October 27, 2015 07:16 PM

October 20, 2015

Ph.D. student

autonomy and immune systems

Somewhat disillusioned lately with the inflated discourse on “Artificial Intelligence” and trying to get a grip on the problem of “collective intelligence” with others in the Superintelligence and the Social Sciences seminar this semester, I’ve been following a lead (proposed by Julian Jonker) that perhaps the key idea at stake is not intelligence, but autonomy.

I was delighted when searching around for material on this to discover Bourgine and Varela’s “Towards a Practice of Autonomous Systems” (pdf link) (1992). Francisco Varela is one of my favorite thinkers, though he is a bit fringe on account of being both Chilean and unafraid of integrating Buddhism into his scholarly work.

The key point of the linked paper is that for a system (such as a living organism, but we might extend the idea to a sociotechnical system like an institution or any other “agent” like an AI) to be autonomous, it has to have a kind of operational closure over time–meaning not that it is closed to interaction, but that its internal states progress through some logical space–and that it must maintain its state within a domain of “viability”.

Though essentially a truism, I find it a simple way of thinking about what it means for a system to preserve itself over time. What we gain from this organic view of autonomy (Varela was a biologist) is an appreciation of the fact that an agent needs to adapt simply in order to survive, let alone to act strategically or reproduce itself.

Bourgine and Varela point out three separate adaptive systems in most living organisms:

  • Cognition. Information processing that determines the behavior of the system relative to its environment. It adapts to new stimuli and environmental conditions.
  • Genetics. Information processing that determines the overall structure of the agent. It adapts through reproduction and natural selection.
  • The immune system. Information processing to identify invasive micro-agents that would threaten the integrity of the overall agent. It creates internal antibodies to shut down internal threats.

Sean O Nuallain has proposed that one’s sense of personal self is best thought of as a kind of immune system. We establish a barrier between ourselves and the world in order to maintain a cogent and healthy sense of identity. One could argue that to have an identity at all is to have a system for identifying what is external to it and rejecting it. Compare this with psychological ideas of ego maintenance and Jungian confrontations with “the Shadow”.

At a social organizational level, we can speculate that there is still an immune function at work. Left- and right-wing ideologies alike have cultural “antibodies” to quickly shut down expressions of ideas that pattern-match to what might be an intellectual threat. Academic disciplines have to enforce what can be said within them so that their underlying theoretical assumptions and methodological commitments are not upset. Sociotechnical “cybersecurity” may be thought of as a kind of immune system. And so on.

Perhaps the most valuable use of the “immune system” metaphor is that it identifies a mid-range level of adaptivity that can be truly subconscious, given whatever mode of “consciousness” you are inclined to point to. Social and psychological functions of rejection are in a sense a condition for higher-level cognition. At the same time, this pattern of rejection means that some information cannot be integrated materially; it must be integrated, if at all, through the narrow lens of the senses. At an organizational or societal level, individual action may be rejected because of its disruptive effect on the total system, especially if the system has official organs for accomplishing more or less the same thing.


by Sebastian Benthall at October 20, 2015 05:36 PM

Ph.D. alumna

What World Are We Building?

This morning, I had the honor and pleasure of giving the Everett C. Parker Lecture in celebration of the amazing work he did to fight for media justice. The talk that I gave wove together some of my work with youth (on racial framing of technology) and my more recent thoughts on the challenges presented by data analytics. I also pulled on the work of Latanya Sweeney and Eric Horvitz and argued that those of us who were shaping social media systems “didn’t architect for prejudice, but we didn’t design systems to combat it either.” More than anything, I used this lecture to argue that “we need those who are thinking about social justice to understand technology and those who understand technology to commit to social justice.”

My full remarks are available here: “What World Are We Building?” Please let me know what you think!

by zephoria at October 20, 2015 03:37 PM

October 12, 2015

Ph.D. student

notes towards “Freedom in the Machine”

I have reconceptualized my dissertation because it would be nice to graduate.

In this reconceptualization, much of the writing from this blog can be reused as a kind of philosophical prelude.

I wanted to title this prelude “Freedom and the Machine” so I Googled that phrase. I found three interesting items I had never heard of before:

  • A song: “Freedom and Machine Guns” by Lori McTear
  • A lecture by Ranulph Glanville, titled “Freedom and the Machine”. Dr. Glanville passed away recently after a fascinating career.
  • A book: Software-Agents and Liberal Order: An Inquiry Along the Borderline Between Economics and Computer Science, by Dirk Nicholas Wagner. A dissertation, perhaps.

With the exception of the song, this material feels very remote and European. Nevertheless the objectively correct Google search algorithm has determined that this is the most relevant material on this subject.

I’ve been told I should respond to Frank Pasquale’s Black Box Society, as this nicely captures contemporary discomfort with the role of machines and algorithmic determination in society. I am a bit trapped in literature from the mid-20th century, which mostly expresses the same spirit.

It is strange to think that a counterpoint to these anxieties, a defense of the role of machines in society, is necessary–since most people seem happy to have given the management of their lives over to machines anyway. But then again, no dissertation is necessary. I have to remember that writing such a thing is a formality and that pretensions of making intellectual contributions with such work are precisely that: pretensions. If there is value in the work, it won’t be in the philosophical prelude! (However much fun it may be to write.) Rather, it will be in the empirical work.


by Sebastian Benthall at October 12, 2015 03:14 AM

October 11, 2015

Ph.D. student

cultural values in design

As much as I would like to put aside the problem of technology criticism and focus on my empirical work, I find myself unable to avoid the topic. Today I was discussing work with a friend and collaborator who comes from a ‘critical’ perspective. We were talking about ‘values in design’, a subject that we both care about, despite our different backgrounds.

I suggested that one way to think about values in design is to think of a number of agents and their utility functions. Their utility functions capture their values; the design of an artifact can have greater or less utility for the agents in question. They may intentionally or unintentionally design artifacts that serve some but not others. And so on.

Of course, thinking in terms of ‘utility functions’ is common among engineers, economists, cognitive scientists, rational choice theorists in political science, and elsewhere. It is shunned by the critically trained. My friend and colleague was open minded in his consideration of utility functions, but was more concerned with how cultural values might sneak into or be expressed in design.

I asked him to define a cultural value. We debated the term for some time. We reached a reasonable conclusion.

With such a consensus to work with, we began to talk about how such a concept would be applied. He brought up the example of an algorithm claimed by its creators to be objective. But, he asked, could the algorithm have a bias? Would we not expect that it would express, secretly, cultural values?

I confessed that I aspire to design and implement just such algorithms. I think it would be a fine future if we designed algorithms to fairly and objectively arbitrate our political disputes. We have good reasons to think that an algorithm could be more objective than a system of human bureaucracy. While human decision-makers are limited by the partiality of their perspective, we can build infrastructure that accesses and processes data that are beyond an individual’s comprehension. The challenge is to design the system so that it operates kindly and fairly despite its operations being beyond the scope of a single person’s judgment. This will require an abstracted understanding of fairness that is not grounded in the politics of partiality.

Suppose a team of people were to design and implement such a program. On what basis would the critics–and there would inevitably be critics–accuse it of being a biased design with embedded cultural values? Besides the obvious but empty criticism that valuing unbiased results is a cultural value, why wouldn’t the reasoned process of design reduce bias?

We resumed our work peacefully.


by Sebastian Benthall at October 11, 2015 12:56 AM

October 10, 2015

Ph.D. student

Protected: partiality and ethics

This post is password protected. You must visit the website and enter the password to continue reading.


by Sebastian Benthall at October 10, 2015 03:06 AM

October 06, 2015

Ph.D. student

ethical data science is statistical data science #dsesummit

I am at the Moore/Sloan Data Science Environment at the Suncadia Resort in Washington. There are amazing trees here. Wow!

So far the coolest thing I’ve seen is a talk on how Dynamic Mode Decomposition, a technique from fluid dynamics, is being applied to data from brains.
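
For the curious: exact DMD fits a linear map A with x_{t+1} ≈ A x_t to a matrix of measurement snapshots, then reads the dynamics off the eigenvalues and modes of a low-rank approximation of A. Here is a minimal sketch in Python using numpy on random toy data; the array shapes, rank truncation, and variable names are illustrative choices of mine, not anything from the talk.

    import numpy as np

    # Toy snapshot matrix: columns are successive measurements of a system
    # (in the talk's setting these would be multichannel brain recordings).
    rng = np.random.default_rng(0)
    n_channels, n_times, r = 50, 200, 5
    data = rng.standard_normal((n_channels, n_times))

    # Exact DMD: X holds snapshots 1..m-1, Y holds snapshots 2..m.
    X, Y = data[:, :-1], data[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]            # rank-r truncation

    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)              # DMD eigenvalues
    modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W   # DMD modes, one per column

    print("eigenvalue magnitudes:", np.abs(eigvals))

The eigenvalues encode how each mode grows, decays, or oscillates over time, and the modes are the spatial patterns (here, across channels) that do so–which is presumably what makes the technique attractive for neural data.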

And yet, despite all this sweet science, all is not well in paradise. Provocations, source unknown, sting the sensitive hearts of the data scientists here. Something or someone stirs our emotional fluids.

There are two controversies. There is one solution, which is the synthesis of the two parts into a whole.

Herr Doctor Otherwise Anonymous confronted some compatriots and me in the resort bar with a distressing thought. His work in computational analysis of physical materials–his data science–might be coopted and used for mass surveillance. Powerful businesses might use the tools he creates. Information discovered through these tools may be used to discriminate unfairly against the underprivileged. As teachers, responsible for the future through our students, are we not also responsible for teaching ethics? Should we not be concerned as practitioners; should we not hesitate?

I don’t mind saying that at the time I was at my Ballmer Peak of lucidity. Yes, I replied, we should teach our students ethics. But we should not base our ethics in fear! And we should have the humility to see that the moral responsibility is not ours to bear alone. Our role is to develop good tools. Others may use them for good or ill, based on their confidence in our guarantees. Indeed, an ethical choice is only possible when one knows enough to make a sound judgment. Only when we know all the variables in play and how they relate to each other can we be sure our moral decisions–perhaps to work for social equality–are valid.

Later, I discover that there is more trouble. The trouble is statistics. There is a matter of professional identity: Who are statisticians? Who are data scientists? Are there enough statisticians in data science? Are the statisticians snubbing the data scientists? Do they think they are holier-than-thou? Are the data scientists merely bad scientists, slinging irresponsible model-fitting code, inviting disaster?


Attachment to personal identity is the root of all suffering. Put aside all sociological questions of who gets to be called a statistician for a moment. Don’t even think about what branches of mathematics are considered part of a core statistical curriculum. These are historical contingencies with no place in the Absolute.

At the root of this anxiety about what is holy, and what is good science, is that statistical rigor just is the ethics of data science.


by Sebastian Benthall at October 06, 2015 03:25 PM

October 02, 2015

Ph.D. alumna

Join me at the Parker Lecture on Oct. 20 in Washington DC

Every year, the media reform community convenes to celebrate one of the founders of the movement, to reflect on the ethical questions of our day, and to honor outstanding champions of media reform. This annual event, called the Parker Lecture, is in honor of Dr. Everett C. Parker, who is often called the founder of the media reform movement, and who died last month at the age of 102. Dr. Parker made incredible contributions from his post as the Executive Director of the United Church of Christ’s Office of Communication, Inc. This organization is part of the progressive movement’s efforts to hold media accountable and to consider how best to ensure all people, no matter their income or background, benefit from new technology.

I am delighted to be part of this year’s events as one of the honorees. My other amazing partners in this adventure are:

  • Joseph Torres, senior external affairs director of Free Press and co-author of News for All the People: The Epic Story of Race and the American Media, will receive the Parker Award which recognizes an individual whose work embodies the principles and values of the public interest in telecommunications.

  • Wally Bowen, co-founder and executive director of the Mountain Area Information Network (MAIN), will receive the Donald H. McGannon Award in recognition of his dedication to bringing modern telecommunications to low-income people in rural areas.

The 33rd Annual Parker Lecture will be held Tuesday, October 20, 2015 at 8 a.m. at the First Congregational United Church of Christ, 945 G St NW, Washington, DC 20001. I will be giving a talk as part of this celebration and joined by Clayton Old Elk of the Crow Tribe who will offer a praise song.

Want to join us? Tickets are available here.

by zephoria at October 02, 2015 05:37 PM

September 18, 2015

Ph.D. student

Ethnography, philosophy, and data anonymization

The other day at BIDS I was working at my laptop when a rather wizardly looking man in a bicycle helmet asked me when The Hacker Within would be meeting. I recognized him from a chance conversation in an elevator after Anca Dragan’s ICBS talk the previous week. We had in that brief moment connected over the fact that none of the bearded men in the elevator had remembered to press the button for the ground floor. We had all been staring off into space before a young programmer with a thin mustache pointed out our error.

Engaging this amicable fellow, whom I will leave anonymous, the conversation turned naturally towards principles for life. I forget how we got onto the topic, but what I took away from the conversation was his advice: “Don’t turn your passion into your job. That’s like turning your lover into a whore.”

Scholars in the School of Information are sometimes disparaging of the Data-Information-Knowledge-Wisdom hierarchy. Scholars, I’ve discovered, are frequently disparaging of ideas that are useful, intuitive, and pertinent to action. One cannot continue to play the Glass Bead Game if it has already been won, any more than one can continue to be entertained by Tic Tac Toe once one has grasped its ineluctable logic.

We might wonder, as did Horkheimer, when the search and love of wisdom ceased to be the purpose of education. It may have come during the turn when philosophy was determined to be irrelevant, speculative or ungrounded. This perhaps coincided, in the United States, with McCarthyism. This is a question for the historians.

What is clear now is that philosophy per se is no longer considered relevant to scientific inquiry.

An ethnographer I know (who I will leave anonymous) told me the other day that the goal of Science and Technology Studies is to answer questions from philosophy of science with empirical observation. An admirable motivation for this is that philosophy of science should be grounded in the true practice of science, not in idle speculation about it. The ethnographic methods, through which observational social data is collected and then compellingly articulated, provide a kind of persuasiveness that for many far surpasses the persuasiveness of a priori logical argument, let alone authority.

And yet the authority of ethnographic writing depends always on the socially constructed role of the ethnographer, much like the authority of the physicist depends on their socially constructed role as a physicist. I’d even argue that the dependence of ethnographic authority on social construction is greater than that of other kinds of scientific authority, as ethnography is so quintessentially an embedded social practice. A physicist or chemist or biologist at least in principle has nature to push back on their claims; a renegade natural scientist can as a last resort claim their authority through provision of a bomb or a cure. The mathematician or software engineer can test and verify their work through procedure. The ethnographer does not have these opportunities. Their writing will never be enough to convey the entirety of their experience. It is always partial evidence, a gesture at the unwritten.

This is not an accidental part of the ethnographic method. The practice of data anonymization, necessitated by the IRB and ethics, puts limitations on what can be said. These limitations are essential for building and maintaining the relationships of trust on which ethnographic data collection depends. The experiences of the ethnographer must always go far beyond what has been regulated as valid procedure. The information they have collected illicitly will, if they are skilled and wise, inform their judgment of what to write and what to leave out. The ethnographic text contains many layers of subtext that will be unknown to most readers. This is by design.

The philosophical text, in contrast, contains even less observational data. The text is abstracted from context. Only the logic is explicit. A naive reader will assume, then, that philosophy is a practice of logic chopping.

This is incorrect. My friend the ethnographer was correct: that ethnography is a way of answering philosophical questions empirically, through experience. However, what he missed is that philosophy is also a way of answering philosophical questions through experience. Just as in ethnographic writing, experience necessarily shapes the philosophical text. What is included, what is left out, what constellation in the cosmos of ideas is traced by the logic of the argument–these will be informed by experience, even if that experience is absent from the text itself.

One wonders: thus unhinged from empirical argument, how does a philosophical text become authoritative?

I’d offer the answer: it doesn’t. A philosophical text does not claim authority. That has been its method since Socrates.


by Sebastian Benthall at September 18, 2015 05:38 PM

September 10, 2015

Ph.D. student

de Beauvoir on science as human freedom

I appear to be unable to stop writing blog posts about philosophers who wrote in the 1940’s. I’ve been attempting a kind of survey. After a lot of reading, I have to say that my favorite–the one I think is most correct–is Simone de Beauvoir.

Much like “bourgeois”, “de Beauvoir” is something I find it impossible to remember how to spell. Therefore I am setting myself up for embarrassment by beginning to write about her work, The Ethics of Ambiguity. On the other hand, it’s nice to come full circle. In a notebook I was scribbling in when I first showed up in graduate school I was enthusiastic about using de Beauvoir to explicate what’s interesting about open source software development. Perhaps now is the right time to indulge the impulse.

de Beauvoir is generally not considered to be a philosopher of science. That’s too bad, because she said some of the most brilliant things about science ever said. If you can get past just a little bit of existentialist jargon, there’s a lot there.

Here’s a passage. The Marxists have put this entire book on the Internet, making it easy to read.

To will freedom and to will to disclose being are one and the same choice; hence, freedom takes a positive and constructive step which causes being to pass to existence in a movement which is constantly surpassed. Science, technics, art, and philosophy are indefinite conquests of existence over being; it is by assuming themselves as such that they take on their genuine aspect; it is in the light of this assumption that the word progress finds its veridical meaning. It is not a matter of approaching a fixed limit: absolute Knowledge or the happiness of man or the perfection of beauty; all human effort would then be doomed to failure, for with each step forward the horizon recedes a step; for man it is a matter of pursuing the expansion of his existence and of retrieving this very effort as an absolute.

de Beauvoir’s project in The Ethics of Ambiguity is to take seriously the antinomies of society and the individual, of nature and the subject, which Horkheimer only gets around to stating at the conclusion of his contemporary analysis. Rather than cry from the wounds of getting skewered by the horns of the antinomy, de Beauvoir turns the ambiguity inherent in the antinomy into a realistic, situated ethics.

If de Beauvoir’s ethics have a telos or purpose, it is to expand human freedom and potential indefinitely. Through a terrific dialectical argument, she reasons out why this project is in a sense the only honest one for somebody in the human condition, despite its transcendence over individual interest.

Science, then, becomes one of several activities which one undertakes to expand this human potential.

Science condemns itself to failure when, yielding to the infatuation of the serious, it aspires to attain being, to contain it, and to possess it; but it finds its truth if it considers itself as a free engagement of thought in the given, aiming, at each discovery, not at fusion with the thing, but at the possibility of new discoveries; what the mind then projects is the concrete accomplishment of its freedom.

Science is the process of free inquiry, not the product of a particular discovery. The finest scientific discoveries open up new discoveries.

What about technics?

The attempt is sometimes made to find an objective justification of science in technics; but ordinarily the mathematician is concerned with mathematics and the physicist with physics, and not with their applications. And, furthermore, technics itself is not objectively justified; if it sets up as absolute goals the saving of time and work which it enables us to realize and the comfort and luxury which it enables us to have access to, then it appears useless and absurd, for the time that one gains can not be accumulated in a store house; it is contradictory to want to save up existence, which, the fact is, exists only by being spent, and there is a good case for showing that airplanes, machines, the telephone, and the radio do not make men of today happier than those of former times.

Here we have, in just a couple of sentences, a dismissal of instrumentality as the basis for science. Science is not primarily for acceleration; this is absurd.

But actually it is not a question of giving men time and happiness, it is not a question of stopping the movement of life: it is a question of fulfilling it. If technics is attempting to make up for this lack, which is at the very heart of existence, it fails radically; but it escapes all criticism if one admits that, through it, existence, far from wishing to repose in the security of being, thrusts itself ahead of itself in order to thrust itself still farther ahead, that it aims at an indefinite disclosure of being by the transformation of the thing into an instrument and at the opening of ever new possibilities for man.

For de Beauvoir, science (as well as all the other “constructive activities of man” including art, etc.) should be about the disclosure of new possibilities.

Succinct and unarguable.


by Sebastian Benthall at September 10, 2015 07:04 PM

September 09, 2015

Ph.D. student

scientific contexts

Recall:

  • For Helen Nissenbaum (contextual integrity theory):
    • a context is a social domain that is best characterized by its purpose. For example, a hospital’s purpose is to cure the sick and wounded.
    • a context also has certain historically given norms of information flow.
    • a violation of a norm of information flow in a given context is a potentially unethical privacy violation. This is an essentially conservative notion of privacy, which is balanced by the following consideration…
    • Whether or not a norm of information flow should change (given, say, a new technological affordance to do things in a very different way) can be evaluated by how well it serves the context’s purpose.
  • For Fred Dretske (Knowledge and the Flow of Information, 1983):
    • The appropriate definition of information is (roughly) just what it takes to know something. (More specifically: M carries information about X if it reliably transmits what it takes for a suitably equipped but otherwise ignorant observer to learn about X.)
  • Combining Nissenbaum and Dretske, we see that with an epistemic and naturalized understanding of information, contextual norms of information flow are inclusive of epistemic norms.
  • Consider scientific contexts. I want to use ‘science’ in the broadest possible (though archaic) sense of the intellectual and practical activity of study or coming to knowledge of any kind. “Science” from the Latin “scire”–to know. Or “Science” (capitalized) as the translated 19th Century German Wissenschaft.
    • A scientific context is one whose purpose is knowledge.
    • Specific issues of whose knowledge, knowledge about what, and to what end the knowledge is used will vary depending on the context.
    • As information flow is necessary for knowledge, and knowledge is the purpose of science, the integrity of a scientific context will be especially sensitive to its norms of information flow, both within the context and without.
  • An insight I owe to my colleague Michael Tschantz, in conversation, is that there are several open problems within contextual integrity theory:
    • How does one know what context one is in? Who decides that?
    • What happens at the boundary between contexts, for example when one context is embedded in another?
    • Are there ways for the purpose of a context to change (not just the norms within it)?
  • Proposal: One way of discovering what a science is is to trace its norms of information flow and to identify its purpose. A contrast between the norms and purpose of, for example, data science and ethnography would be illustrative of both. One approach to this problem could be the kind of qualitative research done by Edwin Hutchins on distributed cognition, which accepts a naturalized view of information (necessary for this framing) and then discovers information flows in a context through qualitative observation.

by Sebastian Benthall at September 09, 2015 04:00 PM

September 03, 2015

Ph.D. student

barriers to participant observation of data science and ethnography

By chance, last night I was at a social gathering with two STS scholars who are unaffiliated with BIDS. One of them is currently training in ethnographic methods. I explained to him some of my quandaries as a data scientist working with ethnographers studying data science. How can I be better in my role?

He talked about participant observation, and how hard it is in a scientific setting. An experienced STS ethnographer whom he respects has said: participant observation means being ready for an almost constant state of humiliation. Your competence is always being questioned; you are always acting inappropriately; your questions are considered annoying or off-base. You try to learn the tacit knowledge required in the science but will always be less good at it than the scientists themselves. This is all necessary for the ethnographic work.

To be a good informant (and perhaps this is my role, being a principal informant) means patiently explaining lots of things that normally go unexplained. One must make explicit that which is obvious and tacit to the experienced practitioner.

This sort of explanation accords well with my own training in qualitative methods and reading in this area, which I have pursued alongside my data science practice. This has been a deliberate blending in my graduate studies. In one semester I took both Statistical Learning Theory with Martin Wainwright and Qualitative Research Methods with Jenna Burrell. I’ve taken Behavioral Data Mining with John Canny and a seminar taught by Jean Lave on “What Theory Matters”.

I have been trying to cover my bases, methodologically. Part of this is informed by training in Bayesian methods as an undergraduate. If you are taught to see yourself as an information processing machine and take the principles of statistical learning seriously, then if you’re like me you may be concerned about bias in the way you take in information. If you get a sense that there’s some important body of knowledge or information to which you haven’t been adequately exposed, you seek it out in order to correct the bias.

This is not unlike what is called theoretical sampling in the qualitative methods literature. My sense, after being trained in both kinds of study, is that the principles that motivate them are the same or similar enough to make reconciliation between the approaches achievable.

I choose to identify as a data scientist, not as an ethnographer. One reason for this is that I believe I understand what ethnography is–that it is a long and arduous process of cultural immersion in which one attempts to articulate the emic experience of those under study–and that I am not primarily doing this kind of activity in my research. I have tried to do ethnographic work on an online community. I would argue that it was particularly bad ethnographic work. I concluded some time ago that I don’t have the right temperament to be an ethnographer per se.

Nevertheless, here I am participating in an Ethnography Group. It turns out that it is rather difficult to participate in an ethnographic context with ethnographers of science while still maintaining one’s identity as the kind of scientist being studied. Part of this has to do with conflicts over epistemic norms. Attempting to argue, on the basis of scientific authority, for the validity of that science’s methods to a room of STS ethnographers is not taken as useful information from an informant, nor as a creatively galvanizing rocking of the boat. It is seen as unproductive and potentially disrespectful.

Rather than treating this as an impasse, I have been pondering how to use these kinds of divisions productively. As a first pass, I’m finding it helpful in coming to an understanding of what data science is by seeing, perhaps with a clarity that others might not have the privilege of, what it is not. In a sense the Ethnography and Evaluation Working Group of the Berkeley Institute of Data Science is really at the boundary of data science.

This is exciting, because as far as I can tell nobody knows what data science is. Alternative definitions of data science are a joke in industry. The other day our ethnography team was discussing a seminar about “what is data science” with a very open-minded scientist and engineer, and he said he got a lot out of the seminar but that it reached no conclusions as to what this nascent field is. “What is data science?” and even “is there such a thing as data science?” are still unanswered questions and may continue to be unanswered even after industry has stopped hyping the term and started calling it something else.

So, you might ask, what happens at the boundary of data science and ethnography?

The answer is: an epistemic conflict that’s deeply grounded in historical, cultural, institutional, and cognitive differences. It’s also a conflict that threatens the very project of an ethnography of data science itself.

The problem, I feel qualified to say as somebody with training on both sides of the fence and quite a bit of experience teaching both technical and non-technical subject matter, is this: learning the skills and principles behind good data science does not come easily to everybody and in any case takes a lot of hard work and time. These skills and principles pertain to many deep practices and literatures that are developed self-consciously in a cumulative way. Any one sub-field within the many technical disciplines that comprise “data science” could take years to master, and to do so is probably impossible without adequate prior mathematical training that many people don’t receive, perhaps because they lack the opportunity or don’t care.

In fewer words: there is a steep learning curve, and the earlier people start to climb it, the easier it is for them to practice data science.

My point is that this is bad news for the participant observer. Something I sometimes hear ethnographers in the data science space say of people is “I just can’t talk to that person; they think so differently from me.” Often the person in question is, to my mind, exactly the sort of person I would peg as an exemplary data scientist.

Often these are people with a depth of technical understanding that I don’t have and aspire to have. I recognize that they have made the difficult choice to study more of the foundations of what I believe to be an important field, despite the fact that this is (as evinced by the reaction of ‘softer’ social sciences) alienating to a lot of people. These are the people whom I can consult on methodological questions that are integral to my work as a data scientist. It is part of data science practice to discuss epistemic norms seriously with others in order to make sure that the integrity of the science is upheld. Knowledge about statistical norms and principles is taught in classes and reading groups and practiced in, for example, computational manipulation of data. But this knowledge is also expanded and maintained through informal, often passionate and even aggressive, conversations with colleagues.

I don’t know where this leaves the project of ethnography of data science. One possibility is that it can abandon participant observation as a method because participant observation is too difficult. That would be a shame but might simply be necessary.

Another strategy, which I think is potentially more interesting, is to ask seriously: why is this so difficult? What is difficult about data science? For whom is it most difficult? Do experts experience the same difficulties, or different ones? And so on.


by Sebastian Benthall at September 03, 2015 04:22 PM

September 02, 2015

Ph.D. student

statistics, values, and norms

Further considering the difficulties of doing an ethnography of data science, I am reminded of Hannah Arendt’s comments about the apolitical character of science.

The problem is this:

  • Ethnography as a practice is, as far as I can tell, descriptive. It is characterized primarily by non-judgmental observation.
  • Data scientific practice is tied to an epistemology of statistics. Statistics is a discipline about values and norms for belief formation. While superficially it may appear to have no normative content, practicing statistical thinking in research is a matter of adherence to norms.
  • It is very difficult to reconcile ethnographic epistemology and statistical epistemology. They have completely different intellectual histories and are based in very different cognitive modalities.
  • Ethnographers are often trained to reject statistical epistemology in their own work and as a result don’t learn statistics.
  • Consequently, most ethnographies of data science practice will leave out precisely that which data scientists see as most essential to their practice.

“Statistics” here is not entirely accurate. In computational statistics or ‘data science’, we can replace “statistics” with a large body of knowledge spanning statistics, probability theory, theory of computation, etc. The hybridization of these bodies of knowledge in, for example, educational curricula, is an interesting shift in the trajectory of science as a whole.

A deeper concern: in the self-understanding of the sciences, there is a transmitted sense of this intellectual history. In many good textbooks on technical subject-matter, there is a section at the end of each chapter on the history of the field. I come to these sections of the textbook with a sense of reverence. They stand as testaments to the century of cumulative labor done by experts on which our current work stands.

When this material is of no interest to the historian or ethnographer of science, it feels like a kind of sacrilege. Contrast this disinterest with the treatment of somebody like Imre Lakatos, whose history of mathematics is so much more a history of the mathematics, not a history of the mathematicians, that the form of the book is a dialog compressing hundreds of years of mathematical history into a single classroom discussion. Historical detail is provided in footnotes, apart from the main drama of the narrative–which is about the emergence of norms of reasoning over time.


by Sebastian Benthall at September 02, 2015 08:17 PM

Observations of ethnographers

This semester the Berkeley Institute of Data Science Ethnography and Evaluation Working Group (EEWG) is meeting in its official capacity for the first time. In this context I am a data scientist among ethnographers, and the transition to participation in this strange, alien culture is not an easy one. I have prepared myself for this task through coursework and associations throughout my time at Berkeley, but I am afraid that integrating into a “Science and Technology Studies” ethnographic team will be difficult nonetheless.

Off the bat, certain cultural differences seem especially salient:

In the sciences, typically one cares about whether or not the results of one’s investigation are true or useful in an intersubjective sense. This sense of purpose leads to a concern for the logical coherence and rigor of one’s method, which in turn constrains the kinds of questions that can be asked and the ways in which one presents one’s results. Within STS, methodological concerns are perhaps secondary. The STS ethnographer interviews people, reads and listens carefully, but also holds the data at a distance. Ultimately, the process of writing–which is necessarily tied up with what the writer is interested in–is as much a part of the method as the observations and the analysis. Whereas the scientist strives for intersubjective agreement, the ethnographer is methodologically bound to their own subjectivity.

A consequence of this is that agonism, or the role of argumentation and disagreement, is different within scientific and ethnographic communities. (I owe this point to my colleague, Stuart Geiger, an ethnographer who is also in the working group.) In a scientific community argument is seen as a necessary step towards resolving disagreement and arriving at intersubjective results. The purpose of argument is, ideally, to arrive at agreement. Reasons are shared and disagreements resolved through logic. In the ethnographic community, since intersubjectivity is not valued, rational argument is seen more as a form of political or social maneuvering. To challenge somebody intellectually is not simply to challenge what they are intellectualizing; it is to challenge their intellectual authority or competence.

This raises an interesting question: what is competence, to ethnographers? To the scientist, competence is a matter of concrete skills (handling lab equipment, computation, reasoning, presentation of results, etc.) that facilitate the purpose of science, the achievement of intellectual agreement on matters within the domain. Somebody who succeeds by virtue of skills other than these (such as political skilfulness) is seen, by the scientist, as a charlatan and a danger to the scientific project. Many of the more antisocial tendencies of scientists can be understood as an effort to keep the charlatans out, in order to preserve the integrity (and, ultimately, authority) of the scientific project.

Ethnographic competence is mysterious to me because, at least in STS, scientific authority is always a descriptively assessed social phenomenon and not something which one trusts. If the STS ethnographer sees the scientific project primarily as one of cultural and institutional regularity and leaves out the teleological aspects of science as a search for truth of some kind, as has been common in STS since the 1980s, then how can STS see its own intellectual authority as anything besides a rather arbitrary political accomplishment? What is competence besides the judgement of competence by one’s bureaucratic superiors?

I am not sure that these questions, which seem pertinent to me as a scientist, are even askable within the language and culture of STS. As they concern the normative elements of intellectual inquiry, not descriptions of the social institutions of inquiry, they are in a sense “unscientific” questions vis-a-vis STS.

Perhaps more importantly, these questions are unimportant to the STS ethnographer because they are not relevant to the STS ethnographer’s job. In this way the STS ethnographer is not unlike many practicing scientists who, once they learn an approved method and have a community in which to share their work, do not question the foundations of their field. And perhaps because of STS’s concern with what others might consider the mundane aspects of scientific inquiry–the scheduling of meetings, the arrangement of events, the circulation of invitations and rejection letters, the arrangement of institutions–their concept of intellectual work hinges on these activities more than it does on argument or analysis, relative to the sciences.


by Sebastian Benthall at September 02, 2015 05:25 PM

August 30, 2015

MIMS 2012

Get Comfortable Sharing Your Shitty Work

After jamming with a friend, she commented that she felt emotionally spent afterwards. Not quite sure what she meant, I asked her to elaborate. She said that improvising music makes you feel vulnerable. You’ve got to put yourself out there, which opens you up to judgement and criticism.

And she’s right. In that moment I realized that being a designer trained me to get over that fear. I know I have to start somewhere shitty before I can get somewhere good. Putting myself and my ideas out there is part of that process. My work only becomes good through feedback and iteration.

So my advice to you, young designer, is to accept the fact that before your work becomes great, it’s going to be shitty. This will be hard at first. You’ll feel vulnerable. You’ll fear judgement. You’ll worry about losing the respect of your colleagues.

But get over it. We’ve all felt this way before. Just remember that we’re all in this together. We all want to produce great work for our customers. We all want to make great music together.

So get comfortable sharing your shitty work. You’ll start off discordant, but through the process of iteration and refinement you’ll eventually hit your groove.

by Jeff Zych at August 30, 2015 10:34 PM

August 28, 2015

Ph.D. student

The recalcitrance of prediction

We have identified how Bostrom’s core argument for superintelligence explosion depends on a crucial assumption. An intelligence explosion will happen only if the kinds of cognitive capacities involved in instrumental reason are not recalcitrant to recursive self-improvement. If recalcitrance rises comparably with the system’s ability to improve itself, then the takeoff will not be fast. This significantly decreases the probability of decisively strategic singleton outcomes.

In this section I will consider the recalcitrance of intelligent prediction, which is one of the capacities that is involved in instrumental reason (another being planning). Prediction is a very well-studied problem in artificial intelligence and statistics and so is easy to characterize and evaluate formally.

Recalcitrance is difficult to formalize. Recall that in Bostrom’s formulation:

\frac{dI}{dt} = \frac{O(I)}{R(I)}

One difficulty in analyzing this formula is that the units are not specified precisely. What is a “unit” of intelligence? What kind of “effort” is the unit of optimization power? And how could one measure recalcitrance?

A benefit of looking at a particular intelligent task is that it allows us to think more concretely about what these terms mean. If we can specify which tasks are important to consider, then we can take the level of performance on that well-specified class of problems as a measure of intelligence.

Prediction is one such problem. In a nutshell, prediction comes down to estimating a probability distribution over hypotheses. Using the Bayesian formulation of statistical inference, we can represent the problem as:

P(H|D) = \frac{P(D|H) P(H)}{P(D)}

Here, P(H|D) is the posterior probability of a hypothesis H given observed data D. If one is following statistically optimal procedure, one can compute this value by taking the prior probability of the hypothesis P(H), multiplying it by the likelihood of the data given the hypothesis P(D|H), and then normalizing this result by dividing by the probability of the data over all models, P(D) = \sum_{i}P(D|H_i)P(H_i).
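
To make the computation concrete, here is a minimal sketch in Python of the update described above. The three-hypothesis space and all of the probability values are made-up, purely illustrative numbers.

# A minimal sketch of the Bayesian update above, with an illustrative
# three-hypothesis space and made-up probabilities.
priors = {"H1": 0.5, "H2": 0.3, "H3": 0.2}          # P(H_i)
likelihoods = {"H1": 0.10, "H2": 0.40, "H3": 0.25}  # P(D|H_i) for the observed D

# P(D) = sum_i P(D|H_i) * P(H_i)
p_data = sum(likelihoods[h] * priors[h] for h in priors)

# P(H|D) = P(D|H) * P(H) / P(D)
posteriors = {h: likelihoods[h] * priors[h] / p_data for h in priors}
print(posteriors)  # H2 gains probability because it best explains D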

Statisticians will justifiably argue whether this is the best formulation of prediction. And depending on the specifics of the task, the target value may well be some function of the posterior (such as the hypothesis with maximum posterior probability), with the overall distribution being secondary. These are valid objections that I would like to put to one side in order to get across the intuition of an argument.

What I want to point out is that if we look at the factors that affect performance on prediction problems, there are very few that could be subject to algorithmic self-improvement. If we think that part of what it means for an intelligent system to get more intelligent is to improve its ability to predict (which Bostrom appears to believe), but improving predictive ability is not something that a system can do via self-modification, then that implies that the recalcitrance of prediction, far from being constant or lower, actually approaches infinity with respect to an autonomous system’s capacity for algorithmic self-improvement.

So, given the formula above, in what ways can an intelligent system improve its capacity to predict? We can enumerate them:

  • Computational accuracy. An intelligent system could be better or worse at computing the posterior probabilities. Since most of the algorithms that do this kind of computation do so with numerical approximation, there is the possibility of an intelligent system finding ways to improve the accuracy of this calculation.
  • Computational speed. There are faster and slower ways to compute the inference formula. An intelligent system could come up with a way to make itself compute the answer faster.
  • Better data. The success of inference is clearly dependent on what kind of data the system has access to. Note that “better data” is not necessarily the same as “more data”. If the data that the system learns from is from a biased sample of the phenomenon in question, then a successful Bayesian update could make its predictions worse, not better. Better data is data that is informative with respect to the true process that generated the data.
  • Better prior. The success of inference depends crucially on the prior probability assigned to hypotheses or models. A prior is better when it assigns higher probability to the true process that generates observable data, or to models that are ‘close’ to that true process. An important point is that priors can be bad in more than one way. The bias/variance tradeoff is a well-studied way of discussing this (a small illustration follows this list). Choosing a prior in machine learning involves a tradeoff between:
    1. Bias. The assignment of probability to models that skew away from the true distribution. An example of a biased prior would be one that gives positive probability to only linear models, when the true phenomenon is quadratic. Biased priors lead to underfitting in inference.
    2. Variance. The assignment of probability to models that are more complex than are needed to reflect the true distribution. An example of a high-variance prior would be one that assigns high probability to cubic functions when the data was generated by a quadratic function. The problem with high-variance priors is that they will overfit data by inferring from noise, which could be the result of measurement error or something else less significant than the true generative process.

    In short, the best prior is the correct prior, and any deviation from that increases error.
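
As a small illustration of the bias/variance point above, here is a sketch using NumPy that fits polynomial models of three different degrees to data generated by a quadratic process. The degrees, noise level, and sample sizes are illustrative choices of mine, not anything taken from the text.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 30)
y = 2 * x**2 + rng.normal(scale=0.1, size=x.size)   # true process is quadratic

x_test = np.linspace(-1, 1, 300)                     # noiseless test grid
y_test = 2 * x_test**2

for degree in (1, 2, 9):
    coeffs = np.polyfit(x, y, degree)                # fit a degree-d polynomial
    test_error = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: test error {test_error:.5f}")

# The degree-1 model underfits (bias); the degree-9 model tends to chase noise
# in the training sample (variance); the degree-2 model matches the true process.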

Now that we have enumerated the ways in which an intelligent system may improve its power of prediction, which is one of the things that’s necessary for instrumental reason, we can ask: how recalcitrant are these factors to recursive self-improvement? How much can an intelligent system, by virtue of its own intelligence, improve on any of these factors?

Let’s start with computational accuracy and speed. An intelligent system could, for example, use some previously collected data and try variations of its statistical inference algorithm, benchmark their performance, and then choose to use the most accurate and fastest ones at a future time. Perhaps the faster and more accurate the system is at prediction generally, the faster and more accurately it would be able to engage in this process of self-improvement.

Critically, however, there is a maximum amount of performance that one can get from improvements to computational accuracy if you hold the other factors constant. You can’t be more accurate than perfectly accurate. Therefore, at some point recalcitrance of computational accuracy rises to infinity. Moreover, we would expect that effort made at improving computational accuracy would exhibit diminishing returns. In other words, recalcitrance of computational accuracy climbs (probably close to exponentially) with performance.

What is the recalcitrance of computational speed at inference? Here, performance is limited primarily by the hardware on which the intelligent system is implemented. In Bostrom’s account of superintelligence explosion, he is ambiguous about whether and when hardware development counts as part of a system’s intelligence. What we can say with confidence, however, is that for any particular piece of hardware there will be a maximum computational speed attainable with it, and that recursive self-improvement to computational speed can at best approach and attain this maximum. At that maximum, further improvement is impossible and recalcitrance is again infinite.

What about getting better data?

Assuming an adequate prior and the computational speed and accuracy needed to process it, better data will always improve prediction. But it’s arguable whether acquiring better data is something that can be done by an intelligent system working to improve itself. Data collection isn’t something that the intelligent system can do autonomously, since it has to interact with the phenomenon of interest to get more data.

If we acknowledge that data collection is a critical part of what it takes for an intelligent system to become more intelligent, then that means we should shift some of our focus away from “artificial intelligence” per se and onto the ways in which data flows through society and the world. Regulations about data locality may well have more impact on the arrival of “superintelligence” than research into machine learning algorithms, now that we already have very fast, very accurate algorithms. I would argue that the recent rise in interest in artificial intelligence is due mainly to the availability of vast amounts of new data through sensors and the Internet. Advances in computational accuracy and speed (such as Deep Learning) have to catch up to this new availability of data and use new hardware, but data is the rate-limiting factor.

Lastly, we have to ask: can a system improve its own prior, if data, computational speed, and computational accuracy are constant?

I have to argue that it can’t do this in any systematic way, if we are looking at the performance of the system at the right level of abstraction. Potentially a machine learning algorithm could modify its prior if it sees itself as underperforming in some ways. But there is a sense in which any modification to the prior made by the system that is not a result of a Bayesian update is just part of the computational basis of the original prior. So recalcitrance of the prior is also infinite.

We have examined the problem of statistical inference and ways that an intelligent system could improve its performance on this task. We identified four potential factors on which it could improve: computational accuracy, computational speed, better data, and a better prior. We determined that contrary to the assumption of Bostrom’s hard takeoff argument, the recalcitrance of prediction is quite high, approaching infinity in the cases of computational accuracy, computational speed, and the prior. Only data collection appears to be flexibly recalcitrant. But data collection is not a feature of the intelligent system alone; it also depends on the system’s context.

As a result, we conclude that the recalcitrance of prediction is too high for an intelligence explosion that depends on it to be fast. We also note that those concerned about superintelligent outcomes should shift their attention to questions about data sourcing and storage policy.


by Sebastian Benthall at August 28, 2015 07:01 PM

MIMS 2015

Adventures in DANE

This post will reflect on the relatively new DNS-based Authentication of Named Entities (DANE) protocol from the Internet Engineering Task Force (IETF). We will first explain how DANE works, talk about what DANE can and cannot do, then briefly discuss the future of Internet encryption standards in general before wrapping up.

What are DNSSEC and DANE?

DANE is defined in RFC 6698 and further clarified in RFC 7218. DANE depends entirely on DNSSEC, which is older and considerably more complicated. For our purposes, the only thing the reader need know about DNSSEC is that it solves the problem of trusting DNS responses. Simply put, DNSSEC ensures that DNS requests return responses that are cryptographically assured.

DANE builds on this assurance by hosting hashes of cryptographic keys in DNS. DNSSEC assures that what we see in DNS is exactly as it should be; DANE then exploits this assurance by providing a secondary trust network for cryptographic key verification. This secondary trust network is the DNS hierarchy.

Let’s look at an example. I have configured the test domain https://synonomic.com/ for HTTPS, TLS, DNSSEC and DANE. Let’s examine what this means.

If you visit https://synonomic.com/ with a modern web browser it will probably complain that the site is untrusted, before asking you to create an exception. This is because synonomic.com’s TLS certificate is not signed by any of the Certificate Authorities (CAs) that your browser trusts. In setting up synonomic.com I created my own self-signed certificate, and didn’t bother to get it signed by a CA.1

Instead, I enabled DNSSEC for synonomic.com, then created a hash of my self-signed certificate and stuck it in a DNS TLSA record. TLSA records are where DANE hosts cryptographic information for a given service. If your browser supported DANE, it would download the TLS certificate for synonomic.com, compute its hash, then compare that against what is hosted in synonomic.com’s TLSA record. If the two hashes were the same, it could trust the certificate presented by synonomic.com. If the two hashes were different, then your browser would know something fishy was happening, and would not trust the certificate presented by the web server at synonomic.com.

If you’re on a UNIX system, you can query the TLSA record for synonomic.com with the following command.

dig +multi _443._tcp.synonomic.com. TLSA

The answer should look something like this.

_443._tcp.synonomic.com. 21599 IN TLSA 3 0 2 (
                            D98DA7EE3816E8778CD41C619D868817EC2874CC3C80
                            D1CA25E7579465CDED2D6BD57CEB4C2D1943039EAB48
                            C6403619A83B0025C6CF807992C1196CB42EE386 )

Let’s break this answer down.

The top line repeats the name of the record (_443._tcp.synonomic.com.) you queried. Since different services on a single host can use different certificates, TLSA records include the IP protocol (tcp) and port number (443) in the record. This is followed by three items generic to all DNS records: the TTL (21599), the class of the record (IN, for Internet), and the record type (TLSA).

After these we have four values specific to TLSA records: the certificate usage (3), the selector (0), the matching type (2), and finally the hash of synonomic.com’s TLS certificate (D98DA..).

The certificate usage field (3) can contain a value from 0 to 3. By specifying 3 we’re saying this record contains a hash of synonomic.com’s own TLS certificate. TLSA records can also be used to force a specific CA trust anchor. For example, if this value were 2 and the TLSA record contained the hash of the StartSSL CA’s signing certificate, a supporting browser would require that synonomic.com’s TLS certificate be signed by the StartSSL CA.

The selector field (0) can have a value of 0 or 1 and simply states which part of the certificate is to be hashed. It’s uninteresting for our discussion.

The matching type field (2) states which algorithm is used to compute the hash.2

Finally, we have the actual hash (D98DA..) of the TLS certificate.
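
To tie these fields together, here is a rough sketch of the check a DANE-aware client would perform for this record (certificate usage 3, selector 0, matching type 2). It uses the third-party dnspython package and assumes its TLSA rdata exposes usage, selector, mtype, and cert attributes; treat it as an illustration rather than a complete, validating implementation.

import ssl, hashlib
import dns.resolver   # third-party dnspython package (assumed installed)

host = "synonomic.com"

# Fetch the certificate the web server presents and convert PEM to DER,
# since selector 0 means the full certificate is what gets hashed.
pem = ssl.get_server_certificate((host, 443))
der = ssl.PEM_cert_to_DER_cert(pem)
presented = hashlib.sha512(der).hexdigest()   # matching type 2 = SHA2-512

for rr in dns.resolver.resolve("_443._tcp." + host, "TLSA"):
    print(rr.usage, rr.selector, rr.mtype)    # expect 3 0 2 for this record
    if rr.cert.hex() == presented:
        print("Presented certificate matches the TLSA record")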

What can DANE do?

DANE provides a secondary chain of trust for TLS certificates. It enables TLS clients to compare the certificate presented to them by the server with what is hosted in DNS. This prevents common Man In The Middle (MITM) attacks where an attacker intercepts a connection prior to it being established, presents its own certificate to both ends, and then sits in between the victim endpoints capturing and decrypting everything. DANE prevents this common MITM attack in the same way our current CA system does, by providing a secondary means of verifying the server’s presented certificate.

The problem with CAs is that they get subverted3, and since our browsers implicitly trust all of them equally, a single subverted CA means every site using HTTPS is theoretically vulnerable. For example, if the operator of www.example.com purchases a certificate from CA-X, and criminals break into CA-Y, a MITM attack could still succeed against visitors to www.example.com. TLS clients cannot know from which CA an operator has purchased their certificate. Thus an attacker could present a bad certificate to clients visiting www.example.com signed by CA-Y, and the clients would accept it as valid.

DANE has two answers to this type of attack. First, since a hash of the correct certificate is hosted in DNS, clients can compare the certificate presented by the server to what is hosted in DNS, and only proceed if they match. Second, DANE can lock a given DNS host to certificates issued by only one CA. So, referencing the above example, if CA-Y is penetrated it won’t matter, because DANE-compliant clients visiting www.example.com will know that only certificates issued by CA-X are valid for www.example.com.

What can DANE not do?

DANE cannot link a given service to a real-world identity. For example, DANE cannot tell you that synonomic.com is the website of Andrew McConachie. Take a closer look at the certificate for synonomic.com. It’s issued to, and issued by, “Fake”. DANE don’t care. DANE only ensures that the TLS certificate presented by the web server at synonomic.com matches the hash in DNS. This won’t stop phishing attacks where a person is tricked into going to the wrong website, since that website’s TLS certificate would still match the hash in its TLSA record.

The way website operators tie identity to TLS certificates today is by getting special Extended Validation (EV) certificates from CAs. When a website owner requests an EV certificate from a CA, that CA goes through a more extensive identification process, the purpose of which is to directly link the DNS name to a real-world organization or individual. This is generally a rather thorough examination, and as such is more expensive than getting a normal certificate. EV certificates are also generally considered more secure than DV certificates, at least for HTTPS. If a website has an EV certificate, web browsers will display the name of the organization in the address bar.

Normal, or Domain Validated (DV), certificates make no claims regarding real-world identity. If you control a DNS name, you can get a DV certificate for that name. In this way DV certificates and DANE are very similar in the levels of trust they afford. They only differ in what infrastructure backs up this trust.

Does DANE play well with others?

DANE does not obviate the need for other trust mechanisms; in fact, it was designed to play well with them. Contrary to what some people think, the purpose of DANE is not to do away with the CA system. It is to provide another chain of trust based on the DNS hierarchy.

Certificate Transparency (CT) is another new standard from the IETF.4 It is standardized in RFC 6962. Simply put, CT establishes a public audit trail of issued TLS certificates that browsers and other clients can check against. As certificates are issued, participating CAs add them to a write-once audit trail. Certificates added to this audit trail cannot be removed or overwritten. TLS clients can then compare the certificate presented by a given website with what is in the audit trail. CT does not interfere with DANE; instead, they complement one another. There is no reason today why a given site cannot be protected by our current CA system, DANE, and Certificate Transparency. The more the better. Redundancy and heterogeneity lead to more secure environments.5

The challenge moving forward for TLS clients will be in how these different models are used to determine trust, and in how that trust is presented to the user. Right now Firefox shows a lock and, if it’s an EV certificate, the name of the organization in the address bar.6 This is all based on the CA system of trust. If DNSSEC/DANE and Certificate Transparency gain adoption, browser manufacturers will have to rethink how trust information is presented to their users. This is not going to be easy. To some degree, boiling down all of this complexity to a single trust decision for the end user will be necessary, and trade-offs between the information presented and usability will be required.

Weak Adoption and the Future

DANE depends on DNSSEC to function, and DNSSEC adoption has been slow. However, in some ways DANE has been pushing DNSSEC adoption. This article has focused on using DANE for HTTPS, but DANE has actually seen the most deployment success in email.7 There has been significant uptake of DANE by email providers wishing to prevent so-called Server In The Middle (SITM) attacks. This type of attack occurs when a rogue mail server sits between two mail servers and captures all mail traffic between them. DANE averts this type of attack by allowing both Simple Mail Transfer Protocol (SMTP) talkers to compare the presented certificate with what is in DNS. The IETF currently has an Internet Draft on using DANE and TLS to secure SMTP traffic.

I think we should expect adoption of DANE for email security to continue increasing before any significant adoption begins for HTTPS. Many technologies require some sort of ‘killer app’ that pushes their adoption, and I suspect many people see DANE as DNSSEC’s killer app. I hope this is true, because one of the best ways we can thwart both pervasive monitoring by nation states and illegal activities by criminals is increasing the adoption of TLS. Providing heterogeneous methods for assuring key integrity is also incredibly important. This article argued that a future with multiple methods for ensuring key integrity is preferable to a single winner. Our ideal secure Internet should have multiple independent means of verifying TLS certificates; DANE is just one of them.

Please contact me at andrewm AT ischool DOT berkeley DOT edu if you discover inaccuracies in this article.

  1. I tried getting it signed by StartSSL, but that didn’t quite work out.

  2. Synonomic.com uses a SHA2-512 hash as this is the most secure algorithm that is currently supported. See RFC 7218 for a mapping of acronyms to algorithms.

  3. Three examples of CA breaches: Turk Trust, Diginotar, Comodo

  4. Check out CertificateTransparency.org for more info.

  5. OS diversity for intrusion tolerance: Myth or reality?

  6. CZ.nic offers a great browser plugin for DNSSEC and DANE.

  7. Jan Zorz at the Internet Society has been measuring DANE uptake in SMTP traffic in the Alexa top 1 million. Also, the NIST recently published a whitepaper on securing email using DANE. The whitepaper goes further, and suggests that email providers start using a recently proposed IETF Internet Draft on storing hashes of personal OpenPGP keys in DNS.

Adventures in DANE was originally published by Andrew McConachie at Metafarce on August 28, 2015.

by Andrew McConachie (andrewm@ischool.berkeley.edu) at August 28, 2015 07:00 AM

Ph.D. student

Nissenbaum the functionalist

Today in Classics we discussed Helen Nissenbaum’s Privacy in Context.

Most striking to me is that Nissenbaum’s privacy framework, contextual integrity theory, depends critically on a functionalist sociological view. A context is defined by its information norms and violations of those norms are judged according to their (non)accordance with the purposes and values of the context. So, for example, the purposes of an educational institution determine what are appropriate information norms within it, and what departures from those norms constitute privacy violations.

I used to think teleology was dead in the sciences. But recently I learned that it is commonplace in biology and popular in ecology. Today I learned that what amounts to a State Philosopher in the U.S. (Nissenbaum’s framework has been more or less adopted by the FTC) maintains a teleological view of social institutions. Fascinating! Even more fascinating is that this philosophy corresponds well enough to American law to be informative of it.

From a “pure” philosophy perspective (which is, I will admit, simply a vice of mine), it’s interesting to contrast Nissenbaum with…oh, Horkheimer again. Nissenbaum sees ethical behavior (around privacy at least) as behavior that is in accord with the purpose of one’s context. Morality is given by the system. For Horkheimer, the problem is that the system’s purposes subsume the interests of the individual, who alone is the agent able to determine what is right and wrong. Horkheimer was a founder of the Frankfurt School, arguably the intellectual ancestor of progressivism. Nissenbaum grounds her work in Burke and her theory is admittedly conservative. Privacy is violated when people’s expectations of privacy are violated–this is coming from U.S. law–and that means people’s contextual expectations carry more weight than an individual’s free-minded beliefs.

The tension could be resolved when free individuals determine the purpose of the systems they participate in. Indeed, Nissenbaum quotes Burke’s approval of established conventions as the result of the accreted wisdom and rationale of past generations. The system is the way it is because it was chosen. (Or, perhaps, because it survived.)

Since Horkheimer’s objection to “the system” is that he believes instrumentality has run amok, thereby causing the system to serve a purpose nobody intended for it, his view is not inconsistent with Nissenbaum’s. Nissenbaum, building on Dworkin, sees contextual legitimacy as depending on some kind of political legitimacy.

The crux of the problem is the question of what information norms comprise the context in which political legitimacy is formed, and what purpose does this context or system serve?


by Sebastian Benthall at August 28, 2015 02:54 AM

August 27, 2015

Ph.D. student

The relationship between Bostrom’s argument and AI X-Risk

One reason why I have been writing about Bostrom’s superintelligence argument is because I am acquainted with what could be called the AI X-Risk social movement. I think it is fair to say that this movement is a subset of Effective Altruism (EA), a laudable movement whose members attempt to maximize their marginal positive impact on the world.

The AI X-Risk subset, which is a vocal group within EA, sees the emergence of a superintelligent AI as one of several risks that is notable because it could ruin everything. AI is considered to be a “global catastrophic risk”, unlike more mundane risks like tsunamis and bird flu. AI X-Risk researchers argue that because of the magnitude of the consequences of the risk they are trying to anticipate, they must raise more funding and recruit more researchers.

While I think this is noble, I think it is misguided for reasons that I have been outlining in this blog. I am motivated to make these arguments because I believe that there are urgent problems/risks that are conceptually adjacent (if you will) to the problem AI X-Risk researchers study, but that the focus on AI X-Risk in particular diverts interest away from them. In my estimation, as more funding has been put into evaluating potential risks from AI, many more “mainstream” researchers have benefited and taken on projects with practical value. To some extent these researchers benefit from the alarmism of the AI X-Risk community. But I fear that their research trajectory is thereby distorted from where it could truly provide maximal marginal value.

My reason for targeting Bostrom’s argument for the existential threat of superintelligent AI is that I believe it’s the best defense of the AI X-Risk thesis out there. In particular, if valid, the argument should significantly raise the expected probability of an existentially risky AI outcome. For Bostrom, such an outcome is likely a natural consequence of advancement in AI research more generally because of recursive self-improvement and convergent instrumental values.

As I’ve informally workshopped this argument, I’ve come upon this objection: even if it is true that a superintelligent system would not, for systematic reasons, become an existentially risky singleton, that does not mean that somebody couldn’t develop such a superintelligent system in an unsystematic way. There is still an existential risk, even if it is much lower. And because existential risks are so important, surely we should prepare ourselves for even this low-probability event.

There is something inescapable about this logic. However, the argument applies equally well to all kinds of potential apocalypses, such as enormous meteors crashing into the earth and biowarfare-produced zombies. Without some kind of accounting of the likelihood of these outcomes, it’s impossible to budget rationally among them.

Moreover, I have to call into question the rationality of this counterargument. If Bostrom’s arguments are used in defense of the AI X-Risk position but then the argument is dismissed as unnecessary when it is challenged, that suggests that the AI X-Risk community is committed to their cause for reasons besides Bostrom’s argument. Perhaps these reasons are unarticulated. One could come up with all kinds of conspiratorial hypotheses about why a group of people would want to disingenuously spread the idea that superintelligent AI poses an existential threat to humanity.

The position I’m defending on this blog (until somebody convinces me otherwise–I welcome all comments) is that a superintelligent AI singleton is not a significantly likely X-Risk. Other outcomes that might be either very bad or very good, such as ones with many competing and cooperating superintelligences, are much more likely. I’d argue that it’s more or less what we have today, if you consider sociotechnical organizations as a form of collective superintelligence. This makes research into this topic not only impactful in the long run, but also relevant to problems faced by people now and in the near future.


by Sebastian Benthall at August 27, 2015 04:51 PM

August 25, 2015

Ph.D. student

Bostrom and Habermas: technical and political moralities, and the God’s eye view

An intriguing chapter that follows naturally from Nick Bostrom’s core argument is his discussion of machine ethics writ large. He asks: suppose one could install ethical principles into an omnipotent machine, trusting it with the future of humanity. What principles should we install?

What Bostrom accomplishes by positing his Superintelligence (which begins with something simply smarter than humans, and evolves over the course of the book into something that takes over the galaxy) is a return to what has been called “the God’s eye view”. Philosophers once attempted to define truth and morality according to the perspective of an omnipotent–often both transcendent and immanent–god. Through the scope of his work, Bostrom has recovered some of these old themes. He does this not only through his discussion of Superintelligence (and positing its existence in other solar systems already) but also through his simulation arguments.

The way I see it, one thing I am doing by challenging the idea of an intelligence explosion and its resulting in a superintelligent singleton is problematizing this recovery of the God’s Eye view. If your future world is governed by many sovereign intelligent systems instead of just one, then ethics are something that have to emerge from political reality. There is something irreducibly difficult about interacting with other intelligences and it’s from this difficulty that we get values, not the other way around. This sort of thinking is much more like Habermas’s mature ethical philosophy.

I’ve written about how to apply Habermas to the design of networked publics that mediate political interactions between citizens. What I built and offer as a toy example in that paper, @TheTweetserve, is simplistic but intended just as a proof of concept.

As I continue to read Bostrom, I expect a convergence on principles. At first pass, “Coherent extrapolated volition” sounds a lot like a democratic governance structure with elected experts. The question of how to design a governance structure or institution that leverages artificial intelligence appropriately while legitimately serving its users motivates my dissertation research. My research so far has only scratched the surface of this problem.


by Sebastian Benthall at August 25, 2015 03:19 AM

August 24, 2015

Ph.D. student

Recalcitrance examined: an analysis of the potential for superintelligence explosion

To recap:

  • We have examined the core argument from Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies regarding the possibility of a decisively strategic superintelligent singleton–or, more glibly, an artificial intelligence that takes over the world.
  • With an eye to evaluating whether this outcome is particularly likely relative to other futurist outcomes, we have distilled the argument and in so doing have reduced it to a simpler problem.
  • That problem is to identify bounds on the recalcitrance of the capacities that are critical for instrumental reasoning. Recalcitrance is defined as the inverse of the rate of increase to intelligence per time per unit of effort put into increasing that intelligence. It is meant to capture how hard it is to make an intelligent system smarter, and in particular how hard it is for an intelligent system to make itself smarter. Bostrom’s argument is that if an intelligent system’s recalcitrance is constant or lower, then it is possible for the system to undergo an “intelligence explosion” and take over the world.
  • By analyzing how Bostrom’s argument depends only on the recalcitrance of instrumentality, and not of the recalcitrance of intelligence in general, we can get a firmer grip on the problem. In particular, we can focus on such tasks as prediction and planning. If we discover that these tasks are in fact significantly recalcitrant that should reduce our expected probability of an AI singleton and consequently cause us to divert research funds to problems that anticipate other outcomes.

In this section I will look in further depth at the parts of Bostrom’s intelligence explosion argument about optimization power and recalcitrance. How recalcitrant must a system be for it to not be susceptible to an intelligence explosion?

This section contains some formalism. For readers uncomfortable with that, trust me: if the system’s recalcitrance is roughly proportional to the amount that the system is able to invest in its own intelligence, then the system’s intelligence will not explode. Rather, it will climb linearly. If the system’s recalcitrance is significantly greater than the amount that the system can invest in its own intelligence, then the system’s intelligence won’t even climb steadily. Rather, it will plateau.

To see why, recall from our core argument and definitions that:

Rate of change in intelligence = Optimization power / Recalcitrance.

Optimization power is the amount of effort that is put into improving the intelligence of the system. Recalcitrance is the resistance of that system to improvement. Bostrom presents this as a qualitative formula then expands it more formally in subsequent analysis.

\frac{dI}{dt} = \frac{O(I)}{R}

Bostrom’s claim is that for instrumental reasons an intelligent system is likely to invest some portion of its intelligence back into improving its intelligence. So, by assumption we can model O(I) = \alpha I + \beta for some parameters \alpha and \beta, where 0 < \alpha < 1 and \beta represents the contribution of optimization power by external forces (such as a team of researchers). If recalcitrance is constant, e.g. R = k, then we can compute:

\Large \frac{dI}{dt} = \frac{\alpha I + \beta}{k}

Under these conditions, I will be exponentially increasing in time t. This is the “intelligence explosion” that gives Bostrom’s argument so much momentum. The explosion only gets worse if recalcitrance is below a constant.

In order to illustrate how quickly the “superintelligence takeoff” occurs under this model, I’ve plotted the above function plugging in a number of values for the parameters \alpha, \beta and k. Keep in mind that the y-axis is plotted on a log scale, which means that a roughly linear increase indicates exponential growth.

Plot of exponential takeoff rates: modeled superintelligence takeoff where the rate of intelligence gain is linear in current intelligence and recalcitrance is constant. The slope on the log scale is determined by the alpha and k values.

It is true that in all the above cases, the intelligence function is exponentially increasing over time. The astute reader will notice that by my earlier claim \alpha cannot be greater than 1, and so one of the modeled functions is invalid. It’s a good point, but one that doesn’t matter. We are fundamentally just modeling intelligence expansion as something that is linear on the log scale here.
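
For readers who would rather see code than a plot, here is a minimal numerical sketch of the constant-recalcitrance model, integrated with a simple Euler step. The parameter values are illustrative choices of mine, not Bostrom’s.

# Euler integration of dI/dt = (alpha * I + beta) / k with constant recalcitrance.
alpha, beta, k = 0.5, 1.0, 1.0
I, dt = 1.0, 0.01
for _ in range(3000):            # integrate out to t = 30
    I += dt * (alpha * I + beta) / k
print(f"I(30) ~ {I:.3g}")        # exponential growth: many orders of magnitude above I(0)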

However, it’s important to remember that recalcitrance may also be a function of intelligence. Bostrom does not mention the possibility of recalcitrance being increasing in intelligence. How sensitive to intelligence would recalcitrance need to be in order to prevent exponential growth in intelligence?

Consider the following model where recalcitrance is, like optimization power, linearly increasing in intelligence.

\frac{dI}{dt} = \frac{\alpha_o I + \beta_o}{\alpha_r I + \beta_r}

Now there are four parameters instead of three. Note this model is identical to the one above it when \alpha_r = 0. Plugging in several values for these parameters and plotting again with the y-axis on a log scale, we get:

Plot of takeoff when both optimization power and recalcitrance are linearly increasing in intelligence. Only when recalcitrance is unaffected by the intelligence level is there an exponential takeoff. In the other cases, intelligence quickly plateaus on the log scale. No matter how much the system can invest in its own optimization power as a proportion of its total intelligence, it still only takes off at a linear rate.

The point of this plot is to illustrate how easily exponential superintelligence takeoff might be stymied by a dependence of recalcitrance on intelligence. Even in the absurd case where the system is able to invest a thousand times as much intelligence as it already has back into its own advancement, and a large team steadily commits a million “units” of optimization power (whatever that means–Bostrom is never particularly clear on the definition of this), a minute linear dependence of recalcitrance on intelligence limits the takeoff to linear speed.
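
The same sort of sketch for the second model makes the contrast concrete: with recalcitrance linear in intelligence, even generous (and again purely illustrative) parameters produce growth that is only linear in the long run. Comparing its printed value with the previous sketch’s shows the difference.

# Euler integration of dI/dt = (a_o * I + b_o) / (a_r * I + b_r),
# where recalcitrance now grows linearly with intelligence.
a_o, b_o = 0.5, 1.0
a_r, b_r = 0.01, 1.0             # even a small a_r changes the qualitative behavior
I, dt = 1.0, 0.01
for _ in range(3000):            # integrate out to t = 30
    I += dt * (a_o * I + b_o) / (a_r * I + b_r)
print(f"I(30) ~ {I:.3g}")
# For large I the growth rate approaches a_o / a_r, i.e. linear rather than
# exponential growth in t.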

Are there reasons to think that recalcitrance might increase as intelligence increases? Prima facie, yes. Here’s a simple thought experiment: suppose there is some distribution of intelligence algorithm advances available in nature, and some of them are harder to achieve than others. A system that dedicates itself to advancing its own intelligence, knowing that it gets more optimization power as it gets more intelligent, might start by finding the “low-hanging fruit” of cognitive enhancement. But as it picks the low-hanging fruit, it is left with only the harder discoveries. Therefore, recalcitrance increases as the system grows more intelligent.

This is not a decisive argument against fast superintelligence takeoff and the possibility of a decisively strategic superintelligent singleton. Above is just an argument about why it is important to consider recalcitrance carefully when making claims about takeoff speed, and to counter what I believe is a bias in Bostrom’s work towards considering unrealistically low recalcitrance levels.

In future work, I will analyze the kinds of instrumental intelligence tasks, like prediction and planning, that we have identified as being at the core of Bostrom’s superintelligence argument. The question we need to ask is: does the recalcitrance of prediction tasks increase as the agent performing them becomes better at prediction? And likewise for planning. If prediction and planning are the two fundamental components of means-ends reasoning, and both have recalcitrance that increases significantly with the intelligence of the agent performing them, then we have reason to reject Bostrom’s core argument and assign a very low probability to the doomsday scenario that occupies much of Bostrom’s imagination in Superintelligence. If this is the case, that suggests we should be devoting resources to anticipating what he calls multipolar scenarios, where no intelligent system has a decisive strategic advantage, instead.


by Sebastian Benthall at August 24, 2015 11:25 PM

August 23, 2015

Ph.D. student

Instrumentality run amok: Bostrom and Instrumentality

Narrowing our focus onto the crux of Bostrom’s argument, we can see how tightly it is bound to a much older philosophical notion of instrumental reason. This comes to the forefront in his discussion of the orthogonality thesis (p.107):

The orthogonality thesis
Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.

Bostrom goes on to clarify:

Note that the orthogonality thesis speaks not of rationality or reason, but of intelligence. By “intelligence” we here mean something like skill at prediction, planning, and means-ends reasoning in general. This sense of instrumental cognitive efficaciousness is most relevant when we are seeking to understand what the causal impact of a machine superintelligence might be.

Bostrom maintains that the generality of instrumental intelligence, which I would argue is evinced by the generality of computing, gives us a way to predict how intelligent systems will act. Specifically, he says that an intelligent system (and specifically a superintelligent one) might be predictable because of its design, because of its inheritance of goals from a less intelligent system, or because of convergent instrumental reasons. (p.108)

Return to the core logic of Bostrom’s argument. The existential threat posed by superintelligence is simply that the instrumental intelligence of an intelligent system will invest in itself and overwhelm any ability by us (its well-intentioned creators) to control its behavior through design or inheritance. Bostrom thinks this is likely because instrumental intelligence (“skill at prediction, planning, and means-ends reasoning in general”) is a kind of resource or capacity that can be accumulated and put to other uses more widely. You can use instrumental intelligence to get more instrumental intelligence; why wouldn’t you? The doomsday prophecy of a fast takeoff superintelligence achieving a decisive strategic advantage and becoming a universe-dominating singleton depends on this internal cycle: instrumental intelligence investing in itself and expanding exponentially, assuming low recalcitrance.

This analysis brings us to a significant focal point. The critical missing formula in Bostrom’s argument is (specifically) the recalcitrance function of instrumental intelligence. This is not the same as recalcitrance with respect to “general” intelligence or even “super” intelligence. Rather, what’s critical is how much a process dedicated to “prediction, planning, and means-ends reasoning in general” can improve its own capacities at those things autonomously. The values of this recalcitrance function will bound the speed of superintelligence takeoff. These bounds can then inform the optimal allocation of research funding towards anticipation of future scenarios.


In what I hope won’t distract from the logical analysis of Bostrom’s argument, I’d like to put it in a broader context.

Take a minute to think about the power of general purpose computing and the impact it has had on the past hundred years of human history. As the earliest digital computers were informed by notions of artificial intelligence (c.f. Alan Turing), we can accurately say that the very machine I use to write this text, and the machine you use to read it, are the result of refined, formalized, and materialized instrumental reason. Every programming language is a level of abstraction over a machine that has no ends in itself, but which serves the ends of its programmer (when it’s working). There is a sense in which Bostrom’s argument is not about a near future scenario but rather is just a description of how things already are.

Our very concepts of “technology” and “instrument” are so related that it can be hard to see any distinction at all. (c.f. Heidegger, “The Question Concerning Technology“) Bostrom’s equating of instrumentality with intelligence is a move that makes more sense as computing becomes ubiquitously part of our experience of technology. However, if any instrumental mechanism can be seen as a form of intelligence, that lends credence to panpsychist views of cognition as life. (c.f. the Santiago theory)

Meanwhile, arguably the genius of the market is that it connects ends (through consumption or “demand”) with means (through manufacture and services, or “supply”) efficiently, bringing about the fruition of human desire. If you replace “instrumental intelligence” with “capital” or “money”, you get a familiar critique of capitalism as a system driven by capital accumulation at the expense of humanity. The analogy with capital accumulation is worthwhile here. Much as in Bostrom’s “takeoff” scenarios, we can see how capital (in the modern era, wealth) is reinvested in itself and grows at an exponential rate. Variable rates of return on investment lead to great disparities in wealth. We today have a “multipolar scenario” as far as the distribution of capital is concerned. At times people have advocated for an economic “singleton” through a planned economy.

It is striking that contemporary analytic philosopher and futurist Nick Bostrom contemplates the same malevolent force in his apocalyptic scenario as does Max Horkheimer in his 1947 treatise “Eclipse of Reason“: instrumentality run amok. Whereas Bostrom concerns himself primarily with what is literally a machine dominating the world, Horkheimer sees the mechanism of self-reinforcing instrumentality as pervasive throughout the economic and social system. For example, he sees engineers as loci of active instrumentalism. Bostrom never cites Horkheimer, let alone Heidegger. That there is a convergence of different philosophical sub-disciplines on the same problem suggests that there are convergent ultimate reasons which may triumph over convergent instrumental reasons in the end. The question of what these convergent ultimate reasons are, and what their relationship to instrumental reasons is, is a mystery.


by Sebastian Benthall at August 23, 2015 06:10 PM

August 21, 2015

Ph.D. student

Further distillation of Bostrom’s Superintelligence argument

Following up on this outline of the definitions and core argument of Bostrom’s Superintelligence, I will try to narrow in on the key mechanisms the argument depends on.

At the heart of the argument are a number of claims about instrumentally convergent values and self-improvement. It’s important to distill these claims to their logical core because their validity affects the probability of outcomes for humanity and the way we should invest resources in anticipation of superintelligence.

There are a number of ways to tighten Bostrom’s argument:

Focus the definition of superintelligence. Bostrom leads with the provocative but fuzzy definition of superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.” But the overall logic of his argument makes it clear that the domain of interest does not necessarily include violin-playing or any number of other activities. Rather, the domains necessary for a Bostrom superintelligence explosion are those that pertain directly to improving one’s own intellectual capacity. Bostrom speculates about these capacities in two ways. In one section he discusses the “cognitive superpowers”, domains that would quicken a superintelligence takeoff. In another section he discusses convergent instrumental values, values that agents with a broad variety of goals would converge on instrumentally.

  • Cognitive Superpowers
    • Intelligence amplification
    • Strategizing
    • Social manipulation
    • Hacking
    • Technology research
    • Economic productivity
  • Convergent Instrumental Values
    • Self-preservation
    • Goal-content integrity
    • Cognitive enhancement
    • Technological perfection
    • Resource acquisition

By focusing on these traits, we can start to see that Bostrom is not really worried about what has been termed an “Artificial General Intelligence” (AGI). He is concerned with a very specific kind of intelligence with certain capacities to exert its will on the world and, most importantly, to increase its power over nature and other intelligent systems rapidly enough to attain a decisive strategic advantage. Which leads us to a second way we can refine Bostrom’s argument.

Closely analyze recalcitrance. Recall that Bostrom speculates that the condition for a fast takeoff superintelligence, assuming that the system engages in “intelligence amplification”, is constant or lower recalcitrance. A weakness in his argument is his lack of in-depth analysis of this recalcitrance function. I will argue that for many of the convergent instrumental values and cognitive superpowers at the core of Bostrom’s argument, it is possible to be much more precise about system recalcitrance. This analysis should allow us to determine to a greater extent the likelihood of singleton vs. multipolar superintelligence outcomes.

For example, it’s worth noting that a number of the “superpowers” are explicitly in the domain of the social sciences. “Social manipulation” and “economic productivity” are both vastly complex domains of research in their own right. Each may well have bounds about how effective an intelligent system can be at them, no matter how much “optimization power” is applied to the task. The capacities of those manipulated to understand instructions is one such bound. The fragility or elasticity of markets could be another such bound.

For intelligence amplification, strategizing, technological research/perfection, and cognitive enhancement in particular, there is a wealth of literature in artificial intelligence and cognitive science that addresses the technical limits of these domains. Such technical limitations are a natural source of recalcitrance and an impediment to fast takeoff.


by Sebastian Benthall at August 21, 2015 07:42 PM

Bostrom’s Superintelligence: Definitions and core argument

I wanted to take the opportunity to spell out what I see as the core definitions and argument of Bostrom’s Superintelligence as a point of departure for future work. First, some definitions:

  • Superintelligence. “We can tentatively define a superintelligence as any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.” (p.22)
  • Speed superintelligence. “A system that can do all that a human intellect can do, but much faster.” (p.53)
  • Collective superintelligence. “A system composed of a large number of smaller intellects such that the system’s overall performance across many very general domains vastly outstrips that of any current cognitive system.” (p.54)
  • Quality superintelligence. “A system that is at least as fast as a human mind and vastly qualitatively smarter.” (p.56)
  • Takeoff. The event of the emergence of a superintelligence. The takeoff might be slow, moderate, or fast, depending on the conditions under which it occurs.
  • Optimization power and Recalcitrance. Bostrom proposes that we model the speed of superintelligence takeoff as: Rate of change in intelligence = Optimization power / Recalcitrance. Optimization power refers to the effort of improving the intelligence of the system. Recalcitrance refers to the resistance of the system to being optimized. (p.65, pp.75-77) A worked version of this model follows the list.
  • Decisive strategic advantage. The level of technological and other advantages sufficient to enable complete world domination. (p.78)
  • Singleton. A world order in which there is at the global level one decision-making agency. (p.78)
  • The wise-singleton sustainability threshold. “A capability set exceeds the wise-singleton threshold if and only if a patient and existential risk-savvy system with that capability set would, if it faced no intelligent opposition or competition, be able to colonize and re-engineer a large part of the accessible universe.” (p.100)
  • The orthogonality thesis. “Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.” (p.107)
  • The instrumental convergence thesis. “Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent’s goal being realized for a wide range of final goals and a wide range of situations, implying that these instrumental values are likely to be pursued by a broad spectrum of situated intelligent agents.” (p.109)
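
To see why constant recalcitrance matters so much, here is a minimal worked version of the optimization power / recalcitrance model (my notation; the assumption that optimization power is a constant fraction of the system’s intelligence is the one Bostrom entertains for the fast-takeoff scenario and reappears as premise 2 below). If optimization power is O(I) = cI and recalcitrance R is constant, then

$$\frac{dI}{dt} = \frac{O(I)}{R} = \frac{c}{R}\,I \quad\Longrightarrow\quad I(t) = I(0)\,e^{(c/R)t},$$

i.e. exponential growth in intelligence. If recalcitrance instead grows in proportion to intelligence, R(I) = kI, the same equation gives dI/dt = c/k: merely linear growth.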

Bostrom’s core argument in the first eight chapters of the book, as I read it, is this:

  1. Intelligent systems are already being built and expanded on.
  2. If some constant proportion of a system’s intelligence is turned into optimization power, and the recalcitrance of the system stays constant or falls, then the intelligence of the system will increase at an exponential rate. This is a fast takeoff.
  3. Recalcitrance is likely to be lower for machine intelligence than human intelligence because of the physical properties of artificial computing systems.
  4. An intelligent system is likely to invest in its own intelligence because of the instrumental convergence thesis. Improving intelligence is an instrumental goal given a broad spectrum of other goals.
  5. In the event of a fast takeoff, it is likely that the superintelligence will get a decisive strategic advantage, because of a first-mover advantage.
  6. Because of the instrumental convergence thesis, we should expect a superintelligence with a decisive strategic advantage to become a singleton.
  7. Machine superintelligences, which are more likely to take off fast and become singletons, are not likely to create nice outcomes for humanity by default.
  8. A superintelligent singleton is likely to be above the wise-singleton threshold. Hence the fate of the universe and the potential of humanity is at stake.

Having made this argument, Bostrom goes on to discuss ways we might anticipate and control the superintelligence as it becomes a singleton, thereby securing humanity.


by Sebastian Benthall at August 21, 2015 12:02 AM

August 16, 2015

Ph.D. student

And now for something completely different: Superintelligence and the social sciences

This semester I’ll be co-organizing, with Mahendra Prasad, a seminar on the subject of “Superintelligence and the Social Sciences”.

How I managed to find myself in this role is a bit of a long story. But as I’ve had a longstanding curiosity about this topic, I am glad to be putting energy into the seminar. It’s a great opportunity to get exposure to some of the very interesting work done by MIRI on this subject. It’s also a chance to thoroughly investigate (and critique) Bostrom’s book Superintelligence: Paths, Dangers, Strategies.

I find the subject matter perplexing because in many ways it forces the very cultural and intellectual clash that I’ve been preoccupied with elsewhere on this blog: the failure of social scientists and engineers to communicate. Or, perhaps, the failure of qualitative researchers and quantitative researchers to communicate. Whatever you want to call it.

Broadly, the question at stake is: what impact will artificial intelligence have on society? This question is already misleading since, in the imagination of most people who haven’t been trained in the subject, “artificial intelligence” refers to something of a science fiction scenario, whereas to a practitioner, “artificial intelligence” is, basically, just software. Just as the press went wild last year speculating about “algorithms”, by which it meant software, so too is the press excited about artificial intelligence, which is just software.

But the concern that software is responsible for more and more of the activity in the world and that it is in a sense “smarter than us”, and especially the fear that it might become vastly smarter than us (i.e. turning into what Bostrom calls a “superintelligence”), is pervasive enough to drive research funding into topics like “AI Safety”. It also is apparently inspiring legal study into the regulation of autonomous systems. It may also have implications for what is called, vaguely, “social science”, though increasingly it seems like nobody really knows what that is.

There is a serious epistemological problem here. Some researchers are trying to predict, or forewarn us of, the societal impact of agents that are by assumption beyond their comprehension, on the premise that they may come into existence at any moment.

This is fascinating but one has to get a grip.


by Sebastian Benthall at August 16, 2015 08:19 PM

August 04, 2015

MIMS 2015

Metafarce Update -> systemd, man pages, and TLS

I’ve recently had time to update the guts of metafarce.com. This post is about the updates to those guts, including what I tried that didn’t work out so well. The first section is full of personal opinion about the state of free UNIX OSes.[1] The second section concerns my adventures in getting TLS to work, and thoughts on the state of free TLS certificate signing services.

Background

I wanted to have IPv6 connectivity, DNSSEC and TLS for metafarce.com and a few other domains I host. The provider I had been using for VPS did not offer IPv6, so I found a VPS provider that did. The provider I had been using for DNS did not support DNSSEC, so I found a DNS provider that did.

Switching VPS providers meant I had to set up a new machine anyway. I had been running Debian for years, but I decided to switch to OpenBSD. My Debian VPS had been fine over the years. I kept it updated with apt-get and generally never had any major problems with it. The next section deals with why I switched.

Because Reasons

Actually, two reasons.

The first reason is because of systemd. I simply didn’t want to deal with it. I didn’t want to learn it, I didn’t see the value in it, and it has crappy documentation. This isn’t me saying systemd is crap. I don’t know if it’s crap because I haven’t spent any time evaluating it. This is me saying I don’t care about systemd, and it isn’t worth my time to investigate. There are other places on the web where one can argue the [de]merits of systemd; this is not one of them.

One of the key things missing from the assorted arguments surrounding systemd is historical context. Many of the systemd combatants seem unaware of how sacred init systems are to UNIX folk. One of the first big splits in UNIX history was between those who wanted a BSD-style init and those who wanted a SysV-style init. There is a long history of UNIX folk arguing about how to start their OS. However, I saw very little recognition of that fact in arguments for/against systemd.

The second reason is because Debian man pages suck. Debian probably has the highest quality man pages of any Linux distro, but they still suck. They’re often outdated, incomplete, incorrect, and it doesn’t seem like Linux users care all that much that their man pages suck. Most users only read man pages during troubleshooting, and then only after failing to find their solution on the web. I read man pages for every application I install. I want to know how the application works, what files it uses, signals it accepts, etc.

The BSD UNIXes have excellent man pages, and they get the attention they deserve during release cycles. Unlike most Linux distributions, the BSD UNIXes list updates to man pages in changelogs and treat them as development work on par with software changes. This is as it should be. Documentation acts as a kind of contract between user and programmer. It sets user expectations. If a man page says a program should behave in a certain fashion and the program doesn’t, then we know it’s a bug.

There is a trend in the UNIX world to think man pages are outdated. Some newer UNIX applications don’t even include man pages. This is stupid. Documentation is part of the program, and should not be relegated to an afterthought. Also, you might not always have the web when troubleshooting.

TLS and StartSSL

Metafarce.com and the other domains I run on this VPS now have both IPv6 and DNSSEC. Metafarce does not yet have TLS (i.e. https) because I refuse to pay for it. Startssl.com offers free certificates, so in theory I should be able to get one for free. The problem is that I cannot convince StartSSL that I control metafarce.com. To successfully validate that a user owns a domain, the user must have access to an email address in the whois record, OR have access to one of postmaster@, hostmaster@, or webmaster@ for that domain.

I don’t control any of the email addresses in my whois record. It’s not that I use a whois privacy service; my registrar simply doesn’t allow me to edit them. I’m also not willing to create an MX record for metafarce.com and then set up mail forwarding for postmaster@, hostmaster@, or webmaster@. Therefore I cannot convince StartSSL that I control metafarce.com. I shouldn’t be in this situation. We have DNS SOA records for reasons, and one of those reasons is to host the zone’s admin email address. At the very least the address listed in metafarce.com’s SOA record should be available to use for domain validation purposes.
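
As a minimal sketch of the kind of lookup I have in mind, here is how one could pull the admin mailbox out of a zone’s SOA record by shelling out to dig (assuming dig is installed; the parsing is simplified and ignores escaped dots in the mailbox local part):

    # Sketch: extract the zone admin mailbox from the SOA RNAME field.
    # Assumes `dig` is available; parsing is simplified for illustration.
    import subprocess

    def soa_admin_email(domain):
        out = subprocess.check_output(["dig", "+short", "SOA", domain], text=True)
        # +short SOA output: MNAME RNAME SERIAL REFRESH RETRY EXPIRE MINIMUM
        rname = out.split()[1].rstrip(".")
        # RNAME encodes the mailbox with '@' replaced by '.', so
        # hostmaster.example.com stands for hostmaster@example.com.
        local, _, host = rname.partition(".")
        return f"{local}@{host}"

    print(soa_admin_email("metafarce.com"))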

Also, how do they know the DNS domain controller will be the only one who has access to these email addresses? The list, while not arbitrary, is not forcibly reserved in all mail setups.[2] There are plenty of email-accepting domains that forward these addresses straight to /dev/null.

Another method I have seen used to confirm control of a zone is to create a TXT record with a unique string. StartSSL could provide me with a unique string; I would then add a TXT record with that string as its value. This method assumes that someone who can create TXT records for a domain controls the domain, which is probably a fair assumption.
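
Here is a rough sketch of what that check could look like from the CA’s side, again shelling out to dig; the record name and token below are hypothetical examples, not anything StartSSL actually uses:

    # Sketch: check that a CA-supplied token was published in a TXT record.
    # The record name and token here are hypothetical examples.
    import subprocess

    def txt_challenge_passed(domain, expected_token, record="_validation"):
        out = subprocess.check_output(
            ["dig", "+short", "TXT", f"{record}.{domain}"], text=True
        )
        # dig prints each TXT record on its own line, wrapped in quotes.
        values = {line.strip().strip('"') for line in out.splitlines()}
        return expected_token in values

    print(txt_challenge_passed("metafarce.com", "9f3c2a-example-token"))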

I think StartSSL has chosen a poor method for tying users to domains. Whois records should not be relied upon as a method for proving control. Not only does this break for people who use whois privacy services, but many users cannot directly edit their whois record and don’t have the skills or resources to set up email forwarding for their domain.

The outcome of all this is that I don’t support https for metafarce.com. Without having my cert signed by a CA, users have to wade through piles of buttons and dialogs that scare them away. Thus it remains unencrypted.[3] Proving that a given user controls a given domain is a tough problem, and I don’t mean to suggest otherwise. StartSSL offers a free signing service and they should be commended for it. I just hope the situation improves so that I and others can start hosting more content over TLS.

Let’s Encrypt to the Rescue

Let’s Encrypt is a soon-to-be-launched certificate authority run by the Internet Security Research Group (ISRG). They’re a public benefit corporation backed by a few concerned corporate sponsors and the EFF. They’re going to sign web TLS certs for free at launch, which is great in and of itself. Even greater is the Internet draft they’ve written for their new automagic TLS cert creation and signing. We’ll see how it works out, but if they get it right this will be a huge boon for TLS adoption. At the very least I can then start running TLS everywhere without having to pay for it.

  1. I use the term UNIX very generally, as a super-category: any OS that embodies the concepts of UNIX is a type of UNIX. Linux, Minix, MacOSX, *BSD, and Solaris are all types of UNIX. I’m not sure about QNX, but Windows and VxWorks are definitely not UNIX.

  2. RFC 2142 does actually reserve these addresses, but that doesn’t mean mail admins always honor the reservation.

  3. Another site I host on this VPS, synonomic.com, supports TLS. The cert for synonomic.com is not signed by any CA, so the user has to click through some scary-looking warnings in order to view the content. The cert is guaranteed by DANE to be for synonomic.com, yet no browsers currently support DANE out of the box.

Metafarce Update -> systemd, man pages, and TLS was originally published by Andrew McConachie at Metafarce on August 04, 2015.

by Andrew McConachie (andrewm@ischool.berkeley.edu) at August 04, 2015 07:00 AM

July 29, 2015

Ph.D. student

intelligibility

The example of Arendt’s dismissal of scientific discourse from political discussion underscores a much deeper political problem: a lack of intelligibility.

Every language is intelligible to some people and not to others. This is obviously true in the case of major languages like English and Chinese. It is less obvious but still a problem with different dialects of a language. It becomes a source of conflict when there is a lack of intelligibility between the specialized languages of expertise or personal experience.

For many, mathematical formalism is unintelligible; it appears to be so for Arendt, and this disturbs her, as she locates politics in speech and wants there to be political controls on scientists. But how many scientists and mathematicians would find Arendt intelligible? She draws deeply on concepts from ancient Greek and Augustinian philosophy. Are these thoughts truly accessible? What about the intelligibility of the law, to non-lawyers? Or the intelligibility of spoken experiences of oppression to those who do not share such an experience?

To put it simply: people don’t always understand each other and this poses a problem for any political theory that locates justice in speech and consensus. Advocates of these speech-based politics are most often extraordinarily articulate and write persuasively about the need to curtail the power of any systems of control that they do not understand. They are unable to agree to a social contract that they cannot read.

But this persuasive speech is necessarily unable to account for the myriad mechanisms that are both conditions for the speech and unintelligible to the speaker. This includes the mechanisms of law and technology. There is a performative contradiction between these persuasive words and their conditions of dissemination, and this is reason to reject them.

Advocates of bureaucratic rule tend to be less eloquent, and those that create technological systems that replace bureaucratic functions even less so. Nevertheless each group is intelligible to itself and may have trouble understanding the other groups.

The temptation for any one segment of society to totalize its own understanding, dismissing other ways of experiencing and articulating reality as inessential or inferior, is so strong that it can be read in even great authors like Arendt. Ideological politics (as opposed to technocratic politics) is the conflict between groups expressing their interests as ideology.

The problem is that in order to function as it does at scale, modern society requires the cooperation of specialists. Its members are heterogeneous; this is the source of its flexibility and power. It is also the cause of ideological conflict between functional groups that should see themselves as part of a whole. Even if these members do see their interdependence in principle, their specialization makes them less intelligible. Articulation often involves different skills from action, and teaching to the uninitiated is another skill altogether. Meanwhile, the complexity of the social system expands as it integrates more diverse communities, reducing further the proportion understood by a single member.

There is still in some political discourse the ideal of deliberative consensus as the ground of normative or political legitimacy. Suppose, as seems likely, that this is impossible for the perfectly mundane and mechanistic reason that society is so complicated due to the demands of specialization that intelligibility among its constituents is never going to happen.

What then?


by Sebastian Benthall at July 29, 2015 05:44 AM

July 28, 2015

MIMS 2015
Ph.D. student

the state and the household in Chinese antiquity

It’s worthwhile in comparison with Arendt’s discussion of Athenian democracy to consider the ancient Chinese alternative. In Alfred Huang’s commentary on the I Ching, we find this passage:

The ancient sages always applied the principle of managing a household to governing a country. In their view, a country was simply a big household. With the spirit of sincerity and mutual love, one is able to create a harmonious situation anywhere, in any circumstance. In his Analects, Confucius says,

From the loving example of one household,
A whole state becomes loving.
From the courteous manner of one household,
A whole state becomes courteous.

Comparing the history of Europe and the rise of capitalistic bureaucracy with the history of China, where bureaucracy is much older, is interesting. I have comparatively little knowledge of the latter, but it is often said that China does not have the same emphasis on individualism that you find in the West. Security is considered much more important than Freedom.

The reminder that the democratic values proposed by Arendt and Horkheimer are culturally situated is an important one, especially as Horkheimer claims that free burghers are capable of producing art that expresses universal needs.


by Sebastian Benthall at July 28, 2015 02:38 AM

July 27, 2015

Ph.D. student

a refinement

If knowledge is situated, and scientific knowledge is the product of rational consensus among diverse constituents, then a social organization that unifies many different social units functionally will have a ‘scientific’ ideology or rationale that is specific to the situation of that organization.

In other words, the political ideology of a group of people will be part of the glue that constitutes the group. Social beliefs will be a component of the collective identity.

A social science may be the elaboration of one such ideology. Many have been. So social scientific beliefs are about capturing the conditions for the social organization that maintains those beliefs. (cf. Nietzsche on tablets of values)

There are good reasons to teach these specialized social sciences as a part of vocational training for certain functions. For example, people who work in finance or business can benefit from learning economics.

Only in an academic context does the professional identity of disciplinary affiliation matter. This academic political context creates great division and confusion that merely reflects the disorganization of the academic system.

This disorganization is fruitful precisely because it allows for individuality (cf. Horkheimer). However, it is also inefficient and easy to corrupt. Hmm.

Against this, not all knowledge is situated. Some is universal. Its universality is due to its pragmatic usefulness in technical design. Since technical design acts on everyone even when their own situated understanding does not include it, this kind of knowledge has universal ground (in violence, sadly, but maybe also in other ways).

The question is whether there is room anywhere in the technically correct understanding of social organization (something we might see in Beniger) for the articulation of what is supposed to be great and worthy of man (see Horkheimer).

I have thought for a long time that there is probably something like this describable in terms of complexity theory.


by Sebastian Benthall at July 27, 2015 04:22 AM

structuralism and/or functionalism

Previous entries detailing the arguments of Arendt, Horkheimer, and Beniger show these theorists have what you might call a structural functionalist bent. Society is conceived as a functional whole. There are units of organization within it. For Arendt, this social organization begins in the private household and expands to all of society. Horkheimer laments this as the triumph of mindless economic organization over genuine, valuable individuality.

Structuralism, let alone structural functionalism, is not in fashion in the social sciences. Purely speculatively, one reason for this might be that to the extent that society was organized to perform certain functions, more of those functions have been delegated to information processing infrastructure, as in Beniger’s analysis. That leaves “culture” more a domain of ephemerality and identity conflict, as activity in the sphere of economic production becomes, if not private, then opaque.

My empirical work on open source communities suggests (though certainly not conclusively) that these communities are organized more for functional efficiency than other kinds of social groups (including academics) are. I draw this inference from the degree disassortativity of the open source social networks. Disassortativity suggests the interaction of different kinds of people, which runs against homophilic patterns of social formation but seems essential for economic activity, where the interaction of specialists is what creates value.
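
For the curious, the measurement itself is simple. Here is a minimal sketch using the networkx library on a standard toy network (not my actual dataset); a negative coefficient indicates disassortative mixing:

    # Sketch: compute degree assortativity with networkx. Negative values
    # mean high-degree nodes tend to connect to low-degree nodes
    # (disassortative mixing).
    import networkx as nx

    # Zachary's karate club: a small social network that happens to be
    # disassortative (hubs connect mostly to peripheral members).
    G = nx.karate_club_graph()
    print(nx.degree_assortativity_coefficient(G))  # roughly -0.48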

Assuming that society in its entirety (!!) is very complex and not easily captured by a single grand theory, we can nevertheless distinguish different kinds of social organization and see how they theorize themselves. We can also map how they interact and what mechanisms mediate between them.


by Sebastian Benthall at July 27, 2015 03:37 AM

July 25, 2015

Ph.D. student

Land and gold (Arendt, Horkheimer)

I am thirty, still in graduate school, and not thrilled about the prospects of home ownership since all any of the professionals around me talk about is the sky-rocketing price of real estate around the critical American urban centers.

It is with a leisure afforded by graduate school that I am able to take the long view on this predicament. It is very cheap to spend one’s idle time reading Arendt, who has this to say about the relationship between wealth and property:

The profound connection between private and public, manifest on its most elementary level in the question of private property, is likely to be misunderstood today because of the modern equation of property and wealth on one side and propertylessness and poverty on the other. This misunderstanding is all the more annoying as both, property as well as wealth, are historically of greater relevance to the public realm than any other private matter or concern and have played, at least formally, more or less the same role as the chief condition for admission to the public realm and full-fledged citizenship. It is therefore easy to forget that wealth and property, far from being the same, are of an entirely different nature. The present emergence everywhere of actually or potentially very wealthy societies which at the same time are essentially propertyless, because the wealth of any single individual consists of his share in the annual income of society as a whole, clearly shows how little these two things are connected.

For Arendt, beginning with her analysis of ancient Greek society, property (landholding) is the condition of one’s participation in democracy. It is a place of residence and a source of one’s material fulfilment, which is a prerequisite to one’s free (because unnecessitated) participation in public life. This is contrasted with wealth, which is a feature of private life and is unpolitical. In ancient society, slaves could own wealth, but not property.

If we look at the history of Western civilization as a progression away from this rather extreme moment, we see the rise of social classes whose power is based not in landholding but in wealth. Industrialism and the economy based on private ownership of capital mark a critical transition in history. That capital is not bound to a particular location but rather is mobile across international boundaries is one of the things that characterizes global capitalism and brings it into tension with a geographically bounded democratic state. It is interesting that Jeffersonian democracy, designed with the assumption of landholding citizens, should predate industrial capitalism and be constitutionally unprepared for the result, but nevertheless be one of the models for other democratic governance structures throughout the world.

If private ownership of capital, not land, defines political power under capitalism, then wealth, not property, becomes the measure of one’s status and security. For a time, when wealth was, as a matter of international standard, exchangeable for gold, private ownership of gold could replace private ownership of land as the guarantee of one’s material security and thereby the grounds for one’s independent existence. This independent, free rationality has since Aristotle been the purpose (telos) of man.

In the United States, Franklin Roosevelt’s 1933 Executive Order 6102 forbade the private ownership of gold. The purpose of this was to free the Federal Reserve of the gold market’s constraint on increasing the money supply during the Great Depression.

A perhaps unexpected complaint against this political move comes from Horkheimer (Eclipse of Reason, 1947), who sees this as a further affront to individualism by capitalism.

The age of vast industrial power, by eliminating the perspectives of a stable past and future that grew out of ostensibly permanent property relations, is the process of liquidating the individual. The deterioration of his situation is perhaps best measured in terms of his utter insecurity as regards to his personal savings. As long as currencies were rigidly tied to gold, and gold could flow freely over frontiers, its value could shift only within narrow limits. Under present-day conditions the dangers of inflation, of a substantial reduction or complete loss of the purchasing power of his savings, lurks around the next corner. Private possession of gold was the symbol of bourgeois rule. Gold made the burgher somehow the successor of the aristocrat. With it he could establish security for himself and be reasonable sure that even after his death his dependents would not be completely sucked up by the economic system. His more or less independent position, based on his right to exchange goods and money for gold, and therefore on the relatively stable property values, expressed itself in the interest he took in the cultivation of his own personality–not, as today, in order to achieve a better career or for any professional reason, but for the sake of his own individual existence. The effort was meaningful because the material basis of the individual was not wholly unstable. Although the masses could not aspire to the position of the burgher, the presence of a relatively numerous class of individuals who were governed by interest in humanistic values formed the background for a kind of theoretical thought as well as for the type of manifestions in the arts that by virtue of their inherent truth express the needs of society as a whole.

Horkheimer’s historical arc, like that of many Marxists, appears to ignore its parallels in antiquity. Monetary policy in the Roman Empire, which used something like a gold standard, was not always straightforward. Inflation was sometimes a severe problem when generals would print money to pay the soldiers that supported their political coups. So it’s not clear that the modern economy is more unstable than gold- or land-based economies. However, the criticism that economic security is largely a matter of one’s continued participation in a larger system, and that there is little in the way of financial security besides this, holds. He continues:

The state’s restriction on the right to possess gold is the symbol of a complete change. Even the members of the middle class must resign themselves to insecurity. The individual consoles himself with the thought that his government, corporation, association, union, or insurance company will take care of him when he becomes ill or reaches the retiring age. The various laws prohibiting private possession of gold symbolize the verdict against the independent economic individual. Under liberalism, the beggar was always an eyesore to the rentier. In the age of big business both beggar and rentier are vanishing. There are no safety zones on society’s thoroughfares. Everyone must keep moving. The entrepreneur has become a functionary, the scholar a professional expert. The philosopher’s maxim, Bene qui latuit, bene vixit, is incompatible with the modern business cycles. Everyone is under the whip of a superior agency. Those who occupy the commanding positions have little more autonomy than their subordinates; they are bound by the power they wield.

In an academic context, it is easy to make a connection between Horkheimer’s concerns about gold ownership and tenure. Academic tenure is or was the refuge of the individual who could in theory develop themselves as individuals in obscurity. The price of this autonomy, which according to the philosophical tradition represents the highest possible achievement of man, is that one teaches. So, the developed individual passes on the values developed through contemplation and reflection to the young. The privatization of the university and the emphasis on teaching marketable skills that allow graduates to participate more fully in the economic system is arguably an extension of Horkheimer’s cultural apocalypse.

The counter to this is the claim that the economy as a whole achieves a kind of homeostasis that provides greater security than one whose value is bound to something stable and exogenous like gold or land. One’s savings are secure as long as the system doesn’t fail. Meanwhile, the price of access to cultural materials through which one might expand one’s individuality (e.g. videos of academic lectures, the arts, or music) decreases as a consequence of the pervasiveness of the economy. At this point one feels one has reached the limits of Horkheimer’s critique, which perhaps only sees one side of the story despite its sublime passion. We see echoes of it in contemporary feminist critique, which emphasizes how the demands of necessity are disproportionately borne by women and how this affects their role in the economy. That women have only relatively recently, in historical terms, been released from the private household into the public world (cf. Arendt again) situates them more precariously within the economic system.

What remains unclear (to me) is how one should conceive of society and values when there is an available continuum of work, opportunity, leisure, individuality, art, and labor under conditions of contemporary technological control. Specifically, the notion of inequality becomes more complicated when one considers that society has never been equal in the sense that is often aspired to in contemporary American society. This is largely because the notion of equality we use today draws from two distinct sources. The first is the equality of self-sufficient landholding men as they encounter each other freely in the polis. Or, equivalently, as self-sufficient goldholding men in something like the Habermasian bourgeois public sphere. The second is equality within society, which is economically organized and therefore requires specialization and managerial stratification. We can try to assure equality to members of society insofar as they are members of society, but not as to their function within society.


by Sebastian Benthall at July 25, 2015 11:30 PM