School of Information Blogs

August 30, 2015

MIMS 2012

Get Comfortable Sharing Your Shitty Work

After jamming with a friend, she commented that she felt emotionally spent afterwards. Not quite sure what she meant, I asked her to elaborate. She said that improvising music makes you feel vulnerable. You’ve got to put yourself out there, which opens you up to judgement and criticism.

And she’s right. In that moment I realized that being a designer trained me to get over that fear. I know I have to start somewhere shitty before I can get somewhere good. Putting myself and my ideas out there is part of that process. My work only becomes good through feedback and iteration.

So my advice to you, young designer, is to accept the fact that before your work becomes great, it’s going to be shitty. This will be hard at first. You’ll feel vulnerable. You’ll fear judgement. You’ll worry about losing the respect of your colleagues.

But get over it. We’ve all felt this way before. Just remember that we’re all in this together. We all want to produce great work for our customers. We all want to make great music together.

So get comfortable sharing your shitty work. You’ll start off discordant, but through the process of iteration and refinement you’ll eventually hit your groove.

by Jeff Zych at August 30, 2015 10:34 PM

August 28, 2015

Ph.D. student

The recalcitrance of prediction

We have identified how Bostrom’s core argument for a superintelligence explosion depends on a crucial assumption. An intelligence explosion will happen only if the kinds of cognitive capacities involved in instrumental reason are not recalcitrant to recursive self-improvement. If recalcitrance rises comparably with the system’s ability to improve itself, then the takeoff will not be fast. This significantly decreases the probability of decisively strategic singleton outcomes.

In this section I will consider the recalcitrance of intelligent prediction, which is one of the capacities that is involved in instrumental reason (another being planning). Prediction is a very well-studied problem in artificial intelligence and statistics and so is easy to characterize and evaluate formally.

Recalcitrance is difficult to formalize. Recall that in Bostrom’s formulation:

\frac{dI}{dt} = \frac{O(I)}{R(I)}

One difficulty in analyzing this formula is that the units are not specified precisely. What is a “unit” of intelligence? What kind of “effort” is the unit of optimization power? And how could one measure recalcitrance?

A benefit of looking at a particular intelligent task is that it allows us to think more concretely about what these terms mean. If we can specify which tasks are important to consider, then we can take the level of performance on those well-specified classes of problems as measures of intelligence.

Prediction is one such problem. In a nutshell, prediction comes down to estimating a probability distribution over hypotheses. Using the Bayesian formulation of statistical inference, we can represent the problem as:

P(H|D) = \frac{P(D|H) P(H)}{P(D)}

Here, P(H|D) is the posterior probability of a hypothesis H given observed data D. If one is following statistically optimal procedure, one can compute this value by taking the prior probability of the hypothesis P(H), multiplying it by the likelihood of the data given the hypothesis P(D|H), and then normalizing this result by dividing by the probability of the data over all models, P(D) = \sum_{i}P(D|H_i)P(H_i).
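
To make this concrete, here is a minimal sketch of that computation in Python for a toy two-hypothesis problem. The priors and likelihoods are made-up illustrative numbers, not anything drawn from a real model.

# Toy Bayesian update over two hypotheses (illustrative numbers only).
priors = {"H1": 0.7, "H2": 0.3}        # P(H)
likelihoods = {"H1": 0.2, "H2": 0.6}   # P(D|H) for some observed data D

# P(D) = sum over i of P(D|H_i) P(H_i)
evidence = sum(likelihoods[h] * priors[h] for h in priors)

# P(H|D) = P(D|H) P(H) / P(D)
posteriors = {h: likelihoods[h] * priors[h] / evidence for h in priors}
print(posteriors)  # {'H1': 0.4375, 'H2': 0.5625}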

Statisticians will justifiably argue whether this is the best formulation of prediction. And depending on the specifics of the task, the target value may well be some function of the posterior (such as the hypothesis with maximum likelihood) and the overall distribution may be secondary. These are valid objections that I would like to put to one side in order to get across the intuition of an argument.

What I want to point out is that if we look at the factors that affect performance on prediction problems, there are very few that could be subject to algorithmic self-improvement. If we think that part of what it means for an intelligent system to get more intelligent is to improve its ability at prediction (which Bostrom appears to believe), but improving predictive ability is not something that a system can do via self-modification, then that implies that the recalcitrance of prediction, far from being constant or lower, actually approaches infinity with respect to an autonomous system’s capacity for algorithmic self-improvement.

So, given the formula above, in what ways can an intelligent system improve its capacity to predict? We can enumerate them:

  • Computational accuracy. An intelligent system could be better or worse at computing the posterior probabilities. Since most of the algorithms that do this kind of computation do so with numerical approximation, there is the possibility of an intelligent system finding ways to improve the accuracy of this calculation.
  • Computational speed. There are faster and slower ways to compute the inference formula. An intelligent system could come up with a way to make itself compute the answer faster.
  • Better data. The success of inference is clearly dependent on what kind of data the system has access to. Note that “better data” is not necessarily the same as “more data”. If the data that the system learns from is from a biased sample of the phenomenon in question, then a successful Bayesian update could make its predictions worse, not better. Better data is data that is informative with respect to the true process that generated the data.
  • Better prior. The success of inference depends crucially on the prior probability assigned to hypotheses or models. A prior is better when it assigns higher probability to the true process that generates observable data, or to models that are ‘close’ to that true process. An important point is that priors can be bad in more than one way. The bias/variance tradeoff is a well-studied way of discussing this. Choosing a prior in machine learning involves a tradeoff between:
    1. Bias. The assignment of probability to models that skew away from the true distribution. An example of a biased prior would be one that gives positive probability to only linear models, when the true phenomenon is quadratic. Biased priors lead to underfitting in inference.
    2. Variance. The assignment of probability to models that are more complex than are needed to reflect the true distribution. An example of a high-variance prior would be one that assigns high probability to cubic functions when the data was generated by a quadratic function. The problem with high-variance priors is that they will overfit data by inferring from noise, which could be the result of measurement error or something else less significant than the true generative process.

    In short, the best prior is the correct prior, and any deviation from it increases error. The small simulation below illustrates the tradeoff.
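
Here is a small simulation, offered only as a sketch of the underfitting/overfitting behavior described above. It uses NumPy with arbitrary illustrative parameters: restricting the model to degree-1 polynomials plays the role of a biased prior, while allowing degree-9 polynomials plays the role of a high-variance one.

import numpy as np

rng = np.random.default_rng(0)
true_f = lambda x: 2.0 * x**2 - x + 1.0              # the true (quadratic) generative process

x_train = rng.uniform(-1, 1, 15)
y_train = true_f(x_train) + rng.normal(0, 0.3, 15)   # noisy observations
x_test = np.linspace(-1, 1, 100)

for degree in (1, 2, 9):
    coeffs = np.polyfit(x_train, y_train, degree)    # restricting the degree acts like choosing a prior
    err = np.mean((np.polyval(coeffs, x_test) - true_f(x_test)) ** 2)
    print(degree, round(err, 4))
# Typically: degree 1 underfits (bias), degree 9 overfits the noise (variance), degree 2 does best.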

Now that we have enumerated the ways in which an intelligent system may improve its power of prediction, which is one of the things that’s necessary for instrumental reason, we can ask: how recalcitrant are these factors to recursive self-improvement? How much can an intelligent system, by virtue of its own intelligence, improve on any of these factors?

Let’s start with computational accuracy and speed. An intelligent system could, for example, use some previously collected data and try variations of its statistical inference algorithm, benchmark their performance, and then choose to use the most accurate and fastest ones at a future time. Perhaps the faster and more accurate the system is at prediction generally, the faster and more accurately it would be able to engage in this process of self-improvement.

Critically, however, there is a maximum amount of performance that one can get from improvements to computational accuracy if the other factors are held constant. You can’t be more accurate than perfectly accurate. Therefore, at some point the recalcitrance of computational accuracy rises to infinity. Moreover, we would expect effort made at improving computational accuracy to exhibit diminishing returns. In other words, the recalcitrance of computational accuracy climbs (probably close to exponentially) with performance.

What is the recalcitrance of computational speed at inference? Here, performance is limited primarily by the hardware on which the intelligent system is implemented. In Bostrom’s account of the superintelligence explosion, he is ambiguous about whether and when hardware development counts as part of a system’s intelligence. What we can say with confidence, however, is that for any particular piece of hardware there will be a maximum computational speed attainable with it, and that recursive self-improvement to computational speed can at best approach and attain this maximum. At that maximum, further improvement is impossible and recalcitrance is again infinite.

What about getting better data?

Assuming an adequate prior and the computational speed and accuracy needed to process it, better data will always improve prediction. But it’s arguable whether acquiring better data is something that can be done by an intelligent system working to improve itself. Data collection isn’t something the intelligent system can do through self-modification alone, since it has to interact with the phenomenon of interest to get more data.

If we acknowledge that data collection is a critical part of what it takes for an intelligent system to become more intelligent, then that means we should shift some of our focus away from “artificial intelligence” per se and onto the ways in which data flows through society and the world. Regulations about data locality may well have more impact on the arrival of “superintelligence” than research into machine learning algorithms, now that we already have very fast, very accurate algorithms. I would argue that the recent rise in interest in artificial intelligence is due mainly to the availability of vast amounts of new data through sensors and the Internet. Advances in computational accuracy and speed (such as Deep Learning) have to catch up to this new availability of data and use new hardware, but data is the rate-limiting factor.

Lastly, we have to ask: can a system improve its own prior, if data, computational speed, and computational accuracy are constant?

I have to argue that it can’t do this in any systematic way, if we are looking at the performance of the system at the right level of abstraction. Potentially a machine learning algorithm could modify its prior if it sees itself as underperforming in some ways. But there is a sense in which any modification to the prior made by the system that is not a result of a Bayesian update is just part of the computational basis of the original prior. So recalcitrance of the prior is also infinite.

We have examined the problem of statistical inference and the ways that an intelligent system could improve its performance on this task. We identified four potential factors on which it could improve: computational accuracy, computational speed, better data, and a better prior. We determined that, contrary to the assumption of Bostrom’s hard takeoff argument, the recalcitrance of prediction is quite high, approaching infinity in the cases of computational accuracy, computational speed, and the prior. Only data collection appears to be flexibly recalcitrant. But data collection is not a feature of the intelligent system alone; it also depends on its context.

As a result, we conclude that the recalcitrance of prediction is too high for an intelligence explosion that depends on it to be fast. We also note that those concerned about superintelligent outcomes should shift their attention to questions about data sourcing and storage policy.


by Sebastian Benthall at August 28, 2015 07:01 PM

MIMS 2015

Adventures in DANE

This post will reflect on the relatively new DNS-based Authentication of Named Entities (DANE) protocol from the Internet Engineering Task Force (IETF). We will first explain how DANE works, talk about what DANE can and cannot do, then briefly discuss the future of Internet encryption standards in general before wrapping up.

What are DNSSEC and DANE?

DANE is defined in RFC 6698 and further clarified in RFC 7218. DANE depends entirely on DNSSEC, which is older and considerably more complicated. For our purposes, the only thing the reader need know about DNSSEC is that it solves the problem of trusting DNS responses. Simply put, DNSSEC ensures that DNS requests return responses that are cryptographically assured.

DANE builds on this assurance by hosting hashes of cryptographic keys in DNS. DNSSEC assures that what we see in DNS is exactly as it should be; DANE then exploits this assurance by providing a secondary trust network for cryptographic key verification. This secondary trust network is the DNS hierarchy.

Let’s look at an example. I have configured the test domain https://synonomic.com/ for HTTPS, TLS, DNSSEC and DANE. Let’s examine what this means.

If you visit https://synonomic.com/ with a modern web browser it will probably complain that it is untrusted, before asking you to create an exception. This is because synonymic.com’s TLS certificate is not signed by any of the Certificate Authorities (CAs) that your browser trusts. In setting up synonymic.com I created my own self-signed certificate, and didn’t bother to get it signed by a CA.1

Instead, I enabled DNSSEC for synonymic.com, then created a hash of my self-signed certificate and stuck it in a DNS TLSA record. TLSA records are where DANE hosts cryptographic information for a given service. If your browser supported DANE, it would download the TLS certificate for synonymic.com, compute its hash, then compare that against what is hosted in synonymic.com’s TLSA record. If the two hashes were the same it could trust the certificate presented by synonymic.com. If the two hashes were different then your browser would know something fishy was happening, and would not trust the certificate presented by the web server at synonymic.com.

If you’re on a UNIX system you can query the TLSA record for synonymic.com with the following command.

dig +multi _443._tcp.synonomic.com. TLSA

The answer should look something like this.

_443._tcp.synonomic.com. 21599 IN TLSA 3 0 2 (
                            D98DA7EE3816E8778CD41C619D868817EC2874CC3C80
                            D1CA25E7579465CDED2D6BD57CEB4C2D1943039EAB48
                            C6403619A83B0025C6CF807992C1196CB42EE386 )

Let’s break this down.

The top line repeats the name of the record (_443._tcp.synonomic.com.) you queried. Since different services on a single host can use different certificates, TLSA records include the transport protocol (tcp) and port number (443) in the record name. This is followed by three items generic to all DNS records: the TTL (21599), the class of the record (IN for Internet), and the name of the record type (TLSA).

After these we have four values specific to TLSA records: the certificate usage (3), the selector (0), the matching type (2), and finally the hash of synonymic.com’s TLS certificate (D98DA..).

The certificate usage field (3) can contain a value from 0 to 3. By specifying 3 we’re saying this record contains a hash of synonymic.com’s own TLS certificate. TLSA records can also be used to force a specific CA trust anchor. For example, if this value were 2 and the TLSA record contained the hash of CA StartSSL’s signing certificate, a supporting browser would require that synonymic.com’s TLS certificate be signed by the StartSSL CA.

The selector field (0) can have a value of 0 or 1 and simply states which part of the certificate is to be hashed (the full certificate or just its public key). It’s uninteresting for our discussion.

The matching type field (2) states which algorithm is used to compute the hash.2

Finally, we have the actual hash (D98DA..) of the TLS certificate.
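
To see how these fields come together on the client side, here is a minimal sketch of the check for a usage-3, selector-0, matching-type-2 record like the one above. It is only an illustration: it uses the third-party dnspython package and Python’s ssl module, and it does not validate the DNSSEC signatures that DANE actually depends on.

import ssl, hashlib, binascii
import dns.resolver   # third-party "dnspython" package

host, port = "synonomic.com", 443

# Fetch the server's certificate and convert it to DER (selector 0 = full certificate).
pem = ssl.get_server_certificate((host, port))
der = ssl.PEM_cert_to_DER_cert(pem)
cert_hash = hashlib.sha512(der).hexdigest()          # matching type 2 = SHA2-512

# Compare against the TLSA record published for the service.
answers = dns.resolver.resolve(f"_{port}._tcp.{host}.", "TLSA")
for rdata in answers:
    published = binascii.hexlify(rdata.cert).decode()
    if rdata.usage == 3 and cert_hash == published:
        print("certificate matches the TLSA record")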

What can DANE do?

DANE provides a secondary chain of trust for TLS certificates. It enables TLS clients to compare the certificate presented to them by the server against what is hosted in DNS. This prevents common Man In The Middle (MITM) attacks, where an attacker intercepts a connection before it is established, presents its own certificate to both ends, and then sits between the victim end-points capturing and decrypting everything. DANE prevents this common MITM attack in the same way our current CA system does: by providing a secondary means of verifying the server’s presented certificate.

The problem with CAs is that they get subverted3, and since our browsers implicitly trust all of them equally, a single subverted CA means every site using HTTPS is theoretically vulnerable. For example, if the operator of www.example.com purchases a certificate from CA-X, and criminals break into CA-Y, a MITM attack could still succeed against visitors to www.example.com. TLS clients cannot know from which CA an operator has purchased their certificate. Thus an attacker could present a bad certificate to clients visiting www.example.com signed by CA-Y, and the clients would accept it as valid.

DANE has two answers to this type of attack. First, since a hash of the correct certificate is hosted in DNS, clients can compare the certificate presented by the server to what is hosted in DNS, and only proceed if they match. Second, DANE can lock a given DNS host to certificates issued by only one CA. So, referencing the above example, if CA-Y is penetrated it won’t matter, because DANE-compliant clients visiting www.example.com will know that only certificates issued by CA-X are valid for www.example.com.

What can DANE not do?

DANE cannot link a given service to a real world identity. For example, DANE cannot tell you that synonymic.com is the website of Andrew McConachie. Take a closer look at the certificate for synonymic.com. It’s issued to, and issued by, “Fake”. DANE don’t care. DANE only ensures that the TLS certificate presented by the web server at synonymic.com matches the hash in DNS. This won’t stop phishing attacks where a person is tricked into going to the wrong website, since that website’s TLS certificate would still match the hash in the TLSA record.

The way website operators tie identity to TLS certificates today is by getting special Extended Validation (EV) certificates from CAs. When a website owner requests an EV certificate from a CA, that CA goes through a more extensive identification process, the purpose of which is to directly link the DNS name to a real-world organization or individual. This is generally a rather thorough examination, and as such is more expensive than getting a normal certificate. EV certificates are also generally considered more secure than DV certificates, at least for HTTPS. If a website has an EV certificate, web browsers will display the name of the organization in the address bar.

Normal, or Domain Validated (DV), certificates make no claims regarding real-world identity. If you control a DNS name you can get a DV certificate for that name. In this way DV certificates and DANE are very similar in the levels of trust they afford. They only differ in what infrastructure backs up this trust.

Does DANE play well with others?

DANE does not obviate the need for other trust mechanisms; in fact, it was designed to play well with them. Contrary to what some people think, the purpose of DANE is not to do away with the CA system. It is to provide another chain of trust, based on the DNS hierarchy.

Certificate Transparency (CT) is another new standard from the IETF.4 It is standardized in RFC 6962. Simply put, CT establishes a public audit trail of issued TLS certificates that browsers and other clients can check against. As certificates are issued, participating CAs add them to a write-once audit trail. Certificates added to this audit trail cannot be removed or overwritten. TLS clients can then compare the certificate presented by a given website with what is in the audit trail. CT does not interfere with DANE; instead they complement one another. There is no reason today why a given site cannot be protected by our current CA system, DANE, and Certificate Transparency. The more the better. Redundancy and heterogeneity lead to more secure environments.5

The challenge moving forward for TLS clients will be in how these different models are used to determine trust and presented to the user. Right now Firefox shows a lock and, if it’s an EV certificate, the name of the organization in the address bar.6 This is all based on the CA system of trust. If DNSSEC/DANE and Certificate Transparency all gain adoption, browser manufacturers will have to rethink how trust information is presented to their users. This is not going to be easy. To some degree, boiling down all of this complexity to a single trust decision for the end user will be necessary, and trade-offs between the information presented and usability will be required.

Weak Adoption and the Future

DANE depends on DNSSEC to function, and DNSSEC adoption has been slow. However, in some ways DANE has been pushing DNSSEC adoption. This article has focused on using DANE for HTTPS, but DANE has actually seen the most deployment success in email.7 There has been significant uptake of DANE by email providers wishing to prevent so-called Server In The Middle (SITM) attacks. This type of attack occurs when a rogue mail server sits between two mail servers and captures all mail traffic between them. DANE averts this type of attack by allowing both Simple Mail Transfer Protocol (SMTP) endpoints to compare the presented certificate with what is in DNS. The IETF currently has an Internet Draft on using DANE and TLS to secure SMTP traffic.

I think we should expect adoption of DANE for email security to continue increasing before any significant adoption begins for HTTPS. Many technologies require some sort of ‘killer app’ that pushes their adoption, and I suspect many people see DANE as DNSSEC’s killer app. I hope this is true, because one of the best ways we can thwart both pervasive monitoring by nation states and illegal activities by criminals is increasing the adoption of TLS. Providing heterogeneous methods for assuring key integrity is also incredibly important. This article has argued that a future with multiple methods for ensuring key integrity is preferable to a single winner. Our ideal secure Internet should have multiple independent means of verifying TLS certificates; DANE is just one of them.

Please contact me at andrewm AT ischool DOT berkeley DOT edu if you discover inaccuracies in this article.

  1. I tried getting it signed by StartSSL, but that didn’t quite work out.

  2. Synonymic.com uses a SHA2-512 hash as this is the most secure algorithm that is currently supported. See RFC 7218 for a mapping of acronyms to algorithms.

  3. Three examples of CA breaches: Turk Trust, Diginotar, Comodo

  4. Check out CertificateTransparency.org for more info.

  5. OS diversity for intrusion tolerance: Myth or reality?

  6. CZ.nic offers a great browser plugin for DNSSEC and DANE.

  7. Jan Zorz at the Internet Society has been measuring DANE uptake in SMTP traffic in the Alexa top 1 million. Also, the NIST recently published a whitepaper on securing email using DANE. The whitepaper goes further, and suggests that email providers start using a recently proposed IETF Internet Draft on storing hashes of personal OpenPGP keys in DNS.

Adventures in DANE was originally published by Andrew McConachie at Metafarce on August 28, 2015.

by Andrew McConachie (andrewm@ischool.berkeley.edu) at August 28, 2015 07:00 AM

Ph.D. student

Nissenbaum the functionalist

Today in Classics we discussed Helen Nissenbaum’s Privacy in Context.

Most striking to me is that Nissenbaum’s privacy framework, contextual integrity theory, depends critically on a functionalist sociological view. A context is defined by its information norms and violations of those norms are judged according to their (non)accordance with the purposes and values of the context. So, for example, the purposes of an educational institution determine what are appropriate information norms within it, and what departures from those norms constitute privacy violations.

I used to think teleology was dead in the sciences. But recently I learned that it is commonplace in biology and popular in ecology. Today I learned that what amounts to a State Philosopher in the U.S. (Nissenbaum’s framework has been more or less adopted by the FTC) maintains a teleological view of social institutions. Fascinating! Even more fascinating is that this philosophy corresponds well enough to American law as to be informative of it.

From a “pure” philosophy perspective (which, I will admit, is simply a vice of mine), it’s interesting to contrast Nissenbaum with…oh, Horkheimer again. Nissenbaum sees ethical behavior (around privacy at least) as being behavior that is in accord with the purpose of one’s context. Morality is given by the system. For Horkheimer, the problem is that the system’s purposes subsume the interests of the individual, who is alone the agent able to determine what is right and wrong. Horkheimer was a founder of the Frankfurt School, arguably the intellectual ancestor of progressivism. Nissenbaum grounds her work in Burke and her theory is admittedly conservative. Privacy is violated when people’s expectations of privacy are violated–this is coming from U.S. law–and that means people’s contextual expectations carry more weight than an individual’s free-minded beliefs.

The tension could be resolved when free individuals determine the purpose of the systems they participate in. Indeed, Nissenbaum quotes Burke approving of established conventions as the accreted wisdom and rationale of past generations. The system is the way it is because it was chosen. (Or, perhaps, because it survived.)

Since Horkheimer’s objection to “the system” is that he believes instrumentality has run amok, thereby causing the system to serve a purpose nobody intended for it, his view is not inconsistent with Nissenbaum’s. Nissenbaum, building on Dworkin, sees contextual legitimacy as depending on some kind of political legitimacy.

The crux of the problem is the question of what information norms comprise the context in which political legitimacy is formed, and what purpose does this context or system serve?


by Sebastian Benthall at August 28, 2015 02:54 AM

August 27, 2015

Ph.D. student

The relationship between Bostrom’s argument and AI X-Risk

One reason why I have been writing about Bostrom’s superintelligence argument is because I am acquainted with what could be called the AI X-Risk social movement. I think it is fair to say that this movement is a subset of Effective Altruism (EA), a laudable movement whose members attempt to maximize their marginal positive impact on the world.

The AI X-Risk subset, which is a vocal group within EA, sees the emergence of a superintelligent AI as one of several risks that are notable because they could ruin everything. AI is considered to be a “global catastrophic risk”, unlike more mundane risks like tsunamis and bird flu. AI X-Risk researchers argue that because of the magnitude of the consequences of the risk they are trying to anticipate, they must raise more funding and recruit more researchers.

While I think this is noble, I think it is misguided for reasons that I have been outlining in this blog. I am motivated to make these arguments because I believe that there are urgent problems/risks that are conceptually adjacent (if you will) to the problem AI X-Risk researchers study, but that the focus on AI X-Risk in particular diverts interest away from them. In my estimation, as more funding has been put into evaluating potential risks from AI many more “mainstream” researchers have benefited and taken on projects with practical value. To some extent these researchers benefit from the alarmism of the AI X-Risk community. But I fear that their research trajectory is thereby distorted from where it could truly provide maximal marginal value.

My reason for targeting Bostrom’s argument for the existential threat of superintelligent AI is that I believe it’s the best defense of the AI X-Risk thesis out there. In particular, if valid, the argument should significantly raise the expected probability of an existentially risky AI outcome. For Bostrom, such an outcome is likely a natural consequence of advancement in AI research more generally, because of recursive self-improvement and convergent instrumental values.

As I’ve informally workshopped this argument I’ve come upon this objection: even if it is true that a superintelligent system would not, for systematic reasons, become an existentially risky singleton, that does not mean that somebody couldn’t develop such a superintelligent system in an unsystematic way. There is still an existential risk, even if it is much lower. And because existential risks are so important, surely we should prepare ourselves for even this low-probability event.

There is something inescapable about this logic. However, the argument applies equally well to all kinds of potential apocalypses, such as enormous meteors crashing into the earth and biowarfare produced zombies. Without some kind of accounting of the likelihood of these outcomes, it’s impossible to do a rational budgeting.

Moreover, I have to call into question the rationality of this counterargument. If Bostrom’s arguments are used in defense of the AI X-Risk position but then the argument is dismissed as unnecessary when it is challenged, that suggests that the AI X-Risk community is committed to their cause for reasons besides Bostrom’s argument. Perhaps these reasons are unarticulated. One could come up with all kinds of conspiratorial hypotheses about why a group of people would want to disingenuously spread the idea that superintelligent AI poses an existential threat to humanity.

The position I’m defending on this blog (until somebody convinces me otherwise–I welcome all comments) is that a superintelligent AI singleton is not a significantly likely X-Risk. Other outcomes that might be either very bad or very good, such as ones with many competing and cooperating superintelligences, are much more likely. I’d argue that it’s more or less what we have today, if you consider sociotechnical organizations as a form of collective superintelligence. This makes research into this topic not only impactful in the long run, but also relevant to problems faced by people now and in the near future.


by Sebastian Benthall at August 27, 2015 04:51 PM

August 25, 2015

Ph.D. student

Bostrom and Habermas: technical and political moralities, and the God’s eye view

An intriguing chapter that follows naturally from Nick Bostrom’s core argument is his discussion of machine ethics writ large. He asks: suppose one could install ethical principles into an omnipotent machine, trusting it with the future of humanity. What principles should we install?

What Bostrom accomplishes by positing his Superintelligence (which begins with something simply smarter than humans, and evolves over the course of the book into something that takes over the galaxy) is a return to what has been called “the God’s eye view”. Philosophers once attempted to define truth and morality according to the perspective of an omnipotent–often both transcendent and immanent–god. Through the scope of his work, Bostrom has recovered some of these old themes. He does this not only through his discussion of Superintelligence (and positing its existence in other solar systems already) but also through his simulation arguments.

The way I see it, one thing I am doing by challenging the idea of an intelligence explosion and its resulting in a superintelligent singleton is problematizing this recovery of the God’s Eye view. If your future world is governed by many sovereign intelligent systems instead of just one, then ethics are something that have to emerge from political reality. There is something irreducibly difficult about interacting with other intelligences and it’s from this difficulty that we get values, not the other way around. This sort of thinking is much more like Habermas’s mature ethical philosophy.

I’ve written about how to apply Habermas to the design of networked publics that mediate political interactions between citizens. What I built and offer as a toy example in that paper, @TheTweetserve, is simplistic but intended just as a proof of concept.

As I continue to read Bostrom, I expect a convergence on principles. “Coherent extrapolated volition” sounds a lot like a democratic governance structure with elected experts at first pass. The question of how to design a governance structure or institution that leverages artificial intelligence appropriately while legitimately serving its users motivates my dissertation research. My research so far has only scratched the surface of this problem.


by Sebastian Benthall at August 25, 2015 03:19 AM

August 24, 2015

Ph.D. student

Recalcitrance examined: an analysis of the potential for superintelligence explosion

To recap:

  • We have examined the core argument from Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies regarding the possibility of a decisively strategic superintelligent singleton–or, more glibly, an artificial intelligence that takes over the world.
  • With an eye to evaluating whether this outcome is particularly likely relative to other futurist outcomes, we have distilled the argument and in so doing have reduced it to a simpler problem.
  • That problem is to identify bounds on the recalcitrance of the capacities that are critical for instrumental reasoning. Recalcitrance is defined as the inverse of the rate of increase to intelligence per time per unit of effort put into increasing that intelligence. It is meant to capture how hard it is to make an intelligent system smarter, and in particular how hard it is for an intelligent system to make itself smarter. Bostrom’s argument is that if an intelligent system’s recalcitrance is constant or lower, then it is possible for the system to undergo an “intelligence explosion” and take over the world.
  • By analyzing how Bostrom’s argument depends only on the recalcitrance of instrumentality, and not on the recalcitrance of intelligence in general, we can get a firmer grip on the problem. In particular, we can focus on such tasks as prediction and planning. If we discover that these tasks are in fact significantly recalcitrant, that should reduce our expected probability of an AI singleton and consequently cause us to divert research funds to problems that anticipate other outcomes.

In this section I will look in further depth at the parts of Bostrom’s intelligence explosion argument about optimization power and recalcitrance. How recalcitrant must a system be for it to not be susceptible to an intelligence explosion?

This section contains some formalism. For readers uncomfortable with that, trust me: if the system’s recalcitrance is roughly proportional to the amount that the system is able to invest in its own intelligence, then the system’s intelligence will not explode. Rather, it will climb linearly. If the system’s recalcitrance is significantly greater than the amount that the system can invest in its own intelligence, then the system’s intelligence won’t even climb steadily. Rather, it will plateau.

To see why, recall from our core argument and definitions that:

Rate of change in intelligence = Optimization power / Recalcitrance.

Optimization power is the amount of effort that is put into improving the intelligence of the system. Recalcitrance is the resistance of that system to improvement. Bostrom presents this as a qualitative formula, then expands it more formally in subsequent analysis.

\frac{dI}{dt} = \frac{O(I)}{R}

Bostrom’s claim is that for instrumental reasons an intelligent system is likely to invest some portion of its intelligence back into improving its intelligence. So, by assumption we can model O(I) = \alpha I + \beta for some parameters \alpha and \beta, where 0 < \alpha < 1 and \beta represents the contribution of optimization power by external forces (such as a team of researchers). If recalcitrance is constant, e.g. R = k, then we can compute:

\Large \frac{dI}{dt} = \frac{\alpha I + \beta}{k}

Under these conditions, I will be exponentially increasing in time t. This is the “intelligence explosion” that gives Bostrom’s argument so much momentum. The explosion only gets worse if recalcitrance declines rather than staying constant.
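
To make the exponential claim explicit, the linear differential equation above has a standard closed-form solution (a textbook result, not something Bostrom derives). With initial intelligence I_0 at t = 0:

I(t) = \left(I_0 + \frac{\beta}{\alpha}\right) e^{\alpha t / k} - \frac{\beta}{\alpha}

The growth rate in the exponent is \alpha / k, so a larger reinvestment fraction or a lower (constant) recalcitrance both steepen the takeoff.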

In order to illustrate how quickly the “superintelligence takeoff” occurs under this model, I’ve plotted the above function plugging in a number of values for the parameters \alpha, \beta and k. Keep in mind that the y-axis is plotted on a log scale, which means that a roughly linear increase indicates exponential growth.

Plot of exponential takeoff rates

Modeled superintelligence takeoff where the rate of intelligence gain is linear in current intelligence and recalcitrance is constant. Slope in the log scale is determined by the alpha and k values.

It is true that in all the above cases, the intelligence function is exponentially increasing over time. The astute reader will notice that by my earlier claim \alpha cannot be greater than 1, and so one of the modeled functions is invalid. It’s a good point, but one that doesn’t matter. We are fundamentally just modeling intelligence expansion as something that is linear on the log scale here.

However, it’s important to remember that recalcitrance may also be a function of intelligence. Bostrom does not mention the possibility of recalcitrance increasing with intelligence. How sensitive to intelligence would recalcitrance need to be in order to prevent exponential growth in intelligence?

Consider the following model where recalcitrance is, like optimization power, linearly increasing in intelligence.

\frac{dI}{dt} = \frac{\alpha_o I + \beta_o}{\alpha_r I + \beta_r}

Now there are four parameters instead of three. Note this model is identical to the one above it when \alpha_r = 0. Plugging in several values for these parameters and plotting again with the y-scale on the log axis, we get:

Plot of takeoff when both optimization power and recalcitrance are linearly increasing in intelligence. Only when recalcitrance is unaffected by intelligence level is there an exponential takeoff. In the other cases, intelligence quickly plateaus on the log scale. No matter how much the system can invest in its own optimization power as a proportion of its total intelligence, it still only takes off at a linear rate.
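
A minimal numerical sketch of this contrast is below, with arbitrary illustrative parameters rather than the values used in the plots above.

def takeoff(alpha_o, beta_o, alpha_r, beta_r, t_max=10.0, dt=0.01):
    """Euler integration of dI/dt = (alpha_o*I + beta_o) / (alpha_r*I + beta_r)."""
    I, t = 1.0, 0.0
    while t < t_max:
        I += dt * (alpha_o * I + beta_o) / (alpha_r * I + beta_r)
        t += dt
    return I

# Constant recalcitrance (alpha_r = 0): intelligence grows exponentially (roughly 440 here).
print(takeoff(alpha_o=0.5, beta_o=1.0, alpha_r=0.0, beta_r=1.0))

# Recalcitrance linear in intelligence: growth is only roughly linear (roughly 7 here).
print(takeoff(alpha_o=0.5, beta_o=1.0, alpha_r=1.0, beta_r=1.0))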

The point of this plot is to illustrate how easily exponential superintelligence takeoff might be stymied by a dependence of recalcitrance on intelligence. Even in the absurd case where the system is able to invest a thousand times as much intelligence as it already has back into its own advancement, and a large team steadily commits a million “units” of optimization power (whatever that means–Bostrom is never particularly clear on the definition of this), a minute linear dependence of recalcitrance on intelligence limits the takeoff to linear speed.

Are there reasons to think that recalcitrance might increase as intelligence increases? Prima facie, yes. Here’s a simple thought experiment: suppose there is some distribution of intelligence algorithm advances available in nature, and some of them are harder to achieve than others. A system that dedicates itself to advancing its own intelligence, knowing that it gets more optimization power as it gets more intelligent, might start by finding the “low hanging fruit” of cognitive enhancement. But as it picks the low hanging fruit, it is left with only the harder discoveries. Therefore, recalcitrance increases as the system grows more intelligent.

This is not a decisive argument against fast superintelligence takeoff and the possibility of a decisively strategic superintelligent singleton. Above is just an argument about why it is important to consider recalcitrance carefully when making claims about takeoff speed, and to counter what I believe is a bias in Bostrom’s work towards considering unrealistically low recalcitrance levels.

In future work, I will analyze the kinds of instrumental intelligence tasks, like prediction and planning, that we have identified as being at the core of Bostrom’s superintelligence argument. The question we need to ask is: does the recalcitrance of prediction tasks increase as the agent performing them becomes better at prediction? And likewise for planning. If prediction and planning are the two fundamental components of means-ends reasoning, and both have recalcitrance that increases significantly with the intelligence of the agent performing them, then we have reason to reject Bostrom’s core argument and assign a very low probability to the doomsday scenario that occupies much of Bostrom’s imagination in Superintelligence. If this is the case, that suggests we should be devoting resources to anticipating what he calls multipolar scenarios, where no intelligent system has a decisive strategic advantage, instead.


by Sebastian Benthall at August 24, 2015 11:25 PM

August 23, 2015

Ph.D. student

Instrumentality run amok: Bostrom and Instrumentality

Narrowing our focus onto the crux of Bostrom’s argument, we can see how tightly it is bound to a much older philosophical notion of instrumental reason. This comes to the forefront in his discussion of the orthogonality thesis (p.107):

The orthogonality thesis
Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.

Bostrom goes on to clarify:

Note that the orthogonality thesis speaks not of rationality or reason, but of intelligence. By “intelligence” we here mean something like skill at prediction, planning, and means-ends reasoning in general. This sense of instrumental cognitive efficaciousness is most relevant when we are seeking to understand what the causal impact of a machine superintelligence might be.

Bostrom maintains that the generality of instrumental intelligence, which I would argue is evinced by the generality of computing, gives us a way to predict how intelligent systems will act. Specifically, he says that an intelligent system (and specifically a superintelligence) might be predictable because of its design, because of its inheritance of goals from a less intelligent system, or because of convergent instrumental reasons. (p.108)

Return to the core logic of Bostrom’s argument. The existential threat posed by superintelligence is simply that the instrumental intelligence of an intelligent system will invest in itself and overwhelm any ability by us (its well-intentioned creators) to control its behavior through design or inheritance. Bostrom thinks this is likely because instrumental intelligence (“skill at prediction, planning, and means-ends reasoning in general”) is a kind of resource or capacity that can be accumulated and put to other uses more widely. You can use instrumental intelligence to get more instrumental intelligence; why wouldn’t you? The doomsday prophecy of a fast takeoff superintelligence achieving a decisive strategic advantage and becoming a universe-dominating singleton depends on this internal cycle: instrumental intelligence investing in itself and expanding exponentially, assuming low recalcitrance.

This analysis brings us to a significant focal point. The critical missing formula in Bostrom’s argument is (specifically) the recalcitrance function of instrumental intelligence. This is not the same as recalcitrance with respect to “general” intelligence or even “super” intelligence. Rather, what’s critical is how much a process dedicated to “prediction, planning, and means-ends reasoning in general” can improve its own capacities at those things autonomously. The values of this recalcitrance function will bound the speed of superintelligence takeoff. These bounds can then inform the optimal allocation of research funding towards anticipation of future scenarios.


In what I hope won’t distract from the logical analysis of Bostrom’s argument, I’d like to put it in a broader context.

Take a minute to think about the power of general purpose computing and the impact it has had on the past hundred years of human history. As the earliest digital computers were informed by notions of artificial intelligence (c.f. Alan Turing), we can accurately say that the very machine I use to write this text, and the machine you use to read it, are the result of refined, formalized, and materialized instrumental reason. Every programming language is a level of abstraction over a machine that has no ends in itself, but which serves the ends of its programmer (when it’s working). There is a sense in which Bostrom’s argument is not about a near future scenario but rather is just a description of how things already are.

Our very concepts of “technology” and “instrument” are so related that it can be hard to see any distinction at all. (c.f. Heidegger, “The Question Concerning Technology“) Bostrom’s equating of instrumentality with intelligence is a move that makes more sense as computing becomes ubiquitously part of our experience of technology. However, if any instrumental mechanism can be seen as a form of intelligence, that lends credence to panpsychist views of cognition as life. (c.f. the Santiago theory)

Meanwhile, arguably the genius of the market is that it connects ends (through consumption or “demand”) with means (through manufacture and services, or “supply”) efficiently, bringing about the fruition of human desire. If you replace “instrumental intelligence” with “capital” or “money”, you get a familiar critique of capitalism as a system driven by capital accumulation at the expense of humanity. The analogy with capital accumulation is worthwhile here. Much as in Bostrom’s “takeoff” scenarios, we can see how capital (in the modern era, wealth) is reinvested in itself and grows at an exponential rate. Variable rates of return on investment lead to great disparities in wealth. We today have a “multipolar scenario” as far as the distribution of capital is concerned. At times people have advocated for an economic “singleton” through a planned economy.

It is striking that contemporary analytic philosopher and futurist Nick Bostrom contemplates the same malevolent force in his apocalyptic scenario as does Max Horkheimer in his 1947 treatise “Eclipse of Reason”: instrumentality run amok. Whereas Bostrom concerns himself primarily with what is literally a machine dominating the world, Horkheimer sees the mechanism of self-reinforcing instrumentality as pervasive throughout the economic and social system. For example, he sees engineers as loci of active instrumentalism. Bostrom never cites Horkheimer, let alone Heidegger. That there is a convergence of different philosophical sub-disciplines on the same problem suggests that there are convergent ultimate reasons which may triumph over convergent instrumental reasons in the end. The question of what these convergent ultimate reasons are, and what their relationship to instrumental reasons is, is a mystery.


by Sebastian Benthall at August 23, 2015 06:10 PM

August 21, 2015

Ph.D. student

Further distillation of Bostrom’s Superintelligence argument

Following up on this outline of the definitions and core argument of Bostrom’s Superintelligence, I will try to narrow in on the key mechanisms the argument depends on.

At the heart of the argument are a number of claims about instrumentally convergent values and self-improvement. It’s important to distill these claims to their logical core because their validity affects the probability of outcomes for humanity and the way we should invest resources in anticipation of superintelligence.

There are a number of ways to tighten Bostrom’s argument:

Focus the definition of superintelligence. Bostrom leads with the provocative but fuzzy definition of superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.” But the overall logic of his argument makes it clear that the domain of interest does not necessarily include violin-playing or any number of other activities. Rather, the domains necessary for a Bostrom superintelligence explosion are those that pertain directly to improving one’s own intellectual capacity. Bostrom speculates about these capacities in two ways. In one section he discusses the “cognitive superpowers”, domains that would quicken a superintelligence takeoff. In another section he discusses convergent instrumental values, values that agents with a broad variety of goals would converge on instrumentally.

  • Cognitive Superpowers
    • Intelligence amplification
    • Strategizing
    • Social manipulation
    • Hacking
    • Technology research
    • Economic productivity
  • Convergent Instrumental Values
    • Self-preservation
    • Goal-content integrity
    • Cognitive enhancement
    • Technological perfection
    • Resource acquisition

By focusing on these traits, we can start to see that Bostrom is not really worried about what has been termed an “Artificial General Intelligence” (AGI). He is concerned with a very specific kind of intelligence with certain capacities to exert its will on the world and, most importantly, to increase its power over nature and other intelligent systems rapidly enough to attain a decisive strategic advantage. Which leads us to a second way we can refine Bostrom’s argument.

Closely analyze recalcitrance. Recall that Bostrom speculates that the condition for a fast takeoff superintelligence, assuming that the system engages in “intelligence amplification”, is constant or lower recalcitrance. A weakness in his argument is his lack of in-depth analysis of this recalcitrance function. I will argue that for many of the convergent instrumental values and cognitive superpowers at the core of Bostrom’s argument, it is possible to be much more precise about system recalcitrance. This analysis should allow us to determine to a greater extent the likelihood of singleton vs. multipolar superintelligence outcomes.

For example, it’s worth noting that a number of the “superpowers” are explicitly in the domain of the social sciences. “Social manipulation” and “economic productivity” are both vastly complex domains of research in their own right. Each may well have bounds on how effective an intelligent system can be at them, no matter how much “optimization power” is applied to the task. The capacity of those manipulated to understand instructions is one such bound. The fragility or elasticity of markets could be another such bound.

For intelligence amplification, strategizing, technological research/perfection, and cognitive enhancement in particular, there is a wealth of literature in artificial intelligence and cognitive science that addresses the technical limits of these domains. Such technical limitations are a natural source of recalcitrance and an impediment to fast takeoff.


by Sebastian Benthall at August 21, 2015 07:42 PM

Bostrom’s Superintelligence: Definitions and core argument

I wanted to take the opportunity to spell out what I see as the core definitions and argument of Bostrom’s Superintelligence as a point of departure for future work. First, some definitions:

  • Superintelligence. “We can tentatively define a superintelligence as any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.” (p.22)
  • Speed superintelligence. “A system that can do all that a human intellect can do, but much faster.” (p.53)
  • Collective superintelligence. “A system composed of a large number of smaller intellects such that the system’s overall performance across many very general domains vastly outstrips that of any current cognitive system.” (p.54)
  • Quality superintelligence. “A system that is at least as fast as a human mind and vastly qualitatively smarter.” (p.56)
  • Takeoff. The event of the emergence of a superintelligence. The takeoff might be slow, moderate, or fast, depending on the conditions under which it occurs.
  • Optimization power and Recalcitrance. Bostrom proposes that we model the speed of superintelligence takeoff as: Rate of change in intelligence = Optimization power / Recalcitrance. Optimization power refers to the effort of improving the intelligence of the system. Recalcitrance refers to the resistance of the system to being optimized. (p.65, pp.75-77)
  • Decisive strategic advantage. The level of technological and other advantages sufficient to enable complete world domination. (p.78)
  • Singleton. A world order in which there is at the global level one decision-making agency. (p.78)
  • The wise-singleton sustainability threshold. “A capability set exceeds the wise-singleton threshold if and only if a patient and existential risk-savvy system with that capability set would, if it faced no intelligent opposition or competition, be able to colonize and re-engineer a large part of the accessible universe.” (p.100)
  • The orthogonality thesis. “Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.” (p.107)
  • The instrumental convergence thesis. “Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent’s goal being realized for a wide range of final goals and a wide range of situations, implying that these instrumental values are likely to be pursued by a broad spectrum of situated intelligent agents.” (p.109)

Bostrom’s core argument in the first eight chapters of the book, as I read it, is this:

  1. Intelligent systems are already being built and expanded on.
  2. If some constant proportion of a system’s intelligence is turned into optimization power, and the recalcitrance of the system is constant or lower, then the intelligence of the system will increase at an exponential rate. This will be a fast takeoff.
  3. Recalcitrance is likely to be lower for machine intelligence than human intelligence because of the physical properties of artificial computing systems.
  4. An intelligent system is likely to invest in its own intelligence because of the instrumental convergence thesis. Improving intelligence is an instrumental goal given a broad spectrum of other goals.
  5. In the event of a fast takeoff, it is likely that the superintelligence will get a decisive strategic advantage, because of a first-mover advantage.
  6. Because of the instrumental convergence thesis, we should expect a superintelligence with a decisive strategic advantage to become a singleton.
  7. Machine superintelligences, which are more likely to take off fast and become singletons, are not likely to create nice outcomes for humanity by default.
  8. A superintelligent singleton is likely to be above the wise-singleton threshold. Hence the fate of the universe and the potential of humanity is at stake.

Having made this argument, Bostrom goes on to discuss ways we might anticipate and control the superintelligence as it becomes a singleton, thereby securing humanity.


by Sebastian Benthall at August 21, 2015 12:02 AM

August 16, 2015

Ph.D. student

And now for something completely different: Superintelligence and the social sciences

This semester I’ll be co-organizing, with Mahendra Prasad, a seminar on the subject of “Superintelligence and the Social Sciences”.

How I managed to find myself in this role is a bit of a long story. But as I’ve had a longstanding curiosity about this topic, I am glad to be putting energy into the seminar. It’s a great opportunity to get exposure to some of the very interesting work done by MIRI on this subject. It’s also a chance to thoroughly investigate (and critique) Bostrom’s book Superintelligence: Paths, Dangers, Strategies.

I find the subject matter perplexing because in many ways it forces the very cultural and intellectual clash that I’ve been preoccupied with elsewhere on this blog: the failure of social scientists and engineers to communicate. Or, perhaps, the failure of qualitative researchers and quantitative researchers to communicate. Whatever you want to call it.

Broadly, the question at stake is: what impact will artificial intelligence have on society? This question is already misleading, since in the imagination of most people who haven't been trained in the subject, "artificial intelligence" refers to something of a science fiction scenario, whereas to practitioners, "artificial intelligence" is, basically, just software. Just as the press went wild last year speculating about "algorithms", by which it meant software, so too is the press excited about artificial intelligence, which is just software.

But the concern that software is responsible for more and more of the activity in the world and that it is in a sense “smarter than us”, and especially the fear that it might become vastly smarter than us (i.e. turning into what Bostrom calls a “superintelligence”), is pervasive enough to drive research funding into topics like “AI Safety”. It also is apparently inspiring legal study into the regulation of autonomous systems. It may also have implications for what is called, vaguely, “social science”, though increasingly it seems like nobody really knows what that is.

There is a serious epistemological problem here. Some researchers are trying to predict, or forewarn of, the societal impact of agents that are by assumption beyond their comprehension, on the premise that such agents may come into existence at any moment.

This is fascinating but one has to get a grip.


by Sebastian Benthall at August 16, 2015 08:19 PM

August 04, 2015

MIMS 2015

Metafarce Update -> systemd, man pages, and TLS

I’ve recently had time to update the guts of metafarce.com. This post is about the updates to those guts, including what I tried that didn’t work out so well. The first section is full of personal opinion about the state of free UNIX OS’s.1 The second section concerns my adventures in getting TLS to work, and thoughts on the state of free TLS certificate signing services.

Background

I wanted to have IPv6 connectivity, DNSSEC and TLS for metafarce.com and a few other domains I host. The provider I had been using for VPS did not offer IPv6, so I found a VPS provider that did. The provider I had been using for DNS did not support DNSSEC, so I found a DNS provider that did.

Switching VPS providers meant I had to set up a new machine anyway. I had been running Debian for years, but I decided to switch to OpenBSD. My Debian VPS had been fine over the years. I kept it updated with apt-get and generally never had any major problems with it. The next section deals with why I switched.

Because Reasons

Actually, two reasons.

The first reason is systemd. I simply didn't want to deal with it. I didn't want to learn it, I didn't see the value in it, and it has crappy documentation. This isn't me saying systemd is crap. I don't know if it's crap because I haven't spent any time evaluating it. This is me saying I don't care about systemd, and it isn't worth my time to investigate. There are other places on the web where one can argue the [de]merits of systemd; this is not the place for that.

One of the key things I've found missing in the assorted arguments surrounding systemd is historical context. It's as if many of the systemd combatants aren't aware of how sacred init systems are to UNIX folk. One of the first big splits in UNIX history was between those who wanted a BSD-style init and those who wanted a SysV-style init. There is a long history of UNIX folk arguing about how to start their OS. However, I saw very little recognition of that fact in the arguments for and against systemd.

The second reason is that Debian man pages suck. Debian probably has the highest quality man pages of any Linux distro, but they still suck. They're often outdated, incomplete, or incorrect, and it doesn't seem like Linux users care all that much that their man pages suck. Most users only read man pages during troubleshooting, and then only after failing to find a solution on the web. I read man pages for every application I install. I want to know how the application works, what files it uses, what signals it accepts, etc.

The BSD UNIX's have excellent man pages, and they get the attention they deserve during release cycles. Unlike most Linux distributions, updates to man pages in the BSD UNIX's are listed in changelogs and seen as development work on par with software changes. This is as it should be. Documentation acts as a kind of contract between user and programmer. It sets user expectations. If a man page says a program should behave in a certain fashion and the program doesn't, then we know it's a bug.

There is a trend in the UNIX world to think man pages are outdated. Some newer UNIX applications don’t even include man pages. This is stupid. Documentation is part of the program, and should not be relegated to an afterthought. Also, you might not always have the web when troubleshooting.

TLS and StartSSL

Metafarce.com and the other domains I run on this VPS now have both IPv6 and DNSSEC. Metafarce does not yet have TLS (i.e. https) because I refuse to pay for it. Startssl.com offers free certificates, so in theory I should be able to get one for free. The problem is that I cannot convince StartSSL that I control metafarce.com. To successfully validate that a user owns a domain, the user must have access to an email address in the whois record, OR have access to one of postmaster@, hostmaster@, or webmaster@ for that domain.

I don't control any of the email addresses in my whois record. It's not that I use a whois privacy service; my registrar simply doesn't allow me to edit them. I'm also not willing to create an MX record for metafarce.com and then set up mail forwarding for postmaster@, hostmaster@, or webmaster@. Therefore I cannot convince StartSSL that I control metafarce.com. I shouldn't be in this situation. We have DNS SOA records for reasons, and one of those reasons is to publish the zone admin's email address. At the very least, the address listed in metafarce.com's SOA record should be usable for domain validation purposes.
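
For what it's worth, that address is easy to find: it's the RNAME field of the zone's SOA record, which encodes the admin mailbox with the @ replaced by a dot. A query against a hypothetical zone, with made-up values, looks something like this:

$ dig +short SOA example.com
ns1.example.com. hostmaster.example.com. 2015080401 7200 3600 1209600 3600

The second field decodes to hostmaster@example.com, which is exactly the kind of contact point a CA could use for validation.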

Also, how do they know that whoever controls the domain will be the only one with access to these email addresses? The list, while not arbitrary, is not enforced in all mail setups.2 There are plenty of email-accepting domains that forward these addresses straight to /dev/null.

Another method I have seen used to confirm control of a zone is to create a TXT record with a unique string. StartSSL could provide me with a unique string, I would then add a TXT record with that string as its value. This method assumes that someone who can create TXT records for a domain controls the domain, which is probably a fair assumption.
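
Concretely, that would amount to one extra line in the zone file. The token name and value below are made up purely for illustration; StartSSL defines no such record:

metafarce.com. 3600 IN TXT "startssl-domain-validation=4f2d9c81a7"

Anyone who can publish that string under the domain almost certainly controls its DNS, which is the property a CA actually cares about.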

I think StartSSL has chosen a poor method for tying users to domains. Whois records should not be relied upon as a method of proving control. Not only does this break for people who use whois privacy services, but many users cannot directly edit their whois record and don't have the skills or resources to set up email forwarding for their domain.

The outcome of all this is that I don't support https for metafarce.com. Without my cert being signed by a CA, users would have to wade through piles of buttons and dialogs that scare them away. Thus it remains unencrypted.3 Proving that a given user controls a given domain is a tough problem, and I don't mean to suggest otherwise. StartSSL offers a free signing service and they should be commended for it. I just hope the situation improves so that I and others can start hosting more content over TLS.

Let’s Encrypt to the Rescue

Let's Encrypt is a soon-to-be-launched certificate authority run by the Internet Security Research Group (ISRG). They're a public benefit corporation backed by a few concerned corporate sponsors and the EFF. They're going to sign web TLS certs for free at launch, which is great in and of itself. Even greater is the Internet draft they've written for their new automagic TLS cert creation and signing protocol. We'll see how it works out, but if they get it right this will be a huge boon for TLS adoption. At the very least I can then start running TLS everywhere without having to pay for it.

  1. I use the term UNIX very generally, as a super-category. Any OS which embodies the concepts of UNIX is a type of UNIX. Linux, Minix, Mac OS X, *BSD, and Solaris are all types of UNIX. I'm not sure about QNX, but Windows and VxWorks are definitely not UNIX.

  2. RFC 2142 does actually reserve these, but that doesn’t mean mail admins always do.

  3. Another site I host on this VPS, synonomic.com, supports TLS. The cert for synonomic.com is not signed by any CA, so the user has to click through some scary-looking buttons in order to view content. The cert is guaranteed by DANE to be for synonomic.com, yet no browsers currently support DANE out of the box.
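
For the curious, the DANE binding mentioned in footnote 3 lives in DNS as a TLSA record. A record of the following shape (the hash shown is a placeholder, not synonomic.com's real value) tells DANE-aware clients which certificate to expect on port 443:

_443._tcp.synonomic.com. IN TLSA 3 1 1 8c0ff0ec9f2b1f6f0c8f9f3a5e7d2b4a6c8e0f1a2b3c4d5e6f708192a3b4c5d6

The leading "3 1 1" means: match the server's own end-entity certificate, by its SubjectPublicKeyInfo, using a SHA-256 digest (per RFC 6698).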

Metafarce Update -> systemd, man pages, and TLS was originally published by Andrew McConachie at Metafarce on August 04, 2015.

by Andrew McConachie (andrewm@ischool.berkeley.edu) at August 04, 2015 07:00 AM

July 29, 2015

Ph.D. student

intelligibility

The example of Arendt’s dismissal of scientific discourse from political discussion underscores a much deeper political problem: a lack of intelligibility.

Every language is intelligible to some people and not to others. This is obviously true in the case of major languages like English and Chinese. It is less obvious but still a problem with different dialects of a language. It becomes a source of conflict when there is a lack of intelligibility between the specialized languages of expertise or personal experience.

For many, mathematical formalism is unintelligible; it appears to be so for Arendt, and this disturbs her, as she locates politics in speech and wants there to be political controls on scientists. But how many scientists and mathematicians would find Arendt intelligible? She draws deeply on concepts from ancient Greek and Augustinian philosophy. Are these thoughts truly accessible? What about the intelligibility of the law, to non-lawyers? Or the intelligibility of spoken experiences of oppression to those who do not share such an experience?

To put it simply: people don’t always understand each other and this poses a problem for any political theory that locates justice in speech and consensus. Advocates of these speech-based politics are most often extraordinarily articulate and write persuasively about the need to curtail the power of any systems of control that they do not understand. They are unable to agree to a social contract that they cannot read.

But this persuasive speech is necessarily unable to account for the myriad mechanisms that are both conditions for the speech and unintelligible to the speaker. This includes the mechanisms of law and technology. There is a performative contradiction between these persuasive words and their conditions of dissemination, and this is reason to reject them.

Advocates of bureaucratic rule tend to be less eloquent, and those that create technological systems that replace bureaucratic functions even less so. Nevertheless each group is intelligible to itself and may have trouble understanding the other groups.

The temptation for any one segment of society to totalize its own understanding, dismissing other ways of experiencing and articulating reality as inessential or inferior, is so strong that it can be read in even great authors like Arendt. Ideological politics (as opposed to technocratic politics) is the conflict between groups expressing their interests as ideology.

The problem is that in order to function as it does at scale, modern society requires the cooperation of specialists. Its members are heterogeneous; this is the source of its flexibility and power. It is also the cause of ideological conflict between functional groups that should see themselves as part of a whole. Even if these members do see their interdependence in principle, their specialization makes them less intelligible. Articulation often involves different skills from action, and teaching to the uninitiated is another skill altogether. Meanwhile, the complexity of the social system expands as it integrates more diverse communities, reducing further the proportion understood by a single member.

There is still in some political discourse the ideal of deliberative consensus as the ground of normative or political legitimacy. Suppose, as seems likely, that this is impossible for the perfectly mundane and mechanistic reason that society is so complicated due to the demands of specialization that intelligibility among its constituents is never going to happen.

What then?


by Sebastian Benthall at July 29, 2015 05:44 AM

July 28, 2015

Ph.D. student

the state and the household in Chinese antiquity

It’s worthwhile in comparison with Arendt’s discussion of Athenian democracy to consider the ancient Chinese alternative. In Alfred Huang’s commentary on the I Ching, we find this passage:

The ancient sages always applied the principle of managing a household to governing a country. In their view, a country was simply a big household. With the spirit of sincerity and mutual love, one is able to create a harmonious situation anywhere, in any circumstance. In his Analects, Confucius says,

From the loving example of one household,
A whole state becomes loving.
From the courteous manner of one household,
A whole state becomes courteous.

Comparing the history of Europe and the rise of capitalistic bureaucracy with the history of China, where bureaucracy is much older, is interesting. I have comparatively little knowledge of the latter, but it is often said that China does not have the same emphasis on individualism that you find in the West. Security is considered much more important than Freedom.

The reminder that the democratic values proposed by Arendt and Horkheimer are culturally situated is an important one, especially as Horkheimer claims that free burghers are capable of producing art that expresses universal needs.


by Sebastian Benthall at July 28, 2015 02:38 AM

July 27, 2015

Ph.D. student

a refinement

If knowledge is situated, and scientific knowledge is the product of rational consensus among diverse constituents, then a social organization that unifies many different social units functionally will have a ‘scientific’ ideology or rationale that is specific to the situation of that organization.

In other words, the political ideology of a group of people will be part of the glue that constitutes the group. Social beliefs will be a component of the collective identity.

A social science may be the elaboration of one such ideology. Many have been. So social scientific beliefs are about capturing the conditions for the social organization which maintains that belief. (c.f. Nietzsche on tablets of values)

There are good reasons to teach these specialized social sciences as a part of vocational training for certain functions. For example, people who work in finance or business can benefit from learning economics.

Only in an academic context does the professional identity of disciplinary affiliation matter. This academic political context creates great division and confusion that merely reflects the disorganization of the academic system.

This disorganization is fruitful precisely because it allows for individuality (cf. Horkheimer). However, it is also inefficient and easy to corrupt. Hmm.

Against this, not all knowledge is situated. Some is universal. Its universality is due to its pragmatic usefulness in technical design. Since technical design acts on everyone even when their own situated understanding does not include it, this kind of knowledge has universal ground (in violence, sadly, but maybe also in other ways).

The question is whether anywhere in the technically correct understanding of social organization (something we might see in Beniger) there is room for the articulation of what is supposed to be great and worthy of man (see Horkheimer).

I have thought for a long time that there is probably something like this describable in terms of complexity theory.


by Sebastian Benthall at July 27, 2015 04:22 AM

structuralism and/or functionalism

Previous entries detailing the arguments of Arendt, Horkheimer, and Beniger show these theorists have what you might call a structural functionalist bent. Society is conceived as a functional whole. There are units of organization within it. For Arendt, this social organization begins in the private household and expands to all of society. Horkheimer laments this as the triumph of mindless economic organization over genuine, valuable individuality.

Structuralism, let alone structural functionalism, is not in fashion in the social sciences. Purely speculatively, one reason for this might be that to the extent that society was organized to perform certain functions, more of those functions have been delegated to information processing infrastructure, as in Beniger's analysis. That leaves "culture" more a domain of ephemerality and identity conflict, as activity in the sphere of economic production becomes, if not private, then opaque.

My empirical work on open source communities suggests (though certainly not conclusively) that these communities are organized more for functional efficiency than other kinds of social groups (including academics) are. I draw this inference from the degree disassortativity of the open source social networks. Disassortativity suggests interaction between different kinds of people, which cuts against homophilic patterns of social formation but seems essential for economic activity, where the interaction of specialists is what creates value.
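
For the curious, the measurement itself is easy to reproduce. Here is a minimal sketch (not the code from my study; the edge-list file name is hypothetical) using networkx:

import networkx as nx

# Hypothetical edge list: one line per pair of contributors who interacted,
# e.g. by commenting on the same issue or patch.
G = nx.read_edgelist("interactions.txt")

# Pearson correlation between the degrees at the two ends of each edge.
# Values near -1 are disassortative (hubs attach to low-degree nodes);
# values near +1 are assortative (like degree attaches to like).
r = nx.degree_assortativity_coefficient(G)
print("degree assortativity: %.3f" % r)

A consistently negative coefficient across projects is what I am glossing above as disassortativity.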

Assuming that society in its entirety (!!) is very complex and not easily captured by a single grand theory, we can nevertheless distinguish different kinds of social organization and see how they theorize themselves. We can also map how they interact and what mechanisms mediate between them.


by Sebastian Benthall at July 27, 2015 03:37 AM

July 25, 2015

Ph.D. student

Land and gold (Arendt, Horkheimer)

I am thirty, still in graduate school, and not thrilled about the prospects of home ownership since all any of the professionals around me talk about is the sky-rocketing price of real estate around the critical American urban centers.

It is with a leisure afforded by graduate school that I am able to take the long view on this predicament. It is very cheap to spend one's idle time reading Arendt, who has this to say about the relationship between wealth and property:

The profound connection between private and public, manifest on its most elementary level in the question of private property, is likely to be misunderstood today because of the modern equation of property and wealth on one side and propertylessness and poverty on the other. This misunderstanding is all the more annoying as both, property as well as wealth, are historically of greater relevance to the public realm than any other private matter or concern and have played, at least formally, more or less the same role as the chief condition for admission to the public realm and full-fledged citizenship. It is therefore easy to forget that wealth and property, far from being the same, are of an entirely different nature. The present emergence everywhere of actually or potentially very wealthy societies which at the same time are essentially propertyless, because the wealth of any single individual consists of his share in the annual income of society as a whole, clearly shows how little these two things are connected.

For Arendt, beginning with her analysis of ancient Greek society, property (landholding) is the condition of one's participation in democracy. It is a place of residence and a source of one's material fulfilment, which is a prerequisite to one's free (because it is unnecessitated) participation in public life. This is contrasted with wealth, which is a feature of private life and is unpolitical. In ancient society, slaves could own wealth, but not property.

If we look at the history of Western civilization as a progression away from this rather extreme moment, we see the rise of social classes whose power is based not in landholding but in wealth. Industrialism and the economy based on private ownership of capital is a critical transition in history. That capital is not bound to a particular location but rather is mobile across international boundaries is one of the things that characterizes global capitalism and brings it into tension with a geographically bounded democratic state. It is interesting that a Jeffersonian democracy, designed with the assumption of landholding citizens, should predate industrial capitalism and be constitutionally unprepared for the result, but nevertheless be one of the models for other democratic governance structures throughout the world.

If private ownership of capital, not land, defines political power under capitalism, then wealth, not property, becomes the measure of one's status and security. For a time, when wealth was, as a matter of international standard, exchangeable for gold, private ownership of gold could replace private ownership of land as the guarantee of one's material security and thereby the grounds for one's independent existence. This independent, free rationality has since Aristotle been the purpose (telos) of man.

In the United States, Franklin Roosevelt’s 1933 Executive Order 6102 forbade the private ownership of gold. The purpose of this was to free the Federal Reserve of the gold market’s constraint on increasing the money supply during the Great Depression.

A perhaps unexpected complaint against this political move comes from Horkheimer (Eclipse of Reason, 1947), who sees this as a further affront to individualism by capitalism.

The age of vast industrial power, by eliminating the perspectives of a stable past and future that grew out of ostensibly permanent property relations, is the process of liquidating the individual. The deterioration of his situation is perhaps best measured in terms of his utter insecurity as regards to his personal savings. As long as currencies were rigidly tied to gold, and gold could flow freely over frontiers, its value could shift only within narrow limits. Under present-day conditions the dangers of inflation, of a substantial reduction or complete loss of the purchasing power of his savings, lurks around the next corner. Private possession of gold was the symbol of bourgeois rule. Gold made the burgher somehow the successor of the aristocrat. With it he could establish security for himself and be reasonable sure that even after his death his dependents would not be completely sucked up by the economic system. His more or less independent position, based on his right to exchange goods and money for gold, and therefore on the relatively stable property values, expressed itself in the interest he took in the cultivation of his own personality–not, as today, in order to achieve a better career or for any professional reason, but for the sake of his own individual existence. The effort was meaningful because the material basis of the individual was not wholly unstable. Although the masses could not aspire to the position of the burgher, the presence of a relatively numerous class of individuals who were governed by interest in humanistic values formed the background for a kind of theoretical thought as well as for the type of manifestions in the arts that by virtue of their inherent truth express the needs of society as a whole.

Horkheimer's historical arc, like that of many Marxists, appears to ignore its parallels in antiquity. Monetary policy in the Roman Empire, which used something like a gold standard, was not always straightforward. Inflation was sometimes a severe problem when generals would mint money to pay the soldiers that supported their political coups. So it's not clear that the modern economy is more unstable than gold- or land-based economies. However, the criticism that economic security is largely a matter of one's continued participation in a larger system, and that there is little in the way of financial security besides this, holds. He continues:

The state’s restriction on the right to possess gold is the symbol of a complete change. Even the members of the middle class must resign themselves to insecurity. The individual consoles himself with the thought that his government, corporation, association, union, or insurance company will take care of him when he becomes ill or reaches the retiring age. The various laws prohibiting private possession of gold symbolize the verdict against the independent economic individual. Under liberalism, the beggar was always an eyesore to the rentier. In the age of big business both beggar and rentier are vanishing. There are no safety zones on society’s thoroughfares. Everyone must keep moving. The entrepreneur has become a functionary, the scholar a professional expert. The philosopher’s maxim, Bene qui latuit, bene vixit, is incompatible with the modern business cycles. Everyone is under the whip of a superior agency. Those who occupy the commanding positions have little more autonomy than their subordinates; they are bound by the power they wield.

In an academic context, it is easy to make a connection between Horkheimer's concerns about gold ownership and tenure. Academic tenure is, or was, the refuge of the individual who could in theory develop themselves as individuals in obscurity. The price of this autonomy, which according to the philosophical tradition represents the highest possible achievement of man, is that one teaches. So, the developed individual passes on the values developed through contemplation and reflection to the young. The privatization of the university and the emphasis on teaching marketable skills that allow graduates to participate more fully in the economic system is arguably an extension of Horkheimer's cultural apocalypse.

The counter to this is the claim that the economy as a whole achieves a kind of homeostasis that provides greater security than one whose value is bound to something stable and exogenous like gold or land. One's savings are secure as long as the system doesn't fail. Meanwhile, the price of access to cultural materials through which one might expand one's individuality (i.e. videos of academic lectures, the arts, or music) decreases as a consequence of the pervasiveness of the economy. At this point one feels one has reached the limits of Horkheimer's critique, which perhaps only sees one side of the story despite its sublime passion. We see echoes of it in contemporary feminist critique, which emphasizes how the demands of necessity are disproportionately borne by women and how this affects their role in the economy. That women have only relatively recently, in historical terms, been released from the private household into the public world (c.f. Arendt again) situates them more precariously within the economic system.

What remains unclear (to me) is how one should conceive of society and values when there is an available continuum of work, opportunity, leisure, individuality, art, and labor under conditions of contemporary technological control. Specifically, the notion of inequality becomes more complicated when one considers that society has never been equal in the sense that is often aspired to in contemporary American society. This is largely because the notion of equality we use today draws from two distinct sources. The first is the equality of self-sufficient landholding men as they encounter each other freely in the polis. Or, equivalently, as self-sufficient goldholding men in something like the Habermasian bourgeois public sphere. The second is equality within society, which is economically organized and therefore requires specialization and managerial stratification. We can try to assure equality to members of society insofar as they are members of society, but not as to their function within society.


by Sebastian Benthall at July 25, 2015 11:30 PM

July 23, 2015

Ph.D. student

Horkheimer on engineers

Horkheimer’s comment on engineers:

It is true that the engineer, perhaps the symbol of this age, is not so exclusively bent on profitmaking as the industrialist or the merchant. Because his function is more directly connected with the requirements of the production job itself, his commands bear the mark of greater objectivity. His subordinates recognize that at least some of his orders are in the nature of things and therefore rational in a universal sense. But at bottom this rationality, too, pertains to domination, not reason. The engineer is not interested in understanding things for their own sake or the sake of insight, but in accordance to their being fitted into a scheme, no matter how alien to their own inner structure; this holds for living beings as well as for inanimate things. The engineer’s mind is that of industrialism in its streamlined form. His purposeful rule would make men an agglomeration of instruments without a purpose of their own.

This paragraph sums up much of what Horkheimer stands for. His criticism of engineers, the catalysts of industrialism, is not that they are incorrect. It is that their instrumental rationality is not humanely purposeful.

This humane purposefulness, for Horkheimer, is born out of individual contemplation. Though he recognizes that this has been a standpoint of the privileged (c.f. Arendt on the Greek polis), he sees industrialism as successful in bringing many people out of a place of necessity but at the cost of marginalizing and trivializing all individual contemplation. The result is an efficient machine with nobody in charge. This bodes ill because such a machine is vulnerable to being co-opted by an irrational despot or charlatan. Individuality, free of material necessity and also free of the machine that liberated it from that necessity, is the origin of moral judgement that prevents fascist rule.

This is very different from the picture of individuality Fred Turner presents in The Democratic Surround. In his account of how United States propaganda created a "national character" that was both individual enough to be anti-fascist and united enough to fight fascism, he emphasizes the role of art installations that encourage the viewer to stitch themselves synthetically into a large picture of the nation. One is unique within a larger, diverse…well, we might use the word society, borrowing from Arendt, who was also writing in the mid-century.

If this is all true, then this dates a transition in American culture from one of individuality to one of society. This coincides with the tendency of information organization traced assiduously by Beniger.

We can perhaps trace an epicycle of this process in the history of the Internet. In its "wild west" early days, when John Perry Barlow could write about the freedom of cyberspace, it was a place primarily occupied by the privileged few. Interestingly, many of these were engineers, and so were (I'll assume for the sake of argument) materially independent and not exclusively focused on profit-making. Hence the early Internet was not unlike the ancient polis, a place where free people could attempt words and deeds that would immortalize them.

As the Internet became more widely used and commercialized, it became more and more a part of the profiteering machine of capitalism. So today we see its wildness curtailed by the demands of society (which include an appeal to an ethics sensitive both to disparities in wealth and to differences in the body, both part of the "private" realm in antiquity but elements of public concern in modern society).


by Sebastian Benthall at July 23, 2015 09:33 PM

Arendt on social science

Despite my first (perhaps kneejerk) reaction to Arendt’s The Human Condition, as I read further I am finding it one of the most profoundly insightful books I’ve ever read.

It is difficult to summarize: not because it is written badly, but because it is written well. I feel every paragraph has real substance to it.

Here’s an example: Arendt’s take on the modern social sciences:

To gauge the extent of society’s victory in the modern age, its early substitution of behavior for action and its eventual substitution of bureaucracy, the rule of nobody, for personal rulership, it may be well to recall that its initial science of economics, which substitutes patterns of behavior only in this rather limited field of human activity, was finally followed by the all-comprehensive pretension of the social sciences which, as “behavioral sciences,” aim to reduce man as a whole, in all his activities, to the level of a conditioned and behaving animal. If economics is the science of society in its early stages, when it could impose its rules of behavior only on sections of the population and on parts of their activities, the rise of the “behavioral sciences” indicates clearly the final stage of this development, when mass society has devoured all strata of the nation and “social behavior” has become the standard for all regions of life.

To understand this paragraph, one has to know what Arendt means by society. She introduces the idea of society in contrast to the Ancient Greek polis, which is the sphere of life in Antiquity where the head of a household could meet with other heads of households to discuss public matters. Importantly for Arendt, all concerns relating to the basic maintenance and furthering of life–food, shelter, reproduction, etc.–were part of the private domain, not the polis. Participation in public affairs was for those who were otherwise self-sufficient. In their freedom, they would compete to outdo each other in acts and words that would resonate beyond their lifetime: deeds through which they could aspire to immortality.

Society, in contrast, is what happens when the mass of people begin to organize themselves as if they were part of one household. The conditions of maintaining life are public. In modern society, people are defined by their job; even being the ruler is just another job. Deviation from one's role in society in an attempt to make a lasting change–deeds–is considered disruptive, and so is rejected by the norms of society.

From here, we get Arendt's critique of the social sciences, which is essentially this: it is only possible to have a social science that finds regularities in people's behavior when their behavior has been regularized by society. So the social sciences are not discovering a truth about people en masse that was not known before. The social sciences aren't discovering things about people. They are rather reflecting the society as it is. The more the masses are effectively 'socialized', the more pervasive a generalizing social science can be, because only under those conditions are there regularities to be captured as knowledge and taught.


by Sebastian Benthall at July 23, 2015 02:06 AM

July 19, 2015

Ph.D. student

Hannah Arendt on the apoliticality of science

The next book for the Berkeley School of Information’s Classics reading group is Hannah Arendt’s The Human Condition, 1958. We are reading this as a follow-up to Sennett’s The Craftsman, working backwards through his intellectual lineage. We have the option to read other Arendt. I’m intrigued by her monograph On Violence, because it’s about the relationship between violence and power (which is an important thing to think about) and also because it’s comparatively short (~100 pages). But I’ve begun dipping into The Human Condition today only to find an analysis of the role of science in society. Of course I could not resist writing about it here.

Arendt opens the book with a prologue discussing the cultural significance of the Apollo mission. She muses at the shift in human ambition that has led to its seeking to leave Earth. Having rejected Heavenly God as Father, she sees this as a rejection of Earth as Mother. Poetic stuff–Arendt is a lucid writer, her prose radiating wisdom.

Then Arendt begins to discuss The Problems with Science (emphasis mine):

While such possibilities [of space travel, and of artificial extension of human life and capabilities] still may lie in a distant future, the first boomerang effects of science's great triumphs have made themselves felt in a crisis within the natural sciences themselves. The trouble concerns the fact that the "truths" of the modern scientific world view, though they can be demonstrated in mathematical formulas and proved technologically, will no longer lend themselves to normal expression in speech and thought. The moment these "truths" are spoken of conceptually and coherently, the resulting statements will be "perhaps not as meaningless as a 'triangular circle,' but much more so than a 'winged lion'" (Erwin Schrödinger). We do not yet know whether this situation is final. But it could be that we, who are earth-bound creatures and have begun to act as though we are dwellers of the universe, will forever be unable to understand, that is, to think and speak about the things which nevertheless we are able to do. In this case, it would be as though our brain, which constitutes the physical, material condition of our thoughts, were unable to follow what we do, so that from now on we would indeed need artificial machines to do our thinking and speaking. If it should turn out to be true that knowledge (in the sense of know-how) and thought have parted company for good, then we would indeed become the helpless slaves, not so much of our machines as of our know-how, thoughtless creatures at the mercy of every gadget which is technically possible, no matter how murderous it is.

We can read into Arendt a Heideggerian concern about man's enslavement of himself through technology, and equally a distrust of mathematical formalism that one can also find in Horkheimer's Eclipse of Reason. It's fair to say that the theme of technological menace haunted the 20th century; this is indeed the premise of Beniger's The Control Revolution, whose less loaded account described how the advance of technical control could be seen as nothing less or more than the continuing process of life's self-organization.

What is striking to me about Arendt's concerns, especially after having attended SciPy 2015, a conference full of people discussing their software code as a representation of scientific knowledge, is how ignorant Arendt is about how mathematics is used by scientists. (EDIT: The error here is mine. A skimming of the book past the prologue (always a good idea before judging the content of a book or its author…) makes it clear that this comment about mathematical formalism is not a throwaway statement at the beginning of the book to motivate a discussion of political action, but rather something derived from her analysis of political action and the history of science. Ironically, I've read her "speech" and interpreted it politically, in the narrow sense of implicating identities of "the scientist", a term which she does seem to use disparagingly or distancingly elsewhere, when another, more charitable reading (one more sensitive to how she is "technically" defining her terms, though I expect she would deny this usage; "speech" being rather specialized for Arendt, not merely 'utterances') wouldn't be as objectionable. I'm agitated by the bluntness of my first reading, and encouraged to read further.)

On the one hand, Arendt wisely situates mathematics as an expression of know-how, and sees technology as an extension of human capacity rather than as something autonomous from it. But it's strange to read her argue, essentially, that mathematics and technology are not something that can be discussed. This ignores the daily practice of scientists, mathematicians, and their intellectual heirs, software engineers, which involves lots of discussion about technology. Often these discussions are about the political impact of technical decisions.

As an example, I had the pleasure of attending a meeting of the NumPy community at SciPy. NumPy is one of the core packages for scientific computing in Python, implementing computationally efficient array operations. Much of the discussion hinged on whether and to what extent changes to the technical interface would break downstream implementations using the library, angering their user base. This political conflict, among other events, led to the creation of sempervirens, a tool for collecting data about how people are using the library. This data will hopefully inform decisions about when to change the technical design.

Despite the facts of active discourse about technology in the mathematized language of technology, Arendt maintains that it is the inarticulateness of science that makes it politically dangerous.

However, even apart from these last and yet uncertain consequences, the situation created by the sciences is of great political significance. Wherever the relevance of speech is at stake, matters become political by definition, for speech is what makes man a political being. If we would follow the advice, so frequently urged upon us, to adjust our cultural attitudes to the present status of scientific achievement, we would in all earnest adopt a way of life in which speech is no longer meaningful. For the sciences today have been forced to adopt a "language" of mathematical symbols which, though it was originally meant only as an abbreviation for spoken statements, now contains statements that in no way can be translated back into speech. The reason why it may be wise to distrust the political judgment of scientists qua scientists is not primarily their lack of "character"–that they did not refuse to develop atomic weapons–or their naivete–that they did not understand that once these weapons were developed they would be the last to be consulted about their use–but precisely the fact that they move in a world where speech has lost its power. And whatever men do or know or experience can make sense only to the extent that it can be spoken about. There may be truths beyond speech, and they may be of great relevance to man in the singular, that is, to man in so far as he is not a political being, whatever else he may be. Men in the plural, that is, men in so far as they live and move and act in this world, can experience meaningfulness only because they can talk with and make sense to each other and to themselves.

There is an element of truth to this analysis. But there is also a deep misunderstanding of the scientific process as one that somehow does not involve true speech. Here we find another root of a much more contemporary debate about technology in society, reflected in recent concern about the power of 'algorithms'. (EDIT: Again, after consideration, shallowly accusing Arendt of a "deep misunderstanding" at this stage is hubris. Though there does seem to be a connection between some of the contemporary debate about algorithms and Arendt's view, it's wrong to project historically backwards sixty years when The Human Condition is an analysis of the shifting conditions over the preceding two millennia.

Arendt claims early on that the most dramatic change in the human condition that she can anticipate is humanity’s leaving the earth to populate the universe. I want to argue that the creation of the Internet has been transformative of the human condition in a different way.)

I think it would be fair to say that Arendt, beloved a writer though she is, doesn't know what she's talking about when she's talking about mathematical formalism. (EDIT: Again, a blunt conclusion. However, the role of formalism in, say, economics (though much debated) stands as a counterexample to Arendt in other ways.) And perhaps this is the real problem. When, for almost a century, theorists have tried to malign the role of scientific understanding in politics, it has been (incoherently) either on the grounds that it is secretly ideological in ways that have gone unstated, or (as for Arendt) that it is cognitively defective in a way that prevents it from participating in politics proper. (EDIT: This is a misreading of Arendt. It appears that what makes mathematical science apolitical for Arendt is precisely its universality, and hence its inability to be part of discussion about the different situations of political actors. Still, something seems quite wrong about Arendt's views here. How would she think about Dwork's "Fairness through awareness"?)

The frustration for a politically motivated scientist is this: Political writers will sometimes mistake their own inability to speak or understand mathematical truths for a general unintelligibility. On the grounds of this alleged unintelligibility they dismiss scientists from political discussion. They then find themselves apolitically enslaved by technology they don't understand, and angry about it. Rather than blame their own ignorance of the subject matter, they blame scientists for being unintelligible. This is despite scientists' intelligibility to each other.

An analysis of the politics of science will be incomplete without a clear picture of how scientists and non-scientists relate to each other and communicate. As far as I can tell, such an analysis is almost impossible, politically speaking, because of the power dynamic of the relation. Professional non-scientific intellectuals are loath to credit scientists with an intellectual authority that they feel they are not able to attain themselves, and scientific practice requires adhering to standards of rigor which give one greater intellectual authority; these standards by their nature require ahistorical analysis, dismissal of folk theorizing, etc. It has become politically impossible to ground an explanation of a social phenomenon on the basis that one population is "smarter" than another, despite this being a ready first approximation and one that is used in practice by the vast majority of people in private. Hence, the continuation of the tradition of treatises putting science in its place.


by Sebastian Benthall at July 19, 2015 12:48 AM

July 17, 2015

Ph.D. student

One Magisterium: a review (part 1)

I have come upon a remarkable book, titled One Magisterium: How Nature Knows Through Us, by Seán Ó Nualláin, President, University of Ireland, California. It is dedicated "To all working at the edges of society in an uncompromising search for truth and justice." Its acknowledgements section opens:

Kenyan middle-distance runners were famous for running like “scared rabbits”: going straight to the head of the field and staying there, come what may. Even more than was the case for my other books, I wrote this like a scared rabbit.”

Ó Nualláin is a recognizable face at UC Berkeley though I think it’s fair to say that most of the faculty and PhD students couldn’t tell you who he is. To a mainstream academic, he is one of the nebulous class of people who show up to events. One glorious loophole of university culture is that the riches of intellectual communion are often made available in open seminars held by people so weary of obscurity that they are happy for any warm body that cares enough to attend. This condition combined with the city of Berkeley’s accommodating attitude towards quacks and vagrants adds flavor to the university’s intellectual character.

There is of course no campus for the University of Ireland, California. Ó Nualláin is a truly independent scholar. Unlike many more unfortunate intellectuals, he has made the brilliant decision to not quit his day job, which is as a musician. A Google inquiry into the man indicates he probably got his PhD from Dublin City University and spent a good deal of time around Stanford’s Symbolic Systems department. (EDIT: Sean has corrected me on the details of his accomplished biography in the comments.)

I got on his mailing lists some time ago because of my interest in the Foundations of Mind conference, which he runs in Berkeley. Later, I was impressed by his aggressive volley of questions when Nick Bostrom spoke at Berkeley (I've become familiar with Bostrom's work through MIRI, formerly SingInst). I've spoken to him just a couple of times, once at a poster session at the Berkeley Institute of Data Science and once at Katy Huff's scientific technology practice group, The Hacker Within.

I'm providing these details out of what you might call anthropological interest. At the School of Information I've somehow caught the bug of Science and Technology Studies by osmosis. Now I work for Charlotte Cabasse on her ethnographic team, despite believing myself to be a computational social scientist. This qualitative work is a wonderful excuse to write about one's experiences.

My perceptions of Ó Nualláin are relevant, then, because they situate the author of One Magisterium as an outsider to the academic mainstream at Berkeley. This outsider status comes through quite heavily in the book, starting from the Acknowledgments section (which recognizes all the service staff at the bars and coffee shops where he wrote the book) and running as a regular theme throughout. Discontent with and rejection from academia-as-usual are articulated in sublimated form as harsh critique of the academic institution. Ó Nualláin is engaged in an “uncompromising search for truth and justice,” and the university as it exists today demands too many compromises.

Magisterium is a Catholic term for a teaching authority. One Magisterium refers to the book’s ambition of pointing to a singular teaching authority, a new one heretofore unrecognized by other teaching authorities such as mainstream universities. Hence the book is an attack on other sources of intellectual authority. An example passage:

The devastating news for any reader venturing a toe into the stormy waters of this book is that its writer’s view is that we may never be able to dignify the moral, epistemological and political miasma of the early twenty-first century with terms like “crisis” for which the appropriate solution is of course a “paradigm shift”. It may simply be a set of hideously interconnected messes; epistemological and administrative in the academy, institutional and moral in the greater society. As a consequence, the landscape of possible “solutions” may seem so unconstrained that the wisdom of Joe the barman may be seen to equal that of any series of tomes, no matter how well-researched.

This book is above all an attempt to unify the plurality of discourses — scientific, religious, moral, aesthetic, and so on — that obtain at the start of the third millenium.

An anthropologist of science might observe that this criticality-of-everything, coupled with the claim to have a unifying theory of everything, is a surefire way to get ignored by the academy. The incentive structure of the academy requires specialization and a political balance of ideas. If somebody were to show up with the right idea, it would discredit a lot of otherwise important people and put others out of a job.

The problem, or one of them (there are many mentioned in the first chapter of One Magisterium, titled "The Trouble with Everything"), is that Ó Nualláin is right. At least as far as I can tell at this point. It is not an easy book to read; it is not structured linearly so much as (I imagine, not knowing what I'm talking about) like complex Irish dancing music, with motifs repeated and encircling themselves like a double helix or perhaps some more complex structure. Threaded together are topics from Quantum Mechanics, an analysis of the anthropic principle, a critique of Dawkins' atheism and a positioning of the relevance of Vedanta theology to understanding physical reality, and an account of the proper role of the arts in society. I suspect that the book is meant to unfold on one's psychology slowly, resulting in one's adoption of what Ó Nualláin calls bionoetics, the new united worldview that is the alleged solution to everything.

A key principle of bionoetics is the recognition of what Ó Nualláin calls the “noetic” level of description, which is distinct from the “cognitive” third-person stance in that it is compressed in a way that makes it relevant to action in any particular domain of inquiry. Most of what he describes as “noetic” I read as “phenomenological”. I wonder if Ó Nualláin has read Merleau-Ponty–he uses the Husserlian critique of “psychologism” extensively.

I think it's immaterial whether "noetic" is an appropriate neologism for this blending of the first-personal experience into the magisterium. Indeed, there is something comforting to a hard-headed scientist about Ó Nualláin's views: contrary to the contemporary anthropological view, this first-personal knowledge has no place in academic science; its place is art. Having been in enough seminars at the School of Information where anthropologists lament not being taken seriously as producing knowledge comparable to that of the Scientists, and being one who appreciates the value of Art without needing it to be Science, I find something intuitively appealing about this view. Nevertheless, one wonders if the epistemic foundation of Ó Nualláin's critique of the academy is grounded in scientific inquiry or in his own and others' first-personal noetic experiences, coupled with observations of who is "successful" in scientific fields.

Just one chapter into One Magisterium, I have to say I'm impressed with it in a very specific way. Some of us learn about the world with a synthetic mind, searching for the truth with as few constraints on one's inquiry as possible. Indeed, that's how I wound up at as nebulous a place as the School of Information at Berkeley. As one conducts the search, one finds oneself increasingly isolated. Some truths may never be spoken, and it's never appropriate to say all the truths at once. This is especially true in an academic context, where it is paramount for the reputation of the institution that everyone avoid intellectual embarrassment whenever possible. So we make compromises, contenting ourselves with minute and politically palatable expertise.

I am deeply impressed that Ó Nualláin has decided to fuck all and tell it like it is.


by Sebastian Benthall at July 17, 2015 06:51 PM

June 22, 2015

Ph.D. alumna

Which Students Get to Have Privacy?

There’s a fresh push to protect student data. But the people who need the most protection are the ones being left behind.

It seems that student privacy is trendy right now. At least among elected officials. Congressional aides are scrambling to write bills that one-up each other in showcasing how tough they are on protecting youth. We’ve got Congressmen Polis and Messer (with Senator Blumenthal expected to propose a similar bill in the Senate). Kline and Scott have a discussion draft of their bill out while Markey and Hatch have reintroduced the bill they introduced a year ago. And then there’s Senator Vitter’s proposed bill. And let’s not even talk about the myriad of state-level legislation.

Most of these bills are responding in some way or another to a 1974 piece of legislation called the Family Educational Rights and Privacy Act (FERPA), which restricted what schools could and could not do with student data.

Needless to say, lawmakers in 1974 weren’t imagining the world of technology that we live with today. On top of that, legislative and bureaucratic dynamics have made it difficult for the Department of Education to address failures at the school level without going nuclear and just defunding a school outright. And schools lack security measures (because they lack technical sophistication) and they’re entering into all sorts of contracts with vendors that give advocates heartburn.

So there’s no doubt that reform is needed, but the question — as always — is what reform? For whom? And with what kind of support?

The bills are pretty spectacularly different, pushing for a range of mechanisms to limit abuses of student data. Some are fine-driven; others take a more criminal approach. There are also differences in who can access what data under what circumstances. The bills give different priorities to parents, teachers, and schools. Of course, even though this is all about *students*, they don’t actually have a lot of power in any of these bills. It’s all a question of who can speak on their behalf and who is supposed to protect them from the evils of the world. And what kind of punishment for breaches is most appropriate. (Not surprisingly, none of the bills provide for funding to help schools come up to speed.)

As a youth advocate and privacy activist, I’m generally in favor of student privacy. But my panties also get in a bunch when I listen to how people imagine the work of student privacy. As is common in Congress as election cycles unfold, student privacy has a “save the children” narrative. And this forces me to want to know more about the threat models we’re talking about. What are we saving the children *from*?

Threat Models

There are four external threats that I think are interesting to consider. These are the dangers that students face if their data leaves the education context.

#1: The Stranger Danger Threat Model. It doesn’t matter how much data we have to challenge prominent fears; the possibility of creepy child predators lurking around school children still overwhelms any conversation about students, including their data.

#2: The Marketing Threat Model. From COPPA to the Markey/Hatch bill, there’s a lot of concern about how student data will be used by companies to advertise products to students or otherwise fuel commercial data collection that drives advertising ecosystems.

#3: The Consumer Finance Threat Model. In a post-housing bubble market, the new subprime lending schemes are all about enabling student debt, especially since students can’t declare bankruptcy when they default on their obscene loans. There is concern about how student data will be used to fuel the student debt ecosystem.

#4: The Criminal Justice Threat Model. Law enforcement has long been interested in student performance, but this data is increasingly desirable in a world of policing that is trying to assess risk. There are reasons to believe that student data will fuel the new policing architectures.

The first threat model is artificial (see: “It’s Complicated”), but it propels people to act and create laws that will not do a darn thing to address abuse of children. The other three threat models are real, but these threats are spread differently over the population. In the world of student privacy, #2 gets far more attention than #3 and #4. In fact, almost every bill creates carve-outs for “safety” or otherwise allows access to data if there’s concern about a risk to the child, other children, or the school. In other words, if police need it. And, of course, all of these laws allow parents and guardians to get access to student data with no consideration of the consequences for students who are under state supervision. So, really, #4 isn’t even in the cultural imagination because, as with nearly everything involving our criminal justice system, we don’t believe that “those people” deserve privacy.

The reason that I get grouchy is that I hate how the risks that we’re concerned about are shaped by the fears of privileged parents, not the risks of those who are already under constant surveillance, those who are economically disadvantaged, and those who are in the school-prison pipeline. #2-#4 are all real threat models with genuine risks, but we consistently take #2 far more seriously than #3 or #4, and privileged folks are more concerned with #1.

What would it take to actually consider the privacy rights of the most marginalized students?

The threats that poor youth face? That youth of color face? And the trade-offs they make in a hypersurveilled world? What would it take to get people to care about how we keep building out infrastructure and backdoors to track low-status youth in new ways? It saddens me that the conversation is constructed as being about student privacy, but it’s really about who has the right to monitor which youth. And, as always, we allow certain actors to continue asserting power over youth.

This post was originally published to The Message at Medium on May 22, 2015. Image credit: Francisco Osorio

by zephoria at June 22, 2015 01:51 PM

June 16, 2015

MIMS 2012

“Did You A/B Test It?”

After launching a feature, coworkers often ask me, “Did you A/B test it?” While the question is well-meaning, A/B testing isn’t the only way, or even the best way, of making data-informed decisions in product development. In this post, I’ll explain why, and provide other ways of validating hypotheses to assure your coworkers that a feature was worth building.

Implied Development Process

My coworker’s simple question implies a development process that looks like this:

  1. You have an idea for a new feature
  2. You build the new feature
  3. You A/B test it to prove its success
  4. Profit! High fives! Release party!

While this looks reasonable on the surface, it has a few flaws.

Flaw 1: What metric are you measuring?

The A/B test in step 3 implies that you’re comparing a version of the product with the new feature to a version without the new feature. But a key part of running an A/B test is choosing a metric to call the winner, which is where things get tricky. Your instinct is probably to measure usage of the new feature. But this doesn’t work because the control lacks the feature, so it loses before the test even begins.

There are, however, higher-level metrics you care about. These could range from broad business metrics, like revenue or time in product, to more narrow metrics, like completing a specific task (such as successfully booking a place to stay in the case of AirBnB). Generally speaking, broader metrics are slower to move and influenced by more factors, so narrow metrics are better.
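
If you do settle on a narrow metric, calling a winner is a standard statistics exercise. As a rough illustration (not a prescription), here is a minimal two-proportion z-test in Python; the conversion counts, sample sizes, and significance threshold below are made up for the example.

    # A minimal two-proportion z-test for an A/B test on a conversion-style metric.
    # All of the numbers below are hypothetical; plug in your own counts.
    from math import sqrt
    from statistics import NormalDist

    def ab_test(control_conversions, control_visitors,
                variant_conversions, variant_visitors, alpha=0.05):
        """Return the two-sided p-value and whether the difference is significant."""
        p_control = control_conversions / control_visitors
        p_variant = variant_conversions / variant_visitors
        # Pooled rate under the null hypothesis that the feature changed nothing.
        pooled = (control_conversions + variant_conversions) / (control_visitors + variant_visitors)
        se = sqrt(pooled * (1 - pooled) * (1 / control_visitors + 1 / variant_visitors))
        z = (p_variant - p_control) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))
        return p_value, p_value < alpha

    # e.g. 500 of 10,000 control visitors booked a stay vs. 560 of 10,000 who saw the feature
    p_value, significant = ab_test(500, 10_000, 560, 10_000)
    print(f"p = {p_value:.3f}, significant: {significant}")

Note that a test like this only tells you whether the chosen metric moved; it says nothing about why.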

Even so, this type of experiment isn’t what A/B testing excels at. At its core, A/B testing is a hill climbing technique. This means it’s good at telling you if small, incremental changes are an improvement (in other words, each test is a step up a hill). Launching a feature is more like exploring a new hill. You’re giving users the ability to do something they couldn’t do before. A/B testing isn’t good at comparing hills to each other, nor will it help you find new hills.
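
To make the hill-climbing metaphor concrete, here is a toy sketch with an entirely invented metric landscape: each loop iteration plays the role of one incremental A/B test, and the procedure stops at the top of whatever hill it started on. Nothing in it can discover that a better hill exists somewhere else.

    # A toy picture of A/B testing as hill climbing on a single metric.
    # The metric() landscape below is invented purely for illustration.
    def metric(x):
        """Imaginary conversion rate as a function of some design parameter x."""
        return -(x - 3) ** 2 + 10  # a single hill, peaked at x = 3

    def hill_climb(x, step=0.5, max_tests=20):
        """Greedily keep whichever small variant beats the current control."""
        for _ in range(max_tests):
            candidates = [x - step, x + step]   # two small, incremental variants
            best = max(candidates, key=metric)
            if metric(best) <= metric(x):       # no variant wins: we're at a local peak
                break
            x = best
        return x

    print(hill_climb(0.0))  # climbs to 3.0, the top of the hill it started on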

Flaw 2: What if the new feature loses?

Let’s say you have good metrics to measure, and enough traffic to run the test in a reasonable timeframe. But the results come back, and the unthinkable has happened: your new feature lost. There’s no profit, high fives, or launch party. Now what do you do?

Because of sunk costs, your instinct is going to be to try to improve the feature until it wins. But an A/B test doesn’t tell you why it lost. Maybe there was a minor usability problem, or maybe it’s fundamentally flawed. Whatever the problem may be, an A/B test won’t tell you what it is, which doesn’t help you improve it.

The worst-case scenario is that the feature doesn’t solve a real problem, in which case you should remove it. But this is an expensive option because you spent the time to design, build, and launch it before learning it wasn’t worth building. Ideally you’d discover this earlier.

Revised Development Process

When our well-meaning coworker asked if we A/B tested the new feature, what they really wanted to know was whether we had data to back up that it was worth building. To them, an A/B test is the only way they know how to answer that question. But as user experience professionals, we know there are plenty of methods for gathering data to guide our designs. Let’s revise our product development process from above:

  1. You have an idea for a new feature.
  2. You scope the problem the feature is supposed to solve by interviewing users, sending out surveys, analyzing product usage, or using other research methods.
  3. You create prototypes and show them to users.
  4. You refine the design based on user feedback.
  5. You repeat steps 3 and 4 until you’re confident the design solves the problem you set out to solve.
  6. You build the feature.
  7. You do user testing to find and fix usability flaws.
  8. You release the feature via a phased rollout (or a private/public/opt-in beta) and measure your key metrics to make sure they’re within normal parameters.
    • This can be run as an A/B test, but doesn’t need to be.
  9. Once you’re confident the feature is working as expected, fully launch it to everyone.
  10. Profit! High fives! Release party!
  11. Optimize the feature by A/B testing incremental changes.

In this revised development process (commonly called user-centered design), you’re gathering data every step of the way. Rather than building a feature and “validating” it at the end with an A/B test, you’re continually refining what you’re building based on user feedback. By the time you release it, you’ve iterated countless times and are confident it’s solving a real problem. And once it’s built, you can use A/B testing to do what A/B testing does best — optimization.

A longer process? Yes. A more confident, higher quality launch? Also yes.


Now when your coworkers ask if you A/B tested your feature, you can reply, “No, but we made data-informed decisions that told us users really want this feature. Let me show you all of our data!” By using research and A/B testing appropriately, you’ll build features that your users and your bottom line will love.

Further Reading

If you’d like to learn how other companies incorporate A/B testing into their development process, or about user-centered design in general, these articles are great resources:

Thanks to Kyle Rush, Olga Antonenko Young, and Silvia Amtmann for providing feedback on earlier drafts of this post.

by Jeff Zych at June 16, 2015 03:49 PM

June 03, 2015

Ph.D. alumna

I miss not being scared.

From the perspective of an adult in this society, I’ve taken a lot of stupid risks in my life. Physical risks like outrunning cops and professional risks like knowingly ignoring academic protocol. I have some scars, but I’ve come out pretty OK in the scheme of things. And many of those risks have paid off for me even as similar risks have devastated others.

Throughout the ten years that I was doing research on youth and social media, countless people told me that my perspective on teenagers’ practices would change once I had kids. Wary of this frame, I started studying the culture of fear, watching as parents exhibited fear of their children doing the same things that they once did, convinced that everything today is so much worse than it was when they were young or that the consequences would be so much greater. I followed the research on fear and the statistics on teen risks and knew that it wasn’t about rationality. There was something about how our society socialized parents into parenting that produced the culture of fear.

Now I’m a parent. And I’m in my late 30s. And I get to experience the irrational cloud of fear. The fear of mortality. The fear of my children’s well-being. Those quiet little moments when crossing the street where my brain flips to an image of a car plowing through the stroller. The heart-wrenching panic when my partner is late and I imagine all of the things that might have happened. The reading of stories of others’ pain and shuddering with fear that my turn is next. The moments of loss and misfortune in my own life when I close my eyes and hope my children don’t have to feel that pain. I can feel the haunting desire to avoid risks and to cocoon my children.

I know the stats. I know the ridiculousness of my fears. And all I can think of is the premise of Justine Larbalestier’s Magic or Madness, where the protagonist must either use her magic or go crazy. I feel like I am at constant war with my own brain over the dynamics of fear. I refuse to succumb to the fear because I know how irrational it is, but in refusing, I send myself down crazy rabbit holes on a regular basis. For my kids’ sake, I want to not let fear shape my decision-making, but then I’m fearing fear. And, well, welcome to the rabbit hole.

I miss not being scared. I miss taking absurd risks and not giving them a second thought. I miss doing the things that scare the shit out of most parents. I miss the ridiculousness of not realizing that I should be afraid in the first place.

In our society, we infantilize youth for their willingness to take risks that we deem dangerous and inappropriate. We get obsessed with protecting them and regulating them. We use brain science and biography to justify restrictions because we view their decision making as flawed. We look at new technologies or media and blame them for corrupting the morality of youth, for inviting them to do things they shouldn’t. Then we about-face and capitalize on their risk taking when it’s to our advantage, such as when they go off to war on our behalf.

Is our society really worse off because youth take risks and adults don’t? Why are they wrong and us old people are right? Is it simply because we have more power? As more and more adults live long, fearful lives in Western societies, I keep thinking that we should start regulating our decision-making. Our inability to be brash is costing our society in all sorts of ways. And it will only get worse as some societies get younger while others get older. Us old people aren’t imagining new ways of addressing societal ills. Meanwhile, our conservative scaredy cat ways don’t allow youth to explore and challenge the status quo or invent new futures. I keep thinking that we need to protect ourselves and our children from our own irrationality produced from our fears.

I have to say that fear sucks. I respect its power, just like I respect the power of a hurricane, but it doesn’t make me like fear any more. So I keep dreaming of ways to eradicate fear. And what I know for certain is that statistical information won’t cut it. And so I dream of a sci-fi world in which I can manipulate my synapses to prevent those ideas from triggering. In the meanwhile, I clench my jaw and try desperately to not let the crazy visions of terrible things that could happen work their way into my cognitive perspective. And I wonder what it will take for others to recognize the impact that our culture of fear is having on all of us.

This post was originally published to The Message at Medium on May 4, 2015

by zephoria at June 03, 2015 04:12 PM

May 21, 2015

Ph.D. alumna

The Cost of Fame

We were in Juarez, Mexico. We had gone there as a group of activists to stage a protest over the government’s refusal to investigate the disappearance and brutal murders of hundreds of women. It was a V-Day initiative and so there were celebrities among us.

I was assigned as one of the faux fans and my responsibility was to hover around the celebrities during the protest in order to minimize who could actually access the celebrities. The actual bodyguards kept a distance so that the celebrities could be seen and heard. And photographed. It was a weird role, a moment in which it was made clear how difficult it was for celebrities to be in public. Their accessibility was always mediated, planned for, negotiated. And I was to be invisible so that they could be visible.

Over the years, I’ve worked with a lot of celebrities through my activist work. I’ve had to create artificial distractions, distribute fake information about celebrities’ locations, and help celebrities hide. I’ve had to help architect a process for celebrities to use the bathroom or get a glass of water and I’ve watched the cost of that overhead. Every move has to be managed because of paparazzi and fans. There’s nothing elegant about being famous when you just need to take a shit.

There’s a cost to fame, a cost that is largely invisible to most people. Many of the teens that I interviewed wanted to be famous. They saw fame as freedom — freedom from parents, poverty, and insecurity.

What I learned in working with celebrities is that fame is a trap, a burden, a manacle.

It seems so appealing and, for some, it can be an amazing tool. But for many who aren’t prepared for it, fame is a restricting force, limiting your freedom and mobility, and forcing you to put process around every act you take. Forcing you to live with constant critique, with every move and action constantly judged by others who feel as though they have the right because the famous are seen as privileged. There’s a reason that substance abuse runs rampant among celebrities. There’s a reason so many celebrities crack under pressure. Fame is the opposite of freedom.

Social media has created new platforms for people to achieve fame. Instagram fame. YouTube fame. But most people who become Internet famous aren’t Justin Bieber. They’re people with millions of followers and no support structure. They don’t have a personal assistant and bodyguard. They don’t have someone who manages the millions of messages they receive or turns away the creepy fans who show up in person. They are on their own to handle all of the shit that comes their way.

In her brilliant book, “Status Update: Celebrity, Publicity, and Branding in the Social Media Age,” Alice Marwick highlights how attracting attention and achieving fame is a central part of being successful in the new economy. Welcome to our neoliberal society. Yet, as Marwick quickly uncovers in her analysis, these practices are experienced differently depending on race and gender. What it means to be famous online looks different if you’re a woman and/or a person of color. The quantity and quality of remarks you receive as you attract attention changes dramatically. The rape threats increase. The remarks on your body increase. And the interactions get creepier. And the costs skyrocket.

I’m relatively well-known on the internet. And each time that I’ve written or done something that’s attracted a lot of attention, I’ve felt a hint of the cost of fame, so much so that I purposefully go out of my way to disappear for a while and decrease my visibility. Throughout my career, there have been times in which I could’ve done things that would’ve taken my micro-celebrity to the next level, and yet I’ve chosen to back away because I don’t like the costs that I face. I don’t like the death threats. I also don’t like when people won’t be honest with me. I don’t like when people get nervous around me. And I don’t like being objectified, as though I have no feelings. These are all part of the cost of fame. We don’t see celebrities as people; we see them as cultural artifacts.

We’ve made fame a desirable commodity, produced and fetishized. From reality TV and Jerry Springer to YouTube and Instagram, we’ve created structures for everyday people to achieve mass attention. But we’ve never created the structures to help them cope. Or for those who help propel others into fame to think about the consequences of their actions. And we’ve never stopped to think about how these platforms that fuel fame culture help reinforce misogyny and racism.

There’s a cost to fame, a cost that is unevenly borne. And I have no idea how to make that cost visible to the teens who desire fame, the media producers who create the platforms for fame, or the fans who generate the ugliness behind fame. It’s far too easy to see the gloss, far too difficult to see what it means to be trapped.

trapped in my glasshouse
crowd has been gathering since dawn
i make a pot of coffee
while catastrophe awaits me out on the lawn
i think i’m going to stay in today
pretend like i don’t know what’s going on
Ani DiFranco, Glass House

In Juarez, we got attacked. In the mayhem, Jane Fonda’s jewelry was taken from her. I still remember the elegance with which she handled that situation. I also remember her response when someone asked if the jewelry was valuable. “Do you think I’m an idiot? This isn’t my first protest.” As we spent the night camped out under the flickering blue lights in the roach motel, I listened to her tell stories of previous political actions and the attacks she’d faced. She was acutely aware of the costs of fame, but she was also aware of how she could use it to make a difference. I couldn’t help but think of a comment Angelina Jolie once made when she noted that people would always follow her around with a camera so she might as well go to places that needed to be photographed. Neither woman made me want to be famous, but both made me deeply appreciate those who have learned to negotiate fame.

This post was originally published to The Message at Medium on April 21, 2015 as part of Fame Week.

by zephoria at May 21, 2015 06:20 PM

May 20, 2015

Ph.D. student

resisting the power of organizations

“From the day of his birth, the individual is made to feel there is only one way of getting along in this world–that of giving up hope in his ultimate self-realization. This he can achieve solely by imitation. He continuously responds to what he perceives about him, not only consciously but with his whole being, emulating the traits and attitudes represented by all the collectivities that enmesh him–his play group, his classmates, his athletic team, and all the other groups that, as has been pointed out, enforce a more strict conformity, a more radical surrender through complete assimilation, than any father or teacher in the nineteenth century could impose. By echoing, repeating, imitating his surroundings, by adapting himself to all the powerful groups to which he eventually belongs, by transforming himself from a human being into a member of organizations, by sacrificing his potentialities for the sake of readiness and ability to conform to and gain influence in such organizations, he manages to survive. It is survival achieved by the oldest biological means necessary, mimicry.” – Horkheimer, “Rise and Decline of the Individual”, Eclipse of Reason, 1947

Returning to Horkheimer‘s Eclipse of Reason (1947) after studying Beniger‘s Control Revolution (1986) serves to deepen one’s respect for Horkheimer.

The two writers are for the most part in agreement as to the facts. It is a testament to their significance and honesty as writers that they are not quibbling about the nature of reality but rather are reflecting seriously upon it. But whereas Beniger maintains a purely pragmatic, unideological perspective, Horkheimer (forty years earlier) correctly attributes this pragmatic perspective to the class of business managers to whom Beniger’s work is directed.

Unlike more contemporary critiques, Horkheimer’s position is not to dismiss this perspective as ideological. He is not working within the postmodern context that sees all knowledge as contestable because it is situated. Rather, he is working with the mid-20th century acknowledgment that objectivity is power. This is a necessary step in the criticality of the Frankfurt School, which is concerned largely with the way (real) power shapes society and identity.

It would be inaccurate to say that Beniger celebrates the organization. His history traces the development of social organization as an evolving organism. Its expanding capacity for information processing is a result of the crisis of control unleashed by the integration of its energetic constituent components. Globalization (if we can extend Beniger’s story to include globalization) is the progressive organization of organizations of organizations. It is interesting that this progression of organization is a strike against Wiener’s prediction of the need for society to arm itself against entropy. This conundrum is one we will need to address in later work.

For now, it is notable that Horkheimer appears to be responding to just the same historical developments later articulated by Beniger. Only Horkheimer is writing not as a descriptive scientist but as a philosopher engaged in the process of human meaning-making. This positions him to discuss the rise and decline of the individual in the era of increasingly powerful organizations.

Horkheimer sees the individual as positioned at the nexus of many powerful organizations to which he must adapt through mimicry for the sake of survival. His authentic identity is accomplished only when alone, because submission to organizational norms is necessary for survival or the accumulation of organizational power. In an era where the pragmatic ability to manipulate people, not spiritual ideals, qualifies one for organizational power, the submissive man represses his indignation and rage at this condition and becomes an automaton of the system.

Which system? All systems. Part of the brilliance of both Horkheimer and Beniger is their ability to generalize over many systems to see their common effect on their constituents.

I have not read Horkheimer’s solution to the individual’s problem of how to maintain his individuality despite the powerful organizations which demand mimicry of him. This is a pressing question when organizations are becoming ever more powerful by using the tools of data science. My own hypothesis, which is still in need of scientific validation, is that the solution lies in the intersecting agency implied by the complex topology of the organization of organizations.


by Sebastian Benthall at May 20, 2015 12:38 AM

May 17, 2015

Ph.D. student

software code as representation of knowledge

The reason why ubiquitous networked computing has changed how we represent knowledge is that the semantics of code are guaranteed by the mechanical implementations of its compilers.

This introduces a kind of discipline in the representation of knowledge as source code that is not present in natural language or even in formal mathematical notation, which must be interpreted by humans.

Evolutionarily, humanity’s innate capacity for natural language is well established. Literacy, however, is a trained skill that involves years of education. As Derrida points out in Of Grammatology, the transition from the understanding of language as speech or breath to the understanding of knowledge as text was a very significant change in the history of knowledge.

We have not yet adjusted institutionally to a world where knowledge is represented as code. Most of the institutions that run the world–the legal system, universities, etc.–still run on the basis of written language.

But the new institutions that are adapting to represent knowledge as data and software code to process it are becoming more powerful than these older institutions.

This power comes from these new institutions’ ability to assign the work of acting on their knowledge to computing machines that can work tirelessly and that integrate well with operations. These new institutions can process more information, gathered from more sources, than the old institutions. They are organizationally more intelligent than the older organizations. Because of this intelligence, they can accrue more wealth and power.


by Sebastian Benthall at May 17, 2015 08:57 PM

May 14, 2015

Ph.D. student

data science is not positivist, it’s power

Naively, we might assume that contemporary ‘data science’ is a form of positivist or post-positivist science. The scientist gathers data and subsumes it under logical formulae–models with fitted parameters. Indeed this is the case when data science is applied to natural phenomena, such as stars or the human genome.

The question of what kind of science ‘data science’ is becomes much more complex when we start to look at its application to social phenomena. This includes its application to the management of industrial and commercial technology–the so-called “Internet of Things“. (Technology in general, and especially technology as situated socially, being a social phenomenon.)

There are (at least) two reasons why data science in these social domains is not strictly positivist.

The first is that, according to McKinsey’s Michael Chui, data science in the Internet of Things context is mainly about either real-time control or anomaly detection. Neither of these depends on the kind of nomothetic orientation that positivism requires. The former requires only an objective function over inputs to guide the steering of the dynamic system. The latter requires only the detection of deviation from historically observed patterns.
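
As a deliberately simple sketch of the second kind of task, here is what deviation-from-history anomaly detection can look like; the window size, threshold, and example readings are arbitrary assumptions for illustration.

    # Anomaly detection as deviation from historically observed patterns:
    # flag readings that fall more than a few standard deviations outside
    # a rolling window of recent history. Window and threshold are arbitrary.
    from collections import deque
    from statistics import mean, stdev

    def detect_anomalies(readings, window=20, threshold=3.0):
        history = deque(maxlen=window)
        anomalies = []
        for i, value in enumerate(readings):
            if len(history) == window:
                mu, sigma = mean(history), stdev(history)
                if sigma > 0 and abs(value - mu) > threshold * sigma:
                    anomalies.append((i, value))
            history.append(value)
        return anomalies

    # e.g. a sensor that mostly reads around 10.0, with one spike at the end
    print(detect_anomalies([10.0, 10.1, 9.9] * 10 + [42.0]))

Nothing in this loop discovers or tests a general law; it only steers attention toward deviations, which is the sense in which it is control rather than science.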

‘Data science’ applied in this context isn’t actually about the discovery of knowledge at all. It is not, strictly speaking, a science. Rather, it is a process through which the operations of existing technologies are related and improved by further technological interventions. Robust positivist engineering knowledge is applied to these cases. But however much the machines may ‘learn’, what they learn is not propositional.

Perhaps the best we can say is that ‘data science’ in this context is the science of techniques for making these kinds of interventions. As learning these techniques depends on mathematical rigor and empirical prototyping, we can perhaps say that ‘pure’ (not applied) data science, in this limited sense, is a positivist science.

But the second reason why data science is not positivist comes about as a result of its application. The problem is that when systems controlled by complex computational processes interact, the result is a more complex system. In adversarial cases, the interacting complex systems become the subject matter of cybersecurity research, to which data science is one application. But as soon as one starts to study phenomena that are aware of the observer and can act in ways that respond to its presence, one gets out of positivist territory.

A better way to think about data science might be to think of it in terms of perception. In the visual system, data that comes in through the eye goes through many steps of preprocessing before it becomes the subject of attention. Visual representations feed into the control mechanisms of movement.

If we see data science not as a positivist attempt to discover natural laws, but rather as an extension of agency by expanding powers of perception and training skillful control, then we can get a picture of data science that’s consistent with theories of situated and embodied cognition.

These theories of situated and embodied cognition are perhaps the best contenders for what can displace the dominant paradigm as imagined by critics of cognitive science, economics, etc. Rather than being a rejection of the explanatory power of naturalistic theories of information processing, these theories extend naive theories to embrace the complexity of how an agent’s cognition is situated in a body in time, space, and society.

If we start to think of ‘data science’ not as a kind of natural science but as the techniques and tools for extending the information processing that is involved in one’s individual or collective agency, then we can start to think about data science as what it really is: power.


by Sebastian Benthall at May 14, 2015 06:14 AM

May 09, 2015

Ph.D. student

is science ideological?

In a previous post, I argued that Beniger is an unideological social scientist because he grounds his social scientific theory in robust theory from the natural and formal sciences, like theory of computation and mathematical biology. Astute commenter mg has questioned this assertion.

Does firm scientific grounding absolve a theoretical inquiry from ideology – what about the ideological framework that the science itself has grown in and is embedded in? Can we ascribe such neutrality to science?

This is a good question.

To answer it, it would be good to have a working definition of ideology. I really like one suggested by this passage from Habermas, which I have used elsewhere.

The concept of knowledge-constitutive human interests already conjoins the two elements whose relation still has to be explained: knowledge and interest. From everyday experience we know that ideas serve often enough to furnish our actions with justifying motives in place of the real ones. What is called rationalization at this level is called ideology at the level of collective action. In both cases the manifest content of statements is falsified by consciousness’ unreflected tie to interests, despite its illusion of autonomy. The discipline of trained thought thus correctly aims at excluding such interests. In all the sciences routines have been developed that guard against the subjectivity of opinion, and a new discipline, the sociology of knowledge, has emerged to counter the uncontrolled influence of interests on a deeper level, which derive less from the individual than from the objective situation of social groups.

If we were to extract a definition of ideology from this passage, it would be something like this: an ideology is:

  1. an expression of motives that serves to justify collective action by a social group
  2. …that is false because it is unreflective of the social group’s real interests.

I maintain that the theories that Beniger uses to frame his history of technology are unideological because they are not expressions of motives. They are descriptive claims whose validity has been tested thoroughly by multiple independent social groups with conflicting interests. It’s this validity within and despite the contest of interests which gives scientific understanding its neutrality.

Related: Brookfield’s “Contesting Criticality: Epistemological and Practical Contradictions in Critical Reflection” (here), which I think is excellent, succinctly describes the intellectual history of criticality and how contemporary usage of it blends three distinct traditions:

  1. a Marxist view of ideology as the result of objectively true capitalistic social relations,
  2. a psychoanalytic view of ideology as a result of trauma or childhood,
  3. and a pragmatic/constructivist/postmodern view of all knowledge being situated.

Brookfield’s point is that an unreflective combination of these three perspectives is incoherent both theoretically and practically. That’s because while the first two schools of thought (which Habermas combines, above–later Frankfurt School writers deftly combined Marxism with psychoanalysis) both maintain an objectivist view of knowledge, the constructivists reject this in favor of a subjectivist view. Since discussion of “ideology” comes to us from the objectivist tradition, there is a contradiction in the view that all science is ideological. Calling something ‘ideological’ or ‘hegemonic’ requires that you take a stand on something, such as the possibility of an alternative social system.


by Sebastian Benthall at May 09, 2015 05:05 PM

May 08, 2015

Ph.D. student

Fascinated by Vijay Narayanan’s talk at #DataEDGE

As I write this I’m watching Vijay Narayanan, Director of Algorithms and Data Science Solutions at Microsoft, give his talk at the DataEDGE conference at UC Berkeley.

The talk is about “The Data Science Economy.” It began with a history of the evolution of the human centralized nervous system. He then went on to show the centralizing trend of the data economy. Data collection will become more mobile, and data processing will be done in the cloud. This data will be sifted by software and used to power a marketplace of services, which ultimately deliver intelligence to their users.

It was wonderful to see somebody so in the know reaffirming what has been a suspicion I’ve had since starting graduate school but have found little support for in the academic setting. The suspicion is that what’s needed to accurately model the data science economy is a synthesis of cognitive science and economics that can show the comparative market value and competitiveness of different services.

This is not out of the mainline of information technology, management science, computer science, and other associated disciplines that have been at the nexus of business and academia for 70 years. It’s an intellectual tradition that’s rooted in the 1940’s cybernetics vision of Norbert Wiener and was going strong in the social sciences as late as Beniger‘s The Control Revolution, which, like Narayanan, draws an explicit connection between information processing in the brain and information processing in the microprocessor–notably while acknowledging the intermediary step of bureaucracy as a large-scale information processing system.

There’s significant cross-pollination between engineering, economics, computer science, and cognitive psychology. I’ve read papers from, say, the Education field in the late ’80s and early ’90s that refer to this collectively as “the dominant paradigm”. At UC Berkeley today, it’s fascinating to see departmental politics play out over ‘data science’ that echo some of these concerns: a powerful alliance of ideas is getting mobilized by industry and governments while other disciplines are struggling to find relevance.

It’s possible that these specialized disciplinary discourses are important for the cultivation of thought that is valuable for its insight despite being fundamentally impractical. I’m coming to a different view: that maybe the ‘dominant paradigm’ is dominant because it is scientifically true, and that other disciplinary orientations are suffering because they are based on unsound theory. If disciplines that are ‘dominated’ by another paradigm are floundering because they are, to put it simply, wrong, then that is a very elegant explanation for what’s going on.

The ramification of this is that what’s needed is not a number of alternatives to ‘the dominant paradigm’. What’s needed is that scholars double down on the dominant paradigm and learn how to express in its logic the complexities and nuances that the other disciplines have been designed to capture. What we can hope for, in terms of intellectual continuity, is the preservation of what’s best of older ideas in a creative synthesis with the foundational principles of computer science and mathematical biology.


by Sebastian Benthall at May 08, 2015 10:41 PM

Ph.D. alumna

Are We Training Our Students to be Robots?

Excited about the possibility that he would project his creativity onto paper, I handed my 1-year-old son a crayon. He tried to eat it. I held his hand to show him how to draw, and he broke the crayon in half. I went to open the door and when I came back, he had figured out how to scribble… all over the wooden floor.

Crayons are pretty magical and versatile technologies. They can be used as educational tools — or alternatively, as projectiles. And in the process of exploring their properties, children learn to make sense of both their physical affordances and the social norms that surround them. “No, you can’t poke your brother’s eye with that crayon!” is a common refrain in my house. Learning to draw — on paper and with some sense of meaning — has a lot to do with the context, a context that I help create, a context that is learned outside of the crayon itself.

From crayons to compasses, we’ve learned to incorporate all sorts of different tools into our lives and educational practices. Why, then, do computing and networked devices consistently stump us? Why do we imagine technology to be our educational savior, but also the demon undermining learning through distraction? Why are we so unable to see it as a tool whose value is most notably discovered situated in its context?

The arguments that Peg Tyre makes in “iPads < Teachers” are dead on. Personalized learning technologies won’t magically on their own solve our education crisis. The issues we are facing in education are social and political, reflective of our conflicting societal values. Our societal attitudes toward teachers are deeply destructive, a contemporary manifestation of historical attitudes towards women’s labor.

But rather than seeing learning as a process and valuing educators as an important part of a healthy society, we keep looking for easy ways out of our current predicament, solutions that don’t involve respecting the hard work that goes into educating our young.

In doing so, we glom onto technologies that will only exacerbate many existing issues of inequity and mistrust. What’s at stake isn’t the technology itself, but the future of learning.

An empty classroom at the Carpe Diem school in Indianapolis.

Education shouldn’t be just about reading, writing, and arithmetic. Students need to learn how to be a part of our society. And increasingly, that society is technologically mediated. As a result, excluding technology from the classroom makes little sense; it produces an unnecessary disconnect between school and contemporary life.

This forces us to consider two interwoven — and deeply political — societal goals of education: to create an informed citizenry and to develop the skills for a workforce.

With this in mind, there are different ways of interpreting the personalized learning agenda, which makes me feel simultaneously optimistic and outright terrified. If you take personalized learning to its logical positive extreme, technology will educate every student as efficiently as possible. This individual-centric agenda is very much rooted in American neoliberalism.

But what if there’s a darker story? What if we’re really training our students to be robots?

Let me go cynical for a moment. In the late 1800s, the goal of education in America was not particularly altruistic. Sure, there were reformers who imagined that a more educated populace would create an informed citizenry. But what made widespread education possible was that American business needed workers. Industrialization required a populace socialized into very particular frames of interaction and behavior. In other words, factories needed workers who could sit still.

Many of tomorrow’s workers aren’t going to be empowered creatives subscribed to the mantra of “Do what you love!” Many will be slotted into systems of automation that are hybrid human and computer. Not in the sexy cyborg way, but in the ugly call center way.

Like today’s retail laborers who have to greet every potential customer with a smile, many humans in tomorrow’s economy will do the unrewarding tasks that are too expensive for robots to replace. We’re automating so many parts of our society that, to be employable, the majority of the workforce needs to be trained to be engaged with automated systems.

All of this begs one important question: who benefits, and who loses, from a technologically mediated world?

Education has long been held up as the solution to economic disparity (though some reports suggest that education doesn’t remedy inequity). While the rhetoric around personalized learning emphasizes the potential for addressing inequity, Tyre suggests that good teachers are key for personalized learning to work.

Not only are privileged students more likely to have great teachers, they are also more likely to have teachers who have been trained to use technology — and how to integrate it into the classroom’s pedagogy. If these technologies do indeed “enhance the teacher’s effect,” this does not bode well for low-status students, who are far less likely to have great teachers.

Technology also costs money. Increasingly, low-income schools are pouring large sums of money into new technologies in the hopes that those tools can fix the various problems that low-status students face. As a result, there’s less money for good teachers and other resources that schools need.

I wish I had a solution to our education woes, but I’ve been stumped time and again, mostly by the politics surrounding any possible intervention. Historically, education was the province of local schools making local decisions. Over the last 30 years, the federal government and corporations alike have worked to centralize education.

From textbooks to grading systems, large companies have standardized educational offerings, while making schools beholden to their design logic. This is how Texas values get baked into Minnesota classrooms. Simultaneously, over legitimate concern about the variation in students’ experiences, federal efforts have attempted to implement learning standards. They use funding as the stick for conformity, even as local politics and limited on-the-ground resources get in the way.

Personalized learning has the potential to introduce an entirely new factor into the education landscape: network effects. Even as ranking systems have compared schools to one another, we’ve never really had a system where one student’s learning opportunities truly depend on another’s. And yet, that’s core to how personalized learning works. These systems don’t evolve based on the individual, but based on what’s learned about students writ large.

Personalized learning is, somewhat ironically, far more socialist than it may first appear. You can’t “personalize” technology without building models that are deeply dependent on others. In other words, it is all about creating networks of people in a hyper-individualized world. It’s a strange hybrid of neoliberal and socialist ideologies.

An instructor works with a student in the learning center at the Carpe Diem school in Indianapolis.

Just as recommendation systems result in differentiated experiences online, creating dynamics where one person’s view of the internet radically differs from another’s, so too will personalized learning platforms.

More than anything, what personalized learning brings to the table for me is the stark reality that our society must start grappling with the ways we are both interconnected and differentiated. We are individuals and we are part of networks.

In the realm of education, we cannot and should not separate these two. By recognizing our interconnected nature, we might begin to fulfill the promises that technology can offer our students.

This post was originally published to Bright at Medium on April 7, 2015. Bright is made possible by funding from the New Venture Fund, and is supported by The Bill & Melinda Gates Foundation.

by zephoria at May 08, 2015 12:29 AM

April 28, 2015

Ph.D. student

I really like Beniger

I’ve been a fan of Castells for some time but reading Ampuja and Koivisto’s critique of him is driving home my new appreciation of Beniger‘s The Control Revolution (1986).

One reason why I like Beniger is that his book is an account of social history and its relationship with technology that is firmly grounded in empirically and formally validated scientific theory. That is, rather than using as a baseline any political ideological framework, Beniger grounds his analysis in an understanding of the algorithm based in Church and Turing, an understanding of biological evolution grounded in biology, and so on.

This allows him to extend ideas about programming and control from DNA to culture to bureaucracy to computers in a way that is straightforward and plausible. His goal is, admirably, to get people to see the changes that technology drives in society as a continuation of a long regular process rather than a reason to be upset or a transformation to hype up.

I think there is something fundamentally correct about this approach. I mean that with the full force of the word correct. I want to go so far as to argue that Beniger (at least as of Chapter 3…) offers an unideological theory of history and society that is grounded in generalizable and universally valid scientific theory.

I would be interested to read a substantive critique of Beniger arguing otherwise. Does anybody know if one exists?


by Sebastian Benthall at April 28, 2015 06:43 AM

April 24, 2015

Ph.D. student

intersecting agencies and cybersecurity #RSAC

A recurring theme in my reading lately (such as Beniger‘s The Control Revolution, Horkheimer‘s Eclipse of Reason, and Norbert Wiener’s Cybernetics work) is the problem of two ways of reconciling explanations of how-things-came-to-be:

  • Natural selection. Here a number of autonomous, uncoordinated agents with some exogenously given variability encounter obstacles that limit their reproduction or survival. The fittest survive. Adaptation is due to random exploration at the level of the exogenous specification of the agent, if at all. In unconstrained cases, randomness rules and there is no logic to reality.
  • Purpose. Here there is a teleological explanation based on a goal some agent has “in mind”. The goal is coupled with a controlling mechanism that influences or steers outcomes towards that goal. Adaptation is part of the endogenous process of agency itself.

Reconciling these two kinds of description is not easy. A point Beniger makes is that differences between social theories in the 20th century can be read as differences in the divisions of where one demarcates agents within a larger system.


This week at the RSA Conference, Amit Yoran, President of RSA, gave a keynote speech about the change in mindset of security professionals. Just the day before I had attended a talk on “Security Basics” to reacquaint myself with the field. In it, there was a lot of discussion of how a security professional needs to establish “the perimeter” of their organization’s network. In this framing, a network is like the nervous system of the macro-agent that is an organization. The security professional’s role is to preserve the integrity of the organization’s information systems. Even in this talk on “the basics”, the speaker acknowledged that a determined attacker will always get into your network because of the limitations of the affordances of defense, the economic incentives of attackers, and the constantly “evolving” nature of the technology. I was struck in particular by this speaker’s detachment from the arms race of cybersecurity. The goal-driven adversariality of the agents involved in cybersecurity was taken as a given; as a consequence, the system evolves through a process of natural selection. The role of the security professional is to adapt to an exogenously-given ecosystem of threats in a purposeful way.

Amit Yoran’s proposed escape from the “Dark Ages” of cybersecurity got away from this framing in at least one way. For Yoran, thinking about the perimeter is obsolete. Because the attacker will always be able to infiltrate, the emphasis must be on monitoring normal behavior within your organization–say, which resources are accessed and how often–and detecting deviance through pervasive surveillance and fast computing. Yoran’s vision replaces the “perimeter” with an all-seeing eye. The organization that one can protect is the organization that one can survey as if it was exogenously given, so that changes within it can be detected and audited.

We can speculate about how an organization’s members will feel about such pervasive monitoring and auditing of activity. The interests of the individual members of a (sociotechnical) organization, the interests of the organization as a whole, and the interests of sub-organizations within an organization can be either in accord or in conflict. An “adversary” within an organization can be conceived of as an agent within a supervening organization that acts against the latter’s interests. Like a cancer.

But viewing organizations purely hierarchically like this leaves something out. Just as human beings are capable of more complex, high-dimensional, and conflicted motivations than any one of the organs or cells in our bodies, so too should we expect the interests of organizations to be wide and perhaps beyond the understanding of anyone within it. That includes the executives or the security professionals, which RSA Conference blogger Tony Kontzer suggests should be increasingly one and the same. (What security professional would disagree?)

What if the evolution of cybersecurity results in the evolution of a new kind of agency?

As we start to think of new strategies for information-sharing between cybersecurity-interested organizations, we have to consider how agents supervene on other agents in possibly surprising ways. An evolutionary mechanism may be a part of the very mechanism of purposive control used by a super-agent. For example, an executive might have two competing security teams and reward them separately. A nation might have an enormous ecosystem of security companies within its perimeter (…) that it plays off of each other to improve the robustness of its internal economy, providing for it the way kombucha drinkers foster their own vibrant ecosystem of gut fauna.

Still stranger, we might discover ways that purposive agents intersect at the neuronal level, like Siamese twins. Indeed, this is what happens when two companies share generic networking infrastructure. Such mereological complexity is sure to affect the incentives of everyone involved.

Here’s the rub: every seam in the topology of agency, at every level of abstraction, is another potential vector of attack. If our understanding of the organizational agent becomes more complex as we abandon the idea of the organizational perimeter, that complexity provides new ways to infiltrate. Or, to put it in the Enlightened terms more aligned with Yoran’s vision, the complexity of the system with its multitudinous and intersecting purposive agents will become harder and harder to watch for infiltrators.

If a security-driven agent is driven by its need to predict and audit activity within itself, then those agents will allow a level of complexity within themselves that is bounded by their own capacity to compute. This point was driven home clearly by Dana Wolf’s excellent talk on Monday, “Security Enforcement (re)Explained”. She outlined several ways that the computationally difficult cybersecurity functions–such as anti-virus and firewall technology–are being moved to the Cloud, where elasticity of compute resources theoretically makes it easier to cope with these resource demands. I’m left wondering: does the end-game of cybersecurity come down to the market dynamics of computational asymmetry?

This blog post has been written for research purposes associated with the Center for Long-Term Cybersecurity.


by Sebastian Benthall at April 24, 2015 12:13 AM

April 23, 2015

Ph.D. student

Beniger on anomie and technophobia

The School of Information Classics group has moved on to a new book: James Beniger’s 1986 The Control Revolution: Technological and Economic Origins of the Information Society. I’m just a few chapters in but already it is a lucid and compelling account of how the societal transformations due to information technology that are announced bewilderingly every decade are an extension of a process that began in the Industrial Revolution and just has not stopped.

It’s a dense book with a lot of interesting material in it. One early section discusses Durkheim’s ideas about the division of labor and its effect on society.

In a nutshell, the argument is that with industrialization, barriers to transportation and communication break down and local markets merge into national and global markets. This induces cycles of market disruption where, because producers and consumers cannot communicate directly, producers need to “trust to chance” by embracing a potentially limitless market. This creates an unregulated economy prone to crisis. This sounds a little like venture-capital-fueled Silicon Valley.

The consequence of greater specialization and division of labor is a greater need for communication between the specialized components of society. This is the problem of integration, and it affects both the material and the social. Specifically, the magnitude and complexity of material flows result in a sharpening division of labor. When properly integrated, the different ‘organs’ of society gain in social solidarity. But if communication between the organs is insufficient, then the result is a pathological breakdown of norms and sense of social purpose: anomie.

The state of anomie is impossible wherever solidary organs are sufficiently in contact or sufficiently prolonged. In effect, being contiguous, they are quickly warned, in each circumstance, of the need which they have of one another, and, consequently, they have a lively and continuous sentiment of their mutual dependence… But, on the contrary, if some opaque environment is interposed, then only stimuli of a certain intensity can be communicated from one organ to another. Relations, being rare, are not repeated enough to be determined; each time there ensues new groping. The lines of passage taken by the streams of movement cannot deepen because the streams themselves are too intermittent. If some rules do come to constitute them, they are, however, general and vague.

An interesting question is to what extent Beniger’s thinking about the control revolution extend to today and the future. An interesting sub-question is to what extent Durkheim’s thinking is relevant today or in the future. I’ll hazard a guess that’s informed partly by Adam Elkus’s interesting thoughts about pervasive information asymmetry.

An issue of increasing significance as communication technology improves is that the bottlenecks to communication become less technological and more about our limitations as human beings to sense, process, and emit information. These cognitive limitations are being overwhelmed by the technologically enabled access to information. Meanwhile, there is a division of labor between those that do the intellectually demanding work of creating and maintaining technology and those that do the intellectually demanding work of creating and maintaining cultural artifacts. As intellectual work demands the specialization of limited cognitive resources, this results in conflicts of professional identity due to anomie.

Long story short: Anomie is why academic politics are so bad. It’s also why conferences specializing in different intellectual functions can harbor a kind of latent animosity towards each other.


by Sebastian Benthall at April 23, 2015 09:26 PM

April 18, 2015

MIMS 2015

Pervasively Distributed Trademark Enforcement

This post explores similarities between ICANN’s Domains Protected Marks List (DPML) process and Pervasively Distributed Copyright Enforcement (PDCE). The DPML operates on trademarks, while PDCE concerns copyright. However, the two are similar in both intention and consequence.

I’ll first introduce PDCE with a brief summary. Then I will explain the Trademark Clearinghouse and the DPML service that depends on it. I’ll explore this further using an example, and then end with three points that PDCE and DPML have in common.

A Quick Introduction to PDCE

Pervasively Distributed Copyright Enforcement (PDCE) was first described in a paper by Julie Cohen in 2006,1 and while readers would probably benefit from having read it, doing so is not required to follow this post.

To quote the abstract from Cohen’s paper,

“The distributed extension of intellectual property enforcement into private spaces and throughout communications networks can be understood as a new, hybrid species of disciplinary regime that locates the justification for its pervasive reach in a permanent state of crisis. This hybrid regime derives its force neither primarily from centralized authority nor primarily from decentralized, internalized norms, but instead from a set of coordinated processes for authorizing flows of information.”

PDCE relies on delegation of authority to processes carried out by machines. Digital Rights Management (DRM) is a good example of this. Whether or not a given act would be permissible under law is irrelevant if a machine implementing a process forbids it. Where a judge might have ruled an act of copying to be fair-use ex-post, DRM might forbid the action ex-ante.

An Introduction to DPML

In 2005 ICANN started a policy development process to introduce new generic Top Level Domains (gTLDs) to the Domain Name System (DNS) hierarchy.2 In 2013 the first of these domains went live.3 As of this writing there are more than 500 new gTLDs in use.3 Additionally, there are roughly 700 new gTLD applications still being processed by ICANN.3

As part of the development of its new gTLDs, ICANN also revisited its policy towards trademark protection. The Uniform Dispute Resolution Policy (UDRP) had been the sole means of resolving trademark disputes in DNS prior to the introduction of the new gTLDs. UDRP does not go away for new gTLDs, but it does get augmented with two new tools for trademark protection: the Trademark Clearinghouse (TMCH) and the Uniform Rapid Suspension System (URS).

The TMCH is a database of registered trademarks. It is not a trademark office, since each mark must already be registered at an actual trademark office. Trademark holders can pay to have their mark recorded at the TMCH. Currently this costs $150 for a year, $435 for three years, and $725 for five years. Bulk discounts are also available.4 In return, users gain access to services that help protect their trademark in the DNS.

The first service is a sunrise service, which gives the TMCH user 30 days of priority access to register their trademark as a Second Level Domain (SLD)5 in any new gTLD. During this sunrise period, only the trademark holder can register their mark as an SLD. Once the sunrise period ends, the new gTLD will start accepting registrations from the general public.

Then begins the 90-day notification period. During this period, individual registrants receive notification when attempting to register a potentially infringing SLD. The TMCH user also receives notification that someone attempted to register their mark as an SLD. Following the 90-day notification period, TMCH users can still elect to receive notification when an SLD is registered that potentially infringes their mark.

The above services are offered by the Trademark Clearinghouse itself. The final service, Domain Protected Marks List (DPML), is optionally offered by new gTLD registries. DPML allows TMCH users to defensively block DNS registrations using their trademark. Each registry has slightly different policies regarding DPML, but the general idea is the same. The point of DPML is to prevent registrations of TMCH-recorded trademarks at participating new gTLD registries. It is not a notice-based service like the two services offered by ICANN; instead, it blocks registrations that a registry determines to be infringing the TMCH user’s mark.

TMCH users must pay for DPML protection at each new gTLD registry separately. However, since most new gTLD registries control multiple new gTLDs, paying for protection at one registry protects the TMCH user on all of that registry’s gTLDs.

For example, paying for DPML protection from Donuts, a new gTLD registry, would afford protection for all of Donuts’ new gTLDs.6 Donuts offers an expansive definition of protection. In addition to direct naming conflicts, Donuts will also block registrations of SLDs which merely contain the TMCH trademark. According to their website, “..if the Domain Name Label [is] ‘sample’ .., a DPML Block may be applied for any of the following labels: ‘sample’, ‘musicsample’, ‘samplesale’, or ‘thesampletest’”.7

It’s important to understand the distinction between the Trademark Clearinghouse and DPML. The TMCH is a database of verified trademarks. ICANN is responsible for hosting the TMCH and verifying that the data in it is valid. The DPML is a service provided by some new gTLD registries that makes use of the TMCH, and must be paid for separately.

An Example

Let’s say there is a company called Mixahedron Inc. that manufactures and sells drink-mixing equipment in multiple geometric shapes. Mixahedron Inc. holds the trademark for the term ‘Mixahedron’ in the country where it is incorporated. They own mixahedron.com and use it for their main corporate Internet presence, but in the past they’ve had problems on other TLDs. When .info was launched a cyber squatter registered mixahedron.info and sent phishing emails to Mixahedron Inc’s customers, directing them to change their account information on mixahedron.info. Mixahedron Inc. was able to gain control of mixahedron.info, but it cost time and money. This event caused customer complaints and loss of credibility.

In fear of this happening again, Mixahedron Inc. became a user of the Trademark Clearinghouse when it was launched. In addition, they paid both Donuts and Rightside for a ten-year service contract for DPML on their mark ‘Mixahedron’. Now when someone tries to register mixahedron.business, they get blocked. Also nice is that disgruntled customers cannot register mixahedron-sucks.wtf, i-hate-mixahedron.gripe or mixahedron.fail. With thousands of new gTLDs coming into existence, a service like DPML offers the only viable avenue for Mixahedron Inc. to defensively register all derivatives of their mark.

Another side to this story is from a customer of Mixahedron Inc’s named Mark. Mark had his left index finger ripped off by one of Mixahedron Inc’s professional mixers. After his recovery he started investigating their mixers and discovered other people had suffered similar fates with them. Mark decided to set up a forum website called mixahedron.surgery where the community of people injured by Mixahedron Inc’s mixers could share stories and plan actions. He thought the satirical name would help to get the message out, and provide a bit of a publicity boost to his campaign. Unfortunately for Mark, his registrar GoDaddy.com refused his registration. Donuts is the registry for .surgery, and since Mixahedron Inc. pays Donuts for DPML services only Mixahedron Inc. can register mixahedron.surgery.

Mark doesn’t understand any of this, and doesn’t know anything about trademark law, the Trademark Clearinghouse, or DPML. In frustration, Mark gave up and instead registered a domain name unrelated to Mixahedron. His entirely valid campaign against Mixahedron was constrained by his inability to register a recognizable domain name. To compensate for this, and to spread word of his campaign, Mark purchased Google AdWords for terms like ‘Mixahedron pain’, and ‘Mixahedron defect’.

Analysis

A key similarity between Pervasively Distributed Copyright Enforcement (PDCE) and the Domain Protected Marks List (DPML) is the lack of recourse either affords the user at the time of rights constraint. With PDCE this might take the form of an inability for a user to argue fair use with a DRM system. Similarly with DPML, a DNS registrant is unable to argue with the registry refusing their domain name application. Both PDCE and DPML are rigid processes with ex-ante assumptions of misuse that favor intellectual property holders.

One of the main purposes of trademark law is to prevent confusion of genuine branded products with illegitimate or fake products. In the United States, there is considerable legal precedent to call upon when deciding whether the use of a trademark is infringing, or is acceptable because of free speech protections. The DPML short-circuits this human decision making in favor of an immediate, unappealable constraint on action.

The trademark theory that the DPML regime comes closest to implementing is referred to as the ‘initial interest confusion’ theory. In the context of cybersquatting case precedent, initial interest confusion results when users visiting a website mistake a so-called gripe site for an actual sponsored site of the trademark holder. This theory ignores the content of the site when evaluating whether a user might be confused by the use of the trademark. Trademark holders attempting to shut down gripe sites have invoked this theory, and have sometimes succeeded.

In Lamparello v. Falwell, Christopher Lamparello registered fallwell.com and hosted a gripe site discrediting Jerry Falwell and his ministry. Falwell sued but the court ruled in favor of Lamparello finding in part that, “Applying the initial interest confusion theory to gripe sites like Lamparello’s would enable the mark holder to insulate himself from criticism - or at least to minimize access to it. .. Rather, to determine whether a likelihood of confusion exists as to the source of a gripe site like that at issue in this case, a court must look not only to the allegedly infringing domain name, but also to the underlying content of the website.”8

The DPML affords no appeals process to the user who is denied registration of a domain name, and it cannot evaluate the content of a website before it is created. Both PDCE and DPML override legitimate freedom of expression concerns. Copyright’s doctrine of fair use can be seen as an outlet for free expression in a similar vein as limiting the scope of initial interest confusion in trademark law. Both PDCE and DPML effectively disable that outlet by default. They force the user to find a means of enabling it again via the courts or, in the case of some DRM, technical subversion.

Another similarity between PDCE and the DPML is that they both depend on a state of permanent crisis. For PDCE this is the increasing ease with which the Internet and software have allowed copyright infringement to happen. For DPML this is the permanent threat of consumer confusion brought on by domain cybersquatting and phishing. Cybersquatters set up websites with DNS names similar to famous brand names and either attempt to sell the domain to the brand owner, or attempt to trick users into visiting their site to harvest webpage impressions. Phishers trick users into visiting websites and then divulging sensitive information.

Web users need to know that when they visit an organization’s website, they are visiting the official website of that company instead of an imposter website attempting to scam them. Years of web browsing have established an expectation in users to perform this verification based largely on what appears in their web browser address bar, which, for the time being, usually contains only a DNS name. There may be other icons in the address bar purporting to authenticate the website, but many users don’t understand these. Thus, brand owners look to the DNS to provide a solution. DPML is an attempt to directly respond to the problems of both cybersquatting and phishing by ‘cleaning up’ the DNS.

The consequences of being a reaction to permanent crisis hold true for both PDCE and DPML. “Rather than normalizing those who remain on the ‘right’ side of the new boundaries, [PDCE] seeks to normalize a regime of universal, technologically-encoded constraint.”9 The ultimate goal of both PDCE and DPML is to become invisible and establish new normative behavior.

The third similarity is that both PDCE and DPML are neither completely decentralized, nor completely centralized systems of control. Instead, they depend on a network of actors. “The resulting [PDCE] regime of crisis management is neither wholly centralized nor wholly decentralized; it relies, instead on coordination of technologies and processes for authorizing information flows.” This quote about PDCE could just as easily apply to DPML. DNS is decentralized, but DPML is not. The network of DPML revolves around the very centralized TMCH, but from there becomes more decentralized as it branches out to registries, registrars and eventually individual registrants.

Conclusion

We have explored three similarities between PDCE and DPML in this post. The reason for pointing them out is not to show common thinking across two domains of intellectual property law. It is instead to highlight some genuine issues with the approach ICANN has taken in establishing the TMCH and the DPML. This is a complex issue, and the rights of trademark holders need to be balanced with those of free expression. The TMCH and DPML are both very new, and it can be difficult to predict the future. Only time will tell how users react to these changes in the DNS registration process. There could also be court challenges to the DPML or the TMCH. We’ll just have to wait and see.

An earlier version of this paper was written as an assignment for Info 296a: Technology Delegation @ UC Berkeley’s School of Information.

  1. Julie Cohen, Pervasively Distributed Copyright Enforcement, Georgetown Law Journal, Vol. 95, 2006

  2. ICANN’s new gTLD Program

  3. ICANN’s new gTLD Statistics

  4. Basic Fee Structure for the Trademark Clearinghouse

  5. In DNS lingo a registry contracts with ICANN to service a DNS TLD. Registrars contract with registries to offer second-level domains (SLDs) to the public. If you register example.com, you are contracting with a registrar for an SLD.

  6. Blocking Mechanisms for TMCH-clients

  7. Donuts DPML Overview

  8. Lamparello v. Falwell, 4th Cir. 2005, 420 F.3d 309

  9. Julie Cohen, Pervasively Distributed Copyright Enforcement, Georgetown Law Journal, Vol. 95, 2006, at page 28

Pervasively Distributed Trademark Enforcement was originally published by Andrew McConachie at Metafarce on April 18, 2015.

by Andrew McConachie (andrewm@ischool.berkeley.edu) at April 18, 2015 07:00 AM

April 08, 2015

Ph.D. student

causal inference in networks is hard

I am trying to make statistically valid inferences about the mechanisms underlying observational networked data and it is really hard.

Here’s what I’m up against:

  • Even though my data set is a complete ecologically valid data set representing a lot of real human communication over time, it (tautologically) leaves out everything that it leaves out. I can’t even count all the latent variables.
  • The best methods for detecting causal mechanisms, the potential outcomes framework of the Rubin causal model, depend on the assumption that different members of the sample don’t interfere. But I’m working with networked data. Everything interferes with everything else, at least indirectly. That’s why it’s a network.
  • Did I mention that I’m working with communications data? What’s interesting about human communication is that it’s not really generated at random at all. It’s very deliberately created by people acting more or less intelligently all the time. If the phenomenon I’m studying is not more complex than the models I’m using to study it, then there is something seriously wrong with the people I’m studying.

I think I can deal with the first point here by gracefully ignoring it. It may be true that any apparent causal effect in my data is spurious and due to a common latent cause upstream. It may be true that the variance in the data is largely due to exogenous factors. Fine. That’s noise. I’m looking for a reliable endogenous signal. If there isn’t one, that would suggest that my entire data set is epiphenomenal. But I know it’s not. So there’s got to be something there.

For the second point, there are apparently sophisticated methods for extending the potential outcomes framework to handle peer effects. These are gnarly, and though I figure I could work with them, I don’t think they are going to be what I need, because I’m not really looking for a causal relationship in the sense of a statistical relationship between treatment and outcome. I’m not after, in the first instance, what might be called type causation. I’m rather trying to demonstrate cases of token causation, where causation is literally the transfer of information from one object to another. And then I’m trying to show regularity in this underlying kind of causation in a layer of abstraction over it.

The best angle I can come up with on this so far is to use emergent properties of the network like degree assortativity to sort through potential mathematically defined graph generation algorithms. These algorithms can act as alternative hypotheses, and the observed emergent properties can theoretically be used to compute the likelihood of the observed data given the generation methods. Then all I need is a prior over graph generation methods! It’s perfectly Bayesian! I wonder if it is at all feasible to execute on. I will try.
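
To make that concrete, here is a minimal sketch of what such a comparison might look like, in an approximate-Bayesian spirit. Everything in it is an illustrative assumption rather than my actual pipeline: the karate club graph stands in for real data, the two candidate generators and the tolerance are arbitrary, and the prior is uniform. The point is just that an emergent statistic like degree assortativity can serve as the summary statistic for the likelihood.

```python
# Sketch: compare candidate graph-generation models by how well they
# reproduce an observed emergent property (degree assortativity).
# All choices here (models, tolerance, prior) are illustrative assumptions.
import networkx as nx

observed = nx.karate_club_graph()                      # stand-in for the real network
target = nx.degree_assortativity_coefficient(observed)
n, m = observed.number_of_nodes(), observed.number_of_edges()

models = {
    "erdos_renyi": lambda: nx.gnm_random_graph(n, m),
    "barabasi_albert": lambda: nx.barabasi_albert_graph(n, max(1, m // n)),
}

def likelihood(name, samples=200, tol=0.05):
    """Approximate P(data | model) as the fraction of simulated graphs whose
    assortativity falls within `tol` of the observed value."""
    hits = sum(
        abs(nx.degree_assortativity_coefficient(models[name]()) - target) < tol
        for _ in range(samples)
    )
    return hits / samples

prior = {name: 1.0 / len(models) for name in models}   # uniform prior over generators
unnormalized = {name: likelihood(name) * prior[name] for name in models}
z = sum(unnormalized.values()) or 1.0
posterior = {name: v / z for name, v in unnormalized.items()}
print(posterior)
```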

It’s not 100% clear how you can take an algorithmically defined process and turn that into a hypothesis about causal mechanisms. Theoretically, as long as a causal network has computable conditional dependencies it can be represented by an algorithm. I believe that any algorithm (in the Church/Turing sense) can be represented as a causal network. Can this be done elegantly, so that the corresponding causal network represents something like what we’d expect from the scientific theory on the matter? This is unclear because, again, Pearl’s causal networks are great at representing type causation but not as expressive of token causation among a large population of uniquely positioned, generatively produced stuff. Pearl is not good at modeling life, I think.

The strategic activity of the actors is a modeling challenge but I think this is actually where there is substantive potential in this kind of research. If effective strategic actors are working in a way that is observably different from naive actors in some way that’s measurable in aggregate behavior, that’s a solid empirical result! I have some hypotheses around this that I think are worth checking. For example, probably the success of an open source community depends in part on whether members of the community act in ways that successfully bring new members in. Strategies that cultivate new members are going to look different from strategies that exclude newcomers or try to maintain a superior status. Based on some preliminary results, it looks like this difference between successful open source projects and most other social networks is observable in the data.


by Sebastian Benthall at April 08, 2015 07:03 PM

April 05, 2015

MIMS 2014

The ‘Frozen’ expert predicts the sequel

I saw this comic (by Lauren Weisenstein) at the Nib, and sent it to S, my niece’s mom. She thought it would be fun to ask A what she thinks the Frozen sequel would be like. A did not see the other ideas because S didn’t want to influence her thinking. A also does not yet know that a sequel is in the works. Once I saw what she came up with, I couldn’t resist illustrating it.

Frozen Sequel

What do you think will happen in the sequel to Frozen?

I drew this on Paper, my favorite app. However, they don’t let people upload drawings and mess with color, so in the absence of a stylus, I was stuck with fingerpainting this in its entirety. I blame any smudges and inconsistencies on my fat fingers. I did the layout and captions on Photoshop. I would’ve loved to hand write them, but there’s no way my fat fingers would’ve stood up to THAT challenge (I tried!)

So, what do YOU think will happen in the sequel to Frozen?


by muchnessofd at April 05, 2015 08:06 PM

March 31, 2015

Ph.D. student

Innovation, automation, and inequality

What is the economic relationship between innovation, automation, and inequality?

This is a recurring topic in the discussion of technology and the economy. It comes up when people are worried about a new innovation (such as data science) that threatens their livelihood. It also comes up in discussions of inequality, such as in Piketty’s Capital in the Twenty-First Century.

For technological pessimists, innovation implies automation, and automation suggests the transfer of surplus from many service providers to a technological monopolist providing a substitute service at greater scale (scale being one of the primary benefits of automation).

For Piketty, it’s the spread of innovation in the sense of the education of skilled labor that is the primary force counteracting capitalism’s tendency towards inequality and (he suggests) the implied instability. For all the importance Piketty places on this process, he treats it hardly at all in his book.

Whether or not you buy Piketty’s analysis, the preceding discussion indicates how innovation can cut both for and against inequality. When there is innovation in capital goods, this increases inequality. When there is innovation in a kind of skilled technique that can be broadly taught, that decreases inequality by increasing the relative value of labor to capital (which is generally much more concentrated than labor).

I’m a software engineer in the Bay Area and realize that it’s easy to overestimate the importance of software in the economy at large. This is apparently an easy mistake for other people to make as well. Matthew Rognlie, the economist who has been declared Piketty’s latest and greatest challenger, thinks that software is an important new form of capital and draws certain conclusions based on this.

I agree that software is an important form of capital–exactly how important I cannot yet say. One reason why software is an especially interesting kind of capital is that it exists ambiguously as both a capital good and as a skilled technique. While naively one can consider software as an artifact in isolation from its social environment, in the dynamic information economy a piece of software is only as good as the sociotechnical system in which it is embedded. Hence, its value depends both on its affordances as a capital good and its role as an extension of labor technique. It is perhaps easiest to see the latter aspect of software by considering it a form of extended cognition on the part of the software developer. The human capital required to understand, reproduce, and maintain the software is attained by, for example, studying its source code and documentation.

All software is a form of innovation. All software automates something. There has been a lot written about the potential effects of software on inequality through its function in decision-making (for example: Solon Barocas, Andrew D. Selbst, “Big Data’s Disparate Impact” (link).) Much less has been said about the effects of software on inequality through its effects on industrial organization and the labor market. After having my antennae up for this for a while, I’ve come to a conclusion about why: it’s because the intersection between those who are concerned about inequality in society and those who can identify well enough with software engineers and other skilled laborers is quite small. As a result there is not a ready audience for this kind of analysis.

However unreceptive society may be to it, I think it’s still worth making the point that we already have a very common and robust compromise in the technology industry that recognizes software’s dual role as a capital good and labor technique. This compromise is open source software. Open source software can exist both as an unalienated extension of its developer’s cognition and as a capital good playing a role in a production process. Human capital tied to the software is liquid between the software’s users. Surplus due to open software innovations goes first to the software users, then second to the ecosystem of developers who sell services around it. Contrast this with the proprietary case, where surplus goes mainly to a singular entity that owns and sells the software rights as a monopolist. The former case is vastly better if one considers societal equality a positive outcome.

This has straightforward policy implications. As an alternative to Piketty’s proposed tax on capital, any policies that encourage open source software are ones that combat societal inequality. This includes procurement policies, which need not increase government spending. On the contrary, if governments procure primarily open software, that should lead to savings over time as their investment leads to a more competitive market for services. Equivalently, R&D funding to open science institutions results in more income equality than equivalent funding provided to private companies.


by Sebastian Benthall at March 31, 2015 01:00 PM

March 29, 2015

Ph.D. student

going post-ideology

I’ve spent a lot of my intellectual life in the grips of ideology.

I’m glad to be getting past all of that. That’s one reason why I am so happy to be part of Glass Bead Labs.

Glass Bead Labs

There are a lot of people who believe that it’s impossible to get beyond ideology. They believe that all knowledge is political and nothing can be known with true clarity.

I’m excited to have an opportunity to try to prove them wrong.


by Sebastian Benthall at March 29, 2015 08:14 PM

March 22, 2015

MIMS 2012

Hiring Designers: Advice from Twitter, Uber, and GoPro

Google Ventures invited design leaders from Twitter, Uber, and GoPro to discuss the topic of hiring designers. What follows are my aggregated and summarized notes.

Finding Designers

Everyone agrees, finding designers is hard. They’re in high demand, and the best ones are never on the market for long (if at all). “If the job is good enough, everyone is available.” There are a few pieces of advice for finding them, though:

  • If you’re having trouble getting a full-time designer, start with contractors. If they’re good, you can try to woo them into joining full-time. Some designers like the freedom of contracting and don’t think they want to be full-time anywhere, but if you can show them how awesome your team and culture and product are, you can lure them over.
  • Look for people who are finishing up a big project, or have been at the same place for 2+ years. These people might be looking for a new challenge, and you can nab them before they’re officially on the market.
  • Dedicate hours each day to sourcing and recruiting. Work closely with your recruiters (if you have any) to train them on what to look for in portfolios and CVs. Include them in interview debriefs so they can understand what was good and bad about candidates, and tune who they reach out to accordingly. I.e. iterate on your hiring process. We’ve done this a lot at Optimizely.
    • Even better is to have dedicated design recruiter(s) who understand the market and designers.
    • If you have no recruiters, you could consider outsourcing recruiting to an agency.
  • When reaching out to designers, get creative. Use common connections, use info from their site or blog posts, follow people on Twitter, etc.
  • Typically you’ll have the highest chance for success if you, as the hiring manager, reach out, rather than a recruiter.

As a designer, this is what hiring managers will be looking for:

  • Have a high word-to-picture ratio. Product Design is all about communication, understanding the problem, solutions, and context. If you can’t clearly communicate that, you aren’t a good designer.
    • An exception is visual designers, who can get away with more visually-oriented portfolios.
  • What about your design is exceptional? Why should I care? Make sure to make this clear when writing about your work.
  • When looking at a portfolio, hiring managers will be wondering, “What’s the complexity of the problem being solved? Can they tell a story? Are they self critical? What would they do differently or what could be better?” Write about all of these things in your portfolio; don’t just have pictures of the final result.
    • An exception to the above is high demand designers, who don’t have time for a portfolio because they don’t need one to get work. Hiring these people is all based on reputation.
  • Don’t have spelling errors. Spelling errors are an automatic no-go. Designers need to be sticklers for details, and have “pride of ownership.”
    • One million percent agree

On Interviewing Designers

Pretty much everyone has a portfolio presentation, followed by 3–6 one-on-one interviews. Everyone must be a “Yes” for an offer to be made. (Optimizely is the same.)

Look for curiosity in designers. Designers should be motivated to learn, grow, read blogs/industry news, and use apps/products just to see what the UX and design is like. They should have a mental inventory of patterns and how they’re used.

In portfolio review, designers should NEVER play the victim. Don’t blame the PM, the organization, engineering, etc. (even if it’s true.) Don’t talk shit about the constraints. Design is all about constraints. Instead, talk about how you worked within those constraints (e.g. “there was limited budget, therefore…”)

On Design Exercises

People were pretty mixed about whether design exercises are useful during the interview process or not. Arguments against them include:

  • They can be ethically wrong if you’re having candidates do spec work for the company. You’re asking people to work for free, and you open yourself up to lawsuits.
    • I wholeheartedly agree
  • They don’t mimic the way people actually work. Designers aren’t usually at a board being forced to create UIs and make design decisions.
    • I disagree with this sentiment. A lot of work I do with our designers is at whiteboards. Decisions and final designs aren’t always being made, but we’re exploring ideas and thinking through our options. Doing this in an interview simulates what it’s like to work with someone, and how they approach design. It isn’t about the final whiteboarded designs, it’s about their process, questions they ask, solutions they propose, how they think about those solutions, etc. Plus, you get to experience what they’re like to interact with.
  • Take home exercises aren’t recommended. People are too busy for them, and senior candidates won’t do them.
    • The exception to this is junior designers who don’t have much of a portfolio yet so you can see how they actually design UIs
    • All of this has been true in my experience, as well.

Arguments for design exercises:

  • You get to see how candidates approach a problem and explore solutions
  • You get a sense of what it’s like to work with them
  • You hear them evaluate ideas, which tells you how self-critical they are and how well they know best practices

Personally, I find design exercises very useful. They tell me a lot about how a candidate thinks, and what they’re like to work with. The key is to find a good exercise that isn’t spec work. GV wrote a great article on this topic.

On Making a Hiring Decision

It’s easy when candidates are great or awful — the yes and no decisions are easy. The hard ones are when people are mixed. Typically this means you shouldn’t extend an offer, but there are reasons to give them a second chance:

  • They were nervous
  • English is their second language
  • They were stressed from interviewing

In these cases, try bringing the person back in a more relaxed environment; for example, have lunch or coffee together.

Some people have great work, but some sort of personality flaw (e.g. they don’t make eye contact with women). These people are a “no” — remember, “No assholes, no delicate geniuses”, and avoid drama at all costs.

When making an offer, you’ll sometimes have to sell them on the company, team, product, and challenges. One technique is to explain why they’ll be a great fit on the team (you’ll flatter them while simultaneously demonstrating the challenges they’ll face and impact they’ll have). If you have a big company and team, you can explain all the growth and learning opportunities a large team provides. And you don’t need to be small to move fast and make impactful decisions.

On Design Managers

Hiring design managers is hard. They’re hard to find, hard to attract, and most designers want to continue making cool shit rather than manage people. But if you’re searching for one, your best bet is to promote a senior designer to manager. They already understand the company, market, culture, and team, so they’re an easy fit. The art of management is often custom to the team and company.

If that isn’t an option, go through your network to find folks. You aren’t likely to have good luck from randos applying via the company website, or sourcing strangers.

Great managers are like great coaches — they’re ex-players who worked really hard to learn the game, and thus can teach it to others. Players that are naturally gifted, e.g. Michael Jordan, aren’t good coaches because they didn’t have to work hard to understand the game — it came naturally to them.

I feel like I fit this description. I worked hard to learn a lot of the skills that go into design. It took me a long time to feel comfortable calling myself a “designer”; it didn’t come naturally.

Management is a mix of creative direction, people management, and process. They should be able to partner with a senior designer to ship great product. Managers shouldn’t evaluate designers based on outcomes/impact. People can’t always control which project they’re on, some projects are cancelled, not all projects are equal, etc. Instead, reward behavior and process (e.g. “‘A’ for effort”.)

There are 4 things to look for in good managers:

  • They Get Shit Done
  • They improve the team, e.g. via recruiting, events, coaching/mentoring
  • They have, or can build, good relationships in the organization
  • They have hard design skills, empathy, and vision

On Generalists vs Specialists and Team Formation

The consensus is to hire 80/20 designers, i.e. generalists who have deep skills in one area (e.g. visual design, UX, etc.). They help teams move faster, and can work with specialists (e.g. content strategists) to ship high quality products quickly. Good ones will know what they don’t know, and seek help when they need it (e.g. getting input from visual designers if that isn’t their strength). “No assholes, no delicate geniuses”. Avoid drama at all costs.

This is the type of person we seek to hire as well. I’ve also seen firsthand that good designers are self-aware enough to know what their weaknesses are, and to seek help when necessary.

Cross-functional teams should be as small as possible while covering the breadth of skills needed to ship features. More people means more complexity and extra communication overhead. (I have certainly seen this mistake made at Optimizely.)

Having designers on a separate team (e.g. Comm/marketing designers on marketing) makes for sad designers. They become isolated, disgruntled, and unhappy. Ideally, they shouldn’t be on marketing. If they are separate, make bridges for the teams to communicate. Include them in larger design team meetings and crits and stuff so they feel included.

I totally agree. At Optimizely, we fought hard to keep our Communication Designers on the Design team for all the reasons listed here (Marketing wanted to hire their own designers). Our Marketing department ended up hiring their own developers to build and maintain our website, but earlier this year they moved over to the Design team so they could be closer to other developers and the Communication Designers working on the website. So far, they’re much happier on Design.

Should designers code?

People were somewhat mixed on this question. It was mostly agreed that it’s probably not a good use of their time, but it’s always a trade-off depending on what a specific team needs to launch high quality product. A potential danger is that they may only design what’s easy to code, or what they know they can build. That is, it’s a conflict of interest that leads to them artificially limiting themselves and the design.

As a designer who codes, I only partially agree with what was said here. It’s true that you can fall into the trap of designing what’s easy to build, but it doesn’t have to be that way. I overcame this by focusing on explicitly splitting out the ideation/exploration phase from the evaluation/convergence phase (something that good designers should be doing anyway). When designing, I explore as many ideas as I can without thinking at all about implementation, then I evaluate which idea(s) are best. One of those criteria (among many) is implementation cost and whether it used existing UI components we’ve already built. I’ve found this to be effective at not limiting myself to only what I know is easy to build, but it took a lot of work to compartmentalize my thinking this way.

Artificially constraining the solution space is also a trap any designer can fall into, regardless of whether or not you know how to code. I’ve heard designers object to ideas with, “But that will be hard to build!”, or, “This idea re-uses an existing frontend component!” Whenever I hear that, I always tell them that they’re in the ideation phase, and they shouldn’t limit their thinking. Any idea is a good idea at this point. Once you’ve explored enough ideas, then you can start evaluating them and thinking about implementation costs. And if you have a great idea that’s hard to implement, you can argue for why it’s worth building.

Design-to-Engineering Ratio

It depends on the work, and what the frontend or implementation challenges are. For example, apps with lots of complex interactions will need more engineers to build. A common ratio is about 1:10.

More important than the specific ratio is to not form teams without a designer. Those teams get into bad habits, won’t ship quality product, and will dig a hole of design debt that a future designer will have to climb out of. (I’ve been through this, and it takes a lot of time and effort to correct broken processes of teams that lack design resources).

One way of knowing if you don’t have enough designers is if engineering complains about design being a bottleneck, although this is typically a lagging indicator. A great response to this was that the phrase “Blocked on design” is terrible. Design is a necessary creative endeavor! Why don’t we say that engineering is blocking product from being released? (In fact, for the first time ever, we have been saying this at Optimizely, since we need more engineers to implement some finished designs. Interested in joining the Engineering team at Optimizely? Drop me a line @jlzych).

Another good quote: “There’s nothing more dangerous than an idle designer.” An idle designer can go off the deep end redesigning things, and eventually get frustrated when their work isn’t getting used. So there should always be a bit more work than available people to do it. True dat.


This was a great event with fun speakers, good attendees, and excellent advice. The most interesting discussion topic for me was on design managers, since we’re actively searching for a manager now (let me know if you’re interested!) Overall, Optimizely’s hiring practices are in line with the best practices recommended here, so it’s nice to know we’re in good company.

by Jeff Zych at March 22, 2015 08:57 PM

March 16, 2015

Ph.D. student

correcting an error in my analysis

There is an error in my last post where I was thinking through the interpretation of the 25,000,000 hit number reported for the Buzzfeed blue/black/white/whatever dress post. In that post I assumed that the distribution of viewers would be the standard one you see in on-line participation: a power law distribution with a long tail. Depending on which way you hold the diagram, the “tail” is either the enormous number of instances that only occur once (in this case, a visitor who goes to the page once and never again) or it’s the population of instances that have bizarrely high occurrences (like that one guy who hit refresh on the page 100 times, and the woman that looked at the page 300 times, and…). You can turn one tail into the other by turning the histogram sideways and shaking really hard.

The problem with this analysis is that it ignores the data I’ve been getting from a significant subset of people who I’ve talked to about this in passing, which is that because the page contains some sort of well-crafted optical illusion, lots of people have looked at it once (and seen it as, say, a blue and black dress) and then looked at it again, seeing it as white and gold. In fact the article seems designed to get the reader to do just this.

If I’m being somewhat abstract in my analysis, it’s because I’ve refused to go click on the link myself. I have read too much Adorno. I hear the drumbeat of fascism in all popular culture. I do not want to take part in intelligently designed collective effervescence if I can help it. This is my idiosyncrasy.

But this inferred stickiness of the dress image has consequences for the traffic analysis. I’m sure that whoever is actually looking at the metrics on the article is tracking repeat versus unique visitors. I wonder how deliberately the image was created with the idea of maximizing repeat visitations in mind, and what the observed correlation between repeat and unique visitors looks like. Repeated visits suggest sustained interest over time, whereas “mere” virality is a momentary spread of information over space. If you see content as a kind of property and sustained traffic over time as the value of that property, it makes sense to try to create things with staying power. Memetic globules forever gunking the crisscrossed manifold of attention. Culture.

Does this require a different statistical distribution to process properly? Is Cosma Shalizi right after all, and are these “power law” distributions just overhyped log-normal distributions? What happens when the generative process has a stickiness term? Is that just reflected in the power law distribution’s exponent? One day I will get a grip on this. Maybe I can do it working with mailing list data.
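
For what it’s worth, the Shalizi question is at least easy to sketch computationally. The snippet below is only an illustration: the per-visitor hit counts are synthetic stand-ins (I still haven’t clicked the link), and Alstott’s `powerlaw` package is just one convenient way to run the comparison.

```python
# Sketch: is a heavy-tailed sample better described by a power law or a
# log-normal? Data here are synthetic stand-ins for per-visitor hit counts.
import numpy as np
import powerlaw

rng = np.random.default_rng(0)
hits = np.round(rng.lognormal(mean=0.5, sigma=1.2, size=10_000)).astype(int) + 1

fit = powerlaw.Fit(hits, discrete=True)
R, p = fit.distribution_compare('power_law', 'lognormal')
print(f"alpha = {fit.power_law.alpha:.2f}, xmin = {fit.power_law.xmin}")
print(f"log-likelihood ratio R = {R:.2f}, p = {p:.3f}  (R < 0 favors the log-normal)")
```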

I’m writing this because over the weekend I was talking with a linguist and a philosopher about collective attention, a subject of great interest to me. It was the linguist who reported having looked at the dress twice and seeing it in different colors. The philosopher had not seen it. The latter’s research specialty was philosophy of mind, a kind of philosophy I care about a lot. I asked him whether in cases of collective attention the mental representation supervenes reductively on many individual minds or on more than that. He said that this is a matter of current debate but that he wants to argue that collective attention means more than my awareness of X, and my awareness of your awareness of X, ad infinitum. Ultimately I’m a mathematical person and am happy to see the limit of the infinite process as itself and its relationship with what it reduces to mediated by the logic of infinitesimals. But perhaps even this is not enough. I gave the philosopher my recommendation of Soren Brier and Ulanowicz, who together I think provide the groundwork needed for an ontology of macroorganic mentality and representation. The operationalization of these theories is the goal of my work at Glass Bead Labs.


by Sebastian Benthall at March 16, 2015 08:22 PM

March 15, 2015

MIMS 2014

Moodboards = Design + Branding

So you’re developing this hot new website/app, and you’ve decided it’s time to convert those wireframes into a visual design. You have two choices – go with a design and color scheme that ‘feels’ right, or link it back to what you think your brand stands for. This is where moodboards come in. I am a huge fan of moodboards as a way to link design and marketing. Here’s how they work, using the example of how my team used this technique with Wordcraft to create an integrated visual design.

Wordcraft is an app that lets kids develop their understanding of language by creating sentences and seeing immediate visual feedback. Our vision was to create an app that helped kids learn, as they had fun exploring different sentence combinations.

We started out by creating post-its with words that we felt best described the brand identity. Once we had a board full of words, we used the affinity diagramming method of combining them into themes and came up with our theme words – Vibrant, Discovery, Playful and Clear.

Now comes the fun part – finding images that are synonymous with these words. You could do this exercise by cutting out pictures from magazines, the internet or whatever else catches your interest. We chose to use a Pinterest board to tag the images that we felt were the most descriptive. Here again, each team member picked images individually, which also helped us talk about what the words meant to each of us. This is a good way to bring the team together in a shared understanding of what you want your brand to symbolize.

Each of us then picked our top images for the theme words. Talking about the images, what they meant, and how we saw them connecting to our brand vision meant that there was a fair bit of overlap in these top images. Once we had the final moodboard ready, we used Adobe Kuler to distill colors from these images and create our brand colors. Ta-dah! In 1.5 hours, we had colors that were closest to what our team felt our brand represented. We used these across all our work – the app, the project website, our logo.

Wordcraft Moodboard

You can try this process out on any new app/website and see how it works for you. Personally, I love how it helps to bring a process to what could otherwise disintegrate into a very subjective conversation of, “I think our buttons should be blue, because my child likes blue.”

If you do try this out, let me know what you think!

Note: I put up a version of this post on Medium, as an experiment.


by muchnessofd at March 15, 2015 09:11 PM

March 08, 2015

Ph.D. student

25,000,000 re: @ftrain

It was gratifying to read Paul Ford’s reluctant think piece about the recent dress meme epidemic.

The most interesting fact in the article was that Buzzfeed’s dress article has gotten 25 million views:

People are also keenly aware that BuzzFeed garnered 25 million views (and climbing) for its article about the dress. Twenty-five million is a very, very serious number of visitors in a day — the sort of traffic that just about any global media property would kill for (while social media is like, ho hum).

I’ve recently become interested in the question: how important is the Internet, really? Those of us who work closely with it every day see it as central to our lives. Logically, we would tend to extrapolate and think that it is central to everybody’s life. If we are used to sampling from others’ experience using social media, we would see that social media is very important in everybody’s life, confirming this suspicion.

This is obviously a kind of sampling bias though.

This is where the 25,000,000 figure comes in handy. My experience of the dress meme was that it was completely ubiquitous. Literally nobody I was following on Twitter who was tweeting that day was not at least referencing the dress. The meme also got to me via an email backchannel, and came up in a seminar. Perhaps you had a similar experience: you and everyone you knew was aware of this meme.

Let’s assume that 25 million is an indicator of the order of magnitude of people that learned about this meme. If you googled the dress question, you probably clicked the article. Maybe you clicked it twice. Maybe you clicked it twenty times and you are an outlier. Maybe you didn’t click it at all. It’s plausible that it evens out and the actual number of people who were aware of the meme is somewhere between 10 million and 50 million.

That’s a lot of people. But–and this is really my point–it’s not that many people, compared to everybody. There’s about 300 million people in the United States. There’s over 7 billion people on the planet. Who are the tenth of the population who were interested in the dress? If you are reading this blog, they are probably people a lot like you or I. Who are the other ~93% of people in the U.S.?
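
Here’s the back-of-envelope arithmetic as a quick sketch. The clicks-per-aware-person figures are purely illustrative assumptions, which is the point: even under generous assumptions, the implied share of the population stays small.

```python
# Back-of-envelope: how many people does 25M views plausibly represent,
# and what share of the population is that? The clicks-per-person values
# are illustrative assumptions, not measurements.
views = 25_000_000
us_population = 300_000_000
world_population = 7_000_000_000

for clicks_per_person in (0.5, 1.0, 2.5):   # some aware people never click; some click repeatedly
    people = views / clicks_per_person
    print(f"{clicks_per_person:>4} clicks/person -> {people/1e6:>5.0f}M people, "
          f"{people/us_population:.0%} of the US, {people/world_population:.1%} of the world")
```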

I’ve got a bold hypothesis. My hypothesis is that the other 90% of people are people who have lives. I mean this in the sense of the idiom “get a life“, which has fallen out of fashion for some reason. Increasingly, I’m becoming interested in the vast but culturally foreign population of people who followed this advice at some point in their lives and did not turn back. Does anybody know of any good ethnographic work about them? Where do they hang out in the Bay Area?


by Sebastian Benthall at March 08, 2015 02:48 AM

March 04, 2015

MIMS 2014

Sketchnotes: Seattle Data Visualization Meetup

When I went to the Seattle Data Viz meetup today, I had 2 objectives:
1. To get some interesting inputs on the ‘Top 7 graphs’
2. To try Sketchnoting, just to build my skills in the area

I skipped the entire second half of the conversation because it focused almost entirely on plotly features, which were pretty cool but not my area of focus. For students / startups, I think they offer a very cool solution to experiment and collaborate on creating some of these.

My notes are kinda sparse because there wasn’t much discussion on the graphs themselves (which was what I was expecting). Ah well, 1/2 objectives ain’t too bad. So anyway, check out my Sketchnotes, and let me know what you think. I hope to do more soon (and maybe buy some more pens to add some dimensions to these!).

IMG_3668


by muchnessofd at March 04, 2015 05:42 AM

March 01, 2015

Ph.D. student

‘Bad twitter’ : exit, voice, and social media

I made the mistake in the past couple of days of checking my Twitter feed. I did this because there are some cool people on Twitter and I want to have conversations with them.

Unfortunately it wasn’t long before I started to read things that made me upset.

I used to think that a benefit of Twitter was that it allowed for exposure to alternative points of view. Of course you should want to see the other side, right?

But then there’s this: if you do that for long enough, you start to see each “side” make the same mistakes over and over again. It’s no longer enlightening. It’s just watching a train wreck in slow motion on repeat.

Hirschman’s Exit, Voice, and Loyalty is relevant to this. Presumably, over time, those who want a higher level of conversation Exit social media (and its associated news institutions, such as Salon.com) to more private channels, causing a deterioration in the quality of public discourse. Because social media sites have very strong network effects, they are robust to any revenue loss due to quality-sensitive Exiters, leaving a kind of monopoly-tyranny that Hirschman describes vividly thus:

While of undoubted benefit in the case of the exploitative, profit-maximizing monopolist, the presence of competition could do more harm than good when the main concern is to counteract the monopolist’s tendency toward flaccidity and mediocrity. For, in that case, exit-competition could just fatally weaken voice along the lines of the preceding section, without creating a serious threat to the organization’s survival. This was so for the Nigerian Railway Corporation because of the ease with which it could dip into the public treasury in case of deficit. But there are many other cases where competition does not restrain monopoly as it is supposed to, but comforts and bolsters it by unburdening it of its more troublesome customers. As a result, one can define an important and too little noticed type of monopoly-tyranny: a limited type, an oppression of the weak by the incompetent and an exploitation of the poor by the lazy which is the more durable and stifling as it is both unambitious and escapable. The contrast is stark indeed with totalitarian, expansionist tyrannies or the profit-maximizing, accumulation-minded monopolies which may have captured a disproportionate share of our attention.

It’s interesting to compare a Hirschman-inspired view of the decline of Twitter as a function of exit and voice to a Frankfurt School analysis of it in terms of the culture industry. It’s also interesting to compare this with boyd’s 2009 paper on “White flight in networked publics?” in which she chooses to describe the decline of MySpace in terms of the troubled history of race and housing.*

In particular, there are passages of Hirschman in which he addresses neighborhoods of “declining quality” and the exit and voice dynamics around them. It is interesting to me that the narrative of racialized housing policy and white flight is so salient to me lately that I could not read these passages of Hirschman without raising an eyebrow at the fact that he didn’t mention race in his analysis. Was this color-blind racism? Or am I now so socialized by the media to see racism and sexism everywhere that I assumed there were racial connotations when in fact he was talking about a general mechanism. Perhaps the salience of the white flight narrative to me has made me tacitly racist by making me assume that the perceived decline in neighborhood quality is due to race!

The only way I could know for sure what was causing what would be to conduct a rigorous empirical analysis I don’t have time for. And I’m an academic whose job is to conduct rigorous empirical analyses! I’m forced to conclude that without a more thorough understanding of the facts, any judgment either way will be a waste of time. I’m just doing my best over here and when push comes to shove I’m a pretty nice guy, my friends say. Nevertheless, it’s this kind of lazy baggage-slinging that is the bread and butter of the mass journalist today. Reputations earned and lost on the basis of political tribalism! It’s almost enough to make somebody think that these standards matter, or are the basis of a reasonable public ethics of some kind that must be enforced lest society fall into barbarism!

I would stop here except that I am painfully aware that as much as I know it to be true that there is a portion of the population that has exited the morass of social media and put it to one side, I know that many people have not. In particular, a lot of very smart, accomplished friends of mine are still wrapped up in a lot of stupid shit on the interwebs! (Pardon my language!) This is partly due to the fact that networked publics now mediate academic discourse, and so a lot of aspiring academics now feel they have to be clued in to social media to advance their careers. Suddenly, everybody who is anybody is a content farmer! There’s a generation who are looking up to jerks like us! What the hell?!?!

This has a depressing consequence. Since politically divisive content is popular content, and there is pressure for intellectuals to produce popular content, this means that intellectuals have incentives to propagate politically divisive narratives instead of working towards reconciliation and the greater good. Or, alternatively, there is pressure to aim for the lowest common denominator as an audience.

At this point, I am forced to declare myself an elitist who is simply against provocation of any kind. It’s juvenile, is the problem. (Did I mention I just turned 30? I’m an adult now, swear to god.) I would keep this opinion to myself, but at that point I’m part of the problem by not exercising my Voice option. So here’s to blogging.

* I take a particular interest in danah boyd’s work because, in addition to being one of the original Internet-celebrity-academics-talking-about-the-Internet (and so aptly doubling as both the foundational researcher and the just slightly implicated subject matter for this kind of rambling about social media and intellectualism; see below), she also shares an alma mater with me (Brown) and is the star graduate of my own department (UC Berkeley’s School of Information), and so serves as a kind of role model.

I feel the need to write this footnote because while I am in the scholarly habit of treating all academic writers I’ve never met abstractly as if they are bundles of text subject to detached critique, other people think that academics are real people(!), especially academics themselves. Suddenly the purely intellectual pursuit becomes personal. Multiple simultaneous context collapses create paradoxes on the level of pragmatics that would make certain kinds of communication impossible if they are not ignored. This can be awkward but I get a kind of perverse pleasure out of leaving analytic puzzles to whoever comes next.

I’m having a related but eerier intellectual encounter with an Internet luminary in some other work I’m doing. I’m writing software to analyze a mailing list used by many prominent activists and professionals. Among the emails are some written by the late Aaron Swartz. In the process of working on the software, I accepted a pull request from a Swiss programmer I had never met, which has the Python package html2text as a dependency. Who wrote the html2text package? Aaron Swartz. Understand: I never met the guy; I am trying to map out how on-line communication mediates the emergent structure of the sociotechnical ecosystem of software and the Internet, and I am obviously interested, reflexively, in how my own communication and software production fits into that larger graph. (Or multigraph? Or multihypergraph?) Power law distributions of connectivity on all dimensions make this particular situation not terribly surprising. But it’s just one of many strange loops.


by Sebastian Benthall at March 01, 2015 10:35 PM

February 23, 2015

Ph.D. student

Hirschman, Nigerian railroads, and poor open source user interfaces

Hirschman says he got the idea for Exit, Voice, and Loyalty when studying the failure of the Nigerian railroad system to improve quality despite the availability of trucking as a substitute for long-range shipping. Conventional wisdom among economists at the time was that the quality of a good would suffer when it was provisioned by a monopoly. But why would a business that faced healthy competition not undergo the management changes needed to improve quality?

Hirschman’s answer is that because the trucking option was so readily available as an alternative, there wasn’t a need for consumers to develop their capacity for voice. The railroads weren’t hearing the complaints about their service, they were just seeing a decline in use as their customers exited. Meanwhile, because it was a monopoly, loss in revenue wasn’t “of utmost gravity” to the railway managers either.

The upshot of this is that it’s only when customers are locked in that voice plays a critical role in the recuperation mechanism.

This is interesting for me because I’m interested in the role of lock-in in software development. In particular, one argument made in favor of open source software is that because it is not technology held by a single firm, users of the software are not locked in. Their switching costs are reduced, making the market more liquid and, in theory, more favorable.

You can contrast this with proprietary enterprise software, where vendor lock-in is a principal part of the business model, as this establishes the “installed base” and customer support armies are necessary for managing disgruntled customer voice. Or, in the case of social media such as Facebook, network effects create a kind of perceived consumer lock-in, and consumer voice gets articulated by everybody from Twitter activists to journalists to high-profile academics.

As much as it pains me to admit it, this is one good explanation for why the user interfaces of a lot of open source software projects are so bad, at least if you combine this mechanism with the idea that user-centered design is important for user interfaces. Open source projects generally make it easy to complain about the software. If they know what they are doing at all, they make it clear how to engage the developers as a user. There is a kind of rumor out there that open source developers are unfriendly towards users, and this is perhaps true when users are used to the kind of customer support that’s available on a product for which there is customer lock-in. It’s precisely this difference between exit culture and voice culture, driven by the fundamental economics of the industry, that creates this perception. Enterprise open source business models (I’m thinking about models like the Pentaho ‘beekeeper’) theoretically provide a corrective to this by being an intermediary between consumer voice and developer exit.

A testable hypothesis is whether and to what extent a software project’s responsiveness to tickets scales with the number of downstream dependent projects. In software development, technical architecture is a reasonable proxy for industrial organization. A widely used project has network effects that increase switching costs for its downstream users. How do exit and voice work in this context?
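One hedged way to operationalize this (a sketch only; the numbers and column names below are hypothetical, not findings): collect, per project, a responsiveness measure such as median time to first response on tickets and a count of downstream dependents, then test for a monotonic relationship.

    # Hypothetical sketch: does ticket responsiveness track downstream dependents?
    # All data below is made up for illustration.
    import pandas as pd
    from scipy.stats import spearmanr

    projects = pd.DataFrame({
        "project": ["a", "b", "c", "d", "e"],
        "downstream_dependents": [3, 12, 45, 200, 1100],
        "median_hours_to_first_response": [72.0, 40.0, 30.0, 12.0, 8.0],
    })

    # Spearman correlation tolerates heavy-tailed (power-law-ish) dependency counts.
    rho, p = spearmanr(projects["downstream_dependents"],
                       projects["median_hours_to_first_response"])
    print("Spearman rho = %.2f, p = %.3f" % (rho, p))

A negative correlation here would be consistent with the idea that lock-in (many dependents) comes with more voice-driven responsiveness, though real data would need to control for project size and age.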


by Sebastian Benthall at February 23, 2015 01:30 AM

February 21, 2015

Ph.D. student

The node.js fork — something new to think about

For Classics we are reading Albert Hirschman’s Exit, Voice, and Loyalty. Oddly, though normally I hear about ‘voice’ as an action from within an organization, the first few chapters of the book (including the introduction of the Voice concept itself) are preoccupied with elaborations on the neoclassical market mechanism. Not what I expected.

I’m looking for interesting research use cases for BigBang, which is about analyzing the sociotechnical dynamics of collaboration. I’m building it to better understand open source software development communities, primarily. This is because I want to create a harmonious sociotechnical superintelligence to take over the world.

For a while I’ve been interested in Hadoop’s interesting case of being one software project with two companies working together to build it. This is reminiscent (for me) of when we started GeoExt at OpenGeo and Camp2Camp. The economics of shared capital are fascinating and there are interesting questions about how human resources get organized in that sort of situation. In my experience, there becomes a tension between the needs of firms to differentiate their products and make good on their contracts and the needs of the developer community whose collective value is ultimately tied to the robustness of their technology.

Unfortunately, building out BigBang to integrate with various email, version control, and issue tracking backends is a lot of work, and there’s only one of me right now to build the infrastructure, do the research, and train new collaborators (who are starting to do some awesome work, so this is paying off). While integrating with Apache’s infrastructure would have been a smart first move, I instead chose to focus on Mailman archives and git repositories. Google Groups and whatever Apache is using for their email lists do not publish their archives in .mbox format, which is a pain for me. But luckily Google Takeout does export data from folks’ on-line inboxes in .mbox format. This is great for BigBang because it means we can investigate email data from any project for which we know an insider willing to share their records.
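As an aside, the ingest step itself is not much machinery. A minimal sketch of loading an .mbox export into a dataframe, using only the standard library and pandas (the path is a placeholder, and this is not BigBang’s actual ingest code):

    import mailbox
    from email.utils import parsedate_to_datetime
    import pandas as pd

    def mbox_to_dataframe(path):
        """Collect the headers of an .mbox archive into a pandas DataFrame."""
        rows = []
        for message in mailbox.mbox(path):
            try:
                date = parsedate_to_datetime(message["Date"])
            except (TypeError, ValueError):
                date = None  # missing or malformed Date header
            rows.append({
                "from": message["From"],
                "to": message["To"],
                "subject": message["Subject"],
                "date": date,
                "message_id": message["Message-ID"],
            })
        return pd.DataFrame(rows)

    # e.g. an archive exported via Google Takeout (placeholder path):
    # df = mbox_to_dataframe("takeout-inbox.mbox")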

Does a research ethics issue arise when you start working with email that is openly archived in a difficult format, then exported from somebody’s private email? Technically you get header information that wasn’t open before; perhaps it was ‘private’. But arguably this header information isn’t personal information. I think I’m still in the clear. Plus, IRB will be irrelevant when the robots take over.

All of this is a long way of getting around to talking about a new thing I’m wondering about: the Node.js fork. It’s interesting to think about open source software forks in light of Hirschman’s concepts of Exit and Voice, since so much of the activity of open source development is open, virtual communication. While you might at first think a software fork is definitely a kind of Exit, it sounds like IO.js was perhaps a friendly fork by somebody who just wanted to hack around. In theory, code can be shared between forks; in fact, this was the principle that GitHub’s forking system was founded on. So there are open questions (to me, who isn’t involved in the Node.js community at all and is just now beginning to wonder about it) about to what extent a fork is a real event in the history of the project, to what extent it’s mythological, and to what extent it’s a reification of something that was already implicit in the project’s sociotechnical structure. There are probably other great questions here as well.

A friend on the inside tells me all the action on this happened (is happening?) on the GitHub issue tracker, which is definitely data we want to get BigBang connected with. Blissfully, there appear to be well supported Python libraries for working with the GitHub API. I expect the first big hurdle we hit here will be rate limiting.
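For a flavor of what the rate limiting looks like (a generic sketch against GitHub’s public REST API, not code that exists in BigBang; the repository names are placeholders), each response carries headers that say how much request budget remains:

    import requests

    def fetch_issues(owner, repo, token=None):
        """Page through a repository's issues while watching the rate limit."""
        headers = {"Authorization": "token %s" % token} if token else {}
        url = "https://api.github.com/repos/%s/%s/issues" % (owner, repo)
        params = {"state": "all", "per_page": 100}
        issues = []
        while url:
            resp = requests.get(url, headers=headers, params=params)
            resp.raise_for_status()
            issues.extend(resp.json())
            if int(resp.headers.get("X-RateLimit-Remaining", "0")) == 0:
                print("Rate limit exhausted; resets at",
                      resp.headers.get("X-RateLimit-Reset"))
                break
            url = resp.links.get("next", {}).get("url")  # Link-header pagination
            params = None  # the 'next' URL already encodes the query string
        return issues

    # issues = fetch_issues("some-org", "some-repo")  # placeholder names

Unauthenticated requests get only 60 per hour, so any serious collection run needs a token and probably some sleeping between pages.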

Though we haven’t been able to make integration work yet, I’m still hoping there’s some way we can work with MetricsGrimoire. They’ve been a super inviting community so far. But our software stacks and architecture are just different enough, and the layers we’ve built so far thin enough, that it’s hard to see how to do the merge. A major difference is that while MetricsGrimoire tools are built to provide application interfaces around a MySQL data backend, since BigBang is foremost about scientific analysis our whole data pipeline is built to get things into Pandas dataframes. Both projects are in Python. This too is a weird microcosm of the larger sociotechnical ecosystem of software production, of which the “open” side is only one (important) part.


by Sebastian Benthall at February 21, 2015 11:15 PM

MIMS 2012

Behind the Design: Optimizely's Mobile Editor

On 11/18/14, Optimizely officially launched A/B testing for iOS apps. This was a big launch because our product had been in beta for months, but none of us felt proud to publicly launch it. To get us over the finish line, we focused our efforts on building out an MVPP — a Minimum Viable Product we’re Proud of (which I wrote about previously). A core part of the MVPP was redesigning our editing experience from scratch. In this post, I will walk you through the design process, show you the sketches and prototypes that led up to the final design, and share the lessons learned along the way, told from my perspective as the Lead Designer.

A video of the final product

Starting Point

To provide context, our product enables mobile app developers to run A/B tests in their app, without needing to write any code or resubmit to the App Store for approval. By connecting your app to our editor, you can select elements, like buttons and headlines, and change their properties, like colors and text. Our beta product was functional in this regard, but not particularly easy or delightful to use. The biggest problem was that we didn’t show you your app, so you had to select elements by searching through a list of your app’s views (a process akin to navigating your computer’s folder hierarchy to find a file). This made the product cumbersome to use, and not visually engaging (see screenshot below).

Screenshot of Optimizely's original iOS editor

Optimizely’s original iOS editor.

Designing the WYSIWYG Editor

To make this a product we’re proud to launch, it was obvious we’d need to build a What-You-See-Is-What-You-Get (WYSIWYG) editor. This means we’d show the app in the browser, and let users directly select and edit their app’s content. This method is more visually engaging, faster, and easier to use (especially for non-developers). We’ve had great success with web A/B testing because of our WYSIWYG editor, and we wanted to replicate that success on mobile.

This is an easy design decision to make, but hard to actually build. For this to work, it had to be performant and reliable. A slow or buggy implementation would have been frustrating and a step backwards. So we locked a product designer and two engineers in a room to brainstorm ideas and build functional prototypes together. By the end of the week, they had a prototype that cleared the technical hurdles and proved we could build a delightful editing experience. This was a great accomplishment, and a reminder that any challenge can be solved by giving a group of smart, talented individuals space to work on a seemingly intractable problem.

Creating the Conceptual Model

With the app front and center, I needed an interface for how users change the properties of elements (text, color, position, etc.). Additionally, there are two other major features the editor needs to expose: Live Variables and Code Blocks. Live Variables are native Objective-C variables that can be changed on the fly through Optimizely (such as the price of items). Code Blocks let users choose code paths to execute (for example, a checkout flow that has 2 steps instead of 3).

Before jumping into sketches or anything visual, I had to get organized. What are all the features I need to expose in the UI? What types of elements can users edit? What properties can they change? Which of those are useful for A/B tests? I wrote down all the functionality I could think of. Additionally, I needed to make sure the UI would accommodate new features to prevent having to redesign the editor 3 months down the line, so I wrote out potential future functionality alongside current functionality.

I took all this functionality and clustered it into separate groups. This helped me form a sound conceptual model on which to build the UI. A good model makes it easier for users to form an accurate mental model of the product, thus making it easier to use (and more extensible for future features). This exercise made it clear to me that there are variation-level features, like Code Blocks and Live Variables, that should be separate from element-level features that act on specific elements (like changing a button’s text). This seems like an obvious organizing principle in retrospect, but at the time it was a big shift in thinking.

After forming the conceptual model, I curated the element properties we let users edit. The beta product exposed every property we could find, with no thought as to whether or not we should let users edit it. Exposing more properties sounds better and makes our product more powerful, but it comes at the cost of ease of use. Plus, a lot of the properties we let people change don’t make sense for our use case of creating A/B tests, and don’t make sense to non-developers (e.g. “Autoresizing mask” isn’t understandable to non-technical folks, nor is it something that needs to be changed for an A/B test).

I was ruthless about cutting properties. I went through every single one and asked two questions: first, is this understandable to non-developers (my definition of “understandable” being would a person recognize it from common programs they use every day, like MS Office or Gmail); and second, why is this necessary for creating an A/B test? If I was unsure about an attribute, I defaulted to cutting it. My reasoning was that it’s easy to add features to a product, but hard to take them away. And if we’re missing any essential properties, we’ll hear about it from our customers and can add them back.

Screenshot of my Google Doc feature organization

My lo-fi Google Doc to organize features

Let the Sketching Begin!

With my thoughts organized, I finally started sketching a bunch of editor concepts (pictured below). I had two big questions to answer: after selecting an element, how does a user change its properties? And, how are variation-level features (such as Code Blocks) exposed? My top options were:

  • Use a context menu of options after selecting an element (like our web editor)
  • When an element is selected, pop up an inline property pane (ala Medium’s and Wordpress’s editors)
  • Have a toolbar of properties below the variation bar
  • Show the properties in a drawer next to the app

Picture of my toolbar sketch

A sketch of the toolbar concept

Picture of my inline formatting sketch

A messy sketch of inline formatting options (specifically text)

Picture of one of my drawer sketches

One of the many drawer sketches

Interactive Prototypes

Each approach had pros and cons, but organizing element properties in a drawer showed the most promise because it’s a common interaction paradigm, it fit easily into the editor, and was the most extensible for future features we might add. The other options were generally constraining and better suited to limited functionality (like simple text formatting).

Because I wanted to maximize space for showing the app, my original plan was to show variation-level features (e.g. Code Blocks; Live Variables) in the drawer when no element was selected, and then replace those with element-level features when an element was selected. Features at each level could be separated into their own panes (e.g. Code Blocks would have its own pane). Thus the drawer would be contextual, and all features would be in the same spot (though not at the same time). This left plenty of space for showing an app, and kept the editor uncluttered.

A sketch told me that layout-wise this plan was viable, but would it make sense to select an element one place, and edit its properties in another? Would it be jarring to see features come and go depending on whether an element was selected or not? How will you navigate between different panes in the drawer? To answer these questions, an interactive prototype was my best course of action (HTML/CSS/JS being my weapon of choice).

Screenshot of an early drawer prototype

An early drawer prototype. Pretend there’s an app in that big empty white space.

I prototyped dozens of versions of the drawer, and shopped them around to the team and fellow designers. Responses overall were very positive, but the main concern was that the tab buttons (“Text”, “Layout”, etc., in the image above) in the drawer wouldn’t scale. Once there were more than about 4, the text got really squeezed (especially in other languages), stunting our ability to add new features. One idea to alleviate this, suggested by another designer, was to use an accordion instead of tab buttons to reveal content. A long debate ensued about which approach was better. I felt the tab buttons were a more common approach (accordions are for static content, not interactive forms that users will frequently interact with), whereas he felt the accordion was more scalable because it allowed room for more panes and accommodated full text labels (see picture below).

Screenshot of the drawer with accordion prototype

Drawer with accordion prototype. Pretend that website is an iOS app.

To help break this tie, I built another prototype. After playing around with both for a while, and gathering feedback from various members of the team, I realized we were both wrong.

Hitting reset

After weeks of prototyping and zeroing in on a solution, I realized it was the wrong solution. And the attempt to fix it (accordions) was in fact an iteration of the original concept that didn’t actually address the real problem. I needed a new idea that would be superior to all previous ideas. So I hit reset and went back to the drawing board (literally). I reviewed my initial organizing work and all required functionality. Clearly delineating variation-level properties from element-level properties was a sound organizing principle, but the drawer was getting overloaded by having everything in it. So I explored ways of more cleanly separating variation-level properties from element-level properties.

After reviewing my feature groupings, I realized there aren’t a lot of element properties. They can all be placed in one panel without needing to navigate between them with tabs or accordions at all (one problem solved!).

The variation properties were the real issue, and had the majority of potential new features to account for. Two new thoughts became apparent as I reviewed these properties: first, variation-level changes are typically quick and infrequent; and second, variation-level changes don’t typically visually affect the app content. Realizing this, I hit upon an idea to have a second drawer that would slide out over the app, and go away after you made your change.

To see how this would feel to use, I made yet another interactive prototype. This new UI was clean, obviated the need for tab buttons or accordions, was quick and easy to interact with, and put all features just a click or two away. In short, this new design direction was a lot better, and everyone quickly agreed it made more sense than my previous approach.

Reflecting back on this, I realize I had made design decisions based on edge cases, rather than focusing on the 80% use case. Starting the design process over from first principles helped me see this much more clearly. I only wish I had caught it sooner!

Admitting this design was not the right solution, after a couple months of work, and after engineers already began building it, was difficult. The thought of going in front of everyone (engineers, managers, PMs, designers, etc.) and saying we needed to change direction was not something I was looking forward to. I was also worried about the amount of time it would take me to flesh out a completely new design. Not to mention that I needed to thoroughly vet it to make sure that it didn’t have any major drawbacks (I wouldn’t have another opportunity to start over).

Luckily, once I started fleshing out this new design, those fears mostly melted away. I could tell this new direction was stronger, which made me feel good about restarting, which made it easier to sell this idea to the whole team. I also learned that even though I was starting over from the beginning, I wasn’t starting with nothing. I had learned a lot from my previous iterations, which informed my decision making this second time through.

Build and Ship!

With a solid design direction finally in place, we were able to pour on the engineering resources to build out this new editor. Having put a lot of thought into both the UI and technical challenges before writing production code, we were able to rapidly build out the actual product, and ended up shipping a week ahead of our self-imposed deadline!

Screenshot of the finished mobile editor

The finished mobile editor

Lessons Learned

  • Create a clear conceptual model on which to build the UI. A UI that accurately represents the system’s conceptual model will make it easy for users to form a correct mental model of your product, thus making it easier to use. To create the system model, write down all the features, content, and use cases you need to design for before jumping into sketches or prototypes. Group them together and map out how they relate to each other. From this process, the conceptual model should become clear. Read more about mental models on UX Magazine.
  • Don’t be afraid to start over. It’s scary, and hard, and feels like you wasted a bunch of time, but the final design will come out better. And the time you spent on the earlier designs wasn’t wasted effort — it broadened your knowledge of both the problem and solution spaces, which will help you make better design decisions in your new designs.
  • Design for the core use case, not edge cases. Designing for edge cases can clutter a UI and get in the way of the core use case that people do 80% of the time. In the case of the drawer, it led to overloading it with functionality.
  • Any challenge can be solved by giving a group of smart, talented individuals space to work on seemingly intractable problems. We weren’t sure a WYSIWYG editor would be technically feasible, but we made a concerted effort to overcome the technical hurdles, and it paid off. I’ve experienced this time and time again, and this was yet another reminder of this lesson.

On 11/18/14, the team was proud to announce Optimizely’s mobile A/B testing product to the world. Week-over-week usage has been steadily rising, and customer feedback has been positive, with people saying the new editor is much easier and faster to use. This was a difficult product to design, for both technical and user experience reasons, but I had a great time doing it and learned a ton along the way. And this is only the beginning — we have a lot more work to do before we’re truly the best mobile A/B testing product on the planet.

by Jeff Zych at February 21, 2015 10:54 PM

February 20, 2015

Ph.D. alumna

Why I Joined Dove & Twitter to #SpeakBeautiful

I’ve been online long enough to see a lot of negativity. I wear a bracelet that reads “Don’t. Read. The. Comments.” (a gift from Molly Steenson) to remind myself that going down the path of negativity is not helpful to my soul or sanity. I grew up in a geeky environment, determined to prove that I could handle anything, to stomach the notion that “if you can’t stand the heat, get out of the kitchen.” My battle scars are part of who I am. But why does it have to be this way?

Over the last few years, as the internet went from being a geeky subculture to something that is truly mainstream, I started watching as young women used technology to demean themselves and each other. It has broken my heart over and over again. Women are hurting themselves in the process of hurting each other with their words. The answer isn’t to just ask everyone out there to develop a thick skin. A world of meanness and cruelty is destructive to all involved and we all need to push back at it, especially those of us who have the strength to stomach the heat.

I’m delighted and honored to partner with Dove and Twitter to change the conversation. In an effort to better understand what’s happening, Dove surveyed women and Twitter analyzed tweets. Even though only 9% of women surveyed admit to posting negative comments on social media, over 5 million negative tweets about beauty and body image were posted in 2014 alone and 4 out of 5 of those tweets appeared to come from women. Women know that negative comments are destructive to their self-esteem and to those around them and, yet, the women surveyed reported they are 50% more likely to say something negative than positive. What is happening here?

This weekend, we will watch celebrities parade down the red carpet wearing gorgeous gowns as they enter a theater to celebrate the pinnacle of film accomplishments. Yet, if history is any guide, the social media conversation around the Oscars will be filled with harsh commentary regarding celebrities’ beauty and self-loathing.

We live in a world in which self-critique and ugliness is not only accepted, but the norm. Especially for women. Yet, so many women are unable to see how what they say not only erodes their own self-worth, but harms others. Every time we tear someone down for what they’re wearing or how they’re acting – and every time that we talk badly about ourselves – we contribute to a culture of cruelty in which women are systemically disempowered. This has to change.

It’s high time that we all stop and reflect on what we’re saying and posting when we use our fingers to talk in public. It’s time to #SpeakBeautiful. Negative commentary has a domino effect. But so does positive commentary.

In an effort to change the norm, Dove and Twitter have come together to try to combat negativity with positive thoughts. Beyond this video, they are working together to identify negative tweets and reach out to women who might not realize the ramifications of what they say. Social media and self-esteem experts will offer advice in an effort to empower women to speak with more confidence, optimism, and kindness.

Will this solve the problem? No. But the modest goal of this campaign is to get more women to step back and reflect about what they’re saying. At the end of the day, it’s us who need to solve the problem. We need to all collectively make a conscious decision to stop the ugliness. We need to #SpeakBeautiful.

I am honored to be able to contribute to this effort and I invite you to do the same. Spend some time today and over the weekend thinking about the negativity you see around you on social media and push back against it. If your instinct is to critique, take a moment to say something positive. An effort to #SpeakBeautiful is both selfish and altruistic. You help others while helping yourself.

I know that I will spend the weekend thinking about my grandmother, a beautiful woman in her 90s who grew up being told that negative thoughts were thoughts against God. As a teenager, I couldn’t understand how she could stay positive no matter what happened around her but as I grow older, I’m in awe of her ability to find the beauty in everything. I’ve watched this sustain her into her old age. I only wish more people could find the nourishment of such positivity. So let’s all take a moment to #SpeakBeautiful, for ourselves and for those around us.

by zephoria at February 20, 2015 02:28 PM

February 17, 2015

Ph.D. student

data science and the university

This is by now a familiar line of thought but it has just now struck me with clarity I wanted to jot down.

  1. Code is law, so the full weight of human inquiry should be brought to bear on software system design.
  2. (1) has been understood by “hackers” for years but has only recently been accepted by academics.
  3. (2) is due to disciplinary restrictions within the academy.
  4. (3) is due to the incentive structure of the academy.
  5. Since there are incentive structures for software development that are not available for subjects whose primary research project is writing, the institutional conditions that are best able to support software work and academic writing work are different.
  6. Software is a more precise and efficacious way of communicating ideas than writing because its interpretation is guaranteed by programming language semantics.
  7. Because of (6), there is selective pressure to making software the lingua franca of scholarly work.
  8. (7) is inducing a cross-disciplinary paradigm shift in methods.
  9. (8) may induce a paradigm shift in theoretical content, or it may result in science whose contents are tailored to the efficient execution of adaptive systems. (This is not to say that such systems are necessarily atheoretic, just that they are subject to different epistemic considerations).
  10. Institutions are slow to change. That’s what makes them institutions.
  11. By (5), (7), and (9), the role of universities as the center of research is being threatened existentially.
  12. But by (1), the myriad intellectual threads currently housed in universities are necessary for software system design, or are at least potentially important.
  13. With (11) and (12), a priority is figuring out how to manage a transition to software-based scholarship without information loss.

by Sebastian Benthall at February 17, 2015 07:28 AM

a brief comment on feminist epistemology

One funny thing about having a blog is that I can tell when people are interested in particular posts through the site analytics. To my surprise, this post about Donna Haraway has been getting an increasing number of hits each month since I posted it. That is an indication that it has struck a chord, since steady exogenous growth like that is actually quite rare.

It is just possible that this means that people interested in feminist epistemology have been reading my blog lately. They probably have correctly guessed that I have not been the biggest fan of feminist epistemology because of concerns about bias.

But I’d like to take the opportunity to say that my friend Rachel McKinney has been recommending I read Elizabeth Anderson‘s stuff if I want to really get to know this body of theory. Since Rachel is an actual philosopher and I am an amateur who blogs about it on weekends, I respect her opinion on this a great deal.

So today I started reading through Anderson’s Stanford Encyclopedia of Philosophy article on Feminist Epistemology and I have to say I think it’s very good. I like her treatment of the situated knower. It’s also nice to learn that there are alternative feminist epistemologies to certain standpoint theories that I think are troublesome. In particular, it turns out that those standpoint theories are now considered by feminist philosophers to be from a brief period in the ’80s that they’ve moved past already! Now subaltern standpoints are considered privileged in terms of discovery more than privileged in terms of justification.

This position is certainly easier to reconcile with computational methods. For example, it’s in a sense just mathematically correct if you think about it in terms of information gain from a sample. This principle appears to have been rediscovered in a way recently by the equity-in-data-science people when they talk about potential classifier error.
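To make that concrete (this is my own toy illustration, not Anderson’s argument): under a simple Beta-Bernoulli model, the expected reduction in posterior entropy from one more observation is larger when the observation comes from a group we have sampled less.

    from scipy.stats import beta

    def expected_information_gain(a, b):
        """Expected drop in posterior (differential) entropy from one more
        Bernoulli observation, given a Beta(a, b) posterior on its parameter."""
        p = a / float(a + b)  # posterior predictive probability of success
        prior_entropy = beta(a, b).entropy()
        expected_posterior = (p * beta(a + 1, b).entropy()
                              + (1 - p) * beta(a, b + 1).entropy())
        return prior_entropy - expected_posterior

    print(expected_information_gain(50, 50))  # well-sampled group: small gain
    print(expected_information_gain(2, 2))    # sparsely-sampled group: larger gain

Both posteriors have the same mean, but the next sample from the under-observed group teaches you more; that is the “privileged in terms of discovery” reading in miniature.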

I’ve got some qualms about the articulation of this learning principle in the absence of a particular inquiry or decision problem because I think there’s still a subtle shift in the argumentation from logos to ethos embedded in there (I’ve been seeing things through the lens of Aristotelian rhetoric lately and it’s been surprisingly illuminating). I’m on the lookout for a concrete application of where this could apply in a technical domain, as opposed to as an articulation of a political affinity or anxiety in the language of algorithms. I’d be grateful for links in the comments.

Edit:

Wait, maybe I already built one. I am not sure if that really counts.


by Sebastian Benthall at February 17, 2015 05:19 AM

February 13, 2015

Ph.D. alumna

An Old Fogey’s Analysis of a Teenager’s View on Social Media

In the days that followed Andrew Watts’ “A Teenager’s View on Social Media written by an actual teen” post, dozens of people sent me a link. I found myself getting uncomfortable and angry at the folks who were pointing me to it. I feel the need to offer my perspective as someone who is not a teenager but who has thought about these issues extensively for years.

Almost all of them work in the tech industry and many of them are tech executives or venture capitalists. The general sentiment has been: “Look! Here’s an interesting kid who’s captured what kids these days are doing with social media!” Most don’t even ask for my interpretation, sending it to me as though it is gospel.

We’ve been down this path before. Andrew is not the first teen to speak as an “actual” teen and have his story picked up. Every few years, a (typically white male) teen with an interest in technology writes about technology among his peers on a popular tech platform and gets traction. Tons of conferences host teen panels, usually drawing on privileged teens in the community or related to the organizers. I’m not bothered by these teens’ comments; I’m bothered by the way they are interpreted and treated by the tech press and the digerati.

I’m a researcher. I’ve been studying American teens’ engagement with social media for over a decade. I wrote a book on the topic. I don’t speak on behalf of teens, but I do amplify their voices and try to make sense of the diversity of experiences teens have. I work hard to account for the biases in whose voices I have access to because I’m painfully aware that it’s hard to generalize about a population that’s roughly 16 million people strong. They are very diverse and, yet, journalists and entrepreneurs want to label them under one category and describe them as one thing.

Andrew is a very lucid writer and I completely trust his depiction of his peer group’s use of social media. He wrote a brilliant post about his life, his experiences, and his interpretations. His voice should be heard. And his candor is delightful to read. But his analysis cannot and should not be used to make claims about all teenagers. I don’t blame Andrew for this; I blame the readers — and especially tech elites and journalists — for their interpretation of Andrew’s post because they should know better by now. What he’s sharing is not indicative of all teens. More significantly, what he’s sharing reinforces existing biases in the tech industry and journalism that worry me tremendously.

His coverage of Twitter should raise a big red flag to anyone who has spent an iota of time paying attention to the news. Over the last six months, we’ve seen a phenomenal uptick in serious US-based activism by many youth in light of what took place in Ferguson. It’s hard to ignore Twitter’s role in this phenomenon, with hashtags like #blacklivesmatter and #IfTheyGunnedMeDown not only flowing from Twitter onto other social media platforms, but also getting serious coverage from major media. Andrew’s statement that “a lot of us simply do not understand the point of Twitter” should raise eyebrows, but it’s the rest of his description of Twitter that should serve as a stark reminder of Andrew’s position within the social media landscape.

Let me put this bluntly: teens’ use of social media is significantly shaped by race and class, geography and cultural background. Let me repeat that for emphasis.

Teens’ use of social media is significantly shaped by race and class, geography and cultural background.

The world of Twitter is many things and what journalists and tech elites see from Twitter is not even remotely similar to what many of the teens that I study see, especially black and brown urban youth. For starters, their Twitter feed doesn’t have links; this is often shocking to journalists and digerati whose entire stream is filled with URLs. But I’m also bothered by Andrew’s depiction of Twitter users as first and foremost doing so to “complain/express themselves.” While he offers other professional categorizations, it’s hard not to read this depiction in light of what I see in low-status communities and the ways that privileged folks interpret the types of expression that exist in these communities. When black and brown teens offer their perspective on the world using the language of their community, it is often derided as a complaint or dismissed as self-expression. I doubt that Andrew is trying to make an explicitly racist comment here, but I want to caution every reader out there that critiques of youth use of Twitter are often seen in a negative light because of the heavy use by low-status black and brown youth.

Andrew’s depiction of his peers’ use of social media is a depiction of a segment of the population, notably the segment most like those in the tech industry. In other words, what the tech elite are seeing and sharing is what people like them would’ve been doing with social media X years ago. It resonates. But it is not a full portrait of today’s youth. And its uptake and interpretation by journalists and the tech elite whitewashes teens’ practices in deeply problematic ways.

I’m not saying he’s wrong; I’m saying his story is incomplete and the incompleteness is important. His commentary on Facebook is probably the most generalizable, if we’re talking about urban and suburban American youth. Of course, his comments shouldn’t be shocking to anyone at this point (as Andrew himself points out). Somehow, though, declarations of Facebook’s lack of emotional weight with teens continues to be front page news. All that said, this does render invisible the cultural work of Facebook in rural areas and outside of the US.

Andrew is very visible about where he stands. He’s very clear about his passion for technology (and his love of blogging on Medium should be a big ole hint to anyone who missed his byline). He’s also a college student and talks about his peers as being obviously on a path to college. But as readers, let’s not forget that only about half of US 19-year-olds are in college. He talks about WhatsApp being interesting when you go abroad, but the practice of “going abroad” is itself privileged, with less than 1/3 of US citizens even holding passports. Furthermore, this renders invisible the ways in which many US-based youth use WhatsApp to communicate with family and friends who live outside of the US. Immigration isn’t part of his narrative.

I don’t for a second fault Andrew for not having a perspective beyond his peer group. But I do fault both the tech elite and journalists for not thinking critically through what he posted and presuming that a single person’s experience can speak on behalf of an entire generation. There’s a reason why researchers and organizations like Pew Research are doing the work that they do — they do so to make sure that we don’t forget about the populations that aren’t already in our networks. The fact that professionals prefer anecdotes from people like us over concerted efforts to understand a demographic as a whole is shameful. More importantly, it’s downright dangerous. It shapes what the tech industry builds and invests in, what gets promoted by journalists, and what gets legitimized by institutions of power. This is precisely why and how the tech industry is complicit in the increasing structural inequality that is plaguing our society.

This post was originally published to The Message at Medium on January 12, 2015

by zephoria at February 13, 2015 12:05 AM

February 12, 2015

Ph.D. student

scale and polemic

I love a good polemic but lately I have been disappointed by polemics as a genre because they generally don’t ground themselves on data at a suitable scale.

When people try to write about a social problem, they are likely to use potent examples as a rhetorical device. Their particular ideological framing of a situation will be illustrated by compelling stories that are easy to get emotional about. This is often considered to be the hallmark of A Good Presentation, or Good Writing. Somebody will say about some group X, “Group X is known for doing bad things. Here’s an example.”

There are some problems with this approach. If there are a lot of people in Group X, then there can be a lot of variance within that group. So providing just a couple examples really doesn’t tell you about the group as a whole. In fact, this is a great way to get a biased view of Group X.

There are consequences to this kind of rhetoric. Once there’s a narrative with a compelling example illustrating it, that spreads that way of framing things as an ideology. Then, because of the well-known problem of confirmation bias, people that have been exposed to that ideology will start to see more examples of that ideology everywhere.

Add to that stereotype threat and suddenly you’ve got an explanation for why so many political issues are polarized and terrible.

Collecting more data and providing statistical summaries of populations is a really useful remedy to this. While often less motivating than a really well told story of a person’s experience, it has the benefit of being more accurate in the sense of showing the diversity of perspectives there are about something.

Unfortunately, we like to hear stories so much that we will often only tell people about statistics on large populations if they show a clear trend one way or another. People that write polemics want to be able to say, “Group X has 20% more than Group Y in some way,” and talk about why. It’s not considered an interesting result if it turns out the data is just noise, that Group X and Group Y aren’t really that different.

We also aren’t good at hearing stories about how much variance there is in data. Maybe on average Group X has 20% more than Group Y in some way. But what if these distributions are bimodal? Or if one is more varied than the other? What does that mean, narratively?
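A toy numerical example of the problem (made-up numbers, nothing empirical): two groups can produce the tidy “Group X has 20% more” headline while having completely different shapes, which is exactly what the summary hides.

    import numpy as np

    rng = np.random.default_rng(0)

    # Group Y: one tight cluster around 100.
    group_y = rng.normal(loc=100, scale=5, size=10000)

    # Group X: a bimodal mixture whose average is 20% higher than Y's.
    group_x = np.concatenate([
        rng.normal(loc=80, scale=5, size=5000),
        rng.normal(loc=160, scale=5, size=5000),
    ])

    print(group_x.mean() / group_y.mean())  # ~1.2, i.e. "20% more"
    print(group_y.std(), group_x.std())     # ~5 vs ~40: very different stories

The mean comparison is true and yet narratively misleading: half of Group X sits well below the typical member of Group Y.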

It can be hard to construct narrations that are not about what can be easily experienced in one moment but rather are about the experiences of lots of people over lots of moments. The narrative form is very constraining because it doesn’t capture the reality of phenomena of great scale and complexity. Things of great scale and complexity can be beautiful but hard to talk about. Maybe talking about them is a waste of time, because that’s not a good way to understand them.


by Sebastian Benthall at February 12, 2015 09:38 PM

February 07, 2015

Ph.D. student

formalizing the cultural observer

I’m taking a brief break from Horkheimer because he is so depressing and because I believe the second half of Eclipse of Reason may include new ideas that will take energy to internalize.

In the meantime, I’ve rediscovered Soren Brier’s Cybersemiotics: Why Information Is Not Enough! (2008), which has remained faithfully on my desk for months.

Brier is concerned with the possibility of meaning generally, and attempts to synthesize the positions of Peirce (recall: philosophically disliked by Horkheimer as a pragmatist), Wittgenstein (who first was an advocate of the formalization of reason and language in his Tractatus, then turned dramatically against it in his Philosophical Investigations), second-order cyberneticists like Varela and Maturana, and the social theorist Niklas Luhmann.

Brier does not make any concessions to simplicity. Rather, his approach is to begin with the simplest theories of communication (Shannon) and show where each fails to account for a more complex form of interaction between more completely defined organisms. In this way, he reveals how each simpler form of communication is the core around which a more elaborate form of meaning-making is formed. He finally arrives at a picture of meaning-making that encompasses all of reality, including that which can be scientifically understood, but one that is necessarily incomplete and an open system. Meaning is all-pervading but never all-encompassing.

One element that makes meaning more complex than simple Shannon-esque communication is the role of the observer, who is maintained semiotically through an accomplishment of self-reference through time. This observer is a product of her own contingency. The language she uses is the result of nature, AND history, AND her own lived life. There is a specificity to her words and meanings that radiates outward as she communicates, meanings that interact in cybernetic exchange with the specific meanings of other speakers/observers. Language evolves in an ecology of meaning that can only poorly be reflected back upon the speaker.

What then can be said of the cultural observer, who carefully gathers meanings, distills them, and expresses new ones conclusively? She is a cybernetic captain, steering the world in one way or another, but only the world she perceives and conceives. Perhaps this is Haraway’s cyborg, existing in time and space through a self-referential loop, reinforced by stories told again and again: “I am this, I am this, I am this.” It is by clinging to this identity that the cyborg achieves the partiality glorified by Haraway. It is also this identity that positions her as an antagonist as she must daily fight the forces of entropy that would dissolve her personality.

Built on cybernetic foundations, does anything in principle prevent the formalization and implementation of Brier’s semiotic logic? What would a cultural observer be like that stands betwixt all cultures, looming like a spider on the webs of communication that wrap the earth at inconceivable scale? Without the same constraints of partiality of one human observer, belonging to one culture, what could such a robot scientist see? What meaning would they make for themselves or intend?

This is not simply an issue of the interpretability of the algorithms used by such a machine. More deeply, it is the problem that these machines do not speak for themselves. They have no self-reference or identity, and so do not participate in meaning-making except instrumentally as infrastructure. This cultural observer that is in the position to observe culture in the making without the limits of human partiality for now only serves to amplify signal or dampen noise. The design is incomplete.


by Sebastian Benthall at February 07, 2015 08:22 PM

February 05, 2015

Ph.D. student

Horkheimer and “The Revolt of Nature”

The third chapter of Horkheimer’s Eclipse of Reason (which by the way is apparently available here as a PDF) is titled “The Revolt of Nature”.

It opens with a reiteration of the Frankfurt School story: as reason gets formalized, society gets rationalized. “Rationalized” here is in the sense that goes back at least to Lukacs’s “Reification and the Consciousness of the Proletariat” in 1923. It refers to the process of being rendered predictable, and being treated as such. It’s this formalized reason that is a technique of prediction and predictability, but which is unable to furnish an objective ethics, that is the main subject of Horkheimer’s critique.

In “The Revolt of Nature”, Horkheimer claims that as more and more of society is rationalized, the more humanity needs to conform to the rationalizing system. This happens through the labor market. Predictable technology and working conditions such as the factory make workers more interchangeable in their jobs. Thus they are more “free” in a formal sense, but at the same time have less job security and so have to conform to economic forces that make them into means and not ends in themselves.

Recall that this is written in 1947, and Lukacs wrote in 1923. In recent years we’ve read a lot about the Sharing Economy and how it leads to less job security. This is an argument that is almost a century old.

As society and humanity in it conform more and more to rational, pragmatic demands on them, the element of man that is irrational, that is nature, is not eliminated. Horkheimer is implicitly Freudian. You don’t eradicate the natural impulses. You repress them. And what is repressed must revolt.

This view runs counter to some of the ideology of the American academic system that became more popular in the late 20th century. Many ideologues reject the idea of human nature at all, arguing that all human behavior can be attributed to socialization. This view is favored especially by certain extreme progressives, who have a post-Christian ideal of eradicating sin through media criticism and scientific intervention. Steven Pinker’s The Blank Slate is an interesting elaboration and rebuttal of this view. Pinker is hated by a lot of academics because (a) he writes very popular books and (b) he makes a persuasive case against the total mutability of human nature, which is something of a sacred cow to a lot of social scientists for some reason.

I’d argue that Horkheimer would agree with Pinker that there is such a thing as human nature, since he explicitly argues that repressed human nature will revolt against dominating rationalizing technology. But because rationalization is so powerful, the revolt of nature becomes part of the overall system. It helps sustain it. Horkheimer mentions “engineered” race riots. Today we might point to the provocation of bestial, villainous hate speech and its relationship to the gossip press. Or we might point to ISIS and the justification it provides for the military-industrial complex.

I don’t want to imply I endorse this framing 100%. It is just the continuation of Frankfurt School ideas to the present day. How they match up against reality is an empirical question. But it’s worth pointing out how many of these important tropes originated.


by Sebastian Benthall at February 05, 2015 06:56 AM

February 04, 2015

Ph.D. student

a new kind of scientism

Thinking it over, there are a number of problems with my last post. One was the claim that the scientism addressed by Horkheimer in 1947 is the same as the scientism of today.

Scientism is a pejorative term for the belief that science defines reality and/or is a solution to all problems. It’s not in common use now, but maybe it should be among the critical thinkers of today.

Frankfurt School thinkers like Horkheimer and Habermas used “scientism” to criticize the positivists, the 20th century philosophical school that sought to reduce all science and epistemology to formal empirical methods, and to reduce all phenomena, including social phenomena, to empirical science modeled on physics.

Lots of people find this idea offensive for one reason or another. I’d argue that it’s a lot like the idea that algorithms can capture all of social reality or perform the work of scientists. In some sense, “data science” is a contemporary positivism, and the use of “algorithms” to mediate social reality depends on a positivist epistemology.

I don’t know any computer scientists that believe in the omnipotence of algorithms. I did get an invitation to this event at UC Berkeley the other day, though:

This Saturday, at [redacted], we will celebrate the first 8 years of the [redacted].

Current students, recent grads from Berkeley and Stanford, and a group of entrepreneurs from Taiwan will get together with members of the Social Data Lab. Speakers include [redacted], former Palantir financial products lead and course assistant of the [redacted]. He will reflect on how data has been driving transforming innovation. There will be break-out sessions on sign flips, on predictions for 2020, and on why big data is the new religion, and what data scientists need to learn to become the new high priests. [emphasis mine]

I suppose you could call that scientistic rhetoric, though honestly it’s so preposterous I don’t know what to think.

Though I would recommend to the critical set the term “scientism”, I’m ambivalent about whether it’s appropriate to call the contemporary emphasis on algorithms scientistic for the following reason: it might be that ‘data science’ processes are better than the procedures developed for the advancement of physics in the mid-20th century because they stand on sixty years of foundational mathematical work with modeling cognition as an important aim. Recall that the AI research program didn’t start until Chomsky took down Skinner. Horkheimer quotes Dewey commenting that until naturalist researchers were able to use their methods to understand cognition, they wouldn’t be able to develop (this is my paraphrase:) a totalizing system. But the foundational mathematics of information theory, Bayesian statistics, etc. are robust enough or could be robust enough to simply be universally intersubjectively valid. That would mean data science would stand on transcendental not socially contingent grounds.

That would open up a whole host of problems that take us even further back than Horkheimer to early modern philosophers like Kant. I don’t want to go there right now. There’s still plenty to work with in Horkheimer, and in “Conflicting panaceas” he points to one of the critical problems, which is how to reconcile lived reality in its contingency with the formal requirements of positivist or, in the contemporary data scientific case, algorithmic epistemology.


by Sebastian Benthall at February 04, 2015 06:53 AM

MIMS 2012

Building an MVPP - A Minimum Viable Product we're Proud of

On November 18th, 2014, we publicly released Optimizely’s iOS editor. This was a big release for us because it marked the end of a months-long public beta in which we received a ton of customer feedback and built a lot of missing features. But before we launched, there was one problem the whole team rallied behind to fix: we weren’t proud of the product. To fix this issue, we went beyond a Minimum Viable Product (MVP) to an MVPP — the Minimum Viable Product we’re Proud of.

What follows is the story of how we pulled this off, what we learned along the way, and product development tips to help you ship great products, from the perspective of someone who just did it.

Finished iOS editor

The finished iOS editor.

Genesis of the MVPP

We released a public beta of Optimizely’s iOS editor in June 2014. At that time, the product wasn’t complete yet, but it was important for us to get real customer feedback to inform its growth and find bugs. So after months of incorporating user feedback, the beta product felt complete enough to publicly launch. There was just one problem: the entire team wasn’t proud of the product. It didn’t meet our quality bar; it felt like a bunch of features bolted together without a holistic vision. To fix this, we decided to overhaul the user experience, an ambiguous goal that could easily go on forever, never reaching a clear “done” state.

We did two things to be more directed in the overhaul. First, we committed to a deadline to prevent us from endlessly polishing the UI. Second, we took inspiration from the Lean Startup methodology and chose a set of features that made up a Minimum Viable Product (MVP). An MVP makes it clear that we’ll cut scope to make the deadline, but nothing about quality. So to make it explicit that we were focusing on quality and wanted the whole team to be proud of the final product, we added an extra “P” to MVP. And thus, the Minimum Viable Product we’re Proud of — our MVPP — was born.

Create the vision

Once we had agreed on a feature set for the MVPP, a fellow Product Designer and I locked ourselves in a war room for the better part of a week to flesh out the user experience. We mapped out user flows and created rough mockups that we could use to communicate our vision to the larger development team. Fortunately, we had some pre-existing usability test findings to inform our design decisions.

Sketches, mockups, and user flows from our war room.

These mockups were immensely helpful in planning the engineering and design work ahead. Instead of talking about ideas in the abstract, we had concrete features and visuals to point to. For example, everyone knew what we meant when we said “Improved Onboarding Flow.” With mockups in hand, communication between team members became much more concrete and people felt inspired to work hard to achieve our vision.

Put 6 weeks on the clock… and go!

We had 3 sprints (6 weeks) to complete the MVPP (most teams at Optimizely work in 2 week cycles called “sprints”). It was an aggressive timeline, but it felt achievable — exactly where a good deadline should be.

In the first sprint, the team made amazing progress: all the major pieces were built without any major re-scoping or redesigns. There were still bugs to fix, polish to apply, and edge cases to consider, but the big pieces core to our vision were in place.

That momentum carried over into the second sprint, which we spent fixing the biggest bugs, filling functional holes, and polishing the UI.

For the third and final sprint, we gave ourselves a new goal: ship a week early. We were already focused on launching the MVPP, but at this point we became laser focused. During daily standups, we looked at our JIRA board and asked, “If we were launching tomorrow, what would we work on today?”

We were ruthless about prioritizing tasks and moved a lot of items that were important, but not launch-critical, to the backlog.

During the first week of sprint 3, we also did end-to-end product walkthroughs after every standup to ensure the team was proud of the new iOS editor. We all got to experience the product from the customer’s perspective, and caught user experience bugs that were degrading the quality of our work. We also found and fixed a lot of functional bugs during this time. By the end of the week, everyone was proud of the final product and felt confident launching.

The adrenaline rush & benefit of an early release

On 11/10, we quietly released our MVPP to the world, a full week early! Not only did shipping early feel great, it also gave us breathing room to further polish the design and fix bugs, and gave the rest of the company time to prepare everything needed to launch the MVPP.

Product teams don’t launch products alone; it takes full collaboration among marketing, sales, and customer success to create the materials that promote the product, sell it, and enable our customers to use it. By the time the public announcement on 11/18 rolled around, the whole company was extremely proud of the final result.

Lessons learned

While writing this post and reflecting on the project as a whole, I identified a number of techniques that can help any team ensure a high-quality, on-time launch:

  • Add a “P” to “MVP” to make quality a launch requirement: Referring to the project as the “Minimum Viable Product we’re Proud of” made sure everyone on the team approached the product with quality in mind. Every project has trade-offs between the ship date, quality, and scope. It’s very hard to do all three. Realistically, you can do two. By calling our project an MVPP, we were explicit that quality would not be sacrificed.
  • Set a deadline: Having a deadline focused everyone’s efforts, preventing designers from endlessly polishing interfaces and developers from spinning their wheels imagining every possible edge case. Make it aggressive, yet realistic, to instill a sense of urgency in the team.
  • Focus on the smallest set of features that provide the largest customer impact: We were explicit about what features needed to be redesigned, and just as importantly, which were off limits. This prevented scope-creep, and increased the team’s focus.
  • Make mockups before starting development: This is well-known in the industry, but it’s worth repeating. Creating tangible user flows and mockups ahead of time keeps planning discussions on track, removes ambiguity, and quickly explains the product vision. It also inspires the team by rallying them to achieve a concrete goal.
  • Do daily product walkthroughs: Our product walkthroughs had two key benefits. First, numerous design and code bugs were discovered and fixed. And second, they ensured we lived up to the extra “P” in “MVPP.” Everyone had a place to verbally agree that they were proud of the final product and confident launching. Although these walkthroughs made our standups ~30 minutes longer, it was worth the cost.
  • Ask: “If we were shipping tomorrow, what would you work on today?”: When the launch date is approaching, asking this question separates the critical, pre-launch tasks from the post-launch tasks.

Lather, Rinse, and Repeat

By going beyond an MVP to a Minimum Viable Product we’re Proud of, we made quality a requirement for launching. And by using a deadline, we stayed focused only on the tasks that were absolutely critical to shipping. With a well-scoped vision, mockups, and a date not too far in the future, you too can rally teams to create product experiences they’re proud of. And then do it again.

by Jeff Zych at February 04, 2015 04:20 AM

February 01, 2015

Ph.D. student

“Conflicting panaceas”; decapitation and dogmatism in cultural studies counterpublics

I’m still reading through Horkheimer’s Eclipse of Reason. It is dense writing and slow going. I’m in the middle of the second chapter, “Conflicting Panaceas”.

This chapter recognizes and then critiques a variety of intellectual stances of his contemporaries. Whereas in the first chapter Horkheimer takes aim at pragmatism, in this he concerns himself with neo-Thomism and positivism.

Neo-Thomism? Yes, that’s right. Apparently in 1947 one of the major intellectual contenders was a school of thought based on adapting the metaphysics of Saint Thomas Aquinas to modern times. This school of thought was apparently notable enough that while Horkheimer is generally happy to call out the proponents of pragmatism and positivism by name and call them business-interest lapdogs, he chooses instead to address the neo-Thomists anonymously in a conciliatory footnote:

This important metaphysical school includes some of the most responsible historians and writers of our day. The critical remarks here bear exclusively on the trend by which independent philosophical thought is being superseded by dogmatism.

In a nutshell, Horkheimer’s criticism of neo-Thomism is that, since it tries and fails to repurpose old ontologies for the new world, it can’t fulfill its own ambitions as an intellectual system through rigor without losing the theological ambitions that motivate it: the identification of goodness, power, and eternal law. Since it can’t intellectually culminate, it becomes a “dogmatism” that can be coopted disingenuously by social forces.

This is, as I understand it, the essence of Horkheimer’s criticism of everything: that for any intellectual trend or project, unless the philosophical project is allowed to continue to completion within it, it will have its brains slurped out and become zombified by an instrumentalist capitalism that threatens to devolve into devastating world war. Hence, just as neo-Thomism becomes a dogmatism because it would refute itself if it allowed its logic to proceed to completion, so too does positivism become a dogmatism when it identifies the truth with disciplinarily enforced scientific methods. Since, as Horkheimer points out in 1947, these scientific methods are social processes, this dogmatic positivism is another zombie, prone to fads and politics rather than tracking truth.

I’ve been struggling over the past year or so with similar anxieties about what, from my vantage point, are the prevailing intellectual trends of 2014. Perversely, in my experience the new intellectual identities that emerged in the 20th century to expose scientific procedures as social processes (STS) and to establish rhetorics of resistance (cultural studies) have been similarly decapitated, recuperated, and rendered dogmatic [see 1 2 3].

Are these the hauntings of straw men? This is possible. Perhaps the intellectual currents I’ve witnessed are informal expressions, not serious intellectual work. But I think there is a deeper undercurrent which has turned up as I’ve worked on a paper resulting from this conversation about publics. It hinges on the interpretation of an influential article by Fraser in which she contests Habermas’s notion of the public sphere.

In my reading, Fraser more or less maintains the ideal of the public sphere as a place of legitimacy and reconciliation. For her it is notably inequitable, it is plural not singular, the boundaries of what is public and private are in constant negotiation, etc. But its function is roughly the same as it is for Habermas.

My growing suspicion is that this is not how Fraser is used by cultural studies today. This suspicion began when Fraser was introduced to me; upon reading her work, I did not find the objection implicit in the reference to her. It continued as I worked through a reviewer’s comments on a paper. It was recently confirmed while reading Chris Wisniewski’s “Digital Deliberation?” in Critical Review, vol. 25, no. 2, 2013. He writes well:

The cultural-studies scholars and critical theorists interested in diversifying participation through the Internet have made a turn away from this deliberative ideal. In an essay first published in 1990, the critical theorist Nancy Fraser (1999, 521) rejects the idealized model of bourgeois public sphere as defined by Habermas on the grounds that it is exclusionary by design. Because the bourgeois public sphere brackets hierarchies of gender, race, ethnicity, class, etc., Fraser argues, it benefits the interests of dominant groups by default through its elision of socially significant inequalities. Lacking the ability to participate in the dominant discourse, disadvantaged groups establish alternative “subaltern counterpublics”.

Since the ideal speech situation does not acknowledge the socially significant inequalities that generate these counterpublics, Fraser argues for a different goal: a model of participatory democracy in which intercultural communications across socially stratified groups occur in forums that do not elide differences but instead allow diverse multiple publics the opportunity to determine the concerns or good of the public as a whole through “discursive contestations.” Fraser approaches these subgroups as identity publics and argues that culture and political debate are essentially power struggles among self-interested subgroups. Fraser’s ideas are similar to those prevalent in cultural studies (see Wisniewski 2007 and 2010), a relatively young discipline in which her work has been influential.

Fraser’s theoretical model is inconsistent with studies of democratic voting behavior, which indicate that people tend to vote sociotropically, according to a perceived collective interest, and not in favor of their own perceived self-interest (e.g., Kinder and Kiewiet 1981). The argument that so-called “mass” culture excludes the interests of dominated groups in favor of the interests of the elites loses some of its valence if culture is not a site through which self-interested groups vie for their objective interests, but is rather a forum in which democratic citizens debate what constitutes, and the best way to achieve, the collective good. Diversification of discourse ceases to be an end in itself.

I think Wisniewski hits the nail on the head here, a nail I’d like to drive in farther. If culture is conceived of as consisting of the contests of self-interested identity groups, as this version of cultural studies does, then cultural studies will necessarily see itself as one of many self-interested identities. Cultural studies becomes, by its own logic, a counterpublic that exists primarily to advance its own interests.

But just like neo-Thomism, this positioning decapitates cultural studies by preventing it from intellectually confronting its own limitations. No identity can survive rigorous intellectual interrogation, because all identities are based on contingency, finitude, trauma. Cultural studies adopts and repurposes historical rhetorics of liberation much like the neo-Thomists adopted and repurposed the historical metaphysics of Christianity. The obsolescence of these rhetorics, like the obsolescence of Thomistic metaphysics, is what makes them dangerous. The rhetoric that maintains its own subordination as a condition of its own identity can never truly liberate; it can only antagonize. Unable to intellectually realize its own purpose, it becomes purposeless and hence coopted and recuperated like other dogmatisms. In particular, it feeds into “the politicization of absolutely everything,” in the language of Ezra Klein’s spot-on analysis of GamerGate. Cultural studies is a powerful ideology because it turns culture into a field of perpetual rivalry with all the distracting drama of reality television. In so doing, it undermines deeper intellectual penetration into the structural conditions of society.

If cultural studies is the neo-Thomism of today, a dogmatist religious revival of the profound theology of the civil rights movement, perhaps it’s the theocratic invocation of ‘algorithms’ that is the new scientism. I would have more to say about it if it weren’t so similar to the old scientism.


by Sebastian Benthall at February 01, 2015 08:08 PM