School of Information Blogs

May 20, 2015

Ph.D. student

resisting the power of organizations

“From the day of his birth, the individual is made to feel there is only one way of getting along in this world–that of giving up hope in his ultimate self-realization. This he can achieve solely by imitation. He continuously responds to what he perceives about him, not only consciously but with his whole being, emulating the traits and attitudes represented by all the collectivities that enmesh him–his play group, his classmates, his athletic team, and all the other groups that, as has been pointed out, enforce a more strict conformity, a more radical surrender through complete assimilation, than any father or teacher in the nineteenth century could impose. By echoing, repeating, imitating his surroundings, by adapting himself to all the powerful groups to which he eventually belongs, by transforming himself from a human being into a member of organizations, by sacrificing his potentialities for the sake of readiness and ability to conform to and gain influence in such organizations, he manages to survive. It is survival achieved by the oldest biological means necessary, mimicry.” – Horkheimer, “Rise and Decline of the Individual”, Eclipse of Reason, 1947

Returning to Horkheimer's Eclipse of Reason (1947) after studying Beniger's Control Revolution (1986) serves to deepen one's respect for Horkheimer.

The two writers are for the most part in agreement as to the facts. It is a testament to their significance and honesty as writers that they are not quibbling about the nature of reality but rather are reflecting seriously upon it. But whereas Beniger maintains a purely pragmatic, unideological perspective, Horkheimer (forty years earlier) correctly attributes this pragmatic perspective to the class of business managers to whom Beniger's work is directed.

Unlike more contemporary critiques, Horkheimer's position is not to dismiss this perspective as ideological. He is not working within the postmodern context that sees all knowledge as contestable because it is situated. Rather, he is working with the mid-20th-century acknowledgment that objectivity is power. This is a necessary step in the criticality of the Frankfurt School, which is concerned largely with the way (real) power shapes society and identity.

It would be inaccurate to say that Beniger celebrates the organization. His history traces the development of social organization as an evolving organism. Its expanding capacity for information processing is a result of the crisis of control unleashed by the integration of its energetic constituent components. Globalization (if we can extend Beniger's story to include globalization) is the progressive organization of organizations of organizations. It is interesting that this progression of organization is a strike against Wiener's prediction of the need for society to arm itself against entropy. This conundrum is one we will need to address in later work.

For now, it is notable that Horkheimer appears to be responding to just the same historical developments later articulated by Beniger. Only Horkheimer is writing not as a descriptive scientist but as a philosopher engaged in the process of human meaning-making. This positions him to discuss the rise and decline of the individual in the era of increasingly powerful organizations.

Horkheimer sees the individual as positioned at the nexus of many powerful organizations to which he must adapt through mimicry for the sake of survival. His authentic identity is accomplished only when alone, because submission to organizational norms is necessary for survival or for the accumulation of organizational power. In an era where the pragmatic ability to manipulate people, not spiritual ideals, qualifies one for organizational power, the submissive man represses his indignation and rage at this condition and becomes an automaton of the system.

Which system? All systems. Part of the brilliance of both Horkheimer and Beniger is their ability to generalize over many systems to see their common effect on their constituents.

I have not read Horkheimer's solution to the individual's problem of how to maintain his individuality despite the powerful organizations that demand mimicry of him. This is a pressing question when organizations are becoming ever more powerful by using the tools of data science. My own hypothesis, which is still in need of scientific validation, is that the solution lies in the intersecting agency implied by the complex topology of the organization of organizations.


by Sebastian Benthall at May 20, 2015 12:38 AM

May 17, 2015

Ph.D. student

software code as representation of knowledge

The reason why ubiquitous networked computing has changed how we represent knowledge is that the semantics of code are guaranteed by the mechanical implementations of its compilers.

This introduces a kind of discipline in the representation of knowledge as source code that is not present in natural language or even in formal mathematical notation, which must be interpreted by humans.
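To make that contrast concrete, here is a toy example of my own (not from the original post). The definition below has exactly one meaning, enforced by any conforming Python implementation; a sentence of English or a line of informal mathematical notation would have to be disambiguated by a human reader.

    def compound_interest(principal, rate, years):
        """Return `principal` compounded annually at `rate` for `years` years."""
        return principal * (1 + rate) ** years

    # The machine, not a human interpreter, guarantees this result; there is
    # no room for disagreement about what the source text means.
    assert round(compound_interest(100.0, 0.05, 10), 2) == 162.89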

Evolutionarily, humanity’s innate capacity for natural language is well established. Literacy, however, is a trained skill that involves years of education. As Derrida points out in Of Grammatology, the transition from the understanding of language as speech or breath to the understanding of knowledge as text was a very significant change in the history of knowledge.

We have not yet adjusted institutionally to a world where knowledge is represented as code. Most of the institutions that run the world–the legal system, universities, etc.–still run on the basis of written language.

But the new institutions that are adapting to represent knowledge as data and software code to process it are becoming more powerful than these older institutions.

This power comes from these new institutions' ability to assign the work of acting on their knowledge to computing machines that can work tirelessly and integrate well with operations. These new institutions can process more information, gathered from more sources, than the old institutions. They are organizationally more intelligent than the older organizations. Because of this intelligence, they can accrue more wealth and power.


by Sebastian Benthall at May 17, 2015 08:57 PM

May 14, 2015

Ph.D. student

data science is not positivist, it’s power

Naively, we might assume that contemporary ‘data science’ is a form of positivist or post-positivist science. The scientist gathers data and subsumes it under logical formulae–models with fitted parameters. Indeed this is the case when data science is applied to natural phenomena, such as stars or the human genome.

The question of what kind of science 'data science' is becomes much more complex when we start to look at its application to social phenomena. This includes its application to the management of industrial and commercial technology–the so-called "Internet of Things". (Technology in general, and especially technology as socially situated, is a social phenomenon.)

There are (at least) two reasons why data science in these social domains is not strictly positivist.

The first is that, according to McKinsey's Michael Chui, data science in the Internet of Things context is mainly about either real-time control or anomaly detection. Neither of these depends on the kind of nomothetic orientation that positivism requires. The former requires only an objective function over inputs to guide the steering of the dynamic system. The latter requires only the detection of deviation from historically observed patterns.
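As a minimal sketch of the second case (my own illustration, not Chui's formulation), consider a detector that flags readings far from a historical baseline. It learns no law-like generalization; it only registers deviation from what has been observed before:

    import statistics

    def detect_anomalies(history, observations, threshold=3.0):
        """Flag observations that deviate from historically observed patterns."""
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        return [x for x in observations if abs(x - mean) > threshold * stdev]

    sensor_history = [20.1, 19.8, 20.3, 20.0, 19.9, 20.2, 20.1, 19.7]
    print(detect_anomalies(sensor_history, [20.0, 27.5, 19.9]))  # -> [27.5]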

‘Data science’ applied in this context isn’t actually about the discovery of knowledge at all. It is not, strictly speaking, a science. Rather, it is a process through which the operations of existing technologies are related and improved by further technological interventions. Robust positivist engineering knowledge is applied to these cases. But however much the machines may ‘learn’, what they learn is not propositional.

Perhaps the best we can say is that 'data science' in this context is the science of techniques for making these kinds of interventions. As learning these techniques depends on mathematical rigor and empirical prototyping, we can perhaps say that 'pure' (as opposed to applied) data science, in this limited sense, is a positivist science.

But the second reason why data science is not positivist comes about as a result of its application. The problem is that when systems controlled by complex computational processes interact, the result is a more complex system. In adversarial cases, the interacting complex systems become the subject matter of cybersecurity research, to which data science is one application. But as soon as one starts to study phenomena that are aware of the observer and can act in ways that respond to the observer's presence, one gets out of positivist territory.

A better way to think about data science might be to think of it in terms of perception. In the visual system, data that comes in through the eye goes through many steps of preprocessing before it becomes the subject of attention. Visual representations feed into the control mechanisms of movement.

If we see data science not as a positivist attempt to discover natural laws, but rather as an extension of agency by expanding powers of perception and training skillful control, then we can get a picture of data science that’s consistent with theories of situated and embodied cognition.

These theories of situated and embodied cognition are perhaps the best contenders for what can displace the dominant paradigm as imagined by critics of cognitive science, economics, etc. Rather than rejecting the explanatory power of naturalistic theories of information processing, they extend naive theories to embrace the complexity of how an agent's cognition is situated in a body in time, space, and society.

If we start to think of 'data science' not as a kind of natural science but as the techniques and tools for extending the information processing that is involved in one's individual or collective agency, then we can start to think about data science as what it really is: power.


by Sebastian Benthall at May 14, 2015 06:14 AM

May 09, 2015

Ph.D. student

is science ideological?

In a previous post, I argued that Beniger is an unideological social scientist because he grounds his social scientific theory in robust theory from the natural and formal sciences, like theory of computation and mathematical biology. Astute commenter mg has questioned this assertion.

Does firm scientific grounding absolve a theoretical inquiry from ideology – what about the ideological framework that the science itself has grown in and is embedded in? Can we ascribe such neutrality to science?

This is a good question.

To answer it, it would be good to have a working definition of ideology. I really like one suggested by this passage from Habermas, which I have used elsewhere.

The concept of knowledge-constitutive human interests already conjoins the two elements whose relation still has to be explained: knowledge and interest. From everyday experience we know that ideas serve often enough to furnish our actions with justifying motives in place of the real ones. What is called rationalization at this level is called ideology at the level of collective action. In both cases the manifest content of statements is falsified by consciousness’ unreflected tie to interests, despite its illusion of autonomy. The discipline of trained thought thus correctly aims at excluding such interests. In all the sciences routines have been developed that guard against the subjectivity of opinion, and a new discipline, the sociology of knowledge, has emerged to counter the uncontrolled influence of interests on a deeper level, which derive less from the individual than from the objective situation of social groups.

If we were to extract a definition of ideology from this passage, it would be something like this. An ideology is:

  1. an expression of motives that serves to justify collective action by a social group
  2. …that is false because it is unreflective of the social group’s real interests.

I maintain that the theories that Beniger uses to frame his history of technology are unideological because they are not expressions of motives. They are descriptive claims whose validity has been tested thoroughly by multiple independent social groups with conflicting interests. It's this validity within and despite the contest of interests that gives scientific understanding its neutrality.

Related: Brookfield’s “Contesting Criticality: Epistemological and Practical Contradictions in Critical Reflection” (here), which I think is excellent, succinctly describes the intellectual history of criticality and how contemporary usage of it blends three distinct traditions:

  1. a Marxist view of ideology as the result of objectively true capitalistic social relations,
  2. a psychoanalytic view of ideology as a result of trauma or childhood,
  3. and a pragmatic/constructivist/postmodern view of all knowledge being situated.

Brookfield's point is that an unreflective combination of these three perspectives is incoherent both theoretically and practically. That's because while the first two schools of thought (which Habermas combines, above–later Frankfurt School writers deftly combined Marxism with psychoanalysis) both maintain an objectivist view of knowledge, the constructivists reject this in favor of a subjectivist view. Since discussion of "ideology" comes to us from the objectivist tradition, there is a contradiction in the view that all science is ideological. Calling something 'ideological' or 'hegemonic' requires that you take a stand on something, such as the possibility of an alternative social system.


by Sebastian Benthall at May 09, 2015 05:05 PM

May 08, 2015

Ph.D. student

Fascinated by Vijay Narayanan’s talk at #DataEDGE

As I write this I'm watching Vijay Narayanan, Director of Algorithms and Data Science Solutions at Microsoft, give a talk at the DataEDGE conference at UC Berkeley.

The talk is about "The Data Science Economy." It began with a history of the evolution of the human central nervous system. He then went on to show the centralizing trend of the data economy: data collection will become more mobile, while data processing will be done in the cloud. This data will be sifted by software and used to power a marketplace of services, which ultimately deliver intelligence to their users.

It was wonderful to see somebody so in the know reaffirming what has been a suspicion I’ve had since starting graduate school but have found little support for in the academic setting. The suspicion is that what’s needed to accurately model the data science economy is a synthesis of cognitive science and economics that can show the comparative market value and competitiveness of different services.

This is not out of the mainline of information technology, management science, computer science, and other associated disciplines that have been at the nexus of business and academia for 70 years. It's an intellectual tradition rooted in the 1940s cybernetics vision of Norbert Wiener that was going strong in the social sciences as late as Beniger's The Control Revolution, which, like Narayanan, draws an explicit connection between information processing in the brain and information processing in the microprocessor–notably while acknowledging the intermediary step of bureaucracy as a large-scale information processing system.

There's significant cross-pollination between engineering, economics, computer science, and cognitive psychology. I've read papers from, say, the Education field in the late 80's and early 90's that refer to this collectively as "the dominant paradigm". At UC Berkeley today, it's fascinating to see departmental politics play out over 'data science' that echo some of these concerns: a powerful alliance of ideas is being mobilized by industry and governments while other disciplines are struggling to find relevance.

It's possible that these specialized disciplinary discourses cultivate thought that is valuable for its insight despite being fundamentally impractical. I'm coming to a different view: that maybe the 'dominant paradigm' is dominant because it is scientifically true, and that other disciplinary orientations are suffering because they are based on unsound theory. If disciplines that are 'dominated' by another paradigm are floundering because they are, to put it simply, wrong, then that is a very elegant explanation for what's going on.

The ramification of this is that what's needed is not a number of alternatives to 'the dominant paradigm'. What's needed is for scholars to double down on the dominant paradigm and learn how to express in its logic the complexities and nuances that the other disciplines have been designed to capture. What we can hope for, in terms of intellectual continuity, is the preservation of what's best in older ideas in a creative synthesis with the foundational principles of computer science and mathematical biology.


by Sebastian Benthall at May 08, 2015 10:41 PM

Ph.D. alumna

Are We Training Our Students to be Robots?

Excited about the possibility that he would project his creativity onto paper, I handed my 1-year-old son a crayon. He tried to eat it. I held his hand to show him how to draw, and he broke the crayon in half. I went to open the door and when I came back, he had figured out how to scribble… all over the wooden floor.

Crayons are pretty magical and versatile technologies. They can be used as educational tools — or alternatively, as projectiles. And in the process of exploring their properties, children learn to make sense of both their physical affordances and the social norms that surround them. "No, you can't poke your brother's eye with that crayon!" is a common refrain in my house. Learning to draw — on paper and with some sense of meaning — has a lot to do with the context, a context that I help create, a context that is learned outside of the crayon itself.

From crayons to compasses, we’ve learned to incorporate all sorts of different tools into our lives and educational practices. Why, then, do computing and networked devices consistently stump us? Why do we imagine technology to be our educational savior, but also the demon undermining learning through distraction? Why are we so unable to see it as a tool whose value is most notably discovered situated in its context?

The arguments that Peg Tyre makes in “iPads < Teachers” are dead on. Personalized learning technologies won’t magically on their own solve our education crisis. The issues we are facing in education are social and political, reflective of our conflicting societal values. Our societal attitudes toward teachers are deeply destructive, a contemporary manifestation of historical attitudes towards women’s labor.

But rather than seeing learning as a process and valuing educators as an important part of a healthy society, we keep looking for easy ways out of our current predicament, solutions that don't involve respecting the hard work that goes into educating our young. In doing so, we glom onto technologies that will only exacerbate many existing issues of inequity and mistrust. What's at stake isn't the technology itself, but the future of learning.

[Image: An empty classroom at the Carpe Diem school in Indianapolis.]

Education shouldn't be just about reading, writing, and arithmetic. Students need to learn how to be a part of our society. And increasingly, that society is technologically mediated. As a result, excluding technology from the classroom makes little sense; it produces an unnecessary disconnect between school and contemporary life.

This forces us to consider two interwoven — and deeply political — societal goals of education: to create an informed citizenry and to develop the skills for a workforce.

With this in mind, there are different ways of interpreting the personalized learning agenda, which makes me feel simultaneously optimistic and outright terrified. If you take personalized learning to its logical positive extreme, technology will educate every student as efficiently as possible. This individual-centric agenda is very much rooted in American neoliberalism.

But what if there’s a darker story? What if we’re really training our students to be robots?

Let me go cynical for a moment. In the late 1800s, the goal of education in America was not particularly altruistic. Sure, there were reformers who imagined that a more educated populace would create an informed citizenry. But what made widespread education possible was that American business needed workers. Industrialization required a populace socialized into very particular frames of interaction and behavior. In other words, factories needed workers who could sit still.

Many of tomorrow's workers aren't going to be empowered creatives subscribed to the mantra of, "Do what you love!" Many will be slotted into systems of automation that are hybrid human and computer. Not in the sexy cyborg way, but in the ugly call center way. Like today's retail laborers who have to greet every potential customer with a smile, many humans in tomorrow's economy will do the unrewarding tasks that are too expensive for robots to replace. We're automating so many parts of our society that, to be employable, the majority of the workforce needs to be trained to engage with automated systems.

All of this raises one important question: who benefits, and who loses, from a technologically mediated world?

Education has long been held up as the solution to economic disparity (though some reports suggest that education doesn’t remedy inequity). While the rhetoric around personalized learning emphasizes the potential for addressing inequity, Tyre suggests that good teachers are key for personalized learning to work.

Not only are privileged students more likely to have great teachers, they are also more likely to have teachers who have been trained to use technology — and how to integrate it into the classroom’s pedagogy. If these technologies do indeed “enhance the teacher’s effect,” this does not bode well for low-status students, who are far less likely to have great teachers.

Technology also costs money. Increasingly, low-income schools are pouring large sums of money into new technologies in the hopes that those tools can fix the various problems that low-status students face. As a result, there’s less money for good teachers and other resources that schools need.

I wish I had a solution to our education woes, but I’ve been stumped time and again, mostly by the politics surrounding any possible intervention. Historically, education was the province of local schools making local decisions. Over the last 30 years, the federal government and corporations alike have worked to centralize education.

From textbooks to grading systems, large companies have standardized educational offerings, while making schools beholden to their design logic. This is how Texas values get baked into Minnesota classrooms. Simultaneously, over legitimate concern about the variation in students’ experiences, federal efforts have attempted to implement learning standards. They use funding as the stick for conformity, even as local politics and limited on-the-ground resources get in the way.

Personalized learning has the potential to introduce an entirely new factor into the education landscape: network effects. Even as ranking systems have compared schools to one another, we've never really had a system where one student's learning opportunities truly depend on another's. And yet, that's core to how personalized learning works. These systems don't evolve based on the individual, but based on what's learned about students writ large.

Personalized learning is, somewhat ironically, far more socialist than it may first appear. You can’t “personalize” technology without building models that are deeply dependent on others. In other words, it is all about creating networks of people in a hyper-individualized world. It’s a strange hybrid of neoliberal and socialist ideologies.

[Image: An instructor works with a student in the learning center at the Carpe Diem school in Indianapolis.]

Just as recommendation systems result in differentiated experiences online, creating dynamics where one person's view of the internet radically differs from another's, so too will personalized learning platforms.

More than anything, what personalized learning brings to the table for me is the stark reality that our society must start grappling with the ways we are both interconnected and differentiated. We are individuals and we are part of networks.

In the realm of education, we cannot and should not separate these two. By recognizing our interconnected nature, we might begin to fulfill the promises that technology can offer our students.

This post was originally published to Bright at Medium on April 7, 2015. Bright is made possible by funding from the New Venture Fund, and is supported by The Bill & Melinda Gates Foundation.

by zephoria at May 08, 2015 12:29 AM

April 28, 2015

Ph.D. student

I really like Beniger

I’ve been a fan of Castells for some time but reading Ampuja and Koivisto’s critique of him is driving home my new appreciation of Beniger‘s The Control Revolution (1986).

One reason why I like Beniger is that his book is an account of social history and its relationship with technology that is firmly grounded in empirically and formally validated scientific theory. That is, rather than using any political ideological framework as a baseline, Beniger grounds his analysis in an understanding of the algorithm based in Church and Turing, an understanding of biological evolution grounded in biology, and so on.

This allows him to extend ideas about programming and control from DNA to culture to bureaucracy to computers in a way that is straightforward and plausible. His goal is, admirably, to get people to see the changes that technology drives in society as a continuation of a long regular process rather than a reason to be upset or a transformation to hype up.

I think there is something fundamentally correct about this approach. I mean that with the full force of the word correct. I want to go so far as to argue that Beniger's (at least as of Chapter 3…) is an unideological theory of history and society, grounded in generalizable and universally valid scientific theory.

I would be interested to read a substantive critique of Beniger arguing otherwise. Does anybody know if one exists?


by Sebastian Benthall at April 28, 2015 06:43 AM

April 24, 2015

Ph.D. student

intersecting agencies and cybersecurity #RSAC

A recurring theme in my reading lately (such as Beniger's The Control Revolution, Horkheimer's Eclipse of Reason, and Norbert Wiener's work on cybernetics) is the problem of reconciling two ways of explaining how-things-came-to-be:

  • Natural selection. Here a number of autonomous, uncoordinated agents with some exogenously given variability encounter obstacles that limit their reproduction or survival. The fittest survive. Adaptation is due to random exploration at the level of the exogenous specification of the agent, if at all. In unconstrained cases, randomness rules and there is no logic to reality.
  • Purpose. Here there is a teleological explanation based on a goal some agent has “in mind”. The goal is coupled with a controlling mechanism that influences or steers outcomes towards that goal. Adaptation is part of the endogenous process of agency itself.

Reconciling these two kinds of description is not easy. A point Beniger makes is that differences between social theories in the 20th century can be read as differences in where one demarcates agents within a larger system.


This week at the RSA Conference, Amit Yoran, President of RSA, gave a keynote speech about the change in mindset of security professionals. Just the day before I had attended a talk on “Security Basics” to reacquaint myself with the field. In it, there was a lot of discussion of how a security professional needs to establish “the perimeter” of their organization’s network. In this framing, a network is like the nervous system of the macro-agent that is an organization. The security professional’s role is to preserve the integrity of the organization’s information systems. Even in this talk on “the basics”, the speaker acknowledged that a determined attacker will always get into your network because of the limitations of the affordances of defense, the economic incentives of attackers, and the constantly “evolving” nature of the technology. I was struck in particular by this speaker’s detachment from the arms race of cybersecurity. The goal-driven adversariality of the agents involved in cybersecurity was taken as a given; as a consequence, the system evolves through a process of natural selection. The role of the security professional is to adapt to an exogenously-given ecosystem of threats in a purposeful way.

Amit Yoran’s proposed escape from the “Dark Ages” of cybersecurity got away from this framing in at least one way. For Yoran, thinking about the perimeter is obsolete. Because the attacker will always be able to infiltrate, the emphasis must be on monitoring normal behavior within your organization–say, which resources are accessed and how often–and detecting deviance through pervasive surveillance and fast computing. Yoran’s vision replaces the “perimeter” with an all-seeing eye. The organization that one can protect is the organization that one can survey as if it was exogenously given, so that changes within it can be detected and audited.
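As a hypothetical sketch of this style of monitoring (the resource names and threshold below are invented for illustration; this is not RSA's method), one can baseline how often each resource is normally accessed and flag departures from that history:

    from collections import Counter

    def deviates(observed, baseline, factor=5.0):
        """Return resources accessed far more often than historically normal."""
        return [resource for resource, count in observed.items()
                if count > factor * max(baseline.get(resource, 0), 1)]

    baseline = Counter({"payroll-db": 4, "wiki": 120, "code-repo": 300})
    today = Counter({"payroll-db": 65, "wiki": 110})
    print(deviates(today, baseline))  # -> ['payroll-db']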

We can speculate about how an organization’s members will feel about such pervasive monitoring and auditing of activity. The interests of the individual members of a (sociotechnical) organization, the interests of the organization as a whole, and the interests of sub-organizations within an organization can be either in accord or in conflict. An “adversary” within an organization can be conceived of as an agent within a supervening organization that acts against the latter’s interests. Like a cancer.

But viewing organizations purely hierarchically like this leaves something out. Just as human beings are capable of more complex, high-dimensional, and conflicted motivations than any one of the organs or cells in our bodies, so too should we expect the interests of organizations to be wide and perhaps beyond the understanding of anyone within it. That includes the executives or the security professionals, which RSA Conference blogger Tony Kontzer suggests should be increasingly one and the same. (What security professional would disagree?)

What if the evolution of cybersecurity results in the evolution of a new kind of agency?

As we start to think of new strategies for information-sharing between cybersecurity-interested organizations, we have to consider how agents supervene on other agents in possibly surprising ways. An evolutionary mechanism may be a part of the very mechanism of purposive control used by a super-agent. For example, an executive might have two competing security teams and reward them separately. A nation might have an enormous ecosystem of security companies within its perimeter (…) that it plays off of each other to improve the robustness of its internal economy, providing for it the way kombucha drinkers foster their own vibrant ecosystem of gut fauna.

Still stranger, we might discover ways that purposive agents intersect at the neuronal level, like Siamese twins. Indeed, this is what happens when two companies share generic networking infrastructure. Such mereological complexity is sure to affect the incentives of everyone involved.

Here's the rub: every seam in the topology of agency, at every level of abstraction, is another potential vector of attack. If our understanding of the organizational agent becomes more complex as we abandon the idea of the organizational perimeter, that complexity provides new ways to infiltrate. Or, to put it in the Enlightened terms more aligned with Yoran's vision, the complexity of the system with its multitudinous and intersecting purposive agents will become harder and harder to watch for infiltrators.

If a security-driven agent is driven by its need to predict and audit activity within itself, then those agents will permit a level of complexity within themselves that is bounded by their own capacity to compute. This point was driven home clearly by Dana Wolf's excellent talk on Monday, "Security Enforcement (re)Explained". She outlined several ways that computationally difficult cybersecurity functions–such as anti-virus and firewall technology–are being moved to the Cloud, where the elasticity of compute resources theoretically makes it easier to cope with these resource demands. I'm left wondering: does the end-game of cybersecurity come down to the market dynamics of computational asymmetry?

This blog post has been written for research purposes associated with the Center for Long-Term Cybersecurity.


by Sebastian Benthall at April 24, 2015 12:13 AM

April 23, 2015

Ph.D. student

Beniger on anomie and technophobia

The School of Information Classics group has moved on to a new book: James Beniger’s 1986 The Control Revolution: Technological and Economic Origins of the Information Society. I’m just a few chapters in but already it is a lucid and compelling account of how the societal transformations due to information technology that are announced bewilderingly every decade are an extension of a process that began in the Industrial Revolution and just has not stopped.

It’s a dense book with a lot of interesting material in it. One early section discusses Durkheim’s ideas about the division of labor and its effect on society.

In a nutshell, the argument is that with industrialization, barriers to transportation and communication break down and local markets merge into national and global markets. This induces cycles of market disruption where, because producers and consumers cannot communicate directly, producers need to "trust to chance" by embracing a potentially limitless market. This creates an unregulated economy prone to crisis. This sounds a little like venture-capital-fueled Silicon Valley.

The consequence of greater specialization and division of labor is a greater need for communication between the specialized components of society. This is the problem of integration, and it affects both the material and the social. Specifically, the magnitude and complexity of material flows result in a sharpening division of labor. When properly integrated, the different 'organs' of society gain in social solidarity. But if communication between the organs is insufficient, then the result is a pathological breakdown of norms and sense of social purpose: anomie.

The state of anomie is impossible wherever solidary organs are sufficiently in contact or sufficiently prolonged. In effect, being contiguous, they are quickly warned, in each circumstance, of the need which they have of one another, and, consequently, they have a lively and continuous sentiment of their mutual dependence… But, on the contrary, if some opaque environment is interposed, then only stimuli of a certain intensity can be communicated from one organ to another. Relations, being rare, are not repeated enough to be determined; each time there ensues new groping. The lines of passage taken by the streams of movement cannot deepen because the streams themselves are too intermittent. If some rules do come to constitute them, they are, however, general and vague.

An interesting question is to what extent Beniger’s thinking about the control revolution extend to today and the future. An interesting sub-question is to what extent Durkheim’s thinking is relevant today or in the future. I’ll hazard a guess that’s informed partly by Adam Elkus’s interesting thoughts about pervasive information asymmetry.

An issue of increasing significance as communication technology improves is that the bottlenecks to communication become less technological and more about our limitations as human beings to sense, process, and emit information. These cognitive limitations are being overwhelmed by the technologically enabled access to information. Meanwhile, there is a division of labor between those that do the intellectually demanding work of creating and maintaining technology and those that do the intellectually demanding work of creating and maintaining cultural artifacts. As intellectual work demands the specialization of limited cognitive resources, this results in conflicts of professional identity due to anomie.

Long story short: Anomie is why academic politics are so bad. It’s also why conferences specializing in different intellectual functions can harbor a kind of latent animosity towards each other.


by Sebastian Benthall at April 23, 2015 09:26 PM

April 18, 2015

MIMS 2015

ICANN's DPML: Pervasively Distributed Trademark Enforcement

This post explores similarities between ICANN's new Domains Protected Marks List (DPML) process and Pervasively Distributed Copyright Enforcement (PDCE). PDCE was first described in a paper by Julie Cohen in 2006 [1], and while readers of this post would probably benefit from having read it, this is not a requirement for understanding this post.

Both DPML and PDCE operate on different types of intellectual property, but similarities exist in their intentions and consequences. We'll start by explaining ICANN's Trademark Clearinghouse and its associated Domain Protected Marks List (DPML) service. We then illustrate their implementation through a hypothetical situation. Finally, we point out three conceptual similarities between the two regimes.

Exposition

In 2005 ICANN started a policy development process to introduce new generic Top Level Domains (gTLDs) to the Domain Name System (DNS) hierarchy [2]. In 2013 the first of these domains went live [3]. As of this writing there are slightly over 500 new gTLDs in use [3]. There are roughly an additional 900 new gTLD applications still being processed by ICANN [3]. As part of the development of the new gTLD policy, ICANN also revisited its policy towards trademark protection, specifically its Uniform Dispute Resolution Policy (UDRP), which had been the sole means of resolving trademark disputes in DNS. The UDRP does not go away for the new gTLDs, but it does get augmented with two new tools for trademark protection, the Trademark Clearinghouse (TMCH) and the Uniform Rapid Suspension System (URS).

The Trademark Clearinghouse is a database of registered trademarks. It is not a trademark office, since each mark must already be registered at an actual trademark office somewhere in the world. Trademark holders can pay to have their mark registered at the TMCH. Currently it's $150 for a year, $435 for three years, and $725 for five years. Bulk discounts are also available [4]. In return, they gain access to services which help protect their trademark in DNS.

The first service is a sunrise service, which gives the TMCH user 30 days of priority access to register SLDs in any new gTLD. The second service is the notification service. There is a mandatory 90-day notification service which starts after the sunrise period ends. During this mandatory notification service, TMCH users receive notifications whenever a new SLD is registered under a new gTLD. At the end of the 90-day mandatory notification period, the TMCH user can elect whether or not to continue the service.

The above two services are offered by the Trademark Clearinghouse and ICANN itself. The final service, the Domain Protected Marks List (DPML), is optionally offered by new gTLD registries [5]. DPML allows TMCH users to defensively block DNS registrations using their trademark. Each registry has slightly different policies regarding DPML, but the general idea is the same. The point of DPML is to prevent registrations of TMCH-registered trademarks at participating new gTLD registries. It is not a notice-based service like the two services offered by ICANN. TMCH users must register with each new gTLD registry separately. However, since most new gTLD registries operate many gTLDs, registering at one registry protects the trademark holder across all of its gTLDs. Registering for DPML with Donuts, a registry for many new gTLDs, would afford protection among all of Donuts' new gTLDs [6].

In addition to direct naming conflicts, Donuts will also block registrations of SLDs which contain the TMCH trademark. According to their website, "..if the Domain Name Label [is] 'sample' .., a DPML Block may be applied for any of the following labels: 'sample', 'musicsample', 'samplesale', or 'thesampletest'" [7].
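A minimal sketch of that blocking rule might look like the following (the function and label set are my own illustration, not Donuts' actual implementation):

    def dpml_blocked(sld_label, protected_marks):
        """Return True if the requested second-level label contains a
        DPML-protected mark as a substring."""
        label = sld_label.lower()
        return any(mark in label for mark in protected_marks)

    marks = {"sample"}
    for label in ["sample", "musicsample", "samplesale", "thesampletest", "simple"]:
        print(label, dpml_blocked(label, marks))
    # Only "simple" escapes the block.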

It’s important to understand the distinction between the Trademark Clearinghouse and DPML. The TMCH is a database of verified trademarks. ICANN is responsible for providing the TMCH and verifying that the data in it is not garbage. The DPML is a service provided by DNS registries that makes use of this database. Most of our further discussion will center around the DPML since that is what constrains human action the most.

Hypothetical

Let's say there is a company called Mixahedron that makes drink mixing equipment in the shape of dodecahedrons. Mixahedron holds the trademark for the term 'Mixahedron' in the country where it is incorporated. They own mixahedron.com and use it for their main corporate Internet presence, but in the past they've had problems on other TLDs. When .info was launched, a cybersquatter registered mixahedron.info and sent phishing emails to their customers directing them to change their account information on mixahedron.info. They were able to gain control of mixahedron.info, but it cost time and money. Also, many of their customers were angry at them, and their brand lost credibility.

In fear of this happening again, they promptly became users of the Trademark Clearinghouse when it was launched. In addition, they paid both Donuts and Rightside, the only two registries currently offering DPML services, for a ten-year DPML service contract on 'Mixahedron'. Now when someone tries to register mixahedron.business, they get blocked. Also nice is that disgruntled customers cannot register mixahedron-sucks.wtf, i-hate-mixahedron.gripe, or mixahedron.fail. With thousands of new gTLDs coming into existence it would be impossible for Mixahedron to defensively register all derivatives of their mark. From their perspective this is simply great brand protection.

Another perspective on this hypothetical is from a customer of Mixahedron's named Mark. Mark had his left index finger ripped off by one of Mixahedron's professional mixers. After his recovery he started investigating Mixahedron and discovered other people had suffered similar fates with their devices. Mark decided to set up a forum website called mixahedron.surgery where the community of people injured by Mixahedron's mixers could share stories and plan actions. He thought the satirical name would help get the message out, and provide a bit of a kick start to his campaign. Unfortunately for Mark, his registrar GoDaddy.com refused his registration. He doesn't understand why, and doesn't know anything about the Trademark Clearinghouse or DPML. Mark instead registered a domain name unrelated to Mixahedron, and purchased Google AdWords for terms like 'Mixahedron pain' and 'Mixahedron defect'.

Analysis

The key similarity between Pervasively Distributed Copyright Enforcement (PDCE) and the DPML is the lack of recourse for the user at the time of constraint. In PDCE terms this takes the form of a user's inability to argue fair use with a DRM system. In the DPML, a user is unable to argue that consumers will not be confused by the use of a mark. The purpose of trademark law is to prevent confusion of genuine branded products with illegitimate, or fake, products. There is considerable legal precedent we call on when deciding whether the use of a trademark is infringing, or acceptable because of free speech protections. The DPML short-circuits this human decision making in favor of an immediate, unappealable constraint on action.

The trademark theory that the DPML regime comes closest to implementing is referred to as the 'initial interest confusion' theory. In the context of cybersquatting case precedent, initial interest confusion results when users visiting a website might mistake a so-called gripe site for an actual sponsored site of the trademark holder. It ignores any content on the site when evaluating whether a user might be confused by the use of the trademark. Trademark holders seeking to shut down gripe sites have invoked this theory, and sometimes succeeded.

In Lamparello v. Falwell, Christopher Lamparello registered fallwell.com and hosted a gripe site discrediting Jerry Falwell and his ministry. Falwell sued, but the court ruled in favor of Lamparello, finding in part that, "Applying the initial interest confusion theory to gripe sites like Lamparello's would enable the mark holder to insulate himself from criticism - or at least to minimize access to it. .. Rather, to determine whether a likelihood of confusion exists as to the source of a gripe site like that at issue in this case, a court must look not only to the allegedly infringing domain name, but also to the underlying content of the website." [8]

The DPML affords no appeals process to the user denied registration of a domain name, and it cannot evaluate the content of a website before it is created. Both PDCE and DPML override legitimate freedom of expression concerns. Copyright's doctrine of fair use can be seen as an outlet for free expression in a similar vein as limiting the scope of initial interest confusion in trademark law. Both PDCE and DPML effectively disable that outlet by default, then force the user to find a means of re-enabling it via the courts or, in the case of some DRM, technical subversion. The URS process is designed only to redress harm to trademark holders who have registered with the TMCH. There is no ICANN-sponsored procedure for individual users of registrars who have been denied a DNS registration based on DPML.

Another similarity between PDCE and the DPML is that they both depend on a state of permanent crisis. For PDCE this is the increasing ease with which the Internet and software have allowed copyright infringement to happen. For DPML this is the permanent threat of consumer confusion brought on by domain cybersquatting and phishing. Cybersquatters set up websites with DNS names similar to famous brand names and either attempt to sell the domain to the brand owner or attempt to trick users into visiting their site to harvest webpage impressions. Phishers trick unsuspecting users into visiting websites and then divulging sensitive information.

World Wide Web users need to know that when they visit an organization's website, they are visiting the official website of that organization, not the website of an imposter attempting to scam them. Years of web browsing have established an expectation that users perform this verification based largely on what appears in their web browser address bar, which, at least for the time being, usually only contains a DNS name. There may be other icons in the address bar purporting to authenticate the website, but most users don't understand these. Thus the problem falls to the DNS to provide a solution. DPML is an attempt to directly respond to the threat of both cybersquatting and phishing by 'cleaning up' the DNS.

The consequences of being a reaction to permanent crisis hold true for both PDCE and DPML. "Rather than normalizing those who remain on the 'right' side of the new boundaries, [PDCE] seeks to normalize a regime of universal, technologically-encoded constraint." [9] The ultimate goal of both PDCE and DPML is to become invisible and establish new normative behavior.

Our third similarity is that both PDCE and DPML are neither completely decentralized nor completely centralized systems of control. They depend on a network of actors. "The resulting [DPML] regime of crisis management is neither wholly centralized nor wholly decentralized; it relies, instead, on coordination of technologies and processes for authorizing information flows." This quote about PDCE could just as easily apply to DPML. DNS is decentralized, but DPML is not. The network of DPML centers on the very centralized TMCH, but from there becomes more decentralized as it branches out to registries, registrars, and eventually individual users.

We have explored three similarities between PDCE and DPML in this post, but there are likely more. The reason for pointing them out isn’t to show common thinking across two domains of intellectual property law. It is instead to highlight some genuine problems with the approach ICANN has taken in establishing the TMCH and DPML process. The TMCH and DPML are both very new, and we don’t know what the future holds. There could be court challenges to DPML or the TMCH. ICANN might even lose control of DNS policy regulation. We’ll have to wait and see.

  1. Julie Cohen, Pervasively Distributed Copyright Enforcement, Georgetown Law Journal, Vol. 95, 2006

  2. ICANN’s new gTLD Program

  3. ICANN's new gTLD Statistics

  4. Basic Fee Structure for the Trademark Clearinghouse

  5. In DNS lingo a registry contracts with ICANN to service a DNS TLD. Registrars contract with registries to offer second-level domains (SLDs) to the public. If you register example.com, you are contracting with a registrar for an SLD.

  6. Blocking Mechanisms for TMCH-clients

  7. Donuts DPML Overview

  8. Lamparello v. Falwell, 4th Cir. 2005, 420 F.3d 309

  9. Julie Cohen, Pervasively Distributed Copyright Enforcement, Georgetown Law Journal, Vol. 95, 2006, at page 28

ICANN's DPML:Pervasively Distributed Trademark Enforcement was originally published by Andrew McConachie at Metafarce on April 18, 2015.

by Andrew McConachie (andrewm@ischool.berkeley.edu) at April 18, 2015 07:00 AM

April 08, 2015

Ph.D. student

causal inference in networks is hard

I am trying to make statistically valid inferences about the mechanisms underlying observational networked data and it is really hard.

Here’s what I’m up against:

  • Even though my data set is a complete ecologically valid data set representing a lot of real human communication over time, it (tautologically) leaves out everything that it leaves out. I can’t even count all the latent variables.
  • The best methods for detecting causal mechanisms, the potential outcomes framework of the Rubin causal model, depend on the assumption that different members of the sample don't interfere with each other (the no-interference part of SUTVA). But I'm working with networked data. Everything interferes with everything else, at least indirectly. That's why it's a network.
  • Did I mention that I’m working with communications data? What’s interesting about human communication is that it’s not really generated at random at all. It’s very deliberately created by people acting more or less intelligently all the time. If the phenomenon I’m studying is not more complex than the models I’m using to study it, then there is something seriously wrong with the people I’m studying.

I think I can deal with the first point here by gracefully ignoring it. It may be true that any apparent causal effect in my data is spurious and due to a common latent cause upstream. It may be true that the variance in the data is largely due to exogenous factors. Fine. That's noise. I'm looking for a reliable endogenous signal. If there isn't one, that would suggest that my entire data set is epiphenomenal. But I know it's not. So there's got to be something there.

For the second point, there are apparently sophisticated methods for extending the potential outcomes framework to handle peer effects. These are gnarly, and though I figure I could work with them, I don't think they are going to be what I need, because I'm not really looking for a causal relationship in the sense of a statistical relationship between treatment and outcome. What I'm after in the first instance is not what might be called type causation. I'm rather trying to demonstrate cases of token causation, where causation is literally the transfer of information from one object to another. And then I'm trying to show regularity in this underlying kind of causation in a layer of abstraction over it.

The best angle I can come up with on this so far is to use emergent properties of the network like degree assortativity to sort through potential mathematically defined graph generation algorithms. These algorithms can act as alternative hypotheses, and the observed emergent properties can theoretically be used to compute the likelihood of the observed data given the generation methods. Then all I need is a prior over graph generation methods! It’s perfectly Bayesian! I wonder if it is at all feasible to execute on. I will try.
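A toy version of the idea (my own sketch, with invented parameters, and a crude frequency-of-match stand-in for a true likelihood) might look like this:

    import networkx as nx

    def assortativity_score(make_graph, observed_value, trials=50, tol=0.05):
        """Fraction of simulated graphs whose degree assortativity falls
        within `tol` of the observed value--a crude simulated likelihood."""
        hits = 0
        for seed in range(trials):
            g = make_graph(seed)
            if abs(nx.degree_assortativity_coefficient(g) - observed_value) < tol:
                hits += 1
        return hits / trials

    observed = -0.15  # e.g., degree assortativity measured on the real network
    candidates = {
        "preferential attachment": lambda s: nx.barabasi_albert_graph(200, 3, seed=s),
        "Erdos-Renyi": lambda s: nx.erdos_renyi_graph(200, 0.03, seed=s),
    }
    for name, make_graph in candidates.items():
        print(name, assortativity_score(make_graph, observed))
    # Weighting these scores by a prior over generators gives unnormalized posteriors.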

It's not 100% clear how you can take an algorithmically defined process and turn that into a hypothesis about causal mechanisms. Theoretically, as long as a causal network has computable conditional dependencies it can be represented by an algorithm. I believe that any algorithm (in the Church/Turing sense) can be represented as a causal network. Can this be done elegantly, so that the corresponding causal network represents something like what we'd expect from the scientific theory on the matter? This is unclear because, again, Pearl's causal networks are great at representing type causation but are less expressive of token causation among a large population of uniquely positioned, generatively produced stuff. Pearl is not good at modeling life, I think.

The strategic activity of the actors is a modeling challenge, but I think this is actually where there is substantive potential in this kind of research. If effective strategic actors work in ways that are observably different from naive actors, in some way that's measurable in aggregate behavior, that's a solid empirical result! I have some hypotheses around this that I think are worth checking. For example, the success of an open source community probably depends in part on whether members of the community act in ways that successfully bring new members in. Strategies that cultivate new members are going to look different from strategies that exclude newcomers or try to maintain a superior status. Based on some preliminary results, it looks like this difference between successful open source projects and most other social networks is observable in the data.


by Sebastian Benthall at April 08, 2015 07:03 PM

April 05, 2015

MIMS 2014

The ‘Frozen’ expert predicts the sequel

I saw this comic (by Lauren Weisenstein) at the Nib, and sent it to S, my niece’s mom. She thought it would be fun to ask A what she thinks the Frozen sequel would be like. A did not see the other ideas because S didn’t want to influence her thinking. A also does not yet know that a sequel is in the works. Once I saw what she came up with, I couldn’t resist illustrating it.

Frozen Sequel

What do you think will happen in the sequel to Frozen?

I drew this on Paper, my favorite app. However, they don't let people upload drawings and mess with color, so in the absence of a stylus, I was stuck with fingerpainting this in its entirety. I blame any smudges and inconsistencies on my fat fingers. I did the layout and captions in Photoshop. I would've loved to hand-write them, but there's no way my fat fingers would've stood up to THAT challenge (I tried!)

So, what do YOU think will happen in the sequel to Frozen?


by muchnessofd at April 05, 2015 08:06 PM

March 31, 2015

Ph.D. student

Innovation, automation, and inequality

What is the economic relationship between innovation, automation, and inequality?

This is a recurring topic in the discussion of technology and the economy. It comes up when people are worried about a new innovation (such as data science) that threatens their livelihood. It also comes up in discussions of inequality, such as in Piketty's Capital in the Twenty-First Century.

For technological pessimists, innovation implies automation, and automation suggests the transfer of surplus from many service providers to a technological monopolist providing a substitute service at greater scale (scale being one of the primary benefits of automation).

For Piketty, it's the spread of innovation, in the sense of the education of skilled labor, that is the primary force counteracting capitalism's tendency towards inequality and (he suggests) the implied instability. For all the importance Piketty places on this process, he treats it hardly at all in his book.

Whether or not you buy Piketty’s analysis, the preceding discussion indicates how innovation can cut both for and against inequality. When there is innovation in capital goods, this increases inequality. When there is innovation in a kind of skilled technique that can be broadly taught, that decreases inequality by increasing the value of labor relative to capital (which is generally much more concentrated than labor).

I’m a software engineer in the Bay Area, and I realize that it’s easy to overestimate the importance of software in the economy at large. This is apparently an easy mistake for other people to make as well. Matthew Rognlie, the economist who has been declared Piketty’s latest and greatest challenger, thinks that software is an important new form of capital and draws certain conclusions based on this.

I agree that software is an important form of capital–exactly how important I cannot yet say. One reason why software is an especially interesting kind of capital is that it exists ambiguously as both a capital good and as a skilled technique. While naively one can consider software as an artifact in isolation from its social environment, in the dynamic information economy a piece of software is only as good as the sociotechnical system in which it is embedded. Hence, its value depends both on its affordances as a capital good and its role as an extension of labor technique. It is perhaps easiest to see the latter aspect of software by considering it a form of extended cognition on the part of the software developer. The human capital required to understand, reproduce, and maintain the software is attained by, for example, studying its source code and documentation.

All software is a form of innovation. All software automates something. There has been a lot written about the potential effects of software on inequality through its function in decision-making (for example: Solon Barocas and Andrew D. Selbst, “Big Data’s Disparate Impact” (link)). Much less has been said about the effects of software on inequality through its effects on industrial organization and the labor market. After having my antennae up for this for a while, I’ve come to a conclusion about why: it’s because the intersection between those who are concerned about inequality in society and those who can identify well enough with software engineers and other skilled laborers is quite small. As a result there is not a ready audience for this kind of analysis.

However unreceptive society may be to it, I think it’s still worth making the point that we already have a very common and robust compromise in the technology industry that recognizes software’s dual role as a capital good and labor technique. This compromise is open source software. Open source software can exist both as an unalienated extension of its developer’s cognition and as a capital good playing a role in a production process. Human capital tied to the software is liquid between the software’s users. Surplus due to open software innovations goes first to the software users, then second to the ecosystem of developers who sell services around it. Contrast this with the proprietary case, where surplus goes mainly to a singular entity that owns and sells the software rights as a monopolist. The former case is vastly better if one considers societal equality a positive outcome.

This has straightforward policy implications. As an alternative to Piketty’s proposed tax on capital, any policies that encourage open source software are policies that combat societal inequality. This includes procurement policies, which need not increase government spending. On the contrary, if governments procure primarily open software, that should lead to savings over time as their investment leads to a more competitive market for services. Equivalently, R&D funding to open science institutions results in more income equality than equivalent funding provided to private companies.


by Sebastian Benthall at March 31, 2015 01:00 PM

March 29, 2015

Ph.D. student

going post-ideology

I’ve spent a lot of my intellectual life in the grips of ideology.

I’m glad to be getting past all of that. That’s one reason why I am so happy to be part of Glass Bead Labs.

Glass Bead Labs

There are a lot of people who believe that it’s impossible to get beyond ideology. They believe that all knowledge is political and nothing can be known with true clarity.

I’m excited to have an opportunity to try to prove them wrong.


by Sebastian Benthall at March 29, 2015 08:14 PM

March 22, 2015

MIMS 2012

Hiring Designers: Advice from Twitter, Uber, and GoPro

Google Ventures invited design leaders from Twitter, Uber, and GoPro to discuss the topic of hiring designers. What follows are my aggregated and summarized notes.

Finding Designers

Everyone agrees, finding designers is hard. They’re in high demand, and the best ones are never on the market for long (if at all). “If the job is good enough, everyone is available.” There are a few pieces of advice for finding them, though:

  • If you’re having trouble getting a full-time designer, start with contractors. If they’re good, you can try to woo them into joining full-time. Some designers like the freedom of contracting and don’t think they want to be full-time anywhere, but if you can show them how awesome your team and culture and product are, you can lure them over.
  • Look for people who are finishing up a big project, or have been at the same place for 2+ years. These people might be looking for a new challenge, and you can nab them before they’re officially on the market.
  • Dedicate hours each day to sourcing and recruiting. Work closely with your recruiters (if you have any) to train them on what to look for in portfolios and CVs. Include them in interview debriefs so they can understand what was good and bad about candidates, and tune who they reach out to accordingly. I.e. iterate on your hiring process. We’ve done this a lot at Optimizely.
    • Even better is to have dedicated design recruiter(s) who understand the market and designers.
    • If you have no recruiters, you could consider outsourcing recruiting to an agency.
  • When reaching out to designers, get creative. Use common connections, use info from their site or blog posts, follow people on Twitter, etc.
  • Typically you’ll have the highest chance for success if you, as the hiring manager, reach out, rather than a recruiter.

As a designer, this is what hiring managers will be looking for:

  • Have a high word-to-picture ratio. Product Design is all about communication, understanding the problem, solutions, and context. If you can’t clearly communicate that, you aren’t a good designer.
    • An exception is visual designers, who can get away with more visually-oriented portfolios.
  • What about your design is exceptional? Why should I care? Make sure to make this clear when writing about your work.
  • When looking at a portfolio, hiring managers will be wondering, “What’s the complexity of the problem being solved? Can they tell a story? Are they self critical? What would they do differently or what could be better?” Write about all of these things in your portfolio; don’t just have pictures of the final result.
    • An exception to the above is high demand designers, who don’t have time for a portfolio because they don’t need one to get work. Hiring these people is all based on reputation.
  • Don’t have spelling errors. Spelling errors are an automatic no-go. Designers need to be sticklers for details, and have “pride of ownership.”
    • One million percent agree

On Interviewing Designers

Pretty much everyone has a portfolio presentation, followed by 3–6 one-on-one interviews. Everyone must be a “Yes” for an offer to be made. (Optimizely is the same.)

Look for curiosity in designers. Designers should be motivated to learn, grow, read blogs/industry news, and use apps/products just to see what the UX and design is like. They should have a mental inventory of patterns and how they’re used.

In portfolio review, designers should NEVER play the victim. Don’t blame the PM, the organization, engineering, etc. (even if it’s true.) Don’t talk shit about the constraints. Design is all about constraints. Instead, talk about how you worked within those constraints (e.g. “there was limited budget, therefore…”)

On Design Exercises

People were pretty mixed about whether design exercises are useful during the interview process or not. Arguments against them include:

  • They can be ethically wrong if you’re having candidates do spec work for the company. You’re asking people to work for free, and you open yourself up to lawsuits.
    • I wholeheartedly agree
  • They don’t mimic the way people actually work. Designers aren’t usually at a board being forced to create UIs and make design decisions.
    • I disagree with this sentiment. A lot of work I do with our designers is at whiteboards. Decisions and final designs aren’t always being made, but we’re exploring ideas and thinking through our options. Doing this in an interview simulates what it’s like to work with someone, and how they approach design. It isn’t about the final whiteboarded designs, it’s about their process, questions they ask, solutions they propose, how they think about those solutions, etc. Plus, you get to experience what they’re like to interact with.
  • Take home exercises aren’t recommended. People are too busy for them, and senior candidates won’t do them.
    • The exception to this is junior designers who don’t have much of a portfolio yet so you can see how they actually design UIs
    • All of this has been true in my experience, as well.

Arguments for design exercises:

  • You get to see how candidates approach a problem and explore solutions
  • You get a sense of what it’s like to work with them
  • You hear them evaluate ideas, which tells you how self-critical they are and how well they know best practices

Personally, I find design exercises very useful. They tell me a lot about how a candidate thinks, and what they’re like to work with. The key is to find a good exercise that isn’t spec work. GV wrote a great article on this topic.

On Making a Hiring Decision

It’s easy when candidates are great or awful — the yes and no decisions are easy. The hard ones are when people are mixed. Typically this means you shouldn’t extend an offer, but there are reasons to give them a second chance:

  • They were nervous
  • English is their second language
  • They were stressed from interviewing

In these cases, try bringing the person back in a more relaxed environment; for example, have lunch or coffee together.

Some people have great work, but some sort of personality flaw (e.g. they don’t make eye contact with women). These people are a “no” — remember, “No assholes, no delicate geniuses”, and avoid drama at all costs.

When making an offer, you’ll sometimes have to sell them on the company, team, product, and challenges. One technique is to explain why they’ll be a great fit on the team (you’ll flatter them while simultaneously demonstrating the challenges they’ll face and impact they’ll have). If you have a big company and team, you can explain all the growth and learning opportunities a large team provides. And you don’t need to be small to move fast and make impactful decisions.

On Design Managers

Hiring design managers is hard. They’re hard to find, hard to attract, and most designers want to continue making cool shit rather than manage people. But if you’re searching for one, your best bet is to promote a senior designer to manager. They already understand the company, market, culture, and team, so they’re an easy fit. The art of management is often specific to the team and company.

If that isn’t an option, go through your network to find folks. You aren’t likely to have good luck from randos applying via the company website, or sourcing strangers.

Great managers are like great coaches — they’re ex-players who worked really hard to learn the game, and thus can teach it to others. Players that are naturally gifted, e.g. Michael Jordan, aren’t good coaches because they didn’t have to work hard to understand the game — it came naturally to them.

I feel like I fit this description. I worked hard to learn a lot of the skills that go into design. It took me a long time to feel comfortable calling myself a “designer”; it didn’t come naturally.

Management is a mix of creative direction, people management, and process. They should be able to partner with a senior designer to ship great product. Managers shouldn’t evaluate designers based on outcomes/impact. People can’t always control which project they’re on, some projects are cancelled, not all projects are equal, etc. Instead, reward behavior and process (e.g. “‘A’ for effort”.)

There are 4 things to look for in good managers:

  • They Get Shit Done
  • They improve the team, e.g. via recruiting, events, coaching/mentoring
  • They have, or can build, good relationships in the organization
  • They have hard design skills, empathy, and vision

On Generalists vs Specialists and Team Formation

The consensus is to hire 80/20 designers, i.e. generalists who have deep skills in one area (e.g. visual design, UX, etc.). They help teams move faster, and can work with specialists (e.g. content strategists) to ship high quality products quickly. Good ones will know what they don’t know, and seek help when they need it (e.g. getting input from visual designers if that isn’t their strength). “No assholes, no delicate geniuses”. Avoid drama at all costs.

This is the type of person we seek to hire as well. I’ve also seen firsthand that good designers are self-aware enough to know what their weaknesses are, and to seek help when necessary.

Cross-functional teams should be as small as possible while covering the breadth of skills needed to ship features. More people means more complexity and extra communication overhead. (I have certainly seen this mistake made at Optimizely.)

Having designers on a separate team (e.g. Comm/marketing designers on marketing) makes for sad designers. They become isolated, disgruntled, and unhappy. Ideally, they shouldn’t be on marketing. If they are separate, make bridges for the teams to communicate. Include them in larger design team meetings and crits and stuff so they feel included.

I totally agree. At Optimizely, we fought hard to keep our Communication Designers on the Design team for all the reasons listed here (Marketing wanted to hire their own designers). Our Marketing department ended up hiring their own developers to build and maintain our website, but earlier this year they moved over to the Design team so they could be closer to other developers and the Communication Designers working on the website. So far, they’re much happier on Design.

Should designers code?

People were somewhat mixed on this question. It was mostly agreed that it’s probably not a good use of their time, but it’s always a trade-off depending on what a specific team needs to launch high quality product. A potential danger is that they may only design what’s easy to code, or what they know they can build. That is, it’s a conflict of interest that leads to them artificially limiting themselves and the design.

As a designer who codes, I only partially agree with what was said here. It’s true that you can fall into the trap of designing what’s easy to build, but it doesn’t have to be that way. I overcame this by focusing on explicitly splitting out the ideation/exploration phase from the evaluation/convergence phase (something that good designers should be doing anyway). When designing, I explore as many ideas as I can without thinking at all about implementation, then I evaluate which idea(s) are best. One of those criteria (among many) is implementation cost and whether it used existing UI components we’ve already built. I’ve found this to be effective at not limiting myself to only what I know is easy to build, but it took a lot of work to compartmentalize my thinking this way.

Artificially constraining the solution space is also a trap any designer can fall into, regardless of whether or not you know how to code. I’ve heard designers object to ideas with, “But that will be hard to build!”, or, “This idea re-uses an existing frontend component!” Whenever I hear that, I always tell them that they’re in the ideation phase, and they shouldn’t limit their thinking. Any idea is a good idea at this point. Once you’ve explored enough ideas, then you can start evaluating them and thinking about implementation costs. And if you have a great idea that’s hard to implement, you can argue for why it’s worth building.

Design-to-Engineering Ratio

It depends on the work, and what the frontend or implementation challenges are. For example, apps with lots of complex interactions will need more engineers to build. A common ratio is about 1:10.

More important than the specific ratio is to not form teams without a designer. Those teams get into bad habits, won’t ship quality product, and will dig a hole of design debt that a future designer will have to climb out of. (I’ve been through this, and it takes a lot of time and effort to correct broken processes of teams that lack design resources).

One way of knowing if you don’t have enough designers is if engineering complains about design being a bottleneck, although this is typically a lagging indicator. A great response to this was that the phrase “Blocked on design” is terrible. Design is a necessary creative endeavor! Why don’t we say that engineering is blocking product from being released? (In fact, for the first time ever, we have been saying this at Optimizely, since we need more engineers to implement some finished designs. Interested in joining the Engineering team at Optimizely? Drop me a line @jlzych).

Another good quote: “There’s nothing more dangerous than an idle designer.” An idle designer can go off the deep end redesigning things, and eventually get frustrated when their work isn’t getting used. So there should always be a bit more work than available people to do it. True dat.


This was a great event with fun speakers, good attendees, and excellent advice. The most interesting discussion topic for me was on design managers, since we’re actively searching for a manager now (let me know if you’re interested!) Overall, Optimizely’s hiring practices are in line with the best practices recommended here, so it’s nice to know we’re in good company.

by Jeff Zych at March 22, 2015 08:57 PM

March 16, 2015

Ph.D. student

correcting an error in my analysis

There is an error in my last post where I was thinking through the interpretation of the 25,000,000 hit number reported for the Buzzfeed blue/black/white/whatever dress post. In that post I assumed that the distribution of viewers would be the standard one you see in on-line participation: a power law distribution with a long tail. Depending on which way you hold the diagram, the “tail” is either the enormous number of instances that occur only once (in this case, a visitor who goes to the page once and never again) or it’s the population of instances with bizarrely high occurrences (like that one guy who hit refresh on the page 100 times, and the woman who looked at the page 300 times, and…). You can turn one tail into the other by turning the histogram sideways and shaking really hard.

The problem with this analysis is that it ignores the data I’ve been getting from a significant subset of people who I’ve talked to about this in passing, which is that because the page contains some sort of well-crafted optical illusion, lots of people have looked at it once (and seen it as, say, a blue and black dress) and then looked at it again, seeing it as white and gold. In fact the article seems designed to get the reader to do just this.

If I’m being somewhat abstract in my analysis, it’s because I’ve refused to go click on the link myself. I have read too much Adorno. I hear the drumbeat of fascism in all popular culture. I do not want to take part in intelligently designed collective effervescence if I can help it. This is my idiosyncrasy.

But this inferred stickiness of the dress image has consequences for the traffic analysis. I’m sure that whoever is actually looking at the metrics on the article is tracking repeat versus unique visitors. I wonder how deliberately the image was crafted with the idea of maximizing repeat visitation in mind, and what the observed correlation between repeat and unique visitors is. Repeated visits suggest sustained interest over time, whereas “mere” virality is a momentary spread of information over space. If you see content as a kind of property and sustained traffic over time as the value of that property, it makes sense to try to create things with staying power. Memetic globules forever gunking the crisscrossed manifold of attention. Culture.

Does this require a different statistical distribution to process properly? Is Cosma Shalizi right after all, and are these “power law” distributions just overhyped log-normal distributions? What happens when the generative process has a stickiness term? Is that just reflected in the power law distribution’s exponent? One day I will get a grip on this. Maybe I can do it working with mailing list data.
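As a toy illustration (my own construction, not Shalizi’s analysis), suppose each visitor returns to the page with some probability p, so per-visitor view counts are geometric, and let p vary across the population. The sketch below compares upper quantiles of that “sticky” process against a log-normal with matched log-moments; which family fits real traffic better is exactly the empirical question.

    import numpy as np

    np.random.seed(0)
    n = 100000

    # Heterogeneous stickiness: each visitor returns with probability p.
    p = np.random.beta(1, 3, size=n)     # most visitors are not very sticky
    visits = np.random.geometric(1 - p)  # views per visitor

    # A log-normal with matched log-moments, for comparison.
    log_visits = np.log(visits)
    lognormal = np.random.lognormal(log_visits.mean(), log_visits.std(), size=n)

    for q in (50, 90, 99, 99.9):
        print("q=%5.1f  sticky: %8.1f   lognormal: %8.1f"
              % (q, np.percentile(visits, q), np.percentile(lognormal, q)))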

I’m writing this because over the weekend I was talking with a linguist and a philosopher about collective attention, a subject of great interest to me. It was the linguist who reported having looked at the dress twice and seeing it in different colors. The philosopher had not seen it. The latter’s research specialty was philosophy of mind, a kind of philosophy I care about a lot. I asked him whether in cases of collective attention the mental representation supervenes reductively on many individual minds or on something more than that. He said that this is a matter of current debate, but that he wants to argue that collective attention means more than my awareness of X, and my awareness of your awareness of X, ad infinitum. Ultimately I’m a mathematical person, and I am happy to see the limit of the infinite process as a thing in itself, with its relationship to what it reduces to mediated by the logic of infinitesimals. But perhaps even this is not enough. I recommended to the philosopher Soren Brier and Ulanowicz, who together I think provide the groundwork needed for an ontology of macroorganic mentality and representation. The operationalization of these theories is the goal of my work at Glass Bead Labs.


by Sebastian Benthall at March 16, 2015 08:22 PM

March 15, 2015

MIMS 2014

Moodboards = Design + Branding

So you’re developing this hot new website/app, and you’ve decided it’s time to convert those wireframes into a visual design. You have two choices – go with a design and color scheme that ‘feels’ right, or link it back to what you think your brand stands for. This is where moodboards come in. I am a huge fan of moodboards as a way to link design and marketing. Here’s how they work, using the example of how my team used this technique with Wordcraft to create an integrated visual design.

Wordcraft is an app that lets kids develop their understanding of language by creating sentences and seeing immediate visual feedback. Our vision was to create an app that helped kids learn, as they had fun exploring different sentence combinations.

We started out by creating post-its with words that we felt best described the brand identity. Once we had a board full of words, we used the affinity diagramming method of combining them into themes and came up with our theme words – Vibrant, Discovery, Playful and Clear.

Now comes the fun part – finding images that are synonymous with these words. You could do this exercise by cutting out pictures from magazines, the internet, or whatever else catches your interest. We chose to use a Pinterest board to tag the images that we felt were the most descriptive. Here again, each team member picked images individually, which also helped us talk about what the words meant to each of us. This is a good way to bring the team together in a shared understanding of what you want your brand to symbolize.

Each of us then picked our top images for the theme words. The exercise of talking about the images, and what they meant and how we saw them connect to our brand vision meant that there was a fair bit of overlap in these top images. Once we had the final moodboard ready, we used Adobe Kuler to distill colors from these images and create our brand colors. Ta-dah! In 1.5 hours, we had colors that were closest to what our team felt our brand represented. We used these across all our work – the app, the project website, our logo.

Wordcraft Moodboard

You can try this process out on any new app/website and see how it works for you. Personally, I love how it helps to bring a process to what could otherwise disintegrate into a very subjective conversation of, “I think our buttons should be blue, because my child likes blue.”

If you do try this out, let me know what you think!

Note: I put up a version of this post on Medium, as an experiment.


by muchnessofd at March 15, 2015 09:11 PM

March 08, 2015

Ph.D. student

25,000,000 re: @ftrain

It was gratifying to read Paul Ford’s reluctant think piece about the recent dress meme epidemic.

The most interesting fact in the article was that Buzzfeed’s dress article has gotten 25 million views:

People are also keenly aware that BuzzFeed garnered 25 million views (and climbing) for its article about the dress. Twenty-five million is a very, very serious number of visitors in a day — the sort of traffic that just about any global media property would kill for (while social media is like, ho hum).

I’ve recently become interested in the question: how important is the Internet, really? Those of us who work closely with it every day see it as central to our lives. Logically, we would tend to extrapolate and think that it is central to everybody’s life. If we are used to sampling from others’ experience using social media, we would see that social media is very important in everybody’s life, confirming this suspicion.

This is obviously a kind of sampling bias though.

This is where the 25,000,000 figure comes in handy. My experience of the dress meme was that it was completely ubiquitous. Literally nobody I was following on Twitter who was tweeting that day was not at least referencing the dress. The meme also got to me via an email backchannel, and came up in a seminar. Perhaps you had a similar experience: you and everyone you knew was aware of this meme.

Let’s assume that 25 million is an indicator of the order of magnitude of people that learned about this meme. If you googled the dress question, you probably clicked the article. Maybe you clicked it twice. Maybe you clicked it twenty times and you are an outlier. Maybe you didn’t click it at all. It’s plausible that it evens out and the actual number of people who were aware of the meme is somewhere between 10 million and 50 million.

That’s a lot of people. But–and this is really my point–it’s not that many people, compared to everybody. There are about 300 million people in the United States. There are over 7 billion people on the planet. Who are the tenth of the population who were interested in the dress? If you are reading this blog, they are probably people a lot like you or me. Who are the other ~90% of people in the U.S.?
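The back-of-envelope fractions are easy to check; this is illustrative arithmetic only, using the post’s own round numbers.

    us_pop = 300e6
    world_pop = 7e9

    for aware in (10e6, 25e6, 50e6):
        print("%2.0fM aware -> %4.1f%% of the U.S., %5.2f%% of the world"
              % (aware / 1e6, 100 * aware / us_pop, 100 * aware / world_pop))

Even at the high end of 50 million, that is about a sixth of the U.S. population, and under one percent of the world.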

I’ve got a bold hypothesis. My hypothesis is that the other 90% of people are people who have lives. I mean this in the sense of the idiom “get a life“, which has fallen out of fashion for some reason. Increasingly, I’m becoming interested in the vast but culturally foreign population of people who followed this advice at some point in their lives and did not turn back. Does anybody know of any good ethnographic work about them? Where do they hang out in the Bay Area?


by Sebastian Benthall at March 08, 2015 02:48 AM

March 04, 2015

MIMS 2014

Sketchnotes: Seattle Data Visualization Meetup

When I went to the Seattle Data Viz meetup today, I had 2 objectives:
1. To get some interesting inputs on the ‘Top 7 graphs’
2. To try Sketchnoting, just to build my skills in the area

I skipped the entire second half of the conversation because it focused almost entirely on plotly features, which were pretty cool but not my area of focus. For students / startups, I think they offer a very cool solution to experiment and collaborate on creating some of these.

My notes are kinda sparse because there wasn’t much discussion on the graphs themselves (which was what I was expecting). Ah well, 1/2 objectives ain’t too bad. So anyway, check out my Sketchnotes, and let me know what you think. I hope to do more soon (and maybe buy some more pens to add some dimensions to these!).

Sketchnotes from the Seattle Data Viz meetup


by muchnessofd at March 04, 2015 05:42 AM

March 01, 2015

Ph.D. student

‘Bad twitter’ : exit, voice, and social media

I made the mistake in the past couple of days of checking my Twitter feed. I did this because there are some cool people on Twitter and I want to have conversations with them.

Unfortunately it wasn’t long before I started to read things that made me upset.

I used to think that a benefit of Twitter was that it allowed for exposure to alternative points of view. Of course you should want to see the other side, right?

But then there’s this: if you do that for long enough, you start to see each “side” make the same mistakes over and over again. It’s no longer enlightening. It’s just watching a train wreck in slow motion on repeat.

Hirschman’s Exit, Voice, and Loyalty is relevant to this. Presumably, over time, those who want a higher level of conversation Exit social media (and its associated news institutions, such as Salon.com) to more private channels, causing a deterioration in the quality of public discourse. Because social media sites have very strong network effects, they are robust to any revenue loss due to quality-sensitive Exiters, leaving a kind of monopoly-tyranny that Hirschman describes vividly thus:

While of undoubted benefit in the case of the exploitative, profit-maximizing monopolist, the presence of competition could do more harm than good when the main concern is to counteract the monopolist’s tendency toward flaccidity and mediocrity. For, in that case, exit-competition could just fatally weaken voice along the lines of the preceding section, without creating a serious threat to the organization’s survival. This was so for the Nigerian Railway Corporation because of the ease with which it could dip into the public treasury in case of deficit. But there are many other cases where competition does not restrain monopoly as it is supposed to, but comforts and bolsters it by unburdening it of its more troublesome customers. As a result, one can define an important and too little noticed type of monopoly-tyranny: a limited type, an oppression of the weak by the incompetent and an exploitation of the poor by the lazy which is the more durable and stifling as it is both unambitious and escapable. The contrast is stark indeed with totalitarian, expansionist tyrannies or the profit-maximizing, accumulation-minded monopolies which may have captured a disproportionate share of our attention.

It’s interesting to compare a Hirschman-inspired view of the decline of Twitter as a function of exit and voice to a Frankfurt School analysis of it in terms of the culture industry. It’s also interesting to compare this with boyd’s 2009 paper on “White flight in networked publics?” in which she chooses to describe the decline of MySpace in terms of the troubled history of race and housing.*

In particular, there are passages of Hirschman in which he addresses neighborhoods of “declining quality” and the exit and voice dynamics around them. It is interesting to me that the narrative of racialized housing policy and white flight is so salient to me lately that I could not read these passages of Hirschman without raising an eyebrow at the fact that he didn’t mention race in his analysis. Was this color-blind racism? Or am I now so socialized by the media to see racism and sexism everywhere that I assumed there were racial connotations when in fact he was talking about a general mechanism? Perhaps the salience of the white flight narrative to me has made me tacitly racist by making me assume that the perceived decline in neighborhood quality is due to race!

The only way I could know for sure what was causing what would be to conduct a rigorous empirical analysis I don’t have time for. And I’m an academic whose job is to conduct rigorous empirical analyses! I’m forced to conclude that without a more thorough understanding of the facts, any judgment either way will be a waste of time. I’m just doing my best over here and when push comes to shove I’m a pretty nice guy, my friends say. Nevertheless, it’s this kind of lazy baggage-slinging that is the bread and butter of the mass journalist today. Reputations earned and lost on the basis of political tribalism! It’s almost enough to make somebody think that these standards matter, or are the basis of a reasonable public ethics of some kind that must be enforced lest society fall into barbarism!

I would stop here except that I am painfully aware that as much as I know it to be true that there is a portion of the population that has exited the morass of social media and put it to one side, I know that many people have not. In particular, a lot of very smart, accomplished friends of mine are still wrapped up in a lot of stupid shit on the interwebs! (Pardon my language!) This is partly due to the fact that networked publics now mediate academic discourse, and so a lot of aspiring academics now feel they have to be clued in to social media to advance their careers. Suddenly, everybody who is anybody is a content farmer! There’s a generation who are looking up to jerks like us! What the hell?!?!

This has a depressing consequence. Since politically divisive content is popular content, and there is pressure for intellectuals to produce popular content, this means that intellectuals have incentives to propagate politically divisive narratives instead of working towards reconciliation and the greater good. Or, alternatively, there is pressure to aim for the lowest common denominator as an audience.

At this point, I am forced to declare myself an elitist who is simply against provocation of any kind. It’s juvenile, is the problem. (Did I mention I just turned 30? I’m an adult now, swear to god.) I would keep this opinion to myself, but at that point I’m part of the problem by not exercising my Voice option. So here’s to blogging.

* I take a particular interest in danah boyd’s work because, in addition to being one of the original Internet-celebrity-academics-talking-about-the-Internet (and so aptly doubling as both foundational researcher and slightly implicated subject matter for this kind of rambling about social media and intellectualism; see below), she shares an alma mater with me (Brown) and is the star graduate of my own department (UC Berkeley’s School of Information), and so serves as a kind of role model.

I feel the need to write this footnote because while I am in the scholarly habit of treating all academic writers I’ve never met abstractly as if they are bundles of text subject to detached critique, other people think that academics are real people(!), especially academics themselves. Suddenly the purely intellectual pursuit becomes personal. Multiple simultaneous context collapses create paradoxes on the level of pragmatics that would make certain kinds of communication impossible if they are not ignored. This can be awkward but I get a kind of perverse pleasure out of leaving analytic puzzles to whoever comes next.

I’m having a related but eerier intellectual encounter with an Internet luminary in some other work I’m doing. I’m writing software to analyze a mailing list used by many prominent activists and professionals. Among the emails are some written by the late Aaron Swartz. In the process of working on the software, I accepted a pull request from a Swiss programmer I had never met which has the Python package html2text as a dependency. Who wrote the html2text package? Aaron Swartz. Understand I never met the guy, am trying to map out how on-line communication mediates the emergent structure of the sociotechnical ecosystem of software and the Internet, and obviously am interested reflexively in how my own communication and software production fits into that larger graph. (Or multigraph? Or multihypergraph?) Power law distributions of connectivity on all dimensions make this particular situation not terribly surprising. But it’s just one of many strange loops.


by Sebastian Benthall at March 01, 2015 10:35 PM

February 23, 2015

Ph.D. student

Hirschman, Nigerian railroads, and poor open source user interfaces

Hirschman says he got the idea for Exit, Voice, and Loyalty when studying the failure of the Nigerian railroad system to improve quality despite the availability of trucking as a substitute for long-range shipping. Conventional wisdom among economists at the time was that the quality of a good would suffer when it was provisioned by a monopoly. But why would a business that faced healthy competition not undergo the management changes needed to improve quality?

Hirschman’s answer is that because the trucking option was so readily available as an alternative, there wasn’t a need for consumers to develop their capacity for voice. The railroads weren’t hearing the complaints about their service, they were just seeing a decline in use as their customers exited. Meanwhile, because it was a monopoly, loss in revenue wasn’t “of utmost gravity” to the railway managers either.

The upshot of this is that it’s only when customers are locked in that voice plays a critical role in the recuperation mechanism.

This is interesting for me because I’m interested in the role of lock-in in software development. In particular, one argument made in favor of open source software is that because it is not technology held by a single firm, users of the software are not locked in. Their switching costs are reduced, making the market more liquid and, in theory, more favorable.

You can contrast this with proprietary enterprise software, where vendor lock-in is a principal part of the business model, as this establishes the “installed base”, and customer support armies are necessary for managing disgruntled customer voice. Or, in the case of social media such as Facebook, network effects create a kind of perceived consumer lock-in, and consumer voice gets articulated by everybody from Twitter activists to journalists to high-profile academics.

As much as it pains me to admit it, this is one good explanation for why the user interfaces of a lot of open source software projects are so bad, specifically if you combine this mechanism with the idea that user-centered design is important for user interfaces. Open source projects generally make it easy to complain about the software. If they know what they are doing at all, they make it clear how to engage the developers as a user. There is a kind of rumor out there that open source developers are unfriendly towards users, and this is perhaps true when users are used to the kind of customer support that’s available on a product for which there is customer lock-in. It’s precisely this difference between exit culture and voice culture, driven by the fundamental economics of the industry, that creates this perception. Enterprise open source business models (I’m thinking about models like the Pentaho ‘beekeeper’) theoretically provide a corrective to this by acting as an intermediary between consumer voice and developer exit.

A testable hypothesis is whether and to what extent a software project’s responsiveness to tickets scales with the number of downstream dependent projects. In software development, technical architecture is a reasonable proxy for industrial organization. A widely used project has network effects that increase switching costs for its downstream users. How do exit and voice work in this context? A sketch of how one might operationalize this follows.
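Here is a sketch of how one might test the hypothesis, assuming you have already assembled (from an issue tracker and a dependency graph) a table with one row per project; the file and column names are hypothetical.

    import pandas as pd

    # columns: project, n_dependents, median_response_hours
    df = pd.read_csv("projects.csv")

    # Does responsiveness to tickets scale with downstream dependence?
    print(df[["n_dependents", "median_response_hours"]]
          .corr(method="spearman"))

A rank correlation is a natural first cut, since both dependency counts and response times are heavily skewed.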


by Sebastian Benthall at February 23, 2015 01:30 AM

February 21, 2015

Ph.D. student

The node.js fork — something new to think about

For Classics we are reading Albert Hirschman’s Exit, Voice, and Loyalty. Oddly, though normally I hear about ‘voice’ as an action from within an organization, the first few chapters of the book (including the introduction of the Voice concept itself) are preoccupied with elaborations on the neoclassical market mechanism. Not what I expected.

I’m looking for interesting research use cases for BigBang, which is about analyzing the sociotechnical dynamics of collaboration. I’m building it to better understand open source software development communities, primarily. This is because I want to create a harmonious sociotechnical superintelligence to take over the world.

For a while I’ve been interested in Hadoop’s interesting case of being one software project with two companies working together to build it. This is reminiscent (for me) of when we started GeoExt at OpenGeo and Camp2Camp. The economics of shared capital are fascinating, and there are interesting questions about how human resources get organized in that sort of situation. In my experience, a tension develops between the needs of firms to differentiate their products and make good on their contracts and the needs of the developer community, whose collective value is ultimately tied to the robustness of their technology.

Unfortunately, building out BigBang to integrate with various email, version control, and issue tracking backends is a lot of work, and there’s only one of me right now to build the infrastructure, do the research, and train new collaborators (who are starting to do some awesome work, so this is paying off). While integrating with Apache’s infrastructure would have been a smart first move, instead I chose to focus on Mailman archives and git repositories. Google Groups and whatever Apache is using for their email lists do not publish their archives in .mbox format, which is a pain for me. But luckily Google Takeout does export data from folks’ on-line inboxes in .mbox format. This is great for BigBang because it means we can investigate email data from any project for which we know an insider willing to share their records.
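As a minimal sketch (not BigBang’s actual pipeline), Python’s standard library can read an .mbox export straight into a Pandas dataframe; the file name here is hypothetical.

    import mailbox
    import pandas as pd

    mbox = mailbox.mbox("takeout-archive.mbox")

    rows = [{"from": msg["From"],
             "to": msg["To"],
             "date": msg["Date"],
             "subject": msg["Subject"]} for msg in mbox]

    df = pd.DataFrame(rows)
    df["date"] = pd.to_datetime(df["date"], errors="coerce")
    print(df.groupby(df["date"].dt.to_period("M")).size())  # messages per month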

Does a research ethics issue arise when you start working with email that is openly archived in a difficult format, then exported from somebody’s private email? Technically you get header information that wasn’t open before–perhaps it was ‘private’. But arguably this header information isn’t personal information. I think I’m still in the clear. Plus, IRB will be irrelevant when the robots take over.

All of this is a long way of getting around to talking about a new thing I’m wondering about: the Node.js fork. It’s interesting to think about open source software forks in light of Hirschman’s concepts of Exit and Voice, since so much of the activity of open source development is open, virtual communication. While you might at first think a software fork is definitely a kind of Exit, it sounds like IO.js was perhaps a friendly fork, started by somebody who just wanted to hack around. In theory, code can be shared between forks–in fact this was the principle that GitHub’s forking system was founded on. So there are open questions (to me, who isn’t involved in the Node.js community at all and is just now beginning to wonder about it) about to what extent a fork is a real event in the history of the project, vs. to what extent it’s mythological, vs. to what extent it’s a reification of something that was already implicit in the project’s sociotechnical structure. There are probably other great questions here as well.

A friend on the inside tells me all the action on this happened (is happening?) on the GitHub issue tracker, which is definitely data we want to get BigBang connected with. Blissfully, there appear to be well supported Python libraries for working with the GitHub API. I expect the first big hurdle we hit here will be rate limiting.
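A sketch of what that integration might look like (not BigBang code), using requests against the public GitHub API and watching the rate-limit headers GitHub sends back; the repository name is just an example.

    import time
    import requests

    url = "https://api.github.com/repos/nodejs/node/issues"
    params = {"state": "all", "per_page": 100}
    issues = []

    while url:
        resp = requests.get(url, params=params)
        resp.raise_for_status()
        issues.extend(resp.json())

        # GitHub reports quota in these headers; sleep out the window if exhausted.
        if int(resp.headers.get("X-RateLimit-Remaining", 1)) == 0:
            reset = int(resp.headers["X-RateLimit-Reset"])
            time.sleep(max(reset - time.time(), 0) + 1)

        url = resp.links.get("next", {}).get("url")  # follow the Link header
        params = None  # the 'next' URL already carries the query string

    print(len(issues), "issues fetched")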

Though we haven’t been able to make integration work yet, I’m still hoping there’s some way we can work with MetricsGrimoire. They’ve been a super inviting community so far. But our software stacks and architecture are just different enough, and the layers we’ve built so far thin enough, that it’s hard to see how to do the merge. A major difference is that while MetricsGrimoire tools are built to provide application interfaces around a MySQL data backend, since BigBang is foremost about scientific analysis, our whole data pipeline is built to get things into Pandas dataframes. Both projects are in Python. This too is a weird microcosm of the larger sociotechnical ecosystem of software production, of which the “open” side is only one (important) part.


by Sebastian Benthall at February 21, 2015 11:15 PM

MIMS 2012

Behind the Design: Optimizely's Mobile Editor

On 11/18/14, Optimizely officially launched A/B testing for iOS apps. This was a big launch because our product had been in beta for months, but none of us felt proud to publicly launch it. To get us over the finish line, we focused our efforts on building out an MVPP — a Minimum Viable Product we’re Proud of (which I wrote about previously). A core part of the MVPP was redesigning our editing experience from scratch. In this post, I will walk you through the design process, show you the sketches and prototypes that led up to the final design, and the lessons learned along the way, told from my perspective as the Lead Designer.

A video of the final product

Starting Point

To provide context, our product enables mobile app developers to run A/B tests in their app, without needing to write any code or resubmit to the App Store for approval. By connecting your app to our editor, you can select elements, like buttons and headlines, and change their properties, like colors and text. Our beta product was functional in this regard, but not particularly easy or delightful to use. The biggest problem was that we didn’t show you your app, so you had to select elements by searching through a list of your app’s views (a process akin to navigating your computer’s folder hierarchy to find a file). This made the product cumbersome to use, and not visually engaging (see screenshot below).

Screenshot of Optimizely's original iOS editor

Optimizely’s original iOS editor.

Designing the WYSIWYG Editor

To make this a product we’re proud to launch, it was obvious we’d need to build a What-You-See-Is-What-You-Get (WYSIWYG) editor. This means we’d show the app in the browser, and let users directly select and edit their app’s content. This method is more visually engaging, faster, and easier to use (especially for non-developers). We’ve had great success with web A/B testing because of our WYSIWYG editor, and we wanted to replicate that success on mobile.

This is an easy design decision to make, but hard to actually build. For this to work, it had to be performant and reliable. A slow or buggy implementation would have been frustrating and a step backwards. So we locked a product designer and two engineers in a room to brainstorm ideas and build functional prototypes together. By the end of the week, they had a prototype that cleared the technical hurdles and proved we could build a delightful editing experience. This was a great accomplishment, and a reminder that any challenge can be solved by giving a group of smart, talented individuals space to work on a seemingly intractable problem.

Creating the Conceptual Model

With the app front and center, I needed an interface for how users change the properties of elements (text, color, position, etc.). Additionally, there are two other major features the editor needs to expose: Live Variables and Code Blocks. Live Variables are native Objective-C variables that can be changed on the fly through Optimizely (such as the price of items). Code Blocks let users choose code paths to execute (for example, a checkout flow that has 2 steps instead of 3).

Before jumping into sketches or anything visual, I had to get organized. What are all the features I need to expose in the UI? What types of elements can users edit? What properties can they change? Which of those are useful for A/B tests? I wrote down all the functionality I could think of. Additionally, I needed to make sure the UI would accommodate new features to prevent having to redesign the editor 3 months down the line, so I wrote out potential future functionality alongside current functionality.

I took all this functionality and clustered them into separate groups. This helped me form a sound conceptual model on which to build the UI. A good model makes it easier for users to form an accurate mental model of the product, thus making it easier to use (and more extensible for future features). This exercise made it clear to me that there are variation-level features, like Code Blocks and Live Variables, that should be separate from element-level features that act on specific elements (like changing a button’s text). This seems like an obvious organizing principle in retrospect, but at the time it was a big shift in thinking.

After forming the conceptual model, I curated the element properties we let users edit. The beta product exposed every property we could find, with no thought as to whether or not we should let users edit it. More properties sounds better and makes our product more powerful, but it comes at the cost of ease of use. Plus, a lot of the properties we let people change don’t make sense for our use case of creating A/B tests, and don’t make sense to non-developers (e.g. “Autoresizing mask” isn’t understandable to non-technical folks, or something that needs to be changed for an A/B test).

I was ruthless about cutting properties. I went through every single one and asked two questions: first, is this understandable to non-developers (my definition of “understandable” being: would a person recognize it from common programs they use every day, like MS Office or Gmail); and second, why is this necessary for creating an A/B test? If I was unsure about an attribute, I defaulted to cutting it. My reasoning was that it’s easy to add features to a product, but hard to take them away. And if we’re missing any essential properties, we’ll hear about it from our customers and can add them back.

Screenshot of my Google Doc feature organization

My lo-fi Google Doc to organize features

Let the Sketching Begin!

With my thoughts organized, I finally started sketching a bunch of editor concepts (pictured below). I had two big questions to answer: after selecting an element, how does a user change its properties? And, how are variation-level features (such as Code Blocks) exposed? My top options were:

  • Use a context menu of options after selecting an element (like our web editor)
  • When an element is selected, pop up an inline property pane (ala Medium’s and Wordpress’s editors)
  • Have a toolbar of properties below the variation bar
  • Show the properties in a drawer next to the app

Picture of my toolbar sketch

A sketch of the toolbar concept

Picture of my inline formatting sketch

A messy sketch of inline formatting options (specifically text)

Picture of one of my drawer sketches

One of the many drawer sketches

Interactive Prototypes

Each approach had pros and cons, but organizing element properties in a drawer showed the most promise because it’s a common interaction paradigm, it fit easily into the editor, and was the most extensible for future features we might add. The other options were generally constraining and better suited to limited functionality (like simple text formatting).

Because I wanted to maximize space for showing the app, my original plan was to show variation-level features (e.g. Code Blocks; Live Variables) in the drawer when no element was selected, and then replace those with element-level features when an element was selected. Features at each level could be separated into their own panes (e.g. Code Blocks would have its own pane). Thus the drawer would be contextual, and all features would be in the same spot (though not at the same time). This left plenty of space for showing an app, and kept the editor uncluttered.

A sketch told me that layout-wise this plan was viable, but would it make sense to select an element in one place, and edit its properties in another? Would it be jarring to see features come and go depending on whether an element was selected or not? How will you navigate between different panes in the drawer? To answer these questions, an interactive prototype was my best course of action (HTML/CSS/JS being my weapon of choice).

Screenshot of an early drawer prototype

An early drawer prototype. Pretend there’s an app in that big empty white space.

I prototyped dozens of versions of the drawer, and shopped them around to the team and fellow designers. Responses overall were very positive, but the main concern was that the tab buttons (“Text”, “Layout”, etc., in the image above) in the drawer wouldn’t scale. Once there are more than about 4, the text gets really squeezed (especially in other languages), stunting our ability to add new features. One idea to alleviate this, suggested by another designer, was to use an accordion instead of tab buttons to reveal content. A long debate ensued about which approach was better. I felt the tab buttons were a more common approach (accordions are for static content, not interactive forms that users will be frequently interacting with), whereas he felt the accordion was more scalable by allowing room for adding more panes, and accommodates full text labels (see picture below).

Screenshot of the drawer with accordion prototype

Drawer with accordion prototype. Pretend that website is an iOS app.

To help break this tie, I built another prototype. After playing around with both for a while, and gathering feedback from various members of the team, I realized we were both wrong.

Hitting reset

After weeks of prototyping and zeroing in on a solution, I realized it was the wrong solution. And the attempt to fix it (accordions) was in fact an iteration of the original concept that didn’t actually address the real problem. I needed a new idea that would be superior to all previous ideas. So I hit reset and went back to the drawing board (literally). I reviewed my initial organizing work and all required functionality. Clearly delineating variation-level properties from element-level properties was a sound organizing principle, but the drawer was getting overloaded by having everything in it. So I explored ways of more cleanly separating variation-level properties from element-level properties.

After reviewing my feature groupings, I realized there aren’t a lot of element properties. They can all be placed in one panel without needing to navigate between them with tabs or accordions at all (one problem solved!).

The variation properties were the real issue, and had the majority of potential new features to account for. Two new thoughts became apparent as I reviewed these properties: first, variation-level changes are typically quick and infrequent; and second, variation-level changes don’t typically visually affect the app content. Realizing this, I hit upon an idea to have a second drawer that would slide out over the app, and go away after you made your change.

To see how this would feel to use, I made yet another interactive prototype. This new UI was clean, obviated the need for tab buttons or accordions, was quick and easy to interact with, and put all features just a click or two away. In short, this new design direction was a lot better, and everyone quickly agreed it made more sense than my previous approach.

Reflecting back on this, I realize I had made design decisions based on edge cases, rather than focusing on the 80% use case. Starting the design process over from first principles helped me see this much more clearly. I only wish I had caught it sooner!

Admitting this design was not the right solution, after a couple months of work, and after engineers already began building it, was difficult. The thought of going in front of everyone (engineers, managers, PMs, designers, etc.) and saying we needed to change direction was not something I was looking forward to. I was also worried about the amount of time it would take me to flesh out a completely new design. Not to mention that I needed to thoroughly vet it to make sure that it didn’t have any major drawbacks (I wouldn’t have another opportunity to start over).

Luckily, once I started fleshing out this new design, those fears mostly melted away. I could tell this new direction was stronger, which made me feel good about restarting, which made it easier to sell this idea to the whole team. I also learned that even though I was starting over from the beginning, I wasn’t starting with nothing. I had learned a lot from my previous iterations, which informed my decision making this second time through.

Build and Ship!

With a solid design direction finally in place, we were able to pour on the engineering resources to build out this new editor. Having put a lot of thought into both the UI and technical challenges before writing production code, we were able to rapidly build out the actual product, and ended up shipping a week ahead of our self-imposed deadline!

Screenshot of the finished mobile editor

The finished mobile editor

Lessons Learned

  • Create a clear conceptual model on which to build the UI. A UI that accurately represents the system’s conceptual model will make it easy for users to form a correct mental model of your product, thus making it easier to use. To create the system model, write down all the features, content, and use cases you need to design for before jumping into sketches or prototypes. Group them together and map out how they relate to each other. From this process, the conceptual model should become clear. Read more about mental models on UX Magazine.
  • Don’t be afraid to start over. It’s scary, and hard, and feels like you wasted a bunch of time, but the final design will come out better. And the time you spent on the earlier designs wasn’t wasted effort — it broadened your knowledge of both the problem and solution spaces, which will help you make better design decisions in your new designs.
  • Design for the core use case, not edge cases. Designing for edge cases can clutter a UI and get in the way of the core use case that people do 80% of the time. In the case of the drawer, it led to overloading it with functionality.
  • Any challenge can be solved by giving a group of smart, talented individuals space to work on seemingly intractable problems. We weren’t sure a WYSIWYG editor would be technically feasible, but we made a concerted effort to overcome the technical hurdles, and it paid off. I’ve experienced this time and time again, and this was yet another reminder of this lesson.

On 11/18/14, the team was proud to announce Optimizely’s mobile A/B testing product to the world. Week-over-week usage has been steadily rising, and customer feedback has been positive, with people saying the new editor is much easier and faster to use. This was a difficult product to design, for both technical and user experience reasons, but I had a great time doing it and learned a ton along the way. And this is only the beginning — we have a lot more work to do before we’re truly the best mobile A/B testing product on the planet.

by Jeff Zych at February 21, 2015 10:54 PM

February 20, 2015

Ph.D. alumna

Why I Joined Dove & Twitter to #SpeakBeautiful

I’ve been online long enough to see a lot of negativity. I wear a bracelet that reads “Don’t. Read. The. Comments.” (a gift from Molly Steenson) to remind myself that going down the path of negativity is not helpful to my soul or sanity. I grew up in a geeky environment, determined to prove that I could handle anything, to stomach the notion that “if you can’t stand the heat, get out of the kitchen.” My battle scars are part of who I am. But why does it have to be this way?

Over the last few years, as the internet went from being a geeky subculture to something that is truly mainstream, I started watching as young women used technology to demean themselves and each other. It has broken my heart over and over again. Women are hurting themselves in the process of hurting each other with their words. The answer isn’t to just ask everyone out there to develop a thick skin. A world of meanness and cruelty is destructive to all involved and we all need to push back at it, especially those of us who have the strength to stomach the heat.

I’m delighted and honored to partner with Dove and Twitter to change the conversation. In an effort to better understand what’s happening, Dove surveyed women and Twitter analyzed tweets. Even though only 9% of women surveyed admit to posting negative comments on social media, over 5 million negative tweets about beauty and body image were posted in 2014 alone and 4 out of 5 of those tweets appeared to come from women. Women know that negative comments are destructive to their self-esteem and to those around them and, yet, the women surveyed reported they are 50% more likely to say something negative than positive. What is happening here?

This weekend, we will watch celebrities parade down the red carpet wearing gorgeous gowns as they enter a theater to celebrate the pinnacle of film accomplishments. Yet, if history is any guide, the social media conversation around the Oscars will be filled with harsh commentary on celebrities’ looks and with self-loathing.

We live in a world in which self-critique and ugliness are not only accepted, but the norm. Especially for women. Yet so many women are unable to see how what they say not only erodes their own self-worth, but harms others. Every time we tear someone down for what they’re wearing or how they’re acting, and every time we talk badly about ourselves, we contribute to a culture of cruelty in which women are systemically disempowered. This has to change.

It’s high time that we all stop and reflect on what we’re saying and posting when we use our fingers to talk in public. It’s time to #SpeakBeautiful. Negative commentary has a domino effect. But so does positive commentary.

In an effort to change the norm, Dove and Twitter have come together to try to combat negativity with positive thoughts. Beyond this video, they are working together to identify negative tweets and reach out to women who might not realize the ramifications of what they say. Social media and self-esteem experts will offer advice in an effort to empower women to speak with more confidence, optimism, and kindness.

Will this solve the problem? No. But the modest goal of this campaign is to get more women to step back and reflect on what they’re saying. At the end of the day, it’s us who need to solve the problem. We all need to make a collective, conscious decision to stop the ugliness. We need to #SpeakBeautiful.

I am honored to be able to contribute to this effort and I invite you to do the same. Spend some time today and over the weekend thinking about the negativity you see around you on social media and push back against it. If your instinct is to critique, take a moment to say something positive. An effort to #SpeakBeautiful is both selfish and altruistic. You help others while helping yourself.

I know that I will spend the weekend thinking about my grandmother, a beautiful woman in her 90s who grew up being told that negative thoughts were thoughts against God. As a teenager, I couldn’t understand how she could stay positive no matter what happened around her but as I grow older, I’m in awe of her ability to find the beauty in everything. I’ve watched this sustain her into her old age. I only wish more people could find the nourishment of such positivity. So let’s all take a moment to #SpeakBeautiful, for ourselves and for those around us.

by zephoria at February 20, 2015 02:28 PM

February 17, 2015

Ph.D. student

data science and the university

This is by now a familiar line of thought, but it has just now struck me with a clarity I wanted to jot down.

  1. Code is law, so the full weight of human inquiry should be brought to bear on software system design.
  2. (1) has been understood by “hackers” for years but has only recently been accepted by academics.
  3. (2) is due to disciplinary restrictions within the academy.
  4. (3) is due to the incentive structure of the academy.
  5. Since there are incentive structures for software development that are not available for subjects whose primary research output is writing, the institutional conditions that are best able to support software work and academic writing are different.
  6. Software is a more precise and efficacious way of communicating ideas than writing because its interpretation is guaranteed by programming language semantics (see the sketch after this list).
  7. Because of (6), there is selective pressure to make software the lingua franca of scholarly work.
  8. (7) is inducing a cross-disciplinary paradigm shift in methods.
  9. (8) may induce a paradigm shift in theoretical content, or it may result in science whose contents are tailored to the efficient execution of adaptive systems. (This is not to say that such systems are necessarily atheoretic, just that they are subject to different epistemic considerations.)
  10. Institutions are slow to change. That’s what makes them institutions.
  11. By (5), (7), and (9), the role of universities as the center of research is being threatened existentially.
  12. But by (1), the myriad intellectual threads currently housed in universities are necessary for software system design, or are at least potentially important.
  13. With (11) and (12), a priority is figuring out how to manage a transition to software-based scholarship without information loss.
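
To make (6) concrete, here is a minimal sketch in Python (a toy of my own, not drawn from any particular scholarly artifact). The English instruction “split the difference between a and b” admits at least two readings; each reading, once written as code, has exactly one interpretation, fixed by the language’s semantics:

    def midpoint(a: float, b: float) -> float:
        """One reading: the value halfway between a and b."""
        return (a + b) / 2

    def half_gap(a: float, b: float) -> float:
        """Another reading: half of the absolute difference."""
        return abs(a - b) / 2

    # Any conforming Python interpreter agrees on these values.
    print(midpoint(3, 7))  # 5.0
    print(half_gap(3, 7))  # 2.0

The prose instruction needs a charitable reader to disambiguate it; the functions need only an interpreter.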

by Sebastian Benthall at February 17, 2015 07:28 AM

a brief comment on feminist epistemology

One funny thing about having a blog is that I can tell when people are interested in particular posts through the site analytics. To my surprise, this post about Donna Haraway has been getting an increasing number of hits each month since I posted it. That is an indication that it has struck a chord, since steady exogenous growth like that is actually quite rare.

It is just possible that this means that people interested in feminist epistemology have been reading my blog lately. They probably have correctly guessed that I have not been the biggest fan of feminist epistemology because of concerns about bias.

But I’d like to take the opportunity to say that my friend Rachel McKinney has been recommending I read Elizabeth Anderson‘s stuff if I want to really get to know this body of theory. Since Rachel is an actual philosopher and I am an amateur who blogs about it on weekends, I respect her opinion on this a great deal.

So today I started reading through Anderson’s Stanford Encyclopedia of Philosophy article on Feminist Epistemology and I have to say I think it’s very good. I like her treatment of the situated knower. It’s also nice to learn that there are alternative feminist epistemologies to certain standpoint theories that I think are troublesome. In particular, it turns out that those standpoint theories are now considered by feminist philosophers to be from a brief period in the ’80s that they’ve already moved past! Now subaltern standpoints are considered privileged in terms of discovery more than privileged in terms of justification.

This position is certainly easier to reconcile with computational methods. For example, it’s in a sense just mathematically correct if you think about it in terms of information gain from a sample. This principle appears to have been rediscovered, in a way, by the equity-in-data-science people when they talk about potential classifier error.
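
Here is a minimal sketch of that information-gain point (my own toy, using a Beta-Bernoulli model and a Monte Carlo estimate; it is not anything Anderson herself presents). The expected information gain from one more observation is largest for the group we have sampled least, which is one way to cash out “privileged in terms of discovery”:

    import numpy as np

    rng = np.random.default_rng(0)

    def h(p):
        """Binary entropy in bits, clipped to avoid log(0)."""
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

    def expected_info_gain(alpha, beta, n=200_000):
        """Mutual information (bits) between the next Bernoulli observation
        and a group's unknown rate theta, under a Beta(alpha, beta)
        posterior: I = H(predictive mean) - E[H(theta)]."""
        theta = rng.beta(alpha, beta, size=n)
        return h(theta.mean()) - h(theta).mean()

    # Same observed rate (75%) in both groups, but one group is far
    # better sampled than the other.
    print(expected_info_gain(300, 100))  # roughly 0.002 bits
    print(expected_info_gain(3, 1))      # roughly 0.15 bits

The next sample from the sparsely observed group teaches us dramatically more, with no appeal to standpoint at the level of justification.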

I’ve got some qualms about the articulation of this learning principle in the absence of a particular inquiry or decision problem, because I think there’s still a subtle shift in the argumentation from logos to ethos embedded in there (I’ve been seeing things through the lens of Aristotelian rhetoric lately and it’s been surprisingly illuminating). I’m on the lookout for a concrete technical domain where this could apply, as opposed to an articulation of a political affinity or anxiety in the language of algorithms. I’d be grateful for links in the comments.

Edit:

Wait, maybe I already built one. I am not sure if that really counts.


by Sebastian Benthall at February 17, 2015 05:19 AM

February 13, 2015

Ph.D. alumna

An Old Fogey’s Analysis of a Teenager’s View on Social Media

In the days that followed Andrew Watts’ “A Teenager’s View on Social Media written by an actual teen” post, dozens of people sent me a link. I found myself growing uncomfortable with, and angry at, the folks who pointed me to it. I feel the need to offer my perspective as someone who is not a teenager but who has thought about these issues extensively for years.

Almost all of the people who sent it to me work in the tech industry, and many are tech executives or venture capitalists. The general sentiment has been: “Look! Here’s an interesting kid who’s captured what kids these days are doing with social media!” Most don’t even ask for my interpretation, sending it along as though it were gospel.

We’ve been down this path before. Andrew is not the first teen to speak as an “actual” teen and have his story picked up. Every few years, a (typically white male) teen with an interest in technology writes about technology among his peers on a popular tech platform and gets traction. Tons of conferences host teen panels, usually drawing on privileged teens in the community or teens related to the organizers. I’m not bothered by these teens’ comments; I’m bothered by the way they are interpreted and treated by the tech press and the digerati.

I’m a researcher. I’ve been studying American teens’ engagement with social media for over a decade. I wrote a book on the topic. I don’t speak on behalf of teens, but I do amplify their voices and try to make sense of the diversity of experiences teens have. I work hard to account for the biases in whose voices I have access to because I’m painfully aware that it’s hard to generalize about a population that’s roughly 16 million people strong. They are very diverse and, yet, journalists and entrepreneurs want to label them under one category and describe them as one thing.

Andrew is a very lucid writer and I completely trust his depiction of his peer group’s use of social media. He wrote a brilliant post about his life, his experiences, and his interpretations. His voice should be heard. And his candor is delightful to read. But his analysis cannot and should not be used to make claims about all teenagers. I don’t blame Andrew for this; I blame the readers — and especially tech elites and journalists — for their interpretation of Andrew’s post because they should know better by now. What he’s sharing is not indicative of all teens. More significantly, what he’s sharing reinforces existing biases in the tech industry and journalism that worry me tremendously.

His coverage of Twitter should raise a big red flag to anyone who has spent an iota of time paying attention to the news. Over the last six months, we’ve seen a phenomenal uptick in serious US-based activism by many youth in light of what took place in Ferguson. It’s hard to ignore Twitter’s role in this phenomenon, with hashtags like #blacklivesmatter and #IfTheyGunnedMeDown not only flowing from Twitter onto other social media platforms, but also getting serious coverage from major media. Andrew’s statement that “a lot of us simply do not understand the point of Twitter” should raise eyebrows, but it’s the rest of his description of Twitter that should serve as a stark reminder of Andrew’s position within the social media landscape.

Let me put this bluntly: teens’ use of social media is significantly shaped by race and class, geography and cultural background. Let me repeat that for emphasis.

Teens’ use of social media is significantly shaped by race and class, geography and cultural background.

The world of Twitter is many things, and what journalists and tech elites see from Twitter is not even remotely similar to what many of the teens that I study see, especially black and brown urban youth. For starters, their Twitter feed doesn’t have links; this is often shocking to journalists and digerati whose entire stream is filled with URLs. But I’m also bothered by Andrew’s depiction of Twitter users as using it first and foremost to “complain/express themselves.” While he offers other professional categorizations, it’s hard not to read this depiction in light of what I see in low-status communities and the ways that privileged folks interpret the types of expression that exist in these communities. When black and brown teens offer their perspective on the world using the language of their community, it is often derided as a complaint or dismissed as mere self-expression. I doubt that Andrew is trying to make an explicitly racist comment here, but I want to caution every reader that youth use of Twitter is often cast in a negative light because of the platform’s heavy use by low-status black and brown youth.

Andrew’s depiction of his peers’ use of social media is a depiction of a segment of the population, notably the segment most like those in the tech industry. In other words, what the tech elite are seeing and sharing is what people like them would’ve been doing with social media X years ago. It resonates. But it is not a full portrait of today’s youth. And its uptake and interpretation by journalists and the tech elite whitewashes teens’ practices in deeply problematic ways.

I’m not saying he’s wrong; I’m saying his story is incomplete and the incompleteness is important. His commentary on Facebook is probably the most generalizable, if we’re talking about urban and suburban American youth. Of course, his comments shouldn’t be shocking to anyone at this point (as Andrew himself points out). Somehow, though, declarations of Facebook’s lack of emotional weight with teens continue to be front-page news. All that said, this does render invisible the cultural work of Facebook in rural areas and outside of the US.

Andrew is very open about where he stands. He’s very clear about his passion for technology (and his love of blogging on Medium should be a big ole hint to anyone who missed his byline). He’s also a college student and talks about his peers as being obviously on a path to college. But as readers, let’s not forget that only about half of US 19-year-olds are in college. He talks about WhatsApp being interesting when you go abroad, but the practice of “going abroad” is itself privileged, with less than a third of US citizens even holding passports. Furthermore, this renders invisible the ways in which many US-based youth use WhatsApp to communicate with family and friends who live outside of the US. Immigration isn’t part of his narrative.

I don’t for a second fault Andrew for not having a perspective beyond his peer group. But I do fault both the tech elite and journalists for not thinking critically through what he posted and presuming that a single person’s experience can speak on behalf of an entire generation. There’s a reason why researchers and organizations like Pew Research are doing the work that they do — they do so to make sure that we don’t forget about the populations that aren’t already in our networks. The fact that professionals prefer anecdotes from people like us over concerted efforts to understand a demographic as a whole is shameful. More importantly, it’s downright dangerous. It shapes what the tech industry builds and invests in, what gets promoted by journalists, and what gets legitimized by institutions of power. This is precisely why and how the tech industry is complicit in the increasing structural inequality that is plaguing our society.

This post was originally published to The Message at Medium on January 12, 2015

by zephoria at February 13, 2015 12:05 AM

February 12, 2015

Ph.D. student

scale and polemic

I love a good polemic, but lately I have been disappointed by polemics as a genre because they generally don’t ground themselves in data at a suitable scale.

When people try to write about a social problem, they are likely to use potent examples as a rhetorical device. Their particular ideological framing of a situation will be illustrated by compelling stories that are easy to get emotional about. This is often considered to be the hallmark of A Good Presentation, or Good Writing. Somebody will say about some group X, “Group X is known for doing bad things. Here’s an example.”

There are some problems with this approach. If there are a lot of people in Group X, then there can be a lot of variance within that group. So providing just a couple examples really doesn’t tell you about the group as a whole. In fact, this is a great way to get a biased view of Group X.

There are consequences to this kind of rhetoric. Once there’s a narrative with a compelling example illustrating it, that spreads that way of framing things as an ideology. Then, because of the well-known problem of confirmation bias, people that have been exposed to that ideology will start to see more examples of that ideology everywhere.

Add to that stereotype threat and suddenly you’ve got an explanation for why so many political issues are polarized and terrible.

Collecting more data and providing statistical summaries of populations is a really useful remedy to this. While often less motivating than a really well told story of a person’s experience, it has the benefit of being more accurate in the sense of showing the diversity of perspectives there are about something.

Unfortunately, we like to hear stories so much that we will often only tell people about statistics on large populations if they show a clear trend one way or another. People that write polemics want to be able to say, “Group X has 20% more than Group Y in some way,” and talk about why. It’s not considered an interesting result if it turns out the data is just noise, that Group X and Group Y aren’t really that different.

We also aren’t good at hearing stories about how much variance there is in data. Maybe on average Group X has 20% more than Group Y in some way. But what if these distributions are bimodal? Or if one is more varied than the other? What does that mean, narratively?
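
A quick synthetic illustration (fabricated numbers, purely to show the shape problem): two groups can honestly support the headline “Group X has 20% more than Group Y” while their distributions tell a completely different story:

    import numpy as np

    rng = np.random.default_rng(42)

    # Group Y: one tight cluster.
    group_y = rng.normal(loc=50, scale=5, size=10_000)

    # Group X: bimodal. Most members look exactly like Group Y,
    # but a 30% minority sits far above them.
    group_x = np.concatenate([
        rng.normal(loc=50, scale=5, size=7_000),
        rng.normal(loc=83, scale=5, size=3_000),
    ])

    print(group_x.mean() / group_y.mean())          # ~1.2: the "20% more" headline
    print(np.median(group_x) - np.median(group_y))  # medians differ far less
    print(group_x.std(), group_y.std())             # ~16 vs ~5: very different spreads

The typical member of Group X is indistinguishable from the typical member of Group Y; the headline is carried entirely by a subgroup. That is exactly the kind of story the summary statistic tempts us never to tell.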

It can be hard to construct narratives that are not about what can be easily experienced in one moment but rather are about the experiences of lots of people over lots of moments. The narrative form is very constraining because it doesn’t capture the reality of phenomena of great scale and complexity. Things of great scale and complexity can be beautiful but hard to talk about. Maybe talking about them is a waste of time, because that’s not a good way to understand them.


by Sebastian Benthall at February 12, 2015 09:38 PM

February 07, 2015

Ph.D. student

formalizing the cultural observer

I’m taking a brief break from Horkheimer because he is so depressing and because I believe the second half of Eclipse of Reason may include new ideas that will take energy to internalize.

In the meantime, I’ve rediscovered Soren Brier’s Cybersemiotics: Why Information Is Not Enough! (2008), which has remained faithfully on my desk for months.

Brier is concerned with the possibility of meaning generally, and attempts to synthesize the positions of Peirce (recall: philosophically disliked by Horkheimer as a pragmatist), Wittgenstein (who was first an advocate of the formalization of reason and language in his Tractatus, then turned dramatically against it in his Philosophical Investigations), second-order cyberneticists like Varela and Maturana, and the social theorist Niklas Luhmann.

Brier does not make any concessions to simplicity. Rather, his approach is to begin with the simplest theories of communication (Shannon) and show where each fails to account for a more complex form of interaction between more completely defined organisms. In this way, he reveals how each simpler form of communication is the core around which a more elaborate form of meaning-making is formed. He finally arrives at a picture of meaning-making that encompasses all of reality, including that which can be scientifically understood, but one that is necessarily incomplete and an open system. Meaning is all-pervading but never all-encompassing.

One element that makes meaning more complex than simple Shannon-esque communication is the role of the observer, who is maintained semiotically through an accomplishment of self-reference through time. This observer is a product of her own contingency. The language she uses is the result of nature, AND history, AND her own lived life. There is a specificity to her words and meanings that radiates outward as she communicates, meanings that interact in cybernetic exchange with the specific meanings of other speakers/observers. Language evolves in an ecology of meaning that can only poorly be reflected back upon the speaker.

What then can be said of the cultural observer, who carefully gathers meanings, distills them, and expresses new ones conclusively? She is a cybernetic captain, steering the world in one way or another, but only the world she perceives and conceives. Perhaps this is Haraway’s cyborg, existing in time and space through a self-referential loop, reinforced by stories told again and again: “I am this, I am this, I am this.” It is by clinging to this identity that the cyborg achieves the partiality glorified by Haraway. It is also this identity that positions her as an antagonist as she must daily fight the forces of entropy that would dissolve her personality.

Built on cybernetic foundations, does anything in principle prevent the formalization and implementation of Brier’s semiotic logic? What would a cultural observer be like that stands betwixt all cultures, looming like a spider on the webs of communication that wrap the earth at inconceivable scale? Without the constraints of partiality that bind a single human observer belonging to a single culture, what could such a robot scientist see? What meaning would it make for itself, or intend?

This is not simply an issue of the interpretability of the algorithms used by such a machine. More deeply, it is the problem that these machines do not speak for themselves. They have no self-reference or identity, and so do not participate in meaning-making except instrumentally as infrastructure. This cultural observer that is in the position to observe culture in the making without the limits of human partiality for now only serves to amplify signal or dampen noise. The design is incomplete.


by Sebastian Benthall at February 07, 2015 08:22 PM

February 05, 2015

Ph.D. student

Horkheimer and “The Revolt of Nature”

The third chapter of Horkheimer’s Eclipse of Reason (which by the way is apparently available here as a PDF) is titled “The Revolt of Nature”.

It opens with a reiteration of the Frankfurt School story: as reason gets formalized, society gets rationalized. “Rationalized” here is in the sense that goes back at least to Lukacs’s “Reification and the Consciousness of the Proletariat” in 1923. It refers to the process of being rendered predictable, and being treated as such. It’s this formalized reason that is a technique of prediction and predictability, but which is unable to furnish an objective ethics, that is the main subject of Horkheimer’s critique.

In “The Revolt of Nature”, Horkheimer claims that as more and more of society is rationalized, the more humanity needs to conform to the rationalizing system. This happens through the labor market. Predictable technology and working conditions such as the factory make workers more interchangeable in their jobs. Thus they are more “free” in a formal sense, but at the same time have less job security and so have to conform to economic forces that make them into means and not ends in themselves.

Recall that this is written in 1947, and Lukacs wrote in 1923. In recent years we’ve read a lot about the Sharing Economy and how it leads to less job security. This is an argument that is almost a century old.

As society and humanity in it conform more and more to rational, pragmatic demands on them, the element of man that is irrational, that is nature, is not eliminated. Horkheimer is implicitly Freudian. You don’t eradicate the natural impulses. You repress them. And what is repressed must revolt.

This view runs counter to some of the ideology of the American academic system that became more popular in the late 20th century. Many ideologues reject the idea of human nature altogether, arguing that all human behavior can be attributed to socialization. This view is favored especially by certain extreme progressives, who have a post-Christian ideal of eradicating sin through media criticism and scientific intervention. Steven Pinker’s The Blank Slate is an interesting elaboration and rebuttal of this view. Pinker is hated by a lot of academics because (a) he writes very popular books and (b) he makes a persuasive case against the total mutability of human nature, which is something of a sacred cow to a lot of social scientists for some reason.

I’d argue that Horkheimer would agree with Pinker that there is such a thing as human nature, since he explicitly argues that repressed human nature will revolt against dominating rationalizing technology. But because rationalization is so powerful, the revolt of nature becomes part of the overall system. It helps sustain it. Horkheimer mentions “engineered” race riots. Today we might point to the provocation of bestial, villainous hate speech and its relationship to the gossip press. Or we might point to ISIS and the justification it provides for the military-industrial complex.

I don’t want to imply that I endorse this framing 100%. It is just the continuation of Frankfurt School ideas to the present day. How they match up against reality is an empirical question. But it’s worth pointing out where many of these important tropes originated.


by Sebastian Benthall at February 05, 2015 06:56 AM

February 04, 2015

Ph.D. student

a new kind of scientism

Thinking it over, there are a number of problems with my last post. One was the claim that the scientism addressed by Horkheimer in 1947 is the same as the scientism of today.

Scientism is a pejorative term for the belief that science defines reality and/or is a solution to all problems. It’s not in common use now, but maybe it should be among the critical thinkers of today.

Frankfurt School thinkers like Horkheimer and Habermas used “scientism” to criticize the positivists, the 20th century philosophical school that sought to reduce all science and epistemology to formal empirical methods, and to reduce all phenomena, including social phenomena, to empirical science modeled on physics.

Lots of people find this idea offensive for one reason or another. I’d argue that it’s a lot like the idea that algorithms can capture all of social reality or perform the work of scientists. In some sense, “data science” is a contemporary positivism, and the use of “algorithms” to mediate social reality depends on a positivist epistemology.

I don’t know any computer scientists that believe in the omnipotence of algorithms. I did get an invitation to this event at UC Berkeley the other day, though:

This Saturday, at [redacted], we will celebrate the first 8 years of the [redacted].

Current students, recent grads from Berkeley and Stanford, and a group of entrepreneurs from Taiwan will get together with members of the Social Data Lab. Speakers include [redacted], former Palantir financial products lead and course assistant of the [redacted]. He will reflect on how data has been driving transforming innovation. There will be break-out sessions on sign flips, on predictions for 2020, and on why big data is the new religion, and what data scientists need to learn to become the new high priests. [emphasis mine]

I suppose you could call that scientistic rhetoric, though honestly it’s so preposterous I don’t know what to think.

Though I would recommend to the critical set the term “scientism”, I’m ambivalent about whether it’s appropriate to call the contemporary emphasis on algorithms scientistic for the following reason: it might be that ‘data science’ processes are better than the procedures developed for the advancement of physics in the mid-20th century because they stand on sixty years of foundational mathematical work with modeling cognition as an important aim. Recall that the AI research program didn’t start until Chomsky took down Skinner. Horkheimer quotes Dewey commenting that until naturalist researchers were able to use their methods to understand cognition, they wouldn’t be able to develop (this is my paraphrase:) a totalizing system. But the foundational mathematics of information theory, Bayesian statistics, etc. are robust enough or could be robust enough to simply be universally intersubjectively valid. That would mean data science would stand on transcendental not socially contingent grounds.

That would open up a whole host of problems that take us even further back than Horkheimer to early modern philosophers like Kant. I don’t want to go there right now. There’s still plenty to work with in Horkheimer, and in “Conflicting panaceas” he points to one of the critical problems, which is how to reconcile lived reality in its contingency with the formal requirements of positivist or, in the contemporary data scientific case, algorithmic epistemology.


by Sebastian Benthall at February 04, 2015 06:53 AM

MIMS 2012

Building an MVPP - A Minimum Viable Product we're Proud of

On November 18th, 2014, we publicly released Optimizely’s iOS editor. This was a big release for us because it marked the end of a months-long public beta in which we received a ton of customer feedback and built a lot of missing features. But before we launched, there was one problem the whole team rallied behind to fix: we weren’t proud of the product. To fix this issue, we went beyond a Minimum Viable Product (MVP) to an MVPP — the Minimum Viable Product we’re Proud of.

What follows is the story of how we pulled this off, what we learned along the way, and product development tips to help you ship great products, from the perspective of someone who just did it.

Finished iOS editor

The finished iOS editor.

Genesis of the MVPP

We released a public beta of Optimizely’s iOS editor in June 2014. At that time, the product wasn’t complete yet, but it was important for us to get real customer feedback to inform its growth and find bugs. So after months of incorporating user feedback, the beta product felt complete enough to publicly launch. There was just one problem: the entire team wasn’t proud of the product. It didn’t meet our quality bar; it felt like a bunch of features bolted together without a holistic vision. To fix this, we decided to overhaul the user experience, an ambiguous goal that could easily go on forever, never reaching a clear “done” state.

We did two things to be more directed in the overhaul. First, we committed to a deadline to prevent us from endlessly polishing the UI. Second, we took inspiration from the Lean Startup methodology and chose a set of features that made up a Minimum Viable Product (MVP). An MVP makes it clear that we’ll cut scope to make the deadline, but it says nothing about quality. So to make it explicit that we were focusing on quality and wanted the whole team to be proud of the final product, we added an extra “P” to MVP. And thus, the Minimum Viable Product we’re Proud of — our MVPP — was born.

Create the vision

Once we had agreed on a feature set for the MVPP, a fellow Product Designer and I locked ourselves in a war room for the better part of a week to flesh out the user experience. We mapped out user flows and created rough mockups that we could use to communicate our vision to the larger development team. Fortunately, we had some pre-existing usability test findings to inform our design decisions.

Picture of our war room in action

Sketches, mockups, and user flows from our war room.

These mockups were immensely helpful in planning the engineering and design work ahead. Instead of talking about ideas in the abstract, we had concrete features and visuals to point to. For example, everyone knew what we meant when we said “Improved Onboarding Flow.” With mockups in hand, communication between team members became much more concrete and people felt inspired to work hard to achieve our vision.

Put 6 weeks on the clock… and go!

We had 3 sprints (6 weeks) to complete the MVPP (most teams at Optimizely work in 2 week cycles called “sprints”). It was an aggressive timeline, but it felt achievable — exactly where a good deadline should be.

In the first sprint, the team made amazing progress: all the major pieces were built without any major re-scoping or redesigns. There were still bugs to fix, polish to apply, and edge cases to consider, but the big pieces core to our vision were in place.

That momentum carried over into the second sprint, which we spent fixing the biggest bugs, filling functional holes, and polishing the UI.

For the third and final sprint, we gave ourselves a new goal: ship a week early. We were already focused on launching the MVPP, but at this point we became laser focused. During daily standups, we looked at our JIRA board and asked, “If we were launching tomorrow, what would we work on today?”

We were ruthless about prioritizing tasks and moved a lot of items that were important, but not launch-critical, to the backlog.

During the first week of sprint 3, we also did end-to-end product walkthroughs after every standup to ensure the team was proud of the new iOS editor. We all got to experience the product from the customer’s perspective, and caught user experience bugs that were degrading the quality of our work. We also found and fixed a lot of functional bugs during this time. By the end of the week, everyone was proud of the final product and felt confident launching.

The adrenaline rush & benefit of an early release

On 11/10, we quietly released our MVPP to the world — a full week early! Not only did shipping early feel great, it also gave us breathing room to further polish the design, fix bugs, and give the rest of the company time to organize all the components to launch the MVPP.

Product teams don’t launch products alone; it takes full collaboration with marketing, sales, and customer success to create the materials that promote the product, sell it, and enable our customers to use it. By the time the public announcement rolled around on 11/18, the whole company was extremely proud of the final result.

Lessons learned

While writing this post and reflecting on the project as a whole, a number of techniques became clear to me that can help any team ensure a high quality, on-time launch:

  • Add a “P” to “MVP” to make quality a launch requirement: Referring to the project as the “Minimum Viable Product we’re Proud of” made sure everyone on the team approached the product with quality in mind. Every project has trade-offs between ship date, quality, and scope. It’s very hard to hold all three fixed; realistically, you can pick two. By calling our project an MVPP, we were explicit that quality would not be sacrificed.
  • Set a deadline: Having a deadline focused everyone’s efforts, preventing designers from endlessly polishing interfaces and developers spinning their wheels imagining every possible edge case. Make it aggressive, yet realistic, to instill a sense of urgency in the team.
  • Focus on the smallest set of features that provide the largest customer impact: We were explicit about what features needed to be redesigned, and just as importantly, which were off limits. This prevented scope-creep, and increased the team’s focus.
  • Make mockups before starting development: This is well-known in the industry, but it’s worth repeating. Creating tangible user flows and mockups ahead of time keeps planning discussions on track, removes ambiguity, and quickly explains the product vision. It also inspires the team by rallying them to achieve a concrete goal.
  • Do daily product walkthroughs: Our product walkthroughs had two key benefits. First, numerous design and code bugs were discovered and fixed. And second, they ensured we lived up to the extra “P” in “MVPP.” Everyone had a place to verbally agree that they were proud of the final product and confident launching. Although these walkthroughs made our standups ~30 minutes longer, it was worth the cost.
  • Ask: “If we were shipping tomorrow, what would you work on today?”: When the launch date is approaching, asking this question separates the critical, pre-launch tasks from the post-launch tasks.

Lather, Rinse, and Repeat

By going beyond an MVP to a Minimum Viable Product we’re Proud of, we guaranteed that quality was the requirement for launching. And by using a deadline, we stayed focused only on the tasks that were absolutely critical to shipping. With a well-scoped vision, mockups, and a date not too far in the future, you too can rally teams to create product experiences they’re proud of. And then do it again.

by Jeff Zych at February 04, 2015 04:20 AM

February 01, 2015

Ph.D. student

“Conflicting panaceas”; decapitation and dogmatism in cultural studies counterpublics

I’m still reading through Horkheimer’s Eclipse of Reason. It is dense writing and slow going. I’m in the middle of the second chapter, “Conflicting Panaceas”.

This chapter recognizes and then critiques a variety of intellectual stances of his contemporaries. Whereas in the first chapter Horkheimer takes aim at pragmatism, in this he concerns himself with neo-Thomism and positivism.

Neo-Thomism? Yes, that’s right. Apparently in 1947 one of the major intellectual contenders was a school of thought based on adapting the metaphysics of Saint Thomas Aquinas to modern times. This school of thought was apparently notable enough that, while Horkheimer is generally happy to call out the proponents of pragmatism and positivism by name and call them business interest lapdogs, he chooses instead to address the neo-Thomists anonymously, in a conciliatory footnote:

This important metaphysical school includes some of the most responsible historians and writers of our day. The critical remarks here bear exclusively on the trend by which independent philosophical thought is being superseded by dogmatism.

In a nutshell, Horkheimer’s criticism of neo-Thomism is that, since it tries and fails to repurpose old ontologies to the new world, it can’t fulfill its own ambitions as an intellectual system through rigor without losing the theological ambitions that motivate it: the identification of goodness, power, and eternal law. Since it can’t intellectually culminate, it becomes a “dogmatism” that can be coopted disingenuously by social forces.

This is, as I understand it, the essence of Horkheimer’s criticism of everything: for any intellectual trend or project, unless the philosophical project is allowed to continue to completion within it, it will have its brains slurped out and become zombified by an instrumentalist capitalism that threatens to devolve into devastating world war. Hence, just as neo-Thomism becomes a dogmatism because it would refute itself if it allowed its logic to proceed to completion, so too does positivism become a dogmatism when it identifies truth with disciplinarily enforced scientific methods. Since, as Horkheimer points out in 1947, these scientific methods are social processes, this dogmatic positivism is another zombie, prone to fads and politics rather than tracking truth.

I’ve been struggling over the past year or so with similar anxieties about what from my vantage point are prevailing intellectual trends of 2014. Perversely, in my experience the new intellectual identities that emerged to expose scientific procedures as social processes in the 20th century (STS) and establish rhetorics of resistance (cultural studies) have been similarly decapitated, recuperated, and dogmatic. [see 1 2 3].

Are these the hauntings of straw men? This is possible. Perhaps the intellectual currents I’ve witnessed are informal expressions, not serious intellectual work. But I think there is a deeper undercurrent which has turned up as I’ve worked on a paper resulting from this conversation about publics. It hinges on the interpretation of an influential article by Fraser in which she contests Habermas’s notion of the public sphere.

In my reading, Fraser more or less maintains the ideal of the public sphere as a place of legitimacy and reconciliation. For her it is notably inequitable, it is plural not singular, the boundaries of what is public and private are in constant negotiation, etc. But its function is roughly the same as it is for Habermas.

My growing suspicion is that this is not how Fraser is used by cultural studies today. This suspicion began when Fraser was introduced to me; upon reading her work I did not find the objection implicit in the reference to her. It continued as I worked with the comments of a reviewer on a paper. It was recently confirmed while reading Chris Wisniewski’s “Digital Deliberation?” in Critical Review, vol. 25, no. 2, 2013. He writes well:

The cultural-studies scholars and critical theorists interested in diversifying participation through the Internet have made a turn away from this deliberative ideal. In an essay first published in 1990, the critical theorist Nancy Fraser (1999, 521) rejects the idealized model of the bourgeois public sphere as defined by Habermas on the grounds that it is exclusionary by design. Because the bourgeois public sphere brackets hierarchies of gender, race, ethnicity, class, etc., Fraser argues, it benefits the interests of dominant groups by default through its elision of socially significant inequalities. Lacking the ability to participate in the dominant discourse, disadvantaged groups establish alternative “subaltern counterpublics”.

Since the ideal speech situation does not acknowledge the socially significant inequalities that generate these counterpublics, Fraser argues for a different goal: a model of participatory democracy in which intercultural communications across socially stratified groups occur in forums that do not elide differences but instead allow diverse multiple publics the opportunity to determine the concerns or good of the public as a whole through “discursive contestations.” Fraser approaches these subgroups as identity publics and argues that culture and political debate are essentially power struggles among self-interested subgroups. Fraser’s ideas are similar to those prevalent in cultural studies (see Wisniewski 2007 and 2010), a relatively young discipline in which her work has been influential.

Fraser’s theoretical model is inconsistent with studies of democratic voting behavior, which indicate that people tend to vote sociotropically, according to a perceived collective interest, and not in favor of their own perceived self-interest (e.g., Kinder and Kiewiet 1981). The argument that so-called “mass” culture excludes the interests of dominated groups in favor of the interests of the elites loses some of its valence if culture is not a site through which self-interested groups vie for their objective interests, but is rather a forum in which democratic citizens debate what constitutes, and the best way to achieve, the collective good. Diversification of discourse ceases to be an end in itself.

I think Wisniewski hits the nail on the head here, a nail I’d like to drive in farther. If culture is conceived of as consisting of the contests of self-interested identity groups, as this version of cultural studies conceives it, then cultural studies will necessarily see itself as one of many self-interested identities. Cultural studies becomes, by its own logic, a counterpublic that exists primarily to advance its own interests.

But just like neo-Thomism, this positioning decapitates cultural studies by preventing it from intellectually confronting its own limitations. No identity can survive rigorous intellectual interrogation, because all identities are based on contingency, finitude, trauma. Cultural studies adopts and repurposes historical rhetorics of liberation much as the neo-Thomists adopted and repurposed the historical metaphysics of Christianity. The obsolescence of these rhetorics, like the obsolescence of Thomistic metaphysics, is what makes them dangerous. The rhetoric that maintains its own subordination as a condition of its own identity can never truly liberate; it can only antagonize. Unable to intellectually realize its own purpose, it becomes purposeless and hence coopted and recuperated like other dogmatisms. In particular, it feeds into “the politicization of absolutely everything,” in the language of Ezra Klein’s spot-on analysis of GamerGate. Cultural studies is a powerful ideology because it turns culture into a field of perpetual rivalry with all the distracting drama of reality television. In so doing, it undermines deeper intellectual penetration into the structural conditions of society.

If cultural studies is the neo-Thomism of today, a dogmatist religious revival of the profound theology of the civil rights movement, perhaps it’s the theocratic invocation of ‘algorithms’ that is the new scientism. I would have more to say about it if it weren’t so similar to the old scientism.


by Sebastian Benthall at February 01, 2015 08:08 PM

January 28, 2015

MIMS 2015

Framing Network Neutrality for the American Electorate

I originally wrote this paper for a class on conceptual metaphor taught by George Lakoff at UC Berkeley’s Department of Linguistics. It borrows a notation for metaphors (e.g., Affection Is Warmth), and many common conceptual systems (e.g., Moral Accounting), from Philosophy in the Flesh [1]. All errors are mine and mine alone.

Introduction

I study technology and public policy with an emphasis on the Internet and computer networking. Network neutrality is an issue that has seen significant attention lately. It is also a topic that I have studied rather extensively from legal, economic and technical perspectives. This paper will look at the metaphors used in framing the network neutrality debate. A running theme will be how the concepts of network neutrality have been adapted to fit the political reality of the United States.

Defining Network Neutrality

Before we can get into the metaphors underlying network neutrality, we need to define the term and scope the arguments of both sides. Network neutrality supporters believe that traffic on the Internet should be forwarded indiscriminately. The term derives from the English common law concept of common carrier, which dates back to the time of English monarchs. We don’t know exactly which one, but some English monarch made it illegal for ferry operators to charge wealthier patrons more for their services. This common carrier concept continued into the United States, where in 1887 it was codified in statute with the Interstate Commerce Act. That act established the Interstate Commerce Commission (ICC), an agency most famous for regulating railroad freight prices based on common carrier principles. Then in 1934 Congress passed the Communications Act, which established the FCC, the agency now responsible for regulating communications networks, and transferred jurisdiction over them from the ICC. The term network neutrality was first used by Columbia Law Professor Tim Wu [2]. Common carrier and network neutrality might have slightly different meanings to different people, but the basic idea is the same.

It’s important to recognize that our concept of network neutrality derives from a concept that is hundreds of years old. Like most old legal concepts it has changed through the application of case law to new technologies, but it has fundamentally remained the same idea. The essence of both the common carrier and network neutrality concepts is forbidding price discrimination on transport services necessary for the public good. Proponents of network neutrality argue that price discrimination of transport should be outlawed. Opponents argue that, while Internet access is a vital service, regulating its price would be harmful to the public good; a laissez-faire approach, including price discrimination, would create a larger benefit to society by lowering costs via a liberal market mechanism.

Network Morality

The moral grounding of this concept, regardless of whether we call it common carrier or network neutrality, is rooted in concepts of fairness imposed by an authority. Proponents believe in an equality of opportunity model of fairness rooted in the moral accounting metaphor. In this model all parties are able to use and access the network equally. Proponents view the FCC as an organization tasked with ensuring this fairness through nurturing, but not meddlesome, behavior. While many proponents may view the FCC as captured by the telecommunications industry, they still see the legitimate job of the FCC as ensuring a fairness of equal distribution. This all aligns well with the nurturant parent model of governance.

In contrast, opponents believe in a scalar model of use and access. In this model a laissez-faire market determines who can use the network based on merit, where merit is measured by wealth. Opponents view the FCC as a meddlesome organization that can only ruin a natural moral ordering of the world. It’s a strict father model of government where the father is seen as overpowering and psychotic. In this version of the strict father model, Morality Is Obedience gets downplayed, since the father cannot be relied upon. Instead the Moral Order is prominent, with the market taking the place of nature in the Folk Theory of the Natural Order. The market is viewed as naturally occurring, so the metaphor becomes The Moral Order Is The Market. Like the proponents’ view, this is also rooted in moral accounting, the main difference being the choice of fairness model.

The Internet Is A Highway System

Now that we have a basic understanding of what network neutrality is, let’s look at how its most complicated nuance is explained to the public via metaphor. This metaphor interests us because it is not only deployed to explain network policy implementation, but also creates entailments in the target domain that don’t match entailments in the source domain. We’ll call it The Internet Is A Highway System.

In President Obama’s most recent address regarding network neutrality, he made the following two promises about a network-neutral future: “There are no gatekeepers deciding which sites you get to access… There are no toll roads on the information superhighway.” [3] Price discrimination via paid prioritization is then conjured by evoking “Internet slow lanes.” In this metaphor network packets are cars travelling down a road, and prioritization allows some of them to travel faster than others, slowing down the cars of the majority. Obama intentionally chose “slow lanes” instead of “fast lanes” to illustrate a negative, rather than a positive, aspect of this entailment.

The biggest problem with this metaphor is that all packets travel at the same speed: roughly the speed of light. What prioritization actually determines is which packets are dropped when links exceed their carrying capacity. The Internet does not guarantee delivery, and packets regularly get dropped, but cars don’t get discarded from the road if there are too many of them. Instead we get congestion, and traffic backs up with cars not moving. The metaphor implies that some packets are just slightly faster than others, but all remain moving, or at least wait in line until they can move.

The reality is much less elegant than that. When low-priority packets are discarded, the original sending computer has to resend them, creating yet more low-priority packets that might be dropped. “Internet slow lanes” is too innocuous a term for what is happening on the wire, and it favors the anti-neutrality argument. A more accurate term might be “Internet drop precedence,” “Internet caste,” or “Internet service class.” Assuming Obama has smart speechwriters, why is he not choosing the strongest framing possible?
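
To make the dropping mechanics concrete, here is a minimal simulation (a toy with made-up numbers: strict-priority scheduling, instant retransmission of drops, and none of TCP’s congestion-control subtleties). Note that low-priority packets are never slowed down; they are discarded and resent, and under sustained oversubscription their backlog grows without bound:

    from collections import deque

    CAPACITY = 10                    # packets the link forwards per tick
    TICKS = 1_000
    OFFERED = {"high": 6, "low": 6}  # 12 offered > 10 capacity, every tick

    retransmits = deque()            # dropped packets waiting to be resent
    delivered = {"high": 0, "low": 0}
    drops = 0

    for _ in range(TICKS):
        # This tick's load: fresh packets plus retries of earlier drops.
        arrivals = ["high"] * OFFERED["high"] + ["low"] * OFFERED["low"]
        arrivals += [retransmits.popleft() for _ in range(len(retransmits))]
        # Strict priority: the high class takes the link first. Nothing
        # travels "slower"; the losers are simply discarded.
        arrivals.sort(key=lambda p: p != "high")
        for pkt in arrivals[:CAPACITY]:
            delivered[pkt] += 1
        for pkt in arrivals[CAPACITY:]:
            drops += 1
            retransmits.append(pkt)  # the sender tries again, adding load

    print(delivered)                 # {'high': 6000, 'low': 4000}
    print("drops:", drops, "backlog still unsent:", len(retransmits))

The high class quietly receives everything it offered; the low class is not late, it is absent. In the highway metaphor, this is less a slow lane than a toll booth that removes cars from the road.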

Network Equality

Neutrality assumes there is an existing conflict that a neutral party is abstaining from. However, in the case of network neutrality, the conflict is in the network itself. Thus network neutrality is actually a neutral party abstaining from a conflict it itself is involved in. This doesn’t make any sense! This is not the same thing as Switzerland remaining neutral during a war between France and Germany. Switzerland remains neutral by not getting involved. How can a network remain neutral when it’s responsible for forwarding packets? Thus, a more accurate term for network neutrality would be network equality.

However, Obama cannot invoke caste, class, or equality in his arguments for net neutrality. Equality is viewed as disturbing the natural ordering of the market, whereas neutrality leaves the market alone. Calling for network equality would get Obama labeled a communist and accused of inciting class warfare. Instead he needs to stick to the language of neutrality, when in fact network neutrality is about treating all packets equally, not neutrally. Thankfully the major proponents of network neutrality understand this framing dilemma, so we get arguments for neutral instead of equal networks.

Ted Cruz gets this as well when he says, “Net neutrality is Obamacare for the Internet.”4 This is a fallacious statement, but it possibly reveals that Cruz understands that network neutrality is really about network equality. He’s trying to invoke the same framing that worked against the Affordable Care Act (ACA). In his strict father morality, equality is a direct threat to meritocratic distribution based on The Moral Order Is The Market. He knows most of his supporters see the world like this as well, so he’s attempting to frame network neutrality as an assault on the natural order. This statement is his attempt to frame the argument as constraining action by a do-gooder, nurturant parent who does more well-intentioned harm than good.

Unfortunately for Ted Cruz, most of his supporters did not agree with this framing of network neutrality. Not because they don’t believe in systems of meritocratic distribution, but because they know that the ACA and network neutrality have nothing to do with one another. It’s an apples to oranges comparison.

Conclusion

I suspect, but have no real proof, that the average voter doesn’t trust statements where they notice a metaphor is being used rhetorically. I posit that people are more likely to view a rhetorical statement as disingenuous when the embedded metaphor is salient. Most people don’t know that it’s metaphor all the way down. When they’re confronted with a metaphorical explanation of a technological topic they find it disingenuous. They feel they’re being tricked. Maybe they are; obviously these metaphors are being deployed rhetorically, but so is everything a politician says.

We witnessed this when Ted Stevens made his infamous “[The Internet is] a series of tubes” statement.5 Because the metaphor in this statement was so salient, many people found it disingenuous. For an explanatory metaphor, it’s not actually that bad.

This paper is just a small example of some of the metaphors involved in the network neutrality debate. We have learned that underlying network neutrality are some primary metaphors for morality, that network “slow lanes” don’t really exist, and that equality, not neutrality, is ultimately what this debate is about. We’ve also learned that, given American political reality, framing the discussion in terms of neutrality instead of equality is more effective for proponents.

  1. Lakoff, George & Johnson, Mark, 1999. Philosophy in the Flesh: The Embodied Mind & its Challenge to Western Thought. New York: Basic Books.

  2. Wu, Tim, “Network Neutrality, Broadband Discrimination,” Journal of Telecommunications and High Technology Law, Vol. 2, p. 141, 2003.

  3. Obama, Barack, Net Neutrality: President Obama’s Plan for a Free and Open Internet

  4. Cruz, Ted, Twitter

  5. Petri, Alexandra, “Sen. Stevens, the tubes salute you,” Washington Post, August 10, 2010. All URLs retrieved Nov 19, 2014

Framing Network Neutrality for the American Electorate was originally published by Andrew McConachie at Metafarce on January 28, 2015.

by Andrew McConachie (andrewm@ischool.berkeley.edu) at January 28, 2015 08:00 AM

January 27, 2015

Ph.D. alumna

Baby Boaz

Boaz Lotan Boyd saw the blizzard coming and knew it was time for him to enter this world. At 4PM on Monday, January 26, a little ball of fuzzy cuteness let out a yelp and announced his presence at a healthy 6 pounds 14 ounces. We are now snuggling in the hospital as the world outside gets blanketed with snow.

I’ll be on parental leave for a bit, but I don’t know exactly what that means. What I do know is that I will prioritize existing collaborations and my amazing team at Data & Society. Please be patient as my family bonds and we get our bearings. Time is the hardest resource to manufacture so please understand that I will probably not be particularly responsive for a while.

by zephoria at January 27, 2015 12:24 AM

January 26, 2015

Ph.D. student

The solution to Secular Stagnation is more gigantic stone monuments

Because I am very opinionated, I know what we should do about secular stagnation.

Secular stagnation is what economists are calling the problem of an economy that is growing incorrigibly slowly due to insufficient demand–low demand caused in part by high inequality. A consequence of this is that for the economy to maintain high levels of employment, real interest rates need to be negative. That is bad for people who have a lot of money and nothing to do with it. What, they must ask themselves in their sleepless nights, can we do with all this extra money, if not save it and earn interest?

History provides an answer for them. The great empires of the past, when they had more money than they knew what to do with and lots of otherwise unemployed people, built gigantic stone monuments. The Pyramids of Egypt. Angkor Wat in Cambodia. Easter Island. Machu Picchu.

The great wonders of the world were all, in retrospect, enormous wastes of time and money. They also created full employment and will be considered amazing forever.

Chances like this do not come often in history.


by Sebastian Benthall at January 26, 2015 04:03 AM

January 19, 2015

MIMS 2012

Why I Became a Design Manager

At the start of 2015, I officially became a Design Manager at Optimizely. I transitioned from an individual contributor, product design role into a leadership role. For the entirety of my career up to this point, I had never planned on going into management. Doing hands-on, nitty-gritty design and coding work seemed so much more exciting and fulfilling. Management sounded more like a necessary evil than an interesting challenge. But since then, my thinking has changed. So why did I become a Design Manager?

When I joined Optimizely in September of 2012, the design team was just 2 people. I made it 3 in early 2013 when my role moved from engineering to design. And at the end of 2014, we were 16. (As of this writing we’re at 21!). So we’ve seen tremendous growth, and I’ve been present for all of it. And throughout this time, there had only been 1 manager for the entire team, which was not healthy for my manager or the team. The possibility of managing had been floated by me in mid-2014, but I wasn’t interested. I had been a Product Designer for less than a year and wasn’t ready to move on yet. I felt like I had just started hitting my stride as a designer, and wanted to continue honing my craft. I recognized the need for another manager, but I didn’t want it to be me.

As more designers joined Optimizely, I began taking on more managerial tasks. I also saw more issues arising within the team that our manager didn’t always have time to address. So in short, the importance of this role became more apparent, and the day-to-day work of the role became more real to me.

But the real turning point came when my manager went on vacation. In his absence, I was the go-to person for all of the team’s needs. I suspended most of my design work for this period, and really got a taste of what it would be like to work as a full-time manager. I started asking myself, “What if this was my full-time role? Would I enjoy it?” I went back and forth in my head quite a bit. The idea of leaving behind design and code was both scary and saddening. I had so much I was still looking forward to building! Plus, as we all know, change is hard.

But the team had more needs than our lone manager could handle. And I care deeply about the team, so for the greater good I decided it was time to step up. I realized that by helping my team be as great as possible, I would have a bigger impact on the company. And by working more closely with engineering managers and PMs, I would have a bigger impact on the product. I’d be getting out of the weeds of day-to-day design to work on the product from a higher perspective across individuals and teams. The impact is less direct, but broader. All of this sounded tremendously exciting to me, and more impactful than individual contributor work.

I also realized the things I love about design (problem solving, ideating, etc.) would still be present in my new role. But instead of applying those skills to concrete visual interfaces, I would apply them to abstract team and personnel issues. I’d be using the design process to solve a different set of problems.

So when my manager got back from vacation, I told him my decision and we started transitioning me into a managerial role. As of the start of 2015, I’ve been managing full time and loving it. I’m still a bit sad to leave design work behind, and worry about my skills atrophying, but I look forward to the new challenges that await. It was a difficult decision that took me a long time to come around to, but I’m excited to make the team, product, and company as great as possible.

by Jeff Zych at January 19, 2015 08:32 PM

January 14, 2015

Ph.D. student

Communication complexity

Today at the Simons Institute Information Theory Boot Camp I learned about communication complexity.

This is exciting to me. I find complexity theory fascinating and I am always drawing on it in my understanding of phenomena in the world. But even though I study communication, and am very interested in both algorithmic information theory and computational complexity theory, I had never even heard of communication complexity until today! Even though it’s a theory that’s been around since 1979 and appears to be a bridge between the two!
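To give a flavor of the theory: its canonical example is the EQUALITY problem. Alice holds an n-bit string x, Bob holds y, and they must decide whether x = y while exchanging as few bits as possible. Deterministically this requires on the order of n bits, but with randomness O(log n) bits suffice. Below is a toy sketch of the classic fingerprinting protocol (constants chosen for illustration rather than tight error bounds; sympy’s randprime is just a convenient prime generator):

```python
from sympy import randprime

def eq_protocol(x: bytes, y: bytes, trials: int = 20) -> bool:
    """Alice holds x, Bob holds y. Instead of sending all of x, Alice
    sends (p, x mod p) for a few small random primes p: O(log n) bits
    per trial instead of n."""
    a = int.from_bytes(x, "big")
    b = int.from_bytes(y, "big")
    n_bits = 8 * max(len(x), len(y), 1)
    for _ in range(trials):
        p = randprime(2, n_bits ** 2 + 3)  # random small prime
        if a % p != b % p:
            return False                    # fingerprints differ: certainly unequal
    return True                             # equal with high probability

print(eq_protocol(b"hello world", b"hello world"))  # True
print(eq_protocol(b"hello world", b"hello worle"))  # False (with high probability)
```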

After the talk today, a couple colleagues and I got into an argument about whether or not this theory is useful for anything. Of course, it’s useful for chip and network protocol design. But what about the social significance of it? Isn’t that what we care about at the School of Information–the social significance of technology?

Shaking my head. Just earlier I had been chatting with another colleague at the boot camp, a PhD candidate in electrical engineering whose work in my opinion has transformative social potential. She is working on privacy-preserving distributed algorithm and protocol design. If her stuff works out, it could turn everything upside down.

It’s not surprising that an electrical engineer working on privacy-preserving protocols would be interested in formal complexity theory as applied to digital communication. Suppose she develops her amazing invention. Five years later, scholars who restrict their studies to the “social impact” of technology will be scrambling to explain and adapt to the changes her work has wrought, all the while insisting that the formal theories used to develop it are irrelevant to their work.

Perhaps it is irrelevant to their work because of the incentives that drive them. If your main purpose is to sound like an expert about the role of technology to other people, you definitely don’t want to speak about cryptic things that are hard to understand. You want to be able to tell a good story. That’s what gives your work ‘social impact’.

Maybe that’s it. If you think the purpose of theory is to communicate ideas, then only the interesting ideas will have a social impact. If you can’t see why an idea is interesting, then there’s no point to it, right?

On the other hand, if you think the purpose of theory is as a guide to praxis of some kind, including design, then you have every reason to learn and evaluate theories in a different way. You search for useful theory, theory that points to ways to make an impact.

Something I’ve been trying to do in my work recently is show how theories from disciplines that are not natively technical can be used to inform technical design. I want to do this because I think that there are good insights in disciplines that think about social phenomena about how to make the world a better place. Many of these insights come from those who think critically about technology. But when people who think critically about technology and its social significance so glibly exempt themselves from understanding how the technology works, that drives me up the wall. The extraordinary amount of effort that has gone into the design of the computer chip or the telecommunications system is a triumphant meditation on the social significance of technology. There are values embedded in that design. Those values can be understood in part by understanding the theories that informed those designs, theories that describe the hard limits of what’s possible with computing, the limits of humanity’s ability to understand or communicate or triumph over nature. It’s beautiful, poetic theory about the tradeoffs inherent in any technical solution.

But it’s irrelevant, because these are solutions that are taken for granted, forgotten, and then criticized by those who had no need to understand how the world that shaped them works, because it works so well. In the Phaedrus, Socrates warned that a dependence on writing for recording knowledge would lead to forgetting. Today, a dependence on technology for embodying values and techniques has led to a special ignorance, in which we hold the ephemerality of fashion and passion to be of the highest significance, and what is firmest and most enduring to be of the least. I wonder what will happen when the infrastructure begins to decay.


by Sebastian Benthall at January 14, 2015 04:23 AM

January 09, 2015

Ph.D. student

Know-how is not interpretable so algorithms are not interpretable

I happened upon Hildreth and Kimble’s “The duality of knowledge” (2002) earlier this morning while writing this and have found it thought-provoking through to lunch.

What’s interesting is that it is (a) 12 years old, (b) a rather straightforward analysis of information technology, expert systems, ‘knowledge management’, etc. in light of solid post-Enlightenment thinking about the nature of knowledge, and (c) an anticipation of the problems of ‘interpretability’ that were, at least as of a couple of months ago, an active topic of academic discussion. Or so I hear.

This is the paper’s abstract:

Knowledge Management (KM) is a field that has attracted much attention both in academic and practitioner circles. Most KM projects appear to be primarily concerned with knowledge that can be quantified and can be captured, codified and stored – an approach more deserving of the label Information Management.

Recently there has been recognition that some knowledge cannot be quantified and cannot be captured, codified or stored. However, the predominant approach to the management of this knowledge remains to try to convert it to a form that can be handled using the ‘traditional’ approach.

In this paper, we argue that this approach is flawed and some knowledge simply cannot be captured. A method is needed which recognises that knowledge resides in people: not in machines or documents. We will argue that KM is essentially about people and the earlier technology driven approaches, which failed to consider this, were bound to be limited in their success. One possible way forward is offered by Communities of Practice, which provide an environment for people to develop knowledge through interaction with others in an environment where knowledge is created, nurtured and sustained.

The authors point out that Knowledge Management (KM) is an extension of the earlier program of Artificial Intelligence and depends on a model of knowledge which maintains that knowledge can be explicitly represented and hence stored and transferred. They propose an alternative way of thinking about things based on the Communities of Practice framework.

A lot of their analysis is about the failures of “expert systems”, a term that has fallen out of use but means basically the same thing as the contemporary non-computational scholarly use of ‘algorithm’. An expert system was a computer program designed to make decisions about things. Broadly speaking, a search engine is a kind of expert system. What’s changed are the particular techniques and algorithms that such systems employ, and their relationship with computing and sensing hardware.

Here’s what Hildreth and Kimble have to say about expert systems in 2002:

Viewing knowledge as a duality can help to explain the failure of some KM initiatives. When the harder aspects are abstracted in isolation the representation is incomplete: the softer aspects of knowledge must also be taken into account. Hargadon (1998) gives the example of a server holding past projects, but developers do not look there for solutions. As they put it, ‘the important knowledge is all in people’s heads’, that is the solutions on the server only represent the harder aspects of the knowledge. For a complete picture, the softer aspects are also necessary. Similarly, the expert systems of the 1980s can be seen as failing because they concentrated solely on the harder aspects of knowledge. Ignoring the softer aspects meant the picture was incomplete and the system could not be moved from the environment in which it was developed.

However, even knowledge that is ‘in people’s heads’ is not sufficient – the interactive aspect of Cook and Seely Brown’s (1999) ‘knowing’ must also be taken into account. This is one of the key aspects to the management of the softer side to knowledge.

In 2002, this kind of argument was seen as a valuable critique of artificial intelligence and the practices based on it as a paradigm. But already by 2002 this paradigm was falling away. Statistical computing, reinforcement learning, decision tree bagging, etc. were already in use at this time. These methods are “softer” in that they don’t require the “hard” concrete representations of the earlier artificial intelligence program, which I believe by that time was already referred to as “Good Old Fashioned AI”, or GOFAI, by a number of practitioners.

(I should note–that’s a term I learned while studying AI as an undergraduate in 2005.)

So throughout the 90’s and the 00’s, if not earlier, ‘AI’ transformed into ‘machine learning’ and became the implementation of ‘soft’ forms of knowledge. These systems are built to learn to perform a task optimally based flexibly on feedback from past performance. They are in fact the cybernetic systems imagined by Norbert Wiener.

Perplexing, then, is the contemporary problem that the models created by these machine learning algorithms are opaque to their creators. These models were created using techniques designed precisely to succeed where systems based on explicit, communicable knowledge had failed.

If you accept the thesis that contemporary ‘algorithms’-driven systems are well-designed implementations of ‘soft’ knowledge systems, then you get some interesting conclusions.

First, forget about interpreting the learned models of these systems and testing them for things like social discrimination, which is apparently in vogue. The right place to focus attention is on the function being optimized. All these feedback-based systems–whether they be based on evolutionary algorithms, or convergence on local maxima, or reinforcement learning, or whatever–are designed to optimize some goal function. That goal function is the closest thing you will get to an explicit representation of the purpose of the algorithm. It may change over time, but it should be coded there explicitly.

Interestingly, this is exactly the sense of ‘purpose’ that Wiener proposed could be applied to physical systems in his landmark essay, published with Rosenblueth and Bigelow, “Behavior, Purpose and Teleology.” In 1943. Sly devil.

EDIT: An excellent analysis of how fairness can be represented as an explicit goal function can be found in Dwork et al. 2011.
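To make this concrete, here is a minimal sketch of what a goal function with explicitly coded values might look like: a logistic model’s predictive loss plus a demographic-parity penalty (a simplification standing in for Dwork et al.’s more careful Lipschitz-based formulation; all names and parameters here are hypothetical):

```python
import numpy as np

def goal(theta, X, y, group, lam=1.0):
    """Hypothetical goal function: predictive loss plus an explicit
    fairness penalty. The optimizer never sees 'fairness' as such,
    only this one number to minimize."""
    p = 1.0 / (1.0 + np.exp(-X @ theta))               # logistic predictions
    loss = -np.mean(y * np.log(p + 1e-9) +
                    (1 - y) * np.log(1 - p + 1e-9))    # cross-entropy loss
    gap = p[group == 0].mean() - p[group == 1].mean()  # demographic parity gap
    return loss + lam * gap ** 2

# e.g. scipy.optimize.minimize(goal, theta0, args=(X, y, group))
```

Auditing lam and the penalty term tells you more about the system’s purpose than inspecting the learned weights ever could.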

Second, because what the algorithm is designed to optimize is generally going to be something like ‘maximize ad revenue’ and not anything particularly explicitly pernicious like ‘screw over the disadvantaged people’, this line of inquiry will raise some interesting questions about, for example, the relationship between capitalism and social justice. By “raise some interesting questions”, I mean, “reveal some uncomfortable truths everyone is already aware of”. Once it becomes clear that the whole discussion of “algorithms” and their inscrutability is just a way of talking about societal problems and entrenched political interests without talking about them directly, it will probably be tabled due to its political infeasibility.

That is (and I guess this is the third point) unless somebody can figure out how to explicitly define the social justice goals of the activists/advocates into a goal function that could be implemented by one of these soft-touch expert systems. That would be rad. Whether anybody would be interested in using or investing in such a system is an important open question. Not a wide open question–the answer is probably “Not really”–but just open enough to let some air onto the embers of my idealism.


by Sebastian Benthall at January 09, 2015 05:52 PM

Horkheimer and Wiener

[I began writing this weeks ago and never finished it. I’m posting it here in its unfinished form just because.]

I think I may be condemning myself to irrelevance by reading so many books. But as I make an effort to read up on the foundational literature of today’s major intellectual traditions, I can’t help but be impressed by the richness of their insight. Something has been lost.

I’m currently reading Norbert Wiener’s The Human Use of Human Beings (1950) and Max Horkheimer’s Eclipse of Reason (1947). The former I am reading for the Berkeley School of Information Classics reading group. Norbert Wiener was one of the foundational mathematicians of 20th century information technology, a colleague of Claude Shannon. Out of his own sense of social responsibility, he articulated his predictions for the consequences of the technology he developed in Human Use. This work was the foundation of cybernetics, an influential school of thought in the 20th century. Terrell Bynum, in his Stanford Encyclopedia of Philosophy article on “Computer and Information Ethics”, attributes to Wiener’s cybernetics the foundation of all future computer ethics. (I think that the threads go back earlier, at least through to Heidegger’s Question Concerning Technology.) It is hard to find a straight answer to the question of what happened to cybernetics. By some reports, the artificial intelligence community cut off cybernetics’ NSF funding in the 60’s.

Horkheimer is one of the major thinkers of the very influential Frankfurt School, the postwar social theorists at the core of intellectual critical theory. Of the Frankfurt School, perhaps the most famous in the United States is Adorno. Adorno is also the most caustic and depressed, and unfortunately much of popular critical theory now takes on his character. Horkheimer is more level-headed. Eclipse of Reason is an argument about the ways that philosophical empiricism and pragmatism became complicit in fascism. Here is an interesting quotation.

It is very interesting to read them side by side. Writing only a few years apart, Wiener and Horkheimer are giants of two very different intellectual traditions. There’s little reason to expect they ever communicated (a more thorough historian would know more). But each makes sweeping claims about society, language, and technology and contextualizes them in a broader intellectual awareness of religion, history and science.

Horkheimer writes about how the collapse of the Enlightenment project of objective reason has opened the way for a society ruled by subjective reason, which he characterizes as the reason of formal mathematics and scientific thinking that is neutral to its content. It is instrumental thinking in its purest, most rigorous form. His descriptions of it sound like gestures to what we today call “data science”–a set of mechanical techniques that we can use to analyze and classify anything, perfecting our understanding of technical probabilities towards whatever ends one likes.

I find this a more powerful critique of data science than recent paranoia about “algorithms”. It is frustrating to read something over sixty years old that covers the same ground as we are going over again today but with more composure. Mathematized reasoning about the world is an early 20th century phenomenon and automated computation a mid-20th century phenomenon. The disparities in power that result from the deployment of these tools were thoroughly discussed at the time.

But today, at least in my own intellectual climate, it’s common to hear a mention of “logic” with the rebuttal “whose logic?“. Multiculturalism and standpoint epistemology, profoundly important for sensitizing researchers to bias, are taken to an extreme that glorifies technical ignorance. If the foundation of knowledge is in one’s lived experience, as these ideologies purport, and one does not understand the technical logic used so effectively by dominant identity groups, then one can dismiss technical logic as merely a cultural logic of an opposing identity group. I experience the technically competent person as the Other and cannot perceive their actions as skill but only as power, and in particular power over me. Because my lived experience is my surest guide, what I experience must be so!

It is simply tragic that the education system has promoted this kind of thinking so much that it pervades even mainstream journalism. This is tragic for reasons I’ve expressed in “objectivity is powerful“. One solution is to provide more accessible accounts of the lived experience of technicality through qualitative reporting, which I have attempted in “technical work“.

But the real problem is that the kind of formal logic that is at the foundation of modern scientific thought, including its most recent manifestation ‘data science’, is at its heart perfectly abstract and so cannot be captured by accounts of observed practices or lived experience. It is reason or thought. Is it disembodied? Not exactly. But at least according to constructivist accounts of mathematical knowledge, which occupy a fortunate dialectical position in this debate, mathematical insight is built from embodied phenomenological primitives that through psychological construction become abstract. This process makes it possible for people to learn abstract principles such as the mathematical theory of information on which so much of the contemporary telecommunications and artificial intelligence apparatus depends. These are the abstract principles with which the mathematician Norbert Wiener was so intimately familiar.


by Sebastian Benthall at January 09, 2015 04:00 PM

Privacy, trust, context, and legitimate peripheral participation

Privacy is important. For Nissenbaum, what’s essential to privacy is control over context. But what is context?

Using Luhmann’s framework of social systems–ignoring for a moment e.g. Habermas’ criticism and accepting the naturalized, systems theoretic understanding of society–we would have to see a context as a subsystem of the total social system. In so far as the social system is constituted by many acts of communication–let’s visualize this as a network of agents, whose edges are acts of communication–then a context is something preserved by configurations of agents and the way they interact.

Some of the forces that shape a social system will be exogenous. A river dividing two cities or, more abstractly, distance. In the digital domain, the barriers of interoperability between one virtual community infrastructure and another.

But others will be endogenous, formed from the social interactions themselves. An example is the gradual deepening of trust between agents based on a history of communication. Perhaps early conversations are formal, stilted. Later, an agent takes a risk, sharing something more personal–more private? It is reciprocated. Slowly, a trust bond, an evinced sharing of interests and mutual investment, becomes the foundation of cooperation. The Prisoner’s Dilemma is solved the old fashioned way.
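Here is a minimal sketch of that ‘old fashioned way’: an iterated Prisoner’s Dilemma with the standard payoffs, both agents playing tit-for-tat. Repetition plus reciprocity sustains cooperation without any external enforcement.

```python
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(partner_history):
    """Cooperate first; thereafter mirror the partner's last move."""
    return partner_history[-1] if partner_history else "C"

def play(rounds=10):
    a_hist, b_hist, score = [], [], [0, 0]
    for _ in range(rounds):
        a = tit_for_tat(b_hist)  # each agent responds to the other's record
        b = tit_for_tat(a_hist)
        pa, pb = PAYOFFS[(a, b)]
        score[0] += pa
        score[1] += pb
        a_hist.append(a)
        b_hist.append(b)
    return score

print(play())  # [30, 30]: sustained mutual cooperation
```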

Following Carey’s logic that communication as mere transmission, when sustained over time, becomes communication as ritual and the foundation of community, we can look at this slow process of trust formation as one of the ways that a context, in Nissenbaum’s sense, perhaps, forms. If Anne and Betsy have mutually internalized each other’s interests, then information flow between them will by and large support the interests of the pair, and Betsy will have low incentives to reveal private information in a way that would be detrimental to Anne.

Of course this is a huge oversimplification in lots of ways. One way is that it does not take into account the way the same agent may participate in many social roles or contexts. Communication is not a single edge from one agent to another in many circumstances. Perhaps the situation is better represented as a hypergraph, as sketched below. One reason why this whole domain may be so difficult to reason about is the sheer representational complexity of modeling the situation. It may require the kind of mathematical sophistication used by quantum physicists. Why not?
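For instance, a toy representation (names hypothetical): contexts as hyperedges over agents, so a single agent sits inside several contexts at once.

```python
# Contexts as hyperedges: each context is a set of participating agents.
contexts = {
    "church":    frozenset({"anne", "betsy"}),
    "workplace": frozenset({"betsy", "carol"}),
    "choir":     frozenset({"anne", "betsy", "carol"}),
}

def contexts_of(agent):
    """All hyperedges (contexts) an agent participates in."""
    return {name for name, members in contexts.items() if agent in members}

print(contexts_of("betsy"))  # {'church', 'workplace', 'choir'} (set order varies)
```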

Not having that kind of insight into the problem yet, I will continue to sling what the social scientists call ‘theory’. Let’s talk about an existing community of practice, where the practice is a certain kind of communication. A community of scholars. A community of software developers. Weird Twitter. A backchannel mailing list coordinating a political campaign. A church.

According to Lave and Wenger, the way newcomers gradually become members and oldtimers of a community of practice is legitimate peripheral participation. This is consistent with the model described above characterizing the growth of trust through gradually deepening communication. Peripheral participation is low-risk. In an open source context, this might be as simple as writing a question to the mailing list or filing a bug report. Over time, the agent displays good faith and competence. (I’m disappointed to read just now that Wenger ultimately abandoned this model in favor of a theory of dualities. Is that a Hail Mary for empirical content for the theory? Also interested to follow links on this topic to a citation of von Krogh 1998, whose later work found its way onto my Open Collaboration and Peer Production syllabus. It’s a small world.

I’ve begun reading, as I write this, the fascinating paper by Hildreth and Kimble 2002, and have now lost my thread. Can I recover?)

Some questions:

  • Can this process of context-formation be characterized empirically through an analysis of e.g. the timing dynamics of communication (cf. Thomas Maillart’s work)? If so, what does that tell us about the design of information systems for privacy?
  • What about illegitimate peripheral participation? Arguably, this blog is that kind of participation–it participates in a form of informal, unendorsed quasi-scholarship. It is a tool of context and disciplinary collapse. Is that a kind of violation of privacy? Why not?

by Sebastian Benthall at January 09, 2015 03:54 PM

January 08, 2015

Ph.D. student

Come to the Trace Ethnography workshop at the 2015 iConference!

We’re organizing a workshop on trace ethnography at the 2015 iConference, led by Amelia Acker, Matt Burton, David Ribes, and myself. See more information about it on the workshop’s website, or feel free to contact me for more information.

Date: March 24th, 2015, 9:00 a.m.–5:00 p.m.

Location: iConference venue, Newport Beach Marriott Hotel & Spa, Newport Beach, CA

Deadline to register through this form: Feb 1st, 2015. Note: you will also have to register through the official iConference website.

Notification: Feb 15th, 2015

Description: This workshop introduces participants to trace ethnography, building a network of scholars interested in the collection and interpretation of trace data and distributed documentary practices. The intended audience is broad, and participants need not have any existing experience working with trace data from either qualitative or quantitative approaches. The workshop provides an interactive introduction to the background, theories, methods, and applications–present and future–of trace ethnography. Participants with more experience in this area will demonstrate how they apply these techniques in their own research, discussing various issues as they arise. The workshop is intended to help researchers identify documentary traces, plan for their collection and analysis, and further formulate trace ethnography as it is currently conceived. In all, this workshop will support the advancement of boundaries, theories, concepts, and applications in trace ethnography, identifying the diversity of approaches that can be assembled around the idea of ‘trace ethnography’ within the iSchool community.

by stuart at January 08, 2015 04:23 PM

December 23, 2014

Ph.D. student

Horkheimer, pragmatism, and cognitive ecology

In Eclipse of Reason, Horkheimer rips into the American pragmatists Peirce, James, and Dewey like nobody I’ve ever read. He paints these figures, normally seen as reasonable and benign, as ignorant and undermining of the whole social order.

The reason is that he believes that they reduce epistemology to a kind of instrumentalism. But that’s selling their position a bit short. Dewey’s moral epistemology is pragmatist in that it is driven by particular, situated interests and concerns, but these are ingredients to moral inquiry and not conclusions in themselves.

So to the extent that Horkheimer is looking to dialectic reason as the grounds for uncovering objective truths, Dewey’s emphasis on establishing institutions that allow for meaningful moral inquiry seems consistent with Horkheimer’s view. The difference is in whether the dialectics are transcendental (as for Kant) or immanent (as for Hegel?).

The tension around objectivity in epistemology that comes up in the present academic environment is that all claims to objectivity are necessarily situated, and this situatedness is raised as a challenge to their objective status. If the claims or their justification depend on conditions that exclude some subjects (as they no doubt do; whether dialectical reason is transcendental or immanent, it requires opportunities for reflection that are rare–privileged), can these conclusions be said to be true for all subjects?

The Friendly AI research program more or less assumes that yes, this is the case. Yudkowsky’s notion of Coherent Extrapolated Volition–the position arrived at by simulated, idealized reasoners–is a 21st century remake of Peirce’s limiting consensus of the rational. And yet the cry from standpoint theorists and certain anthropologically inspired disciplines is a recognition of the validity of partial perspectives. Haraway, for example, calls for an alliance of partial perspectives. Critical and adversarial design folks appear to have picked up this baton. Their vision is of a future of constantly vying (“agonistic”) partiality, with no perspective presuming to be settled, objective or complete.

If we make cognitivist assumptions about the computationality of all epistemic agents, then we are forced to acknowledge the finiteness of all actually existing reasoning. Finite capacity and situatedness become two sides of the same coin. Partiality, then, becomes a function of both one’s place in the network (eccentricity vs. centrality) and one’s capacity to integrate information from the periphery. Those locations in the network most able to valuably integrate information, whether they be Google’s data centers or the conversational hubs of research universities, are more impartial, more objective. But they can never be the complete system. Because of their finite capacity, their representations can at best be lossy compressions of the whole.

Horkheimer dreams of an objective truth obtainable by a single subject through transcendental dialectic. Perhaps he thinks this is unattainable today (I have to read on). But if there’s hope in this vision, it seems to me it must come from one of two possibilities:

  • The fortuitously fractal structure of the sociotechnical world such that an adequate representation of it can be maintained in its epistemic hubs through quining, or
  • A generative grammar or modeling language of cognitive ecology such that we can get insights into the larger interactive system from toy models, and apply these simplified models pragmatically in specific cases. For this to work and not suffer the same failures as theoretical economics, these models need to have empirical content. Something like Wolpert, Lee, and Bono’s Predictive Game Theory (for which I just discovered they’ve released a Python package…cool!) may be critical here.

by Sebastian Benthall at December 23, 2014 03:56 AM

December 21, 2014

Ph.D. student

Eclipse of Reason

I’m starting to read Max Horkheimer’s Eclipse of Reason. I have had high hopes for it and have not been disappointed.

The distinction Horkheimer draws in the first section, “Means and Ends”, is between subjective reason and objective reason.

Subjective reason is the kind of reasoning that is used to most efficiently achieve one’s goals, whatever they are. Writing even as early as 1947, Horkheimer notes that subjective reason has become formalized and reduced to the computation of technical probabilities. He is referring, most likely, to the formalization of logic in the Anglophone tradition by Russell and Whitehead and its use in early computer science. (See Imre Lakatos and programming as dialectic for more background on this, as well as resonant material on where this is going.)

Objective reason is, within a simple “means/ends” binary, most simply described as the reasoning of ends. I am not very far through the book and Horkheimer is so far unspecific about what this entails in practice, instead articulating it as an idea that has fallen out of use. He associates it with Platonic forms. With logos–a word that becomes especially charged for me around Christmas and whose religious connotations are certainly intertwined with the idea of objectivity. Since it is objective and not bound to a particular subject, the rationality of correct ends is the rationality of the whole world or universe, its proper ordering or harmony. Humanity’s understanding of it is not a technical accomplishment so much as an achievement of revelation or wisdom, achieved–and I think this is Horkheimer’s Hegelian/Marxist twist–dialectically.

Horkheimer in 1947 believes that subjective reason, and specifically its formalization, have undermined objective reason by exposing its mythological origins. While we have countless traditions still based in old ideologies that give us shared values and norms simply out of habit, they have been exposed as superstition. And so while our ability to achieve our goals has been amplified, our ability to have goals with intellectual integrity has hollowed out. This is a crisis.

One reason this is a crisis is because (to paraphrase) the functions once performed by objectivity or authoritarian religion or metaphysics are now taken on by the reifying apparatus of the market. This is a Marxist critique that is apropos today.

It is not hard to see that Horkheimer’s critique of “formalized subjective reason” extends to the wide use of computational statistics or “data science” in the vast ways it is now deployed. Moreover, it’s easy to see how the “Internet of Things” and everything else instrumented–the Facebook user interface, this blog post, everything else–participates in this reifying market apparatus. Every critique of the Internet and the data economy from the past five years has just been a reiteration of Horkheimer, whose warning came loud and clear in the 40’s.

Moreover, the anxieties of the “apocalyptic libertarians” of Sam Frank’s article, the Less Wrong theorists of friendly and unfriendly Artificial Intelligence, are straight out of the old books of the Frankfurt School. Ironically, today’s “rationalists” have no awareness of the broader history of rationality. Rather, their version of rationality begins with von Neumann, and ends with two kinds of rationality, “epistemic rationality”, about determining correct beliefs, and “instrumental rationality”, about correctly reaching one’s ends. Both are formal and subjective, in Horkheimer’s analysis; they don’t even have a word for ‘objective reason’, so far has it fallen away from their awareness of what is intellectually possible.

But the consequence is that this same community lives in fear of the unfriendly AI–a superintelligence driven by a “utility function” so inhuman that it creates a dystopia. Unarmed with the tools of Marxist criticism, they are unable to see the present economic system as precisely that inhuman superintelligence, a monster bricolage of formally reasoning market apparati.

For Horkheimer (and I’m talking out of my butt a little here because I haven’t read enough of the book to really know; I’m going on some context I’ve read up on earlier) the formalization and automation of reason is part of the problem. Having a computer think for you is very different from actually thinking. The latter is psychologically transformative in ways that the former is not. It is hard for me to tell whether Horkheimer would prefer things to go back the way they were, or if he thinks that we must resign ourselves to a bleak inhuman future, or what.

My own view, which I am worried is deeply quixotic, is that a formalization of objective reason would allow us to achieve its conditions faster. You could say I’m a logos-accelerationist. However, if the way to achieve objective reason is dialectically, then this requires a mathematical formalization of dialectic. That’s shooting the moon.

This is not entirely unlike the goals and position of MIRI in a number of ways except that I think I’ve got some deep intellectual disagreements about their formulation of the problem.


by Sebastian Benthall at December 21, 2014 11:46 PM

Reflecting on “Technoscience and Expressionism” by @FractalOntology

I’ve come across Joseph Weissman’s (@FractalOntology) “Technoscience and Expressionism” and am grateful for it, as it’s filled me in on a philosophical position that I missed the first time around, accelerationism. I’m not a Deleuzian and prefer my analytic texts to plod, so I can’t say I understood all of the essay. On the other hand, I gather the angle of this kind of philosophizing is intentionally psychotherapeutic and hence serves an artistic/literary function rather than one that explicitly guides praxis.

I am curious about the essay because I would like to see a thorough analysis of the political possibilities for the 21st century that gets past 20th century tropes. The passions of journalistic and intellectual debate have an atavistic tendency due to a lack of imagination that I would like to avoid in my own life and work.

Accelerationism looks new. It was pronounced in a manifesto, which is a good start.

Here is a quote from it:

Democracy cannot be defined simply by its means — not via voting, discussion, or general assemblies. Real democracy must be defined by its goal — collective self-mastery. This is a project which must align politics with the legacy of the Enlightenment, to the extent that it is only through harnessing our ability to understand ourselves and our world better (our social, technical, economic, psychological world) that we can come to rule ourselves. We need to posit a collectively controlled legitimate vertical authority in addition to distributed horizontal forms of sociality, to avoid becoming the slaves of either a tyrannical totalitarian centralism or a capricious emergent order beyond our control. The command of The Plan must be married to the improvised order of The Network.

Hell yeah, the Enlightenment! Sign me up!

The manifesto calls for an end to the left’s emphasis on local action, transparency, and direct democracy. Rather, it calls for a muscular hegemonic left that fully employs and deploys “technoscience”.

It is good to be able to name this political tendency and distinguish it from other left tendencies. It is also good to distinguish it from “right accelerationism”, which Weissman identifies with billionaires who want to create exurb communities.

A left-accelerationist impulse is today playing out dramatically against a right-accelerationist one. And the right-accelerationists are about as dangerous as you may imagine. With silicon valley VCs, and libertarian technologists more generally reading Nick Land on geopolitical fragmentation, the reception or at least receptivity to hard-right accelerants seems problematically open (and the recent $2M campaign proposing the segmentation of California into six microstates seems to provide some evidence for this.) Billionaires consuming hard-right accelerationist materials arguing for hyper-secessionism undoubtedly amounts to a critically dangerous situation. I suspect that the right-accelerationist materials, perspectives, affect, energy expresses a similar shadow, if it is not partly what is catalyzing the resurgence of micro-fascisms elsewhere (and macro ones as well — perhaps most significant to my mind here is the overlap of right-acceleration with white nationalism, and more generally what is deplorably and disingenuously called “race realism” — and is of course simply racism; consider Marine le Pen’s fascist front, which recently won 25% of the seats in the French parliament, UKIP’s resurgence in Great Britain; while we may not hear accelerationist allegiances and watchwords explicitly, the political implications and continuity is at the very least somewhat unsettling…)

There is an unfortunate conflation of several different points of view here. It is too easy to associate racism, wealth, and libertarianism, as these are the nightmares of the left’s political imagination. If ideological writing is therapeutic, a way of articulating one’s dreams, then this is entirely appropriate, with a caveat: every nightmare is a creation of one’s own psychology more so than a reflection of the real world.

The same elisions are made by Sam Frank in his recent article thematizing Silicon Valley libertarianism, friendly artificial intelligence research, and contemporary rationalism as a self-help technique. There are interesting organizational ties between these institutions that are worth investigating, but it would be lazy to collapse vast swathes of the intellectual spectrum into binaries.

In March 2013 I wrote about the Bay Area Rationalists:

There is a good story here, somewhere. If I were a journalist, I would get in on this and publish something about it, just because there is such a great opportunity for sensationalist exploitation.

I would like to say “I called it”–Sam Frank has recently written just such a sensationalist, exploitative piece in Harper’s Magazine. It is thoroughly enjoyable and I wouldn’t say it’s inaccurate. But I don’t think this is the best way to get to know these people. A better one is to attend a CFAR workshop. It used to be that you could avoid the fee with a promise to volunteer, and there was a money-back guarantee which extended to one’s promise to volunteer. If that’s still the case, then one can essentially attend for free.

Another way to engage this community intellectually, which I would encourage the left accelerationists to do because it’s interesting, is to start participating on LessWrong. For some reason this community is not subject to ideological raids like so many other community platforms. I think it could stand for an influx of Deleuze.

Ultimately the left/right divide comes down to a question of distribution of resources and/or surplus. Left accelerationist tactics appear from here to be a more viable way of seizing resources than direct democracy. However, the question is whether accelerationist tactics inevitably result in inequalities that create control structures of the kind originally objected to. In other words, this may simply be politics as usual and nothing radical at all.

So there’s an intersection between these considerations (accelerationist vs. … decelerationism? Capital accumulation vs. capital redistribution?) and the question of decentralization of decision-making process (is that the managerialism vs. multistakeholderism divide?) whose logic is unclear to me. I want to know which affinities are necessary and which are merely contingent.


by Sebastian Benthall at December 21, 2014 06:36 PM

Imre Lakatos and programming as dialectic

My dissertation is about the role of software in scholarly communication. Specifically, I’m interested in the way software code is itself a kind of scholarly communication, and how the informal communications around software production represent and constitute communities of scientists. I see science as a cognitive task accomplished by the sociotechnical system of science, including both scientists and their infrastructure. Looking particularly at scientist’s use of communications infrastructure such as email, issue trackers, and version control, I hope to study the mechanisms of the scientific process much like a neuroscientist studies the mechanisms of the mind by studying neural architecture and brainwave activity.

To get a grip on this problem I’ve been building BigBang, a tool for collecting data from open source projects and readying it for scientific analysis.
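The kind of trace data involved is mundane: mailing list archives, commit histories, issue threads. As a hypothetical sketch of the collection step (this is not BigBang’s actual API; it just uses Python’s standard mailbox module on a downloaded Mailman archive):

```python
import mailbox
from email.utils import parsedate_to_datetime

# Extract (sender, timestamp) pairs from a downloaded Mailman archive:
# the raw trace of who communicated when. The file name is hypothetical.
archive = mailbox.mbox("project-dev.mbox")
events = []
for msg in archive:
    try:
        events.append((msg["From"], parsedate_to_datetime(msg["Date"])))
    except (TypeError, ValueError):
        pass  # skip messages with missing or malformed headers

print(len(events), "messages collected")
```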

I have also been reading background literature to give my dissertation work theoretical heft and to procrastinate from coding. This is why I have been reading Imre Lakatos’ Proofs and Refutations (1976).

Proofs and Refutations is a brilliantly written book about the history of mathematical proof. In particular, it is an analysis of informal mathematics through an investigation of the letters written by mathematicians working on proofs about the Euler characteristic of polyhedra in the 18th and 19th centuries.

Whereas in the early 20th century formal logic was axiomatized, based on the work of Russell and Whitehead and others, prior to this mathematical argumentation had less formal grounding. As a result, mathematicians would argue not just substantively about the theorem they were trying to prove or disprove, but also about what constitutes a proof, a conjecture, or a theorem in the first place. Lakatos demonstrates this by condensing 200+ years of scholarly communication into a fictional, impassioned classroom dialog where characters representing mathematicians throughout history banter about polyhedra and proof techniques.

What’s fascinating is how convincingly Lakatos presents the progress of mathematical understanding as an example of dialectical logic. Though he doesn’t use the word “dialectical” as far as I’m aware, he tells the story of the informal logic of pre-Russellian mathematics through dialog. But this dialog is designed to capture the timeless logic behind what’s been said before. It takes the reader through the thought process of mathematical discovery in abbreviated form.

I’ve had conversations with serious historians and ethnographers of science who would object strongly to the idea of a history of a scientific discipline reflecting a “timeless logic”. Historians are apt to think that nothing is timeless. I’m inclined to think that the objectivity of logic persists over time much the same way that it persists over space and between subjects, even illogical ones, hence its power. These are perhaps theological questions.

What I’d like to argue (but am not sure how) is that the process of informal mathematics presented by Lakatos is strikingly similar to that used by software engineers. The process of selecting a conjecture, then of writing a proof (which for Lakatos is a logical argument whether or not it is sound or valid), then having it critiqued with counterexamples, which may either be global (counter to the original conjecture) or local (counter to a lemma), then modifying the proof, then perhaps starting from scratch based on a new insight… all this reads uncannily like the process of debugging source code.

The argument for this correspondence is strengthened by later work in theory of computation and complexity theory. I learned this theory so long ago I forget who to attribute it to, but much of the foundational work in computer science was the establishment of a correspondence between classes of formal logic and classes of programming languages. So in a sense its uncontroversial within computer science to consider programs to be proofs.

As I write I am unsure whether I’m simply restating what’s obvious to computer scientists in an antiquated philosophical language (a danger I feel every time I read a book, lately) or if I’m capturing something that could be an interesting synthesis. But my point is this: that if programming language design and the construction of progressively more powerful software libraries is akin to the expanding of formal mathematical knowledge from axiomatic grounds, then the act of programming itself is much more like the informal mathematics of pre-Russellian mathematics. Specifically, in that it is unaxiomatic and proofs are in play without necessarily being sound. When we use a software system, we are depending necessarily on a system of imperfected proofs that we fix iteratively through discovered counterexamples (bugs).
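A hypothetical sketch of the correspondence in miniature: the conjecture below carries a hidden lemma (that the input is nonempty), a crude counterexample search finds the refutation, and the fix is Lakatos-style lemma incorporation, restricting the domain of the claim.

```python
import random

def mean(xs):
    return sum(xs) / len(xs)

def conjecture(xs):
    """Conjecture: the mean of a list lies between its min and max."""
    return min(xs) <= mean(xs) <= max(xs)

# Refutation by counterexample search (a crude property-based test):
for _ in range(1000):
    xs = [random.randint(-10, 10) for _ in range(random.randint(0, 5))]
    try:
        assert conjecture(xs)
    except (ValueError, ZeroDivisionError):
        print("counterexample:", xs)  # the empty list refutes a hidden lemma
        break

# Lemma incorporation: amend the claim to 'for all *nonempty* xs ...'
def conjecture_v2(xs):
    return (not xs) or min(xs) <= mean(xs) <= max(xs)
```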

Is it fair to say, then, that whereas the logic of software is formal, deductive logic, the logic of programming is dialectical logic?

Bear with me; let’s presume it is. That’s a foundational idea of my dissertation work. Proving or disproving it may or may not be out of scope of the dissertation itself, but it’s where it’s ultimately headed.

The question is whether it is possible to develop a formal understanding of dialectical logic through a scientific analysis of software collaboration (see a mathematical model of collective creativity). If this could be done, then we could build better software or protocols to assist this dialectical process.


by Sebastian Benthall at December 21, 2014 03:23 PM

December 20, 2014

Ph.D. student

Discourse theory of law from Habermas

There has been at least one major gap in my understanding of Habermas’s social theory which I’m just filling now. The position Habermas reaches towards the end of Theory of Communicative Action vol 2, and develops further in Between Facts and Norms (1992), is the discourse theory of law.

What I think went on is that Habermas eventually gave up on deliberative democracy in its purest form. After a career of scholarship about the public sphere, the ideal speech situation, and communicative action–fully developing the lifeworld as the ground for legitimate norms–he eventually had to make a concession to “the steering media” of money and power as necessary for the organization of society at scale. But at the intersection between lifeworld and system is law. Law serves as a transmission belt between legitimate norms established by civil society and “system”; at its best it is both efficacious and legitimate.

Law is ambiguous; it can serve legitimate citizen interests united in communicative solidarity, but it can also serve strong, powerful interests. But it’s where the action is, because it’s where Habermas sees the ability for lifeworld to counter-steer the whole political apparatus towards legitimacy, including shifting the balance of power between lifeworld and system.

This is interesting because:

  • Habermas is like the last living heir of the Frankfurt School mission and this is a mature and actionable view nevertheless founded in the Critical Theory tradition.
  • If you pair it with Lessig’s Code is Law thesis, you get a framework for thinking about how technical mediation of civil society can be legitimate but also efficacious. I.e., code can be legitimized discursively through communicative action. Arguably, this is how a lot of open source communities work, as well as standards bodies.
  • Thinking about managerialism as a system of centralized power that provides a framework of freedoms within it, Habermas seems to be presenting an alternative model where law or code evolves with the direct input of civil stakeholders. I’m fascinated by where Nick Doty’s work on multistakeholderism in the W3C is going and think there’s an alternative model in there somewhere. There’s a deep consistency in this, noted a while ago (2003) by Froomkin but largely unacknowledged as far as I can tell in the Data and Society or Berkman worlds.

I don’t see in Habermas anything about funding the state. That would mean acknowledging military force and the power to tax. But this is progress for me.

References

Zurn, Christopher. “Discourse Theory of Law.” In Jürgen Habermas: Key Concepts, edited by Barbara Fultner.


by Sebastian Benthall at December 20, 2014 04:43 AM

December 15, 2014

Ph.D. student

Some research questions

Last week was so interesting. Some weeks you get exposed to so many different ideas that it’s a struggle to integrate them. I tried to articulate what’s been coming up as a result. It’s several difficult questions.

  • Assuming trust is necessary for effective context management, how does one organize sociotechnical systems to provide social equity in a sustainable way?
  • Assuming an ecology of scientific practices, what are appropriate selection mechanisms (or criteria)? Are they transcendent or immanent?
  • Given the contradictory character of emotional reality, how can psychic integration occur without rendering one dead or at least very boring?
  • Are there limitations of the computational paradigm imposed by data science as an emerging pan-constructivist practice coextensive with the limits of cognitive or phenomenological primitives?

Some notes:

  • I think that two or three of the questions above may be in essence the same question, in that they can be formalized into the same mathematical problem, with the same solution in each case.
  • I really do have to read Isabelle Stengers and Nancy Nersessian. Based on the signals I’m getting, they seem to be the people most on top of their game in terms of understanding how science happens.
  • I’ve been assuming that trust relations are interpersonal but I suppose they can be interorganizational as well, or between a person and an organization. This gets back to a problem I struggle with in a recurring way: how do you account for causal relationships between a macro-organism (like an organization or company) and a micro-organism? I think it’s when there are entanglements between these kinds of entities that we are inclined to call something an “ecosystem”, though I learned recently that this use of the term bothers actual ecologists (no surprise there). The only things I know about ecology are from reading Ulanowicz papers, but those have been so on point and beautiful that I feel I can proceed with confidence anyway.
  • I don’t think there’s any way to get around having at least a psychological model to work with when looking at these sorts of things. A recurring and promising angle is that of psychic integration. Carl Jung, who has inspired clinical practices that I can personally vouch for, and Gregory Bateson both understood the goal of personal growth to be integration of disparate elements. I’ve learned recently from Turner’s The Democratic Surround that Bateson was a more significant historical figure than I thought, unless Turner’s account of history is a glorification of intellectuals who appeal to him, which is entirely possible. Perhaps more importantly to me, Bateson inspired Ulanowicz, and so these theories are compatible; Bateson was also a cyberneticist following Wiener, who was prescient and either foundational to contemporary data science or a good articulator of its roots. But there is also a tie-in to constructivist epistemology. DiSessa’s epistemology, building on Piaget but embracing what he calls the computational metaphor, understands the learning of math and physics as the integration of phenomenological primitives.
  • The purpose of all this is ultimately protocol design.
  • This does not pertain directly to my dissertation, though I think it’s useful orienting context.

by Sebastian Benthall at December 15, 2014 07:03 AM

Ph.D. student

what i'm protesting for

[Meta: Is noise an appropriate list for this conversation? I hope so, but I take no offense if you immediately archive this message.]

I’ve been asked, what are you protesting for? Black lives matter, but what should we do about it, besides asking cops not to shoot people?

Well, I think there’s value in marching and protesting even without a specific goal. If you’ve been pushed to the edge for so long, you need some outlet for anger and frustration and I want to respect that and take part to demonstrate solidarity. I see good reason to have sympathy even for protesters taking actions I wouldn’t support.

As Jay Smooth puts it, "That unrest we saw […] was a byproduct of the injustice that preceded it." Or MLK, Jr: "I think that we've got to see that a riot is the language of the unheard."

If you’re frustrated with late-night protests that express that anger in ways that might include destruction of property, I encourage you to vote with your feet and attend daytime marches. I was very pleased to run into iSchool alumni at yesterday afternoon’s millions march in Oakland. Families were welcome and plenty of children were in attendance.

But I also think it’s completely reasonable to ask for some pragmatic ends. To really show that black lives matter, we must take action that decreases the number of these thoughtless deaths. There are various lists of demands you can find online (I link to a few below). But below I’ll list four demands I’ve seen that resonate with me (and aren’t Missouri-specific). This is what I’m marching for, and will keep marching for. (It is obviously not an exhaustive list or even the highest priority list for people who face this more directly than I; it's just my list.) If our elected and appointed government leaders, most of whom are currently doing nothing to lead or respond to the massive outpouring of popular demand, want there to be justice and want protesters to stop protesting, I believe this would be a good start.

* Special Prosecutor for All Deadly Force Cases

Media have reported extensively on the “grand jury decisions” in St. Louis County and in Staten Island. I believe this is a misnomer. Prosecutors, who regularly work very closely with their police colleagues in bringing charges, have gone to extraordinary lengths in the alleged investigations of Darren Wilson and Daniel Pantaleo to avoid obtaining indictments. Bob McCulloch in St. Louis spent several months presenting massive amounts of evidence to the grand jury, and presented conflicting or even grossly inaccurate information about the laws governing police use of force. A typical grand jury in such a case would have taken a day: presentation of forensic evidence showing several bullets fired into an unarmed man and a couple of eyewitnesses describing the scene would have been more than enough to get an indictment on a number of different charges and have a proper public trial. Instead, McCulloch sought to have a sort of closed-door trial in which Wilson himself testified (unusual for a grand jury), and then presented as much evidence as he could in defense of the police officer during the announcement of non-indictment. That might sound like a fair process with the evidence, but it’s actually nothing like a trial, because we don’t have both sides represented, we don’t have public transparency, and we don’t have counsel cross-examining statements and the like. If regular prosecutors (who work with these police forces every day) won’t actually seek indictments in cases where police kill unarmed citizens, states need to set a formal requirement that independent special prosecutors will be appointed in cases of deadly force.

More info:
The Demands
Gothamist on grand juries

* Police Forces Must Be Representative and Trained

States and municipalities should take measures so that their police forces are representative of the communities they police. We should be suspicious of forces whose officers are not residents of the towns they serve and protect, or whose racial makeup is dramatically different from the population’s. In Ferguson, Missouri, for example, a mostly white police force serves a mostly black town, and makes significant revenue by citing and fining those black residents at extremely high rates. Oakland has its own issues with police who aren’t residents (in part, I expect, because of the high cost of living here). But I applaud Oakland for running police academies in order to give sufficient training to existing residents so they can become officers. Training might also be one way to help with racial disparities in policing. Our incoming mayor, Libby Schaaf, calls for an increase in “community policing”. I’m not sure why she isn’t attending and speaking up at these protests and demonstrating her commitment to implementing such changes in a city where lack of trust in the police has been a deep and sometimes fatal problem.

More info:
The Demands
Libby Schaaf on community policing
FiveThirtyEight on where police live
WaPo on police race and ethnicity
Bloomberg on Ferguson ticketing revenues

* The Right to Protest

Police must not use indiscriminate violent tactics against non-violent protesters. I was pleased to have on-the-ground reports from our colleague Stu from Berkeley this past week. The use of tear gas, a chemical weapon, against unarmed and non-violent student protesters is particularly outrageous. If our elected officials want our trust, they need to work on coordinating the activities of different police departments and making it absolutely clear that police violence is not an acceptable response to non-violent demonstration.

Did the Oakland PD really not even know about the undercover California “Highway” Patrol officers who were walking with protesters at a march in Oakland and then wildly waved a gun at protesters and media when they were discovered? Are police instigating vandalism and violence among protesters?

In St. Louis, it seemed to be a regular problem that no one knew who was in charge of the law enforcement response to protesters, and we seem to be having the same problem when non-Berkeley police are called in to confront Berkeley protesters. Law enforcement must make it clear who is in charge and to whom crimes and complaints about police brutality can be reported.

More info:
Tweets on undercover cops
staeiou on Twitter

* Body Cameras for Armed Police

The family of Michael Brown has said:

Join with us in our campaign to ensure that every police officer working the streets in this country wears a body camera.

This is an important project, one that has received support even from the Obama administration, and one where the School of Information can be particularly relevant. While it’s not clear to me that all or even most law enforcement officials need to carry firearms at all times, we could at least ask that those officers use body-worn cameras to improve transparency about events where police use potentially deadly force against civilians. The policies, practices and technologies used for those body cameras and the handling of that data will be particularly important, as emphasized by the ACLU. Cameras are no panacea — the killings of Oscar Grant, Eric Garner, Tamir Rice and others have been well-captured by various video sources — but at least some evidence shows that increased transparency can decrease use of violence by police and help absolve police of complaints where their actions are justified.

More info:
ACLU on body cameras
White House fact sheet on policing reforms proposal
NYT on Rialto study of body cameras

Finally, here are some of the lists of demands that I’ve found informative or useful:
The Demands
Millions March Demands, via Facebook
Millions March NYC Demands, via Twitter
MillionsMarchOakland Demands, via Facebook

I have valued so much the conversations I’ve been able to have in this intellectual community about these local protests and the ongoing civil rights struggle. I hope these words can contribute something, anything to that discussion. I look forward to learning much more.

Nick

by npdoty@ischool.berkeley.edu at December 15, 2014 12:50 AM

December 10, 2014

Ph.D. student

Discovering Thomas Sowell #blacklivesmatter

If you come up with a lot of wrong ideas and pay a price for it, you are forced to think about it and change your ways or else be eliminated. But there is no such test. The only test for most intellectuals is whether other intellectuals go along with them. And if they all have a wrong idea, then it becomes invincible.

On Sunday night I walked restlessly through the streets of Berkeley while news helicopters circled overhead and sirens wailed. For the second night in a row I saw lines of militarized police. Texting with a friend who had participated in the protests the night before about how he was assaulted by the cops, I walked down Shattuck counting smashed shop windows. I discovered a smoldering dumpster. According to Bernt Wahl, who I bumped into outside of a shattered RadioShack storefront, there had been several fires started around the city; he was wielding a fire extinguisher, advising people to walk the streets to prevent further looting.

The dramatic events around me and the sincere urgings of many deeply respected friends that I join the public outcry against racial injustice made me realize that I could no longer withhold judgment on the Brown and Garner cases and the responses to them. I had reserved my judgment, unwilling to follow the flow of events as told play-by-play by journalists because, frankly, I don’t trust them. As I was discussing this morning with somebody in my lab, real journalism takes time. You have to interview people, assemble facts. That’s not how things are being done around these highly sensitive and contentious issues. In The Democratic Surround, Fred Turner writes about how, in the history of the United States, psychologists and social scientists once thought the principal mechanism by which fascism spread was the mass media’s skillful manipulation of its audience’s emotions. Out of their concern for mobilizing the American people to fight World War II, the state sponsored a new kind of domestic media strategy that aimed to give its audience the grounds to come to its own rational conclusions. That media strategy sustained what we now call “the greatest generation.” These principles seem to be lacking in journalism today.

I am a social scientist, so when I started to investigate the killings thoroughly, the first thing I wanted to see was numbers. Specifically, I wanted to know the comparative rates of police killings broken down by race so I could understand the size of the problem. The first article I found on this subject was Jack Kelly’s article in Real Clear Politics, which I do not recommend you read. It is not a sensitively written piece and some of the sources and arguments he uses signal, to me, a conservative bias.

What I do highly recommend you read are two of Kelly’s sources, which he doesn’t link to but which are both in my view excellent. One is Pro Publica’s research into the data about police violence and the killings of young men. It gave me a sense of proportion I needed to understand the problems at hand.

Thomas Sowell

The other is this article on Michael Brown published last Saturday by Thomas Sowell, who has just skyrocketed to the top of my list of highly respected people. Sowell is far more accomplished than I will ever be and of much humbler origins. He is a military veteran and apparently a courageous scholar. He is now Senior Fellow at the Hoover Institution at Stanford University. Though I am at UC Berkeley and say this very grudgingly, as I write this blog post I am slowly coming to understand that Stanford might be a place of deep and sincere intellectual inquiry, not just the preprofessional school spitting out entrepreneurial drones that I had caricatured it as.

Sowell’s claim is that the grand jury determined that Brown was guilty of assaulting the officer who shot him, and that this judgment was based on the testimony of several black witnesses. He notes the tragedy of the riots related to the event and accuses the media of misrepresenting the facts.

So far I have no reason to doubt Sowell’s sober analysis of the Brown case. From what I’ve heard, the Garner case is more horrific and I have not yet had the stomach to work my way through its complexities. Instead I’ve looked more into Sowell’s scholarly work. I recommend watching this YouTube video of him discussing his book Intellectuals and Race in full.

I don’t agree with everything in this video, and not just because much of what Sowell says is the sort of thing I “can’t say”. I find the interviewer too eager in his guiding questions. I think Sowell does not give enough credence to the prison industrial complex and ignores the recent empirical work on the value of diversity–I’m thinking of Scott Page’s work in particular. But Sowell makes serious and sincere arguments about race and racism with a rare historical awareness. In particular, he is critical of the role of intellectuals in making race relations in the U.S. worse. As an intellectual myself, I think it’s important to pay attention to this criticism.


by Sebastian Benthall at December 10, 2014 03:30 PM

December 09, 2014

Ph.D. alumna

Data & Civil Rights: What do we know? What don’t we know?

From algorithmic sentencing to workplace analytics, data is increasingly being used in areas of society that have had longstanding civil rights issues.  This prompts a very real and challenging set of questions: What does the intersection of data and civil rights look like? When can technology be used to enable civil rights? And when are technologies being used in ways that undermine them? For the last 50 years, civil rights has been a legal battle.  But with new technologies shaping society in new ways, perhaps we need to start wondering what the technological battle over civil rights will look like.

To get our heads around what is emerging and where the hard questions lie, the Data & Society Research Institute, The Leadership Conference on Civil and Human Rights, and New America’s Open Technology Institute teamed up to host the first “Data & Civil Rights” conference.  For this event, we brought together diverse constituencies (civil rights leaders, corporate allies, government agencies, philanthropists, and technology researchers) to explore how data and civil rights are increasingly colliding in complicated ways.

In preparation for the conversation, we dove into the literature to see what is known and unknown about the intersection of data and civil rights in six domains: criminal justice, education, employment, finance, health, and housing.  We produced a series of primers that contextualize where we’re at and what questions we need to consider.  And, for the conference, we used these materials to spark a series of small-group moderated conversations.

The conference itself was an invite-only event, with small groups brought together to dive into hard issues around these domains in a workshop-style format.  We felt it was important that we make available our findings and questions.  Today, we’re releasing all of the write-ups from the workshops and breakouts we held, the videos from the level-setting opening, and an executive summary of what we learned.  This event was designed to elicit tensions and push deeper into hard questions. Much is needed for us to move forward in these domains, including empirical evidence, innovation, community organizing, and strategic thinking.  We learned a lot during this process, but we don’t have clear answers about what the future of data and civil rights will or should look like.  Instead, what we learned in this process is how important it is for diverse constituencies to come together to address the challenges and opportunities that face us.

Moving forward, we need your help.  We need to go beyond hype and fear, hope and anxiety, and deepen our collective understanding of technology, civil rights, and social justice. We need to work across sectors to imagine how we can create a more robust society, free of the cancerous nature of inequity. We need to imagine how technology can be used to empower all of us as a society, not just the most privileged individuals.  This means that computer scientists, software engineers, and entrepreneurs must take seriously the costs and consequences of inequity in our society. It means that those working to create a more fair and just society need to understand how technology works.  And it means that all of us need to come together and get creative about building the society that we want to live in.

The material we are releasing today is a baby step, an attempt to scope out the landscape as best we know it so that we can all work together to go further and deeper.  Please help us imagine how we should move forward.  If you have any ideas or feedback, don’t hesitate to contact us at nextsteps at datacivilrights.org

(Image by Mark K.)

by zephoria at December 09, 2014 04:47 PM

December 07, 2014

MIMS 2012

Warm Gun 2014 Conference Notes

This year’s Warm Gun conference was great, just like last year’s. This year generally seemed to be about using design to generate and validate product insights, e.g. through MVPs, prototyping, researching, etc.

Kickoff (Jared Spool)

Jared Spool’s opening talk focused mostly on MVPs and designing delight into products. To achieve this, he recommended the Kano Model and Dana Chisnell’s Three Approaches to Delight: adding pleasure (e.g. through humorous copy), improving the flow, and adding meaning (e.g. believing in a company’s mission; the hardest to achieve).

You’re Hired! Strategies for Finding the Perfect Fit (Kim Goodwin)

This was a great talk about how to hire a great design team, which is certainly no easy task (as I’ve seen at Optimizely).

  • Hiring is like dating on a deadline – after a couple of dates, you have to decide whether or not to marry the person!
  • You should worry more about missing the right opportunity, rather than avoiding the wrong choice

5 Lessons She’s Learned

  1. Hire with a long-term growth plan in mind
    • Waiting until you need someone to start looking is too late (it takes months to find the right person)
    • What kind of designers will you need? Do you want generalists (can do everything, but not great at any one thing; typically needed by small startups) or specialists (great at one narrow thing, like visual design; typically needed by larger companies)?
    • Grow (i.e. mentor junior people) or buy talent? Training junior people isn’t cheap - it takes a senior person’s time.
      • A healthy team has a mix of skills levels (she recommends a ratio of 1 senior:4 mid:2 junior). (Optimizely isn’t far off – we mostly lack a senior member!)
    • A big mistake she sees a lot of startups make is to hire too junior of a person too early
  2. Understand the Market
    • The market has 5 cohorts: low skill junior folks (think design is knowing tools); spotty/developing skills; skilled specialists; generalists; and team leads
    • Senior == able to teach others, NOT a title (there’s lots of title inflation in the startup industry). Years of experience does NOT make a person senior. Many people with “senior” in their title have holes in their skills (especially if they’ve only worked on small teams at startups)
    • 5 years experience only at a design consultancy == somewhat valuable (lots of mentorship opportunities from senior folks, but lack working continuously/iteratively on a product)
    • 5 years experience mixed at design consultancy and on an in-house team == best mix of skills (worked with senior folks, and on a single product continuously)
    • 5 years only on small startup teams == less valuable than the other two; is a red flag. There are often holes in the skills. They’re often “lone rangers” who haven’t worked with senior designers, or a team, and probably developed bad habits. Often have an inflated self-assessment of their skills and don’t know how to take feedback. (uh-oh! I’m kind of in this group)
    • It takes craft to assess craft
    • Alternate between hiring leads and those who can learn from the leads (i.e. a mix of skill levels)
    • Education matters - school can fill in gaps of knowledge. Schools have different types of people they produce (HCI, visual, etc.). (yay for me!)
  3. Make 2 lists before posting the job
    • First, list the responsibilities a person will have (what will they do?)
    • Second, list the skills they need to achieve the first list.
    • Turn these 2 lists into a job posting (avoid listing tools in the hiring criteria - that is dumb)
    • DON’T look for someone who has experience designing your exact product in your industry already. The best designers can quickly adapt to different contexts (better to hire a skilled designer with no mobile experience than a junior/mid designer with ONLY mobile experience - there’s ramp-up time, but that’s negligible for a skilled designer)
    • Junior to senior folks progress through this: Knows concepts -> Can design effectively w/coach -> Can design solo -> Can teach others
    • On small/inexperienced teams, watch out for the “Similar to me” effect. Designers new to hiring/interviewing will evaluate people against themselves, rather than evaluate their actual skills or potential. (Can ask, “Where were you relative to this person when you were hired?” to combat this).
  4. Evaluate Based on Skills You Need
    • Resumes == bozo filter
    • Look at the goals, process, role, results, lessons learned, things they’d do differently (we’re pretty good at this at Optimizely!)
    • Do “behavioral interviewing”, i.e. focus on specifics of actual behavior. Look at their actual work, do design exercises, ask “Tell me about a time when…”. It’s a conversation, not an interrogation. (Optimizely is also good at this!)
  5. Be Real to Close the Deal
    • Be honest about what you’re looking for
    • If you have doubts about a person, discuss them directly with the candidate to try overcome them (or confirm them). (We need to do this more at Optimizely)

Product Strategy in a Growing Company (Des Traynor)

This was one of my favorite talks. Product strategy is hard, and it’s really easy to say “Yes” to every idea and feature request. One of my favorite bits was that you need to say “No” because something’s not in the product vision. If you never say this, then you have no product vision. (This has been a challenge at Optimizely at times).

  • Software is eating the world!
  • We’re the ones who control the software that’s available to everyone.
  • His niece asked, “Where do products come from?”. There are 5 reasons a product is built:
    1. Product visionary
    2. Customer-focused (built to solve pain point(s))
    3. Auteur (art + business)
    4. Scratching own itch (you see a need in the marketplace)
    5. Copy a pattern (e.g. “Uber for X!”)
  • (Optimizely started as scratching own itch, but is adapting to customer-focused)
  • Scope: scalpel vs. swiss army knife
    • When first starting, build a scalpel (it’s the only way to achieve marketshare when starting)
    • Gall’s Law: complex systems evolve from simple ones (like WWW. Optimizely is also evolving towards complexity!). You can’t set out to design a complex system from scratch (think Google Wave)
  • A simple product !== making a product simple (i.e. an easy to use product isn’t necessarily simple [difference between designing a whole UX vs. polishing a specific UI]).
    • Simplify a product by removing steps. Watch out for Scopi-locks (i.e. scope the problem just right – not too big, not too small). You don’t want to solve steps of a flow that are already solved by a market leader, or when there are multiple solutions already in use by people (e.g. don’t rebuild email, there’s already Gmail and Outlook and Mailbox, etc.)
  • How to fight feature creep
    • Say “No” by default
    • Have a checklist new ideas must go through before you build them, e.g. (this is a subset):
      • Does it benefit all customers?
      • Is the value worth the effort?
      • Does it improve existing features? Does it increase engagement across the system, or divide it?
      • If a feature takes off, can we afford it? (E.g. if you have a contractor build an Android app, how will you respond to customer feedback and bugs?)
      • Is it low effort for the customer to use, and result in high value? (E.g. Circles in G+ fail this criteria - no one wants to manage friends like this)
      • It’s been talked about forever; it’s easy to build; we’ve put a lot of effort in already == all bad reasons to ship a new feature
    • (Optimizely has failed at this a couple of times. E.g. Preview As. On the other hand, Audiences met these criteria)
    • Once you ship something, it’s really hard to take back. Even if customers complain about it, there is always a minority that will be really angry if you take it away.

Hunches, Instincts, and Trusting Your Gut (Leah Buley)

This was probably my least favorite talk. The gist of it is that as a designer, there are times you need to be an expert and just make a call using your gut (colleagues and upper management need you to be this person sometimes). We have design processes to follow, but there are always points at which you need to make a leap of faith and trust your gut. I agree with those main points, but this talk lost me by focusing only on visual design. She barely mentioned anything about user goals or UX design.

Her talk was mainly about “The Gut Test”, which is a way of looking at a UI (or print piece, or anything that has visual design) and assessing your gut reaction to it. This is useful for visual design, but won’t find issues like, “Can the user accomplish their goal? Is the product/feature easy to use?” (Something can be easy to use, but fail her gut test). It’s fine that she didn’t tackle these issues, but I wish she would have acknowledged more explicitly that the talk was only addressing a thin slice of product design.

  • In the first 50ms of seeing something, we have an immediate visceral reaction to things
  • Exposure effect: the more we see something, the more we like it (we lose our gut reaction to it)
  • To combat this, close your eyes for 5 seconds, then look at a UI and ask these questions:
    • What do you notice first?
    • How does it make you feel (if anything)? What words would you use to describe it?
    • Is it prototypical? (i.e. does it conform to known patterns). Non-conformity == dislike and distrust
  • Then figure out what you can do to address any issues discovered.

Real Life Trust Online (Mara Zepeda)

This talk was interesting, but not directly applicable to my everyday job at Optimizely. The gist of it was how do we increase trust in the world, and not just in the specific product or company we’re using? For example, when you buy or sell something successfully on Craigslist, your faith in humanity increases a little bit. But reviews on Amazon, for example, increases your trust in that product and Amazon, but not necessarily in your fellow human beings.

  • Before trust is earned, there’s a moment of vulnerability and an optimism about the future.
  • Trust gets built up in specific companies (e.g. Craigslist - there’s no reason to trust the company or site, but trust in humans and universe increases when a transaction is successful).
  • Social networks don’t create networks of trust in the real world
  • Switchboard MVP was a phone hotline
    • LinkedIn: ask for job connections, no one responds. But if you call via Switchboard, people are more likely to respond (there’s a real human connection)
    • They’re trying to create a trust network online
  • To build trust:
    • Humanize the transaction (e.g. make it person to person)
    • Give a victory lap (i.e. celebrate when the system works)
    • Provide allies / mentors along the journey (i.e. people who are invested in the journey being successful, and can celebrate the win)
  • She brought up the USDA’s “Hay Net” as an example of this. It connects those who have Hay with those who need Hay (and vice versa). UI had two options: “Have Hay” and “Need Hay”, which I find hilarious and amazing.

Designing for Unmet Needs (Steve Portigal)

Steve Portigal’s talk was solid, but it didn’t really tell me anything I didn’t already know. The gist of it was there are different types of research (generative v. evaluative), you need to know which is appropriate for your goals (although it’s a spectrum, not a dichotomy), and there are ways around anything you think is stopping you (e.g. no resources; no users; no product; etc.). The two most interesting points to me were:

  • Create provocative examples/prototypes/mocks to show people and gather responses (he likened this to a scientist measuring reactions to new stimuli). Create a vision of the future and see what people think of it, find needs, iterate, adapt. Go beyond iterative improvements to an existing product or UI (we’re starting to explore this technique at Optimizely now).
  • Build an infrastructure for ongoing research. This is something that’s been on my mind for a while, since we’re very reactive in our research needs at Optimizely. I’d like us to have more continual, ongoing research that’s constantly informing product decisions.

Redesigning with Confidence (Jessica Harllee)

This was a cool talk that described the process Etsy went through to redesign the seller onboarding experience, and how they used data to be confident in the final result. The primary goal was increasing the shop open rate, while maintaining the products listed per seller. They a/b tested a new design that increased the open rate, but had fewer products listed per seller. They made some tweaks, a/b tested again, and found a design that increased the shop open rate while maintaining the number of products listed per seller. Which means more products are listed on Etsy overall!

I didn’t learn much new from this talk, but enjoy hearing these stories. It also got me thinking about how we don’t a/b test much in the product at Optimizely. A big reason is because it takes too long to get significant results (as Jessica mentioned in her talk, they had to run both tests for months, and the overall project took over a year). Another reason is that when developing new features, there aren’t any direct metrics to compare. Since Jessica’s example was a redesign, they could directly compare behavior of the redesign to the original.
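
As an aside on the mechanics (a sketch of my own, with invented numbers rather than Etsy’s): comparing two conversion rates like shop open rate typically comes down to a two-proportion z-test, and the months-long test durations Jessica mentioned follow from needing enough samples for the standard error to shrink below the size of the effect you care about.

    from math import sqrt, erf

    def two_proportion_z(conv_a, n_a, conv_b, n_b):
        """Two-sided z-test for a difference between two conversion rates."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
        return z, p_value

    # Invented numbers: 8.0% vs. 8.6% shop open rate over 50,000 sellers each.
    print(two_proportion_z(4000, 50000, 4300, 50000))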

Designing for Startups Problems (Braden Kowitz)

Braden’s talk was solid, as usual, but since I’ve seen him talk before and read his blog I didn’t get much new out of it. His talk was about how Design (and therefore, designers) can help build a great company (beyond just UIs). Most companies think of design at the “surface level”, i.e. visual design, logos, etc., but at its core design is about product and process and problem solving. Design can help at the highest levels.

  • 4 Skills Designers Need:
    1. Know where to dig
      • Map the problem
      • Identify the riskiest part (e.g. does anyone need this product or feature at all?)
      • Design to answer that question. Find the cheapest/simplest/fastest thing you can create to answer this question (fake as much as you can to avoid building a fully working artifact)
    2. Get dirty
      • Prototype as quickly as possible (colors, polish, etc., aren’t important)
      • Focus on the most important part, e.g. the user flow, layout, copy, etc. Use templates/libraries to save time
      • Use deadlines (it’s really easy to polish a prototype forever)
    3. Pump in fresh data
      • Your brain fills in gaps in data, so have regular research and testing (reinforces Portigal’s points nicely)
    4. Take big leaps
      • Combine the above 3 steps to generate innovative solutions to problems

Accomplish Big Goals with Objective & Key Results (Christina Wodtke)

This was an illuminating talk about the right way to create and evaluate OKRs. I didn’t hear much I hadn’t already heard (we use OKRs at Optimizely and have discussed best practices). But to recap:

  • Objective == Your Dream, Your Goal. It’s hard. It’s qualitative.
  • Key Results == How you know you reached your goal. They’re quantitative. They’re measurable. They’re not tasks (it’s something you can put on a dashboard and measure over time, e.g. sales numbers, adoption, etc.).
  • Focus: only have ONE Objective at a time, and measure it with 3 Key Results. (She didn’t talk about how to scale this as companies get bigger. I wish she had.)
  • Measure it throughout the quarter so you can know how you’re tracking. Don’t wait until the end of the quarter.

Thought Experiments for Data-Driven Design (Aviva Rosenstein)

This was an illuminating talk about the right way to incorporate data into the decision making process. You need to find a balance between researching/measuring to death, and making a decision. She used DocuSign’s feedback button as a good example of this.

  • Don’t research to death — try something and measure the result (but make an educated guess).
  • DocuSign tried to roll their own “Feedback” button (rather than using an existing service). They gave the user a text box to enter feedback, and submitting it sent it to an email alias (not stored anywhere; not categorized at all).
    • This approach became a data deluge
    • There was no owner of the feedback
    • Users entered all kinds of stuff in that box that shouldn’t have gone there (e.g. asking for product help). People use the path of least resistance to get what they want. (I experienced this with the feedback button in the Preview tool)
  • Data should lead to insight (via analysis and interpretation)
  • Collecting feedback by itself has no ROI (can be negative because if people feel their feedback is being ignored they get upset)
  • Aviva’s goal: find a feedback mechanism that’s actually useful
  • Other feedback approaches:
    • Phone/email conversation (inefficient, hard to track)
    • Social media (same as above; biased)
    • Ad hoc survey/focus groups (not systematic; creating good surveys is time consuming)
  • Feedback goals:
    1. Valid: trustworthy and unbiased
    2. Delivery: goes to the right person/people
    3. Retention: increase overall company knowledge; make it available when needed
    4. Usable: can’t pollute customer feedback
    5. Scalable: easy to implement
    6. Contextual: gather feedback in context of use
  • They changed their feedback mechanism slightly by asking users to bucket the feedback first (e.g. “Billing problems”, “Positive feedback”, etc.), then emailed it to different places. This made it more actionable.
  • Doesn’t need to be “Ready -> Fire -> Aim”: we can use data and the double diamond approach to inform the problem, and make our best guess.
    • This limits collateral damage from not aiming. A poorly aimed guess can mar the user experience, which users don’t easily forget.

Growing Your Userbase with Better Onboarding (Samuel Hulick)

This was one of my favorite talks of the day (and not only because Samuel gave Optimizely a sweet shout out). I didn’t learn a ton new from it, but Samuel is an entertaining speaker. His pitch is basically that the first run experience is important, and needs to be thought about at the start of developing a product (not tacked on right before launch).

  • “Onboarding” is often just overlaying a UI with coach’s marks. But there’s very little utility in this.
  • Product design tends to focus on the “flying” state, once someone is using a system. Empty states, and new user experiences, are often tacked on.
  • You need to start design with where the users start
  • Design Recommendations
    • Show a single, action-oriented tooltip at a time (Optimizely was his example of choice here!)
      • Ask for signup when there’s something to lose (e.g. after you’ve already created an experiment)
      • Assume guided tours will be skipped, i.e. don’t rely on them to make your product usable
    • Use completion meters to get people fully using a product
    • Keep in mind that users don’t buy products, they buy better versions of themselves (Mario + fire flower), and use this as the driving force to get people fully engaged with your product
    • Provide positive reinforcement when they complete steps! (Emails can help push them along)

Fostering Effective Collaboration in a Global Environment (PJ McCormick)

PJ’s talk was just as good this year as it was last year. He gave lots of great tips for increasing collaboration and trust among teams (especially the engineering and design teams), which is also a topic that has been on my mind recently.

  • His UX team designs retail pages (e.g. discover new music page). In one case, he presented work to the stakeholders and dev team, who then essentially killed the project. What went wrong? Essentially, it was a breakdown of communication and he didn’t include the dev team early enough.
  • Steps to increasing collaboration:
    1. Be accessible and transparent
      • Put work up on public walls so everyone can see progress (this is something I want to do more of)
      • Get comfortable showing work in progress
      • Demystify the black box of design
    2. Listen
      • Listen to stakeholders’ and outside team members’ opinions and feedback (you don’t have to incorporate it, but make sure they know they’re being heard)
    3. Be a Person
      • On this project, the communication was primarily through email or bug tracking, which lacks tone of voice, body language, etc.
      • There was no real dialog. Talk more face to face, or over phone. (I have learned this again and again, and regularly walk over to someone to hash things out at the first hint of contention in an email chain. It’s both faster to resolve and maintains good relations among team members)
    4. Work with people, not at them
      • He should have included stakeholders and outside team members in the design process.
      • Show them the wall; show UX studies; show work in progress
      • Help teach people what design is (this is hard. I want to get better at this)

A question came up about distributed teams, since much of his advice hinges on face to face communication. I’ve been struggling with this (actually, the whole company has), and his recommendations are in line with what we’ve been trying: use a webcam + video chat to show walls (awkward; not as effective as in person), and take pictures/digitize artifacts to share with people (has the side effect of being archived for future, but introduces the problem of discoverability).


And that’s all! (Actually, I missed the last talk…). Overall, a great conference that I intend to go back to next year.

by Jeff Zych at December 07, 2014 02:53 AM

December 04, 2014

Ph.D. student

Notes on The Democratic Surround; managerialism

I’ve been greatly enjoying Fred Turner’s The Democratic Surround partly because it cuts through a lot of ideological baggage with smart historical detail. It marks a turn, perhaps, in what intellectuals talk about. The critical left has been hung up on neoliberalism for decades while the actual institutions that are worth criticizing have moved on. It’s nice to see a new name for what’s happening. That new name is managerialism.

Managerialism is a way to talk about what Facebook and the Democratic Party and everybody else providing a highly computationally tuned menu of options is doing without making the mistake of using old metaphors of control to talk about a new thing.

Turner is ambivalent about managerialism, perhaps because he’s at Stanford and so occupies an interesting position in the grand intellectual matrix. He’s read his Foucault, he explains when he speaks in public, though he is sometimes criticized for not being critical enough. I think ‘critical’ intellectuals may find him confusing because he’s not deploying the same ‘critical’ tropes that have been used since Adorno, even though he’s writing sometimes about Adorno. He is optimistic, or at least writes optimistically about the past, or at least writes about the past in a way that isn’t overtly scathing, which is just more upbeat than a lot of writing nowadays.

Managerialism is, roughly, the idea of a technocratically bounded space of complex interactive freedom as a principle of governance or social organization. In The Democratic Surround, he is providing a historical analysis of a Bauhaus-initiated multimedia curation format, the ‘surround’, to represent managerialist democracy in the same way Foucault provided a historical analysis of the Panopticon to represent surveillance. He is attempting to implant a new symbol into the vocabulary of political and social thinkers that we can use to understand the world around us while giving it a rich and subtle history that expands our sense of its possibilities.

I’m about halfway through the book. I love it. If I have a criticism of it, it’s that everything in it is a managerialist surround and sometimes his arguments seem a bit stretched. For example, here’s his description of how John Cage’s famous 4’33” is a managerialist surround:

With 4’33”, as with Theater Piece #1, Cage freed sounds, performers, and audiences alike from the tyrannical wills of musical dictators. All tensions–between composer, performer, and audience; between sound and music; between the West and the East–had dissolved. Even as he turned away from what he saw as more authoritarian modes of composition and performance, though, Cage did not relinquish all control of the situation. Rather, he acted as an aesthetic expert, issuing instructions that set the parameters for action. Even as he declined the dictator’s baton, Cage took up a version of the manager’s spreadsheet and memo. Thanks to his benevolent instructions, listeners and music makers alike became free to hear the world as it was and to know themselves in that moment. Sounds and people became unified in their diversity, free to act as they liked, within a distinctly American musical universe–a universe finally freed of dictators, but not without order.

I have two weaknesses as a reader. One is a soft spot for wicked vitriol. Another is an intolerance of rhetorical flourish. The above paragraph is rhetorical flourish that doesn’t make sense. Saying that 4’33” is a manager’s spreadsheet is just about the most nonsensical metaphor I could imagine. In a universe with only fascists and managerialists, then I guess 4’33” is more like a memo. But there are so many more apt musical metaphors for unification in diversity in music. For example, a blues or jazz band playing a standard. Literally any improvisational musical form. No less quintessentially American.

If you bear with me and agree that this particular point is poorly argued and that John Cage wasn’t actually a managerialist and was in fact the Zen spiritualist that he claimed to be in his essays, then either Turner is equating managerialism with Zen spiritualism or Turner is trying to make Cage a symbol of managerialism for his own ideological ends.

Either of these is plausible. Steve Jobs was an I Ching enthusiast like Cage. Stewart Brand, the subject of Turner’s last book, From Counterculture to Cyberculture, was a back-to-land commune enthusiast before he became a capitalist digerati hero. Running through Turner’s work is the demonstration of the cool origins of today’s world that’s run by managerialist power. We are where we are today because democracy won against fascism. We are where we are today because hippies won against whoever. Sort of. Turner is also frank about capitalist recuperation of everything cool. But this is not so bad. Startups are basically like co-ops–worker owned until the VC’s get too involved.

I’m a tech guy, sort of. It’s easy for me to read my own ambivalence about the world we’re in today into Turner’s book. I’m cool, right? I like interesting music and read books on intellectual history and am tolerant of people despite my connections to power, right? Managers aren’t so bad. I’ve been a manager. They are necessary. Sometimes they are benevolent and loved. That’s not bad, right? Maybe everything is just fine because we have a mode of social organization that just makes more sense now than what we had before. It’s a nice happy medium between fascism, communism, anarchism, and all the other extreme -ism’s that plagued the 20th century with war. People used to starve to death or kill each other en masse. Now they complain about bad management or, more likely, bad customer service. They complain as if the bad managers are likely to commit a war crime at any minute but that’s because their complaints would sound so petty and trivial if they were voiced without the use of tropes that let us associate poor customer service with deliberate mind-control propaganda or industrial wage slavery. We’ve forgotten how to complain in a way that isn’t hyperbolic.

Maybe it’s the hyperbole that’s the real issue. Maybe a managerialist world lacks catastrophe and so is so frickin’ boring that we just don’t have the kinds of social crises that a generation of intellectuals trained in social criticism have been prepared for. Maybe we talk about how things are “totally awesome!” and totally bad because nothing really is that good or that bad and so our field of attention has contracted to the minute, amplifying even the faintest signal into something significant. Case in point, Alex from Target. Under well-tuned managerialism, the only thing worth getting worked up about is that people are worked up about something. Even if it’s nothing. That’s the news!

So if there’s a critique of managerialism, it’s that it renders the managed stupid. This is a problem.


by Sebastian Benthall at December 04, 2014 02:45 AM

December 01, 2014

MIMS 2012

Optimizely's iOS SDK Hits Version 1.0!

On November 18th, 2014, Optimizely officially released version 1.0 of our iOS SDK and a new mobile editing experience. As the lead designer of this project, I’m extremely proud of the progress we’ve made. This is just the beginning — there’s a lot more work to come! Check out the product video below:

Stay tuned for future posts about the design process.

by Jeff Zych at December 01, 2014 02:13 AM

November 29, 2014

Ph.D. student

textual causation

A problem that’s coming up for me as a data scientist is the problem of textual causation.

There has been significant interesting research into the problem of extracting causal relationships between things in the world from text about those things. That’s an interesting problem but not the problem I am talking about.

I am talking about the problem of identifying when a piece of text has been the cause of some event in the world. So, did the State of the Union address affect the stock prices of U.S. companies? Specifically, did the text of the State of the Union address affect the stock price? Did my email cause my company to be more productive? Did specifically what I wrote in the email make a difference?

A trivial example of textual causation (if I have my facts right–maybe I don’t) is the calculation of Twitter trending topics. Millions of users write text. That text is algorithmically scanned and under certain conditions, Twitter determines a topic to be trending and displays it to more users through its user interface, which also uses text. The user interface text causes thousands more users to look at what people are saying about the topic, increasing the causal impact of the original text. And so on.
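
A toy simulation of that feedback loop (my own sketch with invented parameters, not Twitter’s actual algorithm) might look like this: once a topic’s mention count crosses a threshold, displaying it boosts the probability that subsequent users mention it too.

    import random
    from collections import Counter

    topics = ["a", "b", "c"]
    weights = dict.fromkeys(topics, 1.0)
    counts = Counter()
    THRESHOLD, BOOST = 50, 5.0  # invented parameters

    random.seed(1)
    for _ in range(10000):
        t = random.choices(topics, weights=[weights[x] for x in topics])[0]
        counts[t] += 1
        if counts[t] == THRESHOLD:
            # the displayed trending text now causes more text about the topic
            weights[t] *= BOOST

    print(counts.most_common())  # whichever topic crosses the threshold first snowballs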

These are some challenges to understanding the causal impact of text:

  • Text is an extraordinarily high-dimensional space with tremendous irregularity in distribution of features.
  • Textual events are unique not just because the probability of any particular utterance is so low, but also because the context of an utterance is informed by all the text prior to it.
  • For the most part, text is generated by a process of unfathomable complexity and interpreted likewise.
  • A single ‘piece’ of text can appear and reappear in multiple contexts as distinct events.

I am interested in whether it is possible to get a grip on textual causation mathematically and with machine learning tools. Bayesian methods theoretically can help with the prediction of unique events. And the Pearl/Rubin model of causation is well integrated with Bayesian methods. But is it possible to use the Pearl/Rubin model to understand unique events? The methodological uses of Pearl/Rubin I’ve seen are all about establishing type causation between independent occurrences. Textual causation appears to be as a rule a kind of token causation in a deeply integrated contextual web.
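
For what it’s worth, here is what Pearl/Rubin-style adjustment looks like in the type-causation setting, as a toy sketch over simulated data (everything here is invented: the question is whether including a call-to-action phrase in an email causes a reply, or whether sender seniority confounds both).

    import random

    random.seed(0)
    data = []
    for _ in range(100000):
        z = random.random() < 0.3                  # Z: sender is senior
        t = random.random() < (0.8 if z else 0.2)  # T: email contains the phrase
        y = random.random() < (0.5 if z else 0.1)  # Y: reply depends on Z, not T
        data.append((z, t, y))

    def p_y(rows):
        """Empirical P(Y=1) over a subset of the data."""
        return sum(y for _, _, y in rows) / len(rows)

    # Naive contrast conflates the text with the seniority behind it.
    naive = p_y([r for r in data if r[1]]) - p_y([r for r in data if not r[1]])

    # Backdoor adjustment: sum over z of P(Z=z) * [P(Y|T=1,Z=z) - P(Y|T=0,Z=z)]
    adjusted = sum(
        (len([r for r in data if r[0] == z]) / len(data)) *
        (p_y([r for r in data if r[0] == z and r[1]]) -
         p_y([r for r in data if r[0] == z and not r[1]]))
        for z in (True, False)
    )
    print(naive, adjusted)  # naive is large; adjusted is near zero

But this machinery presumes repeated, exchangeable occurrences of the treatment. The token causation of a unique utterance in its singular context is exactly what it does not capture.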

Perhaps this is what makes the study of textual causation uninteresting. If it does not generalize, then it is difficult to monetize. It is a matter of historical or cultural interest.

But think about all the effort that goes into communication at, say, the operational level of an organization. How many jobs require “excellent communication skills”? A great deal of emphasis is placed not only on whether communication happens, but on how people communicate.

One way to approach this is using the tools of linguistics. Linguistics looks at speech and breaks it down into components and structures that can be scientifically analyzed. It can identify when there are differences in these components and structures, calling these differences dialects or languages.


by Sebastian Benthall at November 29, 2014 04:49 PM

analysis of content vs. analysis of distribution of media

A theme that keeps coming up for me in work and conversation lately is the difference between analysis of the content of media and analysis of the distribution of media.

Analysis of content looks for the tropes, motifs, psychological intentions, unconscious historical influences, etc. of the media. Over Thanksgiving a friend of mine was arguing that the Scorpions were a dog whistle to white listeners because that band made a deliberate move to distance themselves from the influence of black music on rock. Contrast this with Def Leppard. He reached this conclusion by listening carefully to the beats and contextualizing them in historical conversations that were happening at the time.

Analysis of distribution looks at information flow and the systemic channels that shape it. How did the telegraph change patterns of communication? How did television? Radio? The Internet? Google? Facebook? Twitter? Ello? Who is paying for the distribution of this media? How far does the signal reach?

Each of these views is incomplete. Just as data underdetermines hypotheses, media underdetermines its interpretation. In both cases, a more complete understanding of the etiology of the data/media is needed to select between competing hypotheses. We can’t truly understand content unless we understand the channels through which it passes.

Analysis of distribution is more difficult than analysis of content because distribution is less visible. It is much easier to possess and study data/media than it is to possess and study the means of distribution. The means of distribution are a kind of capital. Those that study it from the outside must work hard to get anything better than a superficial view of it. Those on the inside work hard to get a deep view of it that stays up to date.

Part of the difficulty of analysis of distribution is that the system of distribution depends on the totality of information passing through it. Communication involves the dynamic engagement of both speakers and an audience. So a complete analysis of distribution must include an analysis of content for every piece of implicated content.

One thing that makes the content analysis necessary for analysis of distribution more difficult than what passes for content analysis simpliciter is that the former needs to take into account incorrect interpretation. Suppose you were trying to understand the popularity of Fascist propaganda in pre-WWII Germany and were interested in how the state owned the mass media channels. You could initially base your theory simply on how people were getting bombarded by the same information all the time. But you would at some point need to consider how the audience was reacting. Was it stirring feelings of patriotic national identity? Did they experience communal feelings with others sharing similar opinions? As propaganda offered interpretations of Shakespeare claiming he was secretly a German and denounced other works as “degenerate art”, did the audience believe this content analysis? Did their belief in the propaganda allow them to continue to endorse the systems of distribution in which they took part?

This shows how the question of how media is interpreted is a political battle fought by many. Nobody fighting these battles is an impartial scientist. Since one gets an understanding of the means of distribution through impartial science, and since this understanding of the means of distribution is necessary for correct content analysis, we can dismiss most content analysis as speculative garbage, from a scientific perspective. What this kind of content analysis is instead is art. It can be really beautiful and important art.

On the other hand, since distribution analysis depends on the analysis of every piece of implicated content, distribution analysis is ultimately hopeless without automated methods for content analysis. This is one reason why machine learning techniques for analyzing text, images, and video are such a hot research area. While the techniques for optimizing supply chain logistics (for example) are rather old, the automated processing of media is a more subtle problem precisely because it involves interpretation and reinterpretation by finite subjects.

By “finite subject” here I mean subjects that are inescapably limited by the boundaries of their own perspective. These limits are what makes their interpretation possible and also what makes their interpretation incomplete.
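To make “automated methods for content analysis” concrete, here is a minimal sketch of the simplest version of the idea: a supervised text classifier built with scikit-learn. The corpus, labels, and topic scheme are hypothetical stand-ins for illustration, not anything a serious media study would use.

    # Toy automated content analysis: labeling short texts by topic.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Hypothetical hand-labeled snippets standing in for a media corpus.
    docs = [
        "the senator proposed a new bill on taxes",
        "the team won the championship game last night",
        "parliament debated the budget amendment",
        "the striker scored twice in the final match",
    ]
    labels = ["politics", "sports", "politics", "sports"]

    # TF-IDF turns each snippet into a weighted bag-of-words vector;
    # naive Bayes then learns which words tend to signal which topic.
    model = make_pipeline(TfidfVectorizer(), MultinomialNB())
    model.fit(docs, labels)

    print(model.predict(["the committee voted on the new law"]))
    # On this toy data, the prediction should lean toward 'politics'.

Even this crude classifier scales to millions of documents in a way no human reader can. What it cannot do is model how a finite subject would misread those documents, and that is the harder problem.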


by Sebastian Benthall at November 29, 2014 04:16 PM

November 26, 2014

Ph.D. student

things I’ve been doing while not looking at twitter

Twitter was getting me down so I went on a hiatus. I’m still on that hiatus. Instead of reading Twitter, I’ve been:

  • Reading Fred Turner’s The Democratic Surround. This is a great book about the relationship between media and democracy. Since a lot of my interest in Twitter stems from my interest in media and democracy, this book gives me those kinds of jollies without the soap opera trainwreck of actually participating in social media.
  • Going to arts events. There was a staging of Rhinoceros at Berkeley. It’s an absurdist play in which a small French village is suddenly stricken by an epidemic wherein everybody is transformed into a rhinoceros. It’s probably an allegory for the rise of Communism or Fascism, but the play is written so that it’s completely ambiguous. Mainly it’s about conformity in general: perhaps ideological conformity, but just as easily conformity to non-ideology, to a state of nature (hence the animal form, the rhinoceros). It’s a good play.
  • Playing Transistor. What an incredible game! The gameplay is appealingly designed and original, but beyond that it is powerfully written and atmospheric. In many ways it can be read as a commentary on the virtual realities of the Internet and the problems with them. Somehow there was more media attention to GamerGate than to this one actually great game. Too bad.
  • Working on papers, software, and research in anticipation of next semester. Lots of work to do!

Above all, what’s great about unplugging from social media is that it isn’t actually unplugging at all. Instead, you can plug into a smarter, better, deeper world of content where people are more complex and reasonable. It’s elevating!

I’m writing this because some time ago it was a matter of debate whether or not you can ‘just quit Facebook’ etc. It turns out you definitely can and it’s great. Go for it!

(Happy to respond to comments but won’t respond to tweets until back from the hiatus)


by Sebastian Benthall at November 26, 2014 10:02 PM

November 14, 2014

Ph.D. alumna

Heads Up: Upcoming Parental Leave

If you’ve seen me waddle onto stage lately, you’ve probably guessed that I’m either growing a baby or an alien. I’m hoping for the former, although contemporary imaging technologies still do make me wonder. If all goes well, I will give birth in late January or early February. Although I don’t publicly talk much about my son, this will be #2 for me and so I have both a vague sense of what I’m in for and no clue at all. I avoid parenting advice like the plague so I’m mostly plugging my ears and singing “la-la-la-la” whenever anyone tells me what I’m in for. I don’t know, no one knows, and I’m not going to pretend like anything I imagine now will determine how I will feel come this baby’s arrival.

What I do know is that I don’t want to leave any collaborator or partner in the lurch since there’s a pretty good chance that I’ll be relatively out of commission (a.k.a. loopy as all getup) for a bit. I will most likely turn off my email firehose and give collaborators alternate channels for contacting me. I do know that I’m not taking on additional speaking gigs, writing responsibilities, scholarly commitments, or other non-critical tasks. I also know that I’m going to do everything possible to make sure that Data & Society is in good hands and will continue to grow while I wade through the insane mysteries of biology. If you want to stay in touch with everything happening at D&S, please make sure to sign up for our newsletter! (You may even catch me sneaking into our events with a baby.)

As an employee of Microsoft Research who is running an independent research institute, I have a ridiculous amount of flexibility in how I navigate my parental leave. I thank my lucky stars for this privilege on a regular basis, especially in a society where we force parents (and especially mothers) into impossible trade-offs. What this means in practice for me is that I refuse to commit to exactly how I’m going to navigate parental leave once #2 arrives. Last time, I penned an essay “Choosing the ‘Right’ Maternity Leave Plan” to express my uncertainty. What I learned last time is that the flexibility to be able to work when it made sense and not work when I’d been up all night made me more happy and sane than creating rigid leave plans. I’m fully aware of just how fortunate I am to be able to make these determinations and how utterly unfair it is that others can’t. I’m also aware of just how much I love what I do for work and, in spite of folks telling me that work wouldn’t matter as much after having a child, I’ve found that having and loving a child has made me love what I do professionally all the more. I will continue to be passionately engaged in my work, even as I spend time welcoming a new member of my family to this earth.

I don’t know what the new year has in store for me, but I do know that I don’t want anyone who needs something from me to feel blindsided. If you need something from me, now is the time to holler and I will do my best. I’m excited that my family is growing and I’m also ecstatic that I’ve been able to build a non-profit startup this year. It’s been a crazy year and I expect that 2015 will be no different.

by zephoria at November 14, 2014 03:35 PM

November 10, 2014

Ph.D. alumna

me + National Museum of the American Indian

I’m pleased to share that I’m joining the Board of Trustees of Smithsonian’s National Museum of the American Indian (NMAI) in 2015.  I am honored and humbled by the opportunity to help guide such an esteemed organization full of wonderful people who are working hard to create a more informed and respectful society.

I am not (knowingly) of Native descent, but as an American who has struggled to make sense of our history and my place in our cultural ecosystem, I’ve always been passionate about using the privileges I have to make sure that our public narrative is as complex as our people. America has a sordid history, and out of those ashes we have a responsibility both to remember and to do right by future generations. When the folks at NMAI approached me to see if I were willing to use the knowledge I have about technology, youth, and social justice to help them imagine their future as a cultural institution, the answer was obvious to me.

Make no mistake – I have a lot to learn.  I cannot and will not speak on behalf of Native peoples or their experiences. I’m joining this Board, fully aware of how little I know about the struggles of Indians today, but I am doing so with a deep appreciation of their stories and passions. I am coming to this table to learn from those who identify as Native and Indian with the hopes that what I have to offer as a youth researcher, technologist, and committed activist can be valuable. As an ally, I hope that I can help the Museum accomplish its dual mission of preserving and sharing the culture of Native peoples to advance public understanding and empower those who have been historically disenfranchised.

I am still trying to figure out how I will be able to be most helpful, but at the very least, please feel free to use me to share your thoughts and perspectives that might help NMAI advance its mission and more actively help inform and shape American society. I would also greatly appreciate your help in supporting NMAI’s education initiatives through a generous donation. In the United States, these donations are tax deductible.

by zephoria at November 10, 2014 04:10 PM