School of Information Blogs

August 09, 2017

Ph.D. student

Differing ethnographic accounts of the effectiveness of technology

I’m curious as I compare two recent papers, one by Christin (2017) and one by Levy (2015), both about the role of technology in society and backed by ethnographic data.

What interests me is that the two papers both examine the use of algorithms in practice, but they differ in their account of the effectiveness of the algorithms used. Christin emphasizes the way web journalists and legal professionals deliberately undermine the impact of algorithms. Levy discusses how electronic monitoring achieves central organizational control over truckers.

I’m interested in the different framings because, as Christin points out, a central point of contention in the critical scholarship around data and algorithms is the effectiveness of the technology, especially “in practice”. Implicitly if not explicitly, the argument is that if the technology is not as effective as its advocates say it is, then it is overhyped, and this debunking is an accomplishment of the critical and often ethnographic field.

On the other hand, if the technology is effective at control, as Levy’s article argues that it is, then it poses a much more real managerialist threat to workers’ autonomy. Identifying that this is occurring is also a serious accomplishment of the ethnographic field.

What must be recognized, however, is that these two positions contradict each other, at least as general perspectives on data-collection and algorithmic decision-making. The use of a particular technology in a particular place cannot be both so ineffective as to be overhyped and so effective as to constitute a managerialist threat. The two critiques are substantively at odds with each other, and they call for different pragmatic responses: the former suggests a rhetorical strategy of further debunking, while the latter demands a material strategy of changing working conditions.

I have seen both strategies used in critical scholarship, sometimes even in the same article, chapter, or book. I have never seen critical scholars attempt to resolve this difference between themselves using their shared assumptions and methods. I’d like to see more resolution in the ethnographic field on this point.

Correction, 8/10/17:

The apparent tension is resolved on a closer reading of Christin (2017). The argument there is that technology (in the managerialist use common to both papers) is ineffective when its intended use is resisted by those being managed by it.

That shifts the ethnographic challenge to technology away from an attack on the technical quality of the work (which is a non-starter), i.e. on whether it can accomplish what it is designed to do, and toward the uncontroversial proposition that the effectiveness of technology depends in part on assumptions about how it will be used, and that these assumptions can be violated.

The political question of to what extent these new technologies should be adopted can then be addressed straightforwardly in terms of whether a technology is fully and properly adopted, or only partially and improperly adopted. Using language like this would be helpful in bridging the technical and ethnographic fields.

References

Christin, 2017. “Algorithms in practice: Comparing journalism and criminal justice.” (link)

Levy, 2015. “The Contexts of Control: Information, Power, and Truck-Driving Work.” (link)


by Sebastian Benthall at August 09, 2017 06:25 PM

August 06, 2017

MIMS 2018

Don’t Let Vegetarian Guilt Get You Down

Refraining from eating meat is just a means to an end.

Vegetarianism isn’t just about rabbit food either, but these colors sure do pop. Source: Flickr.

Every time I go to a barbecue, I find myself talking smack about vegetarians. It’s not just because, as a vegetarian myself, I’m grumpy to be eating a dry, mulch-colored hockey puck while everyone else eats succulent, reddish-colored hockey pucks. It’s also a way to separate myself in the eyes of others from those who see vegetarianism as an ideology.

Vegetarianism is not really an ism. An ism is a belief in something. Vegetarianism is not the belief that it is inherently good to refrain from eating animals. Rather, it’s a means to realize other isms.

Some vegetarians believe in a moral imperative against killing animals. Others, like myself, believe in environmentalism. Still others believe in religions like Hinduism and Rastafarianism. Even people who do it for health reasons believe that personal sacrifice now is worth the future health benefits.

Unfortunately, many non-meat eaters treat vegetarianism or veganism as the end rather than the means. They stick so ardently to their diet’s mores that they burn out, or worse, become insufferable. The latter, though it seems noble, harms any ism that gains from wider adoption. A vegetarian whose easygoing attitude convinces someone else to only eat meat at dinner has done more for environmentalism than the vegan who doesn’t sit on leather seats.

Burning out comes from the same fallacy. When a newly minted vegetarian succumbs to the smell of bacon (and who hasn’t?), they often think that they have gone astray of their belief. The seal has been broken, so they may as well go back to eating meat. But eating a strip of bacon does not mean abandoning the belief that killing animals is wrong! No one argues that committing adultery amounts to renouncing belief in the Judeo-Christian God.

What I’m really saying is, have some chill, folks. Militant vegetarians and vegans: remember your personal ism and don’t hurt others’ isms by making us all seem like assholes. Hesitant meat eaters: it doesn’t take a blood oath to eat less meat. Yes, rules make it easier, but it’s more important that the rules are sustainable. Sometimes, that means they need to be flexible. Fervent anti-vegetarians: 2006 called, it wants its shitty attitude back.

by Gabe Nicholas at August 06, 2017 06:07 PM

Ph.D. student

legitimacy in peace; legitimacy in war

I recently wrote a reflection on the reception of Habermas in the United States and argued that the lack of intellectual uptake of his later work has been a problem with politics here. Here’s what I wrote, admittedly venting a bit:

In my experience, it is very difficult to find support in academia for the view that rational consensus around democratic institutions is a worthwhile thing to study or advocate for. Identity politics and the endless contest of perspectives is much more popular among students and scholars coming out of places like UC Berkeley. In my own department, students were encouraged to read Habermas’s early work in the context of the identity politics critique, but never exposed to the later work that reacted to these critiques constructively to build a theory that was specifically about pluralism, which is what identity politics need in order to unify as a legitimate state. There’s a sense in which the whole idea that one should continue a philosophical argument to the point of constructive agreement, despite the hard work and discipline that this demands, was abandoned in favor of an ideology of intellectual diversity that discouraged scrutiny and rigor across boundaries of identity, even in the narrow sense of professional or disciplinary identity.

Tapan Parikh succinctly made the point that Habermas’s philosophy may be too idealistic to ever work out:

“I still don’t buy it without taking history, race, class and gender into account. The ledger doesn’t start at zero I’m afraid, and some interests are fundamentally antagonistic.”

This objection really is the crux of it all, isn’t it? There is a contradiction between agreement, necessary for a legitimate pluralistic state, and antagonistic interests of different social identities, especially as they are historically and presently unequal. Can there ever be a satisfactory resolution? I don’t know. Perhaps the dialectical method will get us somewhere. (This is a blog after all; we can experiment here).

But first, a note on intellectual history, as part of the fantasy of this argument is that intellectual history matters for actual political outcomes. When discussing the origins of contemporary German political theory, we should acknowledge that post-War Germany has been profoundly interested in peace as it has experienced the worst of war. The roots of German theories of peace are in Immanuel Kant’s work on “perpetual peace”, the hypothetical situation in which states are no longer at war. He wrote an essay about it in 1795, which by the way begins with this wonderful preface:

PERPETUAL PEACE

Whether this satirical inscription on a Dutch innkeeper’s sign upon which a burial ground was painted had for its object mankind in general, or the rulers of states in particular, who are insatiable of war, or merely the philosophers who dream this sweet dream, it is not for us to decide. But one condition the author of this essay wishes to lay down. The practical politician assumes the attitude of looking down with great self-satisfaction on the political theorist as a pedant whose empty ideas in no way threaten the security of the state, inasmuch as the state must proceed on empirical principles; so the theorist is allowed to play his game without interference from the worldly-wise statesman. Such being his attitude, the practical politician–and this is the condition I make–should at least act consistently in the case of a conflict and not suspect some danger to the state in the political theorist’s opinions which are ventured and publicly expressed without any ulterior purpose. By this clausula salvatoria the author desires formally and emphatically to deprecate herewith any malevolent interpretation which might be placed on his words.

When the old masters are dismissed as being irrelevant or dense, it denies them the credit for being very clever.

That said, I haven’t read this essay yet! But I have a somewhat informed hunch that more contemporary work that deals with the problems it raises directly makes good headway on the problem of political unity. For example, this article by Bennington (2012), “Kant’s Open Secret”, is good and relevant to discussions of technical design and algorithmic governance. Cederman, who has been discussed here before, builds a computational simulation of peace inspired by Kant.

Here’s what I can sketch out, perhaps ignorantly. What’s at stake is whether antagonistic actors can resolve their differences and maintain peace. The proposed mechanism for this peace is some form of federated democracy. So to paint a picture: what I think Habermas is after is a theory of how governments can be legitimate in peace. What that requires, in his view, is some form of collective deliberation where actors put aside their differences and agree on some rules: the law.

What about when race and class interests are, as Parikh suggests, “fundamentally antagonistic”, and the unequal ledger of history gives cause for grievances?

Well, all too often, these are the conditions for war.

In the context of this discussion, which started with a concern about the legitimacy of states and especially the United States, it struck me that there’s quite a difference between how states legitimize themselves at peace versus how they legitimize themselves while at war.

War, in essence, allows some actors in the state to ignore the interests of other actors. There’s no need for discursive, democratic, cosmopolitan balancing of interests. What’s required is that an alliance of interests maintain the necessary power over rivals to win the war. War legitimizes autocracy and deals with dissent by getting rid of it rather than absorbing and internalizing it. Almost by definition, wars challenge the boundaries of states and the way underlying populations legitimize them.

So to answer Parikh, the alternative to peaceful rule of law is war. And there certainly have been serious race wars and class wars. As an example, last night I went to an art exhibit at the Brooklyn Museum entitled “The Legacy of Lynching: Confronting Racial Terror in America”. The phrase “racial terror” is notable because of how it positions racist lynching as a form of terrorism, which we have been taught to treat as the activity of rogue, non-state actors threatening national security. This is deliberate, as it frames black citizens as in need of national protection from white terrorists who are in a sense at war with them. Compare and contrast this with right-wing calls for “securing our borders” from allegedly dangerous immigrants, and you can see how both “left” and “right” wing political organizations in the United States today are legitimized in part by the rhetoric of war, as opposed to the rhetoric of peace.

To take a cynical view of the current political situation in the United States, which may be the most realistic view, the problem appears to be that we have a two party system in which the two parties are essentially at war, whether rhetorically or in terms of their actions in Congress. The rhetoric of the current president has made this uncomfortable reality explicit, but it is not a new state of affairs. Rather, one of the main talking points in the previous administration and the last election was the insistence by the Democratic leadership that the United States is a democracy that is at peace with itself, and so cooperation across party lines was a sensible position to take. The efforts by the present administration and Republican leadership to dismantle anything of the prior administration’s legacy make the state of war all too apparent.

I don’t mean “war” in the sense of open violence, of course. I mean it in the sense of defection and disregard for the interests of those outside of one’s political alliance. The whole question of whether and how foreign influence in the election should be considered is dependent in part on whether one sees the contest between political parties in the United States as warfare or not. It is natural for different sides in a war to seek foreign allies, even and almost especially if they are engaged in civil war or regime change. The American Revolution was backed by the French. The Bolshevik Revolution in Russia was backed by Germany. That’s just how these things go.

As I write this, I become convinced that this is really what it comes down to in the United States today. There are “two Americas”. To the extent that there is stability, it’s not a state of peace; it’s a state of equilibrium or gridlock.


by Sebastian Benthall at August 06, 2017 05:40 PM

August 04, 2017

Ph.D. student

The meaning of gridlock in governance

I’ve been so intrigued by this article, “Dems Can Abandon the Center — Because the Center Doesn’t Exist”, by Eric Levitz in NY Mag. The gist of the article is that most policies that we think of as “centrist” are actually very unrepresentative of the U.S. population’s median attitude on any particular subject, and are held only by a small minority that Levitz associates with former Mayor Bloomberg of New York City. It’s a great read and cites much more significant research on the subject.

One cool thing the article provides is this nice graphic showing the current political spectrum in the U.S.:

The U.S. political spectrum, from Levitz, 2017.

In comparison to that, this blog post is your usual ramble of no consequence.

Suppose there’s an organization whose governing body doesn’t accomplish anything, despite being controversial, well-publicized, and apparently not performing satisfactorily. What does that mean?

From an outside position (somebody being governed by such a body), what it means is sustained dissatisfaction and the perception that the governing body is dys- or non-functional. This spurs the dissatisfied party to invest resources or take action to change the situation.

However, if the governing body is responsive to the many and conflicting interests of the governed, the stasis of the government could mean one of at least two things.

One thing it could mean is that the mechanism through which the government changes is broken.

Another thing it could mean is that the mechanism through which the government changes is working, and the state of governance reflects the equilibrium of the powers that contest for control of the government.

The latter view is not a politically exciting view and indeed it is politically self-defeating for whoever holds it. If we see government as something responding to the activity of many interests, mediating between them and somehow achieving their collective agenda, then the problem with seeing a government in gridlock as having achieved a “happy” equilibrium, or a “correct” view, is that it discourages partisan or interested engagement. If one side stops participating in the (expensive, exhausting) arm wrestle, then the other side gains ground.

On the other hand, the stasis should not in itself be considered cause for alarm, apart from the dissatisfaction resulting from one’s particular perspective on the total system.

Another angle on this is that from every point in the political spectrum, and especially those points at the extremes, the procedural mechanisms of government are going to look broken because they don’t result in satisfying outcomes. (Consider the last election, where both sides argued that the system was rigged when they thought they were losing or had lost.) But, of course, these mechanisms are always already part of the governance system itself and subject to being governed by it, so pragmatically one will approve of them just insofar as they give one’s own position influence over outcomes (here I’m assuming strict proceduralists are themselves somewhere on the multidimensional political spectrum and are motivated by, e.g., the appeal of stability or legitimacy in some sense).


by Sebastian Benthall at August 04, 2017 10:37 PM

Habermas seems quaint right now, but shouldn’t

By chance I was looking up Habermas’s later philosophical work today, like Between Facts and Norms (1992), which has been said to be the culmination of the project he began with The Structural Transformation of the Public Sphere in 1962. In it, he argues that the law is what gives pluralistic states their legitimacy, because the law enshrines the consent of the governed. Power cannot legitimize itself; democratic law is the foundation for the legitimate state.

Habermas’s later work is widely respected in the European Union, which by and large has functioning pluralistic democratic states. Habermas emerged from the Frankfurt School to become a theorist of modern liberalism and was good at it. While the empirical question of how much education in political theory is tied to the legitimacy and stability of the state remains open, anecdotally we can say that Habermas is a successful theorist and the German-led European Union is, presently, a successful government. For the purposes of this post, let’s assume that this is at least in part due to the fact that citizens are convinced, through the education system, of the legitimacy of their form of government.

In the United States, something different happened. Habermas’s earlier work (such as The Structural Transformation of the Public Sphere) was introduced to United States intellectuals through a critical lens. Craig Calhoun, for example, argued in 1992 that the politics of identity was more relevant or significant than the politics of deliberation and democratic consensus.

That was over 25 years ago, and that moment was influential in the way political thought has unfolded in Europe and the United States. In my experience, it is very difficult to find support in academia for the view that rational consensus around democratic institutions is a worthwhile thing to study or advocate for. Identity politics and the endless contest of perspectives is much more popular among students and scholars coming out of places like UC Berkeley. In my own department, students were encouraged to read Habermas’s early work in the context of the identity politics critique, but never exposed to the later work that reacted to these critiques constructively to build a theory that was specifically about pluralism, which is what political identities need in order to unify as a legitimate state. There’s a sense in which the whole idea that one should continue a philosophical argument to the point of constructive agreement, despite the hard work and discipline that this demands, was abandoned in favor of an ideology of intellectual diversity that discouraged scrutiny and rigor across boundaries of identity, even in the narrow sense of professional or disciplinary identity.

The problem with this approach to intellectualism is that it is fractious and undermines itself. When these qualities are taken as intellectual virtues, it is no wonder that boorish overconfidence can take advantage of it in an open contest. And indeed the political class in the United States today has been undermined by its inability to justify its own power and institutions in anything but the fragmented arguments of identity politics.

It is a sad state of affairs. I can’t help but feel my generation is intellectually ill-equipped to respond to the very prominent challenges to the legitimacy of the state that are being leveled at it every day. Not to put too fine a point on it, I blame the intellectual laziness of American critical theory and its inability to absorb the insights of Habermas’s later theoretical work.

Addendum 8/7/17a:

It has come to my attention that this post is receiving a relatively large amount of traffic. This seems to happen when I hit a nerve, specifically when I recommend Habermas over identitarianism in the context of UC Berkeley. Go figure. I respectfully ask for comments from any readers. Some have already helped me further my thinking on this subject. Also, I am aware that a Wikipedia link is not the best way to spread understanding of Habermas’s later political theory. I can recommend this book review (Chriss, 1998) of Between Facts and Norms as well as the Habermas entry in the Stanford Encyclopedia of Philosophy which includes a section specifically on Habermasian cosmopolitanism, which seems relevant to the particular situation today.

Addendum 8/7/17b:

I may have guessed wrong. The recent traffic has come from Reddit. Welcome, Redditors!

 


by Sebastian Benthall at August 04, 2017 06:24 PM

August 02, 2017

Ph.D. alumna

How “Demo-or-Die” Helped My Career

I left the Media Lab 15 years ago this week. At the time, I never would’ve predicted that I learned one of the most useful skills in my career there: demo-or-die.

(Me debugging an exhibit in 2002)

The culture of “demo-or-die” has been heavily critiqued over the years. In doing so, most folks focus on the words themselves. Sure, the “or-die” piece is definitely an exaggeration, but the important message there is the notion of pressure. But that’s not what most people focus on. They focus on the notion of a “demo.”

To the best that anyone can recall, the root of the term stems from the early days at the Media Lab, most likely because of Nicholas Negroponte’s dismissal of “publish-or-perish” in academia. So the idea was to focus not on writing words but on producing artifacts. In mocking what it was that the Media Lab produced, many critics focused on the way in which the Lab had a tendency to create vaporware, performed to visitors through the demo. In 1987, Stewart Brand called this “handwaving.” The historian Molly Steenson has a more nuanced view, so I can’t wait to read her upcoming book. But the mockery of the notion of a demo hasn’t died. Given this, it’s not surprising that the current Director (Joi Ito) has pushed people to stop talking about demoing and start thinking about deploying. Hence, “deploy-or-die.”

I would argue that what makes “demo-or-die” so powerful has absolutely nothing to do with the production of a demo. It has to do with the act of doing a demo. And that distinction is important because that’s where the skill development that I relish lies.

When I was at the Lab, we regularly received an onslaught of visitors. I was a part of the “Sociable Media Group,” run by Judith Donath. From our first day in the group, we were trained to be able to tell the story of the Media Lab, the mission of our group, and the goal of everyone’s research projects. Furthermore, we had to actually demo their quasi-functioning code and pray that it wouldn’t fall apart in front of an important visitor. We were each assigned a day where we were “on call” to do demos for any surprise visitor. You could expect to have at least one visitor every day, not to mention hundreds of visitors on days that were officially sanctioned as “Sponsor Days.”

The motivations and interests of visitors ranged wildly. You’d have tour groups of VIP prospective students, dignitaries from foreign governments, Hollywood types, school teachers, engineers, and a whole host of different corporate actors. If you were lucky, you knew who was visiting ahead of time. But that was rare. Often, someone would walk in the door with someone else from the Lab and introduce you to someone for whom you’d have to drum up a demo in very short order with limited information. You’d have to quickly discern what this visitor was interested in, figure out which of the team’s research projects would be most likely to appeal, determine how to tell the story of that research in a way that connected to the visitor, and be prepared to field any questions that might emerge. And oy vay could the questions run the gamut.

I *hated* the culture of demo-or-die. I felt like a zoo animal on display for others’ benefit. I hated the emotional work that was needed to manage stupid questions, not to mention the requirement to smile and play nice even when being treated like shit by a visitor. I hated the disruptions and the stressful feeling when a demo collapsed. Drawing on my experience working in fast food, I developed a set of tricks for staying calm. Count how many times a visitor said a certain word. Nod politely while thinking about unicorns. Experiment with the wording of a particular demo to see if I could provoke a reaction. Etc.

When I left the Media Lab, I was ecstatic to never have to do another demo in my life. Except, that’s the funny thing about learning something important… you realize that you are forever changed by the experience.

I no longer produce demos, but as I developed in my career, I realized that “demo-or-die” wasn’t really about the demo itself. At the end of the day, the goal wasn’t to pitch the demo — it was to help the visitor change their perspective of the world through the lens of the demo. In trying to shift their thinking, we had to invite them to see the world differently. The demo was a prop. Everything about what I do as a researcher is rooted in the goal of using empirical work to help challenge people’s assumptions and generate new frames that people can work with. I have to understand where they’re coming from, appreciate their perspective, and then strategically engage them to shift their point of view. Like my days at the Media Lab, I don’t always succeed and it is indeed frustrating, especially because I don’t have a prop that I can rely on when everything goes wrong. But spending two years developing that muscle has been so essential for my work as an ethnographer, researcher, and public speaker.

I get why Joi reframed it as “deploy-or-die.” When it comes to actually building systems, impact is everything. But I really hope that the fundamental practice of “demo-or-die” isn’t gone. Those of us who build systems or generate knowledge day in and day out often have too little experience explaining ourselves to the wide array of folks who showed up to visit the Media Lab. It’s easy to explain what you do to people who share your ideas, values, and goals. It’s a lot harder to explain your contributions to those who live in other worlds. Impact isn’t just about deploying a system; it’s about understanding how that system or idea will be used. And that requires being able to explain your thinking to anyone at any moment. And that’s the skill that I learned from the “demo-or-die” culture.

by zephoria at August 02, 2017 01:50 AM

July 26, 2017

MIMS 2010

Writing pull requests your coworkers might enjoy reading

Programmers like writing code but few love reviewing it. Although code review is mandatory at many companies, enjoying it is not. Here are some tips I’ve accumulated for getting people to review your code. The underlying idea behind these suggestions is that the person asking for review should spend extra time and effort making the pull request easy to review. In general, you can do this by discussing code changes beforehand, making them small, and describing them clearly.

At this point you may be wondering who died and made me king of code review (spoiler: nobody). This advice is based on my experience doing code review for other engineers at Twitter. I’ve reviewed thousands of pull requests, posted hundreds of my own, and observed what works and doesn’t across several teams. Some of the tips may apply to pull requests to open-source projects, but I don’t have much experience there so no guarantees.

I primarily use Phabricator and ReviewBoard, but I use the term “pull request” because I think that’s a well understood term for code proposed for review.

Plan the change before you make a pull request

If you talk to the people who own code before you make a change, they’ll be more likely to review it. This makes sense purely from a social perspective: they become invested in your change and doing a code review is just the final step in the process. You’ll save time in review because these people will already have some context on what you’re trying to do. You may even save time before review because you can consider different designs before you implement one.

The problem with skipping this step is that it’s important to separate the design of the change from the implementation. Once you post code for review you generally have a strong bias towards the design that you just implemented. It’s hard to hear “start over” and it’s hard for reviewers to say it as well.

Pick reviewers who are relevant to the change

Figure out why you are asking people to review this code.

  • Is it something they worked on?
  • Is it related to something they are working on?
  • Do you think they understand the thing you’re changing?

If the answer to these questions is no, find better people to review your change.

Tell reviewers what is going on

Write a good summary and description of the change. Long is not the same as good; absent is usually not good. Reviewers need to understand the context of the pull request. Explain why you are making this change. Reading through the commits associated with the request usually doesn’t say enough. If there is a bug, issue, or ticket that provides context for the change, link to it.

Ideally you have written clear, readable code with adequate documentation, but that doesn’t necessarily get you off the hook here. How your change does what it says it does may still not be obvious. Give your readers a guide. What parts of the change should they look at first? What part is the most important? For example, “The main change is adding a UTF-8 reader to class XYZ. Everything else is updating callers to use the new method.” This focuses readers’ attention on the meat of the change immediately.

You may find it helpful to write the description of your pull request while tests are running, or code is compiling, or another time when you would otherwise check email. I often keep a running description of the change open while I am writing the code. If I make a decision that I think will strike reviewers as unusual, I add a brief explanation to that doc and use it to write the pull request.

Finagle uses a Problem/Solution format for pull requests that I find pleasant. It can also be fun to misuse on occasion. I don’t recommend that, but I do plenty of things I don’t recommend.
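
To illustrate, here is a minimal sketch of that format. The class names and wording are hypothetical, invented for this example rather than taken from a real Finagle pull request:

  Problem

  Callers of UserStore each implement their own retry logic, so transient
  failures are handled inconsistently and some requests are never retried.

  Solution

  Add a single retry filter to the UserStore client and delete the ad-hoc
  retry loops at the call sites. The main change is in UserStoreClient;
  everything else is updating callers to use it.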

Make the change as small as possible while still being understandable

Sometimes fixing a bug or creating a new feature requires changes to a dozen-odd files. This alone can be tricky to follow before you mix in other refactorings, clean-ups, and changes. Fixing unrelated things makes it harder to understand the pull request as a whole. Correcting a typo here or there is fine; fixing a different bug, or a heavy refactoring is not. (Teams will, of course, have different tolerances for this, but inasmuch as possible it’s nice to separate review of these parts.)

Even if you have a branch where you change a bunch of related things, you may want to extract isolated parts that can be reviewed and merged independently. Aim for a change that has a single, well-scoped answer to the question “What does this change do?”. Note that this is more about the change being conceptually small rather than small in the actual number of files modified. If you change a class and have to update usages in 50 other files, that might still count as small.

Of course there are caveats: having 20 small pull requests, each building on the previous, isn’t ideal either so you have to strike some balance between size and frequency. Sometimes splitting things up makes it harder to understand. Rely on your reviewers for feedback about how they prefer changes.

Send your pull request when it’s ready to review

Is your change actually ready to merge when reviewers OK it? Have you verified that the feature you have added works, or that the bug you fixed is actually fixed? Does the code compile? Do tests and linters pass? If not, you are going to waste reviewers’ time when you have to change things and ask for another review. Some of these checks can be automated—maybe tests are run against your branch; use a checklist for ones that can’t. (One obvious exception to this is an RFC-style pull request where you are seeking input before you implement everything—one way to “Plan the change”).
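
As a concrete sketch, a pre-review checklist along these lines (adapt it to your team and tooling; the items below are only illustrative) might be:

  • The branch compiles, and tests and linters pass locally or in CI.
  • The new feature has been exercised by hand, or the fixed bug has been reproduced and re-verified.
  • The summary, description, and any linked ticket are up to date.
  • Unrelated refactorings and clean-ups have been split out.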

Once you have enough feedback from reviewers and have addressed the relevant issues, don’t keep updating the request with new changes. Merge it! It’s time for a new branch.

Closing thoughts

Not all changes need to follow these tips. You probably don’t need peer buy-in before you update some documentation, you may not have time to provide a review guide for an emergency fix, and sometimes it’s just really convenient to lump a few changes together. In general, though, I find that discussing changes ahead of time, keeping them small, and connecting the dots for your readers is worthwhile. Going the extra mile to help people reviewing your pull requests will result in faster turnaround, more focused feedback, and happier teammates. No guarantees, but it’s possible they’ll even enjoy it.

Thanks to Goran Peretin and Sarah Brown for reviewing this post and their helpful suggestions. Cross-posted at Medium.

by Ryan at July 26, 2017 03:01 PM

July 23, 2017

adjunct professor

Unmasking Slurs

I'm sympathetic to many of the arguments offered in a guest post by Robert Henderson, Peter Klecha, and Eric McCready (HK&M) in response to Geoff Pullum's post on "nigger in the woodpile," no doubt because they are sympathetic to some of the things I said in my reply to Geoff. But I have to object when they scold me for spelling out the word nigger rather than rendering it as n****r. It seems to me that "masking" the letters of slurs with devices such as this is an unwise practice—it reflects a misunderstanding of the taboos surrounding these words, it impedes serious discussion of their features, and most important, it inadvertently creates an impression that works to the advantage of certain racist ideologies. I have to add that it strikes me that HK&M's arguments, like a good part of the linguistic and philosophical literature on slurs, suffer from a certain narrowness of focus, a neglect both of the facts of actual usage of these words and the complicated discourses that they evoke. So, are you sitting comfortably?

HK&M say of nigger (or as they style it, n****r):

The word literally has as part of its semantic content an expression of racial hate, and its history has made that content unavoidably salient. It is that content, and that history, that gives this word (and other slurs) its power over and above other taboo expressions. It is for this reason that the word is literally unutterable for many people, and why we (who are white, not a part of the group that is victimized by the word in question) avoid it here.

Yes, even here on Language Log. There seems to be an unfortunate attitude — even among those whose views on slurs are otherwise similar to our own — that we as linguists are somehow exceptions to the facts surrounding slurs discussed in this post. In Geoffrey Nunberg’s otherwise commendable post on July 13, for example, he continues to mention the slur (quite abundantly), despite acknowledging the hurt it can cause. We think this is a mistake. We are not special; our community includes members of oppressed groups (though not nearly enough of them), and the rest of us ought to respect and show courtesy to them.

This position is a version of the doctrine that Luvell Anderson and Ernie Lepore call "silentism" (see also here). It accords with the widespread view that the word nigger is phonetically toxic: simply to pronounce it is to activate it, and it isn’t detoxified by placing it in quotation marks or other devices that indicate that the word is being mentioned rather than used, even in written news reports or scholarly discussions. In that way, nigger and words like it seem to resemble strong vulgarities. Toxicity, that is, is a property that’s attached to the act of pronouncing a certain phonetic shape, rather than to an act of assertion, which is why some people are disconcerted when all or part of the word appears as a segment of other words, as in niggardly or even denigrate.

Are Slurs Nondisplaceable?

This is, as I say, a widespread view, and HK&M apparently hold that that is reason enough to avoid the unmasked utterance of the word (written or spoken), simply out of courtesy. It doesn't matter whether the insistence on categorial avoidance reflects only the fact that “People have had a hard time wrapping their heads around the fact that referring to the word is not the same as using it,” as John McWhorter puts it—people simply don't like to hear it spoken or see it written, so just don't.

But HK&M also suggest that the taboo on mentioning slurs has a linguistic basis:

There is a consensus in the semantic/pragmatic and philosophical literature on the topic that slurs aggressively attach to the speaker, committing them to a racist attitude even in embedded contexts. Consider embedded slurs; imagine Ron Weasley says “Draco thought that Harry was a mudblood”, where attributing the thought to Draco isn’t enough to absolve Ron of expressing the attitudes associated with the slur. Indeed, even mentioning slurs is fraught territory, which is why the authors of most papers on these issues are careful to distance themselves from the content expressed.

The idea here is that slurs, like other expressives, are always speaker-oriented. A number of semanticists have made this claim, but always on the basis of intuitions about spare constructed examples—in the present case, one involving an imaginary slur: “imagine Ron Weasley says ‘Draco thought that Harry was a mudblood.’” This is always a risky method for getting at the features of socially charged words, and particularly with these, since most of the people who write about slurs are not native speakers of them, and their intuitions are apt to be shaped by their preconceptions. The fact is that people routinely produce sentences in which the attitudes implicit in a slur are attributed to someone other than the speaker. The playwright Harvey Fierstein produced a crisp example on MSNBC, “Everybody loves to hate a homo.” Here are some others:

In fact We lived, in that time, in a world of enemies, of course… but beyond enemies there were the Micks, and the spics, and the wops, and the fuzzy-wuzzies. A whole world of people not us… (edwardsfrostings.com)

So white people were given their own bathrooms, their own water fountains. You didn’t have to ride on public conveyances with niggers anymore. These uncivilized jungle bunnies, darkies.…You had your own cemetery. The niggers will have theirs over there, and everything will be just fine. (Ron Daniels in Race and Resistance: African Americans in the 21st Century)

All Alabama governors do enjoy to troll fags and lesbians as both white and black Alabamians agree that homos piss off the almighty God. (Encyclopedia Dramatica)

[Marcus Bachmann] also called for more funding of cancer and Alzheimer’s research, probably cuz all those homos get all the money now for all that AIDS research. (Maxdad.com)

And needless to say, slurs are not speaker-oriented when they're quoted. When the New York Times reports that “Kaepernick was called a nigger on social media,” no one would assume that the Times endorses the attitudes that the word conveys.

I make this point not so much because it's important here, but because it demonstrates the perils of analyzing slurs without actually looking at how people use them or regard them—a point I'll come back to in a moment.

Toxicity in Speech and Writing

The assimilation of slurs to vulgarities obscures several important differences between the two. For one thing, mentioning slurs is less offensive in writing than in speech. That makes slurs different from vulgarisms like fucking. The New York Times has printed the latter word only twice, most recently in its page one report of Trump’s Access Hollywood tapes. But it has printed nigger any number of times [added: presumably with the approval of its African American executive editor Dean Baquet] (though in recent years it tends to avoid the word in headlines):

The rhymes include the one beginning, “Eeny, meeny, miney mo, catch a nigger by the toe,” and another one that begins, “Ten little niggers …” May 8, 2014

The Word 'Nigger' Is Part of Our Lexicon Jan. 8, 2011

I live in a city where I probably hear the word “nigger” 50 times a day from people of all colors and ages… Jan 6, 2011

In fan enclaves across the web, a subset of Fifth Harmony followers called Ms. Kordei “Normonkey,” “coon,” and “nigger” Aug 12, 2016

Gwen [Ifill] came to work one day to find a note in her work space that read “Nigger, go home.” Nov. 11, 2016

… on the evening of July 7, 2007, Epstein "bumped into a black woman" on the street in the Georgetown section of Washington … He "called her a 'nigger,' and struck her in the head with an open hand." Charles M. Blow, June 6, 2009.

By contrast, the word is almost never heard in broadcast or free cable (when it does occur, e.g., in a recording, it is invariably bleeped). When I did a Nexis search several years ago on broadcast and cable news transcripts for the year 2012, I found it had been spoken only three times, in each instance by blacks recalling the insults they endured in their childhoods.

To HK&M, this might suggest only that the Times is showing insufficient courtesy to African Americans by printing nigger in full. And it's true that other media are more scrupulous about masking the word than the Times is, notably the New York Post and Fox News and its outlets:

Walmart was in hot water on Monday morning after a product’s description of “N___ Brown” was found on their website. Fox32news, 2017

After Thurston intervened, Artiles continued on and blamed "six n——" for letting Negron rise to power. Fox13news.com, April 19, 2017

In a 2007 encounter with his best friend’s wife, Hogan unleashed an ugly tirade about his daughter Brooke’s black boyfriend.“I mean, I’d rather if she was going to f–k some n—-r, I’d rather have her marry an 8-foot-tall n—-r worth a hundred million dollars! Like a basketball player! I guess we’re all a little racist. F—ing n—-r,” Hogan said, according to a transcript of the recording. New York Post May 2, 2016

"Racism, we are not cured of it," Obama said. "And it's not just a matter of it not being polite to say n***** in public." Foxnews.com June 22, 2015

One might conclude from this, following HK&M's line of argument, that the New York Post and Fox News are demonstrating a greater degree of racial sensitivity than the Times. Still, given the ideological bent of these outlets, one might also suspect that masking is doing a different kind of social work.

Slurs in Scholarship

As an aside, I should note that the deficiencies of the masking approach are even more obvious when we turn to the mention of these words in linguistic or philosophical discussions of slurs and derogative terms, which often involve numerous mentions of a variety of terms. In my forthcoming paper “The Social Life of Slurs,” I discuss dozens of derogative terms, including not just racial, religious, and ethnic slurs, but political derogatives (libtard, commie), geographical derogations (cracker, It. terrone), and derogations involving disability (cripple, spazz, retard), class (pleb, redneck), sexual orientation (faggot, queer, poofter), and nonconforming gender (tranny). I'm not sure how HK&M would suggest I decide which of these call for masking with asterisks—just the prototypical ones like nigger and spic, or others that may be no less offensive to the targeted group? Cast the net narrowly and you seem to be singling out certain forms of bigotry for special attention; cast it widely and the text starts to look like a circus poster. Better to assume that the readers of linguistics and philosophy journals—and linguistics blogs—are adult and discerning enough to deal with the unexpurgated forms.

What's Wrong with Masking?

The unspoken assumption behind masking taboo words is that they’re invested with magical powers—like a conjuror’s spell, they are inefficacious unless they are pronounced or written just so. This is how we often think of vulgarisms of course—that writing fuck as f*ck or fug somehow denatures it, even though the reader knows perfectly well what the word is. That's what has led a lot of people in recent years to assimilate racial slurs to vulgarisms—referring to them with the same kind of initialized euphemism used for shit and fuck and describing them with terms like “obscenity” and “curse word” with no sense of speaking figuratively.

But the two cases are very different. Vulgarities rely for their effect on a systematic hypocrisy: we officially stigmatize them in order to preserve their force when they are used transgressively. (Learning to swear involves both being told to avoid the words and hearing them used, ideally by the same people.) But that’s exactly the effect that we want to avoid with slurs: we don’t want their utterers to experience the flush of guilty pleasure or the sense of complicity that comes of violating a rule of propriety—we don't want people ever to use the words, or even think them. Yet that has been one pernicious effect of the toxification of certain words.

It should give us pause to realize that the assimilation of nigger to naughty words has been embraced not just by many African Americans, but also by a large segment of the cultural and political right. Recall the reactions when President Obama remarked in an interview with Marc Maron’s "WTF" podcast that curing racism was “not just a matter of it not being polite to say ‘nigger’ in public.” Some African Americans were unhappy with the remark—the president of the Urban League said the word "ought to be retired from the English language." Others thought it was appropriate.

But the response from many on the right was telling. They, too, disapproved of Obama’s use of the word, but only because it betrayed his crudeness. A commentator on Fox News wrote:

And then there's the guy who runs the "WTF" podcast — an acronym for a word I am not allowed to write on this website. President Obama agreed to a podcast interview with comedian Marc Maron — a podcast host known for his crude language. But who knew the leader of the free world would be more crude than the host?

The Fox News host Elisabeth Hasselbeck also referenced the name of Maron’s podcast and said,

I think many people are wondering if it’s only there that he would say it, and not, perhaps, in a State of the Union or more public address.

Also on Fox News, the conservative African American columnist Deneen Borelli said that Obama “has really dragged in the gutter speak of rap music. So now he is the first president of rap, of street?”

It’s presumably not an accident that Fox News’s online reports of this story all render nigger as n****r. It reflects the "naughty word" understanding of the taboo that led members of a fraternity at the University of Oklahoma riding on a charter bus to chant, “There will never be a nigger at SAE/You can hang him from a tree, but he'll never sign with me,” with the same gusto that male college students of my generation would have brought to a sing-along of “Barnacle Bill the Sailor.”

That understanding of nigger as a dirty word also figures in the rhetorical move that some on the right have made, in shifting blame for the usage from white racists to black hip hop artists—taking the reclaimed use of the word as a model for white use. That in turn enables them to assimilate nigger—which they rarely distinguish from nigga—to the vulgarities that proliferate in hip hop. Mika Brzezinski and Joe Scarborough of Morning Joe blamed the Oklahoma incident on hip hop, citing the songs of Waka Flocka Flame, who had canceled a concert at the university; as Brzezinski put it:

If you look at every single song, I guess you call these, that he’s written, it’s a bunch of garbage. It’s full of n-words, it’s full of f-words. It’s wrong. And he shouldn’t be disgusted with them, he should be disgusted with himself.

On the same broadcast, Bill Kristol added that “popular culture has become a cesspool,” again subsuming the use of racist slurs, via hip hop, under the heading of vulgarity and obscenity in general.

I don’t mean to suggest that Brzezinski, Scarborough and Kristol aren’t genuinely distressed by the use of racial slurs (I have my doubts about some of the Fox News hosts). But for the respectable sectors of the cultural right—I mean as opposed to the unreconstructed bigots who have no qualms about using nigger at Trump rallies or on Reddit forums—the essential problem with powerful slurs is that they’re vulgar and coarse, and only secondarily that they’re the instruments of social oppression. And the insistence on categorically avoiding unmasked mentions of the words is very easy to interpret as supporting that view. In a way, it takes us back to the disdain for the word among genteel nineteenth-century Northerners. A contributor to an 1894 number of the Century Magazine wrote that “An American feels something vulgar in the word ‘nigger’. A ‘half-cut’ [semi-genteel] American, though he might use it in speech, would hardly print it.” And a widely repeated anecdote had William Seward saying of Stephen Douglas that the American people would never elect as president “[a] man who spells negro with two g’s,” since “the people always mean to elect a gentleman for president.” (That expression, “spelling negro with two g’s,” was popular at the time, a mid-nineteenth-century equivalent to the form n*****r.)

This all calls for care, of course. There are certainly contexts in which writing nigger in full is unwise. But in serious written discussions of slurs and their use, we ought to be able to spell the words out, in the reasonable expectation that our readers will discern our purpose.

As John McWhorter put this point in connection with the remarks Obama made on the Marc Maron podcast:

Obama should not have to say “the N-word” when referring to the word, and I’m glad he didn’t. Whites shouldn’t have to either, if you ask me. I am now old enough to remember when the euphemism had yet to catch on. In a thoroughly enlightened 1990s journalistic culture, one could still say the whole word when talking about it.… What have we gained since then in barring people from ever uttering the word even to discuss it—other than a fake, ticklish nicety that seems almost designed to create misunderstandings?

by Geoff Nunberg at July 23, 2017 01:36 AM

July 21, 2017

Ph.D. student

Propaganda cyberwar: the new normal?

Reuters reports on the Washington Post’s report, citing U.S. intelligence officials, that the UAE arranged the hacking of Qatari government sites to post “fiery but false” quotes from Qatar’s emir. This was used to justify the decision by Saudi Arabia, the UAE, Egypt, and Bahrain to cut diplomatic and transport ties with Qatar.

Qatar says the quotes from the emir are fake, posted by hackers. U.S. intelligence officials now say (to the Post) that they have information about the UAE discussing the hacks before they occurred.

UAE denies the hacks, saying the reports of them are false, and argues that what is politically relevant is Qatar’s Islamist activities.

What a mess.

One can draw a comparison between these happenings in the Middle East and the U.S.’s Russiagate.

The comparison is difficult because any attempt to summarize what is going on with Russiagate runs into the difficulty of aligning with the narrative of one party or another who is presently battling for the ascendancy of their interpretation. But for clarity let me say that by Russiagate I mean the complex of allegations and counterclaims including: that the Russian government, or some Russians who were not associated with the government, or somebody else hacked the DNC and leaked their emails to influence the 2016 election (or its perceived legitimacy); that the Russian government (or maybe somebody else…) propped up alt-right media bots to spread “fake news” to swing voters; that swing voters were identified through the hacking of election records; that some or all of these allegations are false and promoted by politicized media outlets; that if the allegations are true, their impact on the outcome of the 2016 election is insufficient to have changed the outcome (hence not delegitimizing the outcome); the diplomatic spat over recreational compounds used by Russians in the U.S. and by the U.S. in Russia that is now based on the fact that the outgoing administration wanted to reprimand Russia for alleged hacks that allegedly led to its party’s loss of control of the government….

Propaganda

It is dizzying. In both the Qatari and U.S. cases, without very privileged inside knowledge we are left with vague and uncertain impressions of a new condition:

  • the relentless rate at which “new developments” in these stories are made available or recapitulated or commented on
  • the weakness with which they are confirmed or denied (because they are due to anonymous officials or unaccountable leaks)
  • our dependence on trusted authorities for our understanding of the problem when that trust is constantly being eroded
  • the variety of positions taken on any particular event, and the accessibility of these diverse views

Is any of this new? Maybe it’s fair to say it’s “increasing”, as the Internet has continuously inflated the speed and variety and scale of everything in the media, or seemed to.

I have no wish to recapitulate the breathless hyperbole about how media is changing “online”; this panting has been going on continuously for fifteen years at least. But recently I did see what seemed like a new insight among the broader discussion. Once, we were warned against the dangers of filter bubbles, the technologically reinforced perspectives we take when social media and search engines are trained on our preferences. Buzzfeed admirably tried to design a feature to get people Out of Their Bubble, but that got an insightful reaction from Rachel Haser:

In my experience, people understand that other opinions exist, and what the opinions are. What people don’t understand is where the opinions come from, and they don’t care to find out for themselves.

In other words: it is not hard for somebody to get out of their own bubble. Somebody else’s opinion is just a click or a search away. Among the narrow dichotomies of the U.S.’s political field, I’m constantly being told by the left-wing media who the right-wing pundits are and what they are saying, and why they are ridiculous. The right-wing media is constantly reporting on what left-wing people are doing and why they are ridiculous. If I ever want to verify for myself I can simply watch a video or read an article from a different point of view.

None of this access to alternative information will change my mind because my habitus is already set by my life circumstances and offline social and institutional relationships. The semiotic environment does not determine my perspective; the economic environment does. What the semiotic environment provides is, one way or another, an elaborate system of propaganda which reflects the possible practical and political alliances that are available for the deployment of capital. Most of what is said in “the media” is true; most of what is said in “the media” is spun; for the purpose of this post and to distinguish it from responsible scientific research or reporting of “just the facts”, which does happen (!), I will refer to it generically as propaganda.

Propaganda is obviously not new. Propaganda on the Internet is as new as the Internet. As the Internet expands (via smartphones and “things”), so too does propaganda. This is one part of the story here.

The second part of the story is all the hacks.

Hacks

What are hacks? Technically, a hack can be any of many different kinds of interventions into a (socio)technical system that create behavior unexpected by the designer or owner of the system. It is a use or appropriation by somebody (the hacker) of somebody else’s technology, for the former’s advantage. Among other things, hacks can take otherwise secret data, modify data, and cause computers or networks to break down.

“CIA”, by Randall Munroe

There are interesting reasons why hacks have special social and political relevance. One important thing about computer hacking is that it requires technical expertise to understand how it works. This puts the analysis of a hack, and especially the attribution of the hack to some actor, in the hands of specialists. In this sense, “solving” a hack is like “solving” a conventional crime. It requires forensic experts, detectives who understand the motivation of potential suspects, and so on.

Another thing about hacks over the Internet is that they can come from “anywhere”, because Internet. This makes it harder to find hackers and also makes hacks convenient tools for transnational action. It has been argued that as the costs of physical violent war increase with an integrated global economy, the use of cyberwar as a softer alternative will rise.

In the cases described at the beginning of this post, hacks play many different roles:

  • a form of transgression, requiring apology, redress, or retaliation
  • a kind of communication, sending a message (perhaps true, or perhaps false) to an audience
  • the referent of communication, what is being discussed, especially with respect to its attribution (which is necessary for apology, redress, retaliation)

The difficulty with reporting about hacks, at least as far as reporting to the nonexpert public goes, is that every hack raises the specter of uncertainty about where it came from, whether it was as significant as the reporters say, whether the suspects have been framed, and so on.

If a propaganda war is a fire, cyberwar throws gasoline on the flame, because all the political complexity of the media can fracture the narrative around each hack until it too goes up in meaningless postmodern smoke.

Skooling?

I am including, by the way, the use of bots to promote content in social media as a “hack”. I’m blending slightly two meanings of “hack”: the more benign “MIT” sense of a hack as a creative technical solution to a problem, and the more specific sense of circumventing computer security. Since the latter sense of “hack” has expanded to include social engineering efforts such as phishing, the automated manipulation of social media to present a false or skewed narrative as true seems to fit here as well.

I have to say that this sort of media hacking–creating bots to spread “fake news” and so on–doesn’t have a succinct name yet, so I propose “skooling” or “sk00ling”, since

  • it’s a phrase that means something similar to “pwning”/”owning”
  • the activity is like “phishing” in the sense that it is automated social engineering, but en masse (i.e. a school of fish)
  • the point of the hack is to “teach” people something (i.e. some news or rumor), so to speak.

It turns out that this sort of media hacking isn’t just the bailiwick of shadowy intelligence agencies and organized cybercriminals. Run-of-the-mill public relations firms like Bell Pottinger can do it. Naturally this is not considered on par with computer security crime, though there is a sense in which it is a kind of computer-mediated fraud.

Putting it all together, we can imagine a sophisticated propaganda cyberwar campaign that goes something like this: an attacker collects data to identify targets vulnerable to persuasion, via hacks and other ways of gathering publicly or commercially available personal data. The attacker does its best to cover its tracks to preserve plausible deniability. Then it skools the targets to create the desired effect. The skooling is itself a form of hack, and so the source of that attack is also obscured. Propaganda flares about both hacks (the one for data access, and the skooling). But if enough of the targets are affected (maybe they change how they vote in an election, or don’t vote at all), then the conversion rate is good enough and the campaign is worth the investment.

Economics and Expertise

Of course, it would be simplistic to assume that every part of this value chain is performed by the same vertically integrated organization. Previous research on the spam value chain has shown how spam is an industry with many different required resources. Bot-nets are used to send mass emails; domain names are rented to host target web sites; there are even real pharmaceutical companies producing real knock-off viagra for those who have been coaxed into buying it. (See Kanich et al. 2008; Levchenko et al. 2011.) Just like in a real industry, these different resources, or parts of the supply chain, need not all be controlled by the same organization. On the contrary, the cybercrime economy is highly segmented into many different independent actors with limited knowledge of each other, precisely because this makes it harder to catch them. So, for example, somebody who owns a botnet will rent it out to a spammer, who will then contract with a supplier.

Should we expect the skooling economy to work any differently? This depends a little on the arms race between social media bot creators and social media abuse detection and reporting. This has been a complex matter for some time, particularly because it is not always in a social media company’s interest to reject all bot activity as abuse even when this activity can be detected. Skooling is good for Twitter’s business, arguably.

But it may well be the case that the expertise in setting up influential clusters of bots to augment the power of some ideological bloc is available in a more or less mercenary way. A particular cluster of bots in social media may or may not be positioned for a specific form of ideological attack or target; in that case the asset is not as multipurpose as a standard botnet, which can run many different kinds of programs from spam to denial of service. (These are empirical questions, and at the moment I don’t know the answers.)

The point is that because of the complexity of the supply chain, attribution need not be straightforward at all. Taking for example the alleged “alt-right” social media bot clusters, these clusters could be paid for (and their agendas influenced) by a succession of different actors (including right wing Americans, Russians, and whoever else.) There is certainly the potential for false flag operations if the point of the attack is to make it appear that somebody else has transgressed.

Naturally these subtleties don’t help the public understand what is happening to them. If they are aware of being skooled at all, they are lucky. If they can correctly attribute it to one of the parties involved, they are even luckier.

But to be realistic, most won’t have any idea this is happening, or happening to them.

Which brings me to my last point about this, which is the role of cybersecurity expertise in the propaganda cyberwar. Let me define cybersecurity expertise as the skill set necessary to identify and analyze hacks. Of course this form of expertise isn’t monolithic as there are many different attack vectors for hacks and understanding different physical and virtual vectors requires different skills. But knowing which skills are relevant in which contexts is for our purposes just another part of cybersecurity expertise which makes it more inscrutable to those that don’t have it. Cybersecurity expertise is also the kind of expertise you need to execute a hack (as defined above), though again this is a different variation of the skill set. I suppose it’s a bit like the Dark Arts in Harry Potter.

Because in the propaganda cyberwar the media through which people craft their sense of shared reality is vulnerable to cyberattacks, this gives both hackers and cybersecurity experts extraordinary new political powers. Both offensive and defensive security experts are likely to be for hire. There’s a marketplace for their first-order expertise, and then there’s a media marketplace for second-order reporting of the outcomes of their forensic judgments. The results of cybersecurity forensics need not be faithfully reported.

Outcomes

I don’t know what the endgame for this is. If I had to guess, I’d say one of two outcomes is likely. The first is that social media becomes more untrusted as a source of information as the amount of skooling increases. This doesn’t mean that people would stop trusting information from on-line sources, but it does mean that they would pick which on-line sources they trust and read them specifically instead of trusting what people they know share generally. If social media gets less determinative of people’s discovery and preferences for media outlets, then they are likely to pick sources that reflect their off-line background instead. This gets us back into the discussion of propaganda in the beginning of this post. In this case, we would expect skooling to continue, but be relegated to the background like spamming has been. There will be people who fall prey to it and that may be relevant for political outcomes, but it will become, like spam, a normal fact of life and no longer newsworthy. The vulnerability of the population to skooling and other propaganda cyberwarfare will be due to their out-of-band, offline education and culture.

Another possibility is that an independent, trusted, international body of cybersecurity experts becomes involved in analyzing and vetting skooling campaigns and other political hacks. This would have all the challenges of establishing scientific consensus as well as solving politicized high-profile crimes. Of course it would have enemies. But if it were trusted enough, it could become the pillar of political sanity that prevents a downslide into perpetual chaos.

I suppose there are intermediary outcomes as well where multiple poles of trusted cybersecurity experts weigh in and report on hacks in ways that reflect the capital-rich interests that hire them. Popular opinion follows these authorities as they have done for centuries. Nations maintain themselves, and so on.

Is it fair to say that propaganda cyberwar is “the new normal”? It’s perhaps a trite thing to say. For it to be true, just two things must be true. First, it has to be new: it must be happening now, as of recently. I feel I must say this obvious fact only because I recently saw “the new normal” used to describe a situation that in fact was not occurring at all. I believe the phrase du jour for that sort of writing is “fake news”.

I do believe the propaganda cyberwar is new, or at least newly prominent because of Russiagate. We are sensitized to the political use of hacks now in a way that we haven’t been before.

The second requirement is that the new situation becomes normal, ongoing and unremarkable. Is the propaganda cyberwar going to be normal? I’ve laid out what I think are the potential outcomes. In some of them, indeed it does become normal. I prefer the outcomes that result in trusted scientific institutions partnering with criminal justice investigations in an effort to maintain world peace in a more modernist fashion. I suppose we shall have to see how things go.

References

Kanich, C., Kreibich, C., Levchenko, K., Enright, B., Voelker, G.M., Paxson, V. and Savage, S., 2008, October. Spamalytics: An empirical analysis of spam marketing conversion. In Proceedings of the 15th ACM conference on Computer and communications security (pp. 3-14). ACM.

Levchenko, K., Pitsillidis, A., Chachra, N., Enright, B., Félegyházi, M., Grier, C., Halvorson, T., Kanich, C., Kreibich, C., Liu, H. and McCoy, D., 2011, May. Click trajectories: End-to-end analysis of the spam value chain. In Security and Privacy (SP), 2011 IEEE Symposium on (pp. 431-446). IEEE.


by Sebastian Benthall at July 21, 2017 04:00 PM

July 20, 2017

Ph.D. student

What are the right metrics for evaluating the goodness of government?

Let’s assume for a moment that any politically useful language (“freedom”, “liberalism”, “conservatism”, “freedom of speech”, “free markets”, “fake news”, “democracy”, “fascism”, “theocracy”, “radicalism”, “diversity”, etc.) will get coopted by myriad opposed political actors that are either ignorant or uncaring of its original meaning and twisted to reflect only the crudest components of each ideology.

It follows from this assumption that an evaluation of a government based on these terms is going to be fraught to the point of being useless.

To put it another way: the rapidity and multiplicity of framings available for the understanding of politics, and the speed with which framings can assimilate and cleverly reverse each other, makes this entire activity a dizzying distraction from substantive evaluation of the world we live in.

Suppose that nevertheless we are interested in justice, broadly defined as the virtue of good government or well-crafted state.

It’s not going to be helpful to frame this argument, as it has been classically, in the terminology that political ideological battles have been fought in for centuries.

For domestic policy, legal language provides some kind of anchoring of political language. But legal language still accommodates drift (partly by design) and it does not translate well internationally.

It would be better to use an objective, scientific approach for this sort of thing.

That raises the interesting question: if one were to try to measure justice, what would one measure? Assuming one could observe and quantify any relevant mechanism in society, which ones would be worth tracking and optimizing to make society more just?


by Sebastian Benthall at July 20, 2017 02:25 AM

July 19, 2017

Ph.D. student

Glass Enterprise Edition Doesn’t Seem So Creepy

Google Glass has returned — as Glass Enterprise Edition. The company’s website suggests that it can be used in professional settings–such as manufacturing, logistics, and healthcare — for specific work applications, such as accessing training videos, annotated images, handsfree checklists, or sharing your viewpoint with an expert collaborator. This is a very different imagined future with Glass than in the 2012 “One Day” concept video where a dude walks around New York City taking pictures and petting dogs. In fact, the idea of using this type of product in a professional working space, collaborating with experts from your point of view sounds a lot like the original Microsoft HoloLens concept video (mirror).

This is not to say one company followed or copied another (and in fact HoloLens’ more augmented-reality-like interface and Glass’ more heads-up-display-like interface will likely be used for different types of applications). It is, however, a great example of how a product’s creepiness is partly related to whether it’s envisioned as a device to be used in constrained contexts or not. In a great opening line which I think sums this up well, Levi Sumagaysay at Silicon Beat says:

Now Google Glass is productive, not creepy.

As I’ve previously written with Deirdre Mulligan [open access version] about the future worlds imagined by the original video presentations of Glass and HoloLens, Glass’ original portrayal of being always-on (and potentially always recording), invisible to others, taking information from one social context and using it in another, used in public spaces, made it easier to see it as a creepy and privacy-infringing device. (It didn’t help that the first Glass video also only showed the viewpoint of a single imagined user, a 20-something-year-old white man). Its goal seemed to be to capture information about a person’s entire life — from riding the subway to getting coffee with friends, to shopping, to going on dates. And a lot of people reacted negatively to Glass’ initial explorer edition, with Glass bans in some bars and restaurants, campaigns against it, and the rise of the colloquial term “glasshole.” In contrast, HoloLens was depicted as a very visible and very bulky device that can be easily seen, and its use was limited to a few familiar, specific places and contexts — at work or at home, so it’s not portrayed as a device that could record anything at any time. Notably, the HoloLens video also avoided showing the device in public spaces. HoloLens was also presented as a productivity tool to help complete specific tasks in new ways (such as CAD, helping someone complete a task by sharing their point of view, and the ever exciting file sharing), rather than a device that could capture everything about a user’s life. And there were few public displays of concern over privacy. (If you’re interested in more, I have another blog entry with more detail). 

Whether explicit or implicit, the presentation of Glass Enterprise Edition seems to recognize some of the lessons about constraining the use of such an expansive set of capabilities to particular contexts and roles. Using Glass’ sensing, recording, sharing, and display capabilities within the confines of professionals doing manufacturing, healthcare, or other work on the whole helps position the device as something that will not violate people’s privacy in public spaces. (Though it is perhaps still to be seen what types of privacy problems related to Glass will emerge in workplaces, and how those might be addressed through design, use rules, training, and so forth). What is perhaps more broadly interesting is how the same technology can take on different meanings with regards to privacy based on how it’s situated, used, and imagined within particular contexts and assemblages.


by Richmond at July 19, 2017 06:30 PM

July 14, 2017

adjunct professor

Polysemous Pejoratives

Geoff Pullum suggests that the flap over an MP’s use of nigger in the woodpile is overdone:

Anne Marie Morris, the very successful Conservative MP for Newton Abbot in the southwestern county of Devon, did not call anyone a nigger.…
Ms. Morris used a fixed phrase with its idiomatic meaning, and it contained a word which, used in other contexts, can be a decidedly offensive way of denoting a person of negroid racial type, or an outright insult or slur. Using such a slur — referring to a black person as a nigger — really would be a racist act. But one ill-advised use of an old idiom containing the word, in a context where absolutely no reference to race was involved, is not.

Oh, dear. As usual, Geoff's logic is impeccable, but in this case it's led him terribly astray.

As it happens, I addressed this very question in a report I wrote on behalf of the petitioners who asked the Trademark Board to cancel the mark of the Washington Redskins on the grounds that it violated the Lanham Act’s disparagement clause. The team argued, among other things, that “the fact that the term ‘redskin,’ used in singular, lower case form, references an ethnic group does not automatically render it disparaging when employed as a proper noun in the context of sports.” The idea here is that the connotations of a pejorative word do not persist when it acquires a transferred meaning—as the team’s lead attorney put it, “It’s what our word means.” In fact, they added, the use of the name as team name has only positive associations.

I responded, in part:

Nigger has distinct denotations when it is used for a black person, a shade of dark brown, or in phrases like nigger chaser, nigger fish, or niggertoe (a Brazil nut), and in phrases like nigger in the woodpile. All of those expressions are “different words” from the slurring ethnonym nigger from which they are derived, but each of them necessarily inherits its disparaging connotations. The OED now labels all of them as "derogatory" or "offensive." On consideration, it’s obvious why these connotations should persist when an expression acquires a transferred meaning—for more-or-less the same reason the connotations of fuck persist when it's incorporated in fuckwad. The power of a slur is derived from its history of use, a point that Langston Hughes made powerfully in a passage from his 1940 memoir The Big Sea:

The word nigger sums up for us who are colored all the bitter years of insult and struggle in America the slave-beatings of yesterday, the lynchings of today, the Jim Crow cars…the restaurants where you may not eat, the jobs you may not have, the unions you cannot join. The word nigger in the mouths of little white boys at school, the word nigger in the mouth of the foreman at the job, the word nigger across the whole face of America! Nigger! Nigger!

When one uses a slur like nigger, that is, one is "making linguistic community with a history of speakers,” as Judith Butler puts it. One speaks with their voice and evokes their attitudes toward the target, which is why the force of the word itself trumps the speaker’s individual beliefs or intentions. Whoever it was who decided to name a color nigger brown or to call a slingshot a nigger shooter could only have been someone who already used the word to denote black people and who presumed that that usage was common in his community. (Someone who was diffident about using the word in its literal meaning would hardly be comfortable using it metaphorically.) To continue to use those expressions, accordingly, is to set oneself in the line of those who have used the term as a racial slur in the past. Slurs keep their force even when they’re detached from their original reference. That’s why, in 1967, the US Board on Geographic Names removed Nigger from 167 place names. People may have formed agreeable associations in the past around a place called Nigger Beach, or a company called Nigger Lake Holidays, but they don’t redeem the word.

Tony Thorne says that, as late as the 1960s, it was possible to use the expression nigger in the woodpile “without having a conscious racist intention,” and Geoff argues that Morris’s utterance was not a racist act. That depends on what a "racist act" comes down to. It’s fair to assume that she didn’t utter the phrase with any deliberate intention of manifesting her contempt for blacks. But intention or no, anyone who uses any expression containing the word nigger in this day and age is culpably obtuse—all the more since nigger, more than other slurs, has become so phonetically toxic that people are reluctant even to mention it, in the philosophical sense, at least in speech. “Racially insensitive” doesn’t begin to say it.

It's that same obtuseness, I’d argue, that makes the Washington NFL team’s use of redskin objectionable, despite the insistence of the owners and many fans that they intend only to show “reverence toward the proud legacy and traditions of Native Americans” (even if the name of their team is a wholly different word). True, that word seems different from nigger, but only because the romanticized redskin is at a remove from the facts of history. Say “redskin” and what comes to mind is a sanitized and reassuring image of the victims of a long and brutal genocidal war, familiar from a hundred movie Westerns: the fierce, proud primitives, hopelessly outmatched by the forces of civilization, who nonetheless resisted courageously and died like men. (As Pat Buchanan put it in defending the team’s use of the name, “These were people who stood, fought and died and did not whimper.”)

In fact the most deceptive slurs aren’t the ones that express unmitigated contempt for their targets, like nigger and spic. They’re the ones that are tinged with sentimentality, condescension, pity, or exoticism, which are no less reductive or dehumanizing but are much easier to justify to ourselves. Recall the way the hipsters and hippies used spade as what Ken Kesey described as “a term of endearment.” Think of Oriental or cripple, or a male executive’s description of his secretary as “my gal.” Did that usage become sexist only when feminists pointed it out? Was it sexist only to women who objected to it? That's the thing about obtuseness: you can look deep in your heart and come up clean.

[Note: Just to anticipate a potential red herring, the recent Supreme Court decision invalidating the relevant clause of the Lanham Act didn't bear on the Redskins' claim that their name was not disparaging. The Court simply said that disparagement wasn't grounds for denying registration of a mark. The most recent judicial determination in this matter was that of the Court of Appeals, which upheld the petitioners' case.]

by Geoff Nunberg at July 14, 2017 02:06 AM

July 12, 2017

MIMS 2014

Which Countries are the Most Addicted to Coffee?

This is my last blog post about coffee (I promise). Ever since stumbling upon this Atlantic article about which countries consume the most coffee per capita, I’ve pondered a deeper question—not just who drinks the most coffee, but who’s most addicted to the stuff. You might argue that these questions are one and the same, but when it comes to studying addiction, it actually makes more sense to look at it through the lens of elasticity rather than gross consumption.

You might recall elasticity from an earlier blog post. Generally speaking, elasticity is the economic concept relating sensitivity of consumption to changes in another variable (in my earlier post, that variable was income). When it comes to studying addiction, economists focus on price elasticity—i.e. % change in quantity divided by the % change in price. And it makes sense. If you want to know how addicted people are to something, why not see how badly they need it after you jack up the price? If they don’t react very much, you can surmise that they are more addicted than if they do. Focusing on elasticity rather than gross consumption allows for a richer understanding of addiction. That’s why economists regularly employ this type of analysis when it comes to designing public policy around cigarette taxes.
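
In symbols, \epsilon = %\Delta Q / %\Delta P . As a made-up illustration: if a 10% price hike leads to only a 5% drop in consumption, then \epsilon = -5% / 10% = -0.5 and demand is price inelastic; if consumption instead drops by 20%, \epsilon = -2.0 and demand is elastic.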

I would never want to tax coffee, but I am interested in applying the same approach to calculate the price elasticity of coffee across different countries. While price elasticity is not a super complicated idea to grasp, in practice it is actually quite difficult to calculate. In the rest of this blog post, I’ll discuss these challenges in detail.

Gettin’ Da Data

The first problem for any data analysis is locating suitable data; in this case, the most important data is information for the two variables that make up the definition of elasticity: price and quantity. Thanks to the International Coffee Organization (ICO), finding data about retail coffee prices was surprisingly easy for 26 different countries reaching (in some cases) back to 1990.

Although price data was remarkably easy to find, there were still a few wrinkles to deal with. First, for a few countries (e.g. the U.S.), there was missing data in some years. To remedy this, I used the handy R package imputeTS to generate reasonable values for the gaps in the affected time series.
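
As a rough sketch of what that step looks like (this is not my exact code; the data frame and column names are hypothetical, and the function is spelled na.interpolation in older versions of imputeTS):

```r
library(imputeTS)

# prices: data frame with columns country, year, retail_price (NA where the ICO data has gaps)
prices_filled <- do.call(rbind, lapply(split(prices, prices$country), function(d) {
  d <- d[order(d$year), ]   # keep each country's series in time order
  d$retail_price <- na_interpolation(d$retail_price, option = "linear")
  d
}))
```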

The other wrinkle related to inflation. I searched around ICO’s website to see if their prices were nominal or whether they controlled for the effects of inflation. Since I couldn’t find any mention of inflation, I assumed the prices were nominal. Thus, I had to make a quick stop at the World Bank to grab inflation data so that I could deflate nominal prices to real prices.
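
Concretely, that deflation step amounts to building a cumulative price index per country and dividing it out. A sketch, again with hypothetical column names (inflation is the World Bank’s annual % change in consumer prices, merged onto the price data):

```r
deflate <- function(d) {
  d <- d[order(d$year), ]
  idx <- cumprod(1 + d$inflation / 100)              # cumulative price index
  d$real_price <- d$retail_price / (idx / idx[1])    # real price, in first-year terms
  d
}
prices_real <- do.call(rbind, lapply(split(prices_filled, prices_filled$country), deflate))
```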

While I was at the Bank, I also grabbed data on population and real GDP. The former is needed to get variables on a per capita basis (where appropriate), while the latter is needed as a control in our final model. Why? If you want to see how people react to changes in the price of coffee, it is important to hold constant any changes in their income. We’ve already seen how positively associated coffee consumption is with income, so this is definitely a variable you want to control for.

Getting price data might have been pretty easy, but quantity data posed more of a challenge. In fact, I wasn’t able to find any publicly available data about country-level coffee consumption. What I did find, however, was data about coffee production, imports and exports (thanks to the UN’s Food and Agriculture Organization). So, using a basic accounting logic (i.e. production – exports + imports), I was able to back into a net quantity of coffee left in a country in a given year.
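
In code the accounting step is a one-liner; combined with the World Bank population series, it yields the per capita consumption proxy used in the models below (column names again hypothetical):

```r
# fao: country-year coffee production, imports, and exports (in tonnes)
fao$net_quantity <- fao$production - fao$exports + fao$imports

coffee <- merge(merge(fao, prices_real, by = c("country", "year")),
                pop, by = c("country", "year"))
coffee$cons_pc <- coffee$net_quantity / coffee$population    # proxy for per capita consumption
```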

There are obvious problems with this approach. For one thing, it assumes that all coffee left in a country after accounting for imports and exports is actually consumed. Although coffee is a perishable good, it is likely that at least in some years, quantity is carried over from one year to another. And unfortunately, the UN’s data gives me no way to account for this. The best I can hope for is that this net quantity I calculate is at least correlated with consumption. Since elasticity is chiefly concerned with the changes in consumption rather than absolute levels, if they are correlated, then my elasticity estimates shouldn’t be too severely biased. In all, the situation is not ideal. But if I couldn’t figure out some sort of workaround for the lack of publicly available coffee consumption data, I would’ve had to call it a day.

Dogged by Endogeneity

Once you have your data ducks in a row, the next step is to estimate your statistical model. There are several ways to model addiction, but the simplest is the following:

log(cof\_cons\_pc) = \alpha + \beta_{1} * log(price) + \beta_{2} * log(real\_gdp\_pc) + \beta_{3} * year + \varepsilon

The above linear regression equation models per capita coffee consumption as a function of price while controlling for the effects of income and time. The regression is taken in logs mostly as a matter of convenience, since logged models allow the coefficient on price, \beta_1, to be your estimate of elasticity. But there is a major problem with estimating the above equation, as economists will tell you, and that is the issue of endogeneity.
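
Setting that problem aside for a moment, the naive version of this model for a single country is one lm() call. A sketch, continuing with the hypothetical column names from above (real_gdp_pc is the World Bank real GDP per capita series, merged in the same way as population):

```r
d <- subset(coffee, country == "Netherlands")
d <- d[order(d$year), ]

# Naive (still endogenous) model: the coefficient on log(real_price) is the elasticity estimate
m_naive <- lm(log(cons_pc) ~ log(real_price) + log(real_gdp_pc) + year, data = d)
summary(m_naive)
```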

Endogeneity can mean several different things, but in this context, it refers to the fact that you can’t isolate the effect of price on quantity because the data you have is the sum total of all the shocks that shift both supply and demand over the course of a year. Shocks can be anything that affect supply/demand apart from price itself—from changing consumer tastes to a freak frost that wipes out half the annual Colombian coffee crop. These shocks are for the most part unobserved, but all together they define the market dynamics that jointly determine the equilibrium quantity and price.

To isolate the effect of price, you have to locate the variation in price that is not also correlated with the unobserved shocks in a given year. That way, the corresponding change in quantity can safely be attributed to the change in price alone. This strategy is known as using an instrumental variable (IV). In the World Bank Tobacco Toolkit, one suggested IV is lagged price (of cigarettes in their case, though the justification is the same for coffee). The rationale is that shocks from one year are not likely to carry over to the next, while at the same time, lagged price remains a good predictor of price in the current period. This idea has its critics (which I’ll mention later), but has obvious appeal since it doesn’t require any additional data.

To implement the IV strategy, you first run the model:

price_t = \alpha + \beta_{1} * price_{t-1} + \beta_{2} * real\_gdp\_pc_t + \beta_{3} * year_t + \varepsilon_t

You then use the predicted values of price_t from this model as the values for price in the original elasticity model I outlined above. This is commonly known as Two Stage Least Squares regression.
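
A minimal sketch of the two stages, continuing with the hypothetical Netherlands data frame from above (in practice AER::ivreg does this in one call and, unlike the manual version, reports correct second-stage standard errors):

```r
d$lag_real_price <- c(NA, head(d$real_price, -1))    # last year's price is the instrument

# First stage: price regressed on its lag plus the exogenous controls
stage1 <- lm(real_price ~ lag_real_price + real_gdp_pc + year, data = d)
d$price_hat <- predict(stage1, newdata = d)          # NA for the first year, where no lag exists

# Second stage: the elasticity model with predicted price standing in for price
stage2 <- lm(log(cons_pc) ~ log(price_hat) + log(real_gdp_pc) + year, data = d)
summary(stage2)
```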

Next Station: Non-Stationarity

Endogeneity is a big pain, but it’s not the only issue that makes it difficult to calculate elasticity. Since we’re relying on time series data (i.e. repeated observations of country level data) as our source of variation, we open ourselves to the various problems that often pester inference in time series regression as well.

Perhaps the most severe problem posed by time series data is the threat of non-stationarity. What is non-stationarity? Well, when a variable is stationary, its mean and variance remain constant over time. Having stationary variables in linear regression is important because when they’re not, it can cause the coefficients you estimate in the model to be spurious—i.e. meaningless. Thus, finding a way to make sure your variables are stationary is rather important.

This is all made more complicated by the fact that there are several different flavors of stationarity. A series might be trend stationary, which means that it’s stationary around a trend line. Or it might be difference stationary. That means that it’s stationary after you difference the series, where differencing is to subtract away the value of the series from the year before, so you’re just left with the change from year to year. A series could also have structural breaks, like an outlier or a lasting shift in the mean (or multiple shifts). And finally, if two or more series are non-stationary but related to one another by way of something called co-integration, then you have to apply a whole different analytical approach.

At this point, a concrete example might help to illustrate the type of adjustments that need to be made to ensure stationarity of a series. Take a look at this log-scaled time series of coffee consumption in The Netherlands:

[Figure: log-scaled time series of coffee consumption in The Netherlands]

It seems like overall there is a slightly downward trend from 1990 through 2007 (with a brief interruption in 1998/99). Then, in 2008 through 2010, there was a mean shift downward, followed by another shift down in 2011. But starting in 2011, there seems to be a strong upward trend. All of these quirks in the series are accounted for in my elasticity model for The Netherlands using dummy variables—interacted with year when appropriate to allow for different slopes in different epochs described above.

This kind of fine-grained analysis had to be done on three variables per model—across twenty-six. different. models. . . Blech. Originally, I had hoped to automate much of this stage of the analysis, but the idiosyncrasies of each series made this impossible. The biggest issue was the structural breaks, which can easily throw off the Augmented Dickey-Fuller test, the workhorse for statistically detecting whether or not a series is stationary.
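
For reference, the basic test itself is only a couple of lines with the tseries package; it was the break-hunting and dummy construction described above that resisted automation:

```r
library(tseries)

x <- log(d$cons_pc)    # the (hypothetical) Netherlands series from above, in logs

adf.test(x)            # null hypothesis: the series has a unit root, i.e. is non-stationary
adf.test(diff(x))      # the differenced series often passes where the level series does not
```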

This part of the project definitely took the longest to get done. It also involved a fair amount of judgment calls—when should a series be de-trended, or differenced, or how to separate the epochs when structural breaks were present. All this lends to a critique of time series analysis that it can often be more of an art than science. The work was tedious, but at the very least, it gave me confidence that it might be a while before artificial intelligence replaces humans for this particular task. In prior work, I actually implemented a structural break detection algorithm I once found in a paper, but I wasn’t impressed with its performance, so I wasn’t going to go down that rabbit hole again (for this project, at least).

Other Complications

Even after you’ve dealt with stationarity, there are still other potential problem areas. Serial correlation is one of them. What is serial correlation? Well, one of the assumptions in linear regression is that the error term, or \varepsilon, as it appears in the elasticity model above, is independent across different observations. Since you observe the same entity multiple times in time series data, the observations in your model are by definition dependent, or correlated. A little serial correlation isn’t a big deal, but a lot of serial correlation can cause your standard errors to become biased, and you need those for fun stuff like statistical inference (confidence intervals/hypothesis testing).

Another problem that can plague your \varepsilon’s is heteroskedasticity, which is a complicated word that means the variance of your errors is not constant over time. Fortunately, both heteroskedasticity and serial correlation can be controlled for using robust covariance calculations known as Newey-West estimators. These methods are easily accessible in R via the sandwich package, and I used them whenever I observed heteroskedasticity or serial correlation in my models.
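
A sketch of how that looks, applied to the hypothetical second-stage model from earlier:

```r
library(sandwich)
library(lmtest)

# Re-test the coefficients using heteroskedasticity- and autocorrelation-consistent
# (Newey-West) standard errors in place of the usual ones
coeftest(stage2, vcov = NeweyWest(stage2))
```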

A final issue is the problem of multicollinearity. Multicollinearity is not strictly a time series related issue; it occurs whenever the covariates in your model are highly correlated with one another. When this happens, your beta estimates become highly unreliable and unstable. This occurred in the models for Belgium and Luxembourg between the IV price variable and GDP. There are not many good options when your model suffers from multicollinearity. Keep the troublesome covariate, and you’re left with some really weird coefficient values. Throw it out, and your model could suffer from omitted variable bias. In the end, I excluded GDP from these models because the estimated coefficients looked less strange.

Results/Discussion

In the end, only two of the elasticities I estimated ended up being statistically significant—or reliable—estimates of elasticity (for Lithuania and Poland). The rest were statistically insignificant (at \alpha = 0.05), which means that positive values of elasticity are among the plausible values for a majority of the estimates. From an economic theory standpoint this makes little sense, since it is a violation of the law of demand. Higher prices should lead to a fall in demand, not a rise. Economists have a name for goods that violate the law of demand—Giffen goods—but they are very rare to encounter in the real world (some say that rice in China is one of them); I’m pretty sure coffee is a normal, well-behaved good.

Whenever you wind up with insignificant results, there is always a question of how to present them (if at all). Since the elasticity models produce actual point estimates for each country, I could have just put those point estimates in descending order and spit out that list as a ranking of the countries most addicted to coffee. But that would be misleading. Instead, I think it’s better to display entire confidence intervals—particularly to demonstrate the variance in certainty (i.e. width of the interval) across the different country estimates. The graphic below ranks countries from top to bottom in descending order by width of confidence interval. The vertical line at -1.0% is a reference for the threshold between goods considered price elastic ( \epsilon < -1.0% ) versus price inelastic ( -1.0% < \epsilon < 0.0% ).

[Figure: confidence intervals for the estimated price elasticity of coffee in each country, ordered by interval width]

When looking at the graphic above, it is important to bear in mind that apart from perhaps a few cases, it is not possible to draw conclusions about the differences in elasticities between individual countries. You cannot, for example, conclude that coffee is more elastic in the United States relative to Spain. To generate a ranking of elasticities across all countries (and arrive at an answer to the question posed by the post title), we would need to perform a battery of pairwise comparisons between all the different countries ([26*25]/2 = 325 in total). Based on the graphic above, however, I am not convinced this would be worth the effort. Given the degree of overlap across confidence intervals—and the fact that the significance-level correction to account for multiple comparisons would only make this problem worse—I think the analysis would just wind up being largely inconclusive.

In the end, I’m left wondering what might be causing the unreliable estimates. In some cases, it could just be a lack of data; perhaps with access to more years—or more granular data taken at monthly or quarterly intervals—confidence intervals would shrink toward significance. In other cases, I might have gotten unlucky in terms of the variation of a given sample. But I am also not supremely confident in the fidelity of my two main variables, quantity and price, since both variables have artificial qualities to them. Quantity is based on values I synthetically backed into rather than coming from a concrete, vetted source, and price is derived from IV estimation. Although I trusted the World Bank when it said lagged price was a valid IV, I subsequently read some literature that said it may not solve the endogeneity issue after all. Specifically, it argues the assumption that shocks are not serially correlated is problematic.

If lagged price is not a valid IV, then another variable must be found that is correlated with price, but not with shocks to demand. Upon another round of Googling, I managed to find data with global supply-side prices through the years. It would be interesting to compare the results using these two different IVs. But then again, I did promise that this would be my last article about coffee… Does that mean the pot is empty?



by dgreis at July 12, 2017 11:23 PM

Ph.D. student

Overdetermined outcomes in social science

One of the reasons why it’s important to think explicitly about downward causation in society is how it interacts with considerations of social and economic justice.

Purely bottom-up effects can seem to have a different social valence than top-down effects.

One example, as noted by David Massad, has to do with segregation in housing. Famously, the Schelling segregation model shows how segregation in housing could be the result of autonomous individual decisions by people with a small preference for being with others like themselves (homophily). But historically in the United States, one factor influencing segregation was redlining, a top-down legal process.

Today, there is no question that there is great inequality in society. But the mechanism behind that inequality is unknown (at least to me, in my current informal investigation of the topic). One explanation, no doubt overly simplified, would be to say that wealth distribution is just a disorganized heavy tail distribution. A more specific account from Piketty would frame the problem as an organized heavy tail distribution based on the feedback effect of the relative difference in rate of return on capital versus labor. Naidu would argue that this difference in the rate of return is due to political agency on the part of capitalists, which would imply a downward causation mechanism from capitalist class interest to individual wealth distributions.

The key thing to note here is that the mere fact of inequality does not give us a lot to distinguish empirically between these competing hypotheses.

It is possible that the specific distribution (i.e. cumulative distribution function) of inequality can shed light on which, if any, of these hypotheses hold. To work this out, we would need to come up with a likelihood function for the probability of the wealth distributions occurring under each hypothesis. Likely the result would be subtle: the difference in the likelihood functions would be not about whether but about how much inequality results, and whether and in what ways the wealth distribution is stratified.

Of course, another approach would be to collect other data besides the wealth distribution that bears on the problem. But what would that be? The legal record of the tax code, perhaps. But this does not straightforwardly solve our problem. Whatever the laws are and however they have changed, we cannot be sure of their effect on economic outcomes without testing them somehow against the empirical distribution again.

Another challenge to teasing these hypotheses apart is that they are not entirely distinct from each other. A disorganized heavy tail distribution posits a large number of contributing factors. Difference in rate of return on capital may be one important factor. But is it everything? Need it be everything to be an important social scientific theory?

A principled way of going about the problem would be to regress the total distribution against a number of potential factors, including capital returns and income and whatever other factors come to mind. This is the approach naturally taken in data science and machine learning. The result would be the identification of a vector of coefficients that would indicate the relative importance of different factors on total wealth.
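
A toy version of that analysis on simulated data (everything here is made up; the point is only the shape of the exercise and of its output):

```r
set.seed(3)
n <- 5000
factors <- as.data.frame(matrix(rnorm(n * 20), ncol = 20))
names(factors) <- paste0("factor", 1:20)

# Simulated log wealth: every factor contributes a little, none dominates
log_wealth <- rowSums(factors * 0.1) + rnorm(n, sd = 0.5)

fit <- lm(log_wealth ~ ., data = factors)
sort(coef(fit)[-1], decreasing = TRUE)     # the vector of relative importances

# Any single factor can be dropped with minimal impact on the overall fit
summary(fit)$r.squared
summary(update(fit, . ~ . - factor1))$r.squared
```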

Suppose there are 20 such factors, any one of which can be removed with minimal impact on the overall outcome. What then?


by Sebastian Benthall at July 12, 2017 03:56 PM

July 11, 2017

Ph.D. student

Why disorganized heavy tail distributions?

I wrote too soon.

Miller and Page (2009) do indeed address “fat tail” distributions explicitly in the same chapter on Emergence discussed in my last post.

However, they do not touch on the possibility that fat tail distributions might be log normal distributions generated by the Central Limit Theorem, as is well-documented by Mitzenmacher (2004).

Instead, they explicitly make a different case. They argue that there are two kinds of complexity:

  • disorganized complexity, complexity where extreme values balance each other out to create average aggregate behavior according to the Law of Large Numbers and Central Limit Theorem.
  • organized complexity, where positive and negative feedback can result in extreme outcomes, best characterized by power law or “heavy tail” distributions. Preferential attachment is an example of a feedback based mechanism for generating power law distributions (in the specific case of network degrees).

Indeed, this rough breakdown of possible scientific explanations (the relatively orderly null-hypothesis world of normal distributions, and the chaotic, more accurately rendered world of heavy tail distributions) was the one I had before I started studying complex systems and statistics more seriously in grad school.

Only later did I come to the conclusion that this is a pervasive error, because of the ease with which log normal distributions (which may be “disorganized”) can be confused with power law distributions (which tend to be explained by “organized” processes). I am a bit disappointed that Miller and Page repeat this error, but then again their book was written in 2009. I wonder whether the methodological realization (which I assume I’m not alone in, as I hear it confirmed informally in conversations with smart people sometimes) is relatively recent.

Because this is something so rarely discussed in focus, I think it may be worth pondering exactly why disorganized heavy tail distributions are not favored in the literature. There are several reasons I can think of, which I’ll offer informally here as possibilities or hypotheses.

One reason that I’ve argued for before here is that organized processes are more satisfying as explanations than disorganized processes. Most people are not very good at thinking about probabilities (Tetlock and Gardner (2016) have a great, accessible discussion of why this is the case). So to the extent that the Law of Large Numbers or Central Limit Theorem have true explanatory power, it may not be the kind of explanation most people are willing to entertain. This apparently includes scientists. Rather, a simple explanation in terms of feedback may be the kind of thing that feels like a robust scientific finding, even if there’s something spurious about it when viewed rigorously. (This is related, I think, to arguments about the end of narrative in social science.)

Another reason why disorganized heavy tail distributions may be underutilized as scientific explanations is that it is counter-intuitive that a disorganized process can produce such extreme inequality in outcomes.

This has to do with the key transformation that is the difference between a normal and a log normal distribution. A normal distribution is a bell-shaped distribution one gets when one adds a large number of independent random variables.

The log normal distribution is a heavy tail distribution one gets by multiplying a large number of positively valued independent random variables. While it does have a bell or hump, the top of the bell is not at the arithmetic mean, because the sides of the bell are skewed in size. But this is not necessarily because of the dominance of any particular factor (as would be expected if, for example, a single factor were involved in a positive feedback loop). Rather, it is the mathematical fact of many factors multiplied creating extraordinarily high values which creates the heavy right-hand side of the bell.

One way to put it is that rather than having a “deep” positive feedback loop where a single factor amplifies itself many times over, disorganized heavy tails have “shallow” positive feedback where each of many factors has a single and simultaneous amplifying effect on the impact of all the others. This amplification effect is, like multiplication itself, commutative, which means that no single factor can be considered to be causally prior to the others.
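
A quick simulation shows how little organization this requires (a sketch; the number and range of the factors are arbitrary):

```r
set.seed(1)
n_people  <- 100000
n_factors <- 20

# Each outcome is the product of many modest, independent, positive factors,
# none of which dominates or feeds back on the others.
factors  <- matrix(runif(n_people * n_factors, min = 0.5, max = 1.5), nrow = n_people)
outcomes <- apply(factors, 1, prod)

mean(outcomes); median(outcomes)       # the mean sits well above the median
quantile(outcomes, c(0.50, 0.99))      # a heavy right tail from "disorganized" multiplication
hist(log(outcomes))                    # roughly bell-shaped in logs, i.e. approximately log normal
```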

Once again, this defies specificity in an explanation, which may be for some people an explanatory desideratum.

But these extreme values are somehow ones that people demand specific explanations for. This is related, I believe, to the desire for a causal lever with which people can change outcomes, especially their own personal outcomes.

There’s an important political question implicated by all this, which is: why is wealth and power concentrated in the hands of the very few?

One explanation that must be considered is the possibility that society is accumulated history, and over thousands of years an innumerable number of independent factors have affected the distribution of wealth and power. Though rather disorganized, these factors amplify each other multiplicatively, resulting in the distribution that we see today.

The problem with this explanation is that it seems there is little to be done about this state of affairs. A person can affect a handful of the factors that contribute to their own wealth or the wealth of another, but if there are thousands of them then it’s hard to get a grip. One must view the other as simply lucky or unlucky. How can one politically mobilize around that?

References

Miller, John H., and Scott E. Page. Complex adaptive systems: An introduction to computational models of social life. Princeton University Press, 2009.

Mitzenmacher, Michael. “A brief history of generative models for power law and lognormal distributions.” Internet mathematics 1.2 (2004): 226-251.

Tetlock, Philip E., and Dan Gardner. Superforecasting: The art and science of prediction. Random House, 2016.


by Sebastian Benthall at July 11, 2017 03:12 PM

July 10, 2017

Ph.D. student

The Law: Miller and Page on Emergence, and statistics in social science

I’m working now through Complex Adaptive Systems by Miller and Page and have been deeply impressed with the clarity with which they lay out key scientific principles.

In their chapter on “Emergence”, they discuss the key problem in science of accounting for how some phenomena emerge from lower level phenomena. In the hard sciences, examples include how the laws and properties of chemistry emerge from the laws and properties of particles as determined by physics. It has been suggested that the psychological states of the mind emerge from the physical states of the brain. In social sciences, there is the open question of how social forms emerge from individual behavior.

Miller and Page acknowledge that “unfortunately, emergence is one of those complex systems ideas that exists in a well-trodden, but relatively untracked, bog of discussions”. Epstein’s (2006) treatment of it is particularly aggressive, as he takes aim at early emergence theorists who used the term in a kind of mystifying sense and then attempts to replace this usage with his own, much more concrete one.

So far in my reading on the subject there has been a lack of mathematical rigor in its treatment, but I’ve been impressed now with what Miller and Page specifically bring to bear on the problem.

Miller and Page provide two clear criteria for an emergent phenomenon:

  • “Emergence is a phenomenon whereby well-formulated aggregate behavior arises from localized, individual behavior.”
  • “Such aggregate behavior should be immune to reasonable variations in the individual behavior.”

Significantly, their first example of such an effect comes from statistics: it’s the Law of Large Numbers and related theorems like the Central Limit Theorem.

These are basic theorems in statistics about the properties of a sample of random variables. The Law of Large Numbers states that the average of a large number of samples will converge on the expected value of a single sample. The Central Limit Theorem states that the distribution of the sum of many identical and independent random variables tends towards a normal (or Gaussian) distribution, whatever the distribution of the underlying variables.

Though these are mathematical statements about random variables and their aggregate value, Miller and Page correctly generalize from them to say that these Laws apply to the relationship between individual behavior and aggregate patterns. The emergent phenomena here (the mean or distribution of outcomes) fulfill their criteria for emergent properties: they are well formed, and they depend less and less on individual behavior the more individuals are involved.
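
A few lines of simulation illustrate both criteria (a sketch: the individual behavior here is an arbitrary skewed random draw, yet the aggregate is stable and bell-shaped regardless):

```r
set.seed(2)

# Individual behavior: a skewed, decidedly non-Gaussian random quantity
indiv <- rexp(1000, rate = 1/10)
sd(indiv) / mean(indiv)     # relative variation across individuals: about 1

# Aggregate behavior: the same quantity summed over 1,000 individuals, across 10,000 trials
days <- replicate(10000, sum(rexp(1000, rate = 1/10)))
sd(days) / mean(days)       # relative variation of the aggregate: about 0.03
hist(days)                  # and approximately normal, per the Central Limit Theorem
```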

These Laws are taught in Statistics 101. What is under-emphasized, in my experience, is the extent to which these Laws are determinative of social phenonema. Miller and Page cite an intriguing short story by Robert Coates, entitled “The Law” (1956), that explores the idea of what would happen if the Law of Large Numbers gave out. Suddenly traffic patterns would be radically unpredictable as the number of people on the road, or in a shopping mall, or outdoors enjoying nature, would be far from average far more often than we’re used to. Absurdly, the short story ends when the statistical law is at last adopted by Congress. This is absurd because of course this is one Law that affects all social and physical reality all the time.

Where this fact crops up less frequently than it should is in discussions of the origins of distributions of wide inequality. Physicists have for a couple decades been promoting the idea that the highly unequal “long tail” distributions found in society are likely power law distributions. Clauset, Shalizi, and Newman have developed a statistical test which, when applied, demonstrates that the empirical support for many of these claims isn’t truly there. Often these distributions are empirically closer to a log normal distribution, which can be explained by the Central Limit Theorem when one combines variables through multiplication rather than addition. My own small and flawed contribution to this long and significant line of research is here.

As far as explanatory hypotheses go, the immutable laws of statistics have advantages and disadvantages. Their advantage is that they are always correct. The disadvantage of these Laws in particular is that they do not lend themselves to narrative explanation, which means they are in principle excluded from those social sciences that hold themselves to argument via narration. Narration, it is argued, is more interesting and compelling for audiences not well-versed in the general science of statistics. Since many social sciences are interested in discussion of inequality in society, this seems to put these disciplines at odds with each other. Some disciplines, the ones converging now into computational social science, will use these Laws and be correct, but uninteresting. Other disciplines will ignore these laws and be incorrect but more compelling to popular audiences.

This is a disturbing conclusion, one that I believe strikes deeply at the heart of the epistemic crisis affecting politics today. No wonder we have “post-truth” media and “fake news” when our social scientists can’t even bring themselves to accept the inconvenience of learning basic statistics. I’m not speaking out of abstract concern here. I’ve encountered this problem personally and quite dramatically myself through my early dissertation work. Trying to make this very point proved so anathema to the way social sciences have been constructed that I had to abandon the project for lack of comprehending faculty support. This is despite The Law, as Coates refers to it whimsically, being well known and “on the books” for a very, very long time.

It is perhaps disconcerting to social scientists that their fields of expertise may be characterized well by the same kind of laws, grounded in mathematics, that determine chemical interactions and the evolution of biological ecosystems. And indeed there is a strong discourse around downward causation in social systems that discusses the ways in which individuals in society may differ from individual random variables in a large sample. However, a clear understanding of statistical generative processes must be brought to bear on the understanding of social phenomena, as a kind of null hypothesis. These statistical laws are due a high prior probability, in the Bayesian sense. I hope to discover one day how to formalize this intuitively clear conclusion in more authoritative, mathematical terms.

References

Benthall, Sebastian. “Testing Generative Models of Online Collaboration with BigBang.” Proceedings of the 14th Python in Science Conference, 2015, pp. 182–189. Available at https://conference.scipy.org/proceedings/scipy2015/sebastian_benthall.html.

Benthall, Sebastian. “Philosophy of computational social science.” Cosmos and History: The Journal of Natural and Social Philosophy 12.2 (2016): 13-30.

Coates, Robert M. 1956. “The Law.” In The World of Mathematics, Vol. 4, edited by James R. Newman, 2268-71. New York: Simon and Schuster.

Clauset, Aaron, Cosma Rohilla Shalizi, and Mark E. J. Newman. “Power-law distributions in empirical data.” SIAM Review 51.4 (2009): 661-703.

Epstein, Joshua M. Generative social science: Studies in agent-based computational modeling. Princeton University Press, 2006.

Miller, John H., and Scott E. Page. Complex Adaptive Systems: An Introduction to Computational Models of Social Life. Princeton University Press, 2009.

Sawyer, R. Keith. “Simulating emergence and downward causation in small groups.” Multi-agent-based simulation. Springer Berlin Heidelberg, 2000. 49-67.


by Sebastian Benthall at July 10, 2017 05:07 PM

July 06, 2017

Ph.D. student

Capital, democracy, and oligarchy

1. Capital

Bourdieu nicely lays out a taxonomy of forms of capital (1986), including economic capital (wealth), which we are all familiar with, as well as cultural capital (skills, elite tastes) and social capital (relationships with others, especially other elites). By saying that all three categories are forms of capital, what he means is that each “is accumulated labor (in its materialized form or its ‘incorporated,’ embodied form) which, when appropriated on a private, i.e., exclusive, basis by agents or groups of agents, enables them to appropriate social energy in the form of reified or living labor.” In his account, capital in all its forms is what gives society its structure, including especially its economic structure.

[Capital] is what makes the games of society – not least, the economic game – something other than simple games of chance offering at every moment the possibility of a miracle. Roulette, which holds out the opportunity of winning a lot of money in a short space of time, and therefore of changing one’s social status quasi-instantaneously, and in which the winning of the previous spin of the wheel can be staked and lost at every new spin, gives a fairly accurate image of this imaginary universe of perfect competition or perfect equality of opportunity, a world without inertia, without accumulation, without heredity or acquired properties, in which every moment is perfectly independent of the previous one, every soldier has a marshal’s baton in his knapsack, and every prize can be attained, instantaneously, by everyone, so that at each moment anyone can become anything. Capital, which, in its objectified or embodied forms, takes time to accumulate and which, as a potential capacity to produce profits and to reproduce itself in identical or expanded form, contains a tendency to persist in its being, is a force inscribed in the objectivity of things so that everything is not equally possible or impossible. And the structure of the distribution of the different types and subtypes of capital at a given moment in time represents the immanent structure of the social world, i.e. , the set of constraints, inscribed in the very reality of that world, which govern its functioning in a durable way, determining the chances of success for practices.

Bourdieu is clear in his writing that he does not intend this to be taken as unsubstantiated theoretical posture. Rather, it is a theory he has developed through his empirical research. Obviously, it is also informed by many other significant Western theorists, including Kant and Marx. There is something slightly tautological about the way he defines his terms: if capital is posited to explain all social structure, then any social structure may be explained according to a distribution of capital. This leads Bourdieu to theorize about many forms of capital less obvious than wealth, such as symbolic capital, like academic degrees.

The cost of such a theory is that it demands that one begin the difficult task of enumerating different forms of capital and, importantly, the ways in which some forms of capital can be converted into others. It is a framework which, in principle, could be used to adequately explain social reality in a properly scientific way, as opposed to other frameworks that seem more intended to maintain the motivation of a political agenda or academic discipline. Indeed there is something “interdisciplinary” about the very proposal to address symbolic and economic power in a way that deals responsibly with their commensurability.

So it has to be posited simultaneously that economic capital is at the root of all the other types of capital and that these transformed, disguised forms of economic capital, never entirely reducible to that definition, produce their most specific effects only to the extent that they conceal (not least from their possessors) the fact that economic capital is at their root, in other words – but only in the last analysis – at the root of their effects. The real logic of the functioning of capital, the conversions from one type to another, and the law of conservation which governs them cannot be understood unless two opposing but equally partial views are superseded: on the one hand, economism, which, on the grounds that every type of capital is reducible in the last analysis to economic capital, ignores what makes the specific efficacy of the other types of capital, and on the other hand, semiologism (nowadays represented by structuralism, symbolic interactionism, or ethnomethodology), which reduces social exchanges to phenomena of communication and ignores the brutal fact of universal reducibility to economics.

[I must comment that after years in an academic environment where sincere intellectual effort seemed effectively boobytrapped by disciplinary trip wires around ethnomethodology, quantification, and so on, this Bourdieusian perspective continues to provide me fresh hope. I’ve written here before about Bourdieu’s Science of Science and Reflexivity (2004), which was a wake up call for me that led to my writing this paper. That has been my main entrypoint into Bourdieu’s thought until now. The essay I’m quoting from now was published at least fifteen years prior and by its 34k citations appears to be a classic. Much of what’s written here will no doubt come across as obvious to the sophisticated reader. It is a symptom of a perhaps haphazard education that leads me to write about it now as if I’ve discovered it; indeed, the personal discovery is genuine for me, and though it is not a particularly old work, reading it and thinking it over carefully does untangle some of the knots in my thinking as I try to understand society and my role in it. Perhaps some of that relief can be shared through writing here.]

Naturally, Bourdieu’s account of capital is more nuanced and harder to measure than an economist’s. But it does not preclude an analysis of economic capital such as Piketty’s. Indeed, much of the economist’s discussion of human capital, especially technological skill, and its relationship to wages can be mapped to a discussion of a specific form of cultural capital and how it can be converted into economic capital. A helpful aspect of this shift is that it allows one to conceptualize the effects of class, gender, and racial privilege in the transmission of technical skills. Cultural capital is, explicitly in Bourdieu’s account, labor-intensive to transmit, and the transmission is often done informally. Cultural tendencies to transmit this kind of capital preferentially to men instead of women in the family home become a viable explanation for the gender gap in the tech industry. While this is perhaps not a novel explanation, it is a significant one, and Bourdieu’s theory helps us formulate it in a specific and testable way that transcends, as he says, both economism and semiologism, which seems productive when one is discussing society in a serious way.

One could also use a Bourdieusian framework to understand innovation spillover effects, as economists like to discuss, or the rise of Silicon Valley’s “Regional Advantage” (Saxenian, 1996), to take a specific case. One of Saxenian’s arguments (as I gloss it) is that Silicon Valley was more economically effective as a region than Route 128 in Massachusetts because the influx of engineers experimenting with new business models and reinvesting their profits into other new technology industries created a confluence of relevant cultural capital (technical skill) and economic capital (venture capital) that allowed the economic capital to be deployed more effectively. In other words, it wasn’t that the engineers in Silicon Valley were better engineers than the engineers in Route 128; it was that in Silicon Valley the economic capital was being deployed in a way that was better informed by technical knowledge. [Incidentally, if this argument is correct, then in some ways it undermines an argument put forward recently for setting up a “cyber workforce incubator” for the Federal Government in the Bay Area based on the idea that it’s necessary to tap into the labor pool there. If what makes Silicon Valley is smart capital rather than smart engineers, then that explains why there are so many engineers there (they are following the money) but also suggests that the price of technical labor there may be inflated. Engineers elsewhere may be just as good at being part of a cyber workforce. Which is just to say that when Bourdieusian theory is taken seriously, it can have practical policy implications.]

One must imagine, when considering society thus, that one could in principle map out the whole of society and the distribution of capitals within it. I believe Bourdieu does something like this in Distinction (1979), which I haven’t read–it is sadly referred to in the United States as the kind of book that is too dense to read. This is too bad.

But I was going to talk about…

2. Democracy

There are at least two great moments in history when democracy flourished. They have something in common.

One is Ancient Greece. The account of the polis in Hannah Arendt’s The Human Condition (1; cf. 2, 3) makes the familiar point that the citizens of the Ancient Greek city-state were masters of economically independent households. It was precisely the independence of politics (polis – city) from household economic affairs (oikos – house) that defined political life. Owning capital, in this case land and maybe slaves, was a condition for democratic participation. The democracy, such as it was, was the political unity of otherwise free capital holders.

The other historical moment is the rise of the mercantile class and the emergence of the democratic public sphere, as detailed by Habermas. If the public sphere Habermas described (and to some extent idealized) has been critiqued as being “bourgeois masculinist” (Fraser), that critique is telling. The bourgeoisie were precisely those who were owners of newly activated forms of economic capital–ships, mechanizing technologies, and the like.

If we look at the public sphere in its original form realistically, through the disillusionment of criticism, rational discourse among capital holders was strategically necessary for the bourgeoisie to decide collectively how to allocate their economic capital. Viewed through the objective lens of information processing and pure strategy, the public sphere was an effective means of economic coordination that complemented the rise of the Weberian bureaucracy, which provided a predictable state and also created new demand for legal professionals and the early information workers: clerks and scriveners and such.

The diversity of professions necessary for the functioning of the modern mercantile state created a diversity of forms of cultural capital that could be exchanged for economic capital. Hence, capital diffused from its concentration in the aristocracy into the hands of the widening class of the bourgeoisie.

Neither the Ancient Greek nor the mercantile democracies were particularly inclusive. Perhaps there is no historical precedent for a fully inclusive democracy. Rather, there is precedent for egalitarian alliances of capital holders in cases where that capital is broadly enough distributed to constitute citizenship as an economic class. Moreover, I must insert here that the Bourdieusian model suggests that citizenship could extend through the diffusion of non-economic forms of capital as well. For example, membership in the clergy was a form of capital taken on by some of the gentry; this came, presumably, with symbolic and social capital. The public sphere created opportunities for the public socialite that were distinct from those of the courtier or courtesan. And so on.

However exclusive these democracies were, Fraser’s account of subaltern publics and counterpublics is of course very significant. What about the early workers’ and women’s movements? Arguably these too can be understood in Bourdieusian terms. There were other forms of (social and cultural, if not economic) capital that workers and women in particular had available that provided the basis for their shared political interest and political participation.

What I’m suggesting is that:

  • Historically, the democratic impulse has been about uniting the interests of freeholders of capital.
  • A Bourdieusian understanding of capital allows us to maintain this (analytically helpful) understanding of democracy while also acknowledging the complexity of social structure, through the many forms of capital.
  • The complexity of society, through the proliferation of forms of capital, is one of the main mechanisms, if not the main one, for expanding effective citizenship, which is still conditioned on capital ownership even though we like to pretend it’s not.

Which leads me to my last point, which is about…

3. Oligarchy

If a democracy is a political unity of many different capital holders, what then is oligarchy in contrast?

Oligarchy is rule of the few, especially the rich few.

We know, through Bourdieu, that there are many ways to be rich (not just economic ways). Nevertheless, capital (in its many forms) is very unevenly distributed, which accounts for social structure.

To some extent, it is unrealistic to expect the flattening of this distribution. Society is accumulated history; there has been a lot of history, and most of it has been brutally unkind.

However, there have been times when capital (in its many forms) has diffused because of the terms of capital exchange, broadly speaking. The functional separation of different professions was one way in which capital was fragmented into many differently exchangeable forms of cultural, social, and economic capitals. A more complex society is therefore a more democratic one, because of the diversity of forms of capital required to manage it. [I suspect there’s a technically specific way to make this point but don’t know how to do it yet.]

There are some consequences of this.

  1. Inequality in the sense of a very skewed distribution of capital and especially economic capital does in fact undermine democracy. You can’t really be a citizen unless you have enough capital to be able to act (use your labor) in ways that are not fully determined by economic survival. And of course this is not all or nothing; quantity of capital and relative capital do matter even beyond a minimum threshold.
  2. The second is that (1) can’t be the end of the story. Rather, to judge if the capital distribution of e.g. a nation can sustain a democracy, you need to account for many kinds of capital, not just economic capital, and see how these are distributed and exchanged. In other words, it’s necessary to look at the political economy, broadly speaking. (But, I think, it’s helpful to do so in terms of ‘forms of capital’.)

One example, which I learned just recently, is this. In the United States, we have an independent judiciary, a third branch of government. This is different from allegedly oligarchic polities, notably Russia, but also Rhode Island before 2004. One could ask: is this separation of powers important for democracy? The answer is intuitively “yes”, and though I’m sure very smart things have been written to answer the question “why”, I haven’t read them, because I’ve been too busy blogging…

Instead, I have an answer for you based on the preceding argument, and it was a new idea for me. It is this: what separation of powers does is construct a form of cultural capital, associated with professional lawyers, which is less exchangeable for economic and other forms of capital than in places where a non-independent judiciary leads to more regular bribery, graft, and preferential treatment. Because it mediates economic exchanges, this has a massively distorting effect on the ability of economic capital to bulldoze other forms of capital, and the accompanying social structures (and social strictures) that bind it. It also creates a new professional class who can own this kind of capital and thereby accomplish citizenship.

Coda

In this blog post, I’ve suggested that not everybody who, for example, legally has suffrage in a nominally democratic state is, in an effective sense, a citizen. Only capital owners can be citizens.

This is not intended in any way to be a normative statement about who should or should not be a citizen. Rather, it is a descriptive statement about how power is distributed in nominal democracies. To be an effective citizen, you need to have some kind of surplus of social power; capital is the objectification of that social power.

The project of expanding democracy, if it is to be taken seriously, needs to be understood as the project of expanding capital ownership. This can include the redistribution of economic capital. It can also mean changing institutions that ground cultural and social capitals in ways that distribute those forms of capital more widely. Diversifying professional roles is a way of doing this.

Nothing I’ve written here is groundbreaking, for sure. It is for me a clearer way to think about these issues than I have had before.


by Sebastian Benthall at July 06, 2017 09:08 PM

July 05, 2017

Ph.D. alumna

Tech Culture Can Change

We need: Recognition, Repentance, Respect, and Reparation.

To be honest, what surprises me most about the current conversation about the inhospitable nature of tech for women is that people are surprised. To say that discrimination, harassment, and sexual innuendos are an open secret is an understatement. I don’t know a woman in tech who doesn’t have war stories. Yet, for whatever reason, we are now in a moment where people are paying attention. And for that, I am grateful.

Like many women in tech, I’ve developed strategies for coping. I’ve had to in order to stay in the field. I’ve tried to be “one of the guys,” pretending to blend into the background as sexist speech was jockeyed about in the hopes that I could just fit in. I’ve tried to be the kid sister, the freaky weirdo, the asexual geek, etc. I’ve even tried to use my sexuality to my advantage in the hopes that maybe I could recover some of the lost opportunity that I faced by being a woman. It took me years to realize that none of these strategies would make me feel like I belonged. Many even made me feel worse.

For years, I included Ani DiFranco lyrics in every snippet of code I wrote, as well as my signature. I’ve maintained a lyrics site since I was 18 because her words give me strength for coping with the onslaught of commentary and gross behavior. “Self-preservation is a full-time occupation.” I can’t tell you how often I’ve sat in a car during a conference or after a meeting singing along off-key at full volume with tears streaming down my face, just trying to keep my head together.

What’s at stake is not about a few bad actors. There’s also a range of behaviors getting lumped together, resulting in folks asking if inescapable sexual overtures are really that bad compared to assault. That’s an unproductive conversation because the fundamental problem is the normalization of atrocious behavior that makes room for a wide range of inappropriate actions. Fundamentally, the problem with systemic sexism is that it’s not the individual people who are the problem. It’s the culture. And navigating the culture is exhausting and disheartening. It’s the collection of particles of sand that quickly becomes a mountain that threatens to bury you.

It’s having to constantly stomach sexist comments with a smile, having to work twice as hard to be heard in a meeting, having to respond to people who ask if you’re on the panel because they needed a woman. It’s about going to conferences where deals are made in the sauna but being told that you have to go to the sauna with “the wives” (a pejoratively constructed use of the word). It’s about people assuming you’re sleeping with whoever said something nice about you. It’s being told “you’re kinda smart for a chick” when you volunteer to help a founder. It’s knowing that you’ll receive sexualized threats for commenting on certain topics as a blogger. It’s giving a talk at a conference and being objectified by the audience. It’s building whisper campaigns among women to indicate which guys to avoid. It’s using Dodgeball/Foursquare to know which parties not to attend based on who has checked in. It’s losing friends because you won’t work with a founder who you watched molest a woman at a party (and then watching Justin Timberlake portray that founder’s behavior as entertainment).

Lots of people in tech have said completely inappropriate things to women. I also recognize that many of those guys are trying to fit into the sexist norms of tech too, trying to replicate the culture that they see around them because they too are struggling for status. But that’s the problem. Once guys receive power and status within the sector, they don’t drop their inappropriate language. They don’t change their behavior or call out others on how insidious it is. They let the same dynamics fester as though it’s just part of the hazing ritual.

For women who succeed in tech, the barrage of sexism remains. It just changes shape as we get older.

On Friday night, after reading the NYTimes article on tech industry harassment, I was deeply sad. Not because the stories were shocking — frankly, those incidents are minor compared to some of what I’ve seen. I was upset because stories like this typically polarize and prompt efforts to focus on individuals rather than the culture. There’s an assumption that these are one-off incidents. They’re not.

I appreciate that Dave and Chris owned up to their role in contributing to a hostile culture. I know that it’s painful to hear that something you said or did hurt someone else when you didn’t intend that to be the case. I hope that they’re going through a tremendous amount of soul-searching and self-reflection. I appreciate Chris’ willingness to take to Medium to effectively say “I screwed up.” Ideally, they will both come out of this willing to make amends and right their wrongs.

Unfortunately, most people don’t actually respond productively when they’re called out. Shaming can often backfire.

One of the reasons that most people don’t speak up is that it’s far more common for guys who are called out on their misdeeds to respond the way that Marc Canter appeared to do, by justifying his behavior and demonizing the woman who accused him of sexualizing her. Given my own experiences with his sexist commentary, I decided to tweet out in solidarity by publicly sharing how he repeatedly asked me for a threesome with his wife early on in my career. At the time, I was young and I was genuinely scared of him; I spent a lot of time and emotional energy avoiding him, and struggled with how to navigate him at various conferences. I wasn’t the only one who faced his lewd comments, often framed as being sex-positive even when they were an abuse of power. My guess is that Marc has no idea how many women he’s made feel uncomfortable, ashamed, and scared. The question is whether or not he will admit that to himself, let alone to others.

I’m not interested in calling people out for sadistic pleasure. I want to see the change that most women in tech long for. At its core, the tech industry is idealistic and dreamy, imagining innovations that could change the world. Yet, when it comes to self-reflexivity, tech is just as regressive as many other male-dominated sectors. Still, I fully admit that I hold it to a higher standard in no small part because of the widespread commitment in tech to change the world for the better, however flawed that fantastical idealism is.

Given this, what I want from men in tech boils down to four Rs: Recognition. Repentance. Respect. Reparation.

Recognition. I want to see everyone — men and women — recognize how contributing to a culture of sexism takes us down an unhealthy path, not only making tech inhospitable for women but also undermining the quality of innovation and enabling the creation of tech that does societal harm. I want men in particular to reflect on how the small things that they do and say that they self-narrate as part of the game can do real and lasting harm, regardless of what they intended or what status level they have within the sector. I want those who witness the misdeeds of others to understand that they’re contributing to the problem.

Repentance. I want guys in tech — and especially those founders and funders who hold the keys to others’ opportunity — to take a moment and think about those that they’ve hurt in their path to success and actively, intentionally, and voluntarily apologize and ask for forgiveness. I want them to reach out to someone they said something inappropriate to, someone whose life they made difficult and say “I’m sorry.”

Respect. I want to see a culture of respect actively nurtured and encouraged alongside a culture of competition. Respect requires acknowledging others’ struggles, appreciating each others’ strengths and weaknesses, and helping each other through hard times. Many of the old-timers in tech are nervous that tech culture is being subsumed by financialization. Part of resisting this transformation is putting respect front and center. Long-term success requires thinking holistically about society, not just focusing on current capitalization.

Reparation. Every guy out there who wants to see tech thrive owes it to the field to actively seek out and mentor, support, fund, open doors for, and otherwise empower women and people of color. No excuses, no self-justifications, no sexualized bullshit. Just behavior change. Plain and simple. If our sector is about placing bets, let’s bet on a better world. And let’s solve for social equity.

I have a lot of respect for the women who are telling their stories, but we owe it to them to listen to the culture that they’re describing. Sadly, there are so many more stories that are not yet told. I realize that these stories are more powerful when people are named. My only hope is that those who are risking the backlash to name names will not suffer for doing so. Ideally, those who are named will not try to self-justify but acknowledge and accept that they’ve caused pain. I strongly believe that changing the norms is the only path forward. So while I want to see people held accountable, I especially want to see the industry work towards encouraging and supporting behavior change. At the end of the day, we will not solve the systemic culture of sexism by trying to weed out bad people, but we can work towards rendering bad behavior permanently unacceptable.

by zephoria at July 05, 2017 07:55 PM

June 26, 2017

Ph.D. student

Framing Future Drone Privacy Concerns through Amazon’s Concept Videos

This blog post is a version of a talk that I gave at the 2016 4S conference and describes work that has since been published in an article in The Journal of Human-Robot Interaction co-authored with Deirdre Mulligan entitled “These Aren’t the Autonomous Drones You’re Looking for: Investigating Privacy Concerns Through Concept Videos.” (2016). [Read online/Download PDF]

Today I’ll discuss an analysis of 2 of Amazon’s concept videos depicting their future autonomous drone service, how they frame privacy issues, and how these videos can be viewed in conversation with privacy laws and regulation.

As a privacy researcher with a human-computer interaction background, I’ve become increasingly interested in how processes of imagination about emerging technologies contribute to narratives about the privacy implications of those technologies. Today I’m discussing some thoughts emerging from a project looking at Amazon’s drone delivery service. In 2013, Amazon – the online retailer – announced Prime Air, a drone-based package delivery service. When they made their announcement, the actual product was not ready for public launch – and it’s still not available as of today. But what’s interesting is that at the time the announcement was made, Amazon also released a video that showed what the world might look like with this service of automated drones. And they released a second similar video in 2015. We call these videos concept videos.

These videos are one way that companies are strategically framing emerging technologies – what they will do, where, for whom, by what means; they’re beginning to associate values and narratives with these technologies. To surface values and narratives related to privacy present in these videos, we did a close reading of Amazon’s videos.

We’re generally interested in the time period after a technology is announced but before it is publicly released. During this period, most people only interact with these technologies through fictional representations of the future–in videos, advertisements, media, and so on. Looking at products during this period helps us understand the role that these videos play in framing technologies so that they become associated with certain values and narratives around privacy.

Background: Concept Videos & Design Fiction

Now creating representations of future concepts and products has a long history – including concept cars, or videos or dioramas of future technologies. Concept videos in particular as we’re conceptualizing them are short videos created by a company, showing a device or product that is not yet available for public purchase, though it might be in the short-term future. Concept videos depict what the world might be like in a few years if that device or product exists, and how people might interact with it or use it – we’ve written about this in some prior work looking at concept videos for augmented reality products.

When we are looking at the videos, we are primarily using the lens of design fiction, a concept from design researchers. Design fictions often show future scenarios, but more importantly, artifacts presented through design fiction exist within a narrative world, story, or fictional reality so that we can confront and think about artifacts in relation to a social and cultural environment. By creating fictional worlds and yet-to-be-realized design concepts, it tries to understand possible alternative futures. Design fictions also interact with broader social discourses outside the fiction. If we place corporate concept videos as design fictions, it suggests that the videos are open to interpretation and that such videos are best considered in dialogue with broader social discourses – for example those about privacy.

Yet we also have to recognize the corporate source of the videos.  The concept videos also share qualities with “vision videos,” corporate research videos that show a company’s research vision. Concept videos also contain elements of corporate advertising.

And they contain elements of video prototyping, which often shows short use scenarios of a technology, or simulates its use, although these prototypes are usually used internally within a company. In contrast, concept videos are public-facing artifacts.

Analyzing Amazon’s Concept Videos

Amazon released 2 concept videos – one in 2013, and a second at the end of 2015 – so we can track changes in the way they frame their service. We did a close reading of the Amazon drone videos to understand how they frame and address privacy concerns.

Below is Amazon’s first 2013 video, and let’s pay attention to how the physical drone looks, and how the video depicts its flying patterns.

So the drone has 8 rotors, is black and looks roughly like other commercially available hobbyist drones that might hold camera equipment. It then delivers the package flying from the Amazon warehouse to the recipient’s house where it’s able to land on its own.

Below is Amazon’s second 2015 video, so this time let’s pay attention again to how the physical drone looks and how the video depicts its flying patterns which we can compare against the first one.

This video’s presentation is a little more flashy – and narrated by Top Gear host Jeremy Clarkson. You might also have noticed that the word “privacy” is never used in either video. Yet several changes between the videos focusing on how the physical drone looks and its flying patterns can be read as efforts by Amazon to conceptualize and address privacy concerns.

Amazon drone 1

The depiction of the drone’s physical design changes

First off, the physical design of the drone changed in shape and color. The first video shows a generic black 8-rotor drone, whereas the second video has a more square-shaped drone with a design unique to Amazon and bright, bold Amazon branding. This addresses a potential privacy concern – that people may be uncomfortable if they see an unmarked drone near them, because they don’t know what it’s doing or who it belongs to. It might conjure questions such as “Is it the neighbor taking pictures?” or “Who is spying on me?” The unique design and color palette in the later video provide a form of transparency, clearly identifying who the drone belongs to and what its purpose is.

Amazon vertical takeoff

The 2015 video depicts a vertical takeoff as the drone’s first flight phase

The second part is about its flying patterns. The first video just sort of shows the drone fly from the warehouse to the user’s house. The second video breaks this down into 3 distinct flying phases. First is a vertical helicopter-like takeoff mode, the narrator describing it flying straight up to 400 feet, suggesting the drone will be high enough to not surveil or look closely at people, nor will it fly over people’s homes when it’s taking off.

Amazon horizontal flight

In the 2015 video, the drone enters a second horizontal flight phase

The second is a horizontal flight mode, which the narrator compares to an airplane. The airplane metaphor downplays surveillance concerns – most people aren’t concerned about people in an airplane watching them in their backyards or in public space. The “drone’s-eye-view” camera in this part of the video reinforces the airplane metaphor – it only shows a horizontal view from the drone like you would out of a plane, as if suggesting the drone only sees straight ahead while it flies horizontal, and isn’t capturing video or data about people directly below it.

Amazon landing

The 2015 video depicts a third landing phase

The third is the vertical landing phase, in which the drone’s-eye-view camera switches to look directly down. But the video only shows the house and property of the package recipient within the camera frame – suggesting that the drone visually scans only the recipient’s property, not adjacent properties, and only uses its downward-facing camera in vertical mode. Together these parts of the video try to frame Amazon’s drones as using cameras in a way consistent with privacy expectations.

Policy Considerations

Beyond differences between the two videos’ framing, it’s interesting to consider the policy discourse occurring when these videos were released. In between the two videos, the US Federal Aviation Administration issued draft regulations about unmanned aerial vehicles, including stipulations that they could fly at a maximum of 400 feet. Through 2014 and 2015 a number of US State laws were passed addressing privacy, citing drones trespassing the air space over private property as a privacy harm. Other policy organizations have noted the need for transparency about who is operating a drone to enable individuals to protect themselves from potential privacy invasions.

We can also think of these videos as a policy fiction. The technology shown in the videos exists, but the story they tell is not a legal reality. The main thing preventing this service is that the Federal Aviation Administration currently requires a human operator within the line of sight of these types of drones.

In this light, we can read the shift in Amazon’s framing of their delivery service as something more than just updates to their design – it’s also a response to particular types of privacy concerns raised in the ongoing policy discourse around drones, and perhaps they are trying to create a sense of goodwill over privacy issues, so that the regulations can be changed in a way that allows the full service to operate. This suggests that corporate framing through concept videos is not necessarily static, but can shift and evolve throughout the design process in conversation with multiple stakeholders as an ongoing negotiation. Amazon uses these videos to frame the technology for multiple audiences – potential customers, as well as legislators and regulators whose concerns they are acknowledging.

Concluding Thoughts

A few ideas have emerged from the project. First, we think that close readings of concept videos are a useful way to surface how companies frame privacy values in relation to their products. They provide some insight into the strategies companies use to frame their products for multiple stakeholder groups (like consumers and regulators here) – and show that this process of strategic framing is an ongoing negotiation.

Second, these videos present one particular vision of the future. But they may also present  opportunities to keep the future more open by contesting the corporate vision or creating alternative futures.  We as researchers can ask what videos don’t show – technical details about how the drone works, what data it collects, how does it work in an urban setting? Stakeholders can also put forth alternate futures – such as parody concept videos (indeed there have been parody concept videos presenting alternate views of the future – people shooting down drones, stolen and dropped packages, Amazon making you buy package insurance, making it only available for expensive items, that drones will use cameras to spy on people, drone delivery in a bathroom, and reimagining it as a Netflix DVD delivery service).

Third, we think that there may be some potential in using concept videos as a more explicit type of communication tool between companies and regulators and are looking for ways we might explore that in the future.


by Richmond at June 26, 2017 04:10 PM

June 15, 2017

Ph.D. student

Using design fiction and science fiction to interrogate privacy in sensing technologies

This post is a version of a talk I gave at DIS 2017 based on my paper with Ellen Van Wyk and James Pierce, Real-Fictional Entanglements: Using Science Fiction and Design Fiction to Interrogate Sensing Technologies in which we used a science fiction novel as the starting point for creating a set of design fictions to explore issues around privacy.  Find out more on our project page, or download the paper: [PDF link ] [ACM link]

Many emerging and proposed sensing technologies raise questions about privacy and surveillance. For instance new wireless smarthome security cameras sound cool… until we’re using them to watch a little girl in her bedroom getting ready for school, which feels creepy, like in the tweet below.

Or consider the US Department of Homeland Security’s imagined future security system. Starting around 2007, they were trying to predict criminal behavior, pre-crime, like in Minority Report. They planned to use thermal sensing, computer vision, eye tracking, gait sensing, and other physiological signals. And supposedly it would “avoid all privacy issues.”  And it’s pretty clear that privacy was not adequately addressed in this project, as found in an investigation by EPIC.


Image from publicintelligence.net. Note the middle bullet point in the middle column – “avoids all privacy issues.”

A lot of these types of products or ideas are proposed or publicly released – but somehow it seems like privacy hasn’t been adequately thought through beforehand. However, parallel to this, we see works of science fiction which often imagine social changes and effects related to technological change – and do so in situational, contextual, rich world-building ways. This led us to our starting hunch for our work:

perhaps we can leverage science fiction, through design fiction, to help us think through the values at stake in new and emerging technologies.

Designing for provocation and reflection might allow us to do a similar type of work through design that science fiction often does.

So we created a set of visual design fictions, inspired by a set of fictional technologies from the 2013 science fiction novel The Circle by Dave Eggers to explore privacy issues in emerging sensing technologies. By doing this we tap into an author’s already existing, richly imagined world, rather than creating our own imagined world from scratch.

Design Fiction and our Design Process

Our work builds on past connections drawn among fiction, design, research, and public imagination, specifically, design fiction. Design fiction has been described as an authorial practice between science fiction and science fact and as diegetic prototypes. In other words, artifacts created through design fiction help imply or create a narrative world, or fictional reality, in which they exist. By creating yet-to-be-realized design concepts, design fiction tries to understand possible alternative futures. (Here we draw on work by Bleecker, Kirby, Linehan et al, and Lindley & Coulton).

In the design research and HCI communities, design fiction has been used in predominantly 1 of 2 ways. One focuses on creating design fiction artifacts in formats such as textual ones, visual, video, and other materials. A second way uses design fiction as an analytical lens to understand fictional worlds created by others – including films, practices, and advertisements – although perhaps most relevant to us are Tanenbaum et al’s analysis of the film Mad Max: Fury Road or Lindley et al’s analysis of the film Her as design fictions.

In our work, we combine these 2 ways of using design fiction: We think about the fictional technologies introduced by Eggers in his novel using the lens of design fiction, and we used those to create our own new design fictions.

Obviously there’s a long history of science fiction in film, television, and literature. There’s a lot that we could have used to inspire our designs, but The Circle was interesting to us for a few reasons.

Debates about literary quality aside, as a mass market book, it presents an opportunity to look at a contemporary and popular depiction of sensing technologies. It reflects timely concerns about privacy and increasing data collection. The book and its fictional universe is accessible to a broad audience – it was a New York Times bestseller and a movie adaptation was released in May 2017. (While we knew that a film version was in production when we did our design work, we created our designs before the film was released).

The novel is set in a near future and focuses on a powerful tech company called The Circle, which keeps introducing new sensing products that supposedly provide greater user value, but to the reader, they seem increasingly invasive of privacy. The novel utilizes a dark humor to portray this, satirizing the rhetoric and culture of today’s technology companies.

It’s set in a near future that’s still very much recognizable – it starts to blur boundaries between fiction and reality in a way that we wanted to explore using design fiction. We used Gaver’s design workbook technique to generate a set of design fictions through several iterative rounds of design, several of which are discussed in this post.

workbook pages

We made a lot of designs – excerpts from our design workbook can be found on our project page

Our set of design fictions draws from 3 technologies from the novel, and 1 non-fictional technology that is being developed but seems like it could fit in the world of The Circle, again playing with this idea blurring fiction and reality. We’ll discuss 2 of them in this post, both of which originate from the Eggers novel (no major plot points are spoiled here!).

The first is SeeChange, which is the most prominent technology in the novel. It’s a wireless camera, about the size of a lollipop. It can record and stream live HD video online, and these live streams can be shared with anyone. Its battery lasts for years, it can be used indoors or outdoors, and it can be mounted discreetly or worn on the body. It’s introduced to monitor conditions at outdoor sporting locations, or to monitor spaces to prevent crimes. Later, it’s worn by characters who share their lives through a constant live personal video stream.

The second is ChildTrack, which is part of an ongoing project at the company. It’s a small chip implanted into the bone of a child’s body, allowing parents to monitor their child’s location at all times for safety. Later in the story it’s suggested that these chips can also store a child’s educational records, homework, reading, attendance, and test scores so that parents can access all their child’s information in “one place”.

Adapting The Circle

We’re going to look at some different designs that we created that are variations on SeeChange and ChildTrack. Some designs may seem more real or plausible, while others may seem more fictional. Sometimes the technologies might seem fictional, while other times the values and social norms expressed might seem fictional. These are all things that we were interested in exploring through our design process.

beach

SeeChange Beach

For us, a natural starting point was that the novel doesn’t have any illustrations. So we started by going through the book’s descriptions of SeeChange and tried to interpret the first scene in which it appears. In this scene, a company executive demos SeeChange by showing an audience live images of several beaches from hidden cameras, ostensibly to monitor surfing conditions. Our collage of images felt surprisingly believable after we made it, and slightly creepy as it put us in the position of feeling like we were surveilling people at the beach.

childtrack

ChildTrack App Interface

We did the same thing for ChildTrack, looking at how it was described in the book and then imagining what the interface might look like. We wanted to use the perspective of a parent using an app looking at their child’s data, portraying parental surveillance of one’s child as a type of care or moral responsibility.

The Circle in New Contexts

With our approach, we wanted to use The Circle as a starting point to think through a series of privacy and surveillance concerns. After our initial set of designs we began thinking about how the same set of technologies might be used in other situations within the world of the novel, but not depicted in Eggers’ story; and how that might lead to new types of privacy concerns. We did this by creating variations on our initial set of designs.

amazon

SeeChange being “sold” on Amazon.com

From other research, we know that privacy is experienced differently based on one’s subject position. We wanted to think about how much of SeeChange’s surveillance concerns stem from its technical capabilities versus who uses it or who gets recorded. We made 3 Amazon.com pages to market SeeChange as three different products, targeting different groups. We were inspired by debates about police-citizen interactions in the U.S. and imagined SeeChange as a live-streaming police body camera. Like Eggers’ book, we satirize the rhetoric of technological determinism, writing that cameras provide “objective” evidence of wrongdoing – we obviously know that cameras aren’t objective. We also leave ambiguity about whether the police officer or the citizen is doing the wrongdoing. Thinking about using cameras for activist purposes – like how PETA uses undercover cameras, or how documentarians sometimes use hidden cameras – we frame SeeChange as a small, hidden, wearable camera for activist groups. Inspired by political debates in the U.S., we thought about how people who are suspicious of the Federal Government might want to monitor political opponents, so we also market SeeChange as a camera “For Independence, Freedom, and Survival” for this audience. Some of these framings seem more worrisome when thinking about who gets to use the camera, while others seem more worrisome when thinking about who gets recorded by the camera.

seechange angles

Ubiquitous SeeChange cameras from many angles. Image © Ellen Van Wyk, used with permission.

We also thought about longer-term effects within the world of The Circle. What might it be like once SeeChange cameras become ubiquitous, always recording and broadcasting? It could be nice to be able to re-watch crime scenes from multiple angles. But it might be creepy to use those many angles to watch a person doing daily activities, which we depict here as a person sitting in a conference room using his computer. The bottom picture, looking between the blinds and simulating a small camera attached to the window, is particularly creepy to me – and suggests capabilities that go beyond today’s closed-circuit TV cameras.

New Fictions and New Realities

After our second round of designs, we began thinking about privacy concerns that were not particularly present in the novel or our existing designs. The following designs, while inspired by the novel, are imagined to exist in worlds beyond The Circle’s.

law enforcement

User interface of an advanced location-tracking system. Image © Ellen Van Wyk, used with permission.

The Circle never really discusses government surveillance, which we think is important to consider. All the surveillance in the book is done by private companies or by individuals. So, we created a scenario putting SeeChange in the hands of the police or government intelligence agencies, to track people and vehicles. Here, SeeChange might overcome some of the barriers that provide privacy protection for us today: police could easily use the search bar to find anybody’s location history without a warrant or any oversight – suggesting a new social or legal reality.

truwork

Truwork – “An integrated solution for your office or workplace!”

 Similarly, we wanted to think about issues of workplace surveillance. Here’s a scenario advertising a workplace implantable tracking device. Employers can subscribe to the service and make their employees implant these devices to keep track of their whereabouts and work activities to improve efficiency.

In a fascinating twist, a version of this actually occurred at a Swedish company about 6 months after we did our design work, where employees are inserting RFID chips into their hands to open doors, make payments, and so forth.

childtrack advertisers

Childtrack for advertisers infographic. Image © Ellen Van Wyk, used with permission.

The Circle never discusses data sharing with 3rd parties like advertisers, so we imagined a service built on top of ChildTrack aimed at advertisers to leverage all the data collected about a child to target them with advertisements. This represents a legal fiction, as it would likely be illegal to do this in the US and EU under various child data protection laws and regulations.

This third round of designs interrogates the relationship between privacy and personal data from the viewpoints of different stakeholders.

Reflections

After creating these designs, we have a series of reflections that fall broadly into 4 areas, though I’ll mention 2 of them here.

Analyzing Privacy Utilizing External Frameworks

Given our interest in the privacy implications of these technologies, we looked to privacy research before starting our design work. Contemporary approaches to privacy view it as contextual, dependent on one’s subject position and on specific situations. Mulligan et al suggest that rather than trying to define privacy, it’s more productive to map how various aspects of privacy are represented in particular situations along 5 dimensions.

After each round of our design iterations, we analyzed the designs through this framework. This allowed us to have a way to map how broadly we were exploring our problem space. For instance, in our first round of designs we stayed really close to the novel. The privacy harms that occurred with SeeChange were caused by other individual consumers using the cameras, and the harms that occurred with ChildTrack were parents violating their kids’ privacy. In the later designs that we created, we went beyond the ways that Eggers discussed privacy harms by looking at harms stemming from 3rd party data sharing, or government surveillance.

This suggests that design fictions can be designed for, and analyzed using, frameworks for specific empirical topics (such as privacy) as a way to reflect on how we’re exploring a design space.

Blurring Real and Fictional

The second reflection we have is about how our design fictions blurred the real and fictional. After viewing the images, you might be slightly confused about what’s real and what’s fictional – and that is a boundary and a tension that we tried to explore though these designs. And after creating our designs we were surprised to find how some products we had imagined as fiction were close to being realized as “real” (such as the news about Swedish workers getting implanted chips – or Samsung’s new Gear 360 camera looking very much like our lollipop-inspired image of SeeChange). Rather than trying to draw boundaries between real and fictional, we find it useful to blur those boundaries, to recognize the real and fictional as inherently entangled and co-constructed. This lets us draw a myriad of connections that might let us see these technologies and designs in a new light. SeeChange isn’t just a camera in Eggers’ novel, but it’s related to established products like GoPro cameras; to experimental ideas like Google Glass; linked to cameras in other fiction like Orwell’s 1984; and linked to current sociopolitical debates like the role of cameras in policing and surveillance in public spaces. We can use fictional technical capabilities, fictional legal worlds, or social worlds to explore and reflect on how privacy is situated both in the present and how it might be in the future.

Conclusions

In summary, we created a set of design fictions inspired by the novel The Circle that engaged in the blurring of real and fictional to explore and analyze privacy implications of emerging sensing technologies.

Perhaps more pragmatically, we find that tapping into an author’s existing fictional universe provides a concrete starting point to begin design fiction explorations, so that we do not have to create a fictional world from scratch.

Find out more on our project page, or download the paper: [PDF link ] [ACM link]


by Richmond at June 15, 2017 09:02 PM

MIMS 2012

How to Say No to Your CEO Without Saying No

Shortly after I rolled out Optimizely’s Discovery kanban process last year, one of its benefits became immediately obvious: using it as a tool to say No.

This is best illustrated with a story. One day, I was at my desk, minding my own business 🙃, when our CEO came up to me and asked, “Hey, is there a designer who could work on <insert special CEO pet project>?” In my head, I knew it wasn’t a priority. Telling him that directly, though, would have led to us arguing over why we thought the project was or was not important, without grounding the argument in the reality of whether it was higher priority than current work-in-progress. And since he’s the CEO, I would have lost that argument.

So instead of doing that, I took him to our Discovery kanban board and said, “Let’s review what each person is doing and see if there’s anything we should stop doing to work on your project.” I pointed to each card on the board and said why we were doing it: “We’re doing this to reach company goal X… that’s important for customer Y,” and so on.

Optimizely’s Discovery kanban board in action

When we got to the end of the board, he admitted, “Yeah, those are all the right things to be doing,” and walked away. I never heard about the project again. And just like that, I said No to our CEO without saying No.

by Jeff Zych at June 15, 2017 05:19 AM

June 04, 2017

MIMS 2014

Do Hangovers Make Us Drink More Coffee?


After finishing my last blog post, I grew curious about the relationship between coffee and another beverage I’ve noticed is quite popular amongst backpacker folk: alcohol. Are late-night ragers (and their accompanying brutal hangovers) associated with greater levels of coffee consumption? Or is the idea about as dumb as another Hangover sequel?

When you look at a simple scatter plot associating per capita alcohol and coffee consumption on a national level, you might think that yes, alcohol does fuel coffee consumption (based on the apparent positive correlation).

[Figure: per capita coffee consumption vs. alcohol consumption by country]

But does this apparent relationship hold up to closer scrutiny? In my last article, we discovered that variables like country wealth could explain away much of the observed variation in coffee consumption. Could the same thing be happening here as well? That is, do richer countries generally consume more coffee and alcohol just because they can afford to do so?

The answer seems to be: not as much as I would have thought. The thing to notice in the graphs above is how much less sensitive alcohol consumption is to income compared to coffee. In other words, it don’t matter how much money is in your wallet, you gonna get your alcohol on no matter what. But for coffee, things are different. This is evident from the shapes of the data clouds in the respective graphs. That bunch of data points in the top left of the alcohol graph? You don’t see a similar shape in the coffee chart. And that means that consumption of alcohol depends much less on income, relative to coffee.

It’s not too surprising that alcohol consumption is less sensitive to income changes than coffee, but it’s always cool to see intuition borne out in real-life data that you randomly pull of the internet. By looking at what economists call income elasticity of demand—the % change in consumption of a good divided by the % change in income—we can more thoroughly quantify what we’re seeing. Using a log-log model, standard linear regression can be used to get a rough estimate* of the income elasticity of demand. In these models, the beta coefficient on log(income) ends up being the elasticity estimate.

When the elasticity of a good is less than 1, it is considered an inelastic good, i.e. a good that is not very sensitive to changes in income. Inelastic goods are also sometimes referred to as necessity goods. By contrast, goods with elasticity greater than 1 are considered elastic goods, or luxury goods. Sure enough, when you fit a log-log model to the data, the estimated elasticity for coffee is greater than 1 (1.08), while the estimated elasticity for alcohol is less than 1 (0.54). Hmm, so in the end, alcohol is more of a ‘necessity’ than coffee. Perhaps this settles any debate over which beverage is more addictive.

When it comes to drinking however (either coffee or alcohol), one cannot ignore the role that culture plays in driving the consumption of both beverages. Perhaps cultures that drink a lot of one drink a lot of the other, too. Or perhaps a culture has a taboo against alcohol, like we find in predominantly Muslim countries. To control for this, I included region-of-the-world controls in my final model relating alcohol and coffee consumption.

Unfortunately for my initial hypothesis, once you account for culture, any statistically significant relationship between alcohol and coffee consumption vanishes. To be sure that controlling for culture in my model was the right call, I performed a nested model analysis—a statistical method that basically helps make sure you’re not over-complicating things. The nested model analysis concluded that yes, culture does add value to the overall model, so I can’t just ignore it.
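
For the statistically curious, a nested model comparison boils down to an F-test between a model with and without the extra terms. Here is a hedged sketch of that comparison; the exact specification and the 'region' column are my guesses at the setup, not necessarily the model used in the original analysis:

```python
# A sketch of a nested model comparison: does adding region-of-the-world
# controls significantly improve the model? The specification and column
# names are illustrative guesses, not the author's actual model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("beverages_by_country.csv")  # hypothetical merged dataset

restricted = smf.ols(
    "np.log(coffee_kg) ~ np.log(income) + np.log(alcohol_litres)", data=df).fit()
full = smf.ols(
    "np.log(coffee_kg) ~ np.log(income) + np.log(alcohol_litres) + C(region)",
    data=df).fit()

# F-test of the restricted model against the full one; a small p-value means
# the region (culture) controls add real explanatory value.
print(anova_lm(restricted, full))
```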

Echoing my last article, this is not the final word on the subject, as again, more granular data (at the individual level) could show a significant link between the two. Instead what this analysis says is that if a relationship does exist, it is not potent enough to show up in data at the national level. Oh well, it was worth a shot. Whether or not alcohol and coffee consumption are legit linked to one another, one fact remains indisputably true: hangovers suck.


 

Data Links:

  • Alcohol – World Health Organization – link
  • Economic Data – Conference Board – link
  • Coffee – Euromonitor (via The Atlantic) – link

* “rough” because ideally, you would look at changes in income in a single country to estimate elasticity rather than look at differences in income across different countries.

 

 


by dgreis at June 04, 2017 07:32 PM

May 22, 2017

Ph.D. student

hard realism about social structure

Sawyer’s (2000) investigations into the theory of downward causation of social structure are quite subtle. He points out several positions in the sociological debate about social structure:

  • Holists, who believe social structures have real, independent causal powers, sometimes exercised through internalization by individuals.
  • Subjectivists, who believe that social structures are epiphenomenal, reducible to individuals.
  • Interactionists, who see patterns of interaction as primary, not the agents or the structures that may produce the interactions.
  • Hybrid theorists, who see an interplay between social structure and independent individual agency.

I’m most interested at the moment in the holist, subjectivist, and hybrid positions. This is not because I don’t see interaction as essential–I do. But I think that recognizing that interactions are the medium if not material of social life does not solve the question of why social interactions seem to be structured the way they do. Or, more positively, the interactionist contributes to the discussion by opening up process theory and generative epistemology (cf. Cederman, 2005) as a way of getting at an answer to the question. It is up to us to take it from there.

The subjectivists, in positing only the observable individuals and their actions, have Occam’s Razor on their side. To posit the unobservable entities of social forms is to “multiply entities unnecessarily”. This perhaps accounts for the durability of the subjectivist thesis. The scientific burden of proof is, in a significant sense, on the holist or hybrid theorist to show why the positing of social forms and structures offers in explanatory power what it lacks in parsimony.

Another reason for the subjectivist position is that it does ideological work. Margaret Thatcher famously once said, “There is no such thing as society”, as a condemnation of the socialist government that she would dismantle in favor of free markets. Margaret Thatcher was highly influenced by Friedrich Hayek, who argued that free markets lead to more intelligent outcomes than planned economies because they are better at using local and distributed information in society. Whatever you think of the political consequences of his work, Hayek was an early theorist of society as a system of multiple agents with “bounded rationality”. A similar model to Hayek’s is developed and tested by Epstein and Axtell (1996).

On the other hand, our natural use of language, social expectations, and legal systems all weigh in favor of social forms, institutions, and other structures. These are, naturally, all “socially constructed”, but these social constructs undeniably reproduce themselves; otherwise, they would not continue to exist. This process of self-reproduction is named autopoiesis (from ‘auto-‘ (self-) and ‘-poiesis’ (creation)) by Maturana and Varela (1991). The concept has been taken up by Luhmann (1995) in social theory and Brier (2008) in Library and Information Sciences (LIS). As these later theorists argue, the phenomenon of language itself can be explained only as an autopoietic social system.

There is a gap between the positions of autopoiesis theorists and the sociological holists discussed by Sawyer. Autopoiesis is, in Varela’s formulation, a general phenomenon about the organization of matter. It is, in his view, the principle of organization of life on the cellular level.

Contrast this with the ‘holist’ social theorist who sees social structures as being reproduced by the “internalization” of the structure by the constituent agents. Social structures, in this view, depend at least in part on their being understood or “known” by the agents participating in them. This implies that the agents have certain cognitive powers that, e.g., strands of organic chemicals do not. [Sawyer refers to Castelfranchi, 1998 on this point; I have yet to read it.] Arguably, social norms are only norms because they are understood by agents involved. This is the position of Habermas (1985) for example, whose whole ethical theory depends on the rational acceptance of norms in free discussion. (This is the legacy of Immanuel Kant.)

What I am arguing is that there is another position on the emergence of social structure, not identified by Sawyer (2000), that does not depend on internalization but that nevertheless grants social structure causal reality. Social forms may arise from individual activity in the same way that biological organization arises from unconscious chemical interactions. I suppose this is a form of holism.

I’d like to call this view the “hard realist” view of social structure, to contrast with “soft realist” views of social structure that depend on internalization by agents. I don’t mean for this to be taken aggressively, but rather because I have a very concrete distinction in mind. If social structure depends on internalization by agents, then that means (by definition, really) that there exists an intervention on the beliefs of agents that could dissolve the social structure and transform it into something else. For example, an left-wing anarchist might argue that money only has value because we all believe it has value. If we were to just all stop valuing money, we could have a free and equal society at last.

If social structures exist even in spite of the recognition of them by social actors, then the story is quite different. This means (by definition) that interventions on the beliefs of actors will not dissolve the structure. In other words, just because something is a social construct does not mean that it can be socially deconstructed by a process of reversal. Some social structures may truly have a life of their own. (I would expect this to be truer the more we delegate social moderation to technology.)

This story is complicated by the fact that social actors vary in their cognitive capacities, and this heterogeneity can materially impact social outcomes. Axtell and Epstein (2006) have a model of the formation of retirement age norms in which a small minority of actors make their decision rationally based on expected outcomes and the rest adopt the behavior of the majority of their neighbors. This results in dynamic adjustments to behavior that, under certain parameters, make the society as a whole look more individually rational than its members actually are. This is encouraging to those of us who sometimes feel our attempts to rationally understand the world are insignificant in the face of social inertia.
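
To make the mechanism concrete, here is a toy sketch in the spirit of that model. It is not Axtell and Epstein’s actual implementation, and all the parameters below are made up for illustration:

```python
# A toy imitation model in the spirit of Axtell and Epstein (2006): a small
# minority of 'rational' agents retire at the individually optimal age, and
# everyone else copies the most common retirement age among their neighbors.
# All parameters are illustrative, not taken from the original chapter.
import random

N = 500                 # number of agents, arranged on a ring
RATIONAL_SHARE = 0.05   # small minority of rational agents
OPTIMAL_AGE = 65        # the age the rational agents compute as best
NEIGHBORS_EACH_SIDE = 5

rational = [random.random() < RATIONAL_SHARE for _ in range(N)]
ages = [random.randint(60, 70) for _ in range(N)]  # scattered initial norms

for step in range(100):
    new_ages = list(ages)
    for i in range(N):
        if rational[i]:
            new_ages[i] = OPTIMAL_AGE  # rational agents just pick the optimum
        else:
            # Imitators adopt the most common retirement age among neighbors.
            neighborhood = [ages[(i + d) % N]
                            for d in range(-NEIGHBORS_EACH_SIDE, NEIGHBORS_EACH_SIDE + 1)
                            if d != 0]
            new_ages[i] = max(set(neighborhood), key=neighborhood.count)
    ages = new_ages

# The question the model asks: does the rational minority pull the imitating
# majority toward the optimum, so the aggregate looks more rational than
# most of its members are?
print("share retiring at the optimal age:", sum(a == OPTIMAL_AGE for a in ages) / N)
```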

But it also makes it difficult to judge empirically whether a “soft realist” or “hard realist” view of social structure is more accurate. It also makes the empirical distinction between the holist and subjectivist positions difficult, for that matter. Surveying individuals about their perceptions of their social world will tell you nothing about hard realist social structures. If there are heterogeneous views about what the social order actually is, that may or may not impact the actual social structure that’s there. Real social structure may indeed create systematic blindnesses in the agents that compose it.

Therefore, the only way to test for hard realist social structure is to look at aggregate social behavior (perhaps on the interactionist level of analysis) and identify where its regularities can be attributed to generative mechanisms. Multi-agent systems and complex adaptive systems look like the primary tools in the toolkit for modeling these kinds of dynamics. So far I haven’t seen an adequate discussion of how these theories can be empirically confirmed using real data.

References

Axtell, Robert L., and Joshua M. Epstein. “Coordination in transient social networks: An agent-based computational model of the timing of retirement.” Generative social science: Studies in agent-based computational modeling (2006): 146.

Brier, Søren. Cybersemiotics: Why information is not enough!. University of Toronto Press, 2008.

Castelfranchi, Cristiano. “Simulating with cognitive agents: The importance of cognitive emergence.” International Workshop on Multi-Agent Systems and Agent-Based Simulation. Springer Berlin Heidelberg, 1998.

Cederman, Lars-Erik. “Computational models of social forms: Advancing generative process theory 1.” American Journal of Sociology 110.4 (2005): 864-893.

Epstein, Joshua M., and Robert Axtell. Growing artificial societies: social science from the bottom up. Brookings Institution Press, 1996.

Habermas, Jürgen, and Thomas McCarthy. The theory of communicative action. Vol. 2. Beacon Press, 1985.

Hayek, Friedrich August. “The use of knowledge in society.” The American economic review (1945): 519-530.

Luhmann, Niklas. Social systems. Stanford University Press, 1995.

Maturana, Humberto R., and Francisco J. Varela. Autopoiesis and cognition: The realization of the living. Vol. 42. Springer Science & Business Media, 1991.

Sawyer, R. Keith. “Simulating emergence and downward causation in small groups.” Multi-agent-based simulation. Springer Berlin Heidelberg, 2000. 49-67.


by Sebastian Benthall at May 22, 2017 03:16 PM

May 19, 2017

MIMS 2016

Why is it asking for gender and age? Not sure how that relates to recipes, but I also don’t cook…

I do think that if gender is absolutely necessary, it should give gender neutral options as well.

by Andrew Huang at May 19, 2017 01:30 AM

May 18, 2017

Ph.D. student

WannaCry as an example of the insecurity of legacy systems

CLTC’s Steve Weber and Betsy Cooper have written an Op-Ed about the recent WannaCry epidemic. The purpose of the article is clear: to argue that a possible future scenario CLTC developed in 2015, in which digital technologies become generally distrusted rather than trusted, is relevant and prescient. They then go on to elaborate on this scenario.

The problem with the Op-Ed is that the connection it draws to WannaCry is spurious. Here’s how they make the connection:

The latest widespread ransomware attack, which has locked up computers in nearly 150 countries, has rightfully captured the world’s attention. But the focus shouldn’t be on the scale of the attack and the immediate harm it is causing, or even on the source of the software code that enabled it (a previous attack against the National Security Agency). What’s most important is that British doctors have reverted to pen and paper in the wake of the attacks. They’ve given up on insecure digital technologies in favor of secure but inconvenient analog ones.

This “back to analog” moment isn’t just a knee-jerk, stopgap reaction to a short-term problem. It’s a rational response to our increasingly insecure internet, and we are going to see more of it ahead.

If you look at the article that they link to from The Register, which is the only empirical evidence they use to make their case, it does indeed reference the use of pen and paper by doctors.

Doctors have been reduced to using pen and paper, and closing A&E to non-critical patients, amid the tech blackout. Ambulances have been redirected to other hospitals, and operations canceled.

There is a disconnect between what the article says and what Weber and Cooper are telling us. The article is quite clear that doctors are using pen and paper amid the tech blackout. Which is to say, because their computers are currently being locked up by ransomware, doctors are using pen and paper.

Does that mean that “They’ve given up on insecure digital technologies in favor of secure but inconvenient analog ones.”? No. It means that since they are waiting to be able to use their computers again, they have no other recourse but to use pen and paper. Does the evidence warrant the claim that “This “back to analog” moment isn’t just a knee-jerk, stopgap reaction to a short-term problem. It’s a rational response to our increasingly insecure internet, and we are going to see more of it ahead.” No, not at all.

In their eagerness to show the relevance of their scenario, Weber and Cooper are so quick to say where the focus should be (on CLTC’s future scenario planning) that they ignore the specifics of WannaCry, most of which do not help their case. For example, there’s the issue that the vulnerability exploited by WannaCry had been publicly known for two months before the attack, and that Microsoft had already published a patch to the problem. The systems that were still vulnerable either did not apply the software update or were using an unsupported older version of Windows.

This paints a totally different picture of the problem than Weber and Cooper provide. It’s not that “new” internet infrastructure is insecure and “old” technologies are proven. Much of computing and the internet is already “old”. But there’s a life cycle to technology. “New” systems are more resilient (able to adapt to an attack or discovered vulnerability) and are smaller targets. Older legacy systems with a large installed base, like Windows 7, become more globally vulnerable if their weaknesses are discovered and not addressed. And if they are in widespread use, that presents a bigger target.

This isn’t just a problem for Windows. In this research paper, we show how similar principles are at work in the Python ecosystem. The riskiest projects are precisely those that are old, assumed to be secure, but no longer being actively maintained while the technical environment changes around them. The evidence of the WannaCry case further supports this view.


by Sebastian Benthall at May 18, 2017 02:20 PM

May 17, 2017

Ph.D. student

Sawyer on downward causation in social systems

The work of R. Keith Sawyer (2000) is another example of computational social science literature that I wish I had encountered ten years ago. Sawyer’s work from the early ’00’s is about the connections between sociological theory and multi-agent simulations (MAS).

Sawyer uses an example of an improvisational theater skit to demonstrate how emergence and downward causation work in a small group setting. Two actors in the skit exchange maybe ten lines, each building on the expectations set by the prior actions. The first line establishes the scene is a store, and one of the actors is the owner. The second actor approaches; the first greets her as if she is a customer. She acts in a childlike way and speaks haltingly, establishing that she needs assistance.

What changes in each step of the dialogue is the shared “frame” (in Sawyer’s usage) which defines the relationships and setting of the activity. Perhaps because it is improvisational theater, the frame is carefully shared between the actors. The “Yes, And…” rule applies and nobody is contradicted. This creates the illusion of a social reality, shared by the audience.

Reading this resonated with other reading and thinking I’ve done on ideology. I think about situations where I’ve been among people with a shared vision of the world, or where that vision of the world has been contested. Much of what is studied as framing in media studies is about codifying the relations between actors and the interpretation of actions.

Surely, for some groups to survive, they must maintain a shared frame among their members. This both provides a guide for collective action and also a motivation for cohesion. An example is an activist group at a protest. If one doesn’t share some kind of frame about the relationships between certain actors and the strategies being used, it doesn’t make sense to be part of that protest. The same is true for some (but maybe not all) academic disciplines. A shared social subtext, the frame, binds together members of the discipline and gives activity within it meaning. It also motivates the formation of boundaries.

I suppose the reification of Weird Twitter was an example of a viral framing. Or should I say enframing?! (Heidegger joke).

Getting back to Sawyer, his focus is on a particularly thorny aspect of social theory, the status of social structures and their causal efficacy. How do macro- social forms emerge from individual actors (or actions), and how do those macro- forms have micro- influence over individuals (if they do at all)? Broadly speaking in terms of theoretical poles, there are historically holists, like Durkheim and Parsons, who maintain that social structures are real and have causal power through, in one prominent variation, the internalization of the structure by individuals; subjectivists, like Max Weber, who see social structure as epiphenomenal and reduce it to individual subjective states; and interactionists, who focus on the symbolic interactions between agents and the patterns of activity. There are also hybrid theories that combine two or more of these views, most notably Giddens, who combines holist and subjectivist positions in his theory of structuration.

After explaining all this very clearly and succinctly, he goes on to talk about which paradigms of agent based modeling correspond to which classes of sociological theory.

References

Sawyer, R. Keith. “Simulating emergence and downward causation in small groups.” Multi-agent-based simulation. Springer Berlin Heidelberg, 2000. 49-67.


by Sebastian Benthall at May 17, 2017 07:06 PM

May 16, 2017

Ph.D. student

Similarities between the cognitive science/AI and complex systems/MAS fields

One of the things that made the research traditions of cognitive science and artificial intelligence so great was the duality between them.

Cognitive science tried to understand the mind at the same time that artificial intelligence tried to discover methods for reproducing the functions of cognition artificially. Artificial intelligence techniques became hypotheses for how the mind worked, and empirically confirmed theories of how the mind worked inspired artificial intelligence techniques.

There was a lot of criticism of these fields at one point. Writers like Hubert Dreyfus, Lucy Suchman, and Winograd and Flores critiqued especially heavily one paradigm that’s now called “Good Old Fashioned AI”–the kind of AI that used static, explicit representations of the world instead of machine learning.

That was a really long time ago and now machine learning and cognitive psychology (including cognitive neuroscience) are in happy conversation, with much more successful models of learning that by and large have absorbed the critiques of earlier times.

Some people think that these old critiques still apply to modern methods in AI. Isn’t AI still AI? I believe the main confusion is that lots of people don’t know that “computable” means something very precisely mathematical: it means a function that can be computed by a Turing machine, or equivalently, expressed as a partial recursive function. It just so happens that computers, the devices we know and love, can compute any computable function.

So what changed in AI was not that they were using computation to solve problems, but the way they used computation. Similarly, while there was a period where cognitive psychology tried to model mental processes using a particular kind of computable representation, and these models are now known to be inaccurate, that doesn’t mean that the mind doesn’t perform other forms of computation.

A similar kind of relationship is going on between the study of complex systems, especially complex social systems, and the techniques of multi-agent system modeling. Multi-agent system modeling is, as Epstein clarifies, about generative modeling of social processes that is computable in the mathematical sense, but the fact that physical computers are involved is incidental. Multi-agent systems are supposed to be a more realistic way of modeling agent interactions than, say, neoclassical game theory, in the same way that machine learning is a more realistic way of modeling cognition than GOFAI.

Given that, despite (or, more charitably, because of) the critiques leveled against them, cognitive science and artificial intelligence have developed into widely successful and highly respected fields, we should expect complex systems/multi-agent systems research to follow a similar trajectory.


by Sebastian Benthall at May 16, 2017 09:03 PM

May 13, 2017

Ph.D. student

Varian taught Miller

“The emerging tapestry of complex systems research is being formed by localized individual efforts that are becoming subsumed as part of a greater pattern that holds a beauty and coherence that belies the lack of an omniscient designer.” – John H. Miller and Scott Page, Complex Adaptive Systems: An Introduction to Computational Models of Social Life

I’ve been giving myself an exhilarating crash course in the complex systems literature. Through reading several books and articles on the matter, one gets a sense of the different authors, their biases and emphasis. Cederman works carefully to ground his work in a deeper sociological tradition. Epstein is no-nonsense about the connection between mathematicity and computation and social scientific method. Holland is clear that social systems are, in his view, a special case of a more generalized object of scientific study, complex adaptive systems.

Perhaps the greatest challenge to any system, let alone social system, is self-reference. The capacity of social science as a system (or systems) to examine themselves is the subject of much academic debate and public concern. Miller and Page, in their Complex Adaptive Systems: An Introduction to Computational Models of Social Life, begin with their own comment on the emergence of complex systems research using a symbolic vocabulary drawn from their own field. They are conscious of their work as a self-reflective thesis that forms the basis of a broader and systematic education in their field of research.

As somebody who has attempted social scientific investigation of scientific fields (in my case, open source scientific software communities, along with some quasi-ethnographic work), my main emotions when reacting to this literature are an excitement about its awesome potential and a frustration that I have not been studying it sooner. I have been intellectually hungry for this material while studying at Berkeley, but it wasn’t in the zeitgeist of the places I was a part of to take this kind of work as the basis for study.

I think it’s fair to say that most of the professors there have heard of this line of work but are not experts in it. It is a relatively new field and UC Berkeley is a rather conservative institution. To some extent this explains this intellectual gap.

So then I discovered in the acknowledgements section of Miller and Page that Hal Varian taught John H. Miller when both were at University of Michigan. Hal Varian would then go on to be the first dean of my own department, the School of Information, before joining Google as their “chief economist” in 2002.

Google in 2002. I believe he helped design the advertising auction system, which was the basis of their extraordinary business model.

I’ve had the opportunity to study a little of Varian’s work. It’s really good. Microeconomic theory pertinent to the information economy. It included theory relevant to information security, as Ross Anderson’s recent piece in Edge discusses. This was highly useful stuff that is at the foundation of the modern information economy, at the very least to the extent that Google is at the foundation of the modern information economy, which it absolutely is.

This leaves me with a few burning questions. The first is why isn’t Varian’s work taught to everybody in the School of Information like it’s the f—ing gospel? Here we have a person who founded the department and by all evidence discovered and articulated knowledge of great importance to any information enterprise or professional. So why is it not part of the core curriculum of a professional school aimed at preparing people for Silicon Valley management jobs?

The second question is why isn’t work descending from Varian’s held in higher esteem at Berkeley? Why is it that neoclassical economic modeling, however useful, is seen as passé, and complex systems work almost unheard of? This does not, it seems to me, reflect the field’s standing nationally. I’m seeing Carnegie Mellon, the University of Michigan, the Brookings Institution, Johns Hopkins, and Princeton all represented among the scholars studying complex systems. Berkeley is precisely the sort of place you would expect this work to flourish. But I know of only one professor there who teaches it with seriousness, a relatively new hire in the Geography department (who I in no way intend to diminish by writing this post; on the contrary).

One explanation is, to put it bluntly, brain drain. Hal Varian left Berkeley for Google in 2002. That must have been a great move for him. Perhaps he assumed his legacy would be passed on through the education system he helped to found, but that is not exactly what happened. Rather, it seems he left a vacuum for others to fill. Those left to fill it were those with less capacity to join the leadership of the booming technology industry: qualitative researchers. Latourians. The eager ranks of the social studier. (Note the awkwardness of the rendering of ‘Studies’ as a discipline to its practitioner, a studier.) Engineering professors stayed on, and so the university churns out capable engineers who go on to lucrative careers. But something, some part of the rigorous strategic vision, was lost.

That’s a fable, of course. But one has to engage in some kind of sense-making to get through life. I wonder what somebody with a closer relationship to the administration of these institutions would say to any of this. For now, I have my story and know what it is I’m studying.


by Sebastian Benthall at May 13, 2017 02:03 PM

May 12, 2017

Ph.D. student

Hurray! Epstein’s ‘generative’ social science is ‘recursive’ or ‘effectively computable’ social science!

I’m finding recent reading on agent-based modeling profoundly refreshing. I’ve been discovering a number of writers with a level of sanity about social science and computation that I have been trying to find for years.

I’ve dipped into Joshua Epstein’s Generative Social Science: Studies in Agent-Based Computational Modeling (2007), which the author styles as a sequel to the excellent Growing Artificial Societies: Social Science from the Bottom Up (1996). Epstein explains that while the first book was a kind of “call to arms” for generative social science, the later book is a firmer and more mature theoretical argument, in the form of a compilation of research offering generative explanations for a wide variety of phenomena, including such highly pertinent ones as the emergence of social classes and norms.

What is so refreshing about reading this book is, I’ll say it again, the sanity of it.

First, it compares generative social science to other mathematical social sciences that use game theory. It notes that, though there are exceptions, the problem with these fields is their tendency to see explanation in terms of Nash equilibria of unboundedly rational agents. There’s lots of interesting social phenomena that are not in such an equilibrium–the phenomenon might itself be a dynamic one–and no social phenomenon worth mentioning has unboundedly rational agents.

This is a correct critique of naive mathematical economic modeling. But Epstein does not throw the baby out with the bathwater. He’s advocating for agent-based modeling through computer simulations.

This leads him to respond preemptively to objections. One of these responses is “The Computer is not the point”. Yes, computers are powerful tools and simulations in particular are powerful instruments. But it’s not important to the content of the social science that the simulations are being run on computers. That’s incidental. What’s important is that the simulations are fundamentally translatable into mathematical equations. This follows from basic theory of computation: every computer program computes some mathematical function. Hence, “generative social science” might as well be called “recursive social science” or “effectively computable social science”, he says; he took the term “generative” from Chomsky (as in “generative grammar”).

Compare this with Cederman’s account of ‘generative process theory‘ in sociology. For Cederman, generative process theory is older than the theory of computation. He locates its origin in Simmel, a contemporary of Max Weber. The gist of it is that you try to explain social phenomena by explaining the process that generates it. This is a triumphant position to take because it doesn’t have all the problems of positivism (theoretical blinders) or phenomenology (relativism).

So there is a sense in which the only thing Epstein is adding on top of this is the claim that proposed generative processes be computable. This is methodologically very open-ended, since computability is a very general mathematical property. Naturally the availability of computers for simulation makes this methodological requirement attractive, just as ‘analytic tractability’ was so important for neoclassical economic theory. But on top of its methodological attractiveness, there is also an ontological attractiveness to the theory. If one accepts what Charles Bennett calls the “physical Church theory”–the idea that the Church-Turing thesis applies not just to formal systems of computation but to all physical systems–then the foundational assumption of Epstein’s generative social science holds not just as a methodological assumption, but as an ontological claim about society itself.

This was all written in 2007, two years before Lazer et al.’s “Life in the network: the coming age of computational social science“. “Computational social science”, in their view, is about the availability of data, the Internet, and the ability to look at society with a new rigor known to the hard sciences. Naturally, this is an important phenomenon. But somehow in the hype this version of computational social science became about the computers, while the underlying scientific ambition to develop a generative theory of society was lost. Computability was an essential feature of the method, but the discovery (or conjecture) that society itself is computation was lost.

But it need not be. Even on just a short dip into it, Epstein’s Generative Social Science is a fine, accessible book. All we need to do is get everybody to read it so we can all get on the same page.

References

Cederman, Lars-Erik. “Computational models of social forms: Advancing generative process theory 1.” American Journal of Sociology 110.4 (2005): 864-893.

Epstein, Joshua M., and Robert L. Axtell. “Growing artificial societies: Social science from the bottom up (complex adaptive systems).” (1996).

Epstein, Joshua M. Generative social science: Studies in agent-based computational modeling. Princeton University Press, 2006.

Lazer, David, et al. “Life in the network: the coming age of computational social science.” Science (New York, NY) 323.5915 (2009): 721.


by Sebastian Benthall at May 12, 2017 01:57 AM

May 05, 2017

Ph.D. student

Society as object of Data Science, as Multi-Agent System, and/or Complex Adaptive System

I’m drilling down into theory about the computational modeling of social systems. In just a short amount of time trying to take this task seriously, I’ve already run into some interesting twists.

A word about my trajectory so far: my background, such as it is, has been in cognitive science and artificial intelligence, and then software engineering. For the past several years I have been training to be a ‘data scientist’, and have been successful at that. This means getting a familiarity with machine learning techniques (a subset of AI), the underlying mathematical theory, software tooling, and research methodology to get valuable insights out of unstructured or complex observational data. The data sets I’m interested in are, as a rule, generated by some sort of sociotechnical process.

As much as the techniques of data science lead to rigorous understanding of data at hand, there’s been something missing from my toolbox: an appropriate modeling language for social processes that can encode the kinds of implicit theories that my analysis surfaces. Hence the transition I am attempting, from being a data scientist (a diluted term) to a computational social scientist.

The difficulty, navigating as I am out of a very odd intellectual niche, is acquiring the theoretical vocabulary that bridges the gap between social theory and computational theory. In my training at Berkeley’s School of Information, frequently computational theory and social theory have been assumed to be at odds with each other, applying to distinct domains of inquiry. I gather that this is true elsewhere as well. I have found this division intellectually impossible to swallow myself. So now I am embarking on an independent expedition into the world of computational social theory.

One of the pieces grounding my study, as I’ve mentioned, is Cederman’s work outlining the relationship between generative process theory, multi-agent simulations (MAS), and computational sociology. It is great work for connecting more recent developments in computational sociology with earlier forms of sociology proper. Cederman cites interesting works by R. Keith Sawyer, who goes into depth about how MAS can shed light on some of the key challenges of social theory: how does social order happen? The tricky part here is the relationship between the ‘macro’ level ‘social forms’ and the ‘micro’ level individual actions. I disagree with some of Sawyer’s analysis, but I think he does a great job of setting up the problem and its relationship to other sociological work, such as Giddens’s work on structuration.

This is, so far, all theory. As a concrete example of this method, I’ve been reading Epstein and Axtell’s Growing Artificial Societies (1996), which I gather is something of a classic in the field. Their Sugarscape model is very flexible and their simulations shed light on timeless questions of the relationship between economic activity and inequality. Their presentation is also inspiring.

As a rule I’m finding the literature in this space far more accessible than I would have expected. It’s often written in very plain language and depends more on the power of illustration than scientific terminology laden with intellectual authority. What I have encountered so far is, perhaps as a consequence, a little unsatisfying intellectually. But it’s all quite promising.

Based on these leads, I was recommended David Little’s recent blog post about complexity in social science. He’s quite critical of the bolder claims of these scientists; I’d like to revisit these arguments later. But what was most valuable for me were his references. One was a book by Epstein, who I gather has gone on to do a lot more work since co-authoring Growing Artificial Societies. This seems to continue in the vein of ‘generative’ modeling shared by Cederman.

But Little references two other sources: John Holland’s Complexity: A Very Short Introduction and Miller and Page’s Complex Adaptive Systems: An Introduction to Computational Models of Social Life.

This is actually a twist. Holland as well as Miller and Page appear to be concerned mainly with complex adaptive systems (CAS), which appear to be more general than MAS. At least, in Holland’s rendition, which I’m now reading. MAS, Cederman and Sawyer both argue, is inspired in part by Object Oriented Programming (OOP), a programming paradigm that truly does lend itself to certain kinds of simulations. But Holland’s work seems more ambitious, tying CAS back to contributions made by von Neumann and Noam Chomsky. Holland is after a general scientific theory of complexity, not a specific science of modeling social phenomena. Perhaps for this reason his work echoes some work I’ve seen in systems ecology on autocatalysis and Varela’s work on autopoiesis.

Indeed the thread of Varela may well lead to where I’m going. One paper I’ve seen ties computational sociology to Luhmann’s theory of communication; Luhmann drew on Varela’s ideas of autopoiesis explicitly. So there is likely a firm foundation for social theory somewhere in here.

These are fruitful investigations. What I’m wondering now is to what extent the literatures on MAS and CAS are divergent.

 

 


by Sebastian Benthall at May 05, 2017 02:38 PM

May 03, 2017

Ph.D. student

Responding to Kelkar on the study and politics of artificial intelligence

I quite like Shreeharsh Kelkar’s recent piece on artificial intelligence as a thoughtful comment on the meaning of the term today and what Science and Technology Studies (STS) has to offer the public debate about it.

When AI researchers (and today this includes people who label themselves machine learning researchers, data scientists, even statisticians) debate what AI really means, their purpose is clear: to legitimate particular programs of research. What agenda do we—as non-participants, yet interested bystanders—have in this debate, and how might it be best expressed through boundary work? STS researchers have argued that contemporary AI is best viewed as an assemblage that embodies a reconfigured version of human-machine relations where humans are constructed, through digital interfaces, as flexible inputs and/or supervisors of software programs that in turn perform a wide-variety of small-bore high-intensity computational tasks (involving primarily the processing of large amounts of data and computing statistical similarities). It is this reconfigured assemblage that promises to change our workplaces, rather than any specific technological advance. The STS agenda has been to concentrate on the human labor that makes this assemblage function, and to argue that it is precisely the invisibility of this labor that allows the technology to seem autonomous. And of course, STS scholars have argued that the particular AI assemblage under construction is disproportionately tilted towards benefiting Silicon Valley capitalists.

This is a compelling and well-stated critique. There’s just a few ways in which I would contest Kelkar’s argument.

The first is to argue that the political thrust of the critique, that artificial intelligence often involves a reconfiguration of the relationship between labor and machines, is not in general one made firmly by STS scholars. In Kelkar’s own characterization, STS researchers are “non-participants, yet interested bystanders” in the debate about AI. This distancing maneuver by STS researchers brackets off how their own workplaces, as white collar information workers, are constantly being reconfigured by artificial intelligence, while their funding is tied up with larger forces in the information economy. Therefore there’s always something disingenuous to the STS researcher’s claim to be a bystander, a posturing which allows them to be provocative but take no responsibility for the consequences of the provocation.

In contrast, one could consider the work of Nick Land, who is as far as I can tell not taken seriously by STS researchers though he’s by now a well-known theorist on similar subjects. I haven’t studied Land’s work much myself; I get my understanding mainly through S.C. Hickman’s excellent blogging. I also cannot really speak to Land’s connection with the alt-right; I just don’t know much about it. What I believe Land has done is tried to develop social theory that takes into account the troubling relationship between artificial intelligence and labor, articulated the relationship, and become not just a bystander but a participant in the debate.

Essentially what I’m arguing is that if STS researchers don’t activate the authentic political tendency in their own work, which often is either a flavor of accelerationism or a reaction to it, they are being, to use an old phrase for which I can find no immediate substitute, namby pamby. If one has a sophomore-level understanding of Marxist theory and can make the connection between artificial intelligence and capital, it’s not clear what is added by the STS perspective besides a lot of particularization of the theory.

The other criticism of Kelkar’s argument is that it isn’t at all charitable to AI researchers. Somehow it collapses all discussion of AI into a “contemporary” debate with an underlying economic anxiety. Even the AI researchers are, in this narrative, driven by economic anxiety, as their own articulation of their research agenda exists only for its own legitimization. The natural tendency of STS researchers is to see scientists as engaged primarily in rhetorical practices aimed at legitimizing their own research. This tends to obscure any actual technological advances made by scientists. AI researchers are no exception. Let’s assume that artificial intelligence does indeed reconfigure the relationship between labor and capital, rendering much labor invisible and giving the illusion of autonomy to machines capable of intense computational tasks, for the ultimate benefit of Silicon Valley capitalists. STS researchers, at least those characterized by Kelkar, downplay that there are specific technical advances that make that reconfiguration possible, that these accomplishments are expensive and require an enormous amount of technical labor, and moreover that there are fundamental mathematical principles underlying the development of this technology. But these are facts of the matter that are extremely important to anybody who is an actual participant in the debates around AI, let alone the economy that AI is always already reconfiguring.

The claim that AI researchers are mainly legitimizing themselves through the rhetoric of calling their work “artificial intelligence”, as opposed to accomplishing scientific and engineering feats, is totally unhelpful if one is interested in the political consequences of artificial intelligence. In my academic experience, this move is primarily one of projection: STS researchers are constantly engaged in rhetorical practices legitimizing themselves, so why shouldn’t scientists be as well? As long as one is a “bystander”, having no interest in praxis, there is no contest except rhetorical contest for legitimacy of research agendas. This is entirely a product of the effete conditions of academic research disengaged from all reality except courting funding agencies. If STS scholars turned themselves towards the task of legitimizing themselves through actual political gains, their understanding of artificial intelligence would be quite different indeed.


by Sebastian Benthall at May 03, 2017 09:06 PM

May 02, 2017

Ph.D. student

Civil liberties and liberalism in the EU’s General Data Protection Regulation (GDPR)

I’ve been studying the EU’s General Data Protection Regulation and reading the news.

In the news, I’m reading all the time about how the European Union is the last bastion of the “post-war liberal order”, threatened on all sides by ethnonationalism, including from the United States. Some writers have argued that the U.S. has simply moved on from the historical conditions of liberalism, with liberals as a class just having trouble getting over it. Brexit is somehow also framed as an ethnonationalist project. Whether scapegoat or actual change agent, new attention is on Russia, which has never been liberal and justified its action to take back Crimea based on the ethnic Russian-ness of that territory.

Despite being normal in many parts of the world, one thing that’s upsetting to liberalism about ethnonationalism is the idea that the nation is rooted in an ethnicity, which is a form of social collective bound by genetic and family ties, and not in individual autonomy. From here it is a short step to having that ethnicity empowered in its command of the nation-state. And as we have been taught in history, when you have states acting on behalf of certain ethnicities, those states often treat other ethnicities in ways that are, from a liberal perspective, unjust. One of the first things to go are the freedoms, especially political freedoms (the kinds of freedoms that lead directly or indirectly to political power).

This is just a preface, not intended in any particular way, to explain why I’m interested in some of the language in the General Data Protection Regulation (GDPR). I’m studying GDPR because I’m studying privacy engineering: how to design technical systems that preserve people’s privacy. For practical reasons this requires some review of the relevant legislation. Compliance with the law is, if nothing else, a business concern, and this makes it relevant to technologists. But the GDPR, which is one of the strongest privacy regulations on the horizon, is actually thick with political intent which goes well beyond the pragmatic and mundane concerns of technical design. Here is section 51 of the Recitals, which discuss the motivation of the regulation and are intended to be used in interpretation of the legally binding Articles in the second section (emphasis mine):

(51) Personal data which are, by their nature, particularly sensitive in relation to fundamental rights and freedoms merit specific protection as the context of their processing could create significant risks to the fundamental rights and freedoms.

Those personal data should include personal data revealing racial or ethnic origin, whereby the use of the term ‘racial origin’ in this Regulation does not imply an acceptance by the Union of theories which attempt to determine the existence of separate human races.

The processing of photographs should not systematically be considered to be processing of special categories of personal data as they are covered by the definition of biometric data only when processed through a specific technical means allowing the unique identification or authentication of a natural person.

Such personal data should not be processed, unless processing is allowed in specific cases set out in this Regulation, taking into account that Member States law may lay down specific provisions on data protection in order to adapt the application of the rules of this Regulation for compliance with a legal obligation or for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller.

In addition to the specific requirements for such processing, the general principles and other rules of this Regulation should apply, in particular as regards the conditions for lawful processing.

Derogations from the general prohibition for processing such special categories of personal data should be explicitly provided, inter alia, where the data subject gives his or her explicit consent or in respect of specific needs in particular where the processing is carried out in the course of legitimate activities by certain associations or foundations the purpose of which is to permit the exercise of fundamental freedoms.

It’s not light reading. What I find most significant about Recital 51 is that it explicitly makes the point that data concerning somebody’s racial and ethnic origin are particularly pertinent to “fundamental rights and freedoms” and potential risks to them. This is despite the fact that the EU is denying any theory of racial realism. Recital 51 is in effect saying that race is a social construct but that even though it’s just a social construct it’s so sensitive an issue that processing data about anybody’s race is prima facie seen as creating a risk for their fundamental rights and freedoms. Ethnicity, not denied in the same way as race, is treated similarly.

There are tons of legal exceptions to these prohibitions in the GDPR and I expect that the full range of normal state activities are allowed once all those exceptions are taken into account. But it is curious that revealing race and ethnic origin is considered dangerous by the EU’s GDPR at the same time when there’s this narrative that ethnonationalists want to break up the EU in order to create states affording special privileges to national ethnicities. What it speaks to, among other things, is the point that the idea of a right to privacy is not politically neutral with respect to these questions of nationalism and globalism which seem to define the most important dimensions of political difference today.

Assuming I’m right and the GDPR encodes a political liberalism that opposes ethnonationalism, this raises interesting questions for how it affects geopolitical outcomes once it comes to be enforced. Because of the extra-territorial jurisdiction of the GDPR, it imposes on businesses all over the world policies that respect its laws even if those businesses only operate partially in the EU. Supposing the EU holds together in some form while in other places some moderate form of ethnonationalism takes over. Would the GDPR and its enforcement be strong enough to normalize liberalism into technical and business design globally even while ethonationalist political forces erode civil liberties with respect to the state?


by Sebastian Benthall at May 02, 2017 06:47 PM

April 28, 2017

Ph.D. student

Highlights of Algorithms and Explanations (NYU April 27-28) #algoexpla17

I’ve attended the Algorithms and Explanations workshop at NYU this week. In general, it addressed the problems raised by algorithmic opacity in decision-making. I wasn’t able to attend all the panels; in this post I’ll cover some highlights of what I found especially insightful or surprising.

Overall, I was impressed by the work presented. All of it rose above the naive positions on the related issues; much of it was targeted at debunking these naive positions. This may have been a function of the venue: hosted by the Information Law Institute at NYU Law, the intellectual encounter was primarily between lawyers and engineers. This focuses the conversation. It was not a conference on technology criticism, in a humanities or popular style, which is often too eager to conflate itself with technology policy. In my opinion, this conflation leads to the kinds of excesses Adam Elkus has addressed in his essay on technology policy, which I recommend. For the most part one did not get the sense that the speakers were in the business of creating problems; they were in the business of solving them.

At least this was the tone set by the first panel I attended, which was a collection of computer scientists, statisticians, and engineers who presented tools or conceptualizations that gave algorithmic systems legibility. Of these, I found Anupam Datta's Quantitative Input Influence measure the best motivated from a statistical perspective. I do believe that this measure essentially solves the problem that most vexes people when it comes to the opacity of machine learning systems, by giving a clear score for how much each input affects decision outcomes.
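
To make the idea concrete, here is a minimal sketch in Python of a QII-style influence score. It is heavily simplified relative to the published measure, which treats correlated inputs and sets of inputs much more carefully; the model object and feature matrix are hypothetical placeholders, not anything presented at the workshop.

import numpy as np

def unary_influence(model, X, feature, rng=None):
    # Rough, illustrative influence score for a single input: the fraction
    # of decisions that change when that input is resampled from its
    # marginal distribution, breaking its link to the other inputs.
    # This is a simplification, not the published QII definition.
    rng = np.random.default_rng(rng)
    baseline = model.predict(X)          # model: any fitted classifier with .predict()
    X_intervened = X.copy()              # X: 2-D numpy array of inputs
    X_intervened[:, feature] = rng.choice(X[:, feature], size=len(X))
    return np.mean(model.predict(X_intervened) != baseline)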

I also enjoyed the presentation of Foster Provost, partly for the debunking force of the talk. He drew on his 25+ years of experience designing and deploying decision support systems and pointed out that ever since people started building these tools, the questions of interpretability and accountability have been a part of the job. As a person with a technical and industry background who encountered the surge of 'algorithmic accountability' at an academic stage, I've found many of the questions that have been raised by the field to be baffling, largely because the solutions have seemed either obvious or ingrained in engineering culture as among the challenges of dealing with clients. (This tree swing cartoon is a classic illustration.)

Alexandra Chouldechova gave a very interesting talk on model comparison as a way of identifying bias in black-box algorithms which was new material for me.

In the next panel, dealing specifically with regulation, Deven Desai provided a related historical perspective: there's a preexisting legal literature on bureaucratic transparency that is relevant to regulatory questions about algorithmic transparency. This awareness is shared, I believe, by those who hold what may be called a physicalist understanding of computation, or what Charles Bennett has called the "physical Church's thesis": the position that the Church-Turing thesis, which is about how all formal computational systems are reducible to each other and share certain limits as to their power, applies to all physical information processing systems. In particular, this thesis leads to the conclusion that human bureaucratic and information technological systems are essentially up to the same thing when it comes to information processing (this is also the position of Beniger).

But the most galvanizing talk in the regulatory panel was by Sandra Wachter, who presented material relevant to her paper "Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation". Companies and privacy scholars in the U.S. turn to the GDPR as a leading and challenging new regulation. It's bold to show up at a conference on Algorithms and Explanation with an argument that the explainability of algorithms isn't relevant to the next generation of privacy regulations. This is a space to watch.

The second day’s talks focused on algorithmic explainability in specific sectors. Of these I found the intellectually richest to be the panel on Health Care. Rich Caruana gave a warm and technically focused talk on how the complexity of functions used by a learning system can support or undermine its intelligibility, a topic I personally see as the crux of the problem.

I was especially charmed, however, by Federico Cabitza's discussion of decision support in the medical context. I wish I could point to any associated papers, but do not have them handy. What was most compelling about the talk was the way it made the case for needing to study algorithmic decision making in vivo, as part of a decision procedure that involves human experts and that learns, as a socio-technical system, over time. In my opinion, too often the perils of opacity of algorithms are framed in terms of a specific judgment or the faults of a specific model. As I try to steer my own work more towards sociological process theory, I'm on the lookout for technologists who see technology as part of a sociotechnical, evolutionary process and not in isolation. With this complex ontology in mind, Cabitza was then able to unpack "explanation" into dimensions that targeted different aspects of the decision making process: scrutability, comprehensibility, and interpretability. There was far too much in the talk for me to cover here.

The next panel was on algorithms in consumer credit. All three speakers were very good, though their talks worked in different directions and the tensions between them were never resolved in the questions. Dan Raviv of Lendbuzz explained how his company was bringing credit to those who otherwise have not had access to it: immigrants to the U.S. with firm professional qualifications but no U.S. credit history. Lendbuzz has essentially identified a prime credit population ignored by current FICO scores, and has started a bank to lend to them.

That’s an interesting business and technical accomplishment. Unfortunately, it was largely overlooked as attention moved to later talks in this section. Aaron Rieke of Upturn gave a very realistic picture of the use of big data in credit scoring (it isn’t used much in the U.S.; they mainly use conventional data sources like credit history). What he’s looking for, rather humbly, is ways to be a better advocate, especially for those who are adversely affected by the enormous disparity in credit access.

This disparity was the background to Frank Pasquale’s talk, which was broad in scope. I’m glad he dug into social science theory, presenting some material from “Two Narratives of Platform Capitalism“, which I wish I had read earlier. We seem to share an interest in alternative theories of social scientific explanation and its relationship to the tech economy. It was, as is typical of Pasquale’s work, rather polemical, calling for a critical examination of credit scoring and financial regulation with the aim of exposing exploitation. This exploitation reveals itself in the invasions of privacy suffered by those in poverty as well as the inability of those deemed credit-unworthy to access opportunity.

One cannot fault the political motivation of raising awareness of and supporting the disadvantaged in society. But where the discussion missed the mark, I’m afraid, was in tying these concerns about inequality back to questions of algorithmic transparency. I’m generally of the opinion that the disparities in society are the result of social forces and patterns much more forceful and comprehensive than the nuances of algorithmic credit scoring. It’s not clear how any interventions on these mechanisms can lead to better political outcomes. As Andrew Selbst pointed out in an insightful comment, the very idea of ‘credit worthiness’ sets the deck against those who do not have the reliable wealth to pay their debts. And as Raviv’s presentation revealed (before being eclipsed by other political concerns), for some, the problem is not enough algorithmic analysis of their financial situation, not too much.

There's a broad and old literature in economics about moral hazard in insurance markets, markets for lemons, and other game theoretic understandings of the winners and losers in these kinds of two-sided markets, which is generally understated in 'critical' discussions of credit scoring algorithms. That's too bad in my opinion, as it provides the best explanation of the political outcomes that are most concerning about credit markets. (These discussions of mechanism design use formal modeling but generally do not in and of themselves carry a neoclassical ideology.)
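
For readers who have not met that literature, here is a tiny worked example of Akerlof-style adverse selection in a market for lemons, with invented numbers rather than anything drawn from the talks: quality is uniform on [0, 1], sellers value a good at its quality, and buyers value it at 1.5 times its quality but cannot observe quality before buying.

def expected_quality_offered(price):
    # At a given price, only sellers whose quality is at most the price
    # are willing to sell, so the average quality on offer is price / 2
    # (quality is uniform on [0, 1]).
    return min(price, 1.0) / 2.0

price = 1.0
for step in range(10):
    # Buyers will pay at most 1.5 times the expected quality on offer;
    # a lower price drives out the better sellers, which lowers quality,
    # which lowers the price again, and so on.
    price = 1.5 * expected_quality_offered(price)
    print(step, round(price, 4))
# The price unravels toward zero: adverse selection collapses the market.

Analogous selection arguments, applied to borrowers rather than used cars, are part of what that literature has to say about who gains and who loses access to credit.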

The last talk I attended was about algorithms in the media. Nick Diakopoulos gave a comprehensive review of the many issues at stake. The most famous speaker on this panel was Gilad Lotan, who presented a number of interesting (though to me, familiar) data science results about media fragmentation and the Outside Your Bubble BuzzFeed feature, aimed at countering it.

I wish Lotan had presented on something else: how BuzzFeed uses the engagement data it collects across its platforms and content to make editorial and strategic decisions. This is the kind of algorithmic decision-making that affects people's lives. It is also precisely the kind of decision-making which is not generally transparent to the consumers of media. It would have been nice (and I feel, appropriate for the conference) if Lotan had taken the opportunity to explain BuzzFeed's algorithms, especially in the sociotechnical context of the organization's broader decision-making and strategy. But he didn't.

The discussion proceeded to devolve into one about fake news. One good point that was made in this discussion was by Julia Powles: she’s learned in her work that one of the important and troubling consequences of technology’s role in media is that while Google, Facebook and the like cater to both journalists and media consumers, their market role is disintermediation of the publishers. But historically, journalists have had their editorial power through their relationships with publishers, who used to be the ones to control distribution.

I came away from this conference feeling well informed about innovations in machine learning and statistics in model interpretation and communication. But I’ve left confirmed in my view that much of the discussion of algorithms and their political effects per se is a red herring. Broader economic questions of industrial organization of the information economy dominate the algorithmic particulars, where political effects are concerned.


by Sebastian Benthall at April 28, 2017 08:14 PM

April 23, 2017

Ph.D. student

Process theory; generative epistemology; configurative ontology: notes on Cederman, part 1

I’ve recently had recommended to me the work of L.E. Cederman, who I’ve come to understand is a well-respected and significant figure in computational social science, especially agent based modeling. In particular, I’ve been referred to this paper on the theoretical foundations of computational sociology:

Cederman, L.E., 2005. Computational models of social forms: Advancing generative process theory. American Journal of Sociology, 110(4), pp.864-893. (link)

This is a paper I wish I had encountered years ago. I've written much here about my struggles with "interdisciplinary" research. In short: I've been trying to study social phenomena with scientific rigor. This is a very old problem fraught with division. On top of that, there's been, it seems, an epistemological upset because of advances in data collection and processing that poses a practical challenge to a lot of established disciplines. On top of this, the social phenomena I'm interested in most tend to involve the interaction between people and technology, which brings with it an association with disciplines specialized to that domain (HCI, STS) that for me have not made my research any more straightforward. After trying for some time to do the work I wanted to do under the new heading of data science, I did not find what I was looking for intellectually in that emerging field, however important the practical skill-set involved has been to me.

Computational social science, I’ve convinced myself if not others, is where the answers lie. My hope for it is that as a new discipline, it’s able to break away from dogmas that limited other disciplines and trapped their ambitions in endless methodological debates. What is being offered, I’ve imagined, in computational social science is the possibility of a new paradigm, or at least a viable alternative one. Cederman’s 2005 paper holds out the promise for just that.

Let me address for now some highlights of his vision of social science and how they relate to one another. I hope to come to the rest in a later post.

Sociological process theory. This is a position in sociological theory that Cederman attributes to 19th century sociologist Georg Simmel. The core of this position is that social reality is not fixed, but rather the result of an ongoing process of social interactions that give rise to social forms.

“The large systems and the super-individual organizations that customarily come to mind when we think of society, are nothing but immediate interactions that occur among men constantly every minute, but that have become crystallized as permanent fields, as autonomous phenomena.” (Simmel quoted in Wolf 1950, quoted in Cederman 2005)

There is a lot to this claim. If one is coming from the field of Human Computer Interaction (HCI), what may seem most striking about it is how well it resonates with a scholarly tradition that is most frequently positioned as a countercurrent to an unthinking positivism in design. Lucy Suchman, Etienne Wenger, and Jean Lave are scholars that come to mind as representative of this way of thinking. Much of the intellectual thrust of Simmel can be found in Paul Dourish’s criticism of positivist understandings of “context” in HCI.

For Dourish, the intellectual ground of this position is phenomenological social science, often associated with ethnomethodology. Simmel predates phenomenology but is a neo-Kantian, a contemporary of Weber, and a critic of the positivism of his day (the original positivism). As a social scientific tradition, it has had its successors (maybe most notably George Herbert Mead) but has been submerged under other theoretical traditions. From Cederman's analysis, one gathers that this is largely due to process theory's inability to ground itself in rigorous method. Its early proponents were fond of metaphorical writing in a way that didn't age well. Cederman pays homage to sociological process theory's origins, but quickly moves to discuss an epistemological position that complements it. Notably, this position is neither positivist, nor phenomenological, nor critical (in the Frankfurt School sense), but something else: generative epistemology.

Generative epistemology. Cederman positions generative epistemology primarily in opposition to positivism and particularly a facet of positivism that he calls “nomothetic explanation”: explanation in terms of laws and regularities. The latter is considered the gold standard of natural science and the social sciences that attempt to mimic them. This tendency is independent of whether the inquiry is qualitative or quantitative. Both comparative analysis and statistical control look for a conjunction of factors that is regularly predictive of some outcome. (Cederman’s sources on this: (Gary) King, Keohane, and Verba (1994), and Goldthorpe, 1997. The Gary King cited is I assume the same Gary King who goes on to run Harvard’s IQSS; I hope to return to this question of positivism in computational social science in later writing. I tend to disagree with the idea that ‘data science’ or ‘big data’ has primarily a positivist tendency.)

Cederman describes the ‘process theorist’s’ alternative as based on abduction, not induction. Recall that ‘abduction’ was Peirce’s term for ‘inference to the best explanation’. The goal is to take an observed sociological phenomenon and explain its generation by accounting for how it is socially produced. The preference for generative explanation, in Simmel, comes in part from a pessimism about isolating regularities in complex social systems. Through this theorization, knowledge is gained; the knowledge gained is a theoretical advance that makes a social phenomenon less ‘puzzling’.

“The construction of generative explanations based on abductive inference is an inherently theoretical endeavor (McMullin, 1964). Instead of subsuming observations under laws, the main explanatory goal is to make a puzzling phenomenon less puzzling, something that inevitably requires the introduction of new knowledge through theoretical innovation.”

The specifics of the associated method are less clear than the motivation for this epistemology. Many early process theorists resorted to metaphors. But where all this is going is into the construction of models, and especially computational models, as a way of presenting and testing generative theories. Models generate forms through logical operations based on a number of parameters. A comparison between the logical form and the empirical form is made. If it is favorable, then the empirical form can be characterized as the result of a process described by the variables and model. (Barth, 1981)
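
As a toy illustration of that loop (my own, not Cederman's, and with placeholder data), one can run a generative model under chosen parameters, summarize its output as a "form", and compare that form against an empirical observation:

import numpy as np

def generate_form(n_agents, p_tie, rng):
    # Toy generative process: agents form ties at random with probability
    # p_tie; the resulting "social form" is summarized as each agent's
    # number of ties.
    ties = rng.random((n_agents, n_agents)) < p_tie
    np.fill_diagonal(ties, False)
    return ties.sum(axis=1)

def form_distance(simulated, empirical):
    # Crude comparison of generated and empirical forms via mean degree;
    # a real analysis would compare whole distributions.
    return abs(simulated.mean() - empirical.mean())

rng = np.random.default_rng(0)
empirical_degrees = np.array([3, 5, 2, 4, 6, 3, 5, 4])   # placeholder data
simulated_degrees = generate_form(n_agents=8, p_tie=0.5, rng=rng)
print(form_distance(simulated_degrees, empirical_degrees))

If the distance stays small across independently plausible parameter settings, the empirical form can be characterized as the outcome of the modeled process; if not, the model is revised.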

Cederman draws from Barth (1981) and Thomas Fararo (1989) to ally himself with 'realist' social science. The term is clarified later: 'realism' is opposed to 'instrumentalism', a reference that cuts to one of the core epistemological debates in computational methods. An instrumental method, such as a machine learning ensemble, may provide a model that is very useful for purposes of prediction and control but nevertheless does not capture what's really going on in the underlying process. Realist mathematical sociology, on the other hand, attempts to capture the reality of the process generating the social phenomenon in the precise language of mathematics and computation. The underlying metaphysical point is one that many people would rather not attend to. For now, we will follow Cederman's logic to a different ontological point.

Configurative ontology. Sociological process theory requires explanations to specify the process that generates the social form observed. The entities, relations, and mechanisms may be unobserved or even unobservable. Positivists, Cederman argues, will often take the social forms to be variables themselves and undertheorize how the variables have been generated, since they care only about predicting actual outcomes. Whereas positivists study 'correlations' among elements, Simmel studies 'sociations', the interactions that result in those elements. The ontology, then, is that social forms are "configurations of social interactions and actors that together constitute the structures in which they are embedded."

In this view, variables, such as would be used in some more positivist social scientific study, "merely measure dimensions of social forms; they cannot represent the forms themselves except in very simple cases." While a variable-based analysis detaches a social phenomenon from space and time, "social forms always possess a duration in time and an extension in space."

Aside from a deep resonance with Dourish's critique of 'contextual computing' (noted above), this argument once again recalls much of what now comes under the expansive notion of 'criticism' of social sciences. Ethnomethodology and ethnography more generally are now often raised as an alternative to simplistic positivist methods. In my experience at Berkeley and exposure so far to the important academic debates, the noisiest contest is between allegedly positivist or instrumentalist (they are different, surely) quantitative methods and phenomenological ethnographic methods. Indeed, it is the latter who more often now claim the mantle of 'realism'. What is different about Cederman's case in this paper is that he is setting up a foundation for realist sociology that is nevertheless mathematized and computational.

What I am looking for in this paper, and haven’t found yet, is an account of how these ‘realist’ models of social processes are tested for their correspondence to empirical social form. Here is where I believe there is an opportunity that I have not yet seen fully engaged.


by Sebastian Benthall at April 23, 2017 05:32 PM

April 22, 2017

MIMS 2014

Coffee: Productivity Fuel? Or Just an Excuse to Leave the Office?

An organic coffee farm near Salento, Colombia

Traveling through Colombia’s coffee region, my days have been spent drooling over roasted arabica beans on organic coffee fincas, or having religious experiences while sampling the remarkable brew at some of the region’s cafes. It all made me realize that I truly am addicted to the stuff. Without at least two cups of java in the morning, I am a morose, gelatinous, dreary-eyed, delirious blob. And that got me thinking: if coffee is such a crucial input into my own productivity, what about the world at large? Are countries that drink more coffee more productive?

I am not the first person to ask this question. There is a problem, however, when it comes to relating productivity with coffee consumption. On a country level, at least, productivity is generally measured as GDP per capita, i.e. the value of goods and services provided by a country divided by its population. That means that we’re comparing coffee consumption with productivity in terms of a country’s wealth—as opposed to something else, like number of widgets produced, or the number of snaps sent per day.

Credit: freakonometrics.hypotheses.org

The issue with GDP, however, is that coffee consumption naturally grows when a country’s inhabitants are more wealthy. Thus, when we observe the positive correlation between coffee consumption per person and GDP per capita (see chart), it’s way more likely the arrow of causality is running in the other direction, i.e. wealth is driving coffee consumption, rather than the other way around.

 

So do we give up there? Not just yet. In my grand armchair theory about coffee, gains in productivity are (in part) reaped from the extra hours that the precious elixir enables us to pour into our livelihoods each day. It’s difficult to verify this theory empirically, given the issue re: comparing productivity and coffee consumption described above. Moreover, there is a separate debate over whether toiling away more hours adds or detracts from worker productivity. But setting that question aside for a moment, I wondered: do we at least observe that countries with higher coffee consumption also have workers who are more likely to burn the midnight oil at the office?

The answer is, surprisingly, not at all. There is, in fact, an unmistakably negative relationship between cups of coffee per day and the number of hours worked per person. So does this mean I need to totally flip my theory? Does coffee consumption actually make us lazier, because we're so busy taking all those coffee breaks? Just look at the Netherlands way the eff out there on the bottom right. It makes total sense, since we know exactly what those Dutch are really up to during all those coffee breaks…

In reality, the story is not so simple. When you take a closer look at the countries that form the negative trend, something becomes quite apparent. The countries in the top-left are generally less wealthy than the countries in the bottom-right. Thus, my attempt to ignore country wealth by focusing instead on hours worked was all for naught, because it seems that country wealth is once again rearing its ugly head as a lurking variable.

Just as there is a very strong relationship between wealth and the consumption of coffee, there is also a strong relationship between a country's wealth and the number of hours people work. It turns out that wealthier countries work fewer hours on average than less wealthy countries. This trend is pretty clear in the graph below—except for Singapore hanging out on the top right, slaving away but still making serious bank. You go, Singapore! Never change.

There are a lot of articles out there that try to explain why more productive countries work fewer hours (here's one). Some conclude that because workers in richer countries are more productive, they need to work less. I think this line of thinking can be potentially problematic, particularly if one equates productivity with efficiency. That could lead people to think that people in poorer countries are lazier on the job, or perhaps incompetent. But we have to remember what productivity actually means in this context. Recall from above that it is the value of goods and services a country makes divided by its population. And when we talk about value here, we are speaking in terms of how the market rewards these goods and services, not in terms of the sweat that goes into making them.

My own take is that workers in richer countries aren’t necessarily working more productively (i.e. more efficiently) than their counterparts in poorer countries, but rather that the types of jobs in richer countries on average tend to be more highly paid than in poorer countries. If you live in Vietnam and weren’t fortunate enough to have decent access to an education like your Oxford-educated friend in England, you probably won’t earn as much per hour as she will. And in order to make ends meet, you’re gonna need to put in more hours on the job.

Anyway, this is starting to veer quite a ways from coffee, and stray closer to another interest of mine, economics. The main thing to remember is that a country’s wealth has a positive influence on coffee consumption and a negative influence on the number of hours worked. Because of this complex tangle of relationships, it can be misleading to rely only on graphs that look at two variables at a time. Luckily, a statistician’s toolbox isn’t limited to scatterplots. By using linear regression, we can actually examine the relationship between coffee consumption and hours worked while controlling for the effect of a country’s wealth.

We can, in fact, control for a host of other variables we might think are important as well. For example, as a hot beverage, we might expect coffee to be less popular in countries with higher average temperatures. We might also control for region of the world as a proxy for culture, since guzzling coffee isn’t quite as big of a thing in say, India or China, as it is in the West. Those countries, for example, seem to prefer tea.

So what happens once we control for all of these variables? Well, it all depends on whether you include Singapore or not. In statistical jargon, Singapore is what is referred to as an influential observation. In other words, it’s an outlier that messes everything up if it is included in the analysis. Whatever is going on in Singapore is clearly very unique to Singapore. If we include it as part of our effort to describe a general trend, it will prove to be more of a distraction than anything else. Thus, we toss it out. Sorry Singapore. I know I said I loved you, but you gotta go. Stay golden.
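
For concreteness, here is roughly what that analysis looks like in Python with statsmodels. The file name and column names below are invented placeholders, not the actual dataset behind the charts above.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical country-level table, one row per country, with columns for
# coffee consumption, hours worked, GDP per capita, temperature, and region.
df = pd.read_csv("coffee_countries.csv")

# Drop the influential observation discussed above.
df = df[df["country"] != "Singapore"]

# Hours worked regressed on coffee consumption, controlling for wealth,
# average temperature, and region (entered as a categorical variable).
model = smf.ols(
    "hours_worked ~ cups_per_day + gdp_per_capita + avg_temp + C(region)",
    data=df,
).fit()

print(model.summary())  # the coefficient and p-value on cups_per_day are the quantities of interest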

Once Singapore is out of the picture—and we control for all the variables listed above—it turns out that coffee consumption has no statistically significant effect on the number of hours worked in a country. Thus, the answer to the title of this article is . . . Neither! On a country-level basis, coffee neither makes people work harder nor does it make them take more breaks out of the office. Sorry if that’s a boring conclusion, but don’t shoot the messenger. I’m just telling you what the numbers say.

Of course, this is not the final word on the subject. There may be more granular data out there, with consumption and productivity information recorded at a personal level (ideally as part of a randomized double-blind experiment using caffeine pills vs placebos). Such data would be much better suited to answering the question than the national-level data we’ve been looking at. But maybe you still managed to learn a thing or two about coffee, economics, or statistics in the process. Either way, it’s time for another cup of joe.


Data Links:

 


by dgreis at April 22, 2017 08:57 PM

April 15, 2017

Ph.D. student

Three possibilities of political agency in an economy of control

I wrote earlier about three modes of social explanation: functionality, which explains a social phenomenon in terms of what it optimizes; politics, which explains a social phenomenon in terms of multiple agents working to optimize different goals; and chaos, which explains a social phenomenon in terms of the happenings of chance, independent of the will of any agent.

A couple notes on this before I go on. First, this view of social explanation is intentionally aligned with mathematical theories of agency widely used in what is broadly considered ‘artificial intelligence’ research and even more broadly  acknowledged under the rubrics of economics, cognitive science, multi-agent systems research, and the like. I am willfully opting into the hegemonic paradigm here. If years in graduate school at Berkeley have taught me one pearl of wisdom, it’s this: it’s hegemonic for a reason.

A second note is that when I say "social explanation", what I really mean is "sociotechnical explanation". This is awkward, because the only reason I have to make this point is because of an artificial distinction between technology and society that exists much more as a social distinction between technologists and–what should one call them?–socialites than as an actual ontological distinction. Engineers can, must, and do constantly engage societal pressures; they must bracket off these pressures in some aspects of their work to achieve the specific demands of engineering. Socialites can, must, and do adopt and use technologies in every aspect of their lives; they must bracket these technologies in some aspects of their lives in order to achieve the specific demands of mastering social fashions. The social scientist, qua socialite who masters specific social rituals, and the technologist, qua engineer who masters a specific aspect of nature, naturally advertise their mastery as autonomous and complete. The social scholar of technology, qua socialite engaged in arbitrage between communities of socialites and communities of technologists, naturally advertises their mastery as an enlightened view over and above the advertisements of the technologists. To the extent this is all mere advertising, it is all mere nonsense. Currency, for example, is surely a technology; it is also surely an artifact of socialization as much if not more than it is a material artifact. Since the truly ancient invention of currency and its pervasiveness through the fabric of social life, there has been no society that is not sociotechnical, and there has been no technology that is not sociotechnical. A better word for the sociotechnical would be one that indicates its triviality, how it actually carries no specific meaning at all. It signals only that one has matured to the point that one disbelieves advertisements. We are speaking scientifically now.

With that out of the way…I have proposed three modes of explanation: functionality, politics, and chaos. They refer to specific distributions of control throughout a social system. The first refers to the capacity of the system for self-control. The second refers to the capacity of the components of the system for self-control. The third refers to the absence of control.

I’ve written elsewhere about my interest in the economy of control, or in economies of control, plurally. Perhaps the best way to go about studying this would be an in depth review of the available literature on information economics. Sadly, I am at this point a bit removed from this literature, having gone down a number of other rabbit holes. In as much as intellectual progress can be made by blazing novel trails through the wilderness of ideas, I’m intent on documenting my path back to the rationalistic homeland from which I’ve wandered. Perhaps I bring spices. Perhaps I bring disease.

One of the questions I bring with me is the question of political agency. Is there a mathematical operationalization of this concept? I don't know it. What I do know is that it is associated most with the political mode of explanation, because this mode of explanation allows for the existence of politics, by which I mean agents engaged in complex interactions for their individual and sometimes collective gain. Perhaps it is the emergent dynamics of individuals' shifting constitution into collectives that best captures what is interesting about politics. These collectives serve functions, surely, but what function? Is it a function with any permanence or real agency? Or is it a specious functionality, only a compromise of the agents that compose it, ready to be sabotaged by a defector at any moment?

Another question I'm interested in is how chaos plays a role in such an economy of control. There is plenty of evidence to suggest that entropy in society, far from being a purely natural consequence of thermodynamics, is a deliberate consequence of political activity. Brunton and Nissenbaum have recently given the name obfuscation to some kinds of political activity that are designed to mislead and misdirect. I believe this is not the only reason why agents in the economy of control work actively to undermine each other's control. To some extent, the distribution of control over social outcomes is zero sum. It is certainly so at the Pareto boundary of such distributions. But I posit that part of what makes economies of control interesting is that they have a non-Euclidean geometry that confounds the simple aggregations that make Pareto optimality a useful concept within it. Whether this hunch can be put persuasively remains to be seen.

What I may be able to say now is this: there is a sense in which political agency in an economy of control is self-referential, in that what is at stake for each agent is not utility defined exogenously to the economy, but rather agency defined endogenously to the economy. This gives economic activity within it a particularly political character. For purposes of explanation, this enables us to consider three different modes of political agency (or should I say political action), corresponding to the three modes of social explanation outlined above.

A political agent may concern itself with seizing control. It may take actions which are intended to direct the functional orientation of the total social system of which it is a part to be responsive to its own functional orientation. One might see this narrowly as adapting the total system's utility function to be in line with one's own, but this is to partially miss the point. It is to align the agency of the total system with one's own, or to make the total system a subsidiary to one's agency. (This demands further formalization.)

A political agent may instead be concerned with interaction with other agents in a less commanding way. I’ll call this negotiation for now. The autonomy of other agents is respected, but the political agent attempts a coordination between itself and others for the purpose of advancing its own interests (its own agency, its own utility). This is not a coup d’etat. It’s business as usual.

A political agent can also attempt to actively introduce chaos into its own social system. This is sabotage. It is an essentially disruptive maneuver. It is action aimed to cause the death of function and bring about instead emergence, which is the more positive way of characterizing the outcomes of chaos.


by Sebastian Benthall at April 15, 2017 04:24 PM

April 14, 2017

adjunct professor

DOC: No Records on Privacy Shield Removal Procedure

Back in November, I posted the Department of Commerce’s Privacy Shield checklist. The next logical step was to request DOC’s procedures for removal of companies from the Privacy Shield (submitted Dec. 1). Today, DOC-International Trade Administration responded with a “no records” response. It is not clear to me what date the search took place, and ITA is careful to say that their search did not include non-ITA Commerce elements. I’m following up on that.

by web at April 14, 2017 04:10 PM

April 12, 2017

Center for Technology, Society & Policy

Bug Bounty Programs as a Corporate Governance “Best Practice” Mechanism

by Amit Elazari Bar On, CTSP Fellow | Permalink

Originally posted on Berkeley Technology Law Journal Blog, on March 22, 2017

In an economy where data is an emerging global currency, software vulnerabilities and security breaches are naturally a major area of concern. As society produces more lines of code, and everything – from cars to sex toys – is becoming connected, vulnerabilities are produced daily.[1] Data breaches' costs are estimated at an average of $4 million for an individual breach, and $3 trillion in total. While some reports suggest lower figures, there is no debate that such vulnerabilities could result in astronomical losses if left unattended. And as we recently learned from the Cloudflare breach, data breaches are becoming more prominent and less predictable,[2] and even security companies get hacked.

In light of these developments, it is no surprise that cybersecurity has become one of the major subjects regularly discussed in board rooms. Recently, the U.S. National Association of Corporate Directors (NACD) reported that while directors do believe cyberattacks will affect their companies, many of them "acknowledge that their boards do not possess sufficient knowledge of this growing risk." These findings suggest that directors should rethink their direct legal responsibility for the losses incurred due to unattended vulnerabilities.

The legal and business risks associated with data breaches are complex, and range from FTC and other regulators' investigations[3] to M&A complications[4] and consumer class actions.[5] But usually, if executives aren't named personally in the complaint or prosecuted by regulators, such costs are borne by corporations or their cyber insurance, not by the directors or managers themselves. However, shareholders' derivative lawsuits for directors' and managers' liability are different. These suits target management personally.[6]

Experience shows that stock prices, even if influenced by the data breach, will eventually recover.[7] Yet, shareholder derivative lawsuits for directors' liability are continuously filed in cases of data breaches. In such cases, the shareholders of the company that suffered the data breach allege that by virtue of neglecting to enforce internal controls and monitor security vulnerabilities, the managers breached their fiduciary duties towards the company.

Wyndham Hotels, Home Depot and, of course, Target are just a few companies in which data breaches were followed by such directors' liability suits. More recently, Wendy's, the popular fast food restaurant chain, was hit with such a suit,[8] and now Yahoo! management is being sued by a group of shareholders for breach of fiduciary duties following their highly public data breach.[9] Until now, courts have dismissed these cases, given U.S. corporate law's high threshold under the Business Judgment Rule (BJR).[10] According to the court, directors' duty of care to monitor security vulnerabilities is satisfied by enacting a reasonable system of reporting existing vulnerabilities, and their fiduciary duty is further fulfilled by doing something, anything, with these reports.[11] The view is that the board should put a "reasonable" security plan in place, not a perfect one.[12] It's still not clear how the BJR reasonableness threshold differs from the FTC's requirement to enact reasonable security practices under Section 5(a) of the FTC Act, but at least from the Wyndham case, it seems that BJR's reasonableness threshold, when it comes to cyber, is much lower.[13]

The result is that corporate fiduciary duties are perhaps not the most effective mechanism to promote cybersecurity in the current legal environment. This is because, on the one hand, the BJR is highly deferential to any reasonable action a board might take, and on the other hand, especially in cybersecurity, reasonable actions are just not enough to provide adequate protection.

Yet, as Wong argued, even if shareholders' derivative lawsuits often fail in the data breach context, directors should still be concerned with security vulnerabilities.[14] Data breaches involve personal reputational and economic costs for management, can put board reelection at risk, and cause consumer dissatisfaction.[15] We have recently learned that Yahoo! managers were not only sued for breach of fiduciary duties,[16] but also asked to answer to a Senate Committee. Moreover, Yahoo!'s General Counsel has resigned, there were "management changes," and Marissa Mayer, Yahoo!'s CEO, didn't receive her annual bonus for 2016. All of this in addition to the $350 million drop in the Verizon-Yahoo M&A consideration price. It follows that managers and directors alike should continue to consider cybersecurity from a corporate governance perspective, but instead of focusing on minimizing liability, they should aspire to enact cybersecurity "best practices,"[17] as they do in other corporate-related areas.[18]

Introducing “Bug Bounty” Programs

As the economic, reputational and legal costs of data breaches grow rapidly, the practice of exposing cyber vulnerabilities and "bugs" has evolved from an internal quality assurance process to a booming industry: a "bug bounty economy" has emerged. Governments and companies enact vulnerability rewards programs in which they pay millions to individual security experts worldwide for performing adversarial research and exposing critical vulnerabilities that the organization's internal checks and quality assurance had missed.[19] From cutting-edge Silicon Valley companies to traditional governmental organizations such as the Pentagon and the FTC: all are beginning to understand why we need the help of friendly hackers, as we face the big battle over who controls the vulnerability market. For regulators, Bug Bounty Programs offer the advantage of employing talent they might not be able to recruit through traditional employment tracks and facilitate, as explained here, an additional cost-effective, objective monitoring system, free of hierarchies and political boundaries.[20]

The recent news about one of the biggest breaches in 2017, the Cloudflare breach (ironically termed "Cloudbleed"), discovered by Tavis Ormandy from Google's Project Zero bug-hunting team, teaches us that even a small software bug, unattended, could result in great harm. The fact that this vulnerability was eventually exposed by a bug hunter emphasizes that in cyber, as with any other source code, "given enough eyeballs, all bugs are shallow."[21] This means that if we can invite every security researcher in the world to join the "co-developer base," bugs will be discovered and fixed faster.[22] This is exactly what Bug Bounty Programs aim to do.

Bug Bounty Programs proactively invite security researchers from around the world to expose the company's vulnerabilities in exchange for monetary and, sometimes more importantly, reputational rewards. If adequate report mechanisms are in place, Bug Bounty Programs could serve as an additional security layer, an external monitoring system, and provide management and directors with essential information concerning cyber vulnerabilities. Indeed, "[b]ug bounty programs are moving from the realm of novelty towards becoming best practice"[23] – but they can also serve as a corporate governance best practice, by operating as an additional objective and independent report system for management. Naturally, this will require the company's senior management and board to become more involved in the program, to demand timely reports, and to establish direct communication channels. This is an increased standard in terms of both resources and time, but in the context of million-dollar breach damages, these preventative actions are worth the price.

Recognition of these advantages by senior management and directors will further contribute to the "bug bounty ecosystem" while strengthening companies' corporate governance practices. Bug Bounty Programs provide management with a relatively inexpensive yet effective independent monitoring system that could potentially reduce D&O liability and corporate litigation risks while boosting the overall cybersecurity safeguards of the corporation.

Notes

[1] See Why everything is hackable: Computer security is broken from top to bottom, The Economist (Apr. 7, 2017) http://www.economist.com/news/science-and-technology/21720268-consequences-pile-up-things-are-starting-improve-computer-security (explaining how technology, software development culture, economic incentives, governments' divided interests and cyber-insurance all fuel the vulnerabilities' "circus").

[2] For example, New York State Attorney General, Eric T. Schneiderman reported a 60% increase in data breaches affecting New York state residents in 2016. See Att’y Gen. Eric T. Schneiderman, A.G. Schneiderman Announces Record Number of Data Breach Notices for 2016, available at https://ag.ny.gov/press-release/ag-schneiderman-announces-record-number-data-breach-notices-2016.

[3] As of the end of 2016, the FTC brought over 60 cases related to information security against companies that were engaged in “unfair or deceptive” practices. See Fed. Trade Comm’n, Privacy & Data Sec. Update: 2016 (2016), available at https://www.ftc.gov/system/files/documents/reports/privacy-data-security-update-2016/privacy_and_data_security_update_2016_web.pdf. For a recent, comprehensive analysis of the FTC efforts in this field (and others) see Chris Jay Hoofnagle, Federal Trade Commission Privacy Law and Policy ch. 8 (2016).

[4] As the Verizon-Yahoo! deal illustrates, data breaches could result in price reductions and renegotiations of M&As. Professor Steven Davidoff Solomon was an early observer of this result of the Yahoo! breach, claiming in September 2016 that the data breach would give Verizon "significant leverage to renegotiate the price". See Steven Davidoff Solomon, How Yahoo's Data Breach Could Affect Its Deal With Verizon, N.Y. Times (Sep. 23, 2016), https://www.nytimes.com/2016/09/24/business/dealbook/how-yahoos-data-breach-could-affect-its-deal-with-verizon.html (discussing the relationship between data breaches and "material adverse change" (MAC) clauses).

[5] For example, yet another class action was filed against Yahoo! on February 7, 2017, following the major data breaches the company suffered in 2016. See Steven Trader, Yahoo Hit With Another User Class Action Over Data Breach, Law360, (Feb. 8, 2017), https://www.law360.com/articles/889685/yahoo-hit-with-another-user-class-action-over-data-breach (Ridolfo v. Yahoo Inc., case number 3:17-cv-00619).

[6] A derivative lawsuit is brought by the shareholders on behalf of the company, seeking a remedy for injury that the company incurred. It allows shareholders to police directors' and other managers' activities, but also requires, as a procedural hurdle, that shareholders first exhaust available intracorporate remedies, such as demanding that the board take action. The derivative lawsuit differs significantly from the direct shareholder suit, which seeks a remedy for injuries suffered by the shareholders themselves. See, e.g., Tooley v. Donaldson, Lufkin, & Jenrette, Inc., 845 A.2d 1031, 1033 (Del. 2004). It's noteworthy that in some cases, where breaches of fiduciary duty are not in "good faith," D&O insurance will not cover such suits and directors cannot be indemnified for their legal costs.

[7] For a more academic survey, that reached similar conclusions, see Pierangelo Rosati et al., The effect of data breach announcements beyond the stock price: Empirical evidence on market activity, 49 Int’l Rev. Fin. Analysis 146 (2017), available at http://www.sciencedirect.com/science/article/pii/S1057521917300029 (surveying 74 data breaches of U.S. publicly traded firms, from 2005 to 2014, and reaching the conclusion that there is a positive short-term effect, but a quick return to normal market activity).

[8] Graham v. Peltz, 1:16-cv-1153 (S.D. Ohio Dec. 16, 2016).

[9] It's noteworthy that this Yahoo! claim focuses on "breach of fiduciary duty arising from the non-disclosure of data security breaches to Yahoo Inc.'s customers", as opposed to failure to monitor security vulnerabilities. See Steven Trader, Yahoo Shareholders Sue Over Massive Data Breaches, Law360 (Feb. 21, 2017), https://www.law360.com/articles/893976?scroll=1 (Oklahoma Firefighters Pension and Retirement System v. Brandt, 2017-0133) (Del. Ch. Feb. 21, 2017).

[10] For a helpful review of the manner in which the directors' "duty to monitor", as articulated under Caremark, was applied in the Target and Wyndham shareholders' derivative lawsuits, see Victoria C. Wong, Cybersecurity, Risk Management, and How Boards Can Effectively Fulfill Their Monitoring Role, 15 U.C. Davis Bus. L.J. 201 (2015).

[11] See In re Home Depot S’holder Derivative Litig., 2016 U.S. Dist. LEXIS 164841, at *16 (N.D. Ga. Nov. 30, 2016) (citing Lyondell Chem. Co. v. Ryan, 970 A.2d 235, 243-44 (Del. 2009) and noting that “[u]nder Delaware law, … directors violate their duty of loyalty only ‘if they knowingly and completely failed to undertake their responsibilities’” and that “in other words, as long as the Outside Directors pursued any course of action that was reasonable, they would not have violated their duty of loyalty.”)

[12] Id. at *18.

[13] The boundaries of how the FTC reasonableness standard will be applied with respect to cyber security are still not clear, although the FTC releases statements regarding this standard. The newly initiated suit against D-link will probably shed some light in this respect. See Federal Trade Commission, FTC Charges D-Link Put Consumers’ Privacy at Risk Due to the Inadequate Security of Its Computer Routers and Cameras (Jan. 5, 2017), https://www.ftc.gov/news-events/press-releases/2017/01/ftc-charges-d-link-put-consumers-privacy-risk-due-inadequate and Federal Trade Commission, Data Security, https://www.ftc.gov/tips-advice/business-center/privacy-and-security/data-security (last visited Mar. 3, 2017)

[14] Wong, supra note 10.

[15] Id. at 211–214.

[16] See Trader, supra note 9.

[17] Id.

[18] See 1 Corporate Governance: Law & Practice § 1.03 (Amy L. Goodman & Steven M. Haas eds., 2016) (explaining that “many of the sources of guidance on corporate governance practices are not captured in rules and regulations but, rather, are set forth in statements, principles and white papers issued by bar associations, institutional investors, business groups and proxy voting advisory services, among others. These have come to be collectively referred to as recommended ‘best practices.’”).

[19] Cybersecurity Research: Addressing the Legal Barriers and Disincentives, https://www.ischool.berkeley.edu/sites/default/files/cybersec-research-nsf-workshop.pdf, at 5. See also Bugcrowd, The State of Bug Bounty Bugcrowd’s second annual report on the current state of the bug bounty economy (June 2016), available at https://pages.bugcrowd.com/hubfs/PDFs/state-of-bug-bounty-2016.pdf, at 8. A comprehensive list of bug bounty programs, enacted by leading companies such as Google and Facebook, is available here: https://hackerone.com/bug-bounty-programs.

[20] Generally, Bug Bounty Programs generate value on multiple levels: They boost companies return on investment, when comparing the cost of employing highly qualified security researchers; they facilitate recruitment and talent acquisition; they produce a reputation value; and they create a positive impact on software development lifecycle. See, e.g., Keren Elazari, How hackers can be a force for corporate good, Financial Times (Apr. 10, 2017) https://www.ft.com/content/46b9e012-1de3-11e7-b7d3-163f5a7f229c.

[21] This is Eric Raymond's famous "Linus's Law," one of the cornerstones of open source culture, coined in Eric S. Raymond, The Cathedral and the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary 19 (1999).

[22] Id. at 30.

[23] See also Jeff Stone, in an age of digital insecurity, paying bug bounties becomes the norm, The Christian Science Monitor (Aug. 12, 2016), http://www.csmonitor.com/World/Passcode/2016/0812/In-an-age-of-digital-insecurity-paying-bug-bounties-becomes-the-norm.

by Jennifer King at April 12, 2017 05:37 AM

April 08, 2017

adjunct professor

On Edward Balleisen’s Fraud: An American History from Barnum to Madoff

"…fraud is endemic to modern capitalism," so said Professor Edward Balleisen at a National History Center talk on his excellent, comprehensive, thoughtful Fraud: An American History from Barnum to Madoff. We need histories of consumer protection. Balleisen provides one such history, focusing on the idea of fraud—specifically frauds wrought by businesses against consumers and investors. The concept of "fraud" is complex; it is defined differently through disciplinary lenses, and when we think about FTC privacy work and many other consumer protection efforts, we are addressing conduct that is different from Balleisen's focus. Yet, Balleisen's book offers lessons for consumer protection more broadly and I learned a great deal from it.

Balleisen’s observation of the policy pendulum of anti-fraud efforts is most clearly stated on page 309, and anyone involved in modern debates on the FTC will recognize it:

Forceful antifraud tactics tended to generate complaints about autocratic governance that ran roughshod over individual rights and American values, which then prompted adoption of procedural protections, which in turn limited the effectiveness of administrative remedies. Post–World War II proceduralism deepened the democratic legitimacy of antifraud regulation, but at the cost of extending the rights of accused businesses, whether in criminal or administrative contexts.

My copy of Balleisen’s book is heavily marked up. So here are two key questions answered by the book and some other reflections–

Why, despite our rich information environment and seemingly greater accountability brought about by technology and institutions, do frauds still persist, largely in five basic forms (pump and dump, pyramid scheme, bait and switch, advance-fee fraud, control fraud)?

  • There are businesses committed to fraud. The proceduralism described by Balleisen allowed committed fraudsters (Holland Furnace, Fritzel Television) to slow down intervention.
  • Committed fraudsters keep a "squawk" fund to "cool off the mark" by paying the consumers who do complain.
  • Especially in areas where products/services are new and norms do not yet exist, new market entrants have more space for deception.
  • Concerns about the pace of innovation and creating breathing room for it make tolerance for fraud a part of a dynamic economy.
  • A turn to individualism in the 1970s caused institutions such as the BBB to embrace squawk fund approaches—instead of pursuing big, collective actions, BBB started remedying individual claims, thus leaving the target free to continue operations.
  • Frauds are often small scale and your typical collective action problems emerge in policing them (daunting costs of representation, limited recovery, risk of countersuit or retaliation, embarrassment, and the problem of “unclean hands”).
  • Information asymmetry still exists!
  • Fraudsters can take advantage of the biases and heuristic reasoning approaches that most of us use.
    • We are strongly moved by forms of social proof over more objective evidence.
    • We are overconfident, especially when we have a little knowledge of a subject. There is the problem that many of us cannot recognize our own incompetence (the Dunning-Kruger effect).
    • We reason through “available” examples—easily recallable fraud events. As old frauds (such as the lightning rod sales of the last century) are interdicted, we forget about them and their lessons.
    • We are vulnerable to anchoring, which skews our perception of price.
    • We are loss averse—and so when we anchor to a price, we act impulsively to capture discounts from the anchored price.
    • We are not good at separating bundles, and so sellers that engage in bundling can influence our perception of value (act now and get not one, but two non-stick pans!).
    • We are optimistic.
  • Gullibility, dreams of quickly-acquired wealth.
  • Only a small number of people need to fall for a fraud for the enterprise to be successful.
  • The Holder in Due Course doctrine—before the FTC obliterated it in the 1970s, the ability of a seller to transfer a debt obligation to a third party created intense incentives for fraudulent sales.
  • On some level, we admire the guile of fraudsters—think about our centuries-long fascination with stories such as Reynard. The OED has over 300 words to describe deception, deceit, and trickery.
  • And there are many, many ways of cheating. Balleisen covers the many ways 19th century companies defrauded each other—wetting cotton to make it heavier, enclosing a low-value product within an envelope of high-quality material, and so on.
  • We are unwilling to criminally prosecute many consumer frauds, and when we do, convicted defendants receive laughably small sentences in light of the scale of their thefts.
  • On some level, we resent victims of fraud, and suspect that victims were somehow complicit in the scheme. The OED has 200 words for dupes.

Related to the above, what are the tensions/tactics that enable fraud today?

  • Product complexity. Complexity makes quality assessment difficult, leading us to fall back upon easily-manipulated signals, such as social proof.
    • This is, by the way, one reason why I think institutions such as Yelp will aid consumer protection little. Yelp—and even the BBB—are easily manipulated. There are even services that will do it for you, just like buying “puffs” from a 19th century newspaperman.
  • Economic complexity. As our economy becomes more complex, we have to rely and trust people we do not know—even people not in our own country.
  • Agreement complexity. Many consumers cannot explain basic terms such as compound interest.
  • Corporate secrecy.
  • The ability to quickly incorporate.
  • Being able to acquire the “trappings of success.” Ponzi was known to have bought the most expensive car in production—merely possessing it offered proof of his legitimacy. Balleisen shows other examples—the importance to fraudsters of claiming a prestigious address, of having been in operation for many years, of having trademarks or other signals of brand.
  • Disclosure pollution. If a regulatory regime requires disclosure of some fact pointing to a problem, “pollute” the communication by making tons and tons of disclosures. I suspect that drug companies do this with side effects of prescription medicines.

Some final reflections–

I was surprised to learn of the historical vigor of the Better Business Bureau. I’ve long thought it to be not the most agile or effective institution. But Balleisen recounts decades when it was a serious force for consumer protection enforcement. In its heyday, it was a key actor in big fraud investigations, and it assisted public authorities in prosecutions. Balleisen shows how a conservative faction asserted control over its priorities, defanged it, and in the process, made it slouch into a kind of arbitration service for individual claims, and an opponent of anything but self-regulatory approaches. Some of the problems that Balleisen depicts in the 1970s takeover, such as adverse selection in BBB membership, replicated themselves in the self-regulatory regimes for the internet.

Thoughts of “fraud” conjure images of Ponzi and Madoff. Conservatives and liberals alike disapprove of fraud as such. A problem that arises is that we use the same institutions and laws to pursue pure fraudsters as we do companies that do not live up to their advertising promises. This brand of FTC target sees himself as an honest businessman, not to be painted with the same brush as hucksters. Balleisen gives the historical example of Macy’s and its promise that all of its prices were 6% lower than competitors’—we know that this claim cannot be true in all situations. Macy’s saw deviance from the 6% target as just an imperfection that did not amount to deception or wrongdoing. Today, when companies like LabMD react viscerally to FTC intervention, they act out just as their forebears did. They rightly see themselves as honest businesses—why is the federal government breathing down their necks? Businesses that read the situation that way always do the same thing—they accuse the FTC of pinkoism and of standing on an insecure constitutional foundation. Balleisen’s point is that such challenges introduce more and more proceduralism, but they rarely limit the substantive authorities of consumer protection institutions.

Balleisen’s book does not end with a bang. He adheres to the idea that there is no “silver bullet” for fraud, that many institutions and legal tools are needed to contain it, and that prevention (incentives for truthfulness, public education, consumer-friendly defaults) should be the strategy rather than ex post remedy. He does carefully present the conservative reaction to the FTC but seems unconvinced of its cogency, or perhaps unconvinced that the critiques justify dismantling new institutions.

by web at April 08, 2017 11:41 PM

April 05, 2017

MIDS student

Privacy matters of nations … part 1

Disclaimer: This blog has excerpts from a paper that I wrote as part of an amazing course during my master’s at UC Berkeley. Prof. Nathan Good, you are an inspiration!

In this age of omnipresent, pervasive technologies, the privacy of individuals has been a focal point for various policies and laws worldwide. It is time to look at privacy concerns from the eyes of an aggregate such as a nation. Espionage has been a disturbing reality since time immemorial. Do nations have a right to privacy against such intrusive “eyes”? If so, are there any guidelines or frameworks for defining privacy and rules of conduct at the level of a nation? If not, is it worth a discussion? To be clear, I do not focus on cases of surveillance by a country of its own citizens.

As I have pointed out in my earlier blogs, and as you may have encountered yourself, privacy violations of individuals have almost become the norm nowadays. This is evident from the number of laws and guidelines in place in the US alone, covering multiple fields, that protect individuals’ privacy—the Belmont principles and the Children’s Online Privacy Protection Act of 1998, to name a few.

Privacy at an aggregate level

These principles can be extended to higher aggregates such as a family unit. For example, the concerns raised in Google’s “Wi-Fi Sniffing Debacle” were linked to the tracking of the wi-fi payloads of various homes as the Street View cars were being driven around. The payload was linked to the computer and not necessarily to an individual. The Federal Communications Commission made references to the federal Electronic Communications Privacy Act (ECPA) in its report. Similar concerns were raised elsewhere in the world in relation to this unconsented collection of data. Another incident that highlighted the concerns for addressing family-level privacy was the famous HeLa genome study. Henrietta Lacks was a woman from Baltimore suffering from cervical cancer. Her cells were taken in 1951 without her consent. Scientists have since been studying her genome sequences to solve some challenging medical problems. By publishing the genome sequence of her cells, the scientists had inadvertently advertised this private aspect of everyone connected to Henrietta by genes, i.e., her family. The study had to be taken down when it became clear that the family’s consent had not been sought. These cases highlight the fact that the guidelines that protect individuals can also be used as guiding principles in the context of families as a unit.

As a next level of aggregation, we look at society as a unit. Society, as a concept, can be quite ambiguous. We assume that any group of people bound together by a common thread—residents of a given neighbourhood, consumers of a certain product, etc.—can be thought of as belonging to a society. For example, in the case of the data breach at the website Ashley Madison, the whole user group’s privacy (or, in this case, secrecy) was at stake. Hackers had threatened to release private information about many of its users unless the website was shut down. While this was related to the personally identifiable information of each individual, the issue escalated drastically because it affected a majority of the website’s 36 million users. The Privacy Commissioner of Canada stated that the Toronto-based company had in fact breached many privacy laws in Canada and elsewhere. Thus, any privacy violation that is not specific to one particular individual but to a much larger group of which the individual is a member is also looked at through the lens of the same privacy laws. There are many other instances of “us vs. the nosy corporates” that have been discussed recently. For example, due to the privacy setup and the inherent nature of the product, the location of all users of Foursquare can be tracked in real time. Additionally, the concepts of society and privacy are quite intertwined, as pointed out by sociologist Barrington Moore: “the need for privacy is a socially created need. Without society there would be no need for privacy.” (Barrington Moore, Jr., Privacy: Studies in Social and Cultural History, 1984). As an interesting observation, Dan Solove states, “Society is fraught with conflict and friction. Individuals, institutions, and governments can all engage in activities that have problematic effects on the lives of others.”

Let us now turn our attention to the next higher level of aggregation – nations. There are certain questions that arise in relation to this aggregation such as,

  • As in the case of family and society, can we assume that the principles behind defining and protecting the privacy of individuals can be as easily applied to the privacy concerns of a nation as a unit?
  • Are the concerns related to a nation’s privacy the same as those of an individual?
  • Are the threats to a nation’s privacy different in form and intent from those we looked at earlier?

Definition of “privacy of a nation”

It is difficult to provide an all-encompassing single definition of privacy even at the level of an individual. Thus, it is no surprise that defining such an “abstract” concept in reference to a nation as a unit becomes even harder. This is especially so because we deal here with a new class of data which is neither private nor public but classified. However, in order to understand the concept, we will start by drawing references and analogies from studies of individual privacy. As Dan Solove puts it, “Privacy seems to be about everything, and therefore it appears to be nothing.”

Privacy Harms

The concept of privacy, in reference to a nation, works on the philosophy of secrecy and the ability to create an autonomous decision-making zone. These, in turn, have been equated to national security, economic development, and social stability. The secrecy philosophy, as described by Dan Solove, holds that privacy is violated if there is a public disclosure of previously concealed information. The “Taxonomy of Privacy Harms” was propounded by Dan Solove to bring forth the kinds of privacy-related harms that people are trying to avoid. The harms hold true as-is if the data subject in the diagram is a nation. For example, some of the possible privacy harms at various stages are as follows:

  • For the nation subjected to invasion: interference in decision making; intrusion
  • For the stage of information processing: secondary use of the information, exclusion, aggregation, etc.
  • For the stage of information dissemination: breach of confidentiality, blackmail, disclosure, exposure

In the next part I will look at one of the biggest threats to the privacy of any nation – Espionage.


by arvinsahni at April 05, 2017 01:54 PM

April 03, 2017

Ph.D. student

Using python to explore Wikipedia pageview data for all current members of the U.S. Congress


Using python to explore Wikipedia pageview data for all current members of the U.S. Congress

By Stuart Geiger (@staeiou, User:Staeiou), licensed under the MIT license

Did you know that Wikipedia has been tracking aggregate, anonymized, hourly data about the number of times each page is viewed? There are data dumps, an API, and a web tool for exploring small sets of pages (see this blog post for more on those three). In this notebook, I show how to use python to get data on hundreds of pages at once -- every current member of the U.S. Senate and House of Representatives.

Libraries

We're using mwviews for getting the pageview data, pandas for the dataframe, and seaborn/matplotlib for plotting. pywikibot is in here because I tried to use it to get titles programmatically, but gave up.

In [1]:
!pip install mwviews pywikibot seaborn pandas
Requirement already satisfied: mwviews in /home/staeiou/anaconda3/lib/python3.6/site-packages
Requirement already satisfied: pywikibot in /home/staeiou/anaconda3/lib/python3.6/site-packages
Requirement already satisfied: seaborn in /home/staeiou/anaconda3/lib/python3.6/site-packages
Requirement already satisfied: pandas in /home/staeiou/anaconda3/lib/python3.6/site-packages
Requirement already satisfied: requests in /home/staeiou/anaconda3/lib/python3.6/site-packages (from mwviews)
Requirement already satisfied: futures in /home/staeiou/anaconda3/lib/python3.6/site-packages (from mwviews)
Requirement already satisfied: httplib2>=0.9 in /home/staeiou/anaconda3/lib/python3.6/site-packages (from pywikibot)
Requirement already satisfied: python-dateutil>=2 in /home/staeiou/anaconda3/lib/python3.6/site-packages (from pandas)
Requirement already satisfied: pytz>=2011k in /home/staeiou/anaconda3/lib/python3.6/site-packages (from pandas)
Requirement already satisfied: numpy>=1.7.0 in /home/staeiou/anaconda3/lib/python3.6/site-packages (from pandas)
Requirement already satisfied: six>=1.5 in /home/staeiou/anaconda3/lib/python3.6/site-packages (from python-dateutil>=2->pandas)
In [2]:
import mwviews
import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
sns.set(font_scale=1.5)

Data

The .txt files are manually curated lists of titles, based on first copying and pasting the columns displaying the names of the members of Congress at List_of_United_States_Senators_in_the_115th_Congress_by_seniority and List_of_United_States_Representatives_in_the_115th_Congress_by_seniority. Then each of the article links was manually examined to make sure they match the linked page, and updated if, for example, the text said "Dan Sullivan" but the article was at "Dan Sullivan (U.S. Senator)". Much thanks to Amy Johnson who helped curate these lists.

I tried programmatically getting lists of all current members of Congress; my failed attempts can be found at the end.

The files have one title per line, so we read it in and split it into a list with .split("\n")

In [3]:
with open("senators.txt") as f:
    sen_txt = f.read()

sen_list = sen_txt.split("\n")
In [4]:
sen_list[0:5]
Out[4]:
['Richard Shelby',
 'Luther Strange',
 'Lisa Murkowski',
 'Dan Sullivan (U.S. Senator)',
 'John McCain']

Checking the length of the list, we see it has 100, which is good!

In [5]:
len(sen_list)
Out[5]:
100

We do the same with the house list, and we get 431 because there are currently some vacancies.

In [6]:
with open("house_reps.txt") as f:
    house_txt = f.read()
    
house_list = house_txt.split("\n")
In [7]:
house_list[0:5]
Out[7]:
['Bradley Byrne', 'Martha Roby', 'Mike Rogers', 'Robert Aderholt', 'Mo Brooks']
In [8]:
len(house_list)
Out[8]:
431
In [ ]:
 

Querying the pageviews API

mwviews makes it much easier to query the pageviews API, so we don't have to directly call the API. We can also pass in a (very long!) list of pages to get data. We get back a nice JSON formatted response, which pandas can convert to a dataframe without any help.

The main way to interact via mwviews is the PageviewsClient object, which we will create as p for short.

In [9]:
from mwviews.api import PageviewsClient

p = PageviewsClient()

When we query the API for the view data, we can set many variables in p.article_views(). We pass in sen_list as our list of articles. Granularity can be monthly or daily, and start and end dates are formatted as YYYYMMDDHH. You have to include precise start and end dates by the hour, and it will not give super helpful error messages if you do things like set your end date before your start date or things like that. And also know that the pageview data only goes back a few years.

In [10]:
sen_views = p.article_views(project='en.wikipedia', 
                            articles=sen_list, 
                            granularity='monthly', 
                            start='2016040100', 
                            end='2017033123')

sen_df = pd.DataFrame(sen_views)

If we peek at the first five rows and columns in the dataframe, we see it is formatted with one row per page, and one column per month:

In [11]:
sen_df.ix[0:5, 0:5]
Out[11]:
2016-04-01 00:00:00 2016-05-01 00:00:00 2016-06-01 00:00:00 2016-07-01 00:00:00 2016-08-01 00:00:00
Al_Franken 43087.0 66366.0 53539.0 143641.0 37679.0
Amy_Klobuchar 19740.0 16663.0 19394.0 36931.0 10618.0
Angus_King 13951.0 13341.0 16458.0 16043.0 15773.0
Ben_Cardin 7733.0 5532.0 7198.0 6656.0 6384.0
Ben_Sasse 9943.0 78686.0 22201.0 21502.0 11996.0

We transpose this (switching rows and columns), then set the index of each row to a more readable string, Year-Month:

In [12]:
sen_df = sen_df.transpose()
sen_df = sen_df.set_index(sen_df.index.strftime("%Y-%m")).sort_index()
sen_df.ix[0:5, 0:5]
Out[12]:
Al_Franken Amy_Klobuchar Angus_King Ben_Cardin Ben_Sasse
2016-04 43087.0 19740.0 13951.0 7733.0 9943.0
2016-05 66366.0 16663.0 13341.0 5532.0 78686.0
2016-06 53539.0 19394.0 16458.0 7198.0 22201.0
2016-07 143641.0 36931.0 16043.0 6656.0 21502.0
2016-08 37679.0 10618.0 15773.0 6384.0 11996.0

We can get the sum for each page by running .sum(), and we can peek into the first five pages:

In [13]:
sen_sum = sen_df.sum()
sen_sum[0:5]
Out[13]:
Al_Franken       1400454.0
Amy_Klobuchar     340114.0
Angus_King        281545.0
Ben_Cardin        135774.0
Ben_Sasse         384434.0
dtype: float64

We can get the sum for each month by transposing back and running .sum() on the dataframe:

In [14]:
sen_monthly_sum = sen_df.transpose().sum()
sen_monthly_sum
Out[14]:
2016-04    3931109.0
2016-05    3493508.0
2016-06    3358614.0
2016-07    6661905.0
2016-08    2012990.0
2016-09    2000842.0
2016-10    3647561.0
2016-11    6361233.0
2016-12    2352725.0
2017-01    5803284.0
2017-02    4912876.0
2017-03    3882319.0
dtype: float64

And we can get the sum of all the months from 2016-04 to 2017-03 by summing the monthly sum, which gives us 48.4 million pageviews:

In [15]:
sen_monthly_sum.sum()
Out[15]:
48418966.0

We can use the built-in plotting functionality in pandas dataframes to show a monthly plot. You can adjust kind to be many types, including bar, line, and area.

In [16]:
fig = plt.figure()
fig.suptitle("Monthly Wikipedia pageviews for current U.S. Senators")
plt.ticklabel_format(style = 'plain')

ax = sen_monthly_sum.plot(kind='barh', figsize=[12,6])
ax.set_xlabel("Monthly pageviews")
ax.set_ylabel("Month")
Out[16]:
<matplotlib.text.Text at 0x7faf78d4b320>

The House

We do the same thing for the House of Representatives, only with different variables. Recall that house_list is our list of titles:

In [17]:
house_list[0:5]
Out[17]:
['Bradley Byrne', 'Martha Roby', 'Mike Rogers', 'Robert Aderholt', 'Mo Brooks']
In [18]:
house_views = p.article_views(project='en.wikipedia', 
                              articles=house_list, 
                              granularity='monthly', 
                              start='2016040100', 
                              end='2017033123')
                              
house_df = pd.DataFrame(house_views)
house_df.ix[0:5, 0:5]
Out[18]:
2016-04-01 00:00:00 2016-05-01 00:00:00 2016-06-01 00:00:00 2016-07-01 00:00:00 2016-08-01 00:00:00
Adam_Kinzinger 6579.0 10515.0 12002.0 7217.0 22613.0
Adam_Schiff 6541.0 6649.0 12993.0 7501.0 4760.0
Adam_Smith_(politician) 2712.0 2400.0 2770.0 2939.0 2458.0
Adrian_Smith_(politician) 1368.0 1295.0 1285.0 1151.0 1432.0
Adriano_Espaillat 1296.0 1061.0 5591.0 5360.0 1729.0
In [19]:
house_df = house_df.transpose()
house_df = house_df.set_index(house_df.index.strftime("%Y-%m")).sort_index()
house_df.ix[0:5, 0:5]
Out[19]:
Adam_Kinzinger Adam_Schiff Adam_Smith_(politician) Adrian_Smith_(politician) Adriano_Espaillat
2016-04 6579.0 6541.0 2712.0 1368.0 1296.0
2016-05 10515.0 6649.0 2400.0 1295.0 1061.0
2016-06 12002.0 12993.0 2770.0 1285.0 5591.0
2016-07 7217.0 7501.0 2939.0 1151.0 5360.0
2016-08 22613.0 4760.0 2458.0 1432.0 1729.0
In [20]:
house_sum = house_df.sum()
house_sum[0:5]
Out[20]:
Adam_Kinzinger               162674.0
Adam_Schiff                  406908.0
Adam_Smith_(politician)       40274.0
Adrian_Smith_(politician)     19851.0
Adriano_Espaillat             67980.0
dtype: float64
In [21]:
house_monthly_sum = house_df.transpose().sum()
house_monthly_sum
Out[21]:
2016-04    1727960.0
2016-05    1940369.0
2016-06    1983199.0
2016-07    3009143.0
2016-08    1644636.0
2016-09    1609682.0
2016-10    2558133.0
2016-11    5095820.0
2016-12    2408666.0
2017-01    4190713.0
2017-02    3905450.0
2017-03    5931667.0
dtype: float64
In [22]:
house_monthly_sum.sum()
Out[22]:
36005438.0

This gives us 36 million total pageviews for House reps.

In [23]:
fig = plt.figure()
fig.suptitle("Monthly Wikipedia pageviews for current U.S. House of Representatives")
plt.ticklabel_format(style = 'plain')

ax = house_monthly_sum.plot(kind='barh', figsize=[12,6])
ax.set_xlabel("Monthly pageviews")
ax.set_ylabel("Month")
Out[23]:
<matplotlib.text.Text at 0x7faf5ff04518>

Combining the datasets

We have to transpose each dataset back, then append one to the other:

In [24]:
congress_df = house_df.transpose().append(sen_df.transpose())
congress_df.ix[0:10,0:10]
Out[24]:
2016-04 2016-05 2016-06 2016-07 2016-08 2016-09 2016-10 2016-11 2016-12 2017-01
Adam_Kinzinger 6579.0 10515.0 12002.0 7217.0 22613.0 6846.0 6869.0 19077.0 14200.0 18531.0
Adam_Schiff 6541.0 6649.0 12993.0 7501.0 4760.0 5068.0 8318.0 13906.0 17191.0 20320.0
Adam_Smith_(politician) 2712.0 2400.0 2770.0 2939.0 2458.0 2802.0 2841.0 4745.0 3301.0 5510.0
Adrian_Smith_(politician) 1368.0 1295.0 1285.0 1151.0 1432.0 1363.0 2004.0 2292.0 1481.0 1967.0
Adriano_Espaillat 1296.0 1061.0 5591.0 5360.0 1729.0 2754.0 2017.0 11937.0 4421.0 15559.0
Al_Green_(politician) 4527.0 3047.0 3243.0 3141.0 2028.0 2878.0 2915.0 3549.0 2382.0 6651.0
Al_Lawson 30.0 36.0 34.0 68.0 479.0 1070.0 1185.0 3856.0 2326.0 4971.0
Alan_Lowenthal 2164.0 2151.0 2575.0 1760.0 1597.0 1455.0 2278.0 3401.0 1985.0 3135.0
Albio_Sires 2348.0 2126.0 2467.0 1960.0 1679.0 3582.0 2483.0 4875.0 1993.0 3175.0
Alcee_Hastings 4795.0 5958.0 5533.0 9017.0 4581.0 4075.0 4711.0 6982.0 3866.0 8475.0
In [ ]:
 
In [25]:
congress_monthly_sum = congress_df.sum()
congress_monthly_sum
Out[25]:
2016-04     5659069.0
2016-05     5433877.0
2016-06     5341813.0
2016-07     9671048.0
2016-08     3657626.0
2016-09     3610524.0
2016-10     6205694.0
2016-11    11457053.0
2016-12     4761391.0
2017-01     9993997.0
2017-02     8818326.0
2017-03     9813986.0
dtype: float64

Then to find the total pageviews, run sum on the sum. This is 84.4 million pageviews from April 2016 to March 2017 for all U.S. Members of Congress:

In [26]:
congress_monthly_sum.sum()
Out[26]:
84424404.0
In [27]:
fig = plt.figure()
fig.suptitle("Monthly Wikipedia pageviews for current U.S. Members of Congress")
plt.ticklabel_format(style = 'plain')

ax = congress_monthly_sum.plot(kind='barh', figsize=[12,6])

ax.set_xlabel("Monthly pageviews")
ax.set_ylabel("Month")
Out[27]:
<matplotlib.text.Text at 0x7faf8a2133c8>

Plotting a single page's views over time

We can query the dataframe by index for a specific page, then plot it:

In [31]:
fig = plt.figure()
fig.suptitle("Monthly Wikipedia pageviews for Al Lawson")
plt.ticklabel_format(style = 'plain')

ax = congress_df.ix['Al_Lawson'].plot(kind='barh')

ax.set_xlabel("Monthly pageviews")
ax.set_ylabel("Month")
Out[31]:
<matplotlib.text.Text at 0x7faf5295f1d0>

Output data

We will export these to a folder called data, in csv and excel formats:

In [32]:
house_df.to_csv("data/house_views.csv")
house_df.to_excel("data/house_views.xlsx")

sen_df.to_csv("data/senate_views.csv")
sen_df.to_excel("data/senate_views.xlsx")
In [ ]:
 

Old code for trying to programmatically get lists of members of Congress

In [33]:
# used to stop "Restart and run all" execution 

assert False is True
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-33-20e02078d1a5> in <module>()
      1 # used to stop "Restart and run all" execution
      2 
----> 3 assert False is True

AssertionError: 
In [ ]:
import pywikibot

site = pywikibot.Site(code="en")
site.login()
In [ ]:
 
In [ ]:
rep_page = pywikibot.Page(site, title="List_of_United_States_Representatives_in_the_115th_Congress_by_seniority")
In [ ]:
rep_list = []
for page in rep_page.linkedPages():
    has_from_cat = False
    has_births_cat = False
    #print(page.title())
    for category in page.categories():
        #print("\t", category.title())
        if category.title().find("Category:Members of the United States House of Representatives from") >= 0:
            has_from_cat = True
        if category.title().find("births") >= 0:
            has_births_cat = True
        if has_births_cat & has_from_cat:
            rep_list.append(page.title())
            break
In [ ]:
senate_list = []
for page in rep_page.linkedPages():
    has_from_cat = False
    has_births_cat = False
    #print(page.title())
    for category in page.categories():
        #print("\t", category.title())
        if category.title().find("United States Senators") >= 0:
            has_from_cat = True
        if category.title().find("births") >= 0:
            has_births_cat = True
        if has_from_cat:
            senate_list.append(page.title())
            break
            

by stuart at April 03, 2017 07:00 AM

April 01, 2017

Ph.D. student

Varela’s modes of explanation and the teleonomic

I’m now diving deep into Francisco Varela’s Principles of Biological Autonomy (1979). Chapter 8 draws on his paper with Maturana, “Mechanism and biological explanation” (1972) (html). Chapter 9 draws heavily from his paper, “Describing the Logic of the Living: adequacies and limitations of the idea of autopoiesis” (1978) (html).

I am finding this work very enlightening. Somehow it bridges between my interests in philosophy of science right into my current work on privacy by design. I think I will find a way to work this into my dissertation after all.

Varela has a theory of different modes of explanation of phenomena.

One form of explanation is operational explanation. The categories used in these explanations are assumed to be components in the system that generated the phenomena. The components are related to each other in a causal and lawful (nomic) way. These explanations are valued by science because they are designed so that observers can best predict and control the phenomena under study. This corresponds roughly to what Habermas identifies as technical knowledge in Knowledge and Human Interests. In an operational explanation, the ideas of purpose or function have no explanatory value; rather the observer is free to employ the system for whatever purpose he or she wishes.

Another form of explanation is symbolic explanation, which is a more subtle and difficult idea. It is perhaps better associated with phenomenology and social scientific methods that build on it, such as ethnomethodology. Symbolic explanations, Varela argues, are complementary to operational explanations and are necessary for a complete description of “living phenomenology”, which I believe Varela imagines as a kind of observer-inclusive science of biology.

To build up to his idea of the symbolic explanation, Varela first discusses an earlier form of explanation, now out of fashion: teleological explanation. Teleological explanations do not support manipulation, but rather “understanding, communication of intelligible perspective in regard to a phenomenal domain”. Understanding the “what for” of a phenomenon, what its purpose is, does not tell you how to control the phenomenon. While it may help regulate one’s expectations, Varela does not see this as its primary purpose. Communicability motivates teleological explanation. This resonates with Habermas’s idea of hermeneutic knowledge, what is accomplished through intersubjective understanding.

Varela does not see these modes of explanation as exclusive. Operational explanations assume that “phenomena occur through a network of nomic (lawlike) relationships that follow one another. In the symbolic, communicative explanation the fundamental assumption is that phenomena occur through a certain order or pattern, but the fundamental focus of attention is on certain moments of such an order, relative to the inquiring community.” But these modes of explanation are fundamentally compatible.

“If we can provide a nomic basis to a phenomenon, an operational description, then a teleological explanation only consists of putting in parenthesis or conceptually abbreviating the intermediate steps of a chain of causal events, and concentrating on those patterns that are particularly interesting to the inquiring community. Accordingly, Pittendrich introduced the term teleonomic to designate those teleological explanations that assume a nomic structure in the phenomena, but choose to ignore intermediate steps in order to concentrate on certain events (Ayala, 1970). Such teleologic explanations introduce finalistic terms in an explanation while assuming their dependence in some nomic network, hence the name teleo-nomic.”

A symbolic explanation that is consistent with operational theory, therefore, is a teleonomic explanation: it chooses to ignore some of the operations in order to focus on relationships that are important to the observer. There are coherent patterns of behavior which the observer chooses to pay attention to. Varela does not use the word ‘abstraction’, though as a computer scientist I am tempted to. Varela’s domains of interest, however, are complex physical systems often represented as dynamic systems, not the kind of well-defined chains of logical operations familiar from computer programming. In fact, one of the upshots of Varela’s theory of the symbolic explanation is a criticism of naive uses of “information” in causal explanations that are typical of computer scientists.

“This is typical in computer science and systems engineering, where information and information processing are in the same category as matter and energy. This attitude has its roots in the fact that systems ideas and cybernetics grew in a technological atmosphere that acknowledged the insufficiency of the purely causalistic paradigm (who would think of handling a computer through the field equations of thousands of integrated circuits?), but had no awareness of the need to make explicit the change in perspective taken by the inquiring community. To the extent that the engineering field is prescriptive (by design), this kind of epistemological blunder is still workable. However, it becomes unbearable and useless when exported from the domain of prescription to that of description of natural systems, in living systems and human affairs.”

This form of critique makes its way into a criticism of artificial intelligence by Winograd and Flores, presumably through the Chilean connection.


by Sebastian Benthall at April 01, 2017 12:05 AM

March 28, 2017

Ph.D. student

More assessment of AI X-risk potential

I’ve been stimulated by Luciano Floridi’s recent article in Aeon, “Should we be afraid of AI?”. I’m surprised that this issue hasn’t been settled yet, since it seems like “we” have the formal tools necessary to solve the problem decisively. But nevertheless this appears to be the subject of debate.

I was referred to Kaj Sotala’s rebuttal of an earlier work by Floridi on which his Aeon article was based. The rebuttal appears in this APA Newsletter on Philosophy and Computers. It is worth reading.

The issue that I’m most interested in is whether or not AI risk research should constitute a special, independent branch of research, or whether it can be approached just as well by pursuing a number of other more mainstream artificial intelligence research agendas. My primary engagement with these debates has so far been an analysis of Nick Bostrom’s argument in his book Superintelligence, which tries to argue in particular that there is an existential risk (or X-risk) to humanity from artificial intelligence. “Existential risk” means a risk to the existence of something, in this case humanity. And the risk Bostrom has written about is the risk of eponymous superintelligence: an artificial intelligence that gets smart enough to improve its own intelligence, achieve omnipotence, and end the world as we know it.

I’ve posted my rebuttal to this argument on arXiv. The one-sentence summary of the argument is: algorithms can’t just modify themselves into omnipotence because they will hit performance bounds due to data and hardware.

A number of friends have pointed out to me that this is not a decisive argument. They say: don’t you just need the AI to advance fast enough and far enough to be an existential threat?

There are a number of reasons why I don’t believe this is likely. In fact, I believe that it is provably vanishingly unlikely. This is not to say that I have a proof, per se. I suppose it’s incumbent on me to work it out and see if the proof is really there.

So: Herewith is my Sketch Of A Proof of why there’s no significant artificial intelligence existential risk.

Lemma: Intelligence advances due to purely algorithmic self-modification will always plateau due to data and hardware constraints, which advance more slowly.

Proof: This paper.

As a consequence, all artificial intelligence explosions will be sigmoid. That is, starting slow, accelerating, then decelerating, then growing so slowly as to be asymptotic. Let’s call the level of intelligence at which an explosion asymptotes the explosion bound.

There’s empirical support for this claim. Basically, we have never had a really big intelligence explosion due to algorithmic improvement alone. Looking at the impressive results of the last seventy years, most of the impressiveness can be attributed to advances in hardware and data collection. Notoriously, Deep Learning is largely just decades-old artificial neural network technology repurposed to GPUs on the cloud. Which is awesome and a little scary. But it’s not an algorithmic intelligence explosion. It’s a consolidation of material computing power and sensor technology by organizations. The algorithmic advances fill those material shoes really quickly, it’s true. This is precisely the point: it’s not the algorithms that are the bottleneck.

Observation: Intelligence explosions are happening all the time. Most of them are small.

Once we accept the idea that intelligence explosions are all bounded, it becomes rather arbitrary where we draw the line between an intelligence explosion and some lesser algorithmic intelligence advance. There is a real sense in which any significant intelligence advance is a sigmoid expansion in intelligence. This would include run-of-the-mill scientific discoveries and good ideas.

If intelligence explosions are anything like virtually every other interesting empirical phenomenon, then they are distributed according to a heavy tail distribution. This means a distribution with a lot of very small values and a diminishing probability of higher values that nevertheless assigns some probability to very high values. Assuming intelligence is something that can be quantified and observed empirically (a huge ‘if’ taken for granted in this discussion), we can (theoretically) take a good hard look at the ways intelligence has advanced. Look around you. Do you see people and computers getting smarter all the time, sometimes in leaps and bounds but most of the time minutely? That’s a confirmation of this hypothesis!

The big idea here is really just to assert that there is a probability distribution over intelligence explosion bounds from which all actual intelligence explosions are being drawn. This follows more or less directly from the conclusion that all intelligence explosions are bounded. Once we posit such a distribution, it becomes possible to take expected values of functions of its values.
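As a purely illustrative sketch of what that means (my own toy example, with an arbitrary distributional form and parameters, not anything from the cited paper): if explosion bounds were drawn from a heavy-tailed distribution such as a Pareto, a few lines of Python would show what taking expectations over that distribution looks like.

import numpy as np

# Illustrative only: the Pareto form and tail index are arbitrary assumptions,
# not estimates from any data.
rng = np.random.default_rng(0)
alpha = 2.5                                       # assumed tail index
bounds = rng.pareto(alpha, size=1_000_000) + 1.0  # hypothetical explosion bounds

print("expected explosion bound:", bounds.mean())
print("P(bound > 100x typical):", (bounds > 100).mean())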

Empirical claim: Hardware and sensing advances diffuse rapidly relative to their contribution to intelligence gains.

There’s a material, socio-technical analog to Bostrom’s explosive superintelligence. We could imagine a corporation that is working in secret on new computing infrastructure. Whenever it has an advance in computing infrastructure, the AI people (or increasingly, the AI-writing-AI) develop programming that maximizes its use of this new technology. Then it uses that technology to enrich its own computer-improving facilities. When it needs more…minerals…or whatever it needs to further its research efforts, it finds a way to get them. It proceeds to take over the world.

This may presently be happening. But evidence suggests that this isn’t how the technology economy really works. No doubt Amazon (for example) is using Amazon Web Services internally to do its business analytics. But it also makes its business out of selling its computing infrastructure to other organizations as a commodity. That’s actually the best way it can enrich itself.

What’s happening here is the diffusion of innovation, which is a well-studied phenomenon in economics and other fields. Ideas spread. Technological designs spread. I’d go so far as to say that it is often (perhaps always?) the best strategy for some agent that has locally discovered a way to advance its own intelligence to figure out how to trade that intelligence to other agents. Almost always that trade involves the diffusion of the basis of that intelligence itself.

Why? Because since there are independent intelligence advances of varying sizes happening all the time, there’s actually a very competitive market for innovation that quickly devalues any particular gain. A discovery, if hoarded, will likely be discovered by somebody else. The race to get credit for any technological advance at all motivates diffusion and disclosure.

The result is that the distribution of innovation, rather than concentrating into very tall spikes, is constantly flattening and fattening itself. That’s important because…

Claim: Intelligence risk is not due to absolute levels of intelligence, but relative intelligence advantage.

The idea here is that since humanity is composed of lots of interacting intelligence sociotechnical organizations, any hostile intelligence is going to have a lot of intelligent adversaries. If the game of life can be won through intelligence alone, then it can only be won with a really big intelligence advantage over other intelligent beings. It’s not about absolute intelligence, it’s intelligence inequality we need to worry about.

Consequently, the more intelligence advances (i.e, technologies) diffuse, the less risk there is.

Conclusion: The chance of an existential risk from an intelligence explosion is small and decreasing all the time.

So consider this: globally, there’s tons of investment in technologies that, when discovered, allow for local algorithmic intelligence explosions.

But even if we assume these algorithmic advances are nearly instantaneous, they are still bounded.

Lots of independent bounded explosions are happening all the time. But they are also diffusing all the time.

Since the global intelligence distribution is always fattening, that means that the chance of any particular technological advance granting a decisive advantage over others is decreasing.

There is always the possibility of a fluke, of course. But if there was going to be a humanity-destroying technological discovery, it would probably already have been invented and destroyed us. Since it hasn’t, we have a lot more resilience to threats from intelligence explosions, not to mention a lot of other threats.

This doesn’t mean that it isn’t worth trying to figure out how to make AI better for people. But it does diminish the need to think about artificial intelligence as an existential risk. It makes AI much more comparable to a biological threat. Biological threats could be really bad for humanity. But there’s also the organic reality that life is very resilient and human life in general is very secure precisely because it has developed so much intelligence.

I believe that thinking about the risks of artificial intelligence as analogous to the risks from biological threats is helpful for prioritizing where research effort in artificial intelligence should go. Just because AI doesn’t present an existential risk to all of humanity doesn’t mean it doesn’t kill a lot of people or make their lives miserable. On the contrary, we are in a world with both a lot of artificial and non-artificial intelligence and a lot of miserable and dying people. These phenomena are not causally disconnected. A good research agenda for AI could start with an investigation of these actually miserable people and what their problems are, and how AI is causing that suffering or alternatively what it could do to improve things. That would be an enormously more productive research agenda than one that aims primarily to reduce the impact of potential explosions which are diminishingly unlikely to occur.


by Sebastian Benthall at March 28, 2017 01:07 AM

March 26, 2017

adjunct professor

D-Link Updates

The seal has been lifted on the complaint in the D-Link case. This document highlights the previously redacted portions in yellow.

Yesterday (April 3, 2017), D-Link filed a motion to dismiss that includes the initial hearing transcript.

by web at March 26, 2017 12:25 AM

March 24, 2017

MIMS 2014

Adventures in Sparkland (or… How I Learned that Michael Caine was the original Jason Bourne)

Ready, set, revive data blog! What better way to take advantage of the sketchy wifi I’ve encountered along my travels through South America than to do some data science?

For some time now, I’ve wanted to get my feet wet with Apache Spark, the open source software that has become a standard tool on the data scientist’s utility belt when it comes to dealing with “big data.” Specifically, I was curious how Spark can understand complex human-generated text (through topic or theme modeling), as well as its ability to make recommendations based on preferences we’ve expressed in the past (i.e. how Netflix decides what to suggest you should watch next). For this, it only seemed natural to focus my energies on something I am also quite passionate about: Movies!


Many people have already used the well known and publicly available Movielens dataset (README, data) to test out recommendation engines before. To add my own twist on standard practice, I added a topic model based off of movie plot data that I scraped from Wikipedia. This blog post will go into detail about the whole process. It’s organized into the following sections:

Setting Up The Environment

To me, this is always the most boring part of doing a data project. Unfortunately, this yak-shaving is wholly necessary to ever do anything interesting. If you only came to read about how this all relates to movies, feel free to skip over this part…

I won’t go into huge depth here, but I will say I effin love Docker as a means to set-up my environment. The reason Docker is so great is that it makes a dev environment totally explicit and portable—which means anybody who’s actually interested in the gory details can go wild with them on my Github (and develop my project further, if they so please).

Another reason Docker is the awesomest is that it made the process of simulating a cluster on my little Macbook Air relatively straightforward. Spark might be meant to be run on a cluster of multiple computers, but being on a backpacker’s budget, I wasn’t keen on commandeering a crowd of cloud computers using Amazon Web Services. I wanted to see what I could do with what I had.

The flip side of this, of course, is that everything was constrained to my 5-year-old laptop’s single processor and the 4GB of RAM I could spare to be shared by the entire virtual cluster. I didn’t think this would be a problem since I wasn’t dealing with big data, but I did keep running up against some annoying memory issues that proved to be a pain. More about that later.

#ScrapeMyPlot

The first major step in my project was getting ahold of movie plot data for each of the titles in the Movielens dataset. For this, I wrote a scraper in python using this handy wikipedia python library I found. The main idea behind my simple program was to: 1) search wikipedia using the title of each movie, 2) use category tags to determine which search result was the article relating to the actual film in question, and 3) use python’s BeautifulSoup and Wikipedia’s generally consistent html structure to extract the “plot” section from each article.
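Here is a minimal sketch of what those three steps might look like, assuming the wikipedia and beautifulsoup4 packages; the function names and the category check are my own illustration, not the code from the actual scraper (which lives on Github).

import wikipedia
from bs4 import BeautifulSoup

def find_film_article(title):
    """Steps 1 and 2: search Wikipedia, then pick the hit tagged as a film."""
    for candidate in wikipedia.search(title):
        try:
            page = wikipedia.page(candidate, auto_suggest=False)
        except wikipedia.exceptions.DisambiguationError:
            continue
        if any("films" in cat.lower() for cat in page.categories):
            return page
    return None

def extract_plot(page):
    """Step 3: collect the paragraphs between the 'Plot' heading and the next section."""
    soup = BeautifulSoup(page.html(), "html.parser")
    heading = soup.find("span", id="Plot")
    if heading is None:
        return None
    paragraphs = []
    for sibling in heading.find_parent("h2").find_next_siblings():
        if sibling.name == "h2":  # stop at the next top-level section
            break
        if sibling.name == "p":
            paragraphs.append(sibling.get_text())
    return "\n".join(paragraphs)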

I wrapped these three steps in a bash script that would keep pinging wikipedia until it had attempted to grab plots for all the films in the Movielens data. This was something I could let run overnight or while trying to learn to dance like these people (SPOILER ALERT: I still can’t)

The results of this automated strategy were fair overall. Out of the 3,883 movie titles in the Movielens data, I was able to extract plot information for 2,533 or roughly 2/3 of them. I was hoping for ≥ 80%, but what I got was definitely enough to get started.

As I would later find however, even what I was able to grab was sometimes of dubious quality. For example, when the scraper was meant to grab the plot for Kids, the risqué 90’s drama about sex/drug-fueled teens in New York City, it grabbed the plot for Spy Kids instead. Not. the. same. Or when it was meant to grab the plot for Wild Things, another risqué 90’s title (but otherwise great connector in the Kevin Bacon game), it grabbed the plot for Where The Wild Things Are. Again, not. the. same. When these movies popped up in the context of trying to find titles that are similar to Toy Story, it was definitely enough to raise an eyebrow…

All this points to the importance of eating your own dog food when it comes to working with new, previously un-vetted data. Yes, it is a time consuming process, but it’s very necessary (and at least for this movie project, mildly entertaining).

Model Dem Topics

So first, one might ask: why go through the trouble of using a topic model to describe movie plot data? Well for one thing, it’s kinda interesting to see how a computer would understand movie plots and relate them to one another using probability-based artificial intelligence. But topic models offer practical benefits as well.

For one thing, absent a topic model, a computer generally represents a plot summary (or any document for that matter) as a bag of the words contained in that summary. That can be a lot of words, especially because a computer has to keep track of the words in the summary of not just a single movie, but rather the union of all the words in all the summaries of all the movies in the whole dataset.

Topic models reduce the complexity of representing a plot summary from a whole bag of words to a much smaller set of topics. This makes storing information about movies much more efficient in a computer’s memory. It also significantly speeds up calculations you might want to perform, such as seeing how similar one movie plot is to another. And finally, using a topic model can potentially help the computer describe the similarities between movies in a more sensible way. This increased accuracy can be used to improve the performance of other models, such as a recommendation engine.

Spark learns the topics across a set of plot summaries using a probabilistic process known as Latent Dirichlet Allocation or LDA. I won’t describe how LDA works in great depth (look here if you are interested in learning more), but after analyzing all the movie plots, it spits out a set of topics, i.e. lists of words that are supposed to be thematically related to each other if the algorithm did its job right. Each word within each topic has a weight proportional to its importance within the topic; words can repeat across topics but their weights will differ.
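For concreteness, here is a rough sketch of how LDA can be fit with Spark’s DataFrame-based ML API (pyspark.ml). The plots DataFrame with a preprocessed "tokens" column, the variable names, and the parameter values are my own illustrative assumptions, not the original project’s code.

from pyspark.sql import SparkSession
from pyspark.ml.feature import CountVectorizer
from pyspark.ml.clustering import LDA

spark = SparkSession.builder.appName("movie-topics").getOrCreate()

# Turn each token list into a term-count vector over the shared vocabulary
cv = CountVectorizer(inputCol="tokens", outputCol="features", minDF=2.0)
cv_model = cv.fit(plots)          # `plots` is an assumed DataFrame of tokenized summaries
vectorized = cv_model.transform(plots)

# Fit a 16-topic LDA model (16 being the number of topics settled on below)
lda = LDA(k=16, maxIter=50, featuresCol="features")
lda_model = lda.fit(vectorized)

# Print the 20 highest-weighted words for each topic
vocab = cv_model.vocabulary
for row in lda_model.describeTopics(maxTermsPerTopic=20).collect():
    print([vocab[i] for i in row.termIndices])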

One somewhat annoying thing about using LDA is that you have to specify the number of topics before running the algorithm, which is an awkward thing to pinpoint a priori. How can you know exactly how many topics exist across a corpus of movies—especially without reading all of the summaries? Another wrinkle to LDA is how sensitive it can be to the degree of pre-processing performed upon a text corpus before feeding it to the model.

After settling on 16 topics and a slew of preprocessing steps (stop word removal, Porter stemming, and part-of-speech filtering), I started to see topics that made sense. For example, there was a topic that broadly described a “Space Opera”:

Top 20 most important tokens in the “Space Opera” topic:

[ship, crew, alien, creatur, planet, space, men, group, team, time, order, board, submarin, death, plan, mission, home, survivor, offic, bodi]

Another topic seemed to be describing the quintessential sports drama. BTW, the lopped-off words like submarin or creatur are a result of Porter stemming, which reduces words to their more essential root forms.

Top 20 most important tokens in the “Sports Drama” topic:

[team, famili, game, offic, time, home, friend, player, day, father, men, man, money, polic, night, film, life, mother, car, school]

To sanity check the topic model, I was curious to see how LDA would treat films that were not used in the training of the original model. For this, I had to get some more movie plot data, which I did based on this IMDB list of top movies since 2000. The titles in the Movielens data tend to run a bit on the older side, so I knew I could find some fresh material by searching for some post-2000 titles.

To eyeball the quality of the results, I compared the topic model with the simpler “bag of words” model I mentioned earlier. For a handful of movies in the newer post-2000 set, I asked both models to return the most similar movies they could find in the original Movielens set.

I was encouraged (though not universally) by the results. Take, for example the results returned for V for Vendetta and Minority Report.

Similarity Rank: V for Vendetta


Similarity Rank | Bag of Words | Topic Model
1 | But I’m a Cheerleader | Candidate, The
2 | Life Is Beautiful | Dersu Uzala
3 | Evita | No Small Affair
4 | Train of Life | Terminator 2: Judgment Day
5 | Jakob the Liar | Schindler’s List
6 | Halloween | Mulan
7 | Halloween: H20 | Reluctant Debutante, The
8 | Halloween II | All Quiet on the Western Front
9 | Forever Young | Spartacus
10 | Entrapment | Grand Day Out, A

Similarity Rank: Minority Report


Similarity Rank | Bag of Words | Topic Model
1 | Blind Date | Seventh Sign, The
2 | Scream 3 | Crow: Salvation, The
3 | Scream | Crow, The
4 | Scream of Stone | Crow: City of Angels, The
5 | Man of Her Dreams | Passion of Mind
6 | In Dreams | Soylent Green
7 | Silent Fall | Murder!
8 | Eyes of Laura Mars | Hunchback of Notre Dame, The
9 | Waking the Dead | Batman: Mask of the Phantasm
10 | I Can’t Sleep | Phantasm

Thematically, it seems like for these two movies, the topic model gives broadly more similar/sensible results in the top ten than the baseline “bag of words” approach. (Technical note: the “bag of words” approach I refer to is more specifically a Tf-Idf transformation, a standard method used in the field of Information Retrieval and thus a reasonable baseline to use for comparison here.)
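For reference, the Tf-Idf baseline can be sketched in Spark along these lines (again assuming the same tokenized plots DataFrame; the column names and feature count are mine):

from pyspark.ml.feature import HashingTF, IDF

# Hash each token list into a fixed-size term-frequency vector
tf = HashingTF(inputCol="tokens", outputCol="tf", numFeatures=1 << 18)
tf_df = tf.transform(plots)

# Reweight by inverse document frequency
tfidf_df = IDF(inputCol="tf", outputCol="tfidf").fit(tf_df).transform(tf_df)

# Similar movies can then be ranked by cosine similarity between their "tfidf" vectors.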

Although the topic model seemed to deliver in the case of these two films, that was not universally the case. In the case of Michael Clayton, there was no contest as to which model was better:

Similarity Rank: Michael Clayton


Similarity Rank | Bag of Words | Topic Model
1 | Firm, The | Low Down Dirty Shame, A
2 | Civil Action, A | Bonfire of the Vanities
3 | Boiler Room | Reindeer Games
4 | Maybe, Maybe Not | Raging Bull
5 | Devil’s Advocate, The | Chasers
6 | Devil’s Own, The | Mad City
7 | Rounders | Bad Lieutenant
8 | Joe’s Apartment | Killing Zoe
9 | Apartment, The | Fiendish Plot of Dr. Fu Manchu, The
10 | Legal Deceit | Grifters, The

In this case, it seems the Bag of Words model picked up on the legal theme while the topic model completely missed it. In the case of The Social Network, something else curious (and bad) happened:

Similarity Rank: The Social Network


Similarity Rank | Bag of Words | Topic Model
1 | Twin Dragons | Good Will Hunting
2 | Higher Learning | Footloose
3 | Astronaut’s Wife, The | Grease 2
4 | Substitute, The | Trial and Error
5 | Twin Falls Idaho | Love and Other Catastrophes
6 | Boiler Room | Blue Angel, The
7 | Birdcage, The | Lured
8 | Quiz Show | Birdy
9 | Reality Bites | Rainmaker, The
10 | Broadcast News | S.F.W.

With Good Will Hunting—another film about a gifted youth hanging around Cambridge, Massachusetts—it seemed like the topic model was off to a good start here. But then with Footloose and Grease 2 following immediately after, things start to deteriorate quickly. The crappy-ness of both result sets speaks to the overall low quality of the data we’re dealing with—both in terms of the limited set of movies available in the original Movielens data, as well as the quality of the Wikipedia plot data.

Still, when I saw Footloose, I was concerned that perhaps there might be a bug in my code. Digging a little deeper, I discovered that both movies did in fact share the highest score in a particular topic. However, the bulk of these scores are earned from different words within this same topic. This means that the words within the topics of the LDA model aren’t always very related to each other—a rather serious fault since that is exactly what it is meant to accomplish.

The fact is, it’s difficult to gauge the overall quality of the topic model even by eyeballing a handful of results as I’ve done. This is because like any clustering method, LDA is a form of unsupervised machine learning. That is to say, unlike a supervised machine learning method, there is no ground truth, or for-sure-we-know-it’s-right label, that we can use to objectively evaluate model performance.

However, what we can do is use the output from the topic model as input into the recommendation engine model (which is a supervised model). From there, we can see if the information gained from the topic model improves the performance of the recommendation engine. That was, in fact, my main motivation for using the topic model in the first place.

But before I get into that, I did want to share perhaps the most entertaining finding from this whole exercise (and the answer to the clickbait-y title of this blog post). The discovery occurred when I was comparing the bag of words and topic model results for The Bourne Ultimatum:

Similarity Rank: The Bourne Ultimatum


Similarity Rank | Bag of Words | Topic Model
1 | Pelican Brief, The | Three Days of the Condor
2 | Light of Day | Return of the Pink Panther, The
3 | Safe Men | Ipcress File, The
4 | JFK | Cop Land
5 | Blood on the Sun | Sting, The
6 | Three Days of the Condor | Great Muppet Caper, The
7 | Shadow Conspiracy | From Here to Eternity
8 | Universal Soldier | Man Who Knew Too Little, The
9 | Universal Soldier: The Return | Face/Off
10 | Mission: Impossible 2 | Third World Cop

It wasn’t the difference in the quality of the two result sets that caught my eye. In fact, with The Great Muppet Caper in there, the quality of the topic model seems a bit suspect, if anything.

What interested me was the emphasis the topic model placed on the similarity of some older titles, like Three Days of the Condor, or The Return of the Pink Panther. But it was the 1965 gem, The Ipcress File, that took the cake. Thanks to the LDA topic model, I now know this movie exists, showcasing Michael Caine in all his 60’s badass glory. That link goes to the full trailer. Do yourself a favor and watch the whole thing. Or at the very least, watch this part, coz it makes me lol. They def don’t make ’em like they used to…

Rev Your Recommendation Engines

To incorporate the topic data into the recommendation engine, I first took the top-rated movies from each user in the Movielens dataset and created a composite vector for each user based on the max of each topic across their top rated movies. In other words, I created a “profile” of sorts for each user that summarized their tastes based on the most extreme expressions of each topic across the movies they liked the most.
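A small pandas sketch of that profile-building step (the ratings and movie_topics frames, their layout, and the 4-star cutoff are my own assumptions for illustration):

# Assumes: `ratings` is a pandas DataFrame with userId/movieId/rating columns,
# and `movie_topics` is a DataFrame indexed by movieId with one column per LDA topic.
TOP_RATING = 4  # treat 4- and 5-star ratings as a user's top-rated movies

top_rated = ratings[ratings["rating"] >= TOP_RATING]
profiles = (
    top_rated.join(movie_topics, on="movieId")
             .groupby("userId")[list(movie_topics.columns)]
             .max()   # element-wise max of each topic across the movies a user liked
)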

After I had a profile for each user, I could get a similarity score for almost every movie/user pair in the Movielens dataset. Mixing these scores with the original Movielens ratings is a bit tricky, however, due to a wrinkle in the Spark recommendation engine implementation. When training a recommendation engine with Spark, one must choose between using either explicit or implicit ratings as inputs, but not both. The Movielens data is based on explicit ratings that users gave movies between 1 and 5. The similarity scores, by contrast, are signals I infer based on a user’s top-rated movies along with the independently trained topic model described above. In other words, the similarity scores are implicit data—not feedback that came directly from the user.

To combine the two sources of data, therefore, I had to convert the explicit data into implicit data. In the paper that explains Spark’s implicit recommendation algorithm, training examples for the implicit model are based on the confidence one has that a user likes a particular item rather than an explicit statement of preference. Given the original Movielens data, it makes sense to associate ratings of 4 or 5 with high confidence that a user liked a particular movie. One cannot, however, associate low ratings of 1, 2, or 3 with a negative preference, since in the implicit model, there is no notion of negative feedback. Instead, low ratings for a film correspond only to low confidence that a user liked that particular movie.
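
A minimal sketch of that conversion, plus the switch to Spark’s implicit ALS mode. The column names, confidence weights, and hyperparameters here are illustrative assumptions, not the exact values I used:

```python
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("topic-implicit-als").getOrCreate()

def to_confidence(rating):
    """Map explicit 1-5 stars to confidence: 4s and 5s are strong evidence that a
    user liked the movie; 1-3 are weak evidence, never negative feedback."""
    return 1.0 if rating >= 4.0 else 0.1

# Train ALS in implicit mode on the converted data.
als = ALS(userCol="userId", itemCol="movieId", ratingCol="confidence",
          implicitPrefs=True, rank=10, regParam=0.1, alpha=1.0)
# model = als.fit(confidence_df)  # DataFrame of userId, movieId, confidence
```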

Since we lose a fair amount of information in converting explicit data to implicit data, I wouldn’t expect the recommendation engine I am building to beat out the baseline Movielens model, seeing as explicit data is generally a superior basis upon which to train a recommendation engine. However, I am more interested in seeing whether a model that incorporates information about movie plots can beat a model that does not. Also, it’s worth noting that many if not most real-world recommendation engines don’t have the luxury of explicit data and must rely instead on less reliable implicit signals. So if anything, handicapping the Movielens data as I am doing makes the setting more realistic.

Results/Findings

So does the movie topic data add value to the recommendation engine? Answering this question proved technically challenging, due to the limitations of my old Macbook Air :sad:.

One potential benefit of incorporating movie topic data is that scores can be generated for any (user, movie) pair that’s combinatorially possible given the underlying data. If the topic information did in fact add value to the recommendation engine, then the model could train upon a much richer set of data, including examples not directly observed in real life. But as I mentioned, my efforts to explore the potential benefit of this expanded data ran up against the memory limits of my 5-year-old MacBook.

My constrained resources provided a lovely opportunity to learn all about Java Garbage Collection in Spark, but my efforts to tune the memory management of my program proved futile. I became convinced that an un-tunable hard memory limit was the culprit when I saw executors repeatedly fail after max-ing out their JVM heaps while running a series of full garbage collections. The Spark tuning guide says that if “a full GC is invoked multiple times before a task completes, it means that there isn’t enough memory available for executing tasks.” I seemed to find myself in exactly this situation.
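
For the record, these are the kinds of knobs I was turning, expressed as a sketch. The values are illustrative, and driver memory generally has to be set before the JVM starts (e.g. via spark-submit) rather than from inside the program, so treat this as a rough picture rather than a working fix:

```python
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("recs-memory-tuning")
        .set("spark.driver.memory", "6g")            # better set via spark-submit
        .set("spark.memory.fraction", "0.6")         # heap share for execution + storage
        .set("spark.memory.storageFraction", "0.3")  # portion protected for cached data
        .set("spark.executor.extraJavaOptions", "-verbose:gc -XX:+UseG1GC"))
sc = SparkContext(conf=conf)
```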

Since I couldn’t train on bigger data, I pretended I had less data instead. I trained two models. In one model, I pretended that I didn’t know anything about some of the ratings given to movies by users (in practice this meant setting a certain percentage of ratings to 0, since in the implicit model, 0 implies no confidence that a user prefers an item). In a second model, I set these same ratings to the similarity scores that came from the topic model.
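
A sketch of those two conditions, assuming ratings is an iterable of (user, movie, confidence) triples and sim(user, movie) returns the topic-based similarity score; both names are placeholders rather than parts of my actual pipeline:

```python
import numpy as np

def mask_ratings(ratings, frac_hidden, sim=None, seed=42):
    """Hide a fraction of ratings: replace with 0 (no confidence) or, if a
    similarity function is supplied, with the topic-model similarity score."""
    rng = np.random.RandomState(seed)
    out = []
    for user, movie, confidence in ratings:
        if rng.rand() < frac_hidden:
            confidence = sim(user, movie) if sim is not None else 0.0
        out.append((user, movie, confidence))
    return out
```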

The results of this procedure were mixed. When I covered up 25% of the data, the two recommendation engines performed roughly the same. However, when I covered up 75% of the data, there was about a 3% bump in performance for the topic model-based recommendation engine.

Although there might be some benefit (and at worst no harm) to using the topic model data, what I’d really like to do is map out a learning curve for my recommendation engine. In the context of machine learning, learning curves chart algorithm performance as a function of the number of training samples used to train the algorithm. Based on the two points I sampled, we cannot know for certain whether the benefit of including topic model data is always crowded out by the inclusion of more real-world samples. We also cannot know whether using expanded data based on combinatorially generated similarity scores improves engine performance.
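
In sketch form, reusing the mask_ratings and similarity_score sketches above, the learning curve I have in mind would look something like this, where train_and_evaluate is a placeholder standing in for the full ALS training and hold-out evaluation pipeline:

```python
def learning_curve(ratings, fractions, train_and_evaluate, sim=None):
    """Evaluate the recommender at several training-set sizes, with or without
    topic-model scores filling in the hidden ratings."""
    scores = []
    for frac in fractions:
        training_data = mask_ratings(ratings, frac_hidden=1.0 - frac, sim=sim)
        scores.append((frac, train_and_evaluate(training_data)))
    return scores

# e.g. learning_curve(ratings, [0.25, 0.5, 0.75, 1.0], train_and_evaluate,
#                     sim=similarity_score)
```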

Given my hardware limits and my commitment to using only the resources in my backpack, I couldn’t map out this learning curve more methodically. I also couldn’t explore how using a different number of topics in the LDA model affects performance—something else I was curious to explore. In the end, my findings are only suggestive.

While I couldn’t explore everything I wanted, I ultimately learned a butt-load about how Spark works, which was my goal for starting this project in the first place. And of course, there was The Ipcress File discovery. Oh what’s that? You didn’t care much for The Ipcress File?  You didn’t even watch the trailer? Well, then I have to ask you:


by dgreis at March 24, 2017 12:36 AM

March 22, 2017

Ph.D. student

Lenin and Luxemburg

One of the interesting parts of Scott’s Seeing Like a State is a detailed analysis of Vladimir Lenin’s ideological writings juxtaposed with one of his contemporary critics, Rosa Luxemburg, who was a philosopher and activist in Germany.

Scott is critical of Lenin, pointing out that while his writings emphasize the role of a secretive intelligentsia commanding the raw material of an angry working class through propaganda and a kind of middle management tier of revolutionarily educated factory bosses, this is not how the revolution actually happened. The Bolsheviks took over an empty throne, so to speak, because the czars had already lost their power fighting Austria in World War I. This left Russia headless, with local regions ruled by local autonomous powers. Many of these powers were in fact peasant and proletarian collectives. But others may have been soldiers returning from war and seizing whatever control they could by force.

Luxemburg’s revolutionary theory was much more sensitive to the complexity of decentralized power. Rather than expecting the working class to submit unquestioningly to top-down control and coordinate in mass strikes, she acknowledged the reality that decentralized groups would act in an uncoordinated way. This was good for the revolutionary cause, she argued, because it allowed the local energy and creativity of workers’ movements to act effectively and contribute spontaneously to the overall outcome. Whereas Lenin saw spontaneity in the working class as leading inevitably to its being coopted by bourgeois ideology, Luxemburg believed the spontaneous, authentic action of autonomous working-class people was vital to keeping the revolution unified and responsive to working-class interests.


by Sebastian Benthall at March 22, 2017 02:00 AM

March 21, 2017

MIMS 2011

Towards software that supports interpretation rather than quantification

[Reblogged from the Software Sustainability Institute blog]

My research involves the study of the emerging relationships between data and society that are encapsulated by the fields of software studies, critical data studies and infrastructure studies, among others. These fields of research are primarily aimed at interpretive investigations into how software, algorithms and code have become embedded into everyday life, and how this has resulted in new power formations, new inequalities, new authorities of knowledge [1]. Some of the subjects of this research include the ways in which Facebook’s News Feed algorithm influences the visibility and power of different users and news sources (Bucher, 2012), how Wikipedia delegates editorial decision-making and moral agency to bots (Geiger and Ribes, 2010), and the effects of Google’s Knowledge Graph on people’s ability to control facts about the places in which they live (Ford and Graham, 2016).

As the only Software Sustainability Institute fellow working in this area, I set myself the goal of investigating what tools, methods and infrastructure researchers working in these fields were using to conduct their research. Although Big Data is a challenge for every field of research, I found that the challenge for social scientists and humanities scholars doing interpretive research in this area is unique and perhaps even more significant. Two key challenges stand out. The first is that data requiring interpretation tends to be much larger than what has traditionally been analysed. This often requires at least some level of quantification in order to ‘zoom out’ and obtain a bigger picture of the phenomenon or issues under study. Researchers in this tradition often lack the skills to conduct such analyses – particularly at scale. The second challenge is that online data is subject to ethical and legal restrictions, particularly for interpretive research (as opposed to the anonymized data collected for statistical research).

In many universities it seems that mathematics, engineering, physics and computer science departments have started to build internal infrastructure to deal with Big Data, and some universities have established good Digital Humanities programs that are largely about the quantitative study of large corpuses of images/films/videos or other cultural objects. But infrastructure and expertise are severely lacking for those wishing to do interpretive rather than quantitative research using mixed, experimental, ethnographic or qualitative methods with online data. The software and infrastructure required for doing interpretive research is patchy, departments are typically ill-equipped to support researchers and students with the expertise required to conduct social media research, and significant ethical questions remain about doing social media research, particularly in the context of data protection laws.

Data Carpentry offers some promise here. I organized, with the support of the Software Sustainability Institute, a “Data Carpentry for the Social Sciences workshop” with Dr Brenda Moon (Queensland University of Technology) and Martin Callaghan (University of Leeds) in November 2016 at Leeds University. Data Carpentry workshops tend to be organized for quantitative work in the hard sciences and there were no lesson plans for dealing with social media data. Brenda stepped in to develop some of these materials based partly on the really good Library Carpentry resources and both Martin and Brenda (with additional help from Dr Andy Evans, Joanna Leng and Dr Viktoria Spaiser) made an excellent start towards seeding the lessons database with some social media specific exercises.

The two-day workshop centered on examples from Twitter data and participants worked with Python and other off-the-shelf tools to extract and analyze data. There were fourteen participants in the workshop ranging from PhD students to professors and from media and communications to sociology and social policy, music to law, earth and environment to translation studies. At the end of the workshop participants said that they felt they had received a strong grounding in Python and that the course was useful, interactive, open and not intimidating. There were suggestions, however, to make improvements to the Twitter lessons and to perhaps split up the group in the second day to move onto more advanced programming for some and to go over the foundations for beginners.

Also supported by the Institute was my participation in two conferences in Australia at the end of 2016. The first was a conference exploring the impact of automation on everyday life at the Queensland University of Technology in Brisbane, the second, the annual Crossroads in Cultural Studies conference in Sydney. Through my participation in these events (and via other information-gathering that I have been conducting in my travels) I have learned that many researchers in the social sciences and humanities suffer from a significant lack of local expertise and infrastructure. On multiple occasions I learned of PhD students and researchers running analyses of millions of tweets on their laptops, suffering from a lack of understanding when applying for ethical approval and conducting analyses that lack a consistent approach.

Centers of excellence in digital methods around the world share code and learnings where they can. One such program is the Digital Methods Initiative (DMI) at the University of Amsterdam. The DMI hosts regular summer and winter schools to train researchers in using digital methods tools and provides free access to some of the open source software tools that it has developed for collecting and analyzing digital data. Queensland University of Technology’s Social Media Group also hosts summer schools and has contributed to methodological scholarship employing interpretive approaches to social media and internet research. The common characteristics of such programmes are that they are collaborative (sharing resources across university departments and between different universities) and innovative (breaking some of the rules that govern traditional research in the university).

Many researchers who handle data in more interpretive studies tend to rely on these global hubs in the few universities where infrastructure is being developed. The UK could benefit from a similar hub for researchers locally, especially since software and code need to be continually developed and maintained for a much wider variety of evolving methods. Alternatively, or alongside such hubs, Data Carpentry workshops could serve as an important virtual hub for sharing lesson plans and resources. Data Carpentry could, for example, host code that can be used to query APIs for doing social media research, and workshops could also be used to collaboratively explore or experiment with methods for iterative, grounded investigation of social media practices.

Due to the rapid increase in the scale and velocity of social media data and because of the lack of technical expertise to manage such data, social scientists and humanities scholars have taken a backseat to the hard sciences in explaining new dimensions of social life online. This is disappointing because it means that much of the research coming out about social media, Big Data and computation lacks a connection to important social questions about the world. Building from some of this momentum will be essential in the next few years if we are to see social scientists and humanities scholars adding their important insights into social phenomena online. Much more needs to be done to build flexible and agile resources for the rapidly advancing field of social media research if we are to benefit from the contributions of social science and humanities scholars in the field of digital cultures and politics.

[1] For an excellent introduction to the contribution of interpretive scholars to questions about data and the digital see ‘The Datafied Society’ just published by Amsterdam University Press http://en.aup.nl/books/9789462981362-the-datafied-society.html

Pic: Martin Callaghan displays the ‘Geeks and repetitive tasks’ model during the November 2016 Data Carpentry for the Social Sciences workshop at Leeds University.


by Heather Ford at March 21, 2017 01:19 PM

March 20, 2017

Ph.D. student

artificial life, artificial intelligence, artificial society, artificial morality

“Everyone” “knows” what artificial intelligence is and isn’t and why it is and isn’t a transformative thing happening in society and technology and industry right now.

But the fact is that most of what “we” “call” artificial intelligence is really just increasingly sophisticated ways of solving a single class of problems: optimization.

Essentially, what has happened in AI is that empirical inference problems get modeled as Bayesian problems, which are then solved using variational inference methods, that is, by turning the Bayesian inference problem into a tractable optimization problem and solving it.
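
For concreteness, here is the standard textbook form of that reduction (a generic formulation, not tied to any particular system): approximating an intractable posterior p(z|x) with a member of a tractable family q_φ(z) amounts to an optimization problem over the variational parameters φ, maximizing the evidence lower bound:

```latex
% Variational inference as optimization: picking the best q is equivalent
% to maximizing the evidence lower bound (ELBO).
\operatorname*{arg\,min}_{\phi} \, \mathrm{KL}\!\left( q_\phi(z) \,\middle\|\, p(z \mid x) \right)
  \;=\;
\operatorname*{arg\,max}_{\phi} \, \mathbb{E}_{q_\phi(z)}\!\left[ \log p(x, z) - \log q_\phi(z) \right]
```

Solving that maximization with gradient methods is the sense in which the Bayesian problem becomes “just” optimization.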

Advances in optimization have greatly expanded the number of things computers can accomplish as part of a weak AI research agenda.

Frequently these remarkable successes in Weak AI are confused with an impending revolution in what used to be called Strong AI but which now is more frequently called Artificial General Intelligence, or AGI.

Recent interest in AGI has spurred a lot of interesting research. How could it not be interesting? It is also, for me, extraordinarily frustrating research because I find the philosophical precommitments of most AGI researchers baffling.

One insight that I wish made its way more frequently into discussions of AGI is an insight made by the late Francisco Varela, who argued that you can’t really solve the problem of artificial intelligence until you have solved the problem of artificial life. This is for the simple reason that only living things are really intelligent in anything but the weak sense of being capable of optimization.

Once being alive is taken as a precondition for being intelligent, the problem of understanding AGI implicates a profound and fascinating problem of understanding the mathematical foundations of life. This is a really amazing research problem that for some reason is never ever discussed by anybody.

Let’s assume it’s possible to solve this problem in a satisfactory way. That’s a big If!

Then a theory of artificial general intelligence should be able to show how some artificial living organisms are intelligent and others are not. I suppose what’s most significant here is the shift from thinking of AI in terms of “agents”, a term so generic as to be perhaps at the end of the day meaningless, to thinking of AI in terms of “organisms”, which suggests a much richer set of preconditions.

I have similar grief over contemporary discussion of machine ethics. This is a field with fascinating, profound potential. But much of what machine ethics boils down to today is trolley problems, which are as insipid as they are troublingly intractable. There’s other, better machine ethics research out there, but I’ve yet to see something that really speaks to properly defining the problem, let alone solving it.

This is perhaps because for a machine to truly be ethical, as opposed to just being designed and deployed ethically, it must have moral agency. I don’t mean this in some bogus early Latourian sense of “wouldn’t it be fun if we pretended seatbelts were little gnomes clinging to our seats” but in an actual sense of participating in moral life. There’s a good case to be made that the latter is not something easily reducible to decontextualized action or function, but rather has to do with how one participates more broadly in social life.

I suppose this is a rather substantive metaethical claim to be making. It may be one that’s at odds with common ideological trainings in Anglophone countries where it’s relatively popular to discuss AGI as a research problem. It has more in common, intellectually and philosophically, with continental philosophy than analytic philosophy, whereas “artificial intelligence” research is in many ways a product of the latter. This perhaps explains why these two fields are today rather disjoint.

Nevertheless, I’d happily make the case that the continental tradition has developed a richer and more interesting ethical tradition than what analytic philosophy has given us. Among other reasons, this is because of how it is able to situate ethics as a function of a more broadly understood social and political life.

I postulate that what is characteristic of social and political life is that it involves the interaction of many intelligent organisms. Which of course means that to truly understand this form of life and how one might recreate it artificially, one must understand artificial intelligence and, transitively, artificial life.

Only once artificial society is sufficiently well understood could we approach the problem of artificial morality, or how to create machines that truly act according to moral or ethical ideals.


by Sebastian Benthall at March 20, 2017 02:40 AM

March 19, 2017

Ph.D. student

ideologies of capitals

A key idea of Bourdieusian social theory is that society’s structure is due to the distribution of multiple kinds of capital. Social fields have their roles and their rules, but they are organized around different forms of capital the way physical systems are organized around sources of force like mass and electrical charge. Being Kantian, Bourdieusian social theory is compatible with both positivist and phenomenological forms of social explanation. Phenomenological experience, to the extent that it repeats itself and so can be described aptly as a social phenomenon at all, is codified in terms of habitus. But habitus is indexed to its place within a larger social space (not unlike, it must be said, a Blau space) whose dimensions are the dimensions of the allocations of capital throughout it.

While perhaps not strictly speaking a corollary, this view suggests a convenient methodological reduction, according to which the characteristic beliefs of a habitus can be decomposed into components, each component representing the interests of a certain kind of capital. When I say “the interests of a capital”, I do mean the interests of the typical person who holds a kind of capital, but also the interests of a form of capital, apart from and beyond the interests of any individual who carries it. This is an ontological position that gives capital an autonomous social life of its own, much like we might attribute an autonomous social life to a political entity like a state. This is not the same thing as attributing to capital any kind of personhood; I’m not going near the contentious legal position that corporations are people, for example. Rather, I mean something like: if we admit that social life is dictated in part by the life cycle of a kind of psychic microorganism, the meme, then we should also admit abstractly of social macroorganisms, such as capitals.

What the hell am I talking about?

Well, the most obvious kind of capital worth talking about in this way is money. Money, in our late modern times, is a phenomenon whose existence depends on a vast global network of property regimes, banking systems, transfer protocols, trade agreements, and more. There’s clearly a naivete in referring to it as a singular or homogeneous phenomenon. But it is also possible to refer to it in a generic, globalized way because of the ways money markets have integrated. There is a sense in which money exists to make more money and to give money more power over other forms of capital that are not money, such as: social authority based on any form of seniority, expertise, lineage; power local to an institution; or the persuasiveness of an autonomous ideal. Those that have a lot of money are likely to have an ideology very different from those without a lot of money. This is partly due to the fact that those who have a lot of money will be interested in promoting the value of that money over and above other capitals. Those without a lot of money will be interested in promoting forms of power that contest the power of money.

Another kind of capital worth talking about is cosmopolitanism. This may not be the best word for what I’m pointing at but it’s the one that comes to mind now. What I’m talking about is the kind of social capital one gets not by having a specific mastery of a local cultural form, but rather by having the general knowledge and cross-cultural competence to bridge across many different local cultures. This form of capital is loosely correlated with money but is quite different from it.

A diagnosis of recent shifts in U.S. politics, for example, could be done in terms of the way capital and cosmopolitanism have competed for control over state institutions.


by Sebastian Benthall at March 19, 2017 12:29 AM

March 16, 2017

Ph.D. student

equilibrium representation

We must keep in mind not only the capacity of state simplifications to transform the world but also the capacity of the society to modify, subvert, block, and even overturn the categories imposed upon it. Here it is useful to distinguish what might be called facts on paper from facts on the ground…. Land invasions, squatting, and poaching, if successful, represent the exercise of de facto property rights which are not represented on paper. Certain land taxes and tithes have been evaded or defied to the point where they have become dead letters. The gulf between land tenure facts on paper and facts on the ground is probably greatest at moments of social turmoil and revolt. But even in more tranquil times, there will always be a shadow land-tenure system lurking beside and beneath the official account in the land-records office. We must never assume that local practice conforms with state theory. – Scott, Seeing Like a State, 1998

I’m continuing to read Seeing Like a State and am finding in it a compelling statement of a state of affairs that is coded elsewhere into the methodological differences between social science disciplines. In my experience, much of the tension between the social sciences can be explained in terms of the differently interested uses of social science. Among these uses are the development of what Scott calls “state theory” and the articulation, recognition, and transmission of “local practice”. Contrast neoclassical economics with the anthropology of Jean Lave as examples of what I’m talking about. Most scholars are willing to stop here: they choose their side and engage in a sophisticated form of class warfare.

This is disappointing from the perspective of science per se, as a pursuit of truth. To see where there’s a place for such work in the social sciences, we only have to look at the very book in front of us, Seeing Like a State, which stands outside of both state theory and local practices to explain a perspective that is neither but rather informed by a study of both.

In terms of the ways that knowledge is used in support of human interests, in the Habermasian sense (see some other blog posts), we can talk about Scott’s “state theory” as a form of technical knowledge, aimed at facilitating power over the social and natural world. What he discusses is the limitation of technical knowledge in mastering the social, due to complexity and differentiation in local practice. So much of this complexity is due to the politicization of language and representation that occurs in local practice. Standard units of measurement and standard terminology are tools of state power; efforts to guarantee them are confounded again and again by local interests. This disagreement is a rejection of the possibility of hermeneutic knowledge, which is to say linguistic agreement about norms.

In other words, Scott is pointing to a phenomenon where because of the interests of different parties at different levels of power, there’s a strategic local rejection of inter-subjective agreement. Implicitly, agreeing even on how to talk with somebody with power over you is conceding their power. The alternative is refusal in some sense. A second order effect of the complexity caused by this strategic disagreement is the confounding of technical mastery over the social. In Scott’s terminology, a society that is full of strategic lexical disagreement is not legible.

These are generalizations reflecting tendencies in society across history. Nevertheless, merely by asserting them I am arguing that they have a kind of special status that is not itself caught up in the strategic subversions of discourse that make other forms of expertise foolish. There must be some forms of representation that persist despite the verbal disagreements and differently motivated parties that use them.

I’d like to call these kinds of representations, which somehow are technically valid enough to be useful and robust to disagreement, even politicized disagreement, equilibrium representations. The idea here is that despite a lot of cultural and epistemic churn, there are still attractor states in the complex system of knowledge production. At equilibrium, these representations will be stable and serve as the basis for communication between different parties.

I’ve posited equilibrium representations hypothetically, without yet having a proof or example of one that actually exists. My point is to have a useful concept that acknowledges the kinds of epistemic complexities raised by Scott while identifying the conditions under which a modernist epistemology could prevail despite those complexities.

 


by Sebastian Benthall at March 16, 2017 05:57 PM

appropriate information flow

Contextual integrity theory defines privacy as appropriate information flow.

Whether or not this is the right way to define privacy (which might, for example, be something much more limited), and whether or not contextual integrity as it is currently resourced as a theory is capable of capturing all considerations needed to determine the appropriateness of information flow, the very idea of appropriate information flow is a powerful one. It makes sense to strive to better our understanding of which information flows are appropriate, which others are inappropriate, to whom, and why.

 


by Sebastian Benthall at March 16, 2017 01:38 AM

March 15, 2017

Ph.D. student

Seeing Like a State: problems facing the code rural

I’ve been reading James C. Scott’s Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed for, once again, Classics. It’s just as good as everyone says it is, and in many ways the counterpoint to James Beniger’s The Control Revolution that I’ve been looking for. It’s also highly relevant to work I’m doing on contextual integrity in privacy.

Here’s a passage I read on the subway this morning that talks about the resistance to codification of rural land use customs in Napoleonic France.

In the end, no postrevolutionary rural code attracted a winning coalition, even amid a flurry of Napoleonic codes in nearly all other realms. For our purposes, the history of the stalemate is instructive. The first proposal for a code, which was drafted in 1803 and 1807, would have swept away most traditional rights (such as common pasturage and free passage through others’ property) and essentially recast rural property relations in the light of bourgeois property rights and freedom of contract. Although the proposed code prefigured certain modern French practices, many revolutionaries blocked it because they feared that its hands-off liberalism would allow large landholders to recreate the subordination of feudalism in a new guise.

A reexamination of the issue was then ordered by Napoleon and presided over by Joseph Verneilh Puyrasseau. Concurrently, Deputy Lalouette proposed to do precisely what I supposed, in the hypothetical example, was impossible. That is, he undertook to systematically gather information about all local practices, to classify and codify them, and then to sanction them by decree. The decree in question would become the code rural. Two problems undid this charming scheme to present the rural populace with a rural code that simply reflected its own practices. The first difficulty was in deciding which aspects of the literally “infinite diversity” of rural production relations were to be represented and codified. Even in a particular locality, practices varied greatly from farm to farm over time; any codification would be partly arbitrary and artificially static. To codify local practices was thus a profoundly political act. Local notables would be able to sanction their preferences with the mantle of law, whereas others would lose customary rights that they depended on. The second difficulty was that Lalouette’s plan was a mortal threat to all state centralizers and economic modernizers for whom a legible, national property regime was the precondition of progress. As Serge Aberdam notes, “The Lalouette project would have brought about exactly what Merlin de Douai and the bourgeois, revolutionary jurists always sought to avoid.” Neither Lalouette nor Verneilh’s proposed code was ever passed, because they, like their predecessor in 1807, seemed to be designed to strengthen the hand of the landowners.

(Emphasis mine.)

The moral of the story is that just as the codification of a land map will be inaccurate and politically contested for its biases, so too a codification of customs and norms will suffer the same fate. As Borges’ fable On Exactitude in Science mocks the ambition of physical science, we might see the French attempts at a code rural as a mockery of the ambition of computational social science.

On the other hand, Napoleonic France did not have the sweet ML we have today. So all bets are off.


by Sebastian Benthall at March 15, 2017 03:16 PM

March 14, 2017

Ph.D. student

industrial technology development and academic research

I now split my time between industrial technology (software) development and academic research.

There is a sense in which both activities are “scientific”. They both require the consistent use of reason and investigation to arrive at reliable forms of knowledge. My industrial and academic specializations are closely enough aligned that both aim to create some form of computational product. These activities are constantly informing one another.

What is the difference between these two activities?

One difference is that industrial work pays a lot better than academic work. This is probably the most salient difference in my experience.

Another difference is that academic work is more “basic” and less “applied”, allowing it to address more speculative questions.

You might think that the latter kind of work is more “fun”. But really, I find both kinds of work fun. Fun-factor is not an important difference for me.

What are other differences?

Here’s one: I find myself emotionally moved and engaged by my academic work in certain ways. I suppose that since my academic work straddles technology research and ethics research (I’m studying privacy-by-design), one thing I’m doing when I do this work is engaging and refining my moral intuitions. This is rewarding.

I do sometimes also feel that it is self-indulgent, because one thing that thinking about ethics isn’t is taking responsibility for real change in the world. And here I’ll express an opinion that is unpopular in academia, which is that being in industry is about taking responsibility for real change in the world. This change can benefit other people, and it’s good when people in industry get paid well because they are doing hard work that entails real risks. Part of the risk is the responsibility that comes with action in an uncertain world.

Another critically important difference between industrial technology development and academic research is that while the knowledge created by the former is designed foremost to be deployed and used, the knowledge created by the latter is designed to be taught. As I get older and more advanced as a researcher, I see that this difference is actually an essential one. Knowledge that is designed to be taught needs to be teachable to students, and students are generally coming from both a shallower and more narrow background than adult professionals. Knowledge that is designed to be deployed and used need only be truly shared by a small number of experienced practitioners. Most of the people affected by the knowledge will be affected by it indirectly, via artifacts. It can be opaque to them.

Industrial technology production changes the way the world works and makes the world more opaque. Academic research changes the way people work, and reveals things about the world that had been hidden or unknown.

When straddling both worlds, it becomes quite clear that while students are taught that academic scientists are at the frontier of knowledge, ahead of everybody else, they are actually far behind what’s being done in industry. The constraint that academic research must be taught actually drags its form of science far behind what’s being done regularly in industry.

This is humbling for academic science. But it doesn’t make it any less important. Rather, it makes it even more important, but not because of the heroic status of academic researchers being at the top of the pyramid of human knowledge. It’s because the health of the social system depends on its renewal through the education system. If most knowledge is held in secret and deployed but not passed on, we will find ourselves in a society that is increasingly mysterious and out of our control. Academic research is about advancing the knowledge that is available for education. Its effects can take half a generation or longer to come to fruition. Against this long-term signal, the oscillations that happen within industrial knowledge, which are very real, do fade into the background. Though not before having real and often lasting effects.


by Sebastian Benthall at March 14, 2017 02:27 AM

March 03, 2017

Ph.D. alumna

Failing to See, Fueling Hatred.

I was 19 years old when some configuration of anonymous people came after me. They got access to my email and shared some of the most sensitive messages on an anonymous forum. This was after some of my girl friends received anonymous voice messages describing how they would be raped. And after the black and Latinx high school students I was mentoring were subject to targeted racist messages whenever they logged into the computer cluster we were all using. I was ostracized for raising all of this to the computer science department’s administration. A year later, when I applied for an internship at Sun Microsystems, an alum known for his connection to the anonymous server that was used actually said to me, “I thought that they managed to force you out of CS by now.”

Needless to say, this experience hurt like hell. But in trying to process it, I became obsessed not with my own feelings but with the logics that underpinned why some individual or group of white male students privileged enough to be at Brown University would do this. (In investigations, the abusers were narrowed down to a small group of white men in the department, but it was never going to be clear who exactly did it, so I chose not to pursue the case even though law enforcement wanted me to.)

My first breakthrough came when I started studying bullying, when I started reading studies about why punitive approaches to meanness and cruelty backfire. It’s so easy to hate those who are hateful, so hard to be empathetic to where they’re coming from. This made me double down on an ethnographic mindset that requires that you step away from your assumptions and try to understand the perspective of people who think and act differently than you do. I’m realizing more and more how desperately this perspective is needed as I watch researchers and advocates, politicians and everyday people judge others from their vantage point without taking a moment to understand why a particular logic might unfold.

The Local Nature of Wealth

A few days ago, my networks were on fire with condescending comments referencing an article in The Guardian titled “Scraping by on six figures? Tech workers feel poor in Silicon Valley’s wealth bubble.” I watched as all sorts of reasonably educated, modestly but sustainably paid people mocked tech folks for expressing frustration about how their well-paid jobs did not allow them to have the sustainable lifestyle that they wanted. For most, Silicon Valley is at a distance, a far off land of imagination brought to you by the likes of David Fincher and HBO. Progressive values demand empathy for the poor and this often manifests as hatred for the rich. But what’s missing from this mindset is an understanding of the local perception of wealth, poverty, and status. And, more importantly, the political consequences of that local perception.

Think about it this way. I live in NYC where the median household income is somewhere around $55K. My network primarily makes above the median and yet they all complain that they don’t have enough money to achieve what they want in NYC, whether they’re making $55K, $70K, or $150K. Complaining about not having enough money is ritualized alongside complaining about the rents. No one I know really groks that they’re making above the median income for the city (and, thus, that most people are much poorer than they are), let alone how absurd their complaints might sound to someone from a poorer country where a median income might be $1500 (e.g., India).

The reason for this is not simply that people living in NYC are spoiled, but that people’s understanding of prosperity is shaped by what they see around them. Historically, this has been understood through word-of-mouth and status markers. In modern times, those status markers are often connected to conspicuous consumption. “How could HE afford a new pair of Nikes!?!?”

The dynamics of comparison are made trickier by media. Even before yellow journalism, there has always been some version of Page Six or “Lifestyles of the Rich and Famous.” Stories of gluttonous and extravagant behaviors abound in ancient literature. Today, with Instagram and reality TV, the idea of haves and have-nots is pervasive, shaping cultural ideas of privilege and suffering. Everyday people perform for the camera and read each other’s performances critically. And still, even as we watch rich people suffer depression or celebrities experience mental breakdowns, we don’t know how to walk in each other’s shoes. We collectively mock them for their privilege as a way to feel better for our own comparative struggles.

In other words, in a neoliberal society, we consistently compare ourselves to others in ways that make us feel as though we are less well off than we’d like. And we mock others who are more privileged who do the same. (And, horribly, we often blame others who are not for making bad decisions.)

The Messiness of Privilege

I grew up with identity politics, striving to make sense of intersectional politics and confused about what it meant to face oppression as a woman and privilege as a white person. I now live in a world of tech wealth while my family does not. I live with contradictions and I work on issues that make those contradictions visible to me on a regular basis. These days, I am surrounded by civil rights advocates and activists of all stripes. Folks who remind me to take my privilege seriously. And still, I struggle to be a good ally, to respond effectively to challenges to my actions. Because of my politics and ideals, I wake up each day determined to do better.

Yet, with my ethnographer’s hat on, I’m increasingly uncomfortable with how this dynamic is playing out. Not for me personally, but for effecting change. I’m nervous that the way that privilege is being framed and politicized is doing damage to progressive goals and ideals. In listening to white men who see themselves as “betas” or identify as NEETs (“Not in Education, Employment, or Training”) describe their hatred of feminists or social justice warriors, I hear the cost of this frame. They don’t see themselves as empowered or privileged and they rally against these frames. And they respond antagonistically in ways that further the divide, as progressives feel justified in calling them out as racist and misogynist. Hatred emerges on both sides and the disconnect produces condescension as everyone fails to hear where each other comes from, each holding onto their worldview that they are the disenfranchised, they are the oppressed. Power and wealth become othered and agency becomes understood through the lens of challenging what each believes to be the status quo.

It took me years to understand that the boys who tormented me in college didn’t feel powerful, didn’t see their antagonism as oppression. I was even louder and more brash back then than I am now. I walked into any given room performing confidence in ways that completely obscured my insecurities. I took up space, used my sexuality as a tool, and demanded attention. These were the survival skills that I had learned to harness as a ticket out. And these are the very same skills that have allowed me to succeed professionally and get access to tremendous privilege. I have paid a price for some of the games that I have played, but I can’t deny that I’ve gained a lot in the process. I have also come to understand that my survival strategies were completely infuriating to many geeky white boys that I encountered in tech. Many guys saw me as getting ahead because I was a token woman. I was accused of sleeping my way to the top on plenty of occasions. I wasn’t simply seen as an alpha — I was seen as the kind of girl that screwed boys over. And because I was working on diversity and inclusion projects in computer science to attract more women and minorities to the field, I was seen as being the architect of excluding white men. For so many geeky guys I met, CS was the place where they felt powerful and I stood for taking that away. I represented an oppressor to them even though I felt like it was they who were oppressing me.

Privilege is complicated. There is no static hierarchical structure of oppression. Intersectionality provides one tool for grappling with the interplay between different identity politics, but there’s no narrative for why beta white male geeks might feel excluded from these frames. There’s no framework for why white Christians might feel oppressed by rights-oriented activists. When we think about privilege, we talk about the historical nature of oppression, but we don’t account for the ways in which people’s experiences of privilege are local. We don’t account for the confounding nature of perception, except to argue that people need to wake up.

Grappling with Perception

We live in a complex interwoven society. In some ways, that’s intentional. After WWII, many politicians and activists wanted to make the world more interdependent, to enable globalization to prevent another world war. The stark reality is that we all depend on social, economic, and technical infrastructures that we can’t see and don’t appreciate. Sure, we can talk about how our food is affordable because we’re dependent on underpaid undocumented labor. We can take our medicine for granted because we fail to appreciate all of the regulatory processes that go into making sure that what we consume is safe. But we take lots of things for granted; it’s the only way to move through the day without constantly panicking about whether or not the building we’re in will collapse.

Without understanding the complex interplay of things, it’s hard not to feel resentful about certain things that we do see. But at the same time, it’s not possible to hold onto the complexity. I can appreciate why individuals are indignant when they feel as though they pay taxes for that money to be given away to foreigners through foreign aid and immigration programs. These people feel like they’re struggling, feel like they’re working hard, feel like they’re facing injustice. Still, it makes sense to me that people’s sense of prosperity is only as good as their feeling that they’re getting ahead. And when you’ve been earning $40/hour doing union work only to lose that job and feel like the only other option is a $25/hr job, the feeling is bad, no matter that this is more than most people make. There’s a reason that Silicon Valley engineers feel as though they’re struggling and it’s not because they’re comparing themselves to everyone in the world. It’s because the standard of living keeps dropping in front of them. It’s all relative.

It’s easy to say “tough shit” or “boo hoo hoo” or to point out that most people have it much worse. And, at some levels, this is true. But if we don’t account for how people feel, we’re not going to achieve a more just world — we’re going to stoke the fires of a new cultural war as society becomes increasingly polarized.

The disconnect between statistical data and perception is astounding. I can’t help but shake my head when I listen to folks talk about how life is better today than it ever has been in history. They point to increased lifespan, new types of medicine, decline in infant mortality, and decline in poverty around the world. And they shake their heads in dismay about how people don’t seem to get it, don’t seem to get that today is better than yesterday. But perception isn’t about statistics. It’s about a feeling of security, a confidence in one’s ecosystem, a belief that through personal effort and God’s will, each day will be better than the last. That’s not where the vast majority of people are at right now. To the contrary, they’re feeling massively insecure, as though their world is very precarious.

I am deeply concerned that the people whose values and ideals I share are achieving solidarity through righteous rhetoric that also produces condescending and antagonistic norms. I don’t fully understand my discomfort, but I’m scared that what I’m seeing around me is making things worse. And so I went back to some of Martin Luther King Jr.’s speeches for a bit of inspiration today and I started reflecting on his words. Let me leave this reflection with this quote:

The ultimate weakness of violence is that it is a descending spiral,
begetting the very thing it seeks to destroy.
Instead of diminishing evil, it multiplies it.
Through violence you may murder the liar,
but you cannot murder the lie, nor establish the truth.
Through violence you may murder the hater,
but you do not murder hate.
In fact, violence merely increases hate.
So it goes.
Returning violence for violence multiplies violence,
adding deeper darkness to a night already devoid of stars.
Darkness cannot drive out darkness:
only light can do that.
Hate cannot drive out hate: only love can do that.
— Dr. Martin Luther King, Jr.

Image from Flickr: Andy Doyle

by zephoria at March 03, 2017 09:19 PM

March 01, 2017

Ph.D. student

arXiv preprint of Refutation of Bostrom’s Superintelligence Argument released

I’ve written a lot of blog posts about Nick Bostrom’s book Superintelligence, presenting what I think is a refutation of his core argument.

Today I’ve released an arXiv preprint with a more concise and readable version of this argument. Here’s the abstract:

Don’t Fear the Reaper: Refuting Bostrom’s Superintelligence Argument

In recent years prominent intellectuals have raised ethical concerns about the consequences of artificial intelligence. One concern is that an autonomous agent might modify itself to become “superintelligent” and, in supremely effective pursuit of poorly specified goals, destroy all of humanity. This paper considers and rejects the possibility of this outcome. We argue that this scenario depends on an agent’s ability to rapidly improve its ability to predict its environment through self-modification. Using a Bayesian model of a reasoning agent, we show that there are important limitations to how an agent may improve its predictive ability through self-modification alone. We conclude that concern about this artificial intelligence outcome is misplaced and better directed at policy questions around data access and storage.

I invite any feedback on this work.


by Sebastian Benthall at March 01, 2017 02:18 PM

February 20, 2017

Ph.D. alumna

Heads Up: Upcoming Parental Leave

There’s a joke out there that when you’re having your first child, you tell everyone personally and update your family and friends about every detail throughout the pregnancy. With Baby #2, there’s an abbreviated notice that goes out about the new addition, all focused on how Baby #1 is excited to have a new sibling. And with Baby #3, you forget to tell people.

I’m a living instantiation of that. If all goes well, I will have my third child in early March and I’ve apparently forgotten to tell anyone since folks are increasingly shocked when I indicate that I can’t help out with XYZ because of an upcoming parental leave. Oops. Sorry!

As noted when I gave a heads up with Baby #1 and Baby #2, I plan on taking parental leave in stride. I don’t know what I’m in for. Each child is different and each recovery is different. What I know for certain is that I don’t want to screw over collaborators or my other baby – Data & Society. As a result, I will be not taking on new commitments and I will be actively working to prioritize my collaborators and team over the next six months.

In the weeks following birth, my response rates may get sporadic and I will probably not respond to non-mission-critical email. I also won’t be scheduling meetings. I won’t go completely offline in March (mostly for my own sanity), but I am fairly certain that I will take an email sabbatical in July when my family takes some serious time off** to be with one another and travel.

A change in family configuration is fundamentally walking into the abyss. For as much as our culture around maternity leave focuses on planning, so much is unknown. After my first was born, I got a lot of work done in the first few weeks afterwards because he was sleeping all the time and then things got crazy just as I was supposedly going back to work. That was less true with #2, but with #2 I was going seriously stir crazy being home in the cold winter and so all I wanted was to go to lectures with him to get out of bed and soak up random ideas. Who knows what’s coming down the pike. I’m fortunate enough to have the flexibility to roll with it and I intend to do precisely that.

What’s tricky about being a parent in this ecosystem is that you’re kinda damned if you do, damned if you don’t. Women are pushed to go back to work immediately to prove that they’re serious about their work – or to take serious time off to prove that they’re serious about their kids. Male executives are increasingly publicly talking about taking time off, while they work from home.  The stark reality is that I love what I do. And I love my children. Life is always about balancing different commitments and passions within the constraints of reality (time, money, etc.).  And there’s nothing like a new child to make that balancing act visible.

So if you need something from me, let me know ASAP!  And please understand and respect that I will be navigating a lot of unknown and doing my best to achieve a state of balance in the upcoming months of uncertainty.

 

** July 2017 vacation. After a baby is born, the entire focus of a family is on adjustment. For the birthing parent, it’s also on recovery because babies kinda wreck your body no matter how they come out. Finding rhythms for sleep and food become key for survival. Folks talk about this time as precious because it can enable bonding. That hasn’t been my experience and so I’ve relished the opportunity with each new addition to schedule some full-family bonding time a few months after birth where we can do what our family likes best – travel and explore as a family. If all goes well in March, we hope to take a long vacation in mid-July where I intend to be completely offline and focused on family. More on that once we meet the new addition.

by zephoria at February 20, 2017 01:45 PM

February 15, 2017

Ph.D. alumna

When Good Intentions Backfire

… And Why We Need a Hacker Mindset


I am surrounded by people who are driven by good intentions. Educators who want to inform students, who passionately believe that people can be empowered through knowledge. Activists who have committed their lives to addressing inequities, who believe that they have a moral responsibility to shine a spotlight on injustice. Journalists who believe their mission is to inform the public, who believe that objectivity is the cornerstone of their profession. I am in awe of their passion and commitment, their dedication and persistence.

Yet, I’m existentially struggling as I watch them fight for what is right. I have learned that people who view themselves through the lens of good intentions cannot imagine that they could be a pawn in someone else’s game. They cannot imagine that the values and frames that they’ve dedicated their lives towards — free speech, media literacy, truth — could be manipulated or repurposed by others in ways that undermine their good intentions.

I find it frustrating to bear witness to good intentions getting manipulated, but it’s even harder to watch how those who are wedded to good intentions are often unwilling to acknowledge this, let alone start imagining how to develop the appropriate antibodies. Too many folks that I love dearly just want to double down on the approaches they’ve taken and the commitments they’ve made. On one hand, I get it — folks’ life-work and identities are caught up in these issues.

But this is where I think we’re going to get ourselves into loads of trouble.

The world is full of people with all sorts of intentions. Their practices and values, ideologies and belief systems collide in all sorts of complex ways. Sometimes, the fight is about combating horrible intentions, but often it is not. In college, my roommate used to pound a mantra into my head whenever I would get spun up about something: Do not attribute to maliciousness what you can attribute to stupidity. I return to this statement a lot when I think about how to build resilience and challenge injustices, especially when things look so corrupt and horribly intended — or when people who should be allies see each other as combatants. But as I think about how we should resist manipulation and fight prejudice, I also think that it’s imperative to move away from simply relying on “good intentions.”

I don’t want to undermine those with good intentions, but I also don’t want good intentions to be a tool that can be used against people. So I want to think about how good intentions get embedded in various practices and the implications of how we view the different actors involved.

The Good Intentions of Media Literacy

When I penned my essay “Did Media Literacy Backfire?”, I wanted to ask those who were committed to media literacy to think about how their good intentions — situated in a broader cultural context — might not play out as they would like. Folks who critiqued my essay on media literacy pushed back in all sorts of ways, both online and off. Many made me think, but some also reminded me that my way of writing was off-putting. I was accused of using the question “Did media literacy backfire?” to stoke clicks. Some snarkily challenged my suggestion that media literacy even meaningfully exists, asked me to be specific about which instantiations I meant (because I used the phrase “standard implementations”), and otherwise pushed for the need to double down on “good” or “high quality” media literacy. The reality is that I’m a huge proponent of their good intentions, and have long shared them, but I wrote this piece because I’m worried that good intentions can backfire.

While I was researching youth culture, I never set out to understand what curricula teachers used in the classroom. I wasn’t there to assess the quality of the teachers or the efficacy of their formal educational approaches. I simply wanted to understand what students heard and how they incorporated the lessons they received into their lives. Although the teens that I met had a lot of choice words to offer about their teachers, I’ve always assumed that most teachers entered the profession with the best of intentions, even if their students couldn’t see that. But I spent my days listening to students’ frustrations and misperceptions of the messages teachers offered.

I’ve never met an educator who thinks that the process of educating is easy or formulaic. (Heck, this is why most educators roll their eyes when they hear talk of computerized systems that can educate better than teachers.) So why do we assume that well-intended classroom lessons — or even well-designed curricula — will play out as we imagine? This isn’t simply about the efficacy of the lesson or the skill of the teacher, but about the cultural context in which these conversations occur.

In many communities in which I’ve done research, the authority of teachers is often questioned. Nowhere is this more painfully visible than when well-intended highly educated (often white) teachers come to teach in poorer communities of color. Yet, how often are pedagogical interventions designed by researchers really taking into account the doubt that students and their parents have of these teachers? And how do we as educators and scholars grapple with how we might have made mistakes?

I’m not asking “Did Media Literacy Backfire?” to be a pain in the toosh, but to genuinely highlight how the ripple effects of good intentions may not play out as imagined on the ground for all sorts of reasons.

The Good Intentions of Engineers

From the outside, companies like Facebook and Google seem pretty evil to many people. They’re situated in a capitalist logic that many advocates and progressives despise. They’re opaque and they don’t engage the public in their decision-making processes, even when those decisions have huge implications for what people read and think. They’re extremely powerful and they’ve made a lot of people rich in an environment where financial inequality and instability is front and center. Primarily located in one small part of the country, they also seem like a monolithic beast.

As a result, it’s not surprising to me that many people assume that engineers and product designers have evil (or at least financially motivated) intentions. There’s an irony here, because my experience is the opposite. Most product teams have painfully good intentions, shaped by utopic visions of how the ideal person would interact with the ideal system. Nothing is more painful than sitting through a product design session with design personae that have been plucked from a collection of clichés.

I’ve seen a lot of terribly naive product plans, with user experience mockups that lack any sense of how or why people might interact with a system in unexpected ways. I spent years tracking how people did unintended things with social media, such as the rise of “Fakesters,” or of teenagers who gamed Facebook’s system by inserting brand names into their posts, realizing that this would make their posts rise higher in the social network’s news feed. It has always boggled my mind how difficult it is for engineers and product designers to imagine how their systems would get gamed. I actually genuinely loved product work because I couldn’t help but think about how to break a system through unexpected social practices.

Most products and features that get released start with good intentions, but they too get munged by the system, framed by marketing plans, and manipulated by users. And then there’s the dance of chaos as companies seek to clean up PR messes (which often involves non-technical actors telling insane fictions about the product), patch bugs to prevent abuse, and throw bandaids on parts of the code that didn’t play out as intended. There’s a reason that no one can tell you exactly how Google’s search engine or Facebook’s news feed works. Sure, the PR folks will tell you that it’s proprietary code. But the ugly truth is that the code has been patched to smithereens to address countless types of manipulation and gamification (e.g., SEO to bots). It’s quaint to read the original “page rank” paper that Brin and Page wrote when they envisioned how a search engine could ideally work. That’s so not how the system works today.

The good intentions of engineers and product people, especially those embedded in large companies, are often doubted, dismissed as a sheen over a capitalist agenda. Yet, like many other well-intended actors, I often find that makers feel misunderstood and maligned, assumed to have evil thoughts. And I often think that when non-tech people start by assuming that they’re evil, we lose a significant opportunity to address problems.

The Good Intentions of Journalists

I’ve been harsh on journalists lately, mostly because I find it so infuriating that a profession that is dedicated to being a check to power could be so ill-equipped to be self-reflexive about its own practices.

Yet, I know that I’m being unfair. Their codes of conduct and idealistic visions of their profession help journalists and editors and publishers stay strong in an environment where they are accustomed to being attacked. It just kills me that the culture of journalism makes those who have an important role to play unable to see how they can be manipulated at scale.

Sure, plenty of top-notch journalists are used to negotiating deception and avoidance. You gotta love a profession that persistently bangs its head against a wall of “no comment.” But journalism has grown up as an individual sport; a competition for leads and attention that can get fugly in the best of configurations. Time is rarely on a journalist’s side, just as nuance is rarely valued by editors. Trying to find “balance” in this ecosystem has always been a pipe dream, but objectivity is a shared hallucination that keeps well-intended journalists going.

Powerful actors have always tried to manipulate the news media, especially State actors. This is why the fourth estate is seen as so important in the American context. Yet, the game has changed, in part because of the distributed power of the masses. Social media marketers quickly figured out that manufacturing outrage and spectacle would give them a pathway to attention, attracting news media like bees to honey. Most folks rolled their eyes, watching as monied people played the same games as State actors. But what about the long tail? How do we grapple with the long tail? How should journalists respond to those who are hacking the attention economy?

I am genuinely struggling to figure out how journalists, editors, and news media should respond in an environment in which they are getting gamed. What I do know from 12-step programs is that the first step is to admit that you have a problem. And we aren’t there yet. And sadly, that means that good intentions are getting gamed.

Developing the Hacker Mindset

I’m in awe of how many of the folks I vehemently disagree with are willing to align themselves with others they vehemently disagree with when they have a shared interest in the next step. Some conservative and hate groups are willing to be odd bedfellows because they’re willing to share tactics, even if they don’t share end goals. Many progressives can’t imagine coming together with folks who have a slightly different vision, let alone a different end goal, even to work out shared tactics. Why is that?

I’m writing these essays not because I know the solutions to some of the most complex problems that we face — I don’t — but because I think that we need to start thinking about these puzzles sideways, upside down, and from non-Euclidean spaces. In short, I keep thinking that we need more well-intended folks to start thinking like hackers.

Think just as much about how you build an ideal system as how it might be corrupted, destroyed, manipulated, or gamed. Think about unintended consequences, not simply to stop a bad idea but to build resilience into the model.

As a developer, I always loved the notion of “extensibility” because it was an ideal of building a system that could take unimagined future development into consideration. Part of why I love the notion is that it’s bloody impossible to implement. Sure, I (poorly) comment my code and build object-oriented structures that would allow for some level of technical flexibility. But, at the end of the day, I’d always end up kicking myself for not imagining a particular use case in my original design and, as a result, doing a lot more band-aiding than I’d like to admit. The masters of software engineering extensibility are inspiring because they don’t just hold onto the task at hand, but have a vision for all sorts of different future directions that may never come to fruition. That thinking is so key to building anything, whether it be software or a campaign or a policy. And yet, it’s not a muscle that we train people to develop.
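To make the idea concrete, here is a minimal sketch of what extensibility can look like in practice. The names and the handler-registry pattern are illustrative choices of my own, not anything from a specific product: the core dispatch code is written once, and unanticipated behaviors get plugged in later rather than patched in.

```python
# Illustrative sketch (hypothetical names): extensibility via a handler registry.
# The core dispatch loop is written once; unforeseen use cases are added later
# by registering new handlers, rather than by patching the core code.

from typing import Callable, Dict

HANDLERS: Dict[str, Callable[[dict], None]] = {}

def register(event_type: str):
    """Register a handler for an event type without touching dispatch()."""
    def wrap(fn: Callable[[dict], None]) -> Callable[[dict], None]:
        HANDLERS[event_type] = fn
        return fn
    return wrap

def dispatch(event: dict) -> None:
    """Core code: look up a handler instead of hard-coding every case."""
    handler = HANDLERS.get(event.get("type"), lambda e: None)
    handler(event)

# A use case the original designer never imagined can be bolted on later:
@register("comment")
def handle_comment(event: dict) -> None:
    print("new comment:", event.get("text", ""))

dispatch({"type": "comment", "text": "hello"})  # prints: new comment: hello
```

Even so, a registry only buys flexibility along the axes the designer anticipated; the truly unimagined cases still force the band-aids described above.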

If we want to address some of the major challenges in civil society, we need the types of people who think 10 steps ahead in chess, imagine innovative ways of breaking things, and think with extensibility at their core. More importantly, we all need to develop that sensibility in ourselves. This is the hacker mindset.

This post was originally posted on Points. It builds off of a series of essays on topics affecting the public sphere written by folks at Data & Society. As expected, my earlier posts ruffled some feathers, and I’ve been trying to think about how to respond in a productive manner. This is my attempt.

Flickr Image: CC BY 2.0-licensed image by DaveBleasdale.

by zephoria at February 15, 2017 05:51 PM

February 12, 2017

Ph.D. student

the “hacker class”, automation, and smart capital


I mentioned earlier that I no longer think hacker class consciousness is important.

As incongruous as this claim may seem now, I’ve explained that it comes up as I go through old notes and discard them.

I found another page of notes reminding me that there was a little more nuance to my earlier position than I remembered, which has to do with the kind of labor done by “hackers”, a term I reserve the right to use in the MIT/Eric S. Raymond sense, without the political baggage that has since attached to the term.

The point, made in response to Eric S. Raymond’s “How To Become A Hacker” essay, was that part of what it means to be a “hacker” is to hate drudgery. The whole point of programming a computer is so that you never have to do the same activity twice. Ideally, anything that’s repeatable about the activity gets delegated to the computer.
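As a toy illustration of that ethic (my own example, not Raymond’s), once a repetitive chore has been scripted, the computer repeats it forever and the programmer never does it by hand again. The file names and the “category” column below are hypothetical.

```python
# Toy illustration (my own, not from Raymond's essay): a repeatable chore --
# tallying rows per category across many CSV files -- delegated to the computer
# so it never has to be done by hand again.

import csv
from collections import Counter
from pathlib import Path

def summarize(csv_path: Path) -> Counter:
    """Count rows per 'category' column in one CSV file."""
    counts: Counter = Counter()
    with csv_path.open(newline="") as f:
        for row in csv.DictReader(f):
            counts[row.get("category", "unknown")] += 1
    return counts

# Run the same drudge work over every file in a (hypothetical) reports/ folder.
for path in sorted(Path("reports").glob("*.csv")):
    print(path.name, dict(summarize(path)))
```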

This is relevant in the contemporary political situation because we’re probably now dealing with the upshot of structural underemployment due to automation and the resulting inequalities. This remains a topic that scholars, technologists, and politicians seem systematically unable to address directly even when they attempt to, because everybody who sees the writing on the wall is too busy trying to get the sweet end of that deal.

It’s a very old argument that those who own the means of production are able to negotiate for a better share of the surplus value created by their collaborations with labor. Those who own or invest in capital generally speaking would like to increase that share. So there’s market pressure to replace reliance on skilled labor, which is expensive, with reliance on less skilled labor, which is plentiful.

So what gets industrialists excited is smart capital, or a means of production that performs the “skilled” functions formerly performed by labor. Call it artificial intelligence. Call it machine learning. Call it data science. Call it “the technology industry”. That’s what’s happening and been happening for some time.

This leaves good work for a single economic class of people: those whose skills are precisely the ones that produce this smart capital.

I never figured out what the end result of this process would be. I imagined at one point that the creation of the right open source technology would bring about a profound economic transformation. A far-fetched hunch.


by Sebastian Benthall at February 12, 2017 10:14 PM

three kinds of social explanation: functionalism, politics, and chaos

Roughly speaking, I think there are three kinds of social explanation. I mean “explanation” in a very thick sense; an explanation is an account of why some phenomenon is the way it is, grounded in some kind of theory that could be used to explain other phenomena as well. To say there are three kinds of social explanation is roughly equivalent to saying there are three ways to model social processes.

The first of these kinds of social explanation is functionalism. This explains some social phenomenon in terms of the purpose that it serves. Generally speaking, fulfilling this purpose is seen as necessary for the survival or continuation of the phenomenon. Maybe it simply is the continued survival of the social organism that is its purpose. A kind of agency, though probably very limited, is ascribed to the entire social process. The activity internal to the process is then explained by the purpose that it serves.

The second kind of social explanation is politics. Political explanations focus on the agencies of the participants within the social system and reject the unifying agency of the whole. Explanations based on class conflict or personal ambition are political explanations. Political explanations of social organization make it out to be the result of a complex of incentives and activity. Where there is social regularity, it is because of the political interests of some of its participants in the continuation of the organization.

The third kind of social explanation is hardly an explanation at all. It is explanation by chaos. This sort of explanation is quite rare, as it does not provide much of the psychological satisfaction we like from explanations. I mention it here because I think it is an underutilized mode of explanation. In large populations, much of the activity that happens will do so by chance. Even large organizations may form according to stochastic principles that do not depend on any real kind of coordinated or purposeful effort.
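One way to see that this is not an empty category is with a toy simulation (my own sketch, not anything from the post): if each newcomer either founds a new group or joins an existing one with probability proportional to its size, a few large “organizations” emerge from nothing but chance and path dependence.

```python
# Toy model of explanation-by-chaos (my own sketch): groups grow by chance alone.
# Each newcomer founds a new group or joins an existing one with probability
# proportional to its current size; no coordination or purpose is involved,
# yet a few very large "organizations" reliably appear.

import random

def simulate(population: int = 10_000, new_group_weight: float = 1.0, seed: int = 0):
    random.seed(seed)
    groups = []  # groups[i] is the current size of group i
    for _ in range(population):
        total = sum(groups) + new_group_weight
        if random.random() < new_group_weight / total:
            groups.append(1)  # a new group is founded by chance
        else:
            # join an existing group, weighted by size (pure path dependence)
            i = random.choices(range(len(groups)), weights=groups)[0]
            groups[i] += 1
    return sorted(groups, reverse=True)

sizes = simulate()
print("largest groups:", sizes[:5], "out of", len(sizes), "groups")
```

The point is not that real organizations actually work this way, only that heavy-tailed “structure” can arise without anyone’s purpose or interest explaining it.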

It is important to consider chaotic explanations of social processes when we consider the limits of political expertise. If we have a low opinion of any particular person’s ability to understand their social environment and act strategically, then we must accept that much of their “politically” motivated action will be based on misconceptions and therefore be, in an objective sense, random. At this point political explanations become facile, and social regularity has to be explained either in terms of the ability of social organizations qua organizations to survive, or in a deflationary way: i.e., the organization is not really there, but only in the eye of the beholder.


by Sebastian Benthall at February 12, 2017 02:36 AM