School of Information Blogs

March 28, 2017

Ph.D. student

More assessment of AI X-risk potential

I’ve been stimulated by Luciano Floridi’s recent article in Aeon, “Should we be afraid of AI?”. I’m surprised that this issue hasn’t been settled yet, since it seems like “we” have the formal tools necessary to solve the problem decisively. But nevertheless this appears to be the subject of debate.

I was referred to Kaj Sotala’s rebuttal of an earlier work by Floridi which his Aeon article was based on. The rebuttal appears in this APA Newsletter on Philosophy and Computers. It is worth reading.

The issue that I’m most interested in is whether or not AI risk research should constitute a special, independent branch of research, or whether it can be approached just as well by pursuing a number of other more mainstream artificial intelligence research agendas. My primary engagement with these debates has so far been an analysis of Nick Bostrom’s argument in his book Superintelligence, which tries to argue in particular that there is an existential risk (or X-risk) to humanity from artificial intelligence. “Existential risk” means a risk to the existence of something, in this case humanity. And the risk Bostrom has written about is the risk of eponymous superintelligence: an artificial intelligence that gets smart enough to improve its own intelligence, achieve omnipotence, and end the world as we know it.

I’ve posted my rebuttal to this argument on arXiv. The one-sentence summary of the argument is: algorithms can’t just modify themselves into omnipotence because they will hit performance bounds due to data and hardware.

A number of friends have pointed out to me that this is not a decisive argument. They say: don’t you just need the AI to advance fast enough and far enough to be an existential threat?

There are a number of reasons why I don’t believe this is likely. In fact, I believe that it is provably vanishingly unlikely. This is not to say that I have a proof, per se. I suppose it’s incumbent on me to work it out and see if the proof is really there.

So: Herewith is my Sketch Of A Proof of why there’s no significant artificial intelligence existential risk.

Lemma: Intelligence advances due to purely algorithmic self-modification will always plateau due to data and hardware constraints, which advance more slowly.

Proof: This paper.

As a consequence, all artificial intelligence explosions will be sigmoid: starting slow, accelerating, then decelerating, then growing so slowly as to be effectively asymptotic. Let’s call the level of intelligence at which an explosion asymptotes the explosion bound.
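To be concrete about what I mean by “sigmoid” here, think of logistic growth toward the bound (the particular parameterization is just for illustration, not anything from the paper):

    I(t) = \frac{B}{1 + e^{-k(t - t_0)}}, \qquad \lim_{t \to \infty} I(t) = B

where I(t) is the intelligence level at time t, k is the growth rate, t_0 is the inflection point, and B is the explosion bound.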

There’s empirical support for this claim. Basically, we have never had a really big intelligence explosion due to algorithmic improvement alone. Looking at the impressive results of the last seventy years, most of the impressiveness can be attributed to advances in hardware and data collection. Notoriously, Deep Learning is largely just decades-old artificial neural network technology repurposed for GPUs in the cloud. Which is awesome and a little scary. But it’s not an algorithmic intelligence explosion. It’s a consolidation of material computing power and sensor technology by organizations. The algorithmic advances fill those material shoes really quickly, it’s true. This is precisely the point: it’s not the algorithms that are the bottleneck.

Observation: Intelligence explosions are happening all the time. Most of them are small.

Once we accept the idea that intelligence explosions are all bounded, it becomes rather arbitrary where we draw the line between an intelligence explosion and some lesser algorithmic intelligence advance. There is a real sense in which any significant intelligence advance is a sigmoid expansion in intelligence. This would include run-of-the-mill scientific discoveries and good ideas.

If intelligence explosions are anything like virtually every other interesting empirical phenomenon, then they are distributed according to a heavy tail distribution. This means a distribution with a lot of very small values and a diminishing probability of higher values that nevertheless assigns some probability to very high values. Assuming intelligence is something that can be quantified and observed empirically (a huge ‘if’ taken for granted in this discussion), we can (theoretically) take a good hard look at the ways intelligence has advanced. Look around you. Do you see people and computers getting smarter all the time, sometimes in leaps and bounds but most of the time minutely? That’s a confirmation of this hypothesis!

The big idea here is really just to assert that there is a probability distribution over intelligence explosion bounds from which all actual intelligence explosions are being drawn. This follows more or less directly from the conclusion that all intelligence explosions are bounded. Once we posit such a distribution, it becomes possible to take expected values of functions of its values.
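As a toy illustration of this picture (my own sketch, not anything from the paper): draw explosion bounds from a heavy-tailed Pareto distribution and let each explosion grow logistically toward its bound. Most draws are tiny, a few are large, and none are unbounded.

    import numpy as np

    np.random.seed(0)

    # Toy model: each intelligence explosion has a bound drawn from a
    # heavy-tailed (Pareto) distribution and grows sigmoidally toward it.
    n_explosions = 100000
    bounds = 1.0 + np.random.pareto(a=2.0, size=n_explosions)

    def explosion_trajectory(t, bound, rate=1.0, midpoint=5.0):
        # Logistic growth that asymptotes at the explosion bound.
        return bound / (1.0 + np.exp(-rate * (t - midpoint)))

    # Most explosion bounds are small; a few are large; none are infinite.
    print("median bound:", np.median(bounds))
    print("99.9th percentile bound:", np.percentile(bounds, 99.9))
    print("largest bound in sample:", bounds.max())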

Empirical claim: Hardware and sensing advances diffuse rapidly relative to their contribution to intelligence gains.

There’s a material, socio-technical analog to Bostrom’s explosive superintelligence. We could imagine a corporation that is working in secret on new computing infrastructure. Whenever it has an advance in computing infrastructure, the AI people (or increasingly, the AI-writing-AI) develop software that maximizes the use of this new technology. Then it uses that technology to enrich its own computer-improving facilities. When it needs more…minerals…or whatever it needs to further its research efforts, it finds a way to get them. It proceeds to take over the world.

This may presently be happening. But evidence suggests that this isn’t how the technology economy really works. No doubt Amazon (for example) is using Amazon Web Services internally to do its business analytics. But it also makes its business out of selling its computing infrastructure to other organizations as a commodity. That’s actually the best way it can enrich itself.

What’s happening here is the diffusion of innovation, which is a well-studied phenomenon in economics and other fields. Ideas spread. Technological designs spread. I’d go so far as to say that it is often (perhaps always?) the best strategy for some agent that has locally discovered a way to advance its own intelligence to figure out how to trade that intelligence to other agents. Almost always that trade involves the diffusion of the basis of that intelligence itself.

Why? Because there are independent intelligence advances of varying sizes happening all the time, so there’s actually a very competitive market for innovation that quickly devalues any particular gain. A discovery, if hoarded, will likely be discovered by somebody else. The race to get credit for any technological advance at all motivates diffusion and disclosure.

The result is that the distribution of innovation, rather than concentrating into very tall spikes, is constantly flattening and fattening itself. That’s important because…

Claim: Intelligence risk is not due to absolute levels of intelligence, but relative intelligence advantage.

The idea here is that since humanity is composed of lots of interacting intelligent sociotechnical organizations, any hostile intelligence is going to have a lot of intelligent adversaries. If the game of life can be won through intelligence alone, then it can only be won with a really big intelligence advantage over other intelligent beings. It’s not about absolute intelligence; it’s intelligence inequality we need to worry about.

Consequently, the more intelligence advances (i.e., technologies) diffuse, the less risk there is.

Conclusion: The chance of an existential risk from an intelligence explosion is small and decreasing all the time.

So consider this: globally, there’s tons of investment in technologies that, when discovered, allow for local algorithmic intelligence explosions.

But even if we assume these algorithmic advances are nearly instantaneous, they are still bounded.

Lots of independent bounded explosions are happening all the time. But they are also diffusing all the time.

Since the global intelligence distribution is always fattening, that means that the chance of any particular technological advance granting a decisive advantage over others is decreasing.

There is always the possibility of a fluke, of course. But if there were going to be a humanity-destroying technological discovery, it would probably have already been invented and destroyed us. Since it hasn’t, we have a lot more resilience to threats from intelligence explosions, not to mention a lot of other threats.

This doesn’t mean that it isn’t worth trying to figure out how to make AI better for people. But it does diminish the need to think about artificial intelligence as an existential risk. It makes AI much more comparable to a biological threat. Biological threats could be really bad for humanity. But there’s also the organic reality that life is very resilient and human life in general is very secure precisely because it has developed so much intelligence.

I believe that thinking about the risks of artificial intelligence as analogous to the risks from biological threats is helpful for prioritizing where research effort in artificial intelligence should go. Just because AI doesn’t present an existential risk to all of humanity doesn’t mean it doesn’t kill a lot of people or make their lives miserable. On the contrary, we are in a world with both a lot of artificial and non-artificial intelligence and a lot of miserable and dying people. These phenomena are not causally disconnected. A good research agenda for AI could start with an investigation of these actually miserable people and what their problems are, and how AI is causing that suffering or alternatively what it could do to improve things. That would be an enormously more productive research agenda than one that aims primarily to reduce the impact of potential explosions which are diminishingly unlikely to occur.


by Sebastian Benthall at March 28, 2017 01:07 AM

March 26, 2017

adjunct professor

D-Link Updates

The seal has been lifted on the complaint in the D-Link case. This document highlights the previously redacted portions in yellow.

by web at March 26, 2017 12:25 AM

March 24, 2017

MIMS 2014

Adventures in Sparkland (or… How I Learned that Michael Caine was the original Jason Bourne)

Ready, set, revive data blog! What better way to take advantage of the sketchy wifi I’ve encountered along my travels through South America than to do some data science?

For some time now, I’ve wanted to get my feet wet with Apache Spark, the open source software that has become a standard tool on the data scientist’s utility belt when it comes to dealing with “big data.” Specifically, I was curious how Spark can understand complex human-generated text (through topic or theme modeling), as well as its ability to make recommendations based on preferences we’ve expressed in the past (i.e. how Netflix decides what to suggest you should watch next). For this, it only seemed natural to focus my energies on something I am also quite passionate about: Movies!

Many people have already used the well-known and publicly available Movielens dataset (README, data) to test out recommendation engines before. To add my own twist on standard practice, I added a topic model based off of movie plot data that I scraped from Wikipedia. This blog post will go into detail about the whole process. It’s organized into the following sections:

Setting Up The Environment
#ScrapeMyPlot
Model Dem Topics
Rev Your Recommendation Engines
Results/Findings

Setting Up The Environment

To me, this is always the most boring part of doing a data project. Unfortunately, this yak-shaving is wholly necessary to ever do anything interesting. If you only came to read about how this all relates to movies, feel free to skip over this part…

I won’t go into huge depth here, but I will say I effin love Docker as a means to set up my environment. The reason Docker is so great is that it makes a dev environment totally explicit and portable—which means anybody who’s actually interested in the gory details can go wild with them on my Github (and develop my project further, if they so please).

Another reason Docker is the awesomest is that it made the process of simulating a cluster on my little Macbook Air relatively straightforward. Spark might be meant to be run on a cluster of multiple computers, but being on a backpacker’s budget, I wasn’t keen on commandeering a crowd of cloud computers using Amazon Web Services. I wanted to see what I could do with what I had.

The flip side of this, of course, is that everything was constrained to my 5-year-old laptop’s single processor and the 4GB of RAM I could spare to be shared by the entire virtual cluster. I didn’t think this would be a problem since I wasn’t dealing with big data, but I did keep running up against some annoying memory issues that proved to be a pain. More about that later.

#ScrapeMyPlot

The first major step in my project was getting ahold of movie plot data for each of the titles in the Movielens dataset. For this, I wrote a scraper in Python using this handy wikipedia python library I found. The main idea behind my simple program was to: 1) search Wikipedia using the title of each movie, 2) use category tags to determine which search result was the article relating to the actual film in question, and 3) use Python’s BeautifulSoup and Wikipedia’s generally consistent html structure to extract the “plot” section from each article.
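In case it helps, here’s a stripped-down sketch of that logic (the full scraper, with retries and logging, lives with the rest of the gory details on Github; the “films” category check and the “Plot” section id are heuristics that work for most, not all, articles):

    import wikipedia
    from bs4 import BeautifulSoup

    def get_plot(movie_title):
        # 1) Search Wikipedia for the movie title.
        for result in wikipedia.search(movie_title):
            try:
                page = wikipedia.page(result, auto_suggest=False)
            except wikipedia.exceptions.WikipediaException:
                continue
            # 2) Use category tags to check this is actually a film article.
            if not any("films" in c.lower() for c in page.categories):
                continue
            # 3) Pull the "Plot" section out of the article HTML.
            soup = BeautifulSoup(page.html(), "html.parser")
            headline = soup.find("span", {"class": "mw-headline", "id": "Plot"})
            if headline is None:
                return None
            paragraphs = []
            for sibling in headline.parent.find_next_siblings():
                if sibling.name == "h2":   # reached the next section
                    break
                if sibling.name == "p":
                    paragraphs.append(sibling.get_text())
            return " ".join(paragraphs)
        return None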

I wrapped these three steps in a bash script that would keep pinging wikipedia until it had attempted to grab plots for all the films in the Movielens data. This was something I could let run overnight or while trying to learn to dance like these people (SPOILER ALERT: I still can’t)

The results of this automated strategy were fair overall. Out of the 3,883 movie titles in the Movielens data, I was able to extract plot information for 2,533 or roughly 2/3 of them. I was hoping for ≥ 80%, but what I got was definitely enough to get started.

As I would later find however, even what I was able to grab was sometimes of dubious quality. For example, when the scraper was meant to grab the plot for Kids, the risqué 90’s drama about sex/drug-fueled teens in New York City, it grabbed the plot for Spy Kids instead. Not. the. same. Or when it was meant to grab the plot for Wild Things, another risqué 90’s title (but otherwise great connector in the Kevin Bacon game), it grabbed the plot for Where The Wild Things Are. Again, not. the. same. When these movies popped up in the context of trying to find titles that are similar to Toy Story, it was definitely enough to raise an eyebrow…

All this points to the importance of eating your own dog food when it comes to working with new, previously un-vetted data. Yes, it is a time consuming process, but it’s very necessary (and at least for this movie project, mildly entertaining).

Model Dem Topics

So first, one might ask: why go through the trouble of using a topic model to describe movie plot data? Well for one thing, it’s kinda interesting to see how a computer would understand movie plots and relate them to one another using probability-based artificial intelligence. But topic models offer practical benefits as well.

For one thing, absent a topic model, a computer generally represents a plot summary (or any document for that matter) as a bag of the words contained in that summary. That can be a lot of words, especially because a computer has to keep track of the words in the summary of not just a single movie, but rather the union of all the words in all the summaries of all the movies in the whole dataset.

Topic models reduce the complexity of representing a plot summary from a whole bag of words to a much smaller set of topics. This makes storing information about movies much more efficient in a computer’s memory. It also significantly speeds up calculations you might want to perform, such as seeing how similar one movie plot is to another. And finally, using a topic model can potentially help the computer describe the similarities between movies in a more sensible way. This increased accuracy can be used to improve the performance of other models, such as a recommendation engine.

Spark learns the topics across a set of plot summaries using a probabilistic process known as Latent Dirichlet Allocation or LDA. I won’t describe how LDA works in great depth (look here if you are interested in learning more), but after analyzing all the movie plots, it spits out a set of topics, i.e. lists of words that are supposed to be thematically related to each other if the algorithm did its job right. Each word within each topic has a weight proportional to its importance within the topic; words can repeat across topics but their weights will differ.
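In Spark terms, the core of this step is only a few lines. Here’s a simplified sketch using the DataFrame API (the real pipeline, including preprocessing, is on Github; assume plots is a DataFrame of preprocessed token lists, and the parquet path is just a placeholder):

    from pyspark.sql import SparkSession
    from pyspark.ml.feature import CountVectorizer
    from pyspark.ml.clustering import LDA

    spark = SparkSession.builder.appName("movie-topics").getOrCreate()

    # Columns: movie_id, tokens (stemmed, filtered word list per plot summary).
    plots = spark.read.parquet("movie_plots_tokens.parquet")  # placeholder path

    # Turn token lists into term-count vectors.
    cv = CountVectorizer(inputCol="tokens", outputCol="features", vocabSize=10000)
    cv_model = cv.fit(plots)
    vectorized = cv_model.transform(plots)

    # Fit a 16-topic LDA model.
    lda = LDA(k=16, maxIter=50, seed=42)
    lda_model = lda.fit(vectorized)

    # Each topic is a weighted list of vocabulary terms.
    topics = lda_model.describeTopics(maxTermsPerTopic=20)
    vocab = cv_model.vocabulary
    for row in topics.collect():
        print([vocab[i] for i in row.termIndices])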

One somewhat annoying thing about using LDA is that you have to specify the number of topics before running the algorithm, which is an awkward thing to pinpoint a priori. How can you know exactly how many topics exist across a corpus of movies—especially without reading all of the summaries? Another wrinkle to LDA is how sensitive it can be to the degree of pre-processing performed upon a text corpus before feeding it to the model.

After settling on 16 topics and a slew of preprocessing steps (stop word removal, Porter stemming, and part-of-speech filtering), I started to see topics that made sense. For example, there was a topic that broadly described a “Space Opera”:

Top 20 most important tokens in the “Space Opera” topic:

[ship, crew, alien, creatur, planet, space, men, group, team, time, order, board, submarin, death, plan, mission, home, survivor, offic, bodi]

Another topic seemed to be describing the quintessential sports drama. BTW, the lopped-off words like submarin or creatur are a result of Porter stemming, which reduces words to their more essential root forms.

Top 20 most important tokens in the “Sports Drama” topic:

[team, famili, game, offic, time, home, friend, player, day, father, men, man, money, polic, night, film, life, mother, car, school]

To sanity check the topic model, I was curious to see how LDA would treat films that were not used in the training of the original model. For this, I had to get some more movie plot data, which I did based on this IMDB list of top movies since 2000. The titles in the Movielens data tend to run a bit on the older side, so I knew I could find some fresh material by searching for some post-2000 titles.

To eyeball the quality of the results, I compared the topic model with the simpler “bag of words” model I mentioned earlier. For a handful of movies in the newer post-2000 set, I asked both models to return the most similar movies they could find in the original Movielens set.

I was encouraged (though not universally) by the results. Take, for example, the results returned for V for Vendetta and Minority Report.

Similarity Rank: V for Vendetta


Similarity Rank | Bag of Words | Topic Model
1 | But I’m a Cheerleader | Candidate, The
2 | Life Is Beautiful | Dersu Uzala
3 | Evita | No Small Affair
4 | Train of Life | Terminator 2: Judgment Day
5 | Jakob the Liar | Schindler’s List
6 | Halloween | Mulan
7 | Halloween: H20 | Reluctant Debutante, The
8 | Halloween II | All Quiet on the Western Front
9 | Forever Young | Spartacus
10 | Entrapment | Grand Day Out, A

Similarity Rank: Minority Report


Similarity Rank | Bag of Words | Topic Model
1 | Blind Date | Seventh Sign, The
2 | Scream 3 | Crow: Salvation, The
3 | Scream | Crow, The
4 | Scream of Stone | Crow: City of Angels, The
5 | Man of Her Dreams | Passion of Mind
6 | In Dreams | Soylent Green
7 | Silent Fall | Murder!
8 | Eyes of Laura Mars | Hunchback of Notre Dame, The
9 | Waking the Dead | Batman: Mask of the Phantasm
10 | I Can’t Sleep | Phantasm

Thematically, it seems like for these two movies, the topic model gives broadly more similar/sensible results in the top ten than the baseline “bag of words” approach. (Technical note: the “bag of words” approach I refer to is more specifically a Tf-Idf transformation, a standard method used in the field of Information Retrieval and thus a reasonable baseline to use for comparison here.)
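For completeness, here is roughly what that Tf-Idf baseline looks like in Spark (a simplified sketch reusing the tokenized plots DataFrame from above; query_id is a placeholder and the hashing dimension is illustrative):

    from pyspark.ml.feature import HashingTF, IDF, Normalizer

    # Same tokenized plots DataFrame as before (columns: movie_id, tokens).
    tf = HashingTF(inputCol="tokens", outputCol="tf", numFeatures=2**16)
    tf_df = tf.transform(plots)

    idf = IDF(inputCol="tf", outputCol="tfidf")
    tfidf_df = idf.fit(tf_df).transform(tf_df)

    # L2-normalize so a dot product between two vectors is cosine similarity.
    norm = Normalizer(inputCol="tfidf", outputCol="tfidf_norm", p=2.0)
    normed = norm.transform(tfidf_df)

    # Cosine similarity between a query movie and every other movie.
    query_id = 1  # placeholder movie_id
    query_vec = normed.filter(normed.movie_id == query_id).first().tfidf_norm
    sims = normed.rdd.map(
        lambda row: (row.movie_id, float(row.tfidf_norm.dot(query_vec))))
    top10 = sims.sortBy(lambda pair: -pair[1]).take(11)  # first hit is the movie itself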

Although the topic model seemed to deliver in the case of these two films, that was not universally the case. In the case of Michael Clayton, there was no contest as to which model was better:

Similarity Rank: Michael Clayton


Similarity Rank | Bag of Words | Topic Model
1 | Firm, The | Low Down Dirty Shame, A
2 | Civil Action, A | Bonfire of the Vanities
3 | Boiler Room | Reindeer Games
4 | Maybe, Maybe Not | Raging Bull
5 | Devil’s Advocate, The | Chasers
6 | Devil’s Own, The | Mad City
7 | Rounders | Bad Lieutenant
8 | Joe’s Apartment | Killing Zoe
9 | Apartment, The | Fiendish Plot of Dr. Fu Manchu, The
10 | Legal Deceit | Grifters, The

In this case, it seems the Bag of Words model picked up on the legal theme while the topic model completely missed it. In the case of The Social Network, something else curious (and bad) happened:

Similarity Rank: The Social Network


Similarity Rank | Bag of Words | Topic Model
1 | Twin Dragons | Good Will Hunting
2 | Higher Learning | Footloose
3 | Astronaut’s Wife, The | Grease 2
4 | Substitute, The | Trial and Error
5 | Twin Falls Idaho | Love and Other Catastrophes
6 | Boiler Room | Blue Angel, The
7 | Birdcage, The | Lured
8 | Quiz Show | Birdy
9 | Reality Bites | Rainmaker, The
10 | Broadcast News | S.F.W.

With Good Will Hunting—another film about a gifted youth hanging around Cambridge, Massachusetts—it seemed like the topic model was off to a good start here. But then with Footloose and Grease 2 following immediately after, things start to deteriorate quickly. The crappy-ness of both result sets speaks to the overall low quality of the data we’re dealing with—both in terms of the limited set of movies available in the original Movielens data, as well as the quality of the Wikipedia plot data.

Still, when I saw Footloose, I was concerned that perhaps there might be a bug in my code. Digging a little deeper, I discovered that both movies did in fact share the highest score in a particular topic. However, the bulk of these scores are earned from different words within this same topic. This means that the words within the topics of the LDA model aren’t always very related to each other—a rather serious fault since that is exactly what it is meant to accomplish.

The fact is, it’s difficult to gauge the overall quality of the topic model even by eyeballing a handful of results as I’ve done. This is because like any clustering method, LDA is a form of unsupervised machine learning. That is to say, unlike a supervised machine learning method, there is no ground truth, or for-sure-we-know-it’s-right label, that we can use to objectively evaluate model performance.

However, what we can do is use the output from the topic model as input into the recommendation engine model (which is a supervised model). From there, we can see if the information gained from the topic model improves the performance of the recommendation engine. That was, in fact, my main motivation for using the topic model in the first place.

But before I get into that, I did want to share perhaps the most entertaining finding from this whole exercise (and the answer to the clickbait-y title of this blog post). The discovery occurred when I was comparing the bag of words and topic model results for The Bourne Ultimatum:

Similarity Rank: The Bourne Ultimatum


Similarity Rank | Bag of Words | Topic Model
1 | Pelican Brief, The | Three Days of the Condor
2 | Light of Day | Return of the Pink Panther, The
3 | Safe Men | Ipcress File, The
4 | JFK | Cop Land
5 | Blood on the Sun | Sting, The
6 | Three Days of the Condor | Great Muppet Caper, The
7 | Shadow Conspiracy | From Here to Eternity
8 | Universal Soldier | Man Who Knew Too Little, The
9 | Universal Soldier: The Return | Face/Off
10 | Mission: Impossible 2 | Third World Cop

It wasn’t the difference in the quality of the two result sets that caught my eye. In fact, with The Great Muppet Caper in there, the quality of the topic model seems a bit suspect, if anything.

What interested me was the emphasis the topic model placed on the similarity of some older titles, like Three Days of the Condor, or The Return of the Pink Panther. But it was the 1965 gem, The Ipcress File, that took the cake. Thanks to the LDA topic model, I now know this movie exists, showcasing Michael Caine in all his 60’s badass glory. That link goes to the full trailer. Do yourself a favor and watch the whole thing. Or at the very least, watch this part, coz it makes me lol. They def don’t make ’em like they used to…

Rev Your Recommendation Engines

To incorporate the topic data into the recommendation engine, I first took the top-rated movies from each user in the Movielens dataset and created a composite vector for each user based on the max of each topic across their top rated movies. In other words, I created a “profile” of sorts for each user that summarized their tastes based on the most extreme expressions of each topic across the movies they liked the most.
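Schematically, that profile-building step looks something like this (a sketch in plain Python/NumPy rather than the actual Spark job; it assumes a topic_vectors lookup from movie id to its 16-dimensional topic distribution, and it uses cosine similarity for the score, which is one reasonable choice):

    import numpy as np
    from collections import defaultdict

    N_TOPICS = 16
    TOP_RATING = 4  # treat ratings of 4 or 5 as a user's "top-rated" movies

    def build_user_profiles(ratings, topic_vectors):
        # ratings: iterable of (user_id, movie_id, rating) triples.
        # topic_vectors: dict movie_id -> np.array of length N_TOPICS.
        # Profile = element-wise max of topic vectors over top-rated movies.
        profiles = defaultdict(lambda: np.zeros(N_TOPICS))
        for user_id, movie_id, rating in ratings:
            if rating >= TOP_RATING and movie_id in topic_vectors:
                profiles[user_id] = np.maximum(profiles[user_id],
                                               topic_vectors[movie_id])
        return dict(profiles)

    def similarity_score(user_profile, movie_topics):
        # Cosine similarity between a user profile and a movie's topic vector.
        denom = np.linalg.norm(user_profile) * np.linalg.norm(movie_topics)
        return float(user_profile.dot(movie_topics) / denom) if denom else 0.0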

After I had a profile for each user, I could get a similarity score for almost every movie/user pair in the Movielens dataset. Mixing these scores with the original Movielens ratings is a bit tricky, however, due to a wrinkle in the Spark recommendation engine implementation. When training a recommendation engine with Spark, one must choose between using either explicit or implicit ratings as inputs, but not both. The Movielens data is based on explicit ratings that users gave movies between 1 and 5. The similarity scores, by contrast, are signals I infer based on a user’s top-rated movies along with the independently trained topic model described above. In other words, the similarity scores are implicit data—not feedback that came directly from the user.

To combine the two sources of data, therefore, I had to convert the explicit data into implicit data. In the paper that explains Spark’s implicit recommendation algorithm, training examples for the implicit model are based off the confidence one has that a user likes a particular item rather than an explicit statement of preference. Given the original Movielens data, it makes sense to associate ratings of 4 or 5 with high confidence that a user liked a particular movie. One cannot, however, associate low ratings of 1, 2, or 3 with a negative preference, since in the implicit model, there is no notion of negative feedback. Instead, low ratings for a film correspond only to low confidence that a user liked that particular movie.
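Here’s a sketch of that conversion together with the implicit-feedback training call (the confidence values and ALS hyperparameters are illustrative, not the exact ones I used; raw_ratings is assumed to be an RDD of (user_id, movie_id, rating) tuples):

    from pyspark.mllib.recommendation import ALS, Rating

    def to_confidence(explicit_rating):
        # Ratings of 4-5 become high confidence that the user liked the movie;
        # ratings of 1-3 become low (but non-negative) confidence, since the
        # implicit model has no notion of dislike.
        return 1.0 if explicit_rating >= 4 else 0.1

    implicit_ratings = raw_ratings.map(
        lambda r: Rating(int(r[0]), int(r[1]), to_confidence(float(r[2]))))

    # Train the implicit-feedback ALS model.
    model = ALS.trainImplicit(implicit_ratings, rank=10, iterations=10,
                              lambda_=0.01, alpha=40.0)

    # Recommend 10 movies for a given user.
    recs = model.recommendProducts(user=42, num=10)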

Since we lose a fair amount of information in converting explicit data to implicit data, I wouldn’t expect the recommendation engine I am building to beat out the baseline Movielens model, seeing as explicit data is generally a superior basis upon which to train a recommendation engine. However, I am more interested in seeing whether a model that incorporates information about movie plots can beat a model that does not. Also, it’s worth noting that many if not most real-world recommendation engines don’t have the luxury of explicit data and must rely instead on less reliable implicit signals. So if anything, handicapping the Movielens data as I am doing makes the setting more realistic.

Results/Findings

So does the movie topic data add value to the recommendation engine? Answering this question proved technically challenging, due to the limitations of my old Macbook Air :sad:.

One potential benefit of incorporating movie topic data is that scores can be generated for any (user, movie) pair that’s combinatorially possible given the underlying data. If the topic information did in fact add value to the recommendation engine, then the model could train upon a much richer set of data, including examples not directly observed in real life. But as I mentioned, my efforts to explore the potential benefit of this expanded data slammed against the memory limits I was confined to on my 5-year-old Macbook.

My constrained resources provided a lovely opportunity to learn all about Java Garbage Collection in Spark, but my efforts to tune the memory management of my program proved futile. I became convinced that an un-tunable hard memory limit was the culprit when I saw repeated executors fail after max-ing out their JVM heaps while running a series of full garbage collections. The Spark tuning guide says that if “a full GC is invoked multiple times before a task completes, it means that there isn’t enough memory available for executing tasks.” I seemed to find myself in exactly this situation.
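For reference, these are the kinds of knobs involved (values are illustrative only; none of this got me around the hard 4GB ceiling of my virtual cluster):

    from pyspark import SparkConf, SparkContext

    conf = (SparkConf()
            .setAppName("movielens-topics")
            .set("spark.executor.memory", "1g")         # per-executor JVM heap
            .set("spark.driver.memory", "1g")
            .set("spark.memory.fraction", "0.6")        # execution + storage share of heap
            .set("spark.memory.storageFraction", "0.5"))
    sc = SparkContext(conf=conf)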

Since I couldn’t train on bigger data, I pretended I had less data instead. I trained two models. In one model, I pretended that I didn’t know anything about some of the ratings given to movies by users (in practice this meant setting a certain percentage of ratings to 0, since in the implicit model, 0 implies no confidence that a user prefers an item).  In a second model, I set these ratings to the similarity scores that came from the topic model.

The results of this procedure were mixed. When I covered up 25% of the data, the two recommendation engines performed roughly the same. However, when I covered up 75% of the data, there was about a 3% bump in performance for the topic model-based recommendation engine.

Although there might be some benefit (and at worst no harm) to using the topic model data, what I’d really like to do is map out a learning curve for my recommendation engine. In the context of machine learning, learning curves are curves that chart algorithm performance as a function of the number of training samples used to train the algorithm. Based on the two points I sampled, we cannot know for certain whether the benefit of including topic model data is always crowded out by the inclusion of more real world samples. We also cannot know whether using expanded data based on combinatorially generated similarity scores improves engine performance.

Given my hardware limits and my commitment to using only the resources in my backpack, I couldn’t map out this learning curve more methodically. I also couldn’t explore how using a different number of topics in the LDA model affects performance—something else I was curious to explore. In the end, my findings are only suggestive.

While I couldn’t explore everything I wanted, I ultimately learned a butt-load about how Spark works, which was my goal for starting this project in the first place. And of course, there was The Ipcress File discovery. Oh what’s that? You didn’t care much for The Ipcress File?  You didn’t even watch the trailer? Well, then I have to ask you:


by dgreis at March 24, 2017 12:36 AM

March 22, 2017

Ph.D. student

Lenin and Luxemburg

One of the interesting parts of Scott’s Seeing Like a State is a detailed analysis of Vladimir Lenin’s ideological writings juxtaposed with those of one of his contemporary critics, Rosa Luxemburg, who was a philosopher and activist in Germany.

Scott is critical of Lenin, pointing out that while his writings emphasize the role of a secretive intelligentsia commanding the raw material of an angry working class through propaganda and a kind of middle management tier of revolutionarily educated factory bosses, this is not how the revolution actually happened. The Bolsheviks took over an empty throne, so to speak, because the czars had already lost their power fighting Austria in World War I. This left Russia headless, with local regions ruled by local autonomous powers. Many of these powers were in fact peasant and proletarian collectives. But others may have been soldiers returning from war and seizing whatever control they could by force.

Luxemburg’s revolutionary theory was much more sensitive to the complexity of decentralized power. Rather than expecting the working class to submit unquestioningly to top-down control and coordinate in mass strikes, she acknowledged the reality that decentralized groups would act in an uncoordinated way. This was good for the revolutionary cause, she argued, because it allowed the local energy and creativity of workers’ movements to move effectively and contribute spontaneously to the overall outcome. Whereas Lenin saw spontaneity in the working class as leading inevitably to their being coopted by bourgeois ideology, Luxemburg believed the spontaneous, authentic action of autonomously acting working class people was vital to keeping the revolution unified and responsive to working class interests.


by Sebastian Benthall at March 22, 2017 02:00 AM

March 21, 2017

MIMS 2011

Towards software that supports interpretation rather than quantification

[Reblogged from the Software Sustainability Institute blog]

My research involves the study of the emerging relationships between data and society that is encapsulated by the fields of software studies, critical data studies and infrastructure studies, among others. These fields of research are primarily aimed at interpretive investigations into how software, algorithms and code have become embedded into everyday life, and how this has resulted in new power formations, new inequalities, new authorities of knowledge [1]. Some of the subjects of this research include the ways in which Facebook’s News Feed algorithm influences the visibility and power of different users and news sources (Bucher, 2012), how Wikipedia delegates editorial decision-making and moral agency to bots (Geiger and Ribes, 2010), or the effects of Google’s Knowledge Graph on people’s ability to control facts about the places in which they live (Ford and Graham, 2016).

As the only Software Sustainability Institute fellow working in this area, I set myself the goal of investigating what tools, methods and infrastructure researchers working in these fields were using to conduct their research. Although Big Data is a challenge for every field of research, I found that the challenge for social scientists and humanities scholars doing interpretive research in this area is unique and perhaps even more significant. Two key challenges stand out. The first is that data requiring interpretation tends to be much larger than what has traditionally been analysed. This often requires at least some level of quantification in order to ‘zoom out’ to obtain a bigger picture of the phenomenon or issues under study. Researchers in this tradition often lack the skills to conduct such analyses – particularly at scale. The second challenge is that online data is subject to ethical and legal restrictions, particularly when it involves interpretive research (as opposed to the anonymized data collected for statistical research).

In many universities it seems that mathematics, engineering, physics and computer science departments have started to build internal infrastructure to deal with Big Data, and some universities have established good Digital Humanities programs that are largely about the quantitative study of large corpuses of images/films/videos or other cultural objects. But infrastructure and expertise are severely lacking for those wishing to do interpretive rather than quantitative research, using mixed, experimental, ethnographic or qualitative methods with online data. The software and infrastructure required for doing interpretive research is patchy, departments are typically ill-equipped to support researchers and students with the expertise required to conduct social media research, and significant ethical questions remain about doing social media research, particularly in the context of data protection laws.

Data Carpentry offers some promise here. I organized, with the support of the Software Sustainability Institute, a “Data Carpentry for the Social Sciences workshop” with Dr Brenda Moon (Queensland University of Technology) and Martin Callaghan (University of Leeds) in November 2016 at Leeds University. Data Carpentry workshops tend to be organized for quantitative work in the hard sciences and there were no lesson plans for dealing with social media data. Brenda stepped in to develop some of these materials based partly on the really good Library Carpentry resources and both Martin and Brenda (with additional help from Dr Andy Evans, Joanna Leng and Dr Viktoria Spaiser) made an excellent start towards seeding the lessons database with some social media specific exercises.

The two-day workshop centered on examples from Twitter data and participants worked with Python and other off-the-shelf tools to extract and analyze data. There were fourteen participants in the workshop ranging from PhD students to professors and from media and communications to sociology and social policy, music to law, earth and environment to translation studies. At the end of the workshop participants said that they felt they had received a strong grounding in Python and that the course was useful, interactive, open and not intimidating. There were suggestions, however, to make improvements to the Twitter lessons and to perhaps split up the group in the second day to move onto more advanced programming for some and to go over the foundations for beginners.

Also supported by the Institute was my participation in two conferences in Australia at the end of 2016. The first was a conference exploring the impact of automation on everyday life at the Queensland University of Technology in Brisbane, the second, the annual Crossroads in Cultural Studies conference in Sydney. Through my participation in these events (and via other information-gathering that I have been conducting in my travels) I have learned that many researchers in the social sciences and humanities suffer from a significant lack of local expertise and infrastructure. On multiple occasions I learned of PhD students and researchers running analyses of millions of tweets on their laptops, suffering from a lack of understanding when applying for ethical approval and conducting analyses that lack a consistent approach.

Centers of excellence in digital methods around the world share code and learnings where they can. One such program is the Digital Methods Initiative (DMI) at the University of Amsterdam. The DMI hosts regular summer and winter schools to train researchers in using digital methods tools and provides free access to some of the open source software tools that it has developed for collecting and analyzing digital data. Queensland University of Technology’s Social Media Group also hosts summer schools and has contributed to methodological scholarship employing interpretive approaches to social media and internet research. The common characteristic of such programmes is that they are collaborative (sharing resources across university departments and between different universities) and innovative (breaking some of the rules that govern traditional research in the university).

Many researchers who handle data in more interpretive studies tend to rely on these global hubs in the few universities where infrastructure is being developed. The UK could benefit from a similar hub for researchers locally, especially since software and code needs to be continually developed and maintained for a much wider variety of evolving methods. Alternatively, or alongside such hubs, Data Carpentry workshops could serve as an important virtual hub for sharing lesson plans and resources. Data Carpentry could, for example, host code that can be used to query APIs for doing social media research and workshops could also be used to collaboratively explore or experiment with methods for iterative, grounded investigation of social media practices.

Due to the rapid increase in the scale and velocity of social media data and because of the lack of technical expertise to manage such data, social scientists and humanities scholars have taken a backseat to the hard sciences in explaining new dimensions of social life online. This is disappointing because it means that much of the research coming out about social media, Big Data and computation lacks a connection to important social questions about the world. Building from some of this momentum will be essential in the next few years if we are to see social scientists and humanities scholars adding their important insights into social phenomena online. Much more needs to be done to build flexible and agile resources for the rapidly advancing field of social media research if we are to benefit from the contributions of social science and humanities scholars in the field of digital cultures and politics.

[1] For an excellent introduction to the contribution of interpretive scholars to questions about data and the digital see ‘The Datafied Society’ just published by Amsterdam University Press http://en.aup.nl/books/9789462981362-the-datafied-society.html

Pic: Martin Callaghan displays the ‘Geeks and repetitive tasks’ model during the November 2016 Data Carpentry for the Social Sciences workshop at Leeds University.


by Heather Ford at March 21, 2017 01:19 PM

March 20, 2017

Ph.D. student

artificial life, artificial intelligence, artificial society, artificial morality

“Everyone” “knows” what artificial intelligence is and isn’t and why it is and isn’t a transformative thing happening in society and technology and industry right now.

But the fact is that most of what “we” “call” artificial intelligence is really just increasingly sophisticated ways of solving a single class of problems: optimization.

Essentially, what’s happened in AI is that empirical inference problems can all be modeled as Bayesian problems, which are then solved using variational inference methods: the Bayesian inference problem is turned into a tractable optimization problem, which is then solved.
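To be concrete (this is the textbook move, nothing specific to any one system): the intractable marginal likelihood is lower-bounded by the evidence lower bound (ELBO), and inference becomes maximization of that bound over a tractable family of approximating distributions q:

    \log p(x) \;\geq\; \mathbb{E}_{q(z)}\big[\log p(x, z)\big] - \mathbb{E}_{q(z)}\big[\log q(z)\big] \;=\; \mathcal{L}(q), \qquad q^{*} = \arg\max_{q} \mathcal{L}(q)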

Advances in optimization have greatly expanded the number of things computers can accomplish as part of a weak AI research agenda.

Frequently these remarkable successes in Weak AI are confused with an impending revolution in what used to be called Strong AI but which now is more frequently called Artificial General Intelligence, or AGI.

Recent interest in AGI has spurred a lot of interesting research. How could it not be interesting? It is also, for me, extraordinarily frustrating research because I find the philosophical precommitments of most AGI researchers baffling.

One insight that I wish made its way more frequently into discussions of AGI is an insight made by the late Francisco Varela, who argued that you can’t really solve the problem of artificial intelligence until you have solved the problem of artificial life. This is for the simple reason that only living things are really intelligent in anything but the weak sense of being capable of optimization.

Once being alive is taken as a precondition for being intelligent, the problem of understanding AGI implicates a profound and fascinating problem of understanding the mathematical foundations of life. This is a really amazing research problem that for some reason is never ever discussed by anybody.

Let’s assume it’s possible to solve this problem in a satisfactory way. That’s a big If!

Then a theory of artificial general intelligence should be able to show how some artificial living organisms are and others are not intelligent. I suppose what’s most significant here is the shift from thinking of AI in terms of “agents”, a term so generic as to be perhaps at the end of the day meaningless, to thinking of AI in terms of “organisms”, which suggests a much richer set of preconditions.

I have similar grief over contemporary discussion of machine ethics. This is a field with fascinating, profound potential. But much of what machine ethics boils down to today are trolley problems, which are as insipid as they are troublingly intractable. There’s other, better machine ethics research out there, but I’ve yet to see something that really speaks to properly defining the problem, let alone solving it.

This is perhaps because for a machine to truly be ethical, as opposed to just being designed and deployed ethically, it must have moral agency. I don’t mean this in some bogus early Latourian sense of “wouldn’t it be fun if we pretended seatbelts were little gnomes clinging to our seats” but in an actual sense of participating in moral life. There’s a good case to be made that the latter is not something easily reducible to decontextualized action or function, but rather has to do with how one participates more broadly in social life.

I suppose this is a rather substantive metaethical claim to be making. It may be one that’s at odds with common ideological trainings in Anglophone countries where it’s relatively popular to discuss AGI as a research problem. It has more in common, intellectually and philosophically, with continental philosophy than analytic philosophy, whereas “artificial intelligence” research is in many ways a product of the latter. This perhaps explains why these two fields are today rather disjoint.

Nevertheless, I’d happily make the case that the continental tradition has developed a richer and more interesting ethical tradition than what analytic philosophy has given us. Among other reasons, this is because of how it is able to situate ethics as a function of a more broadly understood social and political life.

I postulate that what is characteristic of social and political life is that it involves the interaction of many intelligent organisms. Which of course means that to truly understand this form of life and how one might recreate it artificially, one must understand artificial intelligence and, transitively, artificial life.

Only once artificial society is sufficiently well-understood could we then approach the problem of artificial morality, or how to create machines that truly act according to moral or ethical ideals.


by Sebastian Benthall at March 20, 2017 02:40 AM

March 19, 2017

Ph.D. student

ideologies of capitals

A key idea of Bourdieusian social theory is that society’s structure is due to the distribution of multiple kinds of capital. Social fields have their roles and their rules, but they are organized around different forms of capital the way physical systems are organized around sources of force like mass and electrical charge. Being Kantian, Bourdieusian social theory is compatible with both positivist and phenomenological forms of social explanation. Phenomenological experience, to the extent that it repeats itself and so can be described aptly as a social phenomenon at all, is codified in terms of habitus. But habitus is indexed to its place within a larger social space (not unlike, it must be said, a Blau space) whose dimensions are the dimensions of the allocations of capital throughout it.

While perhaps not strictly speaking a corollary, this view suggests a convenient methodological reduction, according to which the characteristic beliefs of a habitus can be decomposed into components, each component representing the interests of a certain kind of capital. When I say “the interests of a capital”, I do mean the interests of the typical person who holds a kind of capital, but also the interests of a form of capital, apart from and beyond the interests of any individual who carries it. This is an ontological position that gives capital an autonomous social life of its own, much like we might attribute an autonomous social life to a political entity like a state. This is not the same thing as attributing to capital any kind of personhood; I’m not going near the contentious legal position that corporations are people, for example. Rather, I mean something like: if we admit that social life is dictated in part by the life cycle of a kind of psychic microorganism, the meme, then we should also admit abstractly of social macroorganisms, such as capitals.

What the hell am I talking about?

Well, the most obvious kind of capital worth talking about in this way is money. Money, in our late modern times, is a phenomenon whose existence depends on a vast global network of property regimes, banking systems, transfer protocols, trade agreements, and more. There’s clearly a naivete in referring to it as a singular or homogeneous phenomenon. But it is also possible to refer to it in a generic, globalized way because of the ways money markets have integrated. There is a sense in which money exists to make more money and to give money more power over other forms of capital that are not money, such as: social authority based on any form of seniority, expertise, lineage; power local to an institution; or the persuasiveness of an autonomous ideal. Those that have a lot of money are likely to have an ideology very different from those without a lot of money. This is partly due to the fact that those who have a lot of money will be interested in promoting the value of that money over and above other capitals. Those without a lot of money will be interested in promoting forms of power that contest the power of money.

Another kind of capital worth talking about is cosmopolitanism. This may not be the best word for what I’m pointing at but it’s the one that comes to mind now. What I’m talking about is the kind of social capital one gets not by having a specific mastery of a local cultural form, but rather by having the general knowledge and cross-cultural competence to bridge across many different local cultures. This form of capital is loosely correlated with money but is quite different from it.

A diagnosis of recent shifts in U.S. politics, for example, could be done in terms of the way capital and cosmopolitanism have competed for control over state institutions.


by Sebastian Benthall at March 19, 2017 12:29 AM

March 16, 2017

Ph.D. student

equilibrium representation

We must keep in mind not only the capacity of state simplifications to transform the world but also the capacity of the society to modify, subvert, block, and even overturn the categories imposed upon it. Here it is useful to distinguish what might be called facts on paper from facts on the ground…. Land invasions, squatting, and poaching, if successful, represent the exercise of de facto property rights which are not represented on paper. Certain land taxes and tithes have been evaded or defied to the point where they have become dead letters. The gulf between land tenure facts on paper and facts on the ground is probably greatest at moments of social turmoil and revolt. But even in more tranquil times, there will always be a shadow land-tenure system lurking beside and beneath the official account in the land-records office. We must never assume that local practice conforms with state theory. – Scott, Seeing Like a State, 1998

I’m continuing to read Seeing Like a State and am finding in it a compelling statement of a state of affairs that is coded elsewhere into the methodological differences between social science disciplines. In my experience, much of the tension between the social sciences can be explained in terms of the differently interested uses of social science. Among these uses are the development of what Scott calls “state theory” and the articulation, recognition, and transmission of “local practice”. Contrast neoclassical economics with the anthropology of Jean Lave as examples of what I’m talking about. Most scholars are willing to stop here: they choose their side and engage in a sophisticated form of class warfare.

This is disappointing from the perspective of science per se, as a pursuit of truth. To see where there’s a place for such work in the social sciences, we only have to look at the very book in front of us, Seeing Like a State, which stands outside of both state theory and local practices to explain a perspective that is neither, but rather informed by a study of both.

In terms of the ways that knowledge is used in support of human interests, in the Habermasian sense (see some other blog posts), we can talk about Scott’s “state theory” as a form of technical knowledge, aimed at facilitating power over the social and natural world. What he discusses is the limitation of technical knowledge in mastering the social, due to complexity and differentiation in local practice. So much of this complexity is due to the politicization of language and representation that occurs in local practice. Standard units of measurement and standard terminology are tools of state power; efforts to guarantee them are confounded again and again in local interest. This disagreement is a rejection of the possibility of hermeneutic knowledge, which is to say linguistic agreement about norms.

In other words, Scott is pointing to a phenomenon where because of the interests of different parties at different levels of power, there’s a strategic local rejection of inter-subjective agreement. Implicitly, agreeing even on how to talk with somebody with power over you is conceding their power. The alternative is refusal in some sense. A second order effect of the complexity caused by this strategic disagreement is the confounding of technical mastery over the social. In Scott’s terminology, a society that is full of strategic lexical disagreement is not legible.

These are generalizations reflecting tendencies in society across history. Nevertheless, merely by asserting them I am arguing that they have a kind of special status that is not itself caught up in the strategic subversions of discourse that make other forms of expertise foolish. There must be some forms of representation that persist despite the verbal disagreements and differently motivated parties that use them.

I’d like to call these kinds of representations, which somehow are technically valid enough to be useful and robust to disagreement, even politicized disagreement, as equilibrium representations. The idea here is that despite a lot of cultural and epistemic churn, there are still attractor states in the complex system of knowledge production. At equilibrium, these representations will be stable and serve as the basis for communication between different parties.

I’ve posited equilibrium representations hypothetically, without yet having a proof or an example of one that actually exists. My point is to have a useful concept that acknowledges the kinds of epistemic complexities raised by Scott while also acknowledging the conditions under which a modernist epistemology could prevail despite those complexities.

 


by Sebastian Benthall at March 16, 2017 05:57 PM

appropriate information flow

Contextual integrity theory defines privacy as appropriate information flow.

Whether or not this is the right way to define privacy (which might, for example, be something much more limited), and whether or not contextual integrity as it is currently resourced as a theory is capable of capturing all considerations needed to determine the appropriateness of information flow, the very idea of appropriate information flow is a powerful one. It makes sense to strive to better our understanding of which information flows are appropriate, which others are inappropriate, to whom, and why.

 


by Sebastian Benthall at March 16, 2017 01:38 AM

March 15, 2017

Ph.D. student

Seeing Like a State: problems facing the code rural

I’ve been reading James C. Scott’s Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed for, once again, Classics. It’s just as good as everyone says it is, and in many ways the counterpoint to James Beniger’s The Control Revolution that I’ve been looking for. It’s also highly relevant to work I’m doing on contextual integrity in privacy.

Here’s a passage I read on the subway this morning that talks about the resistance to codification of rural land use customs in Napoleonic France.

In the end, no postrevolutionary rural code attracted a winning coalition, even amid a flurry of Napoleonic codes in nearly all other realms. For our purposes, the history of the stalemate is instructive. The first proposal for a code, which was drafted in 1803 and 1807, would have swept away most traditional rights (such as common pasturage and free passage through others’ property) and essentially recast rural property relations in the light of bourgeois property rights and freedom of contract. Although the proposed code prefigured certain modern French practices, many revolutionaries blocked it because they feared that its hands-off liberalism would allow large landholders to recreate the subordination of feudalism in a new guise.

A reexamination of the issue was then ordered by Napoleon and presided over by Joseph Verneilh Puyrasseau. Concurrently, Deputy Lalouette proposed to do precisely what I supposed, in the hypothetical example, was impossible. That is, he undertook to systematically gather information about all local practices, to classify and codify them, and then to sanction them by decree. The decree in question would become the code rural. Two problems undid this charming scheme to present the rural populace with a rural code that simply reflected its own practices. The first difficulty was in deciding which aspects of the literally “infinite diversity” of rural production relations were to be represented and codified. Even in a particular locality, practices varied greatly from farm to farm over time; any codification would be partly arbitrary and artificially static. To codify local practices was thus a profoundly political act. Local notables would be able to sanction their preferences with the mantle of law, whereas others would lose customary rights that they depended on. The second difficulty was that Lalouette’s plan was a mortal threat to all state centralizers and economic modernizers for whom a legible, national property regime was the precondition of progress. As Serge Aberdam notes, “The Lalouette project would have brought about exactly what Merlin de Douai and the bourgeois, revolutionary jurists always sought to avoid.” Neither Lalouette’s nor Verneilh’s proposed code was ever passed, because they, like their predecessor in 1807, seemed to be designed to strengthen the hand of the landowners.

(Emphasis mine.)

The moral of the story is that just as the codification of a land map will be inaccurate and politically contested for its biases, so too will a codification of customs and norms suffer the same fate. As Borges’ fable On Exactitude in Science mocks the ambition of physical science, we might see the French attempts at a code rural as a mockery of the ambition of computational social science.

On the other hand, Napoleonic France did not have the sweet ML we have today. So all bets are off.


by Sebastian Benthall at March 15, 2017 03:16 PM

March 14, 2017

Ph.D. student

industrial technology development and academic research

I now split my time between industrial technology (software) development and academic research.

There is a sense in which both activities are “scientific”. They both require the consistent use of reason and investigation to arrive at reliable forms of knowledge. My industrial and academic specializations are closely enough aligned that both aim to create some form of computational product. These activities are constantly informing one another.

What is the difference between these two activities?

One difference is that industrial work pays a lot better than academic work. This is probably the most salient difference in my experience.

Another difference is that academic work is more “basic” and less “applied”, allowing it to address more speculative questions.

You might think that the latter kind of work is more “fun”. But really, I find both kinds of work fun. Fun-factor is not an important difference for me.

What are other differences?

Here’s one: I find myself emotionally moved and engaged by my academic work in certain ways. I suppose that since my academic work straddles technology research and ethics research (I’m studying privacy-by-design), one thing I’m doing when I do this work is engaging and refining my moral intuitions. This is rewarding.

I do sometimes also feel that it is self-indulgent, because one thing that thinking about ethics isn’t is taking responsibility for real change in the world. And here I’ll express an opinion that is unpopular in academia, which is that being in industry is about taking responsibility for real change in the world. This change can benefit other people, and it’s good when people in industry get paid well because they are doing hard work that entails real risks. Part of the risk is the responsibility that comes with action in an uncertain world.

Another critically important difference between industrial technology development and academic research is that while the knowledge created by the former is designed foremost to be deployed and used, the knowledge created by the latter is designed to be taught. As I get older and more advanced as a researcher, I see that this difference is actually an essential one. Knowledge that is designed to be taught needs to be teachable to students, and students are generally coming from both a shallower and narrower background than adult professionals. Knowledge that is designed to be deployed and used need only be truly shared by a small number of experienced practitioners. Most of the people affected by the knowledge will be affected by it indirectly, via artifacts. It can be opaque to them.

Industrial technology production changes the way the world works and makes the world more opaque. Academic research changes the way people work, and reveals things about the world that had been hidden or unknown.

When straddling both worlds, it becomes quite clear that while students are taught that academic scientists are at the frontier of knowledge, ahead of everybody else, they are actually far behind what’s being done in industry. The constraint that academic research must be taught actually drags its form of science far behind what’s being done regularly in industry.

This is humbling for academic science. But it doesn’t make it any less important. Rather, it makes it even more important, but not because of the heroic status of academic researchers being at the top of the pyramid of human knowledge. It’s because the health of the social system depends on its renewal through the education system. If most knowledge is held in secret and deployed but not passed on, we will find ourselves in a society that is increasingly mysterious and out of our control. Academic research is about advancing the knowledge that is available for education. Its effects can take half a generation or longer to come to fruition. Against this long-term signal, the oscillations that happen within industrial knowledge, which are very real, do fade into the background. Though not before having real and often lasting effects.


by Sebastian Benthall at March 14, 2017 02:27 AM

March 03, 2017

Ph.D. alumna

Failing to See, Fueling Hatred.

I was 19 years old when some configuration of anonymous people came after me. They got access to my email and shared some of the most sensitive messages on an anonymous forum. This was after some of my girl friends received anonymous voice messages describing how they would be raped. And after the black and Latinx high school students I was mentoring were subject to targeted racist messages whenever they logged into the computer cluster we were all using. I was ostracized for raising all of this to the computer science department’s administration. A year later, when I applied for an internship at Sun Microsystems, an alum known for his connection to the anonymous server that was used actually said to me, “I thought that they managed to force you out of CS by now.”

Needless to say, this experience hurt like hell. But in trying to process it, I became obsessed not with my own feelings but with the logics that underpinned why some individual or group of white male students privileged enough to be at Brown University would do this. (In investigations, the abusers were narrowed down to a small group of white men in the department but it was never going to be clear who exactly did it and so I chose not to pursue the case even though law enforcement wanted me to.)

My first breakthrough came when I started studying bullying, when I started reading studies about why punitive approaches to meanness and cruelty backfire. It’s so easy to hate those who are hateful, so hard to be empathetic to where they’re coming from. This made me double down on an ethnographic mindset that requires that you step away from your assumptions and try to understand the perspective of people who think and act differently than you do. I’m realizing more and more how desperately this perspective is needed as I watch researchers and advocates, politicians and everyday people judge others from their vantage point without taking a moment to understand why a particular logic might unfold.

The Local Nature of Wealth

A few days ago, my networks were on fire with condescending comments referencing an article in The Guardian titled “Scraping by on six figures? Tech workers feel poor in Silicon Valley’s wealth bubble.” I watched as all sorts of reasonably educated, modestly but sustainably paid people mocked tech folks for expressing frustration about how their well-paid jobs did not allow them to have the sustainable lifestyle that they wanted. For most, Silicon Valley is at a distance, a far off land of imagination brought to you by the likes of David Fincher and HBO. Progressive values demand empathy for the poor and this often manifests as hatred for the rich. But what’s missing from this mindset is an understanding of the local perception of wealth, poverty, and status. And, more importantly, the political consequences of that local perception.

Think about it this way. I live in NYC where the median household income is somewhere around $55K. My network primarily makes above the median and yet they all complain that they don’t have enough money to achieve what they want in NYC, whether they’re making $55K, $70K, or $150K. Complaining about not having enough money is ritualized alongside complaining about the rents. No one I know really groks that they’re making above the median income for the city (and, thus, that most people are much poorer than they are), let alone how absurd their complaints might sound to someone from a poorer country where a median income might be $1500 (e.g., India).

The reason for this is not simply that people living in NYC are spoiled, but that people’s understanding of prosperity is shaped by what they see around them. Historically, this has been understood through word-of-mouth and status markers. In modern times, those status markers are often connected to conspicuous consumption. “How could HE afford a new pair of Nikes!?!?”

The dynamics of comparison are made trickier by media. Even before yellow journalism, there has always been some version of Page Six or “Lifestyles of the Rich and Famous.” Stories of gluttonous and extravagant behaviors abound in ancient literature. Today, with Instagram and reality TV, the idea of haves and have-nots is pervasive, shaping cultural ideas of privilege and suffering. Everyday people perform for the camera and read each other’s performances critically. And still, even as we watch rich people suffer depression or celebrities experience mental breakdowns, we don’t know how to walk in each other’s shoes. We collectively mock them for their privilege as a way to feel better for our own comparative struggles.

In other words, in a neoliberal society, we consistently compare ourselves to others in ways that make us feel as though we are less well off than we’d like. And we mock others who are more privileged who do the same. (And, horribly, we often blame others who are not for making bad decisions.)

The Messiness of Privilege

I grew up with identity politics, striving to make sense of intersectional politics and confused about what it meant to face oppression as a woman and privilege as a white person. I now live in a world of tech wealth while my family does not. I live with contradictions and I work on issues that make those contradictions visible to me on a regular basis. These days, I am surrounded by civil rights advocates and activists of all stripes. Folks who remind me to take my privilege seriously. And still, I struggle to be a good ally, to respond effectively to challenges to my actions. Because of my politics and ideals, I wake up each day determined to do better.

Yet, with my ethnographer’s hat on, I’m increasingly uncomfortable with how this dynamic is playing out. Not for me personally, but for effecting change. I’m nervous that the way that privilege is being framed and politicized is doing damage to progressive goals and ideals. In listening to white men who see themselves as “betas” or identify as NEETs (“Not in Education, Employment, or Training”) describe their hatred of feminists or social justice warriors, I hear the cost of this frame. They don’t see themselves as empowered or privileged and they rally against these frames. And they respond antagonistically in ways that further the divide, as progressives feel justified in calling them out as racist and misogynist. Hatred emerges on both sides and the disconnect produces condescension as everyone fails to hear where each other comes from, each holding onto their worldview that they are the disenfranchised, they are the oppressed. Power and wealth become othered and agency becomes understood through the lens of challenging what each believes to be the status quo.

It took me years to understand that the boys who tormented me in college didn’t feel powerful, didn’t see their antagonism as oppression. I was even louder and more brash back then than I am now. I walked into any given room performing confidence in ways that completely obscured my insecurities. I took up space, used my sexuality as a tool, and demanded attention. These were the survival skills that I had learned to harness as a ticket out. And these are the very same skills that have allowed me to succeed professionally and get access to tremendous privilege. I have paid a price for some of the games that I have played, but I can’t deny that I’ve gained a lot in the process. I have also come to understand that my survival strategies were completely infuriating to many geeky white boys that I encountered in tech. Many guys saw me as getting ahead because I was a token woman. I was accused of sleeping my way to the top on plenty of occasions. I wasn’t simply seen as an alpha — I was seen as the kind of girl that screwed boys over. And because I was working on diversity and inclusion projects in computer science to attract more women and minorities to the field, I was seen as being the architect of excluding white men. For so many geeky guys I met, CS was the place where they felt powerful and I stood for taking that away. I represented an oppressor to them even though I felt like it was they who were oppressing me.

Privilege is complicated. There is no static hierarchical structure of oppression. Intersectionality provides one tool for grappling with the interplay between different identity politics, but there’s no narrative for why beta white male geeks might feel excluded from these frames. There’s no framework for why white Christians might feel oppressed by rights-oriented activists. When we think about privilege, we talk about the historical nature of oppression, but we don’t account for the ways in which people’s experiences of privilege are local. We don’t account for the confounding nature of perception, except to argue that people need to wake up.

Grappling with Perception

We live in a complex interwoven society. In some ways, that’s intentional. After WWII, many politicians and activists wanted to make the world more interdependent, to enable globalization to prevent another world war. The stark reality is that we all depend on social, economic, and technical infrastructures that we can’t see and don’t appreciate. Sure, we can talk about how our food is affordable because we’re dependent on underpaid undocumented labor. We can take our medicine for granted because we fail to appreciate all of the regulatory processes that go into making sure that what we consume is safe. But we take lots of things for granted; it’s the only way to move through the day without constantly panicking about whether or not the building we’re in will collapse.

Without understanding the complex interplay of things, it’s hard not to feel resentful about certain things that we do see. But at the same time, it’s not possible to hold onto the complexity. I can appreciate why individuals are indignant when they feel as though they pay taxes for that money to be given away to foreigners through foreign aid and immigration programs. These people feel like they’re struggling, feel like they’re working hard, feel like they’re facing injustice. Still, it makes sense to me that people’s sense of prosperity is only as good as their feeling that they’re getting ahead. And when you’ve been earning $40/hour doing union work only to lose that job and feel like the only other option is a $25/hr job, the feeling is bad, no matter that this is more than most people make. There’s a reason that Silicon Valley engineers feel as though they’re struggling and it’s not because they’re comparing themselves to everyone in the world. It’s because the standard of living keeps dropping in front of them. It’s all relative.

It’s easy to say “tough shit” or “boo hoo hoo” or to point out that most people have it much worse. And, at some levels, this is true. But if we don’t account for how people feel, we’re not going to achieve a more just world — we’re going to stoke the fires of a new cultural war as society becomes increasingly polarized.

The disconnect between statistical data and perception is astounding. I can’t help but shake my head when I listen to folks talk about how life is better today than it ever has been in history. They point to increased lifespan, new types of medicine, decline in infant mortality, and decline in poverty around the world. And they shake their heads in dismay about how people don’t seem to get it, don’t seem to get that today is better than yesterday. But perception isn’t about statistics. It’s about a feeling of security, a confidence in one’s ecosystem, a belief that through personal effort and God’s will, each day will be better than the last. That’s not where the vast majority of people are at right now. To the contrary, they’re feeling massively insecure, as though their world is very precarious.

I am deeply concerned that the people whose values and ideals I share are achieving solidarity through righteous rhetoric that also produces condescending and antagonistic norms. I don’t fully understand my discomfort, but I’m scared that what I’m seeing around me is making things worse. And so I went back to some of Martin Luther King Jr.’s speeches for a bit of inspiration today and I started reflecting on his words. Let me leave this reflection with this quote:

The ultimate weakness of violence is that it is a descending spiral,
begetting the very thing it seeks to destroy.
Instead of diminishing evil, it multiplies it.
Through violence you may murder the liar,
but you cannot murder the lie, nor establish the truth.
Through violence you may murder the hater,
but you do not murder hate.
In fact, violence merely increases hate.
So it goes.
Returning violence for violence multiplies violence,
adding deeper darkness to a night already devoid of stars.
Darkness cannot drive out darkness:
only light can do that.
Hate cannot drive out hate: only love can do that.
— Dr. Martin Luther King, Jr.

Image from Flickr: Andy Doyle

by zephoria at March 03, 2017 09:19 PM

March 01, 2017

Ph.D. student

arXiv preprint of Refutation of Bostrom’s Superintelligence Argument released

I’ve written a lot of blog posts about Nick Bostrom’s book Superintelligence, presented what I think is a refutation of his core argument.

Today I’ve released an arXiv preprint with a more concise and readable version of this argument. Here’s the abstract:

Don’t Fear the Reaper: Refuting Bostrom’s Superintelligence Argument

In recent years prominent intellectuals have raised ethical concerns about the consequences of artificial intelligence. One concern is that an autonomous agent might modify itself to become “superintelligent” and, in supremely effective pursuit of poorly specified goals, destroy all of humanity. This paper considers and rejects the possibility of this outcome. We argue that this scenario depends on an agent’s ability to rapidly improve its ability to predict its environment through self-modification. Using a Bayesian model of a reasoning agent, we show that there are important limitations to how an agent may improve its predictive ability through self-modification alone. We conclude that concern about this artificial intelligence outcome is misplaced and better directed at policy questions around data access and storage.

I invite any feedback on this work.
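
To give a flavor of the argument, here is a toy simulation I put together for this post (it is not the model in the paper; the Bernoulli task, the Beta(1,1) prior, and the stylized “self-improvement” schedule are all assumptions of the sketch). An agent refines its inference procedure as much as it likes, but its prediction error plateaus at the exact Bayesian answer, and that plateau is set by how much data it has.

```python
# Toy illustration (my own construction, not the paper's model) of the claim
# that algorithmic self-improvement alone cannot keep improving predictions:
# refinement converges to the exact Bayesian answer, and the remaining error
# is governed by the amount of data, not by how long the agent refines itself.
import random

random.seed(0)
TRUE_P = 0.9     # hidden Bernoulli parameter the agent wants to predict
TRIALS = 2000    # average over many simulated datasets to smooth out noise

def refined_estimate(posterior_mean, steps):
    """The 'self-modifying' agent: it starts from a crude guess (0.5) and each
    round of algorithmic self-improvement halves its distance to the exact
    Bayesian posterior mean that the data supports."""
    estimate = 0.5
    for _ in range(steps):
        estimate += 0.5 * (posterior_mean - estimate)
    return estimate

for n_data in (10, 100, 1000):
    step_error = {steps: 0.0 for steps in (1, 5, 20, 80)}
    exact_error = 0.0
    for _ in range(TRIALS):
        heads = sum(random.random() < TRUE_P for _ in range(n_data))
        posterior_mean = (1 + heads) / (2 + n_data)      # Beta(1,1) prior
        exact_error += (posterior_mean - TRUE_P) ** 2
        for steps in step_error:
            step_error[steps] += (refined_estimate(posterior_mean, steps) - TRUE_P) ** 2
    summary = ", ".join(f"{s} steps: {step_error[s] / TRIALS:.4f}" for s in sorted(step_error))
    print(f"N = {n_data:4d} | mean squared error after {summary} | exact posterior (data limit): {exact_error / TRIALS:.4f}")
```

The printed errors stop improving after a handful of refinement steps and only fall when N grows, which is the intuition behind redirecting concern toward data access and storage.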


by Sebastian Benthall at March 01, 2017 02:18 PM

February 20, 2017

Ph.D. alumna

Heads Up: Upcoming Parental Leave

There’s a joke out there that when you’re having your first child, you tell everyone personally and update your family and friends about every detail throughout the pregnancy. With Baby #2, there’s an abbreviated notice that goes out about the new addition, all focused on how Baby #1 is excited to have a new sibling. And with Baby #3, you forget to tell people.

I’m a living instantiation of that. If all goes well, I will have my third child in early March and I’ve apparently forgotten to tell anyone since folks are increasingly shocked when I indicate that I can’t help out with XYZ because of an upcoming parental leave. Oops. Sorry!

As noted when I gave a heads up with Baby #1 and Baby #2, I plan on taking parental leave in stride. I don’t know what I’m in for. Each child is different and each recovery is different. What I know for certain is that I don’t want to screw over collaborators or my other baby – Data & Society. As a result, I will not be taking on new commitments and I will be actively working to prioritize my collaborators and team over the next six months.

In the weeks following birth, my response rates may get sporadic and I will probably not respond to non-mission-critical email. I also won’t be scheduling meetings. I won’t go completely offline in March (mostly for my own sanity), but I am fairly certain that I will take an email sabbatical in July when my family takes some serious time off** to be with one another and travel.

A change in family configuration is fundamentally walking into the abyss. For as much as our culture around maternity leave focuses on planning, so much is unknown. After my first was born, I got a lot of work done in the first few weeks afterwards because he was sleeping all the time and then things got crazy just as I was supposedly going back to work. That was less true with #2, but with #2 I was going seriously stir crazy being home in the cold winter and so all I wanted was to go to lectures with him to get out of bed and soak up random ideas. Who knows what’s coming down the pike. I’m fortunate enough to have the flexibility to roll with it and I intend to do precisely that.

What’s tricky about being a parent in this ecosystem is that you’re kinda damned if you do, damned if you don’t. Women are pushed to go back to work immediately to prove that they’re serious about their work – or to take serious time off to prove that they’re serious about their kids. Male executives are increasingly publicly talking about taking time off, while they work from home.  The stark reality is that I love what I do. And I love my children. Life is always about balancing different commitments and passions within the constraints of reality (time, money, etc.).  And there’s nothing like a new child to make that balancing act visible.

So if you need something from me, let me know ASAP!  And please understand and respect that I will be navigating a lot of unknown and doing my best to achieve a state of balance in the upcoming months of uncertainty.

 

** July 2017 vacation. After a baby is born, the entire focus of a family is on adjustment. For the birthing parent, it’s also on recovery because babies kinda wreck your body no matter how they come out. Finding rhythms for sleep and food become key for survival. Folks talk about this time as precious because it can enable bonding. That hasn’t been my experience and so I’ve relished the opportunity with each new addition to schedule some full-family bonding time a few months after birth where we can do what our family likes best – travel and explore as a family. If all goes well in March, we hope to take a long vacation in mid-July where I intend to be completely offline and focused on family. More on that once we meet the new addition.

by zephoria at February 20, 2017 01:45 PM

February 15, 2017

Ph.D. alumna

When Good Intentions Backfire

… And Why We Need a Hacker Mindset


I am surrounded by people who are driven by good intentions. Educators who want to inform students, who passionately believe that people can be empowered through knowledge. Activists who have committed their lives to addressing inequities, who believe that they have a moral responsibility to shine a spotlight on injustice. Journalists who believe their mission is to inform the public, who believe that objectivity is the cornerstone of their profession. I am in awe of their passion and commitment, their dedication and persistence.

Yet, I’m existentially struggling as I watch them fight for what is right. I have learned that people who view themselves through the lens of good intentions cannot imagine that they could be a pawn in someone else’s game. They cannot imagine that the values and frames that they’ve dedicated their lives towards — free speech, media literacy, truth — could be manipulated or repurposed by others in ways that undermine their good intentions.

I find it frustrating to bear witness to good intentions getting manipulated, but it’s even harder to watch how those who are wedded to good intentions are often unwilling to acknowledge this, let alone start imagining how to develop the appropriate antibodies. Too many folks that I love dearly just want to double down on the approaches they’ve taken and the commitments they’ve made. On one hand, I get it — folks’ life-work and identities are caught up in these issues.

But this is where I think we’re going to get ourselves into loads of trouble.

The world is full of people with all sorts of intentions. Their practices and values, ideologies and belief systems collide in all sorts of complex ways. Sometimes, the fight is about combating horrible intentions, but often it is not. In college, my roommate used to pound a mantra into my head whenever I would get spun up about something: Do not attribute to maliciousness what you can attribute to stupidity. I return to this statement a lot when I think about how to build resilience and challenge injustices, especially when things look so corrupt and horribly intended — or when people who should be allies see each other as combatants. But as I think about how we should resist manipulation and fight prejudice, I also think that it’s imperative to move away from simply relying on “good intentions.”

I don’t want to undermine those with good intentions, but I also don’t want good intentions to be a tool that can be used against people. So I want to think about how good intentions get embedded in various practices and the implications of how we view the different actors involved.

The Good Intentions of Media Literacy

When I penned my essay “Did Media Literacy Backfire?”, I wanted to ask those who were committed to media literacy to think about how their good intentions — situated in a broader cultural context — might not play out as they would like. Folks who critiqued my essay on media literacy pushed back in all sorts of ways, both online and off. Many made me think, but some also reminded me that my way of writing was off-putting. I was accused of using the question “Did media literacy backfire?” to stoke clicks. Some snarkily challenged my suggestion that media literacy was even meaningfully in existence, asked me to be specific about which instantiations I meant (because I used the phrase “standard implementations”), and otherwise pushed for the need to double down on “good” or “high quality” media literacy. The reality is that I’m a huge proponent of their good intentions — and have long shared them, but I wrote this piece because I’m worried that good intentions can backfire.

While I was researching youth culture, I never set out to understand what curricula teachers used in the classroom. I wasn’t there to assess the quality of the teachers or the efficacy of their formal educational approaches. I simply wanted to understand what students heard and how they incorporated the lessons they received into their lives. Although the teens that I met had a lot of choice words to offer about their teachers, I’ve always assumed that most teachers entered the profession with the best of intentions, even if their students couldn’t see that. But I spent my days listening to students’ frustrations and misperceptions of the messages teachers offered.

I’ve never met an educator who thinks that the process of educating is easy or formulaic. (Heck, this is why most educators roll their eyes when they hear talk of computerized systems that can educate better than teachers.) So why do we assume that well-intended classroom lessons — or even well-designed curricula — will play out as we imagine? This isn’t simply about the efficacy of the lesson or the skill of the teacher, but the cultural context in which these conversations occur.

In many communities in which I’ve done research, the authority of teachers is often questioned. Nowhere is this more painfully visible than when well-intended highly educated (often white) teachers come to teach in poorer communities of color. Yet, how often are pedagogical interventions designed by researchers really taking into account the doubt that students and their parents have of these teachers? And how do we as educators and scholars grapple with how we might have made mistakes?

I’m not asking “Did Media Literacy Backfire?” to be a pain in the toosh, but to genuinely highlight how the ripple effects of good intentions may not play out as imagined on the ground for all sorts of reasons.

The Good Intentions of Engineers

From the outside, companies like Facebook and Google seem pretty evil to many people. They’re situated in a capitalist logic that many advocates and progressives despise. They’re opaque and they don’t engage the public in their decision-making processes, even when those decisions have huge implications for what people read and think. They’re extremely powerful and they’ve made a lot of people rich in an environment where financial inequality and instability are front and center. Primarily located in one small part of the country, they also seem like a monolithic beast.

As a result, it’s not surprising to me that many people assume that engineers and product designers have evil (or at least financially motivated) intentions. There’s an irony here because my experience is the opposite. Most product teams have painfully good intentions, shaped by utopic visions of how the ideal person would interact with the ideal system. Nothing is more painful than sitting through a product design session with design personae that have been plucked from a collection of clichés.

I’ve seen a lot of terribly naive product plans, with user experience mockups that lack any sense of how or why people might interact with a system in unexpected ways. I spent years tracking how people did unintended things with social media, such as the rise of “Fakesters,” or of teenagers who gamed Facebook’s system by inserting brand names into their posts, realizing that this would make their posts rise higher in the social network’s news feed. It has always boggled my mind how difficult it is for engineers and product designers to imagine how their systems would get gamed. I actually genuinely loved product work because I couldn’t help but think about how to break a system through unexpected social practices.

Most products and features that get released start with good intentions, but they too get munged by the system, framed by marketing plans, and manipulated by users. And then there’s the dance of chaos as companies seek to clean up PR messes (which often involves non-technical actors telling insane fictions about the product), patch bugs to prevent abuse, and throw bandaids on parts of the code that didn’t play out as intended. There’s a reason that no one can tell you exactly how Google’s search engine or Facebook’s news feed works. Sure, the PR folks will tell you that it’s proprietary code. But the ugly truth is that the code has been patched to smithereens to address countless types of manipulation and gamification (e.g., from SEO to bots). It’s quaint to read the original “page rank” paper that Brin and Page wrote when they envisioned how a search engine could ideally work. That’s so not how the system works today.
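
For contrast with that patched-up reality, the idealized core of the original paper fits in a few lines. What follows is a toy implementation of the published power-iteration idea on a made-up four-page web, not anything resembling the production system.

```python
# A minimal sketch of the idealized PageRank idea referenced above (my own
# toy implementation of the published algorithm's core, not Google's code).
# Scores flow along links; the damping factor models a surfer who sometimes
# jumps to a random page instead of following a link.
links = {                      # tiny hypothetical web: page -> pages it links to
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

print(pagerank(links))   # "c", the most linked-to page, gets the highest score
```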

The good intentions of engineers and product people, especially those embedded in large companies, are often doubted, seen as mere sheen for a capitalist agenda. Yet, like many other well-intended actors, makers often feel misunderstood and maligned, assumed to have evil thoughts. And I often think that when non-tech people start by assuming that they’re evil, we lose a significant opportunity to address problems.

The Good Intentions of Journalists

I’ve been harsh on journalists lately, mostly because I find it so infuriating that a profession that is dedicated to being a check to power could be so ill-equipped to be self-reflexive about its own practices.

Yet, I know that I’m being unfair. Their codes of conduct and idealistic visions of their profession help journalists and editors and publishers stay strong in an environment where they are accustomed to being attacked. It just kills me that the culture of journalism makes those who have an important role to play unable to see how they can be manipulated at scale.

Sure, plenty of top-notch journalists are used to negotiating deception and avoidance. You gotta love a profession that persistently bangs its head against a wall of “no comment.” But journalism has grown up as an individual sport; a competition for leads and attention that can get fugly in the best of configurations. Time is rarely on a journalist’s side, just as nuance is rarely valued by editors. Trying to find “balance” in this ecosystem has always been a pipe dream, but objectivity is a shared hallucination that keeps well-intended journalists going.

Powerful actors have always tried to manipulate the news media, especially State actors. This is why the fourth estate is seen as so important in the American context. Yet, the game has changed, in part because of the distributed power of the masses. Social media marketers quickly figured out that manufacturing outrage and spectacle would give them a pathway to attention, attracting news media like bees to honey. Most folks rolled their eyes, watching as monied people played the same games as State actors. But what about the long tail? How do we grapple with the long tail? How should journalists respond to those who are hacking the attention economy?

I am genuinely struggling to figure out how journalists, editors, and news media should respond in an environment in which they are getting gamed. What I do know from 12-steps is that the first step is to admit that you have a problem. And we aren’t there yet. And sadly, that means that good intentions are getting gamed.

Developing the Hacker Mindset

I’m in awe of how many of the folks I vehemently disagree with are willing to align themselves with others they vehemently disagree with when they have a shared interest in the next step. Some conservative and hate groups are willing to be odd bedfellows because they’re willing to share tactics, even if they don’t share end goals. Many progressives can’t even imagine coming together with folks who have a slightly different vision, let alone a different end goal, to even imagine various tactics. Why is that?

My goal in writing these essays is not because I know the solutions to some of the most complex problems that we face — I don’t — but because I think that we need to start thinking about these puzzles sideways, upside down, and from non-Euclidean spaces. In short, I keep thinking that we need more well-intended folks to start thinking like hackers.

Think just as much about how you build an ideal system as how it might be corrupted, destroyed, manipulated, or gamed. Think about unintended consequences, not simply to stop a bad idea but to build resilience into the model.

As a developer, I always loved the notion of “extensibility” because it was an ideal of building a system that could take unimagined future development into consideration. Part of why I love the notion is that it’s bloody impossible to implement. Sure, I (poorly) comment my code and build object-oriented structures that would allow for some level of technical flexibility. But, at the end of the day, I’d always end up kicking myself for not imagining a particular use case in my original design and, as a result, doing a lot more band-aiding than I’d like to admit. The masters of software engineering extensibility are inspiring because they don’t just hold onto the task at hand, but have a vision for all sorts of different future directions that may never come to fruition. That thinking is so key to building anything, whether it be software or a campaign or a policy. And yet, it’s not a muscle that we train people to develop.

If we want to address some of the major challenges in civil society, we need the types of people who think 10 steps ahead in chess, imagine innovative ways of breaking things, and think with extensibility at their core. More importantly, we all need to develop that sensibility in ourselves. This is the hacker mindset.

This post was originally posted on Points. It builds off of a series of essays on topics affecting the public sphere written by folks at Data & Society. As expected, my earlier posts ruffled some feathers, and I’ve been trying to think about how to respond in a productive manner. This is my attempt.

Flickr Image: CC BY 2.0-licensed image by DaveBleasdale.

by zephoria at February 15, 2017 05:51 PM

February 12, 2017

Ph.D. student

the “hacker class”, automation, and smart capital


I mentioned earlier that I no longer think hacker class consciousness is important.

As incongruous as this claim is now, I’ve explained that this is coming up as I go through old notes and discard them.

I found another page of notes that reminds me there was a little more nuance to my earlier position than I remembered, which has to do with the kind of labor done by “hackers”, a term I reserve the right to use in the MIT/Eric S. Raymond sense, without the political baggage that has since attached to the term.

The point, made in response to Eric S. Raymond’s “How to be a hacker” essay, was that part of what it means to be a “hacker” is to hate drudgery. The whole point of programming a computer is so that you never have to do the same activity twice. Ideally, anything that’s repeatable about the activity gets delegated to the computer.

This is relevant in the contemporary political situation because we’re probably now dealing with the upshot of structural underemployment due to automation and the resulting inequalities. This remains a topic that scholars, technologists, and politicians seem systematically unable to address directly even when they attempt to, because everybody who sees the writing on the wall is too busy trying to get the sweet end of that deal.

It’s a very old argument that those who own the means of production are able to negotiate for a better share of the surplus value created by their collaborations with labor. Those who own or invest in capital, generally speaking, would like to increase that share. So there’s market pressure to replace reliance on skilled labor, which is expensive, with reliance on less skilled labor, which is plentiful.

So what gets industrialists excited is smart capital, or a means of production that performs the “skilled” functions formerly performed by labor. Call it artificial intelligence. Call it machine learning. Call it data science. Call it “the technology industry”. That’s what’s happening and been happening for some time.

This leaves good work for a single economic class of people, those whose skills are precisely those that produce this smart capital.

I never figured out what the end result of this process would be. I imagined at one point that the creation of the right open source technology would bring about a profound economic transformation. A far fetched hunch.


by Sebastian Benthall at February 12, 2017 10:14 PM

three kinds of social explanation: functionalism, politics, and chaos

Roughly speaking, I think there are three kinds of social explanation. I mean “explanation” in a very thick sense; an explanation is an account of why some phenomenon is the way it is, grounded in some kind of theory that could be used to explain other phenomena as well. To say there are three kinds of social explanation is roughly equivalent to saying there are three ways to model social processes.

The first of these kinds of social explanation is functionalism. This explains some social phenomenon in terms of the purpose that it serves. Generally speaking, fulfilling this purpose is seen as necessary for the survival or continuation of the phenomenon. Maybe it simply is the continued survival of the social organism that is its purpose. A kind of agency, though probably very limited, is ascribed to the entire social process. The activity internal to the process is then explained by the purpose that it serves.

The second kind of social explanation is politics. Political explanations focus on the agencies of the participants within the social system and reject the unifying agency of the whole. Explanations based on class conflict or personal ambition are political explanations. Political explanations of social organization make it out to be the result of a complex of incentives and activity. Where there is social regularity, it is because of the political interests of some of its participants in the continuation of the organization.

The third kind of social explanation is hardly an explanation at all. It is explanation by chaos. This sort of explanation is quite rare, as it does not provide much of the psychological satisfaction we like from explanations. I mention it here because I think it is an underutilized mode of explanation. In large populations, much of the activity that happens will do so by chance. Even large organizations may form according to stochastic principles that do not depend on any real kind of coordinated or purposeful effort.

It is important to consider chaotic explanation of social processes when we consider the limits of political expertise. If we have a low opinion of any particular person’s ability to understand their social environment and act strategically, then we must accept that much of their “politically” motivated actions will be based on misconceptions and therefore be, in an objective sense, random. At this point political explanations become facile, and social regularity has to be explained either in terms of the ability of social organizations qua organizations to survive, or the organization must be explained in a deflationary way: i.e., that the organization is not really there, but just in the eye of the beholder.
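
A toy simulation may make the chaotic mode of explanation more concrete (the random-interaction model is my own illustration, not drawn from any particular literature): if pairs of people form ties purely at random, a large connected cluster, which from the outside looks like an organization, appears as soon as the average number of ties per person crosses a threshold, with no purpose and no politics anywhere in the model.

```python
# A minimal sketch of explanation by chaos (my own assumption, not from the
# post): in a random graph, a giant connected cluster appears once random
# pairwise "interactions" are frequent enough, even though no agent is
# coordinating anything.
import random

random.seed(0)

def largest_cluster(n, p):
    """Size of the largest connected component when each of the n*(n-1)/2
    possible ties forms independently with probability p (union-find)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                parent[find(i)] = find(j)

    sizes = {}
    for i in range(n):
        root = find(i)
        sizes[root] = sizes.get(root, 0) + 1
    return max(sizes.values())

n = 500
for avg_ties in (0.5, 1.0, 2.0, 4.0):
    p = avg_ties / n
    print(f"avg ties per person = {avg_ties}: "
          f"largest cluster = {largest_cluster(n, p)} of {n}")
```

The point is not that real organizations are random graphs, only that the appearance of structure is not by itself evidence of either function or political intent.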


by Sebastian Benthall at February 12, 2017 02:36 AM

February 09, 2017

MIMS 2012

Artists don't distinguish between...

Artists don’t distinguish between the act of making something and the act of thinking about it — thinking and making evolve together in an emergent, concurrent fashion. As a result, when approaching a project, an artist often doesn’t seem to plan it out. She just goes ahead and begins, all the while collecting data that inform how she will continue. A large part of what drives her confidence to move forward is her faith in her ability to course correct and improvise as she goes.

— John Maeda, “Redesigning Leadership”

This quote from John Maeda’s book, Redesigning Leadership, really resonated with me. It captures my approach to problems and new challenges perfectly. I don’t stress too much about having every step planned out — I’ve learned to trust my intuition and follow new paths as they appear, having faith that they will lead me to a successful outcome.

“Improvise as she goes.” I never would have thought of it like that, but “improvising” is a great way to describe my approach.

by Jeff Zych at February 09, 2017 01:13 AM

February 06, 2017

Ph.D. student

immigration, automation, xenophobia, and jobs

“The divide is not between the left and right any more but between patriots and globalists.” – Marine Le Pen

I have been trying to get a grip on what’s going on with the global economy. This is hard because I get a lot of my news via Twitter and so can only comprehend arguments in 140 characters or less. Here are several that are floating around:

  • In UK, US, and France, there are those who blame globalization for their underemployment. They advocate for reduced immigration and import protectionism.
  • Economists like Larry Summers assure us that it is technological automation, not free trade, which has caused underemployment. That, and the actual emergence of emerging markets and their capacity to produce competitive goods.
  • Tech companies are rallying to fight Trump’s immigration ban. This is because they want top talent.

Though it pains me to say it, it looks like there is a missing link in the mainstream economic analysis, which is this: to the extent that highly talented immigrants help tech companies produce technology that automates work otherwise performed by non-immigrant labor, there is a real sense in which “immigrants have come to take [our/your] jobs.” It’s not through direct competition over low wages. It’s indirectly through automation.

That said, this is a drop in the bucket, as there’s plenty of domestic labor in the tech industry. If leaked memos and accounts of communications between U.S. leadership are to be believed, the xenophobic aspect of the new protectionism is due to the visibility of successful immigrants. If these successful immigrants are working in technology which has automated domestic jobs, then the racial or national otherness of the immigrants may be adding insult to injury, so to speak.

Please take all this with caveats about how all domestic labor is due to immigration, how the racial and cultural diversity of the countries in question is authentic to these nations’ identities, etc. I’m just trying to get at what the real sticking points are.


by Sebastian Benthall at February 06, 2017 04:51 PM

I’m no longer freaking out about societal collapse

I have been a little worried about societal collapse. I learned something new that made me less worried about it, which is that the 25th Amendment to the Constitution allows the Vice President and a majority of the cabinet to declare the President unable to discharge the duties of the office. The President can contest this, but if the VP and a majority of the cabinet make the declaration again, then Congress gets to vote on it.

I learned about this from FiveThirtyEight, which I suppose I should be paying more attention to. Their analysis reminds me of the chapter on Superteams from Tetlock’s Superforecasting: they sit around and critique each other’s views, adjusting their confidence in various hypotheses. Good for them!

In the specific case of the current Presidency and the Federal Government, what this new information does for me is significantly change what the options are for probable worst case scenarios. These worst case scenarios all involve the possibility that (a) Trump goes off the rails doing something truly terrible, possibly (b) trying to defy the authority of the Judicial branch entirely, essentially imposing martial law. This depends on (c) Congress being totally useless.

Earlier, I thought the only way to remove a standing President was impeachment, and given (c) that’s just not likely to happen.

However, given everything being said about the bitter infighting within the White House, it looks like a potential move by Pence and half the cabinet is totally within reason. The bet is largely on Pence’s ambition. He’s young enough to have a career ahead of him. He has more to gain from Defending the Constitution at the last minute than he does from following a lunatic into oblivion. The cabinet is shaping up to be full of ambitious rich people who benefit from having rule of law, as long as that law is not regulating any of their businesses.

I’m not saying that use of the 25th Amendment to depose President Trump is likely to happen. Rather, I think that it provides a check on his power that I hadn’t considered before. He can be reined in or threatened from within his own team, especially as it fills out.

This is no real comfort to all the people who would be disadvantaged by these policies. It tilts the odds in favor of the stability of the current government, with all of its vocal hostility to judges, immigrants, liberals, and so on.

My prediction is that the next four years are going to continue to be very uncomfortable for the public spirited. The federal government may not be the best place to find work in the public interest unless one is a social conservative, because the government will mainly be serving private interests.

It was interesting that so many of the Super Bowl ads today were about inclusivity and other left-wing values. If the government is pulling back its support for certain causes, that does not necessarily leave these causes without champions. A forward-looking question is: how will civil society and industry compensate for the things the government is not doing?


by Sebastian Benthall at February 06, 2017 06:42 AM

February 05, 2017

Ph.D. student

metaphysics and politics

In almost any contemporary discussion of politics, today’s experts will tell you that metaphysics is irrelevant.

This is because we are discouraged today from taking a truly totalizing perspective–meaning, a perspective that attempts to comprehend the totality of what’s going on.

Academic work on politics is specialized. It focuses on a specific phenomenon, or issue, or site. This is partly due to the limits of what it is possible to work on responsibly. It is also partly due to the limitations of agency. A grander view of politics isn’t useful for any particular agent; they need only the perspective that best serves them. Blind spots are necessary for agency.

But universalist metaphysics is important for politics precisely because if there is a telos to politics, it is peace, and peace is a condition of the totality.

And while a situated agent may have no need for metaphysics because they are content with the ontology that suits them, situated agents cannot alone make any guarantees of peace.

In order for an agent to act effectively in the interest of total societal conditions, they require an ontology which is not confined by their situation, which will encode those habits of thought necessary for maintaining their situation as such.

What motivates the study of metaphysics then? A motivation is that it provides one with freedom from one’s situation.

This freedom is a political accomplishment, and it also has political effects.


by Sebastian Benthall at February 05, 2017 04:28 PM

February 03, 2017

Ph.D. student

no, free speech was totally unaffected by the Berkeley violence

When I wrote the other day about anarchist tactics in resistance to perceived fascism, I had in mind non-violent tactics. I did not anticipate that soon after, Black Bloc anarchists would cause violence at the otherwise peaceful protest of Milo Yiannopoulos’s talk.

There has since been a back and forth about what any of this means in terms of the big picture of the nation’s politics.

I would like to argue that it means nothing.

There has been some commentary about the First Amendment. Berkeley’s the historical site of the Free Speech Movement. Right-wing commentators, including Yiannopoulos himself, are eager to paint the event as an ironic crisis of Free Speech. Donald J. Trump, President of the United States of America, has insinuated that UC Berkeley was complicit in the illegal silencing of Yiannopoulos. But these are stupid red herrings. The talk was canceled because a small minority of people who had nothing to do with UC Berkeley made the situation unmanageable, and public safety took priority. Meanwhile, Black Bloc anarchists are based in Oakland. So this has nothing to do with Berkeley.

Meanwhile, the whole conceit that somebody’s live speaking event at a college campus is a privileged moment in which Yiannopoulos could share his message is silly. This is somebody who has made a career through social media. Everyone who wanted to know what he was going to say could have looked up what he’s already said on-line. It’s because everybody already knew what he was going to say that people were pissed about him showing up.

There’s a lens on the event which is a familiar progressive refrain about the emotional powers of speech. Speech causes the transfer of hate, the triggering of traumas, it offends and causes emotional pain. When somebody says insensitive things, it can be painful. And there’s this idea that by maintaining a collective consciousness pure of bad thoughts, these painful ideas won’t spread.

But hasn’t politicized media already saturated the thoughts of anybody paying attention? The likelihood that the presence or non-presence of a speaker at UC Berkeley is going to be a student’s first encounter with an idea is small. To believe otherwise is nostalgia.

Since speech flows freely through social media, and has in fact never been freer, the events of the protest were all speech, including the violence. It was all performance. The speech of the Black Bloc was loud and clear: it said “F*** YOU FASCISTS.” It wasn’t directed at Yiannopoulos at all, obviously. It was a statement about everything else that’s going on. It turned subtext into text.

But it means nothing. It’s just politics of spectacle. The First Amendment is being invoked ignorantly and symbolically. Nobody is actually taking anybody else to court.

It’s a good question whether, how, and who can actually be taken to court over things that are being done in these crazy times.


by Sebastian Benthall at February 03, 2017 04:13 AM

February 01, 2017

Ph.D. student

Ohm and Post: Privacy as threats, privacy as dignity

I’m reading side by side two widely divergent law review articles about privacy.

One is Robert Post‘s “The Social Foundations of Privacy: Community and Self in Common Law Tort” (1989) (link)

The other is Paul Ohm‘s “Sensitive Information” (2014) (link)

They are very notably different. Post’s article diverges sharply from the intellectual milieu I’m used to. It starts with an exposition of Goffman’s view of the personal self as being constituted by ceremonies and rituals of human relationships. Privacy tort law is, in Post’s view, about repairing tears in the social fabric. The closest thing to this that I have ever encountered is Fingarette’s book on Confucianism.

Ohm’s article is much more recent and is in large part a reaction to the Snowden leaks. It’s an attempt to provide an account of privacy that can limit the problems associated with massive state (and corporate?) data collection. It attempts to provide a legally informed account of what information is sensitive, and then suggests that threat modeling strategies from computer security can be adapted to the privacy context. Privacy can be protected by identifying and mitigating privacy threats.

As I get deeper into the literature on Privacy by Design, and observe how privacy-related situations play out in the world and in my own life, I’m struck by the adaptability and indifference of the social world to shifting technological infrastructural conditions. A minority of scholars and journalists track major changes in it, but for the most part the social fabric adapts. Most people, probably necessarily, have no idea what the technological infrastructure is doing and don’t care to know. It can be coopted, or not, into social ritual.

If the swell of scholarship and other public activity on this topic was the result of surprising revelations or socially disruptive technological innovations, these same discomforts have also created an opportunity for the less technologically focused to reclaim spaces for purely social authority, based on all the classic ways that social power and significance play out.


by Sebastian Benthall at February 01, 2017 06:44 PM

January 31, 2017

Ph.D. student

gamers, collective intelligence, airport protests, democratic surrounds, and blue ooze

I was captivated and unnerved by Jordan Greenhall’s “Situational Assessment 2017: Trump Edition”. It is a kind of futurist writing I appreciate. I’m personally able to put aside the criticism that it sounds like he’s making the future of the country into a role-playing game because I’ve played a lot of Dungeons and Dragons and don’t pretend to not appreciate role-playing games as a flexible and effective cognitive frame. Also, it seems quite likely that an important political bloc in the United States right now is gamers.

As wretched as Gamer Gate was, the most wretched thing about it was how little light was shed on the Gamer demographic. We were led to believe that Gamers are mainly white, male, and lacking in progressive sophistication. The mainstream left wing critique of the Gamers of Gamer Gate was reductivist [see correction in footnote], focusing on the most visible and extreme actions and thereby alienating the probably much larger number of people who could be loosely identified as Gamers who didn’t fit the archetype constructed by their opposition.

This is a metaphor for all political opposition in this nonsense media environment. A little bit of effort and empathy goes a long way, but most people don’t care enough to bother.

Put yourself in the shoes of a white guy who spends a lot of time, you know, gaming. You’re probably underemployed and not very geographically mobile. Your primary source of entertainment is grand narrative driven virtual combat and conquest. There is the existential angst that comes with all your victories taking place in environments that don’t actually exist and the alienation that comes from being socialized into a mock military full of teenagers. Your actual lifestyle is fine, in an objective sense, but it is boring as hell. The smartest things you can find to read are written by coastal journalists or academics, but they are full of postmodern and multiculturalist sentiments that have nothing to do with your lived experience in, let’s say, the Rust Belt. So you start reading Alt Right materials because it’s a refreshing change. Now you’re strong for Trump.

This is what Max Weber would call an Ideal Type. I don’t have any data to back up this characterization of the Gamer. You could call it a hunch. And building on Greenhall’s futurist essay, these Gamers are the Red Insurgency. Being the Red Insurgency is very appealing to the Gamer, because it’s basically just like playing a video game except you play propaganda wars on the Internet and you’ve recently managed to take over the U.S. government.

The disturbing thing about Greenhall’s essay, for me, is his insistence that the Red Insurgency will win against the Deep State or “Blue Church” because of its superior adaptability and faster response loop. Essentially, Greenhall’s argument is that the collective intelligence architecture of the Red Insurgency is superior to the entrenched bureaucracy of the state and so in a kind of disruptive innovation the former will replace the latter.

This is far-fetched. Like many polemic arguments written across the political spectrum, it doesn’t take into account the horrific complexity of managing a broadly integrated society. (This is the same criticism I’ve had of Pasquale, who writes as if it’s time for a populist revolt against the Deep State of Google, etc.) Recent events surrounding Trump’s executive orders show where the video game stops and life begins. You can win an election on a platform of Gamer-baiting slogans, but if you try to write them into executive orders it turns out that they probably interact with existing laws. Barring the absolutely terrifying prospect of washing away the entire court system, it appears that the whole game being played between the Red Insurgency and the Blue Church does indeed have rules. And those rules are boring.

Meanwhile, there’s something else happening, which is the organic reformation of the coastal populist left, which never really went away and is after all more populous than Greenhall’s Red Insurgency. However unable they are to win seats in Congress, they are able to rally.

I went to SFO last weekend to see what the protest was like. It was totally different from playing a video game. There were lots of people there, you know, in person. It was interracial, it was multigenerational. There was a purposeful pluralism. A pluralism of everyone you’d expect, but a pluralism nonetheless.

Greenhall makes some observations about the fluidity and non-linearity of collective intelligence in groups that make use of the Internet for their main means of communication. While I believe his assessment of the properties of this kind of collective intelligence is true, it’s also an awkward articulation of what has been described elsewhere and in more depth. Castells’s theory of the Network Society (2000) was such a good account of what’s going on that the next generation of academics had to bury the theory so that they wouldn’t have to parrot it. One of its totally reasonable points (in The Power of Identity) is that in the Network Society there’s a politics of identity whereby state ideologies are challenged by social movements that are themselves a kind of actor in global politics. I forget what he has to say about liberalism. But he writes a lot about the Zapatistas, who were a left-wing Mexican revolutionary…collective intelligence.

There’s a sense in which non-violent protest advances in terms of its tactics. The sociologist in me remembers Occupy with a kind of fondness because while it was probably ineffective at achieving its political aims, whatever they were, I was led to believe that that wasn’t actually the point of it. The point of Occupy was to keep the social technology of non-violent urban protest well-oiled and calibrated to changing media environments. There was a nation-wide general assembly of decentralized cells of protesters. There was the training of a generation of activists in the use of the people’s microphone. There was the mobilization of social media for the scandalization of police brutality. It was specifically an anarchist movement, the first of many political movements mounted against the U.S. establishment. It didn’t work.

At San Francisco Airport the protesters used a human microphone to announce that the airport was supporting the protest, allowing the demonstrators to block the gates. Airports, it turns out, are great places for protests. Tons of amenities. Also, the symbolism.

You may not believe it, but I do have a point. The point is about how I think the airport protests are like the binding together of neurons for a grassroots collective intelligence. It’s a grassroots collective intelligence that would be totally mundane in so many other decades, because it’s actually just liberalism.

But if we can believe Fred Turner’s argument in The Democratic Surround, liberalism didn’t just happen for no reason. Liberalism was an invention of American intellectuals in response to the rising threat of European Fascism. The story goes: Hitler mastered the use of mass media, and American intellectuals thought the technology itself was partly responsible for fascist politics. It allowed, perhaps for the first time, the direct witnessing of a charismatic crazy man by a population not accustomed to seeing such things. And they were enthralled.

President Roosevelt was already doing his Fireside Chats and there was a concern that this media strategy would turn the United States fascist as well. Saving the day, in Fred Turner’s telling, was an intellectual coalition of exiled Bauhaus artists as well as Gregory Bateson and Margaret Mead just back from an anthropological expedition in Bali. I think Adorno is in there somewhere. John Cage as well, though I find that part of the argument unconvincing.

Long story short, there’s a new kind of art installation that emerges in World War II called “The Democratic Surround” which is also a metaphor for Facebook. It’s an exhibition that features images showing the variety of people that there are in America, or the variety of consumer products available. It’s a celebration of variety. You walk through it and are invited to find your unique place within it. You see a picture of a family that reminds you of yours and you think, “Ah, I am part of something greater than myself, whose value is in its diversity.” This becomes your national identity. Then you go fight the fascists.

The Democratic Surround was a nationalist project. It had two major catches. The first is that it was a carefully managed experience. It was a curated art exhibit, after all. Later versions of it would be carefully instrumented to measure traffic through it and the psychological impact on its audience. This makes the Democratic Surround reminiscent of the Panopticon in a way that’s useful, since the whole point of The Democratic Surround is to try to put a more positive spin on the lush surveillance state Silicon Valley invented for us.

The second catch is that when it was being used as a national propaganda device, the picture of America being shown to citizens was largely premature. It was a picture of racial and gender equality and of social integration. But the Civil Rights movement, for example, hadn’t happened yet. So all this pluralism was aspirational. It was a promise of what America could become if it won the war against Fascism in Europe.

It’s sixty years later now and much of U.S. history since then has been making that pluralistic vision a reality. Of course, it’s only a reality in certain dense cities. But a lot of people live in those cities and so they are culturally dominant. To some extent the multiculturalism we have now is just what it’s necessary to believe in order to survive, politically, in those diverse urban environments.

A lot of this very real diversity was on display at the SFO protest and I assume at other protests in other airports. It got me thinking about The Democratic Surround because it was (a) explicitly nationalistic–the signage was definitely about America, and (b) explicitly pluralistic. What made it significantly unlike The Democratic Surround was that (a) it was a lot of actual people, not some media or “communications” bullshit, and (b) it was managed very loosely. I mentioned the human microphone. There were people who were acting as organizers, but my sense was that these were organizers in the anarchist tradition. There were large stockpiles of food and water freely available for protesters. People with bullhorns invited people to come up and testify about the personal meaning of the event. It was a proper rally.

What I’ve been trying to argue throughout all this is that there is a new political identity emerging from this mess. Its collective intelligence architecture is that of a networked anarchist movement. Which is to say fast, messy, problematically inclusive, and fun. But its politics are actually quite traditionally American: it’s to stop Fascism. Or the specter of it. Whether or not there is a real threat of Fascism in America, the more there is the appearance of one, the more liberals are going to start using anarchist tactics.

If all goes well, this provides a counterbalance to what Greenhall calls the Red Insurgency. Let’s call it, for the sake of argument, Blue Ooze. Blue Ooze isn’t part of the Deep State; it’s a different intelligence structure. It’s conservative, in the sense of resisting radicalism or change. Its purpose is to cool off the whole political process by legitimizing the Deep State, which is mostly just fine. If successful, it will sustain bipartisan power and otherwise maintain the status quo.

Blue Ooze is always already coopted by global capital yadda yadda you’ve heard it all before.

Note: I stand corrected by a good friend and colleague. Part of the point of the original critiques that led to GamerGate was in fact the argument that gamers were not just white men with certain predictable tastes, but rather a much more diverse group. I have fallen prey to the reductivism of the consequent journalism on the subject, i.e. the narrative that was being pushed afterwards by Gawker. My focus above has been on widening, ever so slightly, the conception of that Gamer. As a gamer myself who has never lived in a flyover state, I would have to say that I too am an exception to the ideal type presented above. If my tone is read accurately, the purpose of this blog post is to provide a countervailing view of activism as a way of playing the political game that is open to all.


by Sebastian Benthall at January 31, 2017 07:57 AM

January 30, 2017

MIMS 2012

Shifting from a Product-centric to a Service-centric Mindset

Over the past few months I’ve shifted from a product-centric mindset to a service-centric mindset. My focus used to be on building products that help people accomplish a task or goal. That meant I would try to understand the problem to solve, who it’s being solved for, and then design digital products to solve that problem.

But as I’ve grown as a designer, become a manager, and seen Optimizely move into the enterprise market, I’ve realized that a lot more goes into making a product successful than the product itself. Companies often offer additional services to make customers successful.

A service is a touchpoint or system provided by a company to fulfill a need. A touchpoint is how someone uses a service — a website, phone line, ticket kiosk, and so on.

Most digital products, for example, have additional online properties to help customers be successful, like a knowledge base. Companies can also provide non-digital services, such as a support line customers can call or email.

Even though a service may not have a visual interface, it can still be thoughtfully designed. To make good decisions about how these services work, you still need a solid understanding of your users and their goals. This is what product designers do when designing a product, the only difference being that the final deliverable is not a visual interface.

Shifting to a service mindset makes it obvious that new technologies that have invisible UIs, like Alexa and Operator, can be thought of as services and designed just like any other service. In its simplest form, design is the act of making thoughtful decisions. Having empathy and understanding a user’s goals, motivations, and context help designers make thoughtful decisions. These activities apply to services and invisible UIs just as much as creating visual interfaces.

On top of that, all of the products and services that a company offers its customers need to work in concert with each other. This means that it isn’t enough for each product and service to be well-designed on its own — they also need to be designed to seamlessly work together to make customers successful. Doing this also requires having a broad understanding of your customers.

When I had a product-centric mindset I was aware of the different touchpoints, but I hadn’t put much effort into designing them all as a cohesive, interrelated experience. Customers may use the knowledge base and email support while using the product, but that’s for the support team to manage. “I’m just going to make the product great because that’s all that customers need to be successful,” I used to think. I’ve since learned that isn’t true. It takes more than the product itself to make customers successful.

Learning about the discipline of service design has helped me connect all the different touchpoints customers use into one unified framework. Everything is a service — products included. And they can all be thoughtfully designed by using the core skills designers already have. By doing so, customers will have a better experience with your products and services, which will make them more successful, and that will ultimately make your company more successful.


If you’re interested in learning more about service design, these books and articles have taught me a lot:

by Jeff Zych at January 30, 2017 03:20 AM

January 27, 2017

Ph.D. alumna

The Information War Has Begun

Yesterday, Steve Bannon clearly articulated what many people have felt and known for quite some time when he told journalists, “You’re the opposition party. Not the Democratic Party… The media’s the opposition party.” This builds on earlier remarks by Trump, who said, “I have a running war with the media.”

Journalists have covered this with their “objective” voice as though it was another news story in the crazy first week of WTF moments. Many of those who value the media have looked at this with wide eyes, struggling to assess which of the many news stories they should be more horrified by. Far too few are getting the point:

The news media have become a pawn in a big chess game of an information war. 

News agencies, long trained to focus on reporting information and maintaining a conceptual model of standards, are ill-equipped to understand that they may have a role in this war, that their actions and decisions are shaping the way the war plays out.

When Kellyanne Conway argued that they were operating with “alternative facts,” the media mocked her. They tried to dismiss her comment that the media has a 14% approval rating by fact-correcting this to point out that this was only a Gallup poll concerning the media’s approval rating among Republicans. But they missed her greater point: there’s no cost to the administration in being hostile to the media, because the people the Trump Administration cares about don’t trust the media anyhow.

CC-BY-NC-ND 2.0-licensed photo by Mark Deckers.

How many years did it take for the US military to learn that waging war with tribal networks couldn’t be fought with traditional military strategies? How long will it take for the news media to wake up and recognize that they’re being played? And how long after that will it take for editors and publishers to start evolving their strategies?

As I wrote in “Hacking the Attention Economy,” manipulating the media for profit, ideology, and lulz has evolved over time. The strategies that hackers, hoaxers, and haters have taken have become more sophisticated. The campaigns have gotten more intense. And now many of the actors most set on undermining institutionalized information intermediaries are in the most powerful office in the land. They are waging war on the media and the media doesn’t know what to do other than to report on it.

We’ve built an information ecosystem where information can fly through social networks (both technical and personal). Folks keep looking to the architects of technical networks to solve the problem. I’m confident that these companies can do a lot to curb some of the groups who have capitalized on what’s happening to seek financial gain. But the battles over ideology and attention are going to be far trickier. What’s at stake isn’t “fake news.” What’s at stake is the increasing capacity of those committed to a form of isolationist and hate-driven tribalism that has been around for a very long time. They have evolved with the information landscape, becoming sophisticated in leveraging whatever tools are available to achieve power, status, and attention. And those seeking a progressive and inclusive agenda, those seeking to combat tribalism to form a more perfect union —  they haven’t kept up.

The information war has begun. Normative approaches to challenging the system will not work. What will it take for news media to wake up? What will it take for progressives to start developing skills to fight back?

by zephoria at January 27, 2017 04:54 PM

January 24, 2017

Ph.D. student

update: no longer think “hacker class consciousness” is important

I’m going through old papers and throwing them out. I came upon an early draft from my first year in graduate school titled “Hacker Class Consciousness”. It was the beginning of an argument that those who work on open source software needed to develop a kind of class consciousness, recognizing that their work bears a special relationship to capitalist modes of production. Open source software is a form of capital (a means of production) that is not privately owned. Hence, it is actually quite disruptive to capitalism per se. A la early Marxist theory, a political identity or “class consciousness” of people working in this way was necessary to reform the government to make it more equitable, more environmentally friendly, less violent, or whatever your critique of capitalism (or neoliberalism, if you prefer) demands.

I didn’t get very far past this basic economic logic, which I still think is correct. I no longer think that class consciousness is important though. And I don’t think there’s an inevitability to capitalism containing the seeds of its own revolution through the eventual triumph of open source production.

I think it’s a good practice to make oneself accountable when one changes one’s mind. There’s lots of evidence to say that when people publicly commit to some belief, they wind up sticking to it with more confidence than they ought to. Shame-related reasons, I suppose. A good alternative habit, I believe, is publicly admitting when you are wrong about something, with the reasons for the update.

So why did I change my mind on this? Well, one reason is that I took some shots at formally modelling the problem several years ago, and while the model showed the robustness of open source software as a way of opening a market that had previously been dominated or locked in by a proprietary vendor or solution, it also showed that there isn’t a profit motive driving open source production as a first mover. So the natural pressures of the market make open source coexist alongside proprietary systems, providing a countervailing force to privatization but never dissolving it entirely.

Another reason I changed my mind was a more general shift away from Marxist to Bourdieusian modes of thinking, which I’ve talked about here. A key part of this change in perspective is that it sees many kinds of capital at work in society, including both economic and cultural forms, and populations are distributed across the resulting multidimensional spectrum of variation, not stratified into a one-dimensional class structure. In such a world, class consciousness is futile. This futility may explain the futility of the Marxist project in general, as there was never really the kind of global collective action of the proletariat that Marx predicted would end capitalism. There are always too many other kinds of population difference at work to allow for such a revolution. Race, for example.

It is good that a matured attitude has left me less eager to engage in a futile revolutionary project. There’s nothing like pursuing a doctorate for grinding that kind of idealism out of you. Now I can scintillate with cynicism, and would like to be much better at it. Which is to say, I’m beginning to regret ever turning away from the dismal science of economics, which now seems much more like the doctrine worth pursuing and improving.

One nice thing about economics is that it is quantitatively rigorous. This is not simply an intellectual gate-keeping statement designed to box out the innumerate. It’s rather a comment on how such a field has strictly more expressive power because of its capacity to represent a statistical distribution of variation. It’s not enough to say there’s black and white when there are shades of gray. And it’s not enough to say there are shades of gray when the particular variation in density of light across the field is what’s important.

A grayscale raster, from the OpenGeo Suite
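
As a minimal illustration of that point about expressive power (a sketch of my own, assuming numpy; the raster here is synthetic, standing in for a grayscale image like the one above rather than any particular dataset):

```python
import numpy as np

# Synthetic stand-in for a grayscale raster: an 8-bit image as a 2-D array.
rng = np.random.default_rng(0)
raster = rng.integers(0, 256, size=(100, 100))

# "Black and white": a single threshold throws away most of the information.
binary = raster > 127
print("fraction above threshold:", binary.mean())

# "Shades of gray": the full distribution of light intensity across the field.
counts, bin_edges = np.histogram(raster, bins=16, range=(0, 256))
for lo, n in zip(bin_edges[:-1], counts):
    print(f"{int(lo):3d}-{int(lo) + 15:3d}: {n}")
```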

It’s this kind of expressive power that gives computational social science much of its appeal. I forgot to even make this argument in my paper about the subject. That may be because this notion of the expressive power of different representational systems is part of what one learns in the course of one’s computer science education, and that argument was written primarily for people without a computer science education.

Which really brings the discussion back around to where I come down on the revolutionary economic potential of software development. Which is that really, it’s about educating people in the concepts and skills that allow them to make use of this incredible pool of openly available technical capital; that is what gives people the “class consciousness” to act with it. Since late modern software development depends for its very existence on the great open wealth of collectivized logic already crystallized into free code, the “consciousness” is really just the habitus of the developer. I suppose I occasionally meet somebody who says they’ve been coding in .NET for their whole career, but they are rare and I think are not doing well in the greater information economy.

It is no coincidence that technical education and skills diffusion are, for Thomas Piketty, the way to counteract the inequality that results from disparate returns on wealth versus labor. This is a position one simply converges on if one studies it for long enough. Kindly, it stabilizes the role of the education system as one that is necessary for correcting other forms of societal destabilization and excess.


by Sebastian Benthall at January 24, 2017 02:14 AM

January 21, 2017

MIMS 2011

Human-bot relations at ICA 2017 in San Diego

News this week that a panel I contributed to on political bots has been accepted for the annual International Communication Association (ICA) conference in San Diego with Amanda Clarke, Elizabeth Dubois, Jonas Kaiser and Cornelius Puschmann this May. Political bots are automated agents that are deployed on social media platforms like Twitter to perform a variety of functions that are having a significant impact on politics and public life. There is already some great work about the negative impact of bots that are used to “manipulate public opinion by megaphoning or repressing political content in various forms” (see politicalbots.org) but we were interested in the types of bots these bots are often compared to — the so-called “good” bots that expose the actions of particular types of actors (usually governments) and thereby bring about greater transparency of government activity.

Elizabeth, Cornelius and I worked on a paper about WikiEdits bots for ICA last year in the pre-conference: “Algorithms, Automation, Politics” (“Keeping Ottawa Honest — One Tweet at a Time?” Politicians, Journalists and their Twitter bots, PDF) where we found that the impact of these bots isn’t as simple as bringing about greater transparency. The new work that we will present in May is a deeper investigation of the types of relationships that are catalysed by the existence and ongoing development of transparency bots on Twitter. I’ll be working on the relationship between bots and their creators in both Canada and South Africa, attempting to investigate the relationship between the bots and the transparency that they promise. Cornelius is looking at the relationship between journalists and bots, Elizabeth and Amanda are looking at the relationship between bots and political staff/government employees, and Jonas will be looking more closely at bots and users. The awesome Stuart Geiger who has done some really great work on bots has kindly agreed to be a respondent to the paper.

You can read more about the panel and each of the papers below.

Do people make good bots bad?

Political bots are not necessarily good or bad. We argue the impact of transparency bots (a particular kind of political bot) rests largely on the relationships bots have with their creators, journalists, government and political staff, and the general public. In this panel each of these relationships is highlighted using empirical evidence and a respondent guides wider discussion about how these relationships interact in the wider political and media system.

This panel challenges the notion that political bots are necessarily good or bad by highlighting relationships between political actors and transparency bots. Transparency bots are automated social media accounts which report behaviour of political players/institutions and are normally viewed as a positive force for democracy. In contrast, bot activity such as astroturfing and the creation of fake followers or friends on social media has been examined and critiqued as nefarious in academic and popular literature. We assert that the impact of transparency bots rests largely on the relationships bots have with their creators, journalists, government and political staff, and the general public. Each panelist highlights one of these relationships (noting related interactions with additional actors) in order to answer the question “How do human-bot relationships shape bots’ political impact?”

Through comparative analysis of the Canadian and South African Wikiedits bots, Ford shows that transparency is not a potential affordance of the technology but rather of the conditions in place between actors. Puschmann considers the ways bots are framed and used by journalists in a content analysis of news articles. Dubois and Clarke articulate the ways public servants and political staff respond to the presence of Wikiedits bots revealing that internal institutional policies mediate the relationships these actors can have with bots. Finally, Kaiser asks how users who are not political elite actors frame transparency bots making use of a quantitative and qualitative analysis of Reddit content.

Geiger (respondent) then poses questions which cut across the relationships and themes brought out by panelists. This promotes a holistic view of the bot in its actual communicative system. Cross-cutting questions illustrate that the impact of bots is seen not simply in dyadic relationships but also in the ways various actors interact with each other as well as with the bots in question.

This panel is a needed opportunity to critically consider the political role and impact of transparency bots, considering the bot in context. Much current literature assumes political bots have significant agency; however, bots need to interact with other political actors in order to have an impact. A nuanced understanding of the different types of relationships that exist among political actors and bots is thus essential. The cohesive conversation presented by panelists allows for a comparison across the different kinds of bot-actor relationships, focusing in detail on particular types of actors and then zooming out to address the wider system inclusive of these relationships.

1. Bots and their creators
Heather Ford

Bots – particularly those with public functions such as government transparency – are often created and recreated collaboratively by communities of technologists who share a particular world view of democracy and of technology’s role in politics and social change. This paper will focus on the origins of bots in the motivations and practices of their creators focusing on a particular case of transparency bots. Wikipedia/Twitter bots are built to tweet every time an editor within a particular government IP range edits Wikipedia as a way of notifying others to check possible government attempts to manipulate facts on the platform. The outputs of Wikipedia/Twitter bots have been employed by journalists as sources in stories about governments manipulating information (Ford et al, 2016).
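
A minimal sketch of the basic mechanism described above (my own illustration, not the code behind @gccaedits or the other bots studied here; it assumes the public Wikimedia EventStreams recentchange feed, the `sseclient` package, and Python’s standard ipaddress module, and stubs out the tweeting step; the IP range is a placeholder):

```python
import ipaddress
import json

from sseclient import SSEClient  # assumed available (the `sseclient` package)

# Hypothetical IP range attributed to a government network (placeholder value).
GOV_RANGES = [ipaddress.ip_network("192.0.2.0/24")]

# Wikimedia's public stream of recent changes across its wikis.
STREAM_URL = "https://stream.wikimedia.org/v2/stream/recentchange"


def is_government_ip(user):
    # Anonymous Wikipedia edits record the editor's IP address as the user name.
    try:
        addr = ipaddress.ip_address(user)
    except ValueError:
        return False  # a registered account name, not an IP
    return any(addr in net for net in GOV_RANGES)


def tweet(text):
    # Stub: a real bot would post here via the Twitter API.
    print("WOULD TWEET:", text)


for event in SSEClient(STREAM_URL):
    if not event.data:
        continue  # skip keep-alive events with no payload
    change = json.loads(event.data)
    if change.get("type") == "edit" and is_government_ip(change.get("user", "")):
        tweet(f"Anonymous edit from a government IP range to Wikipedia article: {change.get('title')}")
```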

Investigating the relationship between bot creators and their bots in Canada and South Africa by following the bots and their networks using mixed methods, I ask: To what extent is transparency an affordance of the particular technology being employed? Or is transparency rather an affordance of the conditions in place between actors in the network? Building from theories of co-production (Jasanoff, 2004) and comparing the impact of Wikipedia/Twitter bots on the news media in Canada and South Africa, this paper begins to map out the relationships that seem to be required for bots to take on a particular function (such as government transparency). Findings indicate that bots can only become transparency bots through the enrolling of allies (Callon, 1986) and through particular local conditions that ensure success in achieving a particular outcome. This is a stark reminder of the connectedness of human-machine relations and the limitations on technologists to fully create the world they imagine when they build their bots.

 

2. Bots and Journalists
Cornelius Puschmann

Different social agents — human and non-human — compete for attention, spread information and contribute to political debates online. Journalism is impacted by digital automation in two distinct ways: Through its potentially manipulative influence on reporting and thus public opinion (Woolley & Howard, 2016, Woolley, 2016), and by providing journalists with a set of new tools for providing insight, disseminating information, and connecting with audiences (Graefe, 2016; Lokot & Diakopoulos, 2015). This contribution focuses primarily on the first aspect, but also takes the second into account, because we argue that fears of automation in journalism may fuel reservations among journalists regarding the role of bots more generally.

To address the first aspect, we present the results of a quantitative content analysis of English-language mainstream media discourse on bots. Building on prior research on the reception of Bots (Ford et al, 2016), we focus on the following aspects in particular:

– the context in which bots are discussed,

– the evaluation (“good” for furthering transparency, “bad” because they spread propaganda),

– the implications for public deliberation (if any).

Secondly, we discuss the usage of bots and automation for the news media, using a small set of examples from the context of automated journalism (Johri, Han & Mehta, 2016). Bots are increasingly used to automate particular aspects of journalism, such as the generation of news items and the dissemination of content. Building on these examples we point to the “myriad ways in which news bots are being employed for topical, niche, and local news, as well as for providing higher-order journalistic functions such as commentary, critique, or even accountability” (Lokot & Diakopoulos, 2015, p. 2).

 

3. Bots and Government/Political Staff
Elizabeth Dubois and Amanda Clarke

Wikiedits bots are thought to promote more transparent, accountable government because they expose the Wikipedia editing practices of public officials, especially important when those edits are part of partisan battles between political staff, or enable the spread of misinformation and propaganda by properly neutral public servants. However, far from bolstering democratic accountability, these bots may have a perverse effect on democratic governance. Early evidence suggests that the Canadian Wikiedits bot (@gccaedits) may be contributing to a chilling effect wherein public servants and political staff are editing Wikipedia less or editing in ways that are harder to track in order to avoid the scrutiny that these bots enable (Ford et al, 2016). The extent to which this chilling effect shapes public officials’ willingness to edit Wikipedia openly (or at all), and the role the bot plays in inducing this chilling effect, remain open questions ripe for investigation. Focusing on the bot tracking activity in the Government of Canada (@gccaedits), this paper reports on the findings of in-depth interviews with public and political officials responsible for Wikipedia edits as well as analysis of internal government documents related to the bot (retrieved through Access to Information requests).

We find that internal institutional policies, constraints of the Westminster system of democracy (which demands public servants remain anonymous, and that all communications be tightly managed in strict hierarchical chains of command), paired with primarily negative media reporting of the @gccaedits bot, have inhibited Wikipedia editing. This poses risks to the quality of democratic governance in Canada. First, many edits revealed by the bot are in fact useful contributions to knowledge, and reflect the elite and early insider insight of public officials. At a larger level, these edits represent novel and significant disruptions to a public sector communications culture that has not kept pace with the networked models of information production and dissemination that characterize the digital age. In this sense, the administrative and journalistic response to the bot’s reporting sets back important efforts to bolster Open Government and digital era public service renewal. Detailing these costs, and analysing the role of the bot and human responses to it, this paper suggests how wikiedit bots shape digital era governance.

4. Bots and Users
Jonas Kaiser

Users interact with bots online on a daily basis. Bots tweet, upvote, or comment; in short, they participate in many different communities and are involved in shaping users’ perceptions. Based on this experience, users’ perspectives on bots may differ significantly from those of journalists, bot creators, or political actors. Yet this perspective has been ignored in the literature so far. As such, we are missing an integral perspective on bots that may help us understand how the societal discourse surrounding bots is structured. To analyze how and in which contexts users talk about transparency bots, a content analysis and topic analysis of Reddit comments from 86 posts in 48 subreddits on the issue of Wikiedits bots will be conducted. This proposal’s research focuses on two major aspects: 1) how Reddit users frame transparency bots, and 2) what other topics they associate with them.

Framing in this context is understood as “making sense of relevant events, suggesting what is at issue” (Gamson & Modigliani, 1989, p. 3). Even though some studies have shown, for example, how political actors frame bots (Ford, Dubois, & Puschmann, 2016), a closer look at the users’ side is missing. But this perspective is important, as non-elite users may have a different view than the more elite political actors, and it can help us understand how they interpret bots. This overlooked perspective, then, could have meaningful implications for political actors or bot creators. At the same time, it is important to understand the broader context of the user discourse on transparency bots to properly connect the identified frames with overarching topics. Hence an automated topic modeling approach (Blei, Ng & Jordan, 2003) is chosen to identify the underlying themes within the comments. By combining frame analysis with topic modeling, this project will highlight the way users talk about transparency bots and the contexts in which they do so, and thus emphasize the role of users within the broader public discourse on bots.
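
A minimal sketch of the kind of topic-modeling step described above (assuming a recent scikit-learn; the comments, number of topics, and other parameters are placeholders of mine, not the panel’s actual data or pipeline):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder corpus standing in for Reddit comments about Wikiedits bots.
comments = [
    "the bot tweets every anonymous government edit to wikipedia",
    "transparency bots make it easy to watch what officials change",
    "this bot spams my feed with trivial edits",
]

# Bag-of-words representation of the comments.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(comments)

# Fit an LDA model (Blei, Ng & Jordan, 2003) with a small number of topics.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

# Print the top words per topic as a rough view of the underlying themes.
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[:-6:-1]]
    print(f"topic {i}: {', '.join(top)}")
```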

Bibliography

Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent dirichlet allocation. Journal of Machine Learning Research, 3, 993-1022.

Callon, M. (1986). “Some Elements of a Sociology of Translation: Domestication of the Scallops and the Fishermen of St Brieuc Bay”. In John Law (ed.), Power, Action and Belief: A New Sociology of Knowledge (London: Routledge & Kegan Paul).

Ford, H., Dubois, E., & Puschmann, C. (2016). Automation, Algorithms, and Politics | Keeping Ottawa Honest—One Tweet at a Time? Politicians, Journalists, Wikipedians and Their Twitter Bots. International Journal of Communication, 10, 24.

Gamson, W. A., & Modigliani, A. (1989). Media Discourse and Public Opinion on Nuclear Power: A Constructionist Approach. American Journal of Sociology, 95(1), 1-37.

Graefe, A. (2016). Guide to automated journalism. http://towcenter.org/research/guide-to-automated-journalism/

Jasanoff, S. (2004). States of Knowledge: The Co-Production of Science and the Social Order. (London: Routledge Chapman & Hall)

Johri et al. (2016). Domain specific newsbots. Live automated reporting systems involving natural language communication. Paper presented at 2016 Computation + Journalism Symposium.

Lokot, T. & Diakopoulos, N. (2015). News bots: Automating news and information dissemination on Twitter. Digital Journalism. doi: 10.1080/21670811.2015.1081822

Woolley, S. C. (2016). Automating power: Social bot interference in global politics. First Monday. doi: 10.5210/fm.v21i4.6161

Woolley, S. C., & Howard, P. (2016). Bots unite to automate the presidential election. Retrieved Jun. 5, 2016, from http://www.wired.com/2016/05/twitterbots-2/


by Heather Ford at January 21, 2017 03:40 PM

Ph.D. student

consequences of scale

Here’s some key things about an economy of control:

  • An economy of control is normally very stable. It’s punctuated equilibrium. But the mean size of disruptive events increases over time, because each of these events can cause a cascade through an increasingly complex system.
  • An economy of control has enormous inequalities of all kinds of scale. But there’s a kind of evenness to the inequality from an information theoretic perspective, because of a conservation of entropy principle.
  • An economy of control can be characterized adequately using third order cybernetics. It’s an unsolved research problem to determine whether third order cybernetics is reducible to second order cybernetics. There should totally be a big prize for the first person who figures this out. That prize is a very lucrative hedge fund.
  • An economy of control is, of course, characterized mainly by its titular irony: there is the minimum possible control necessary to maintain the system’s efficiency. It’s a totalizing economic model of freedom maximization.
  • Economics of control is to neoliberalism and computational social science what neoliberalism was to political liberalism and neoclassical economic theory.
  • The economy of control preserves privacy perfectly at equilibrium, barring externalities.
  • The economy of control internalizes all externalities in the long run.
  • In the economy of control, demand is anthropic.
  • In the economy of control, for any belief that needs to be shouted on television, there is a person who sincerely believes it who is willing to get paid to shout it. Journalism is replaced entirely by networks of trusted scholarship.
  • The economy of control is sociologically organized according to two diverging principles: the organizational evolutionary pressure familiar from structural functionalism, and entropy. It draws on Bataille’s theory of the general economy. But it borrows from Ulanowicz the possibility of life overcoming thermodynamics. So to speak.

Just brainstorming here.


by Sebastian Benthall at January 21, 2017 03:59 AM

January 19, 2017

Ph.D. student

what if computers don’t actually control anything important?

I’ve written a lot (here, informally) on the subject of computational control of society. I’m not the only one, of course. There has in the past few years been a growing fear that one day artificial intelligence might control everything. I’ve argued that this is akin to older fears that, under capitalism, instrumentality would run amok.

Recently, thinking a little more seriously about what’s implied by an economy of control, I’ve been coming around to a quite different conclusion. What if the general tendency of these algorithmic systems is not the enslavement of humanity but rather the opening up of freedom and opportunity? This is not a critical attitude and might be seen as a simple shilling for industrial powers, so let me pose the point slightly more controversially. What if the result of these systems is to provide so much freedom and opportunity that it undermines the structure that makes social action significant? The “control” of these systems could just be the result of our being exposed, at last, to our individual insignificance in the face of each other.

As a foil, I’ll refer again to Frank Pasquale’s The Black Box Society, which I’ve begun to read again at the prompting of Pasquale himself. It is a rare and wonderful thing for the author of a book you’ve written rude things about to write you and tell you you’ve misrepresented the work. So often I assume nobody’s actually reading what I write, making this a lonely vocation indeed. Now I know that at least somebody gives a damn.

In Chapter 3, Pasquale writes:

“The power to include, exclude, and rank [in search results] is the power to ensure which public impressions become permanent and which remain fleeting. That is why search services, social and not, are ‘must-have’ properties for advertisers as well as users. As such, they have made very deep inroads indeed into the sphere of cultural, economic, and political influence that was once dominated by broadcast networks, radio stations, and newspapers. But their dominance is so complete, and their technology so complex, that they have escaped pressures for transparency and accountability that kept traditional media answerable to the public.”

As a continuation of the “technics-out-of-control” meme, there’s an intuitive thrust to this argument. But looking at the literal meaning of the sentences, none of it is actually true!

Let’s look at some of the reasons why these claims are false:

  • There are multiple competing search engines, and switching costs are very low. There are Google and Bing and Duck Duck Go, but there are also more specialized search engines for particular kinds of things. Literally every branded shopping website has a search engine that includes only what it chooses to include. This market pressure drives search engines generally to provide people with the answers they are looking for.
  • While there is a certain amount of curation that goes into search results, the famous early ranking logic that made large-scale search possible used mainly data created as part of the content itself (hyperlinks, in the case of Google’s PageRank) or of its usage (engagement, in the case of Facebook’s EdgeRank); a minimal sketch of the hyperlink-based idea appears after this list. To the extent that these algorithms have changed, much of it has been because they have had to cave to public pressure, in the form of market pressure. Many of these changes are based on dynamic, socially created data as well (such as spam flagging). Far from being manipulated by a secret powerful force, search engine results are always a dynamic social accomplishment that reflects the public.
  • Alternative media forms, such as broadcast radio, print journalism, cable television, storefront advertising, and so on still exist and have an influence over people’s decisions. No single digital technology ensures anything! A new restaurant that opens up in a neighborhood is free to gain a local reputation in the old-fashioned way. And then these same systems for ranking and search incentivize the discovery of these local gems by design. The information economy doesn’t waste opportunities like this!
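
As promised above, here is a minimal sketch of the hyperlink-based ranking idea: a toy power-iteration version of PageRank over a made-up four-page link graph, not Google’s production algorithm.

```python
# Toy power-iteration PageRank over a hypothetical four-page link graph.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

damping = 0.85
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):
    # Each page redistributes its current rank evenly across its outlinks.
    new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += damping * share
    rank = new_rank

# Pages with more (and better-ranked) inbound links float to the top.
for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))
```

The point of the sketch is that the ranking signal comes from the link structure the public itself creates, not from an editor’s hand-picked ordering.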

So what’s the problem? If algorithms aren’t controlling society, but rather are facilitating its self-awareness, maybe these kinds of polemics are just way off base.


by Sebastian Benthall at January 19, 2017 05:11 AM

January 17, 2017

Ph.D. student

economy of control

We call it a “crisis” when the predictions of our trusted elites are violated in one way or another. We expect, for good reason, things to more or less continue as they are. They’ve evolved to be this way, haven’t they? The older the institution, the more robust to change it must be.

I’ve gotten comfortable in my short life with the global institutions that appeared to be the apex of societal organization. Under these conditions, I found James Beniger‘s work to be particularly appealing, as it predicts the growth of information processing apparati (some combination of information worker and information technology) as formerly independent components of society integrate. I’m of the class of people that benefits from this kind of centralization of control, so I was happy to believe that this was an inevitable outcome according to physical law.

Now I’m not so sure.

I am not sure I’ve really changed my mind fundamentally. This extreme Beniger view is too much like Nick Bostrom’s superintelligence argument in form, and I’ve already thought hard about why that argument is not good. That reasoning stopped at the point of noting how superintelligence “takeoff” is limited by data collection. But I did not go to the next and probably more important step, which is the problem of aleatoric uncertainty in a world with multiple agents. We’re far more likely to get into a situation with multi-polar large intelligences that are themselves fraught with principal-agent problems, because that’s actually the status quo.

I’ve been prodded to revisit The Black Box Society, which I’ve dealt with inadequately. Its beefier chapters deal with a lot of the specific economic and regulatory recent history of the information economy of the United States, which is a good complement to Beniger and a good resource for the study of competing intelligences within a single economy, though I find this material a bit clouded by the polemical writing.

“Economy” is the key word here. Pure, Arendtian politics and technics have not blended easily, but what they’ve turned into is a self-regulatory system with structure and agency. More than that, the structure is for sale, and so is the agency. What is interesting about the information economy, and I guess I’m trying to coin a phrase here, is that it is an economy of control. The “good” being produced, sold, and bought is control.

There’s a lot of interesting research about information goods, but I’ve never heard of a “control good”. Yet this is what we are talking about when we talk about software, data collection, managerial labor, and the conflicts and compromises they create.

I have a few intuitions about where this goes, but not as many as I’d like. I think this is because the economy of control is quite messy and hard to reason about.


by Sebastian Benthall at January 17, 2017 12:10 AM

January 13, 2017

Ph.D. student

habitus and citizenship

Just a quick thought… So in Bourdieu’s Science of Science and Reflexivity, he describes the habitus of the scientist. Being a scientist demands a certain adherence to the rules of the scientific game, certain training, etc. He winds up constructing a sociological explanation for the epistemic authority of science. The rules of the game are the conditions for objectivity.

When I was working on a now defunct dissertation, I was comparing this formulation of science with a formulation of democracy and the way it depends on publics. Habermasian publics, Fraserian publics, you get the idea. Within this theory, what was once a robust theory of collective rationality as the basis for democracy has deteriorated under what might be broadly construed as “postmodern” critiques of this rationality. One could argue that pluralistic multiculturalism, not collective reason, became the primary ideology for American democracy in the past eight years.

Pretty sure this backfired with e.g. the Alt-Right.

So what now? I propose that those interested in functioning democracy reconsider the habitus of citizenship and how it can be maintained through the education system and other civic institutions. It’s a bit old-school. But if the Alt-Right wanted a reversion to historical authoritarian forms of Western governance, we may be getting there. Suppose history moves in a spiral. It might be best to try to move forward, not back.


by Sebastian Benthall at January 13, 2017 12:29 AM

January 10, 2017

Ph.D. student

Loving Tetlock’s Superforecasting: The Art and Science of Prediction

I was a big fan of Philip Tetlock’s Expert Political Judgment (EPJ). I read it thoroughly; in fact a book review of it was my first academic publication. It was very influential on me.

EPJ is a book that is troubling to many political experts because it basically says that most so-called political expertise is bogus and that what isn’t bogus is fairly limited. It makes this argument with far more meticulous data collection and argumentation than I am able to do justice to here. I found it completely persuasive and inspiring. It wasn’t until I got to Berkeley that I met people who had vivid negative emotional reactions to this work. They seem to mainly have been political experts who did not like having their expertise assessed in terms of its predictive power.

Superforecasting: The Art and Science of Prediction (2016) is a much more accessible book that summarizes the main points from EPJ and then discusses the results of Tetlock’s Good Judgment Project, which was his answer to an IARPA challenge in forecasting political events.

Much of the book is an interesting history of the United States Intelligence Community (IC) and the way its attitudes towards political forecasting have evolved. In particular, the shock of the failure of the predictions about Weapons of Mass Destruction that led to the Iraq War was a direct cause of IARPA’s interest in forecasting and their funding of the Good Judgment Project, despite the possibility that the project’s results would be politically challenging. IARPA comes out looking like a very interesting and intellectually honest organization solving real problems for the people of the United States.

Reading this has been timely for me because: (a) I’m now doing what could be broadly construed as “cybersecurity” work, professionally, (b) my funding is coming from U.S. military and intelligence organizations, and (c) the relationship between U.S. intelligence organizations and cybersecurity has been in the news a lot lately in a very politicized way because of the DNC hacking aftermath.

Since so much of Tetlock’s work is really just about applying mathematical statistics to the psychological and sociological problem of developing teams of forecasters, I see the root of it as the same mathematical theory one would use for any scientific inference. Cybersecurity research, to the extent that it uses sound scientific principles (which it must, since it’s all about the interaction between society, scientifically designed technology, and risk), is grounded in these same principles. And at its best the U.S. intelligence community lives up to this logic in its public service.

The needs of the intelligence community with respect to cybersecurity can be summed up in one word: rationality. Tetlock’s work is a wonderful empirical study in rationality that’s a must-read for anybody interested in cybersecurity policy today.


by Sebastian Benthall at January 10, 2017 10:54 PM

Ph.D. alumna

Why America is Self-Segregating

The United States has always been a diverse but segregated country. This has shaped American politics profoundly. Yet, throughout history, Americans have had to grapple with divergent views and opinions, political ideologies, and experiences in order to function as a country. Many of the institutions that underpin American democracy force people in the United States to encounter difference. This does not inherently produce tolerance or result in healthy resolution. Hell, the history of the United States is fraught with countless examples of people enslaving and oppressing other people on the basis of difference. This isn’t about our past; this is about our present. And today’s battles over laws and culture are nothing new.

Ironically, in a world in which we have countless tools to connect, we are also watching fragmentation, polarization, and de-diversification happen en masse. The American public is self-segregating, and this is tearing at the social fabric of the country.

Many in the tech world imagined that the Internet would connect people in unprecedented ways, allow for divisions to be bridged and wounds to heal. It was the kumbaya dream. Today, those same dreamers find it quite unsettling to watch as the tools that were designed to bring people together are used by people to magnify divisions and undermine social solidarity. These tools were built in a bubble, and that bubble has burst.

Nowhere is this more acute than with Facebook. Naive as hell, Mark Zuckerberg dreamed he could build the tools that would connect people at unprecedented scale, both domestically and internationally. I actually feel bad for him as he clings to that hope while facing increasing attacks from people around the world about the role that Facebook is playing in magnifying social divisions. Although critics love to paint him as only motivated by money, he genuinely wants to make the world a better place and sees Facebook as a tool to connect people, not empower them to self-segregate.

The problem is not simply the “filter bubble,” Eli Pariser’s notion that personalization-driven algorithmic systems help silo people into segregated content streams. Facebook’s claim that content personalization plays a small role in shaping what people see compared to their own choices is accurate. And they have every right to be annoyed. I couldn’t imagine TimeWarner being blamed for who watches Duck Dynasty vs. Modern Family. And yet, what Facebook does do is mirror and magnify a trend that’s been unfolding in the United States for the last twenty years, a trend of self-segregation that is enabled by technology in all sorts of complicated ways.

The United States can only function as a healthy democracy if we find a healthy way to diversify our social connections, if we find a way to weave together a strong social fabric that bridges ties across difference.

Yet, we are moving in the opposite direction with serious consequences. To understand this, let’s talk about two contemporary trend lines and then think about the implications going forward.

Privatizing the Military

The voluntary US military is, in many ways, a social engineering project. The public understands the military as a service organization, dedicated to protecting the country’s interests. Yet, when recruits sign up, they are promised training and job opportunities. Individual motivations vary tremendously, but many are enticed by the opportunity to travel the world, participate in a cause with a purpose, and get the heck out of dodge. Everyone expects basic training to be physically hard, but few recognize that some of the most grueling aspects of signing up have to do with the diversification project that is central to the formation of the American military.

When a soldier is in combat, she must trust her fellow soldiers with her life. And she must be willing to do what it takes to protect the rest of her unit. In order to make that possible, the military must wage war on prejudice. This is not an easy task. Plenty of generals fought hard to resist racial desegregation and to limit the role of women in combat. Yet, the US military was desegregated in 1948, six years before Brown v. Board forced desegregation of schools. And LGB individuals were allowed to openly serve in the military before they could legally marry.

CC BY 2.0-licensed photo by The U.S. Army.

Morale is often raised as the main reason that soldiers should not be forced to entrust their lives to people who are different than them. Yet, time and again, this justification collapses under broader interests to grow the military. As a result, commanders are forced to find ways to build up morale across difference, to actively and intentionally seek to break down barriers to teamwork, and to find a way to gel a group of people whose demographics, values, politics, and ideologies are as varied as the country’s.

In the process, they build one of the most crucial social infrastructures of the country. They build the diverse social fabric that underpins democracy.

Tons of money was poured into defense after 9/11, but the number of people serving in the US military today is far lower than it was throughout the 1980s. Why? Starting in the 1990s and accelerating after 9/11, the US privatized huge chunks of the military. This means that private contractors and their employees play critical roles in everything from providing food services to equipment maintenance to military housing. The impact of this on the role of the military in society is significant. For example, it undermines recruits’ ability to get training to develop critical skills that will be essential for them in civilian life. Instead, while serving on active duty, they spend much more of their time on the front lines and in high-risk battle, increasing the likelihood that they will be physically or psychologically harmed. The impact on skills development and job opportunities is tremendous, but so is the impact on the diversification of the social fabric.

Private vendors are not engaged in the same social engineering project as the military and, as a result, tend to hire and fire people based on their ability to work effectively as a team. Like many companies, they have little incentive to invest in helping diverse teams learn to work together as effectively as possible. Building diverse teams — especially ones in which members depend on each other for their survival — is extremely hard, time-consuming, and emotionally exhausting. As a result, private companies focus on “culture fit,” emphasize teams that get along, and look for people who already have the necessary skills, all of which helps reinforce existing segregation patterns.

The end result is that, in the last 20 years, we’ve watched one of our major structures for diversification collapse without anyone taking notice. And because of how it’s happened, it’s also connected to job opportunities and economic opportunity for many working- and middle-class individuals, seeding resentment and hatred.

A Self-Segregated College Life

If you ask a college admissions officer at an elite institution to describe how they build a class of incoming freshmen, you will quickly realize that the American college system is a diversification project. Unlike colleges in most parts of the world, the vast majority of freshmen at top-tier universities in the United States live on campus with roommates who are assigned to them. Colleges approach housing assignments as an opportunity to pair diverse strangers with one another to build social ties. This makes sense given how many friendships emerge out of freshman dorms. By pairing middle class kids with students from wealthier families, elite institutions help diversify the elites of the future.

This diversification project produces a tremendous amount of conflict. Although plenty of people adore their college roommates and relish the opportunity to get to know people from different walks of life as part of their college experience, there is an amazing amount of angst about dorm assignments and the troubles that brew once folks try to live together in close quarters. At many universities, residential life is often in the business of student therapy as students complain about their roommates and dormmates. Yet, just like in the military, learning how to negotiate conflict and diversity in close quarters can be tremendously effective in sewing the social fabric.

CC BY-NC-ND 2.0-licensed photo by Ilya Khurosvili.

In the spring of 2006, I was doing fieldwork with teenagers at a time when they had just received acceptances to college. I giggled at how many of them immediately wrote to the college in which they intended to enroll, begging for a campus email address so that they could join that school’s Facebook (before Facebook was broadly available). The year before, I had watched the previous class look up roommate assignments on MySpace, so I was prepared for the fact that they’d use Facebook to do the same. What I wasn’t prepared for was how quickly they would all get on Facebook, map the incoming freshman class, and use this information to ask for a roommate switch. Before they even arrived on campus in August/September of 2006, they had self-segregated as much as possible.

A few years later, I watched another trend hit: cell phones. While these were touted as tools that allowed students to stay connected to parents (which prompted many faculty to complain about “helicopter parents” arriving on campus), they really ended up serving as a crutch to address homesickness, as incoming students focused on maintaining ties to high school friends rather than building new relationships.

Students go to elite universities to “get an education.” Few realize that the true quality product that elite colleges in the US have historically offered is social network diversification. Even when it comes to job acquisition, sociologists have long known that diverse social networks (“weak ties”) are what increase job prospects. By self-segregating on campus, students undermine their own potential while also helping fragment the diversity of the broader social fabric.

Diversity is Hard

Diversity is often touted as highly desirable. Indeed, in professional contexts, we know that more diverse teams often outperform homogeneous teams. Diversity also increases cognitive development, both intellectually and socially. And yet, actually encountering and working through diverse viewpoints, experiences, and perspectives is hard work. It’s uncomfortable. It’s emotionally exhausting. It can be downright frustrating.

Thus, given the opportunity, people typically revert to situations where they can be in homogeneous environments. They look for “safe spaces” and “culture fit.” And systems that are “personalized” are highly desirable. Most people aren’t looking to self-segregate, but they do it anyway. And, increasingly, the technologies and tools around us allow us to self-segregate with ease. Is your uncle annoying you with his political rants? Mute him. Tired of getting ads for irrelevant products? Reveal your preferences. Want your search engine to remember the things that matter to you? Let it capture data. Want to watch a TV show that appeals to your senses? Here are some recommendations.

Any company whose business model is based on advertising revenue and attention is incentivized to engage you by giving you what you want. And what you want in theory is different than what you want in practice.

Consider, for example, what Netflix encountered when it started its streaming service. Users didn’t watch the movies that they had placed into their queue. Those movies were the movies they thought they wanted, movies that reflected their ideal self — 12 Years a Slave, for example. What they watched when they could stream whatever they were in the mood for at that moment was the equivalent of junk food — reruns of Friends, for example. (This completely undid Netflix’s recommendation infrastructure, which had been trained on people’s idealistic self-images.)

The divisions are not just happening through commercialism though. School choice has led people to self-segregate from childhood on up. The structures of American work life mean that fewer people work alongside others from different socioeconomic backgrounds. Our contemporary culture of retail and service labor means that there’s a huge cultural gap between workers and customers with little opportunity to truly get to know one another. Even many religious institutions are increasingly fragmented such that people have fewer interactions across diverse lines. (Just think about how there are now “family services” and “traditional services” which age-segregate.) In so many parts of public, civic, and professional life, we are self-segregating and the opportunities for doing so are increasing every day.

By and large, the American public wants to have strong connections across divisions. They see the value politically and socially. But they’re not going to work for it. And given the option, they’re going to renew their license remotely, try to get out of jury duty, and use available data to seek out housing and schools that are filled with people like them. This is the conundrum we now face.

Many pundits remarked that, during the 2016 election season, very few Americans were regularly exposed to people whose political ideology conflicted with their own. This is true. But it cannot be fixed by Facebook or news media. Exposing people to content that challenges their perspective doesn’t actually make them more empathetic to those values and perspectives. To the contrary, it polarizes them. What makes people willing to hear difference is knowing and trusting people whose worldview differs from their own. Exposure to content cannot make up for self-segregation.

If we want to develop a healthy democracy, we need a diverse and highly connected social fabric. This requires creating contexts in which the American public voluntarily struggles with the challenges of diversity to build bonds that will last a lifetime. We have been systematically undoing this, and the public has used new technological advances to make their lives easier by self-segregating. This has increased polarization, and we’re going to pay a heavy price for this going forward. Rather than focusing on what media enterprises can and should do, we need to focus instead on building new infrastructures for connection where people have a purpose for coming together across divisions. We need that social infrastructure just as much as we need bridges and roads.

This piece was originally published as part of a series on media, accountability, and the public sphere.

by zephoria at January 10, 2017 01:15 PM

January 09, 2017

MIMS 2018

Trump and the Strategy of Irrationality

I wrote this piece in November 2016 and sat on it for a while, unsure whether or not I wanted to publish it. Since then, the Washington Post and the Boston Globe have had great pieces making similar points to the one I made here: that Donald Trump’s unpredictability may, in certain situations, give him leverage in negotiations. The world has changed a lot in these two short months but many points I make here still stand. So please, enjoy.

Source: Wikimedia Commons

Donald Trump is not just the most controversial President-Elect in recent American history — he is also the most unpredictable. His lack of political experience, inconsistent views, and tendency towards outbursts leave even his most ardent supporters unsure of what a President Trump might do in a given situation. Yet counterintuitively, his unpredictability may help him in the international arena.

The reason is a basic tenet of game theory. In a conflict, a person’s bargaining power depends on their perceived willingness to go through with a threat, even at a cost to themselves. If an opponent sees a threatener as irrational, they will also see them as more willing to go through with a costly threat, either because they do not know or do not care about the consequences. Thus, the opponent is more likely to yield.
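To make that tenet concrete, here is a minimal sketch of the expected-cost comparison behind a threat (my own illustration, not from the original piece; the function name and numbers are hypothetical): the opponent concedes when the perceived chance that the threat will be carried out makes defiance costlier than yielding.

```typescript
// Illustrative only: a threat wins a concession when the opponent's expected
// cost of defying it exceeds the cost of simply yielding.
//
//   p          - opponent's perceived probability that the threat is carried out
//   costThreat - damage to the opponent if the threat is executed
//   costYield  - what the opponent gives up by conceding
function opponentYields(p: number, costThreat: number, costYield: number): boolean {
  return p * costThreat > costYield;
}

// A predictable negotiator who would clearly never follow through (low p)
// cannot make credible the same threat that a perceived "madman" (higher p) can.
console.log(opponentYields(0.05, 100, 20)); // false: the threat is dismissed
console.log(opponentYields(0.40, 100, 20)); // true: perceived willingness pays off
```

The point of the sketch is simply that perceived irrationality raises p, and with it the bargaining power of an otherwise identical threat.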

This is where the irrationality of Trump shines.

For example, he may have an advantage over traditional politicians in renegotiating foreign trade deals because he is viewed as unstable enough to scrap them, even if it would hurt the American economy. A politician who has shown more nuanced views of America’s trade relations and economic interests would not have this same leverage.

Thomas Schelling. Source: Harvard Gazette

This strategy of irrationality is not new. It was popularized in 1960 by the Nobel Prize-winning economist Thomas Schelling in his book The Strategy of Conflict. It was used during the Cold War by both American presidents and Soviet general secretaries. Even Voltaire said, “Be sure that your madness corresponds with the turn and temper of your age…and forget not to be excessively opinionated and obstinate.”

Of all the US presidents, Richard Nixon put the most faith in what he called the “madman strategy.” He tried to appear “mad” enough to use nuclear weapons in order to bring North Vietnam to the negotiation table. In a private conversation, Nixon told his Chief of Staff the following:

I want the North Vietnamese to believe I’ve reached the point where I might do anything to stop the war. We’ll just slip the word to them that “for God’s sake, you know Nixon is obsessed about Communism. We can’t restrain him when he’s angry — and he has his hand on the nuclear button.”

After four years, Nixon’s “madman strategy” failed to end the war. He could only apply it intermittently; his “madness” for flying planes strapped with nuclear weapons over North Vietnam was tempered by his sanity in negotiations with the Soviet Union and China. Additionally, the repercussions of using nuclear weapons were so drastic that it was difficult to convince anyone he was willing to use them, especially after the Soviet Union achieved nuclear parity with the US.

President Richard Nixon. Source: Flickr

President Trump may have more success in applying the “madman strategy” because many people already see him as mad. Unlike Nixon, who tried to shift his perception from sane to insane, Trump has cultivated his unstable persona over almost a year and a half of campaigning and decades in the public eye. His perceived lack of knowledge regarding everything political may also cause opponents to see him as incapable of making rational decisions.

The strategy of irrationality is contingent on a number of assumptions. It assumes a somewhat rational opponent and a centralized decision-making authority, neither of which applies to America’s most virulent enemy, ISIS. It also assumes a medium of communication to send threats over, which may be more difficult in dealings with countries with whom the US lacks diplomatic relations, like Iran and North Korea.

The utility of the strategy of irrationality is further complicated by the fact that most relationships the United States has with other countries are simultaneously oppositional and collaborative. For example, President Trump may consider France an opponent in environmental and NATO negotiations but an ally in trading. His perceived instability could give him leverage in negotiations but harm mutually beneficial relations with France.

The strategy also depends on whether President Trump is as unpredictable as candidate Trump. President-Elect Trump has already backed off from some of his more outlandish campaign trail promises. Global views of Trump are constantly shifting, especially as news comes out about his cabinet, and a method to his madness may become apparent as he makes more executive decisions.

The unpredictability of Donald Trump has brought about sleepless nights for many Americans. His perceived irrationality may damage allegiances within and without the country, but it may also give him leverage in future international conflicts. Donald Trump has always said he is a dealmaker and he might just be crazy enough to be right.

by Gabe Nicholas at January 09, 2017 04:50 PM

Ph.D. alumna

Did Media Literacy Backfire?

Anxious about the widespread consumption and spread of propaganda and fake news during this year’s election cycle, many progressives are calling for an increased commitment to media literacy programs. Others are clamoring for solutions that focus on expert fact-checking and labeling. Both of these approaches are likely to fail — not because they are bad ideas, but because they fail to take into consideration the cultural context of information consumption that we’ve created over the last thirty years. The problem on our hands is a lot bigger than most folks appreciate.

CC BY 2.0-licensed photo by CEA+ | Artist: Nam June Paik, “Electronic Superhighway. Continental US, Alaska & Hawaii” (1995).

What Are Your Sources?

I remember a casual conversation that I had with a teen girl in the midwest while I was doing research. I knew her school approached sex ed through an abstinence-only education approach, but I don’t remember how the topic of pregnancy came up. What I do remember is her telling me that she and her friends talked a lot about pregnancy and “diseases” she could get through sex. As I probed further, she matter-of-factly explained a variety of “facts” she had heard that were completely inaccurate. You couldn’t get pregnant until you were 16. AIDS spreads through kissing. Etc. I asked her if she’d talked to her doctor about any of this, and she looked at me as though I had horns. She explained that she and her friends had done the research themselves, by which she meant that they’d identified websites online that “proved” their beliefs.

For years, that casual conversation has stuck with me as one of the reasons that we needed better Internet-based media literacy. As I detailed in my book It’s Complicated: The Social Lives of Networked Teens, too many students I met were being told that Wikipedia was untrustworthy and were, instead, being encouraged to do research. As a result, the message that many had taken home was to turn to Google and use whatever came up first. They heard that Google was trustworthy and Wikipedia was not.

Understanding what sources to trust is a basic tenet of media literacy education. When educators encourage students to focus on sourcing quality information, they encourage them to critically ask who is publishing the content. Is the venue a respected outlet? What biases might the author have? The underlying assumption in all of this is that there’s universal agreement that major news outlets like the New York Times, scientific journal publications, and experts with advanced degrees are all highly trustworthy.

Think about how this might play out in communities where the “liberal media” is viewed with disdain as an untrustworthy source of information…or in those where science is seen as contradicting the knowledge of religious people…or where degrees are viewed as a weapon of the elite to justify oppression of working people. Needless to say, not everyone agrees on what makes a trusted source.

Students are also encouraged to reflect on economic and political incentives that might bias reporting. Follow the money, they are told. Now watch what happens when they are given a list of names of major power players in the East Coast news media whose names are all clearly Jewish. Welcome to an opening for anti-Semitic ideology.

Empowered Individuals…with Guns

We’ve been telling young people that they are the smartest snowflakes in the world. From the self-esteem movement in the 1980s to the normative logic of contemporary parenting, young people are told that they are lovable and capable and that they should trust their gut to make wise decisions. This sets them up for another great American ideal: personal responsibility.

In the United States, we believe that worthy people lift themselves up by their bootstraps. This is our idea of freedom. What it means in practice is that every individual is supposed to understand finance so well that they can effectively manage their own retirement funds. And every individual is expected to understand their health risks well enough to make their own decisions about insurance. To take away the power of individuals to control their own destiny is viewed as anti-American by so much of this country. You are your own master.

Children are indoctrinated into this cultural logic early, even as their parents restrict their mobility and limit their access to social situations. But when it comes to information, they are taught that they are the sole proprietors of knowledge. All they have to do is “do the research” for themselves and they will know better than anyone what is real.

Combine this with a deep distrust of media sources. If the media is reporting on something, and you don’t trust the media, then it is your responsibility to question their authority, to doubt the information you are being given. If they expend tremendous effort bringing on “experts” to argue that something is false, there must be something there to investigate.

Now think about what this means for #Pizzagate. Across this country, major news outlets went to great effort to challenge conspiracy reports that linked John Podesta and Hillary Clinton to a child trafficking ring supposedly run out of a pizza shop in Washington, DC. Most people never heard the conspiracy stories, but their ears perked up when the mainstream press went nuts trying to debunk these stories. For many people who distrust “liberal” media and were already primed not to trust Clinton, the abundant reporting suggested that there was something to investigate.

Most people who showed up to the Comet Ping Pong pizzeria to see for their own eyes went undetected. But then a guy with a gun decided he “wanted to do some good” and “rescue the children.” He was the first to admit that “the intel wasn’t 100%,” but what he was doing was something that we’ve taught people to do — question the information they’re receiving and find out the truth for themselves.

Experience Over Expertise

Many marginalized groups are justifiably angry about the ways in which their stories have been dismissed by mainstream media for decades. This is most acutely felt in communities of color. And this isn’t just about the past. It took five days for major news outlets to cover Ferguson. It took months and a lot of celebrities for journalists to start discussing the Dakota Pipeline. But feeling marginalized from news media isn’t just about people of color. For many Americans who have watched their local newspaper disappear, major urban news reporting appears disconnected from reality. The issues and topics that they feel affect their lives are often ignored.

For decades, civil rights leaders have been arguing for the importance of respecting experience over expertise, highlighting the need to hear the voices of people of color who are so often ignored by experts. This message has taken hold more broadly, particularly among lower and middle class whites who feel as though they are ignored by the establishment. Whites also want their experiences to be recognized, and they too have been pushing for the need to understand and respect the experiences of “the common man.” They see “liberal” “urban” “coastal” news outlets as antithetical to their interests because they quote from experts, use cleaned-up pundits to debate issues, and turn everyday people (e.g., “red sweater guy”) into spectacles for mass enjoyment.

Consider what’s happening in medicine. Many people used to have a family doctor whom they knew for decades and trusted as individuals even more than as experts. Today, many people see doctors as arrogant and condescending, overly expensive and inattentive to their needs. Doctors lack the time to spend more than a few minutes with patients, and many people doubt that the treatment they’re getting is in their best interest. People feel duped into paying obscene costs for procedures that they don’t understand. Many economists can’t understand why so many people are against the Affordable Care Act; they don’t recognize that this “socialized” medicine is perceived as putting experts over experience by people who trust politicians telling them what’s in their best interest no more than they trust doctors. And public trust in doctors is declining sharply.

Why should we be surprised that most people are getting medical information from their personal social network and the Internet? It’s a lot cheaper than seeing a doctor, and both friends and strangers on the Internet are willing to listen, empathize, and compare notes. Why trust experts when you have at your fingertips a crowd of knowledgeable people who may have had the same experience as you and can help you out?

Consider this dynamic in light of discussions around autism and vaccinations. First, an expert-produced journal article was published linking autism to vaccinations. This resonated with many parents’ experience. Then, other experts debunked the first report, challenged the motivations of the researcher, and engaged in a mainstream media campaign to “prove” that there was no link. What unfolded felt like a war on experience, and a network of parents coordinated to counter this new batch of experts who were widely seen as ignorant, moneyed, and condescending. The more that the media focused on waving away these networks of parents through scientific language, the more the public felt sympathetic to the arguments being made by anti-vaxxers.

Keep in mind that anti-vaxxers aren’t arguing that vaccinations definitively cause autism. They are arguing that we don’t know. They are arguing that experts are forcing children to be vaccinated against their will, which sounds like oppression. What they want is choice — the choice to not vaccinate. And they want information about the risks of vaccination, which they feel are not being given to them. In essence, they are doing what we taught them to do: questioning information sources and raising doubts about the incentives of those who are pushing a single message. Doubt has become a tool.

Grappling with “Fake News”

Since the election, everyone has been obsessed with fake news, as experts blame “stupid” people for not understanding what is “real.” The solutionism around this has been condescending at best. More experts are needed to label fake content. More media literacy is needed to teach people how not to be duped. And if we just push Facebook to curb the spread of fake news, all will be solved.

I can’t help but laugh at the irony of folks screaming up and down about fake news and pointing to the story about how the Pope backs Trump. The reason so many progressives know this story is because it was spread wildly among liberal circles who were citing it as appalling and fake. From what I can gather, it seems as though liberals were far more likely to spread this story than conservatives. What more could you want if you ran a fake news site whose goal was to make money by getting people to spread misinformation? Getting doubters to click on clickbait is far more profitable than getting believers because they’re far more likely to spread the content in an effort to dispel it. Win!

CC BY 2.0-licensed photo by Denis Dervisevic.

People believe in information that confirms their priors. In fact, if you present them with data that contradicts their beliefs, they will double down on their beliefs rather than integrate the new knowledge into their understanding. This is why first impressions matter. It’s also why asking Facebook to show content that contradicts people’s views will not only increase their hatred of Facebook but increase polarization among the network. And it’s precisely why so many liberals spread “fake news” stories in ways that reinforce their belief that Trump supporters are stupid and backwards.

Labeling the Pope story as fake wouldn’t have stopped people from believing that story if they were conditioned to believe it. Let’s not forget that the public may find Facebook valuable, but it doesn’t necessarily trust the company. So their “expertise” doesn’t mean squat to most people. Of course, it would be an interesting experiment to run; I do wonder how many liberals wouldn’t have forwarded it along if it had been clearly identified as fake. Would they have not felt the need to warn everyone in their network that conservatives were insane? Would they have not helped fuel a money-making fake news machine? Maybe.

But I suspect that labeling would reinforce polarization, even though it would feel like something was done. Nonbelievers would use the label to reinforce their view that the information is fake (and minimize the spread, which is probably a good thing), while believers would simply ignore the label. But does that really get us to where we want to go?

Addressing so-called fake news is going to require a lot more than labeling. It’s going to require a cultural change about how we make sense of information, whom we trust, and how we understand our own role in grappling with information. Quick and easy solutions may make the controversy go away, but they won’t address the underlying problems.

What Is Truth?

As a huge proponent for media literacy for over a decade, I’m struggling with the ways in which I missed the mark. The reality is that my assumptions and beliefs do not align with those of most Americans. Because of my privilege as a scholar, I get to see how expert knowledge and information is produced and have a deep respect for the strengths and limitations of scientific inquiry. Surrounded by journalists and people working to distribute information, I get to see how incentives shape information production and dissemination and the fault lines of that process. I believe that information intermediaries are important, that honed expertise matters, and that no one can ever be fully informed. As a result, I have long believed that we have to outsource certain matters and to trust others to do right by us as individuals and society as a whole. This is what it means to live in a democracy, but, more importantly, it’s what it means to live in a society.

In the United States, we’re moving towards tribalism, and we’re undoing the social fabric of our country through polarization, distrust, and self-segregation. And whether we like it or not, our culture of doubt and critique, experience over expertise, and personal responsibility is pushing us further down this path.

Media literacy asks people to raise questions and be wary of information that they’re receiving. People are. Unfortunately, that’s exactly why we’re talking past one another.

The path forward is hazy. We need to enable people to hear different perspectives and make sense of a very complicated — and in many ways, overwhelming — information landscape. We cannot fall back on standard educational approaches because the societal context has shifted. We also cannot simply assume that information intermediaries can fix the problem for us, whether they be traditional news media or social media. We need to get creative and build the social infrastructure necessary for people to meaningfully and substantively engage across existing structural lines. This won’t be easy or quick, but if we want to address issues like propaganda, hate speech, fake news, and biased content, we need to focus on the underlying issues at play. No simple band-aid will work.


Special thanks to Amanda Lenhart, Claire Fontaine, Mary Madden, and Monica Bulger for their feedback!

This post was first published as part of a series on media, accountability, and the public sphere.

by zephoria at January 09, 2017 01:13 PM

January 08, 2017

MIMS 2012

Sol LeWitt - Wall Drawing

I recently saw Sol LeWitt’s Wall Drawing #273 at the SF MOMA, and it really stayed with me after I left the museum. In particular, I like that it wasn’t drawn by the artist himself, but rather he wrote instructions for draftspeople to draw this piece directly on the walls of the museum, thus embracing some amount of variability. From the museum’s description:

As his works are executed over and over again in different locations, they expand or contract according to the dimensions of the space in which they are displayed and respond to ambient light and the surfaces on which they are drawn. In some instances, as in this work, those involved in the installation make decisions impacting the final composition.

Sol LeWitt’s Wall Drawing #273

This embrace of variability reminds me of the web. People browse the web on different devices that have different sizes and capabilities. We can’t control how people will experience our websites. Since LeWitt left instructions for creating his pieces, I realized I could translate those instructions into code, and embrace the variability of the web in the process. The result is this CodePen.

See the Pen Sol LeWitt – Wall Drawing #273 by Jeff (@jlzych) on CodePen.

LeWitt left the following instructions:

A six-inch (15 cm) grid covering the walls. Lines from corners, sides, and center of the walls to random points on the grid.

1st wall: Red lines from the midpoints of four sides;

2nd wall: Blue lines from four corners;

3rd wall: Yellow lines from the center;

4th wall: Red lines from the midpoints of four sides, blue lines from four corners;

5th wall: Red lines from the midpoints of four sides, yellow lines from the center;

6th wall: Blue lines from four corners, yellow lines from the center;

7th wall: Red lines from the midpoints of four sides, blue lines from four corners, yellow lines from the center.

Each wall has an equal number of lines. (The number of lines and their length are determined by the draftsman.)

As indicated in the instructions, there are 7 separate walls with an equal number of lines, the number and length of which are determined by the draftsperson. To simulate the decisions the draftspeople make, I included controls to let people set how many lines should be drawn, and toggle which walls to see. I let each color be toggleable, as opposed to listing out walls 1-7, since each wall is just a different combination of the red, blue, and yellow lines.
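To give a flavor of what the translation looks like, here is a minimal browser sketch in TypeScript (not the actual CodePen source; the canvas id, pixel grid size, and line count are my own assumptions). It draws a single wall: the grid covering the canvas, plus the “red lines from the midpoints of four sides” rule.

```typescript
// Minimal sketch, assuming an HTML page with <canvas id="wall" width="600" height="400">.
const canvas = document.getElementById("wall") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;

const cell = 20; // pixels per grid square, standing in for the six-inch grid
const cols = Math.floor(canvas.width / cell);
const rows = Math.floor(canvas.height / cell);

// The grid covering the wall.
ctx.strokeStyle = "#ddd";
for (let c = 0; c <= cols; c++) {
  ctx.beginPath();
  ctx.moveTo(c * cell, 0);
  ctx.lineTo(c * cell, rows * cell);
  ctx.stroke();
}
for (let r = 0; r <= rows; r++) {
  ctx.beginPath();
  ctx.moveTo(0, r * cell);
  ctx.lineTo(cols * cell, r * cell);
  ctx.stroke();
}

// A random point on the grid.
function randomGridPoint(): [number, number] {
  return [
    Math.floor(Math.random() * (cols + 1)) * cell,
    Math.floor(Math.random() * (rows + 1)) * cell,
  ];
}

// "1st wall": red lines from the midpoints of the four sides to random grid points.
const midpoints: [number, number][] = [
  [canvas.width / 2, 0],
  [canvas.width / 2, canvas.height],
  [0, canvas.height / 2],
  [canvas.width, canvas.height / 2],
];

const linesPerWall = 40; // the draftsperson's decision
ctx.strokeStyle = "red";
for (let i = 0; i < linesPerWall; i++) {
  const [sx, sy] = midpoints[i % midpoints.length];
  const [tx, ty] = randomGridPoint();
  ctx.beginPath();
  ctx.moveTo(sx, sy);
  ctx.lineTo(tx, ty);
  ctx.stroke();
}
```

The blue and yellow variants follow the same pattern with different starting points (the four corners, or the center) and stroke colors, so toggling walls amounts to choosing which of those loops run.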

The end result fits right in with how human draftspeople have turned these instructions into art. The most notable difference I see between a human and a program is the degree of randomness in the final drawing. From comparing the output of the program to versions done by people, the ones drawn by people seem less “random.” I get the sense that people have a tendency to more evenly distribute the lines to points throughout the grid, whereas the program can create clusters and lines that are really close to each other which a person would consider unappealing and not draw.

It makes me wonder how LeWitt would respond to programmatic versions of his art. Is he okay with computers making art? Were his instructions specifically for people, or would he have embraced using machines to generate his work had the technology existed in his time? How “random” did he want people to make these drawings? Does he like that a program is more “random,” or did he expect and want people to make his wall drawings in a way that they would find visually pleasing? We’ll never know, but it was fun to interpret his work through the lens of today’s technology.

by Jeff Zych at January 08, 2017 11:10 PM

January 06, 2017

Ph.D. alumna

Hacking the Attention Economy

For most non-technical folks, “hacking” evokes the notion of using sophisticated technical skills to break through the security of a corporate or government system for illicit purposes. Of course, most folks who were engaged in cracking security systems weren’t necessarily in it for espionage and cruelty. In the 1990s, I grew up among teenage hackers who wanted to break into the computer systems of major institutions that were part of the security establishment, just to show that they could. The goal here was to feel a sense of power in a world where they felt pretty powerless. The rush was in being able to do something and feel smarter than the so-called powerful. It was fun and games. At least until they started getting arrested.

Hacking has always been about leveraging skills to push the boundaries of systems. Keep in mind that one early definition of a hacker (from the Jargon File) was “A person who enjoys learning the details of programming systems and how to stretch their capabilities, as opposed to most users who prefer to learn only the minimum necessary.” In another early definition (RFC:1392), a hacker is defined as “A person who delights in having an intimate understanding of the internal workings of a system, computers and computer networks in particular.” Both of these definitions highlight something important: violating the security of a technical system isn’t necessarily the primary objective.

Indeed, over the last 15 years, I’ve watched as countless hacker-minded folks have started leveraging a mix of technical and social engineering skills to reconfigure networks of power. Some are in it for the fun. Some see dollar signs. Some have a much more ideological agenda. But above all, what’s fascinating is how many people have learned to play the game. And in some worlds, those skills are coming home to roost in unexpected ways, especially as groups are seeking to mess with information intermediaries in an effort to hack the attention economy.

CC BY-NC 2.0-licensed photo by artgraff.

It all began with memes… (and porn…)

In 2003, a 15-year-old named Chris Poole started 4chan, an image board site based on a Japanese trend. His goal was not political. Rather, like many of his male teenage peers, he simply wanted a place to share pornography and anime. But as his site’s popularity grew, he ran into a different problem — he couldn’t manage the traffic while storing all of the content. So he decided to delete older content as newer content came in. Users were frustrated that their favorite images disappeared so they reposted them, often with slight modifications. This gave birth to a phenomenon now understood as “meme culture.” Lolcats are an example. These are images of cats captioned with a specific font and a consistent grammar for entertainment.

Those who produced meme-like images quickly realized that they could spread like wildfire thanks to new types of social media (as well as older tools like blogging). People began producing memes just for fun. But for a group of hacker-minded teenagers who were born a decade after I was, a new practice emerged. Rather than trying to hack the security infrastructure, they wanted to attack the emergent attention economy. They wanted to show that they could manipulate the media narrative, just to show that they could. This was happening at a moment when social media sites were skyrocketing, YouTube and blogs were challenging mainstream media, and pundits were pushing the idea that anyone could control the narrative by being their own media channel. Hell, “You” was TIME Magazine’s person of the year in 2006.

Taking a humorous approach, campaigns emerged within 4chan to “hack” mainstream media. For example, many inside 4chan felt that widespread anxieties about pedophilia were exaggerated and sensationalized. They decided to target Oprah Winfrey, who, they felt, was amplifying this fear-mongering. Trolling her online message board, they got her to talk on live TV about how “over 9,000 penises” were raping children. Humored by this success, they then created a broader campaign around a fake character known as Pedobear. In a different campaign, 4chan “b-tards” focused on gaming the TIME 100 list of “the world’s most influential people” by arranging it such that the first letter of each name on the list spelled out “Marblecake also the game,” which is a known in-joke in this community. Many other campaigns emerged to troll major media and other cultural leaders. And frankly, it was hard not to laugh when everyone started scratching their heads about why Rick Astley’s 1987 song “Never Gonna Give You Up” suddenly became a phenomenon again.

By engaging in these campaigns, participants learned how to shape information within a networked ecosystem. They learned how to design information for it to spread across social media.

They also learned how to game social media, manipulate its algorithms, and mess with the incentive structure of both old and new media enterprises. They weren’t alone. I watched teenagers throw brand names and Buzzfeed links into their Facebook posts to increase the likelihood that their friends would see their posts in their News Feed. Consultants started working for companies to produce catchy content that would get traction and clicks. Justin Bieber fans ran campaign after campaign to keep Bieber-related topics in Twitter Trending Topics. And the activist group Invisible Children leveraged knowledge of how social media worked to architect the #Kony2012 campaign. All of this was seen as legitimate “social media marketing,” making it hard to detect where the boundaries were between those who were hacking for fun and those who were hacking for profit or other “serious” ends.

Running campaigns to shape what the public could see was nothing new, but social media created new pathways for people and organizations to get information out to wide audiences. Marketers discussed it as the future of marketing. Activists talked about it as the next frontier for activism. Political consultants talked about it as the future of political campaigns. And a new form of propaganda emerged.

The political side to the lulz

In her phenomenal account of Anonymous — “Hacker, Hoaxer, Whistleblower, Spy” — Gabriella Coleman describes the interplay between different networks of people playing similar hacker-esque games for different motivations. She describes the goofy nature of those “Anons” who created a campaign to expose Scientology, which many believed to be a farcical religion with too much power and political sway. But she also highlights how the issues became more political and serious as WikiLeaks emerged, law enforcement started going after hackers, and the Arab Spring began.

CC BY-SA 3.0-licensed photo by Essam Sharaf via Wikimedia Commons.

Anonymous was birthed out of 4chan, but because of the emergent ideological agendas of many Anons, the norms and tactics started shifting. Some folks were in it for fun and games, but the “lulz” started getting darker and those seeking vigilante justice started using techniques like “doxing” to expose people who were seen as deserving of punishment. Targets changed over time, showcasing the divergent political agendas in play.

Perhaps the most notable turn involved “#GamerGate” when issues of sexism in the gaming industry emerged into a campaign of harassment targeted at a group of women. Doxing began being used to enable “swatting” — in which false reports called in by perpetrators would result in SWAT teams sent to targets’ homes. The strategies and tactics that had been used to enable decentralized but coordinated campaigns were now being used by those seeking to use the tools of media and attention to do serious reputational, psychological, economic, and social harm to targets. Although 4chan had long been an “anything goes” environment (with notable exceptions), #GamerGate became taboo there for stepping over the lines.

As #GamerGate unfolded, men’s rights activists began using the situation to push forward a long-standing political agenda to counter feminist ideology, pushing for #GamerGate to be framed as a serious debate as opposed to being seen as a campaign of hate and harassment. In some ways, the resultant media campaign was quite successful: major conferences and journalistic enterprises felt the need to “hear both sides” as though there was a debate unfolding. Watching this, I couldn’t help but think of the work of Frank Luntz, a remarkably effective conservative political consultant known for reframing issues using politicized language.

As doxing and swatting have become more commonplace, another type of harassment also started to emerge en masse: gaslighting. The term comes from “Gaslight,” a 1944 film starring Ingrid Bergman (based on a 1938 play). The film depicts psychological abuse in a domestic violence context, where the victim starts to doubt reality because of the various actions of the abuser. It is a form of psychological warfare that can work tremendously well in an information ecosystem, especially one where it’s possible to put up information in a distributed way to make it very unclear what is legitimate, what is fake, and what is propaganda. More importantly, as many autocratic regimes have learned, this tactic is fantastic for seeding the public’s doubt in institutions and information intermediaries.

The democratization of manipulation

In the early days of blogging, many of my fellow bloggers imagined that our practice could disrupt mainstream media. For many progressive activists, social media could be a tool that could circumvent institutionalized censorship and enable a plethora of diverse voices to speak out and have their say. Civic minded scholars were excited by “smart mobs” who leveraged new communications platforms to coordinate in a decentralized way to speak truth to power. Arab Spring. Occupy Wall Street. Black Lives Matter. These energized progressives as “proof” that social technologies could make a new form of civil life possible.

I spent 15 years watching teenagers play games with powerful media outlets and attempt to achieve control over their own ecosystem. They messed with algorithms, coordinated information campaigns, and resisted attempts to curtail their speech. Like Chinese activists, they learned to hide their traces when it was to their advantage to do so. They encoded their ideas such that access to content didn’t mean access to meaning.

Of course, it wasn’t just progressive activists and teenagers who were learning how to mess with the media ecosystem that has emerged since social media unfolded. We’ve also seen the political establishment, law enforcement, marketers, and hate groups build capacity at manipulating the media landscape. Very little of what’s happening is truly illegal, but there’s no widespread agreement about which of these practices are socially and morally acceptable or not.

The techniques that are unfolding are hard to manage and combat. Some of them look like harassment, prompting people to self-censor out of fear. Others look like “fake news”, highlighting the messiness surrounding bias, misinformation, disinformation, and propaganda. There is hate speech that is explicit, but there’s also suggestive content that prompts people to frame the world in particular ways. Dog whistle politics have emerged in a new form of encoded content, where you have to be in the know to understand what’s happening. Companies who built tools to help people communicate are finding it hard to combat the ways their tools are being used by networks looking to skirt the edges of the law and content policies. Institutions and legal instruments designed to stop abuse are finding themselves ill-equipped to function in light of networked dynamics.

The Internet has long been used for gaslighting, and trolls have long targeted adversaries. What has shifted recently is the scale of the operation, the coordination of the attacks, and the strategic agenda of some of the players.

For many who are learning these techniques, it’s no longer simply about fun, nor is it even about the lulz. It has now become about acquiring power.

A new form of information manipulation is unfolding in front of our eyes. It is political. It is global. And it is populist in nature. The news media is being played like a fiddle, while decentralized networks of people are leveraging the ever-evolving networked tools around them to hack the attention economy.

I only wish I knew what happens next.

This post was first published as part of a series on media, accountability, and the public sphere.

 

This post was also translated to Portuguese

by zephoria at January 06, 2017 09:12 AM

January 04, 2017

MIMS 2012

Books I Read in 2016

In 2016, I read 22 books. Only 3 of those 22 were fiction. I had a consistent clip of 1-3 per month, and managed to finish at least one book each month.

Highlights include:

  • The Laws of Simplicity by John Maeda: the first book I read this year was super interesting. In it, Maeda offers 10 laws for balancing simplicity and complexity in business, technology, and design. By the end, he simplifies the book down to one law: “Simplicity is about subtracting the obvious, and adding the meaningful.”
  • David Whitaker Painting by Matthew Sturgis: I had never heard of the artist David Whitaker until I stumbled on this book at Half Price Books in Berkeley. He makes abstract paintings that combine lines and colors and gradients in fantastic ways. The cover sucked me in, and after flipping through a few pages I fell in love with his work and immediately bought the book. Check out his work on his portfolio.
  • Libra by Don DeLillo: a fascinating account of all the forces (including internal ones) that pushed Lee Harvey Oswald into assassinating JFK. The book is fiction and includes plenty of embellishments from the author (especially internal dialog), but is based on real facts from Oswald’s life and the assassination.
  • NOFX: The Hepatitis Bathtub and Other Stories by NOFX: a thoroughly entertaining history of the SoCal pop-punk band NOFX as told through various ridiculous stories from the members of the band themselves. It was perfect poolside reading in Cabo.
  • Org Design for Design Orgs by Peter Merholz & Kristin Skinner: This is basically a handbook for what I should be doing as the Head of Design at Optimizely. I can’t overstate how useful this has been to me in my job. If you’re doing any type of design leadership, I highly recommend it.
  • The Gift, by Lewis Hyde: a very thought-provoking read about creativity and the tension between art and commerce. So thought-provoking that it provoked me into writing down my thoughts in my last blog post.

Full List of Books Read

  • The Laws of Simplicity by John Maeda (1/3/16)
  • Although of Course You End up Becoming Yourself by David Lipsky (1/24/16)
  • Practical Empathy by Indi Young (2/1/16)
  • Time Out of Joint by Philip K. Dick (2/8/16)
  • A Wild Sheep Chase by Haruki Murakami (3/5/16)
  • Radical Focus: Achieving Your Most Important Goals with Objectives and Key Results by Christina Wodtke (3/21/16)
  • The Elements of Style by William Strunk Jr. and E.B. White (3/23/16)
  • Sprint: How to solve big problems and test new ideas in just 5 days by Jake Knapp, with John Zeratsky & Braden Kowitz (4/8/16)
  • David Whitaker Painting by Matthew Sturgis (4/18/16)
  • Show Your Work by Austin Kleon (5/8/16)
  • Nicely Said by Kate Kiefer Lee and Nicole Fenton (6/5/16)
  • The Unsplash Book by Jory MacKay (6/27/16)
  • Words Without Music: A Memoir by Philip Glass (July)
  • Libra by Don DeLillo (8/21/16)
  • How To Visit an Art Museum by Johan Idema (8/23/16)
  • 101 Things I Learned in Architecture School by Matthew Frederick (9/5/16)
  • Intercom on Jobs-to-be-Done by Intercom (9/17/16)
  • Org Design for Design Orgs by Peter Merholz & Kristin Skinner (9/26/16)
  • NOFX: The Hepatitis Bathtub and Other Stories by NOFX with Jeff Alulis (10/23/16)
  • The User’s Journey: Storymapping Products That People Love by Donna Lichaw (11/10/16)
  • Sharpie Art Workshop Book by Timothy Goodman (11/13/16)
  • The Gift by Lewis Hyde (12/29/16)

by Jeff Zych at January 04, 2017 05:08 AM

December 31, 2016

MIMS 2012

Thoughts on “The Gift”

I finally finished “The Gift,” by Lewis Hyde, after reading it on and off for at least the last 4 months (probably more). Overall I really enjoyed it and found it very thought-provoking. At its core it’s about creativity, the arts, and the tension between art and commerce — topics which are fascinating to me. It explores the question, how do artists make a living in a market-based economy? (I say “explores the question” instead of “answers” because it doesn’t try to definitively answer the question, although some solutions are provided).

It took me a while to finish, though, because the book skews academic at times, which made some sections a slog to get through. The first half goes pretty deep into topics including the theory of gifts, history of gift-giving, folklores about gifts, and how gift-based economies function; the latter half uses Walt Whitman and Ezra Pound as real-life examples of the theory-based first half. Both of these sections felt like they could have been edited down to be much more succinct, while still preserving the main points being made. This would have made the book easier to get through, and the book’s main points easier to parse and more impactful.

There’s a sweet spot in the middle, however, which is a thought-provoking account of the creative process and how artists describe their work. If I were to re-read the book I’d probably just read Chapter 8, “The Commerce of the Creative Spirit.”

The book makes a lot of interesting points about gifts and gift-giving, market economies, artists and the creative process, how artists can survive in a market economy, and the Cold War’s effect on art in America, which I summarize below.

On Gifts and Gift-Giving

  • Gifts need to be used or given away to have any value. Value comes from the gift’s use. They can’t be sold or stay with an individual. If they do, they’re wasted. This is true of both actual objects and talent.
  • Gift giving is a river that needs to stay in motion, whereas markets are an equilibrium that seeks balance.
  • Giving a gift creates a bond between the giver and recipient. Commerce leaves no connection between people. Gifts foster community, whereas commerce fosters freedom and individuals. Gifts are agents of social cohesion.
  • Gifts are given with no expectation of a return gift. By giving something to a member of the community, or the community itself, you trust that the gift will eventually return to you in some other form by the community.
  • Converting a gift to money, e.g. by selling it on the open market, undermines the group’s cohesion, fragments the group, and could destroy it if it becomes the norm.
  • Gift economies don’t scale, though. Once it grows beyond the point that each member knows each other to some degree it will collapse.

On Market Economies

  • Market economies are good for dealing with strangers, i.e. people who aren’t part of a group, people who you won’t see again. There’s a fair value to exchange goods and services with people outside the group, and no bond is created between people.
  • Markets serve to divide, classify, quantify. Gifts and art are a way of unifying people.

On Artists and the Creative Process

  • Artists typically don’t know where their work comes from. They produce something, then evaluate it and think, “Did I do that?”
  • To produce art, you have to turn off the part of your brain that quantifies, edits, judges. Some artists step away from their work, go on retreats, travel, see new things, have new experiences, take drugs, isolate themselves, and so on. The act of judging and evaluating kills the creative process. Only after a work of art is created can an artist quantify it and judge it and (maybe) sell it.
  • Art is a gift that is given to the world, and that gift has the power to awaken new artists (see above, gifts must keep moving). That is, an artist is initially inspired by a master’s work of art to produce their own. In this way, art is further given back to the world, and the cycle of gift-giving continues.
  • Each piece of work an artist produces is a gift given to them by an unknown external agent, and in turn a gift they pass on to the world.
  • Artists “receive” their work – it’s an invocation of something (e.g. “muse”, “genius”, etc.). The initial spark comes to them from a source they do not control. Only after this initial raw “materia” appears does the work of evaluation, clarification, revision begin. Refining an idea, and bringing it into the world, comes after that initial spark is provided to them by an external source. "Invoking the creative spirit"
    • Artists can’t control the initial spark, or will it to appear. The artist is the servant of the initial spark.
    • Evaluation kills creativity – it must be laid aside until after raw material is created.
  • The act of creation does not empty the wellspring that provided that initial spark; rather, the act of creation assures the flow continues and that the wellspring will never empty. Only if it’s not used does it go dry.
  • Imagination can assemble our fragmented experiences into a coherent whole. An artist’s work, once produced, can then reproduce the same spirit or gift initially given to them in the audience.
  • This binds people by being a shared “gift” for all who are able to receive it. This widens one’s sense of self.
  • The spirit of a people can be given form in art. This is how art comes to represent groups.
  • The primary commerce of art is gift exchange, not market exchange.

How Artists Can Survive in a Market Economy

The pattern for artists to survive is that they need to be able to create their work in a protected gift sphere, free of evaluation and judgment and quantification. Only then, after the work has been made real, can they evaluate it and bring it to market. By bringing it to the market they can convert their gift into traditional forms of wealth, which they can reinvest in their gift. But artists can’t start in the market economy, because that isn’t art. It’s “commercial art,” i.e. creating work to satisfy an external market demand, rather than giving an internal gift to the world.

There are three ways of doing this, plus a bonus:

  1. Sell the work itself on the market — but only after it’s been created. Artists need to be careful to keep the two separate.
  2. Patronage model. A king, or grants, or other body pays for the artist to create work.
  3. Work a job to pay the bills, and create your work outside of that. This frees artists from having to subsist on their work alone, and frees them to say what they want to say. This is, in a sense, self-patronage.
  4. Bonus way: arts support the arts. This means the community of artists creates a fund, or trust, that is invested in new artists. The fund’s money comes by taking a percentage of the profits from established artists. This is another form of patronage.

But even using these models, Hyde is careful to point out that this isn’t a way to become rich – it’s a way to “survive.” And even within these models there are still pitfalls.

The Soviet Union’s effect on art in America

In the 25th Anniversary edition afterword, Hyde makes the connection that the Cold War spurred America to increase funding to the arts and sciences to demonstrate the culture and freedom of expression that a free market supports. A communist society, on the other hand, doesn’t value art and science since they don’t typically have direct economic benefit, and thus doesn’t have the same level of expression as a free market. The end of the Cold War, unfortunately, saw a decrease in funding since the external threat was removed. This was an interesting connection that I hadn’t thought about before.

Final Thoughts

All in all, a very thought-provoking book that I’m glad I read.

by Jeff Zych at December 31, 2016 08:47 PM

December 27, 2016

Ph.D. student

the impossibility of conveying experience under conditions of media saturation

I had a conversation today with somebody who does not yet use a smart phone. She asked me how my day had gone so far and what I had done.

Roughly speaking, the answer to the question was that I had spent the better part of the day electronically chatting with somebody else, who had recommended to me an article that I had found interesting, but then when attempting to share it with friends on social media I was so distracted by a troubled post by somebody else that I lost all resolve.

Not having either article available at my fingertips for the conversation, and not wanting to relay the entirety of my electronic chat, I had to answer with dissatisfying and terse statements to the effect that the person I had spoken with was just fine, the article I read had been interesting, and that something had reminded me of something else, which put me in a bad mood.

The person I was speaking with is very verbal, and answers like these are disappointing for her. To her, not being able to articulate something is a sign that one is not thinking about it sufficiently clearly. To be inarticulate is to be uncomprehending.

What I was facing, on the contrary, was a situation where I had been subject to articulation and nothing but it for the better part of the day. My life is so saturated by media that the amount of information I’m absorbing in the average waking hour or two is just more than can be compressed into a conversation. The same text can’t occur twice, and the alternative perspective of the interlocutor makes it almost impossible to relay what the media meant to me, even if I were able to reproduce it literally for her. Add to this the complexity of my own reactions to the stimuli, which oscillate with my own thoughts on the matter, and you can see how I’d come to the conclusion at the end of the day that there is no way to convey one’s lived experience accurately in writing when one’s life is so saturated by media that such a conveyance would devolve into an endlessly nested system of quotations.

I’ve spent the past five years in graduate school. There’s a sense in graduate school that writing still matters. One may be expected to produce publications, even go to such lengths as writing a book. But when so much of what used to be considered conversation is now writing, one wonders whether the book, or the published article, has lost its prestige. The vague mechanics of publication no longer serve as a gatekeeper for what can and cannot be read. Rather, ‘publication’ serves some other function, perhaps a signal of provenance or a promise that something will be preserved.

The recent panic over “fake news” recalls a past when publication was a source of quality control and accountability. There was something comprehensible about a world where the official narrative was written and verified by an institution. Centralized media was a condition for modernism. Now what is our media ecosystem? Not even the awesome power of the search engine is able to tame the jungle of social media. Media is available all but immediately to the appetite of the consumer, and the millennial citizen bathes daily in the broth of those appetites. Words are no longer a mode of communication, they are a mode of consciousness. And the more of them that confront the mind, the more they resemble mere sensations, not the kinds of ideas one would assemble into a phrase and utter to another with import.

There is no going back. Our media chaos is, despite its newness, primordial. Old patterns of authority are obsolete. The questions we must ask are: what now? And, how can we tell anybody?


by Sebastian Benthall at December 27, 2016 12:01 AM

December 23, 2016

Ph.D. student

notes about natural gas and energy policy

I’m interested in energy (in the sense of the economy and ecology of energy as it powers society) but know nothing about it.

I feel like the last time I really paid attention to energy, it was still a question of oil (and its industrial analog, Big Oil) and alternative, renewable energy.

But now energy production in the U.S. has shifted from oil to natural gas. I asked a friend about why, and I’ve filled in a big gap in my understanding of What’s Going On. What I filled it in with might be wrong, but here’s what it is so far:

  • At some point natural gas became a viable alternative to oil because the energy companies discovered it was cheaper to collect natural gas than to drill for oil.
  • The use of natural gas for energy has less of a carbon footprint than oil does. That makes it environmentally friendly relative to the current regulatory environment.
  • The problem (there must be a problem) is that the natural gas collection process has lots of downsides. These downsides are mainly because the process is very messy, involving smashing into some pocket of natural gas under lots of rock and trying to collect the good stuff. Lots of weird gases go everywhere. That has downsides, including:
    • Making the areas where this is happening unlivable. Because it’s harder to breathe? Because the water can be set on fire? It’s terrible.
    • It releases a lot of methane into the environment, which may be as bad for climate change as carbon dioxide, if not worse. Who knows how bad it really is? Unclear.
  • Here’s the point (totally unconfirmed): The shift from oil to natural gas as an energy source has been partly due to a public awareness and regulatory gap about the side effects. There’s now lots of political pressure and science around carbon. But methane? I thought that was an energy source (because of Mad Max Beyond Thunderdome). I guess I was wrong.
  • Meanwhile, OPEC and non-OPEC producers have teamed up to restrict oil production to hike up oil prices. Sucks for energy consumers, but that’s actually good for the environment.
  • Also, in response to the apparent reversal of U.S. federal interest in renewable energy, philanthropy-plus-market has stepped in with Breakthrough Energy Ventures. Since venture capital investors with technical backgrounds, unlike the U.S. government, tend to be long on science, this is just great.
  • So what: The critical focus for those interested in the environment now should be on the environmental and social impact of natural gas production, as oil has been taken care of and heavy hitters are backing sustainable energy in a way that will fix the problem if it can truly be fixed. We just have to not boil the oceans and poison all the children before they can get to it.

      If that doesn’t work, I guess at the end of the day, there’s always pigs.


by Sebastian Benthall at December 23, 2016 08:16 PM

MIMS 2016

Well, I did not know about Fractals. I will check it out.

And thanks for reading! :)

by nikhil at December 23, 2016 04:23 AM

December 15, 2016

Ph.D. student

Protected: What’s going on?

This post is password protected. You must visit the website and enter the password to continue reading.


by Sebastian Benthall at December 15, 2016 04:38 AM

December 12, 2016

Ph.D. student

energy, not technology

I’m still trying to understand what’s happening in the world and specifically in the U.S. with the 2016 election. I was so wrong about it that I think I need to take seriously the prospect that I’ve been way off in my thinking about what’s important.

In my last post, I argued that the media isn’t as politically relevant as we’ve been told. If underlying demographic and economic variables were essentially as predictive of voter behavior as anything, then media mishandling of polling data or biased coverage just isn’t what’s accounting for the recent political shift.

Part of the problem with media determinist accounts of the election is that because they deal with the minutiae of reporting within the United States, they don’t explain how Brexit foreshadowed Trump’s election, as anybody paying attention has been pointing out for months.

So what happens if we take seriously the explanation that what’s really happening is a reaction against globalization? That’s globalization in the form of a centralized EU government, or globalization in the form of U.S. foreign policy and multiculturalism. If the United States under Obama was trying to make itself out to be a welcoming place for global intellectual talent to come and contribute to the economy through Silicon Valley jobs, then arguably the election was the backfire.

An insulated focus on “the tech industry” and its political relevance has been a theme in my media bubble for the past couple of years. Arguably, that’s just because people thought the tech industry was where the power and the money was. So of course the media should scrutinize that, because everyone trying to get to the top of that pile wants to know what’s going on there.

Now it’s not clear who is in power any more. (I’ll admit I’m just thinking about power as a sloppy aggregate of political and economic power. Let’s assume that political power backing an industry leads to a favorable regulatory environment for that industry’s growth; that’s not a bad model.) It doesn’t seem like it’s Silicon Valley any more. Probably it’s the energy industry.

There’s a lot going on in the energy industry! I know basically diddly about it but I’ve started doing some research.

One interesting thing that’s happening is that Russia and OPEC are teaming up to cut oil production. This is unprecedented. It also, to me, creates a confusing narrative. I thought Obama’s Clean Power Plan, focusing on renewable energy, and efforts to build international consensus around climate change were the best bets for saving the world from high emissions. But since cutting oil production leads to cutting oil consumption, what if the thing that really can cut carbon dioxide emissions is an oligopolistic price hike on oil?

That said, oil prices may not necessarily dictate energy prices in the U.S. because a lot of the energy used is natural gas. Shale gas, in particular, is apparently a growing percentage of the natural gas used in the U.S. It’s apparently better than oil in terms of CO2 emissions. Though it’s extracted through fracking, which disgusts a lot of people!

Related: I was pretty pissed when I heard about Rex Tillerson, CEO of Exxon Mobil, being tapped for Secretary of State. Because that’s the same old oil companies that have messed things up so much before, right? Maybe not. Apparently Exxon Mobil also invests heavily in natural gas. As their website will tell you, the gas industry uses a lot of human labor. Which is obviously a plus in this political climate.

What’s interesting to me about all this is that it all seems very important but it has absolutely nothing to do with social media or even on-line marketplaces. It’s all about stuff way, way upstream on the supply chain.

It is certainly humbling to feel like your area of expertise doesn’t really matter. But I’m not sure what to even do as a citizen now that I realize how little I understand. I think there’s been something very broken about my theory about society and the world.

The next few posts may continue to have this tone of “huh”. I expect I’ll be stating what’s obvious to a lot of people. But whatever. I just need to sort some things out.


by Sebastian Benthall at December 12, 2016 02:55 AM

December 07, 2016

Ph.D. student

post-election updates

Like a lot of people, I was completely surprised by the results of the 2016 election.

Rationally, one has to take these surprises as an opportunity to update one’s point of view. As it’s been almost a month, there’s been lots of opportunity to process what’s going on.

For my own sake, more than for any reader, I’d like to note my updates here.

The first point has been best articulated by Jon Stewart:

Stewart rejected the idea that better news coverage would have changed the outcome of the election. “The idea that if [the media] had done a better job this country would have made another choice is fake,” he said. He cited Brexit as an example of an unfortunate outcome that occurred despite its lead-up being appropriately covered by outlets like the BBC, which offered a much more balanced view than CNN, for example. “Trump didn’t happen because CNN sucks—CNN just sucks,” he said.

Satire and comedy also couldn’t have stood in the way of Trump winning, Stewart said. If this election has taught us anything, he said, it’s that “controlling the culture does not equate to holding the power.”

I once cared a lot about “money in politics” at the level of campaign donations. After a little critical thinking, this leads naturally to a concern about the role of the media more generally in elections. Centralized media in particular will never put themselves behind a serious bid for campaign finance reform because those media institutions cash out every election. This is what it means for a problem to be “systemic”: it is caused by a tightly reinforcing feedback loop that makes it into a kind of social structural knot.

But with the 2016 presidential election, we’ve learned that because of the Internet, media are so fragmented that even controlled media are not in control. People will read what they want to read, one way or another. Whatever narrative suits a person best, they will be able to find it on the Internet.

A perhaps unhelpful way to say this is that the Internet has set the Bourdieusian habitus free from media control.

But if the media doesn’t determine habitus, what does?

While there is a lot of consternation about the failure of polling (which is interesting), and while that could have negatively impacted Democratic campaign strategy (didn’t it?), the more insightful-sounding commentary has recognized that the demographic fundamentals were in favor of Trump all along because of what he stood for economically and socially. Michael Moore predicted the election result; logically, because he was right, we should update towards his perspective; he makes essentially this point about Midwestern voters, angry men, depressed progressives, and the appeal of oddball voting all working against Hillary. But none of these conditions have as much to do with media as they do with the preexisting population conditions.

There’s a tremendous bias among those who “study the Internet” to assign tremendous political importance to the things we have expertise on: the media, algorithms, etc. My biggest update this election was that I now think these are eclipsed in political relevance by macro-economic issues like globalization. At best, changes to, say, the design of social media platforms are going to change things for a few people at the margins. But larger structural forces are both more effective and more consequential in politics. I bet that a prediction of the 2016 election based primarily on the demographic distribution of winners and losers according to each candidate’s energy policy, for example, would have been more valuable than all the rest of the polling and punditry combined. I suppose I was leaning this way throughout 2016, but the election sealed the deal for me.

This is a relief for me because it has revealed just how many of my internalized anxieties about politics have been irrelevant. There is something very freeing in discovering that many things you once thought were the most important issues in the world really just aren’t. If all those anxieties were proven to just be in my head, then it’s easier to let them go. Now I can start wondering about what really matters.


by Sebastian Benthall at December 07, 2016 07:17 AM

December 03, 2016

Ph.D. student

directions to migrate your WebFaction site to HTTPS

Hiya friends using WebFaction,

Securing the Web, even our little websites, is important — to set a good example, to maintain the confidentiality and integrity of our visitors, to get the best Google search ranking. While secure Web connections had been difficult and/or costly in the past, more recently, migrating a site to HTTPS has become fairly straightforward and costs $0 a year. It may get even easier in the future, but for now, the following steps should do the trick.

Hope this helps, and please let me know if you have any issues,
Nick

P.S. Yes, other friends, I recommend WebFaction as a host; I’ve been very happy with them. Services are reasonably priced and easy to use and I can SSH into a server and install stuff. Sign up via this affiliate link and maybe I get a discount on my service or something.

P.S. And really, let me know if and when you have issues. Encrypting access to your website has gotten easier, but it needs to become much easier still, and one part of that is knowing which parts of the process prove to be the most cumbersome. I’ll make sure your feedback gets to the appropriate people who can, for realsies, make changes as necessary to standards and implementations.

Updated 2 December 2016: to use new letsencrypt-webfaction design, which uses WebFaction's API and doesn't require emails and waiting for manual certificate installation.

Updated 16 July 2016: to fix the cron job command, which may not have always worked depending on environment variables


One day soon I hope WebFaction will make more of these steps unnecessary, but the configuring and testing will be something you have to do manually in pretty much any case. You should be able to complete all of this in an hour some evening.

Create a secure version of your website in the WebFaction Control Panel

Log in to the WebFaction Control Panel, choose the “DOMAINS/WEBSITES” tab and then click “Websites”.

Click “Add new website” and create one that will correspond to one of your existing websites. I suggest choosing a name like existingname-secure. Choose “Encrypted website (https)”. For Domains, testing will be easiest if you choose both your custom domain and a subdomain of yourusername.webfactional.com. (If you don’t have one of those subdomains set up, switch to the Domains tab and add it real quick.) So, for my site, I chose npdoty.name and npdoty.npd.webfactional.com.

Finally, for “Contents”, click “Re-use an existing application” and select whatever application (or multiple applications) you’re currently using for your http:// site.

Click “Save” and this step is done. This shouldn’t affect your existing site one whit.

Test to make sure your site works over HTTPS

Now you can test how your site works over HTTPS, even before you’ve created any certificates, by going to https://subdomain.yourusername.webfactional.com in your browser. Hopefully everything will load smoothly, but it’s reasonably likely that you’ll have some mixed content issues. The debug console of your browser should show them to you: that’s Apple-Option-K in Firefox or Apple-Option-J in Chrome. You may see some warnings like this, telling you that an image, a stylesheet or a script is being requested over HTTP instead of HTTPS:

Mixed Content: The page at ‘https://npdoty.name/’ was loaded over HTTPS, but requested an insecure image ‘http://example.com/blah.jpg’. This content should also be served over HTTPS.

Change these URLs so that they point to https://example.com/blah.jpg (you could also use a scheme-relative URL, like //example.com/blah.jpg) and update the files on the webserver and re-test.
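
If your site has more than a few pages, grepping for hardcoded http:// references can speed this up. A minimal sketch, assuming your site’s files live under ~/webapps/sitename/ (adjust the path and file extensions to match your setup):

grep -rn "http://" ~/webapps/sitename/ --include="*.html" --include="*.css" --include="*.js"

Each match is a candidate for switching to https:// or a scheme-relative URL.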

Good job! Now, https://subdomain.yourusername.webfactional.com should work just fine, but https://yourcustomdomain.com shows a really scary message. You need a proper certificate.

Get a free certificate for your domain

Let’s Encrypt is a new, free, automated certificate authority from a bunch of wonderful people. But to get it to set up certificates on WebFaction is a little tricky, so we’ll use the letsencrypt-webfaction utility — thanks will-in-wi!

SSH into the server with ssh yourusername@yourusername.webfactional.com.

To install, run this command:

GEM_HOME=$HOME/.letsencrypt_webfaction/gems RUBYLIB=$GEM_HOME/lib gem2.2 install letsencrypt_webfaction

(Run the same command to upgrade; necessary if you followed these instructions before Fall 2016.)
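
To double-check that the gem landed where we expect, you can list it; this is optional, and assumes the same GEM_HOME location used in the install command above:

GEM_HOME=$HOME/.letsencrypt_webfaction/gems gem2.2 list letsencrypt_webfaction

You should see letsencrypt_webfaction with a version number in the output.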

For convenience, you can add this as a function to make it easier to call. Edit ~/.bash_profile to include:

function letsencrypt_webfaction {
    PATH=$PATH:$GEM_HOME/bin GEM_HOME=$HOME/.letsencrypt_webfaction/gems RUBYLIB=$GEM_HOME/lib ruby2.2 $HOME/.letsencrypt_webfaction/gems/bin/letsencrypt_webfaction $*
}
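
After saving ~/.bash_profile, reload it so the new function is available in your current shell (or just log out and back in):

source ~/.bash_profile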

Now, let’s test the certificate creation process. You’ll need your email address, the domain you're getting a certificate for, the path to the files for the root of your website on the server, e.g. /home/yourusername/webapps/sitename/ and the WebFaction username and password you use to log in. Filling those in as appropriate, run this command:

letsencrypt_webfaction --letsencrypt_account_email you@example.com --domains yourcustomdomain.com --public /home/yourusername/webapps/sitename/ --username webfaction_username --password webfaction_password

If all went well, you’ll see nothing on the command line. To confirm that the certificate was created successfully, check the SSL certificates tab on the WebFaction Control Panel. ("Aren't these more properly called TLS certificates?" Yes. So it goes.) You should see a certificate listed that is valid for your domain yourcustomdomain.com; click on it and you can see the expiry date and a bunch of gobbledygook which actually is the contents of the certificate.

To actually apply that certificate, head back to the Websites tab, select the -secure version of your website from the list and in the Security section, choose the certificate you just created from the dropdown menu.

Test your website over HTTPS

This time you get to test it for real. Load https://yourcustomdomain.com in your browser. (You may need to force refresh to get the new certificate.) Hopefully it loads smoothly and without any mixed content warnings. Congrats, your site is available over HTTPS!

You are not done. You might think you are done, but if you think so, you are wrong.

Set up automatic renewal of your certificates

Certificates from Let’s Encrypt expire in no more than 90 days. (Why? There are two good reasons.) Your certificates aren’t truly set up until you’ve set them up to renew automatically. You do not want to do this manually every few months; you will forget, I promise.

Cron lets us run code on WebFaction’s server automatically on a regular schedule. If you haven’t set up a cron job before, it’s just a fancy way of editing a special text file. Run this command:

EDITOR=nano crontab -e

If you haven’t done this before, this file will be empty, and you’ll want to test it to see how it works. Paste the following line of code exactly, and then hit Ctrl-O and Ctrl-X to save and exit.

* * * * * echo "cron is running" >> $HOME/logs/user/cron.log 2>&1

This will output to that log every single minute; not a good cron job to have in general, but a handy test. Wait a few minutes and check ~/logs/user/cron.log to make sure it’s working.

Rather than including our username and password in our cron job, we'll set up a configuration file with those details. Create a file config.yml, perhaps at the location ~/le_certs. (If necessary, mkdir le_certs, touch le_certs/config.yml, nano le_certs/config.yml.) In this file, paste the following, and then customize with your details:

letsencrypt_account_email: 'you@example.com'
api_url: 'https://api.webfaction.com/'
username: 'webfaction_username'
password: 'webfaction_password'

(Ctrl-O and Ctrl-X to save and close it.) Now, let’s edit the crontab to remove the test line and add the renewal line, being sure to fill in your domain name, the path to your website’s directory, and the path to the configuration file you just created:

0 4 15 */2 * PATH=$PATH:$GEM_HOME/bin GEM_HOME=$HOME/.letsencrypt_webfaction/gems RUBYLIB=$GEM_HOME/lib /usr/local/bin/ruby2.2 $HOME/.letsencrypt_webfaction/gems/bin/letsencrypt_webfaction --domains example.com --public /home/yourusername/webapps/sitename/ --config /home/yourusername/le_certs/config.yml >> $HOME/logs/user/cron.log 2>&1

You’ll probably want to create the line in a text editor on your computer and then copy and paste it to make sure you get all the substitutions right. Paths must be fully specified as above; don't use ~ for your home directory. Ctrl-O and Ctrl-X to save and close it. Check with crontab -l that it looks correct. As a test to make sure the config file setup is correct, you can run the command part directly; if it works, you shouldn't see any error messages on the command line. (Copy and paste the line below, making the same substitutions as you just did for the crontab.)

PATH=$PATH:$GEM_HOME/bin GEM_HOME=$HOME/.letsencrypt_webfaction/gems RUBYLIB=$GEM_HOME/lib /usr/local/bin/ruby2.2 $HOME/.letsencrypt_webfaction/gems/bin/letsencrypt_webfaction --domains example.com --public /home/yourusername/webapps/sitename/ --config /home/yourusername/le_certs/config.yml

With that cron job configured, you'll automatically get a new certificate at 4am on the 15th of alternating months (January, March, May, July, September, November). New certificates every two months is fine, though one day in the future we might change this to get a new certificate every few days; before then WebFaction will have taken over the renewal process anyway. Debugging cron jobs can be tricky (I've had to update the command in this post once already); I recommend adding an alert to your calendar for the day after the first time this renewal is supposed to happen, to remind yourself to confirm that it worked. If it didn't work, any error messages should be stored in the cron.log file.
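
If cron’s schedule syntax is unfamiliar, the five leading fields of that crontab line break down as follows (standard cron behavior, nothing WebFaction-specific):

minute        0     run at minute 0
hour          4     at 4am (server time)
day of month  15    on the 15th
month         */2   every second month (January, March, May, …)
day of week   *     any day of the week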

Redirect your HTTP site (optional, but recommended)

Now you’re serving your website in parallel via http:// and https://. You can keep doing that for a while, but everyone who follows old links to the HTTP site won’t get the added security, so it’s best to start permanently re-directing the HTTP version to HTTPS.

WebFaction has very good documentation on how to do this, and I won’t duplicate it all here. In short, you’ll create a new static application named “redirect”, which just has a .htaccess file with, for example, the following:

# Enable mod_rewrite processing
RewriteEngine On
# If the request came in on a www. hostname, redirect to the bare domain over HTTPS
RewriteCond %{HTTP_HOST} ^www\.(.*)$ [NC]
RewriteRule ^(.*)$ https://%1/$1 [R=301,L]
# Otherwise, if the request wasn't already made over SSL (as reported by the front end), redirect it to HTTPS
RewriteCond %{HTTP:X-Forwarded-SSL} !on
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

This particular variation will both redirect any URLs that have www to the “naked” domain and make all requests HTTPS. And in the Control Panel, make the redirect application the only one on the HTTP version of your site. You can re-use the “redirect” application for different domains.

Test to make sure it’s working! http://yourcustomdomain.com, http://www.yourcustomdomain.com, https://www.yourcustomdomain.com and https://yourcustomdomain.com should all end up at https://yourcustomdomain.com. (You may need to force refresh a couple of times.)
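
If you’d rather not fight browser caching while testing, curl can confirm the redirects from the command line; each of the non-canonical URLs should return a 301 with a Location header pointing at the HTTPS site (substitute your own domain):

curl -sI http://www.yourcustomdomain.com | grep -Ei "^(HTTP|Location)"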

by nick@npdoty.name at December 03, 2016 12:55 AM

November 30, 2016

adjunct professor

Dept. of Commerce’s Privacy Shield Checklist

Practitioner friends, the Department of Commerce just released their checklist for Privacy Shield applicants. More on this later.

by web at November 30, 2016 05:42 PM

November 27, 2016

Ph.D. student

reflexive control

A theory I wish I had more time to study in depth these days is the Soviet field of reflexive control (see for example this paper by Timothy Thomas on the subject).

Reflexive control is defined as a means of conveying to a partner or an opponent specially prepared information to incline him to voluntarily make the predetermined decision desired by the initiator of the action. Even though the theory was developed long ago in Russia, it is still undergoing further refinement. Recent proof of this is the development in February 2001, of a new Russian journal known as Reflexive Processes and Control. The journal is not simply the product of a group of scientists but, as the editorial council suggests, the product of some of Russia’s leading national security institutes, and boasts a few foreign members as well.

While the paper describes the theory in broad strokes, I’m interested in how one would formalize and operationalize reflexive control. My intuitions thus far are like this: traditional control theory assumes that the controlled system is inanimate or at least not autonomous. The controlled system is steered, often dynamically, to some optimal state. But in reflexive control, the assumption is that the controlled system is autonomous and has a decision-making process or intelligence. Therefore reflexive control is a theory of influence, perhaps deception. Going beyond mere propaganda, it seems like reflexive control can be highly reactive, taking into account the reaction time of other agents in the field.

There are many examples, from a Russian perspective, of the use of reflexive control theory during conflicts. One of the most recent and memorable was the bombing of the market square in Sarajevo in 1995. Within minutes of the bombing, CNN and other news outlets were reporting that a Serbian mortar attack had killed many innocent people in the square. Later, crater analysis of the shells that impacted in the square, along with other supporting evidence, indicated that the incident did not happen as originally reported. This evidence also threw into doubt the identities of the perpetrators of the attack. One individual close to the investigation, Russian Colonel Andrei Demurenko, Chief of Staff of Sector Sarajevo at the time, stated, “I am not saying the Serbs didn’t commit this atrocity. I am saying that it didn’t happen the way it was originally reported.” A US and Canadian officer soon backed this position. Demurenko believed that the incident was an excellent example of reflexive control, in that the incident was made to look like it had happened in a certain way to confuse decision-makers.

Thomas’s article points out that the notable expert in reflexive control in the United States is V. A. Lefebvre, a Soviet expat and mathematical psychologist at UC Irvine. He is listed on a faculty listing but doesn’t seem to have a personal home page. His Wikipedia page says that reflexive theory is like the Soviet alternative to game theory. That makes sense. Reflexive theory has been used by Lefebvre to articulate a mathematical ethics, which is surely relevant to questions of machine ethics today.

Beyond its fascinating relevance to many open research questions in my field, it is interesting to see in Thomas’s article how “reflexive control” seems to capture so much of what is considered “cybersecurity” today.

One of the most complex ways to influence a state’s information resources is by use of reflexive control measures against the state’s decision-making processes. This aim is best accomplished by formulating certain information or disinformation designed to affect a specific information resource best. In this context an information resource is defined as:

  • information and transmitters of information, to include the method or technology of obtaining, conveying, gathering, accumulating, processing, storing, and exploiting that information;
  • infrastructure, including information centers, means for automating information processes, switchboard communications, and data transfer networks;
  • programming and mathematical means for managing information;
  • administrative and organizational bodies that manage information processes, scientific personnel, creators of data bases and knowledge, as well as personnel who service the means of informatizatsiya [informatization].

Unlike many people, I don’t think “cybersecurity” is very hard to define at all. The prefix “cyber-” clearly refers to the information-based control structures of a system, and “security” is just the assurance of something against threats. So we might consider “reflexive control” to be essentially equivalent to “cybersecurity”, except with an emphasis on the offensive rather than defensive aspects of cybernetic control.

I have yet to find something describing the mathematical specifics of the theory. I’d love to find something and see how it compares to other research in similar fields. It would be fascinating to see where Soviet and Anglophone research on these topics is convergent, and where it diverges.


by Sebastian Benthall at November 27, 2016 04:00 AM

November 21, 2016

Ph.D. student

For “Comments on Haraway”, see my “Philosophy of Computational Social Science”

One of my most frequently visited blog posts is titled “Comments on Haraway: Situated knowledge, bias, and code”.  I have decided to password protect it.

If you are looking for a reference with the most important ideas from that blog post, I refer you to my paper, “Philosophy of Computational Social Science”. In particular, its section on “situated epistemology” discusses how I think computational social scientists should think about feminist epistemology.

I have decided to hide the original post for a number of reasons.

  • I wrote it pointedly. I think all the points have now been made better elsewhere, either by me or by the greater political zeitgeist.
  • Because it was written pointedly (even a little trollishly), I am worried that it may be easy to misread my intention in writing it. I’m trying to clean up my act :)
  • I don’t know who keeps reading it, though it seems to consistently get around thirty or more hits a week. Who are these people? They won’t tell me! I think it matters who is reading it.

I’m willing to share the password with anybody who contacts me about it.


by Sebastian Benthall at November 21, 2016 06:20 AM

November 18, 2016

MIDS student

EU-US Privacy Shield: Effects on Data companies

An overview of the framework and its impact on companies dealing with data

Introduction

The EU Commission adopted a new framework for the protection of transatlantic personal data transfers of anyone in the EU to the US. This was finalised on 21 July 2016 and, needless to say, has since been facing some legal challenges – the latest by a French privacy advocacy group (here). 500 companies have already signed up to the Privacy Shield and a larger number of applications are being processed by the US Department of Commerce. Only time will tell whether this framework finally puts all anxieties to rest or whether it will meet the same fate as the Safe Harbour Privacy Principles (a swift and potentially untimely end – here).

Privacy Shield FAQs (Ref)

  • What is the need for Privacy Shield? To provide high levels of privacy protection for personal data collected in the EU and transferred to the US for processing
  • Who will be covered? Personal data of any individual (EU citizen or not) collected in the EU for transfer to the US and beyond (as discussed below)
  • Is Privacy Shield the only framework under which one can transfer data? There are many tools that can be used here, like standard contractual clauses (SCCs), binding corporate rules, and the Privacy Shield. However, each has its own challenges. For example, SCCs are also under a legal challenge from the Irish Data Protection Authority (DPA), and the current execution method makes them painfully slow and burdensome. (details here)
  • Which companies does it apply to? Any company that is the recipient of personal data transferred from the EU must first sign up to this framework with the US Department of Commerce. If they pass the data to any other US or non-EU agent, the company will need to sign a contract with the agent to ensure that the agent upholds the same level of privacy protection requirements.
  • What are the salient features of the Privacy Shield?
    • Transparency
      • Easy access to website for list of companies signed up under Privacy Shield
      • Logical order of steps that could be taken to address any suspected violations by a company with clear guidelines of time frame within which an update/decision must be provided
    • Privacy principles upheld such as
      • Individual has the right to be informed about the type of and reason for information collected by a company, the reason for transfer of data elsewhere, etc.
      • Limitations to use of the data for a purpose different from the original purpose
      • Obligation to minimize data collection and to store it only for the time required
      • Obligation to secure data
      • Allow access to individual for correction of data
  • What are the shortfalls? Besides introducing operational hurdles, Privacy Shield fails to provide clarity on US government surveillance of personal data. Additionally, this framework does not apply to within-EU transfers of data (to be covered under the General Data Protection Regulation from 2018)

How does this impact data related companies?

Given the amount of investment (Microsoft and Facebook together invested in creating an under-sea transatlantic cable network) and potential trade under threat, companies have heaved a sigh of relief with this new framework coming into effect and have started signing up enthusiastically. However, there are quite a few issues that could have an impact going forward, such as:

  • The framework clearly states that the use of the collected data for purposes unrelated to the original is prohibited. This would affect the current practice of sharing data with other companies to show “relevant” ads on websites
  • The obligation to sign contracts with agents to ensure the same level of privacy protection as under the Privacy Shield can be quite a taxing exercise and may affect some long-time partnerships
  • A number of operationally intensive steps have been introduced, such as a variety of opt-in/opt-out possibilities, consent processes, and the ability for individuals to access their data.
  • It is difficult to justify the appropriate time-frame for storing data, especially geolocation data
  • The ongoing legal cases bring into question the longevity of the framework
  • Potential mismatch between the requirements for within EU and outside EU data transfers

by arvinsahni at November 18, 2016 05:48 AM

Ph.D. student

Protected: I study privacy now

This post is password protected. You must visit the website and enter the password to continue reading.


by Sebastian Benthall at November 18, 2016 05:20 AM

November 16, 2016

adjunct professor

On Kenneth Rogoff’s The Curse of Cash

Professor Kenneth Rogoff’s Curse of Cash convincingly argues that we pay a high price for our commitment to cash: Over a trillion dollars of it is circulating outside of US banks, enough for every American to be holding $4,200. Eighty percent of US currency is in hundred dollar bills, yet few of us actually carry large bills around (except perhaps in the Bay Area, where the ATMs do dispense 100s…). So where is all this money? Rogoff’s careful evidence gathering points to the hands of criminals and tax evaders. Perhaps more importantly, the availability of cash makes it impossible for central banks to pursue negative interest rate policies—because we can just hoard our money as cash and have an effective zero interest rate.

What to do about this? Rogoff does not argue for a cashless economy, but rather a less-cash economy. Eliminate large bills, particularly the $100 (interesting fact: $1 million in 100s weighs just 22 pounds), and then moving large amounts of value around illegally becomes much more difficult. Proxies for cash are not very good—they are illiquid, heavy, or easily detectable. And what about Bitcoin?—not as anonymous as people think. Think Rogoff’s plan is impossible? Well, India’s Prime Minister Modi just implemented a version of it, eliminating the 500 and 1,000 rupee notes.

As you might imagine, Rogoff’s proposal angers many privacy advocates and libertarians. His well-written, well-informed, and well-argued book deserves more than its 2 stars on Amazon.

My critique is a bit different from the discontents on Amazon. I think Rogoff’s proposal offers a good opportunity to think through what consumer protection in payments systems might look like in a less-cash world—this is a world I think we are entering. Yet, Rogoff’s discussion shows a real lack of engagement in the payments and especially the privacy literature. For Rogoff’s proposal to be taken seriously, we need to revamp payments to address the problems of fees, cybersecurity, consumer protection, and other pathologies that electronic payments exacerbate.

The Problem of Fees

One immediately apparent problem is that as much as cash contributes to crime and tax evasion, electronic payments contribute to waste as well, in different ways. The least obvious are the cartel-like fees imposed by electronic payments providers. All consumers—including cash users—subsidize the cost of electronic payments, and the price tag is massive. In the case of credit cards, fees can be as high as 3.5% of the transaction. I know from practice that startups’ business models are sometimes shaped around the problem of such fees. Fees may even be responsible for the absence of a viable micropayment system for online content.

Fees represent a hidden tax that a less-cash society will pay more of, unless users are transitioned to payment alternatives that draw directly from their bank accounts. Rogoff seems to implicitly assume that consumers will choose that alternative, but it is not clear to me that consumers perceive the fee difference between standard credit card accounts and use of debit or ACH-linked systems. For many consumers, especially more affluent ones, the obvious choice is to use a credit card, pay the balance monthly, and enjoy the perks. Rogoff’s policy then means more free perks for the rich that are subsidized by poorer consumers.

Taking Cybercrime Seriously

Here’s a more obvious crime problem—while Rogoff is quick to observe that cash means that cashiers will skim, there is less attention paid to the kinds of fraud that electronic payments enable. Electronic payment creates new planes of attack for different actors who are not in proximity to the victims. A cashier will skim a few dollars a night, but can be fired. Cybercriminals will bust out for much larger sums from safe havens elsewhere in the world.

The Problem of Impulsive Spending and Improvidence

Consumers also spend more when they use electronic payments. And so a less cash society means that you’ll have…less money! Cash itself is an abstract representation of value, but digital cash is both an abstraction and immaterial. One doesn’t feel the “sting” of parting with electronic cash. In fact, there is even a company making a device to simulate parting with cash to deter frivolous spending.

The Problem of Cyberattack

Rogoff imagines threats to electronic payment as power outages and the like. That’s just the beginning. There are cybercriminals who are economically motivated, but then there are those who just want to create instability or make a political statement. We should expect attacks on payments to affect confidentiality, integrity, and availability of services, and these attacks will come from economically motivated actors, nation states, and terrorists simply wanting to put a thumb in the eye of commerce. The worst attacks will not be power-outage-like events, but rather attacks on integrity that undermine trust in the payment system.

Moving From Regulation Z to E

The consumer protection landscape tilts in the move from credit cards to debit and ACH. Credit cards are wonderful because the defaults protect consumers from fraud almost absolutely. ACH and debit payments place far more risk of loss onto the consumer, theoretically, more risk than even cash presents. For instance, if a business swindles a cash-paying customer, that customer only loses the cash actually transferred. In a debit transaction, the risk of loss is theoretically unlimited unless it is noticed by the consumer within 60 days. Many scammers operate today and make millions by effectuating small, unnoticed charges against consumers’ electronic accounts.

The Illiberal State; Strong Arm Robbery

Much of Rogoff’s argument depends on other assumptions, ones that we might not accept so willingly anymore. We currently live in a society committed to small-l liberal values. We have generally honest government officials. What if that were to change? In societies plagued with corruption and the need to bribe officials, mobile payments become a way to extract more money from the individual than she would ordinarily carry. Such systems make it impossible to hide how much money one has from officials or in a strong-arm robbery.

Paying Fast and Slow

Time matters, and Rogoff is wrong about the relative speed of payment in a cash versus electronic transaction. Rogoff cites a 2008 study showing that debit and cash transactions take the same amount of time. This is a central issue for retailers, and large ones such as Wal-Mart know to the second what is holding up a line, because these seconds literally add up to millions of dollars in lost sales. Retailers mindful of time kept credit card transactions quick, but with the advent of chip transactions, cash clearly is the quickest method of payment. It is quite aggravating to wait for so many people charging small purchases nowadays.

Mobile might change these dynamics–but not anytime soon. Bluetooth basically does not work. To use mobile payments safely one should keep their phone locked. So when you add up the time of 1) unlocking the phone, 2) finding the payment app, 3) futzing with it, and 4) waiting for the network to approve the transaction, cash is going to be quicker. These transaction costs could be lowered, but the winner is going to be the platform-provided approaches (Apple or Android) and not competitive apps.

Privacy 101

Privacy is a final area where Rogoff does not identify the literature or the issues involved. And this is too bad because electronic payments need not eliminate privacy. In fact, our current credit card system segments information such that it gives consumers some privacy: Merchants have problems identifying consumers because names are not unique and because some credit card networks prohibit retailers from using cardholder data for marketing. The credit card network is a kind of ISP and knows almost nothing about the transaction details. And the issuing and acquiring banks know how much was spent and where, but not the SKU-level data of purchases.

The problem is that almost all new electronic payments systems are designed to collect as much data as possible and to spread it around to everyone involved. This fact is hidden from the consumer, who might already falsely assume that there’s no privacy in credit transactions.

The privacy differential has real consequences for privacy that Rogoff never really contemplates or addresses. It ranges from customer profiling to the problem that you can never just buy a pack of gum without telling the retailer who you are. You indeed may have “nothing to hide” about your gum, but consider this—once the retailer identifies you, you have an “established business relationship” with that retailer. The retailer then has the legal and technical ability to send you spam, telemarketing calls, and even junk fax messages! This is why Jan Whittington and I characterized personal information transfers as “continuous” transactions—exchanges where payment doesn’t sever the link between the parties. Such continuous transactions have many more costs than the consumer can perceive.

Conclusion

Professor Rogoff’s book describes in detail how cash leads to enabling more crime, paying more taxes, and how it hobbles our government from implementing more aggressive monetary policy. But the problem is that the proposed remedy suffers from a series of pathologies that will increase costs to consumers in other ways, perhaps dramatically. So yes, there is a curse of cash, but there are dangerous and wasteful curses associated with electronic payment, particularly credit.

The critiques I write here are well established in the legal literature. Merely using the Google would have turned up the various problems explained here. And this makes me want to raise another point that is more general about academic economists. I have written elsewhere that economists’ disciplinarity is a serious problem, leading to scholarship out of touch with the realities of the very businesses that economists claim to study. I find surprisingly naive works by economists in privacy who seem immune to the idea that smart people exist outside the discipline and may have contemplated the same thoughts (often decades earlier). Making matters worse, the group agreement to observe disciplinary borders creates a kind of Dunning–Kruger effect, because peer review also misses relevant literature outside the discipline. Until academic economists look beyond the borders of their discipline, their work will always be a bit irrelevant, a bit out of step. And the industry will not correct these misperceptions because works such as these benefit banks’ policy goals.

by web at November 16, 2016 07:59 PM

November 10, 2016

Ph.D. alumna

Put an End to Reporting on Election Polls

We now know that the US election polls were wrong. Just like they were in Brexit. Over the last few months, I’ve told numerous reporters and people in the media industry that they should be wary of the polling data they’re seeing, but I was generally ignored and dismissed. I wasn’t alone — two computer scientists whom I deeply respect — Jenn Wortman Vaughan and Hanna Wallach — were trying to get an op-ed on prediction and uncertainty into major newspapers, but were repeatedly told that the outcome was obvious. It was not. And election polls will be increasingly problematic if we continue to approach them the way we currently do.

It’s now time for the media to put a moratorium on reporting on election polls and fancy visualizations of statistical data. And for data scientists and pollsters to stop feeding the media hype cycle with statistics that they know have flaws or will be misinterpreted as fact.

Why Political Polling Will Never Be Right Again

Polling and survey research has a beautiful history, one that most people who obsess over the numbers don’t know. In The Averaged American, Sarah Igo documents three survey projects that unfolded in the mid-20th century that set the stage for contemporary polling: the Middletown studies, Gallup, and Kinsey. As a researcher, it’s mindblowing to see just how naive folks were about statistics and data collection in the early development of this field, how much the field has learned and developed. But there’s another striking message in this book: Americans were willing to contribute to these kinds of studies at unparalleled levels compared to their peers worldwide because they saw themselves as contributing to the making of public life. They were willing to reveal their thoughts, beliefs, and ideas because they saw doing so as productive for them individually and collectively.

As folks unpack the inaccuracies of contemporary polling data, they’re going to focus on technical limitations. Some of these are real. Cell phones have changed polling — many people don’t pick up unknown numbers. The FCC’s ruling that limited robocalls to protect consumers in late 2015 meant that this year’s sampling process got skewed, that polling became more expensive, and that pollsters took shortcuts. We’ve heard about how efforts to extrapolate representativeness from small samples messes with the data — such as the NYTimes report on a single person distorting national polling averages.

But there’s a more insidious problem with the polling data that is often unacknowledged. Everyone and their mother wants to collect data from the public. And the public is tired of being asked, which they perceive as being nagged. In swing states, registered voters were overwhelmed with calls from real pollsters, fake pollsters, political campaigns, fundraising groups, special interest groups, and their neighbors. We know that people often lie to pollsters (social desirability bias), but when people don’t trust information collection processes, normal respondent bias becomes downright deceptive. You cannot collect reasonable data when the public doesn’t believe in the data collection project. And political pollsters have pretty much killed off their ability to do reasonable polling because they’ve undermined trust. It’s like what happens when you plant the same crop over and over again until the land can no longer sustain that crop.

Election polling is dead, and we need to accept that.

Why Reporting on Election Polling Is Dangerous

To most people, even those who know better, statistics look like facts. And polling results look like truth serum, even when pollsters responsibly report margin of error information. It’s just so reassuring or motivating to see stark numbers because you feel like you can do something about those numbers, and then, when the numbers change, you feel good. This plays into basic human psychology. And this is why we use numbers as an incentive in both education and the workplace.

Political campaigns use numbers to drive actions on their teams. They push people to go to particular geographies, they use numbers to galvanize supporters. And this is important, which is why campaigns invest in pollsters and polling processes.

Unfortunately, this psychology and logic gets messed up when you’re talking about reporting on election polls to the public. When the numbers look like your team is winning, you relax and stop fretting, often into complacency. When the numbers look like your team is losing, you feel more motivated to take steps and do something. This is part of why the media likes the horse race — reporting on the numbers pushes different groups to take action. They like the attention they get as the mood swings across the country in a hotly contested race.

But there is number burnout and exhaustion. As people feel pushed and swayed, as the horse race goes on and on, they get more and more disenchanted. Rather than galvanizing people to act, reporting on political polling over a long period of time with flashy visuals and constantly shifting needles prompts people to disengage from the process. In short, when it comes to the election, this prompts people to not show up to vote. Or to be so disgusted that voting practices become emotionally negative actions rather than productively informed ones.

This is a terrible outcome. The media’s responsibility is to inform the public and contribute to a productive democratic process. By covering political polls as though they are facts in an obsessive way, they are not only being statistically irresponsible, but they are also being psychologically irresponsible.

The news media are trying to create an addictive product through their news coverage, and, in doing so, they are pushing people into a state of overdose.

Yesterday, I wrote about how the media is being gamed and not taking moral responsibility for its participation in the spectacle of this year’s election. One of its major flaws is how it’s covering data and engaging in polling coverage. This is, in many ways, the easiest part of the process to fix. So I call on the news media to put a moratorium on political polling coverage, to radically reduce the frequency with which they reference polls during an election season, and to be super critical of the data that they receive. If they want to be a check to power, they need to have the structures in place to be a check to math.

(This was first posted on Points.)

by zephoria at November 10, 2016 07:53 PM

November 09, 2016

Center for Technology, Society & Policy

Un-Pitch Day success & project opportunities

Our Social Impact Un-Pitch Day event back in October, held in conjunction with the Information Management Student Association at the School of Information, was a great success — organizational attendees received help scoping potential technology projects, while scores of Berkeley students offered advice on project design and learned more about opportunities to help the attending organizations as well as about CTSP funding.

A key outcome of the event was a list of potential projects developed by 10 organizations, from social service non-profits such as Berkeley Food Pantry and Kiva.org, to technology advocacy groups such as the ACLU of Northern California and the Center for Democracy and Technology (just to name a few!).

We are providing a list of the projects (with contact information) with the goal both of generating interest in these groups’ work and of providing potential project ideas and matches for CTSP applicants. Please note that we cannot guarantee funding for these projects should you choose to “adopt” a project and work with one of these organizations. Even if a project match doesn’t result in a CTSP fellowship, we hope we can match technologists with these organizations to help promote tech policy for the public interest regardless.

Please check out the list and consider contacting one of these organizations ASAP if their project fits your interests or skill sets! As a reminder, the deadline to apply to CTSP for this funding cycle is November 28, 2016.

by Jennifer King at November 09, 2016 11:19 PM

Ph.D. alumna

I blame the media. Reality check time.

For months I have been concerned about how what I was seeing on the ground and in various networks was not at all aligned with what pundits were saying. I knew the polling infrastructure had broken, but whenever I told people about the problems with the sampling structure, they looked at me like an alien and told me to stop worrying. Over the last week, I started to accept that I was wrong. I wasn’t.

And I blame the media.

The media is supposed to be a check to power, but, for years now, it has basked in becoming power in its own right. What worries me right now is that, as it continues to report out the spectacle, it has no structure for self-reflection, for understanding its weaknesses, its potential for manipulation.

I believe in data, but data itself has become spectacle. I cannot believe that it has become acceptable for media entities to throw around polling data without any critique of the limits of that data, to produce fancy visualizations which suggest that numbers are magical information. Every pollster got it wrong. And there’s a reason. They weren’t paying attention to the various structural forces that made their sample flawed, the various reasons why a disgusted nation wasn’t going to contribute useful information to inform a media spectacle. This abuse of data has to stop. We need data to be responsible, not entertainment.

This election has been a spectacle because the media has enjoyed making it as such. And in doing so, they showcased just how easily they could be gamed. I refer to the sector as a whole because individual journalists and editors are operating within a structural frame, unmotivated to change the status quo even as they see similar structural problems to the ones I do. They feel as though they “have” to tell a story because others are doing so, because their readers can’t resist reading. They live in the world pressured by clicks and other elements of the attention economy. They need attention in order to survive financially. And they need a spectacle, a close race.

We all know that story. It’s not new. What is new is that they got played.

Over the last year, I’ve watched as a wide variety of decentralized pro-Trump actors first focused on getting the media to play into his candidacy as spectacle, feeding their desire for a show. In the last four months, I watched those same networks focus on depressing turnout, using the media to trigger the populace to feel so disgusted and frustrated as to disengage. It really wasn’t hard because the media was so easy to mess with. And they were more than happy to spend a ridiculous amount of digital ink circling round and round into a frenzy.

Around the world, people have been looking at us in a state of confusion and shock, unsure how we turned our democracy into a new media spectacle. What hath 24/7 news, reality TV, and social media wrought? They were right to ask. We were irresponsible to ignore.

In the tech sector, we imagined that decentralized networks would bring people together for a healthier democracy. We hung onto this belief even as we saw that this wasn’t playing out. We built the structures for hate to flow along the same pathways as knowledge, but we kept hoping that this wasn’t really what was happening. We aided and abetted the media’s suicide.

The red pill is here. And it ain’t pretty.

We live in a world shaped by fear and hype, not because it has to be that way, but because this is the obvious paradigm that can fuel the capitalist information architectures we have produced.

Many critics think that the answer is to tear down capitalism, make communal information systems, or get rid of social media. I disagree. But I do think that we need to actively work to understand complexity, respectfully engage people where they’re at, and build the infrastructure to enable people to hear and appreciate different perspectives. This is what it means to be truly informed.

There are many reasons why we’ve fragmented as a country. From the privatization of the military (which undermined the development of diverse social networks) to our information architectures, we live in a moment where people do not know how to hear or understand one another. And our obsession with quantitative data means that we think we understand when we hear numbers in polls, which we use to judge people whose views are different than our own. This is not productive.

Most people are not apathetic, but they are disgusted and exhausted. We have unprecedented levels of anxiety and fear in our country. The feelings of insecurity and inequality cannot be written off by economists who want to say that the world is better today than it ever was. It doesn’t feel that way. And it doesn’t feel that way because, all around us, the story is one of disenfranchisement, difference, and uncertainty.

All of us who work in the production and dissemination of information need to engage in a serious reality check.

The media industry needs to take responsibility for its role in producing spectacle for selfish purposes. There is a reason that the public doesn’t trust institutions in this country. And what the media has chosen to do is far from producing information. It has chosen to produce anxiety in the hopes that we will obsessively come back for more. That is unhealthy. And it’s making us an unhealthy country.

Spectacle has a cost. It always has. And we are about to see what that cost will be.

(This was first posted at Points.)

by zephoria at November 09, 2016 04:47 PM

MIMS 2016

All of these were created by actual people to emphasize bad form design.

Here are their actual sources if you are interested in appropriate attribution:

by nikhil at November 09, 2016 12:20 AM

November 02, 2016

Center for Technology, Society & Policy

Now accepting 2016-2017 fellow applications!

Our website is now updated with both the fellowship application and the application upload page. As a reminder, we are accepting applications for the 2016-2017 cycle through Monday, November 28 at 11:59am PT.

by Jennifer King at November 02, 2016 10:47 PM

MIMS 2011

How Wikipedia’s silent coup ousted our traditional sources of knowledge

[Reposted from The Conversation, 15 January 2016]

As Wikipedia turns 15, volunteer editors worldwide will be celebrating with themed cakes and edit-a-thons aimed at filling holes in poorly covered topics. It’s remarkable that an encyclopedia project that allows anyone to edit has got this far, especially as the website is kept afloat through donations and the efforts of thousands of volunteers. But Wikipedia hasn’t just become an important and heavily relied-upon source of facts: it has become an authority on those facts.

Through six years of studying Wikipedia I’ve learned that we are witnessing a largely silent coup, in which traditional sources of authority have been usurped. Rather than discovering what the capital of Israel is by consulting paper copies of Encyclopedia Britannica or geographical reference books, we source our information online. Instead of learning about thermonuclear warfare from university professors, we can now watch a YouTube video about it.

The ability to publish online cheaply has led to an explosion in the number and range of people putting across facts and opinions, far beyond what was traditionally delivered by largely academic publishers. But rather than this leading to an increase in the diversity of knowledge and the democratisation of expertise, the result has actually been greater consolidation in the number of knowledge sources considered authoritative. Wikipedia, particularly through its alliance with Google and other search engines, now plays a central role.

From outsider to authority

Once ridiculed for allowing anyone to edit it, Wikipedia is now the seventh most visited website in the world, and the most popular reference source among them. Wikipedia articles feature at the top of the majority of searches conducted on Google, Bing, and other search engines. In 2012, Google announced the Knowledge Graph, which moved Google from offering possible answers to a user’s question in its search results to providing an authoritative answer in the form of a fact box, with content drawn from Wikipedia articles about people, places and things.

Perhaps the clearest indication of Wikipedia’s new authority is who uses it and regards its content as credible. Whereas governments, corporations and celebrities couldn’t have cared less whether they had a Wikipedia page in 2001, tales of politicians, celebrities, governments or corporations (or their PR firms) ham-fistedly trying to edit Wikipedia articles about themselves to remove negative statements or criticism now appear regularly in the news.

Happy 15th birthday Wikipedia! Beko, CC BY-SA 


Wisdom of crowds

How exactly did Wikipedia become so authoritative? Two complementary explanations stand out from many. First is the rise of the idea that crowds are wise, and of the logic that open systems produce better-quality results than closed ones. Second is the decline in the authority accorded to scientific knowledge, and the sense that scientific authorities are no longer always seen as objective or even reliable. As the authority of named experts housed in institutions has waned, Wikipedia, a site that the majority of users believe is contributed to by unaffiliated and therefore unbiased individuals, has risen triumphant.

The realignment of expertise and authority is not new; changes to whom or what society deems credible sources of information have been a feature of the modern age. Authors in the field of the sociology of knowledge have written for decades about the struggles of particular fields of knowledge to gain credibility. Some have been more successful than others.

What makes today’s realignment different is the ways in which sources like Wikipedia are governed and controlled. Instead of the known, visible heads of academic and scientific institutions, sources like Wikipedia are largely controlled by nebulous, often anonymous individuals and collectives. Instead of transparent policies and methods, Wikipedia’s policies are so complex and numerous that they have become obscure, especially to newcomers. Instead of a few visible gatekeepers, the internet’s architecture means that those in control are often widely distributed and difficult to call to account.

Wikipedia is not neutral. Its platform has enabled certain languages, topics and regions to dominate others. Despite the difficulty of holding our new authorities of knowledge to account, it’s a challenge that’s critical to the future of an equitable and truly global internet. There are new powers in town, so alongside the birthday cake and celebrations there should be some reflection on who will watch Wikipedia and where we go from here.


by Heather Ford at November 02, 2016 10:15 AM

November 01, 2016

MIDS student

First blog post – Terms and Conditions apply !!

If you are like me, you have never read a single terms and conditions document (lovingly called T&Cs), whether for a website, an app, or even a contract. I am guilty of signing job contracts after just checking the name, the $$$, and the location, assuming that it’s all in good faith.

I have recently learnt that the concept of good faith is heavily skewed against people like you and me.

As part of a class assignment, we were asked to read the T&C doc of any app/website of our choice. I approached this with a sinking feeling and the expectation that the next hour would be wasted fighting sleep while I tried to make sense of this soporific exercise. I selected a data-science-related website that I had been following for a while. Needless to say, I had signed up without reading the T&Cs.

The first sentence of the T&C had me hooked. It stated in CAPS: “IF YOU DO NOT AGREE TO ALL OF THE FOLLOWING, YOU MAY NOT USE OR ACCESS THE SERVICES IN ANY MANNER.” Somehow that felt like someone had just threatened me.

Surprisingly, the document had very simple and easy-to-understand language, i.e. I did not need to hire a lawyer to read a menagerie of comma- and semicolon-infused sentences. Bravo on that!!

The simplicity also meant that, for the first time, I realised the imbalance of power between users and service providers. My notion of buyers having the upper hand was turned on its head, although “buyers” is not technically the correct term, since the website in question provided a free service.

As you read through the document, it is quite evident that they have access to a lot more than you can imagine. God forbid you are a competition winner: you are essentially giving them a royalty-free, global, timeless license to use your material. To be fair to them, they do state that ownership is still yours, though I am not sure what one does with it when someone else can use and distribute it freely.

The scary or comforting thing (depending on which camp you come from) is that this is not a lone case. My classmates analysed many apps in fields like social media, fitness, transport, and finance, and found similar issues. For example, you realise that if you have accounts with certain websites, your each and every movement on that website, its sister websites, and any website that may be using its services is being tracked, because of course they want to show you better ads!! But what if I don’t care about ads? Well, sometimes you can escape by upgrading to premium (with no guarantee of not being tracked even then) or, as in most cases, make your peace with the fact that privacy may not be so private after all.

Besides the high annoyance factor, ads can also be misleading at times. One very popular social media site states the following in its T&C: “You give us permission to use your name, profile picture, content, and information in connection with commercial, sponsored, or related content (such as a brand you like) served or enhanced by us. ….. You understand that we may not always identify paid services and communications as such.”

Another interesting trend that was pointed out was the exponential increase in the length of T&C documents with each passing year of existence – most probably due to lawsuits and/or changes in privacy, corporate, or other laws.

At least after this exercise, I try to skim through the T&Cs of most new websites that I sign up for. However, each time I end up with a queasy feeling that I should not click the “Agree” button, but the attraction of that new functionality, especially when it’s free, is too difficult to overcome.

(CONFESSION: I have not yet read the T&C for WordPress)


by arvinsahni at November 01, 2016 11:32 PM