School of Information Blogs

May 21, 2018

Ph.D. student

General intelligence, social privilege, and causal inference from factor analysis

I came upon this excellent essay by Cosma Shalizi about how factor analysis has been spuriously used to support the scientific theory of General Intelligence (i.e., IQ). Shalizi, if you don’t know, is one of the best statisticians around. He writes really well and isn’t afraid to point out major blunders in things. He’s one of my favorite academics, and I don’t think I’m alone in this assessment.

First, a motive: Shalizi writes this essay because he thinks the scientific theory of General Intelligence, or a g factor that is some real property of the mind, is wrong. This theory is famous because (a) a lot of people DO believe in IQ as a real feature of the mind, and (b) a significant percentage of these people believe that IQ is hereditary and correlated with race, and (c) the ideas in (b) are used to justify pernicious and unjust social policy. Shalizi, being a principled statistician, appears to take scientific objection to (a) independently of his objection to (c), and argues persuasively that we can reject (a). How?

Shalizi’s point is that the general intelligence factor g is a latent variable that was supposedly discovered using a factor analysis of several different intelligence tests that were supposed to be independent of each other. You can take the data from these data sets and do a dimensionality reduction (that’s what factor analysis is) and get something that looks like a single factor, just as you can take a set of cars and do a dimensionality reduction and get something that looks like a single factor, “size”. The problem is that “intelligence”, just like “size”, can also be a combination of many other factors that are only indirectly associated with each other (height, length, mass, mass of specific components independent of each other, etc.). Once you have many different independent factors combining into one single reduced “dimension” of analysis, you no longer have a coherent causal story of how your general latent variable caused the phenomenon. You have, effectively, correlation without demonstrated causation and, moreover, the correlation is a construct of your data analysis method, and so isn’t really even telling you what correlations normally tell you.
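
To make the point concrete, here is a minimal sketch (my own illustration, not Shalizi’s code) of how a one-factor model can “find” a general factor in test scores generated from hundreds of independent abilities; every number in it is made up:

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    n_people, n_abilities, n_tests = 5000, 500, 10

    # Each person has 500 independent "abilities"; no single g exists here.
    abilities = rng.normal(size=(n_people, n_abilities))

    # Each test taps a random ~20% of the abilities with positive weights,
    # plus noise, like real tasks that draw on many skills at once.
    weights = rng.uniform(0, 1, size=(n_abilities, n_tests))
    weights *= rng.random((n_abilities, n_tests)) < 0.2
    scores = abilities @ weights + rng.normal(scale=2.0, size=(n_people, n_tests))

    # A one-factor model happily extracts a dominant factor anyway.
    fa = FactorAnalysis(n_components=1).fit(scores)
    print(np.round(fa.components_[0], 2))  # loadings all share one sign and are sizable

The dominant factor here is an artifact of the positive correlations the setup induces, not evidence of a single underlying cause, which is exactly the point.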

To put it another way: the fact that some people seem to be generally smarter than other people can be due to thousands of independent factors that happen to combine when people apply themselves to different kinds of tasks. If some people did NOT seem generally smarter than others, that would allow you to reject the hypothesis that there was general intelligence. But the mere presence of the aggregate phenomenon does not prove the existence of a real latent variable. In fact, Shalizi goes on to say, when you do the right kinds of tests to see if there really is a latent factor of ‘general intelligence’, you find that there isn’t any. And so it’s just the persistent and possibly motivated interpretation of the observational data that allows the stubborn myth of general intelligence to continue.

Are you following so far? If you are, it’s likely because you were already skeptical of IQ and its racial correlates to begin with. Now I’m going to switch it up though…

It is fairly common for educated people in the United States (for example) to talk about the “privilege” of social groups. White privilege, male privilege–don’t tell me you haven’t at least heard of this stuff before; it is literally everywhere in the center-left news. Privilege here is considered to be a general factor that adheres in certain social groups. It is reinforced by all manner of social conditioning, especially through implicit bias in individual decision-making. This bias is so powerful that it extends not just to cases of direct discrimination but also to cases where discrimination happens in a mediated way, for example through technical design. The evidence for these kinds of social privileging effects is obvious: we see inequality everywhere, and we can see who is more powerful and benefited by the status quo and who isn’t.

You see where this is going now. I have the momentum. I can’t stop. Here it goes: Maybe this whole story about social privilege is as spuriously supported as the story about general intelligence? What if both narratives were over-interpretations of data that serve a political purpose, but which are not in fact based on sound causal inference techniques?

How could this be? Well, we might gather a lot of data about people: wealth, status, neighborhood, lifespan, etc. And then we could run a dimensionality reduction/factor analysis and get a significant factor that we could name “privilege” or “power”. Potentially that’s a single, real, latent variable. But also potentially it’s hundreds of independent factors spuriously combined into one. It would probably, if I had to bet on it, wind up looking a lot like the factor for “general intelligence”, which plays into the whole controversy about whether and how privilege and intelligence get confused. You must have heard the debates about, say, representation in the technical (or other high-status, high-paying) work force? One side says the smart people get hired; the other side says it’s the privileged (white male) people that get hired. Some jerk suggests that maybe the white males are smarter, and he gets fired. It’s a mess.

I’m offering you a pill right now. It’s not the red pill. It’s not the blue pill. It’s some other colored pill. Green?

There is no such thing as either general intelligence or group-based social privilege. Each of these is the result of sloppy data compression over thousands of factors with a loose and subtle correlational structure. The reason why patterns of social behavior that we see are so robust against interventions is that each intervention can work against only one or two of these thousands of factors at a time. Discovering the real causal structure here is hard partly because the effect sizes are very small. Anybody with a simple explanation, especially a politically convenient explanation, is lying to you but also probably lying to themselves. We live in a complex world that resists our understanding and our actions to change it, though it can be better understood and changed through sound statistics. Most people aren’t bothering to do this, and that’s why the world is so dumb right now.

by Sebastian Benthall at May 21, 2018 12:05 AM

May 20, 2018

Ph.D. student

Goodbye, TheListserve!

Today I got an email I never thought I’d get: a message from the creators of TheListserve saying they were closing down the service after over 6 years.

TheListserve was a fantastic idea: it was a mailing list that allowed one person, randomly selected from the subscribers each day, to email everyone else.

It was an experiment in creating a different kind of conversational space on-line. And it worked great! Tens of thousands of subscribers, really interesting content–a space unlike most others in social media. You really did get a daily email with what some random person thought was the most interesting thing they had to say.

I was inspired enough by TheListserve to write a Twitter bot based on similar principles, TheTweetserve. Maybe the Twitter bot was also inspired by Habermas. It was not nearly as successful or interesting as TheListserve, for reasons that you could deduce if you thought about it.

Six years ago, “The Internet” was a very different imaginary. There was this idea that a lightweight intervention could capture some of the magic of serendipity that scale and connection had to offer, and that this was going to be really, really big.

It was, I guess, but then the charm wore off.

What’s happened now, I think, is that we’ve been so exposed to connection and scale that the novelty has worn off. We now find ourselves exposed on-line mainly to the imposing weight of statistical aggregates and regressions to the mean. After years of messages to TheListserve, it started, somehow, to seem formulaic. You would get honest, encouraging advice, or a self-promotion. It became, after thousands of emails, a genre in itself.

I wonder if people who are younger and less jaded than I am are still finding and creating cool corners of the Internet. What I hear about more and more now are the ugly parts; they make the news. The Internet used to be full of creative chaos. Now it is so heavily instrumented and commercialized I get the sense that the next generation will see it much like I saw radio or television when I was growing up: as a medium dominated by companies, large and small. Something you had to work hard to break into as a professional choice or otherwise not at all.

by Sebastian Benthall at May 20, 2018 02:42 AM

May 15, 2018

Ph.D. student

“Context, Causality, and Information Flow: Implications for Privacy Engineering, Security, and Data Economics” <– My dissertation

In the last two weeks, I’ve completed, presented, and filed my dissertation, and commenced as a doctor of philosophy. In a word, I’ve PhinisheD!

The title of my dissertation is attention-grabbing, inviting, provocative, and impressive:

“Context, Causality, and Information Flow: Implications for Privacy Engineering, Security, and Data Economics”

If you’re reading this, you are probably wondering, “How can I drop everything and start reading that hot dissertation right now?”

Look no further: here is a link to the PDF.

You can also check out this slide deck from my “defense”. It covers the highlights.

I’ll be blogging about this material as I break it out into more digestible forms over time. For now, I’m obviously honored by any interest anybody takes in this work and happy to answer questions about it.

by Sebastian Benthall at May 15, 2018 05:24 PM

April 30, 2018

Center for Technology, Society & Policy

Data for Good Competition — Showcase and Judging

The four teams in CTSP’s Facebook-sponsored Data for Good Competition will be presenting today at CITRIS and CTSP’s Tech & Data for Good Showcase Day. The event will be streamed through Facebook Live on the CTSP Facebook page. After deliberations from the judges, the top team will receive $5000 and the runner-up will receive $2000.


Data for Good Judges:

Joy Bonaguro, Chief Data Officer, City and County of San Francisco

Joy Bonaguro is the first Chief Data Officer for the City and County of San Francisco, where she manages the City’s open data program. Joy has spent more than a decade working at the nexus of public policy, data, and technology. Joy earned her Masters from UC Berkeley’s Goldman School of Public Policy, where she focused on IT policy.

Lisa García Bedolla, Professor, UC Berkeley Graduate School of Education and Director of UC Berkeley’s Institute of Governmental Studies

Professor Lisa García Bedolla is a Professor in the Graduate School of Education and Director of the Institute of Governmental Studies. Professor García Bedolla uses the tools of social science to reveal the causes of political and economic inequalities in the United States. Her current projects include the development of a multi-dimensional data system, called Data for Social Good, that can be used to track and improve organizing efforts on the ground to empower low-income communities of color. Professor García Bedolla earned her PhD in political science from Yale University and her BA in Latin American Studies and Comparative Literature from UC Berkeley.

Chaya Nayak, Research Manager, Public Policy, Data for Good at Facebook

Chaya Nayak is a Public Policy Research Manager at Facebook, where she leads Facebook’s Data for Good Initiative around how to use data to generate positive social impact and address policy issues. Chaya received a Masters of Public Policy from the Goldman School of Public Policy at UC Berkeley, where she focused on the intersection between Public Policy, Technology, and Utilizing Data for Social Impact.

Michael Valle, Manager, Technology Policy and Planning for California’s Office of Statewide Health Planning and Development

Michael D. Valle is Manager of Technology Policy and Planning at the California Office of Statewide Health Planning and Development, where he oversees the digital product portfolio. Michael has worked since 2009 in various roles within the California Health and Human Services Agency. In 2014 he helped launch the first statewide health open data portal in California. Michael also serves as Adjunct Professor of Political Science at American River College.

Judging:

As detailed in the call for proposals, the teams will be judged on the quality of their application of data science skills, how well the proposal or project addresses a social good problem, and how it advances the use of public open data, all while demonstrating how potential pitfalls are mitigated.

by Daniel Griffin at April 30, 2018 07:06 PM

April 16, 2018

Ph.D. student

Keeping computation open to interpretation: Ethnographers, step right in, please

This is a post that first appeared on the ETHOSLab Blog, written by myself, Bastian Jørgensen (PhD fellow at Technologies in Practice, ITU), Michael Hockenhull (PhD fellow at Technologies in Practice, ITU), Mace Ojala (Research Assistant at Technologies in Practice, ITU).

Introduction: When is data science?

We recently held a workshop at ETHOS Lab and the Data as Relation project at ITU Copenhagen, as part of Stuart Geiger’s seminar talk on “Computational Ethnography and the Ethnography of Computation: The Case for Context” on 26th of March 2018. Tapping into his valuable experience, and position as a staff ethnographer at Berkeley Institute for Data Science, we wanted to think together about the role that computational methods could play in ethnographic and interpretivist research. Over the past decade, computational methods have exploded in popularity across academia, including in the humanities and interpretive social sciences. Stuart’s talk made an argument for a broad, collaborative, and pluralistic approach to the intersection of computation and ethnography, arguing that ethnography has many roles to play in what is often called “data science.”

Based on Stuart’s talk the previous day, we began the workshop with three different distinctions about how ethnographers can work with computation and computational data: First, the “ethnography of computation” is using traditional qualitative methods to study the social, organizational, and epistemic life of computation in a particular context: how do people build, produce, work with, and relate to systems of computation in their everyday life and work? Ethnographers have been doing such ethnographies of computation for some time, and many frameworks — from actor-network theory (Callon 1986; Law 1992) to “technography” (Jansen and Vellema 2011; Bucher 2012) — have been useful to think about how to put computation at the center of these research projects.

Second, “computational ethnography” involves extending the traditional qualitative toolkit of methods to include the computational analysis of data from a fieldsite, particularly when working with trace or archival data that ethnographers have not generated themselves. Computational ethnography is not replacing methods like interviews and participant-observation with such methods, but supplementing them. Frameworks like “trace ethnography” (Geiger and Ribes 2010) and “computational grounded theory” (Nelson 2017) have been useful ways of thinking about how to integrate these new methods alongside traditional qualitative methods, while upholding the particular epistemological commitments that make ethnography a rich, holistic, situated, iterative, and inductive method. Stuart walked through a few Jupyter notebooks from a recent paper (Geiger and Halfaker, 2017) in which they replicated and extended a previously published study about bots in Wikipedia. In this project, they found computational methods quite useful in identifying cases for qualitative inquiry, and they also used ethnographic methods to inform a set of computational analyses in ways that were more specific to Wikipedians’ local understandings of conflict and cooperation than previous research.

Finally, the “computation of ethnography” (thanks to Mace for this phrasing) involves applying computational methods to the qualitative data that ethnographers generate themselves, like interview transcripts or typed fieldnotes. Qualitative researchers have long used software tools like NVivo, Atlas.TI, or MaxQDA to assist in the storage and analysis of data, but what are the possibilities and pitfalls of storing and analyzing our qualitative data in various computational ways? Even ethnographers who use more standard word processing tools like Google Docs or Scrivener for fieldnotes and interviews can use computational methods to organize, index, tag, annotate, aggregate and analyze their data. From topic modeling of text data to semantic tagging of concepts to network analyses of people and objects mentioned, there are many possibilities. As multi-sited and collaborative ethnography are also growing, what tools let us collect, store, and analyze data from multiple ethnographers around the world? Finally, how should ethnographers deal with the documents and software code that circulate in their fieldsites, which often need to be linked to their interviews, fieldnotes, memos, and manuscripts?
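
As a toy illustration of this third mode (a sketch for illustration only, not something built at the workshop), one could topic-model a folder of transcripts and skim the resulting term clusters for candidate themes; the file path and parameters below are hypothetical:

    import glob
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    # Hypothetical folder of typed interview transcripts or fieldnotes.
    docs = [open(path, encoding="utf-8").read()
            for path in glob.glob("transcripts/*.txt")]

    vectorizer = CountVectorizer(stop_words="english", min_df=2)
    counts = vectorizer.fit_transform(docs)

    lda = LatentDirichletAllocation(n_components=8, random_state=0).fit(counts)
    terms = vectorizer.get_feature_names_out()
    for i, topic in enumerate(lda.components_):
        top = [terms[j] for j in topic.argsort()[-8:][::-1]]
        print(f"candidate theme {i}: {', '.join(top)}")

The output is, at best, a prompt for the ethnographer’s own reading of the material, not a replacement for it.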

These are not hard-and-fast distinctions, but instead should be seen as sensitizing concepts that draw our attention to different aspects of the computation / ethnography intersection. In many cases, we spoke about doing all three (or wanting to do all three) in our own projects. Like all definitions, they blur as we look closer at them, but this does not mean we should abandon the distinctions. For example, computation of ethnography can also strongly overlap with computational ethnography, particularly when thinking about how to analyze unstructured qualitative data, as in Nelson’s computational grounded theory. Yet it was productive to have different terms to refer to particular scopings: our discussion of using topic modeling of interview transcripts to help identify common themes was different from our discussion of analyzing activity logs to see how prevalent a particular phenomenon was, which in turn was different from our discussion of a situated investigation of the invisible work of code and data maintenance.

We then worked through these issues in the specific context of two cases from ETHOS Lab and the Data as Relation project, where Bastian and Michael are both studying public sector organizations in Denmark that work with vast quantities and qualities of data and are often seeking to become more “data-driven.” In the Danish tax administration (SKAT) and the Municipality of Copenhagen’s Department of Cultural and Recreational Activities, there are many projects that are attempting to leverage data further in various ways. For Michael, the challenge is to be able to trace how method assemblages and sociotechnical imaginaries of data travel from private organisations and sites to public organisations, and influence the way data is worked with and what possibilities are associated with data. Whilst doing participant-observation, Michael suggested that a “computation of ethnography” approach might make it easier to trace connections between disparate sites and actors.

The ethnographer enters the perfect information organization

In one group, we explored the idea of the Perfect Information Organisation, or PIO, in which there are traces available of all workplace activity. This nightmarish panopticon construction would include video and audio surveillance of every meeting and interaction, detailed traces of every activity online, and detailed minutes on meetings and decisions. All of this would be available for the ethnographer, as she went about her work.

The PIO is of course a thought experiment designed to provoke the common desire or fantasy for more data. This is something we all often feel in our fieldwork, but we felt this raised many implicit risks if one combined and extended the three types of ethnography detailed earlier on. By thinking about the PIO, ludicrous though it might be, we would challenge ourselves to look at what sort of questions we could and should ask in such a situation. We came up with the following questions, although there are bound to be many more:

  1. What do members know about the data being collected?
  2. Does it change their behaviour?
  3. What takes place outside of the “surveilled” space? I.e. what happens at the bar after work?
  4. What spills out of the organisation, like when members of the organization visit other sites as part of their work?
  5. How can such a system be slowed down and/or “disconcerted” (a concept from Helen Verran that we have found useful in thinking about data in context)?
  6. How can such a system even exist as an assemblage of many surveillance technologies, and would not the weight of the labour sustaining it outstrip its ability to function?

What the list shows is that although the PIO may come off as a wet-dream of the data-obsessed or fetishistic researcher, even it has limits as a hypothetical thought experiment. Information is always situated in a context, often defined in relation to where and what information is not available. Yet as we often see in our own fieldwork (and constantly in the public sphere), the fantasies of total or perfect information persist for powerful reasons. Our suggestion was that such a thought experiment would be a good initial exercise for the researcher about to embark on a mixed-methods/ANT/trace ethnography inspired research approach in a site heavily infused with many data sources. The challenge of what topics and questions to ask in ethnography is always as difficult as asking what kind of data to work with, even if we put computational methods and trace data aside. We brought up many tradeoffs in our own fieldwork, such as when getting access to archival data means that the ethnographer is not spending as much time in interviews or participant observation.

This also touches on some of the central questions which the workshop provoked but didn’t answer: what is the phenomenon we are studying, in any given situation? Is it the social life in an organisation, that life as distributed over a platform and “real life” social interactions, or the platform’s affordances and traces themselves? While there is always a risk of making problematic methodological trade-offs in trying to get both digital and more classic ethnographic traces, there is also, perhaps, a methodological necessity in paying attention to the many different types of traces available when the phenomenon we are interested in takes place both online, at the bar and elsewhere. We concluded that ethnography’s intentionally iterative, inductive, and flexible approach to research applies to these methodological tradeoffs as well: as you get access to new data (either through traditional fieldwork or digitized data) ask what you are not focusing on as you see something new.

In the end, these reflections bear a distinct risk of indulging in fantasy: the belief that we can ever achieve a full view (the view from nowhere), or a holistic or even total view of social life in all its myriad forms, whether digital or analog. The principles of ethnography are most certainly not about exhausting the phenomenon, so we do well to remain wary of this fantasy. Today, ethnography is often theorized as documentation of an encounter between an ethnographer and people in a particular context, with the partial perspectives to be embraced. However, we do believe that it is productive to think through the PIO and to not write off in advance traces which do not correspond with an orthodox view of what ethnography might consider proper material or data.

The perfect total information ethnographers

In the second group, the conversation originated from the wish of an ethnographer to gain access to a document sharing platform from the organization in which the ethnographer is doing fieldwork. Of course, it is not just one platform, but a loose collection of platforms in various stages of construction, adoption, and acceptance. As we know, ethnographers are not only careful about the wishes of others but also of their own wishes — how would this change their ethnography if they had access to countless internal documents, records, archives, and logs? So rather than “just doing (something)”, the ethnographer took a step back and became puzzled over wanting such a strange thing in the first place.

The imaginaries of access to data

In the group, we speculated about what would happen if the ethnographer got their wish of access to as much data as possible from the field. Would a “Google Street View” of the site, recorded from head-mounted 360° cameras, be too much? Probably. On highly mediated sites — Wikipedia serving as an example during the workshop — plenty of traces are publicly left by design. Such archival completeness is a property of some media in some organizations, but not others. In ethnographies of computation, the wish for total access brings some particular problems (or opportunities), as a plenitude of traces and documents are being shared on digital platforms. We talked about three potential problems, the first and most obvious being that the ethnographer drowns in the available data. A second problem is for the ethnographer to believe that getting more access will provide them with a more “whole” or full picture of the situation. The final problem we discussed was whether the ethnographer would end up replicating the problems of the very people in the organization they are studying, namely working out how to deal with a multitude of heterogeneous data in their work.

Besides these problems, we also asked why the ethnographer would want access to the many documents and traces in the first place. What ideas of ethnography and epistemology does such a desire imply? Would the ethnographer want to “power up” their analysis by mimicking the rhetoric of “the more data the better”? Would the ethnographer add their own data (in the form of field notes and pictures) and, through visualisations, show a different perspective on the situation? Even though we reject the notion of a panoptic view on various grounds, we are still left with the question of how much data we need or should want as ethnographers. Imagine that we are puzzled by a particular discussion: would we benefit from having access to a large pile of documents or logs that we could computationally search through for further information? Or would more traditional ethnographic methods like interviews actually be better for the goals of ethnography?

Bringing data home

“Bringing data home” is an idea and phrase that originates from the fieldsite and captures something about the intentions that are playing out. One must wonder what is implied by that idea, and what the idea does. A straightforward reading would be that it describes a strategic and managerial struggle to cut off a particular data intermediary — a middleman — and restore a more direct data-relationship between the agency and actors using the data they provide. A product/design struggle, so to speak. Pushing the speculations further, what might that homecoming, that completion of the re-redesign of data products, be like? As ethnographers, and participants in the events we write about, when do we say “come home, data”, or “go home, data”? What ethnography or computation will be left to do, when data has arrived home? In all, we found a common theme in ethnographic fieldwork — that our own positionalities and situations often reflect those of the people in our fieldsites.

Concluding thoughts – why this was interesting/a good idea

It is interesting that our two groups did not explicitly coordinate our topics – we split up and independently arrived at very similar thought experiments and provocations. We reflected that this is likely because all of us attending the workshop were in similar kinds of situations, as we are all struggling with the dual problem of studying computation as an object and working with computation as a method. We found that these kinds of speculative thought experiments were useful in helping us define what we mean by ethnography. What are the principles, practices, and procedures that we mean when we use this term, as opposed to any number of others that we could also use to describe this kind of work? We did not want to do too much boundary work or policing what is and isn’t “real” ethnography, but we did want to reflect on how our positionality as ethnographers is different than, say, digital humanities or computational social science.

We left with no single, simple answers, but more questions — as is probably appropriate. Where do contributions of ethnography of computation, computational ethnography, or computation of ethnography go in the future? We instead offer a few next steps:

Of all the various fields and disciplines that have taken up ethnography in a computational context, what are their various theories, methods, approaches, commitments, and tools? For example, how is work that has more of a home in STS different from that in CSCW or anthropology? Should ethnographies of computation, computational ethnography, and computation of ethnography look the same across fields and disciplines, or different?

Of all the various ethnographies of computation taking place in different contexts, what are we finding about the ways in which people relate to computation? Ethnography is good at coming up with case studies, but we often struggle (or hesitate) to generalize across cases. Our workshop brought together a diverse group of people who were studying different kinds of topics, cases, sites, peoples, and doing so from different disciplines, methods, and epistemologies. Not everyone at the workshop primarily identified as an ethnographer, which was also productive. We found this mixed group was a great way to force us to make our assumptions explicit, in ways we often get away with when we work closer to home.

Of computational ethnography, did we propose some new, operationalizable mathematical approaches to working with trace data in context? How much should the analysis of trace data depend on the ethnographer’s personal intuition about how to collect and analyze data? How much should computational ethnography involve the integration of interviews and fieldnotes alongside computational analyses?

Of computation of ethnography, what does “tooling up” involve? What do our current tools do well, and what do we struggle to do with them? How do their affordances shape the expectations and epistemologies we have of ethnography? How can we decouple the interfaces from their data, such as exporting the back-end database used by a more standard QDA program and analyzing it programmatically using text analysis packages, and find useful cuts to intervene in, in an ethnographic fashion, without engineering everything from some set of first principles? What skills would be useful in doing so?

by R. Stuart Geiger at April 16, 2018 07:00 AM

April 05, 2018

adjunct professor

Syllabi

I’m getting a lot of requests for my syllabi. Here are links to my most recent courses. Please note that we changed our LMS in 2014 and so some of my older course syllabi are missing. I’m going to round those up.

  • Cybersecurity in Context (Fall 2018)
  • Cybersecurity Reading Group (Spring 2018, Fall 2017, Spring 2017)
  • Privacy and Security Lab (Spring 2018, Spring 2017)
  • Technology Policy Reading Group (AI & ML; Free Speech: Private Regulation of Speech; CRISPR) (Spring 2017)
  • Privacy Law for Technologists (Fall 2017, Fall 2016)
  • Problem-Based Learning: The Future of Digital Consumer Protection (Fall 2017)
  • Problem-Based Learning: Educational Technology: Design Policy and Law (Spring 2016)
  • Computer Crime Law (Fall 2015, Fall 2014, Fall 2013, Fall 2012, Fall 2011)
  • FTC Privacy Seminar (Spring 2015, Spring 2010)
  • Internet Law (Spring 2013)
  • Information Privacy Law (Spring 2012, Spring 2009)
  • Samuelson Law, Technology & Public Policy Clinic (Fall 2014, Spring 2014, Fall 2013, Spring 2011, Fall 2010, Fall 2009)

by web at April 05, 2018 05:34 PM

March 30, 2018

MIMS 2014

I Googled Myself (Part 2)

In my last post, I set up an A/B test through Google Optimize and learned Google Tag Manager (GTM), Google Analytics (GA) and Google Data Studio (GDS) along the way. When I was done, I wanted to learn how to integrate Enhanced E-commerce and Adwords into my mock-site, so I set that as my next little project.

As the name suggests, Enhanced E-commerce works best with an e-commerce site—which I don’t quite have. Fortunately, I was able to find a bunch of different mock e-commerce website source code repositories on Github which I could use to bootstrap my own. After some false starts, I found one that worked well for my purposes, based on this repository that made a mock e-commerce site using the “MEAN” stack (MongoDB, Express.js, AngularJS, and node.js).

Forking this repository gave me an opportunity to learn a bit more about modern front-end / back-end website building technologies, which was probably overdue. It was also a chance to brush up on my javascript skills. Tackling this new material would have been much more difficult without the use of WebStorm, the javascript IDE from the makers of my favorite python IDE, PyCharm.

Properly implementing Enhanced E-commerce does require some back end development—specifically to render static values on a page that can then be passed to GTM (and ultimately to GA) via the dataLayer. In the source code I inherited, this was done through the nunjucks templating library, which was well suited to the task.

Once again, I used Selenium to simulate traffic to the site. I wanted to have semi-realistic traffic to test the GA pipes, so I modeled consumer preferences on a beta distribution with α = 2.03 and β = 4.67. That looks something like this:

[Figure: Beta(2.03, 4.67) distribution over store item indices]

The x value of the beta distribution is normally constrained to the (0,1) interval, but I multiplied it by the number of items in my store to simulate preferences for my customers. So in the graph, the 6th item (according to an arbitrary indexing of the store items) is the most popular, while the 22nd and 23rd items are the least popular.

For the customer basket size, I drew from a Poisson distribution with λ = 3. That looks like this:

[Figure: Poisson(3) probability mass function]

Although the two distributions do look quite similar, they are actually somewhat different. For one thing, the Poisson distribution is discrete while the beta distribution is continuous—though I do end up dropping all decimal figures when drawing samples from the beta distribution since the items are also discrete. More importantly, the two distributions serve different purposes in the simulation: the x axis of the beta distribution represents an arbitrary item index, while in the Poisson distribution it represents the number of items in a customer’s basket.

So putting everything together, the simulation process goes like this: for every customer, we first draw from the Poisson distribution with λ = 3 to determine q, i.e. how many items that customer will purchase. Then we draw q times from the beta distribution to see which items the customer will buy. Then, using Selenium, these items are added to the customer’s basket and the purchase is executed, while sending the Enhanced E-commerce data to GA via GTM and the dataLayer.
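
In code, the per-customer draw looks roughly like the sketch below (a reconstruction; the catalog size and variable names are assumptions, not the actual script):

    import numpy as np

    rng = np.random.default_rng(42)
    N_ITEMS = 25                      # assumed size of the store catalog
    ALPHA, BETA, LAM = 2.03, 4.67, 3

    def simulate_basket():
        q = rng.poisson(LAM)          # basket size; a draw of 0 means no purchase
        # Beta draws land in (0,1); scale to item indices and truncate to integers.
        return np.floor(rng.beta(ALPHA, BETA, size=q) * N_ITEMS).astype(int)

    print(simulate_basket())          # indices of the items to add via Selenium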

When it came to implementing Adwords, my plan had been to bid on uber obscure keywords that would be super cheap to bid on (think “idle giraffe” or “bellicose baby”), but unfortunately Google requires that your ad links be live, properly hosted websites. Since my website is running on my localhost, Adwords wouldn’t let me create a campaign with my mock e-commerce website 😦

As a workaround, I created a mock search engine results page that my users would navigate to before going to my mock e-commerce site’s homepage. 20% of users would click on my ‘Adwords ad’ for hoody sweatshirts on that page (hoodies are one of the things my store sells, BTW). The ad link was encoded with the same UTM parameters that would be used in Google Adwords to make sure the ad click is attributed to the correct source, medium, and campaign in GA. After imposing a 40% bounce probability on these users, the remaining ones buy a hoody.
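
The tagged ad link might look something like the sketch below; the URL and parameter values are placeholders rather than the ones actually used:

    from urllib.parse import urlencode

    utm = {
        "utm_source": "mock_serp",      # where the click came from
        "utm_medium": "cpc",            # paid-click medium, as an Adwords ad would use
        "utm_campaign": "hoody_promo",  # campaign name
    }
    ad_url = "http://localhost:3000/?" + urlencode(utm)
    print(ad_url)  # GA attributes the session to this source / medium / campaign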

It seemed like I might as well use this project as another opportunity to work with GDS, so I went ahead and made another dashboard for my e-commerce website (live link):

[Image: Google Data Studio dashboard for the mock e-commerce site]

If you notice that the big bar graph in the dashboard above looks a little like the beta distribution from before, that’s not an accident. Seeing the Hoody Promo Conv. Rate hover around 60% was another sign things were working as expected (implemented as a Goal in GA).

In my second go-around with GDS, however, I did come up against a few more frustrating limitations. One thing I really wanted to do was create a scorecard element that would tell you the name of the most popular item in the store, but GDS won’t let you do that.

I also wanted to make a histogram, but that is also not supported in GDS. Using my own log data, I did manage to generate the histogram I wanted—of the average order value.

[Image: histogram of average order value]

I’m pretty sure we’re seeing evidence of the Central Limit Theorem kicking in here. The CLT says that the distribution of sample means—even when drawn from a distribution that is not normal—will tend towards normality as the sample size gets larger.

A few things have me wondering here, however. In this simulation, the sample size is itself a random variable which is never that big. The rule of thumb says that 30 counts as a large sample size, but if you look at the Poisson graph above you’ll see the sample size rarely goes above 8. I’m wondering whether this is mitigated by a large number of samples (i.e. simulated users); the histogram above is based on 50,000 simulated users. Also, because average order values can never be negative, we can only have at best a truncated normal distribution, so unfortunately we cannot graphically verify the symmetry typical of the normal distribution in this case.
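
One way to poke at this is to rerun the order simulation offline with some assumed catalog prices and look at the per-order averages directly; this is just a sanity-check sketch with made-up prices:

    import numpy as np

    rng = np.random.default_rng(7)
    prices = rng.uniform(5, 50, size=25)    # assumed catalog prices

    def avg_order_value():
        q = rng.poisson(3)
        if q == 0:
            return None                     # empty basket: no order recorded
        idx = np.floor(rng.beta(2.03, 4.67, size=q) * 25).astype(int)
        return prices[idx].mean()

    orders = [v for v in (avg_order_value() for _ in range(50_000)) if v is not None]
    print(np.mean(orders), np.std(orders))
    # More simulated users smooths the histogram, but the normality of each
    # per-order mean is still governed by the basket size q, which stays small.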

But anyway, that’s just me trying to inject a bit of probability/stats into an otherwise implementation-heavy analytics project. Next I might try to re-implement the mock e-commerce site through something like Shopify or WordPress. We’ll see.

 

by dgreis at March 30, 2018 12:48 PM

March 23, 2018

MIMS 2012

Discovery Kanban 101: My Newest Skillshare Class

I just published my first Skillshare class — Discovery Kanban 101: How to Integrate User-Centered Design with Agile. From the class description:

Learn how to make space for designers and researchers to do user-centered design in an Agile/scrum engineering environment. By creating an explicit Discovery process to focus on customer needs before committing engineers to shipping code, you will unlock design’s potential to deliver great user experiences to your customers.

By the end of this class, you will have built a Discovery Kanban board and learned how to use it to plan and manage the work of your team.

While I was at Optimizely, I implemented a Discovery kanban process to improve the effectiveness of my design team (which I blogged about previously here and here, and spoke about here). I took the lessons I learned from doing that and turned them into a class on Skillshare to help any design leader implement an explicit Discovery process at their organization.

Whether you’re a design manager, a product designer, a program manager, a product manager, or just someone who’s interested in user-centered design, I hope you find this course valuable. If you have any thoughts or questions, don’t hesitate to reach out: @jlzych

by Jeff Zych at March 23, 2018 09:50 PM

March 14, 2018

Ph.D. student

Artisanal production, productivity and automation, economic engines

I’m continuing to read Moretti’s The new geography of jobs (2012). Except for the occasional gushing over the revolutionary-ness of some new payments startup, a symptom no doubt of being so close to Silicon Valley, it continues to be an enlightening and measured read on economic change.

There are a number of useful arguments and ideas from the book, which are probably sourced more generally from economics, which I’ll outline here, with my comments:

Local, artisanal production can never substitute for large-scale manufacturing. Moretti argues that while in many places in the United States local artisanal production has cropped up, it will never replace the work done by large-scale production. Why? Because by definition, local artisanal production is (a) geographically local, and therefore unable to scale beyond a certain region, and (b) defined in part by its uniqueness, differentiating it from mainstream products. In other words, if your local small-batch shop grows to the point where it competes with large-scale production, it is no longer local and small-batch.

Interestingly, this argument about production scaling echoes work on empirical heavy tail distributions in social and economic phenomena. A world where small-scale production constituted most of production would have an exponentially bounded distribution of firm productivity. The world doesn’t look that way, and so we have very very big companies, and many many small companies, and they coexist.

Higher labor productivity in a sector results in both a richer society and fewer jobs in that sector. Productivity is how much a person’s labor produces. The idea here is that when labor productivity increases, the firm that hires those laborers needs fewer people working to satisfy its demand. But those people will be paid more, because their labor is worth more to the firm.

I think Moretti is hand-waving a bit when he argues that a society only gets richer through increased labor productivity. I don’t follow it exactly.

But I do find it interesting that Moretti calls “increases in productivity” what many others would call “automation”. Several related phenomena are viewed critically in the popular discourse on job automation: more automation causes people to lose jobs; more automation causes some people to get richer (they are higher paid); this means there is a perhaps pernicious link between automation and inequality. One aspect of this is that automation is good for capitalists. But another aspect of this is that automation is good for lucky laborers whose productivity and earnings increase as a result of automation. It’s a more nuanced story than one that is only about job loss.

The economic engine of an economy is what brings in money, it need not be the largest sector of the economy. The idea here is that for a particular (local) economy, the economic engine of that economy will be what pulls in money from outside. Moretti argues that the economic engine must be a “trade sector”, meaning a sector that trades (sells) its goods beyond its borders. It is the workers in this trade-sector economic engine that then spend their income on the “non-trade” sector of local services, which includes schoolteachers, hairdressers, personal trainers, doctors, lawyers, etc. Moretti’s book is largely about how the innovation sector is the new economic engine of many American economies.

One thing that comes to mind reading this point is that not all economic engines are engaged in commercial trade. I’m thinking about Washington, DC, and the surrounding area; the economic engine there is obviously the federal government. Another strange kind of economic engine is the top-tier research university, like Carnegie Mellon or UC Berkeley. Top-tier research universities, unlike many other educational institutions, are constantly selling their degrees to foreign students. This means that they can serve as an economic engine.

Overall, Moretti’s book is a useful guide to economic geography, one that clarifies the economic causes of a number of political tensions that are often discussed in a more heated and, to me, less useful way.

References

Moretti, Enrico. The new geography of jobs. Houghton Mifflin Harcourt, 2012.

by Sebastian Benthall at March 14, 2018 04:04 PM

March 10, 2018

Ph.D. student

the economic construction of knowledge

We’ve all heard about the social construction of knowledge.

Here’s the story: Knowledge isn’t just in the head. Knowledge is a social construct. What we call “knowledge” is what it is because of social institutions and human interactions that sustain, communicate, and define it. Therefore all claims to absolute and unsituated knowledge are suspect.

There are many different social constructivist theories. One of the best, in my opinion, is Bourdieu’s, because he has one of the best social theories. For Bourdieu, social fields get their structure in part through the distribution of various kinds of social capital. Economic capital (money!) is one kind of social capital. Symbolic capital (the fact of having published in a peer-reviewed journal) is a different form of capital. What makes the sciences special, for Bourdieu, is that they are built around a particular mechanism for awarding symbolic capital that makes it (science) get the truth (the real truth). Bourdieu thereby harmonizes social constructivism with scientific realism, which is a huge relief for anybody trying to maintain their sanity in these trying times.

This is all super. What I’m beginning to appreciate more as I age, develop, and in some sense I suppose ‘progress’, is that economic capital is truly the trump card of all the forms of social capital, and that this point is underrated in social constructivist theories in general. What I mean by this is that flows of economic capital are a condition for the existence of the social fields (institutions, professions, etc.) in which knowledge is constructed. This is not to say that everybody engaged in the creation of knowledge is thinking about monetization all the time–to make that leap would be to commit the ecological fallacy. But at the heart of almost every institution where knowledge is created, there is somebody fundraising or selling.

Why, then, don’t we talk more about the economic construction of knowledge? It is a straightforward idea. To understand an institution or social field, you “follow the money”, seeing where it comes from and where it goes, and that allows you to situate the practice in its economic context and thereby determine its economic meaning.

by Sebastian Benthall at March 10, 2018 03:33 PM

March 08, 2018

MIMS 2012

Why I Blog

The fable of the millipede and the songbird is a story about the difference between instinct and knowledge. It goes like this:

High above the forest floor, a millipede strolled along the branch of a tree, her thousand pairs of legs swinging in an easy gait. From the tree top, song birds looked down, fascinated by the synchronization of the millipede’s stride. “That’s an amazing talent,” chirped the songbirds. “You have more limbs than we can count. How do you do it?” And for the first time in her life the millipede thought about this. “Yes,” she wondered, “how do I do what I do?” As she turned to look back, her bristling legs suddenly ran into one another and tangled like vines of ivy. The songbirds laughed as the millipede, in a panic of confusion, twisted herself in a knot and fell to earth below.

On the forest floor, the millipede, realizing that only her pride was hurt, slowly, carefully, limb by limb, unraveled herself. With patience and hard work, she studied and flexed and tested her appendages, until she was able to stand and walk. What was once instinct became knowledge. She realized she didn’t have to move at her old, slow, rote pace. She could amble, strut, prance, even run and jump. Then, as never before, she listened to the symphony of the songbirds and let music touch her heart. Now in perfect command of thousands of talented legs, she gathered courage, and, with a style of her own, danced and danced a dazzling dance that astonished all the creatures of her world. [1]

The lesson here is that conscious reflection on an unconscious action will impair your ability to do that action. But after you introspect and really study how you do what you do, it will transform into knowledge and you will have greater command of that skill.

That, in a nutshell, is why I blog. The act of introspection — of turning abstract thoughts into concrete words — strengthens my knowledge of that subject and enables me to dance a dazzling dance.


[1] I got this version of the fable from the book Story: Substance, Structure, Style and the Principles of Screenwriting by Robert McKee, but can’t find the original version of it anywhere (it’s uncredited in his book). The closest I can find is The Centipede’s Dilemma, but that version lacks the second half of the fable.

by Jeff Zych at March 08, 2018 10:18 PM

March 06, 2018

Ph.D. student

Appealing economic determinism (Moretti)

I’ve started reading Enrico Moretti’s The New Geography of Jobs and am finding it very clear and persuasive (though I’m not far in).

Moretti is taking up the major theme of What The Hell Is Happening To The United States, which is being addressed by so many from different angles. But whereas many writers seem to have an agenda–e.g., Noble advocating for political reform regulating algorithms; Deenan arguing for return to traditional community values in some sense; etc.–or to focus on particularly scandalous or dramatic aspects of changing political winds–such as Gilman’s work on plutocratic insurgency and collapsing racial liberalism–Moretti is doing economic geography showing how long term economic trends are shaping the distribution of prosperity within the U.S.

From the introduction, it looks like there are a few notable points.

The first is about what Moretti calls the Great Divergence, which has been going on since the 1980’s. This is the decline of U.S. manufacturing as jobs moved from Detroit, Michigan to Shenzhen, Guangdong, paired with the rise of an innovation economy where the U.S. takes the lead in high-tech and creative work. The needs of the high-tech industry–high-skilled workers, who may often be educated immigrants–change the demographics of the innovation hubs and result in the political polarization we’re seeing on the national stage. This is an account of the economic base determining the cultural superstructure which is so fraught right now, and exactly what I was getting at with my rant yesterday about the politics of business.

The second major point Moretti makes which is probably understated in more polemical accounts of the U.S. political economy is the multiplier effect of high-skilled jobs in innovation hubs. Moretti argues that every high-paid innovation job (like software engineer or scientist) results in four other jobs in the same city. These other jobs are in service sectors that are by their nature local and not able to be exported. The consequence is that the innovation economy does not, contrary to its greatest skeptics, only benefit the wealthy minority of innovators to the ruin of the working class. However, it does move the location of working class prosperity into the same urban centers where the innovating class is.

This gives one explanation for why the backlash against Obama-era economic policies was such a shock to the coastal elites. In the locations where the “winners” of the innovation economy were gathered, there was also growth in the service economy which by objective measures increased the prosperity of the working class in those cities. The problem was the neglected working class in those other locations, who felt left behind and struck back against the changes.

A consequence of this line of reasoning is that arguments about increasing political tribalism are really a red herring. Social tribes on the Internet are a consequence, not a cause, of divisions that come from material conditions of economy and geography.

Moretti even appears to have a constructive solution in mind. He argues that there are “three Americas”: the rich innovation hubs, the poor former manufacturing centers, and mid-sized cities that have not yet gone either way. His recipe for economic success in these middle cities is attracting high-skilled workers who are a kind of keystone species for prosperous economic ecosystems.

References

Deneen, Patrick J. Why Liberalism Failed. Yale University Press, 2018.

Gilman, Nils. “The twin insurgency.” American Interest 15 (2014): 3-11.

Gilman, Nils. “The Collapse of Racial Liberalism.” The American Interest (2018).

Moretti, Enrico. The new geography of jobs. Houghton Mifflin Harcourt, 2012.

Noble, Safiya Umoja. Algorithms of Oppression: How search engines reinforce racism. NYU Press, 2018.

by Sebastian Benthall at March 06, 2018 06:43 PM

MIMS 2014

I Googled Myself

As a huge enthusiast of A/B testing, I have been wanting to learn how to run A/B tests through Google Optimize for some time. However, it’s hard to do this without being familiar with all the different parts of the Google product eco-system. So I decided it was time to take the plunge and finally Google myself. This post will cover my adventures with several products in the Google product suite including: Google Analytics (GA), Google Tag Manager (GTM), Google Optimize (GO), and Google Data Studio (GDS).

Of course, in order to do A/B testing, you have to have A) something to test, and B) sufficient traffic to drive significant results. Early on I counted out trying to A/B test this blog—not because I don’t have sufficient traffic—I got tons of it, believe me . . . (said in my best Trump voice). The main reason I didn’t try to do it with my blog is that I don’t host it, WordPress does, so I can’t easily access or manipulate the source code to implement an A/B test. It’s much easier if I host the website myself (which I can do locally using MAMP).

But how do I send traffic to a website I’m hosting locally? By simulating it, of course. Using a nifty python library called Selenium, I can be as popular as I want! I can also simulate any kind of behavior I want, and that gives me maximum control. Since I can set the expected outcomes ahead of time, I can more easily troubleshoot/debug whenever the results don’t square with expectations.
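
A stripped-down version of that simulation looks something like this (the URL and element id are placeholders for whatever the real pages used):

    import random
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("http://localhost:8888/landing.html")       # placeholder MAMP URL

    # Click the call-to-action with whatever probability this variant gets
    # (80% green / 95% red in the test described below).
    if random.random() < 0.80:
        driver.find_element(By.ID, "cta-button").click()   # hypothetical element id
    driver.quit()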

My Mini “Conversion Funnel”

When it came to designing my first A/B test, I wanted to keep things relatively simple while still mimicking the general flow of an e-commerce conversion funnel. I designed a basic website with two different landing page variants—one with a green button and one with a red button. I arbitrarily decided that users would be 80% likely to click on the button when it’s green and 95% likely to click on the button when it’s red (these conversion rates are unrealistically high, I know). Users who didn’t click on the button would bounce, while those who did would advance to the “Purchase Page”.

[Diagram: conversion funnel from the landing page variants through the “Purchase Page” to the “Thank You” page]

To make things a little more complicated, I decided to have 20% of ‘green’ users bounce after reaching the purchase page. The main reason for this was to test out GA’s funnel visualizations to see if they would faithfully reproduce the graphic above (they did). After the purchase page, users would reach a final “Thank You” page with a button to claim their gift. There would be no further attrition at this point; all users who arrived on this page would click the “Claim Your Gift” button. This final action was the conversion (or ‘Goal’ in GA-speak) that I set as the objective for the A/B test.

Google Analytics

With GA, I jumped straight into the deep end, adding gtag.js snippets to all the pages of my site. Then I implemented a few custom events and dimensions via javascript. In retrospect, I would have done the courses offered by Google first (Google Analytics for Beginners & Advanced Google Analytics). These courses give you a really good lay of the land of what GA is capable of, and it’s really impressive. If you have a website, I don’t see how you can get away with not having it plugged into GA.

In terms of features, the real time event tracking is a fantastic resource for debugging GA implementations. However, the one feature I wasn’t expecting GA to have was the benchmarking feature. It allows you to compare the traffic on your site with websites in similar verticals. This is really great because even if you’re totally out of ideas on what to analyze (which you shouldn’t be given the rest of the features in GA), you can use the benchmarking feature as a starting point for figuring out the weak points in your site.

The other great thing about the two courses I mentioned is that they’re free, and at the end you can take the GA Individual Qualification exam to certify your knowledge about GA (which I did). If you’re gonna put in the time to learn the platform, it’s nice to have a little endorsement at the end.

Google Tag Manager

After implementing everything in gtag.js, I did it all again using GTM. I can definitely see the appeal of GTM as a way to deploy GA; it abstracts away all of that messy JavaScript and replaces it with a clean user interface and a handy debug tool. The one drawback of GTM seems to be that it doesn’t send events to GA quite as reliably as gtag.js. Specifically, in my GA reports for the ‘red button’ variant of my A/B test, I saw more conversions for the “Claim Your Gift” button than conversions for the initial click to get off the landing page. Given the attrition rates I defined, that’s impossible. I tried to configure the tag to wait until the event was sent to GA before the next page was loaded, but there still seemed to be some data meant for GA that got lost in the mix.

Google Optimize

Before trying out GO, I implemented my little A/B test through Google’s legacy system, Content Experiments. I can definitely see why GO is the way of the future. There’s a nifty tool that lets you edit visual DOM elements right in the browser while you’re defining your variants. In Content Experiments, you have to either provide two separate A and B pages or implement the expected changes yourself. It’s a nice thing not to have to worry about, especially if you’re not a pro front-end developer.

Also, it’s clear that GO has more powerful decision features. For one thing, it uses Bayesian decision logic, which is more comprehensible for business stakeholders and is gaining steam in online A/B testing. It also has the ability to do multivariate testing, which is a great addition, though I didn’t use that functionality for this test.

The one thing that was a bit irritating with GO was setting it up to run on localhost. It took a few hours of yak shaving to get the different variants to actually show up on my computer. It boiled down to 1) editing my /etc/hosts file with an extra line in accordance with this post on the Google Advertiser Community forum and 2) making sure the Selenium driver navigated to localhost.domain instead of just localhost.
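For anyone hitting the same wall, the Selenium half of the fix looked roughly like this; the hostname and port here are stand-ins for whatever you map in /etc/hosts, not the exact values from my setup:

    # After adding the extra line to /etc/hosts (per the forum post above),
    # point Selenium at that hostname rather than plain localhost so that
    # the Optimize container actually serves the variants.
    driver.get("http://localhost.domain:8888/index.html")  # hostname/port are placeholders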

Google Data Studio

Nothing is worth doing unless you can make a dashboard at the end of it, right? While GA has some amazingly powerful report-generating capabilities, it can feel somewhat rigid in terms of customizability. GDS is a relatively new program that gives you way more options to visualize the data sitting in GA. But while GDS has that advantage over GA, it does have some frustrating limitations which I hope they resolve soon. In particular, I hope they’ll let you show percent differences between two scorecards. As someone who’s done a lot of A/B test reports, I know that the thing stakeholders are most interested in seeing is the % difference, or lift, caused by one variant versus another.

Here is a screenshot of the ultimate dashboard (or a link if you want to see it live):

[Figure: dashboard screenshot]

The dashboard was also a good way to do a quick check to make sure everything in the test was working as expected. For example, the expected conversion rate for the “Claim Your Gift” button was 64% versus 95% (for green users, 80% click-through times 80% purchase-page retention works out to 64%), and we see more or less those numbers in the first bar chart on the left. The conditional conversion rate (the conversion rate of users conditioned on clicking off the landing page) is also close to what was expected: 80% vs. 100%.

Notes about Selenium

So I really like Selenium, and after this project I have a little personal library to do automated tests in the future that I can apply to any website, not just this little dinky one I ran locally on my machine.

When you’re writing code that deals with Selenium, one thing I’ve realized is that it’s important to write highly fault-tolerant code. Anything that depends on the internet has many ways to go wrong—the wifi in the cafe you’re in might go down, resources might randomly fail to load, and so on. But if you’ve written fault-tolerant code, hitting one of these snags won’t cause your program to stop running.

Along with fault-tolerant code, it’s a good idea to write good logs. When stuff does go wrong, this helps you figure out what it was. In this particular case, logs also served as a good source of ground truth to compare against the numbers I was seeing in GA.
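Putting those two points together, here is a minimal sketch of the pattern I mean; it assumes a simulate_visit-style function like the one sketched earlier, and the retry count and log file name are arbitrary:

    import logging

    from selenium.common.exceptions import WebDriverException

    logging.basicConfig(
        filename="simulation.log",
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(message)s",
    )

    def robust_visit(visit_fn, max_retries=3):
        """Run one simulated visit, retrying on Selenium hiccups instead of crashing."""
        for attempt in range(1, max_retries + 1):
            try:
                visit_fn()
                logging.info("visit succeeded on attempt %d", attempt)
                return True
            except WebDriverException as e:
                logging.warning("attempt %d failed: %s", attempt, e)
        logging.error("visit abandoned after %d attempts", max_retries)
        return False

    # e.g. robust_visit(simulate_visit)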

The End! (for now…)

I think I’ll be back soon with another post about AdWords and Advanced E-Commerce in GA…

 

by dgreis at March 06, 2018 03:37 AM

March 05, 2018

Ph.D. student

politics of business

This post is an attempt to articulate something that’s on the tip of my tongue, so bear with me.

Fraser has made the point that the politics of recognition and the politics of distribution are not the same. In her view, the conflict in the U.S. over recognition (i.e., of women, racial minorities, LGBTQ, etc. on the progressive side, and of the straight white male ‘majority’ on the reactionary side) has overshadowed the politics of distribution, which has been at a steady neoliberal status quo for some time.

First, it’s worth pointing out that in between these two political contests is a politics of representation, which may be more to the point. The claim here is that if a particular group is represented within a powerful organization–say, the government, or within a company with a lot of power such as a major financial institution or tech company–then that organization will use its power in a way that is responsive to the needs of the represented group.

Politics of representation are the link between recognition and distribution: the idea is that if “we” recognize a certain group, then through democratic or social processes members of that group will be lifted into positions of representative power, which then will lead to (re)distribution towards that group in the longer run.

I believe this is the implicit theory of social change at the heart of a lot of democratish movements today. It’s an interesting theory in part because it doesn’t seem to have any room for “good governance”, or broadly beneficial governance, or technocracy. There’s nothing deliberative about this form of democracy; it’s a tribal war-by-other-means. It is also not clear that this theory of social change based on demographic representation is any more effective at changing distributional outcomes than a pure politics of recognition, which we have reason to believe is ineffectual.

Who do we expect to have power over distributional outcomes in our (and probably other) democracies? Realistically, it’s corporations. Businesses comprise most of the economic activity; businesses have the profits needed to reinvest in lobbying power for the sake of economic capture. So maybe if what we’re interested in is politics of distribution, we should stop trying to parse out the politics of recognition, with its deep dark rabbit hole of identity politics and the historical injustice and Jungian archetypal conflicts over the implications of the long arc of sexual maturity. These conversations do not seem to be getting anyone anywhere! It is, perhaps, fake news: not because the contents are fake, but because the idea that these issues are new is fake. They are perhaps just a lot of old issues stirred to conflagration by the feedback loops between social and traditional media.

If we are interested in the politics of distribution, let’s talk about something else, something that we all know must be more relevant, when it comes down to it, than the politics of recognition. I’m talking about the politics of business.

We have a rather complex economy with many competing business interests. Let’s assume that one of the things these businesses compete over is regulatory capture–their ability to influence economic policy in their favor.

When academics talk about neoliberal economic policy, they are often talking about those policies that benefit the financial sector and big businesses. But these big businesses are not always in agreement.

Take, for example, the steel tariff proposed by the Trump administration. There is no blunter example of a policy that benefits some business interests–U.S. steelmakers–and not others–U.S. manufacturers of steel-based products.

It’s important from the perspective of electoral politics to recognize that the U.S. steelmakers are a particular set of people who live in particular voting districts with certain demographics. That’s because, probably, if I am a U.S. steelworker, I will vote in the interest of my industry. Just as if I am a U.S. based urban information worker at an Internet company, I will vote in the interest of my company, which in my case would mean supporting net neutrality. If I worked for AT&T, I would vote against net neutrality, which today means I would vote Republican.

It’s an interesting fact that AT&T employs a lot more people than Google and (I believe this is the case, though I don’t know where to look up the data) that they are much more geographically distributed than Google because, you know, wires and towers and such. Which means that AT&T employees will be drawn from more rural, less diverse areas, giving them an additional allegiance to Republican identity politics.

You must see what I’m getting at. Assume that the main driver of U.S. politics is not popular will (which nobody really believes, right?) and is in fact corporate interests (which basically everybody admits, right?). In that case the politics of recognition will not be determining anything; rather, it will be a symptom, an epiphenomenon, of an underlying politics of business. Immigration of high-talent foreigners then becomes a proxy issue for the economic battle between coastal tech companies and, say, old energy companies which have a much less geographically mobile labor base. Nationalism, or multinationalism, becomes a function of trade relations rather than a driving economic force in its own right. (Hence, Russia remains an enemy of the U.S. largely because Putin paid off all its debt to the U.S. and doesn’t owe it any money, unlike many of its other allies around the world.)

I would very much like to devote myself better to the understanding of politics of business because, as I’ve indicated, I think the politics of recognition have become a huge distraction.

by Sebastian Benthall at March 05, 2018 10:00 PM

March 02, 2018

Ph.D. student

Moral individualism and race (Barabas, Gilman, Deneen)

One of my favorite articles presented at the recent FAT* 2018 conference was Barabas et al. on “Interventions over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment” (link). To me, this was the correct response to recent academic debate about the use of actuarial risk-assessment in determining criminal bail and parole rates. I had a position on this before the conference which I drafted up here; my main frustration with the debate had been that it had gone unquestioned why bail and parole rates are based on actuarial prediction of recidivism in the first place, given that rearrest rates are so contingent on social structural factors such as whether or not police are racist.

Barabas et al. point out that there’s an implicit theory of crime behind the use of actuarial risk assessments. In that theory of crime, there are individual “bad people” and “good people”. “Bad people” are more likely to commit crimes because of their individual nature, and the goal of the criminal policing system is to keep bad people from committing crimes by putting them in prison. This is the sort of theory that, even if it is a little bit true, is also deeply wrong, and so we should probably reassess the whole criminal justice system as a result. Even leaving aside the important issue of whether “recidivism” is interpreted as reoffense or rearrest rate, it is socially quite dangerous to see probability of offense as due to the specific individual moral character of a person. One reason why this is dangerous is that if the conditions for offense are correlated with the conditions for some sort of unjust desperation, then we risk falsely justifying an injustice with the idea that the bad things are only happening to bad people.

I’d like to juxtapose this position with a couple others that may on the surface appear to be in tension with it.

Nils Gilman’s new piece on “The Collapse of Racial Liberalism” is a helpful account of how we got where we are as an American polity. True to the title, Gilman’s point is that there was a centrist consensus on ‘racial liberalism’ that reached its apotheosis in the election of Obama and then collapsed under its own contradictions, getting us where we are today.

By racial liberalism, I mean the basic consensus that existed across the mainstream of both political parties since the 1970s, to the effect that, first, bigotry of any overt sort would not be tolerated, but second, that what was intolerable was only overt bigotry—in other words, white people’s definition of racism. Institutional or “structural” racism—that is, race-based exclusions that result from deep social habits such as where people live, who they know socially, what private organizations they belong to, and so on—were not to be addressed. The core ethic of the racial liberal consensus was colorblind individualism.

Bill Clinton was good at toeing the line of racial liberalism, and Obama, as a black meritocratic elected president, was its culmination. But:

“Obama’s election marked at once the high point and the end of a particular historical cycle: a moment when the realization of a particular ideal reveals the limits of that ideal.”

The limit of the ideal is, of course, that all the things not addressed–“race-based exclusions that result from deep social habits such as where people live, who they know socially, what private organizations they belong to, and so on”–matter, and result in, for example, innocent black guys getting shot disproportionately by police even when there is a black meritocrat sitting as president.

An interesting juxtaposition here is that in both cases discussed so far, we have a system that is reaching its obsolescence due to the contradictions of individualism. In the case of actuarial policing (as it is done today; I think a properly sociological version of actuarial policing could be great), there’s the problem of considering criminals as individuals whose crimes are symptoms of their individual moral character. The solution to crime is to ostracize and contain the criminals by, e.g., putting them in prison. In the case of racial liberalism, there’s the problem of considering bigotry a symptom of individual moral character. The solution to the bigotry is to ostracize and contain the bigots by teaching them that it is socially unacceptable to express bigotry and keeping the worst bigots out of respectable organizations.

Could it be that our broken theories of both crime and bigotry both have the same problem, which is the commitment to moral individualism, by which I mean the theory that it’s individual moral character that is the cause of and solution to these problems? If a case of individual crime and individual bigotry is the result of, instead of an individual moral failing, a collective action problem, what then?

I still haven’t looked carefully into Deneen’s argument (see notes here), but I’m intrigued that his point may be that the crisis of liberalism may be, at its root, a crisis of individualism. Indeed, Kantian views of individual autonomy are really nice but they have not stood the test of time; I’d say the combined works of Habermas, Foucault, and Bourdieu have each from very different directions developed Kantian ideas into a more sociological frame. And that’s just on the continental grand theory side of the equation. I have not followed up on what Anglophone liberal theory has been doing, but I suspect that it has been going the same way.

I am wary, as I always am, of giving too much credit to theory. I know, as somebody who has read altogether too much of it, what little use it actually is. However, the notion of political and social consensus is one that tangibly affects my life these days. For this reason, it’s a topic of great personal interest.

One last point, that’s intended as constructive. It’s been argued that the appeal of individualism is due in part to the methodological individualism of rational choice theory and neoclassical economic theory. Because we can’t model economic interactions on anything but an individualistic level, we can’t design mechanisms or institutions that treat individual activity as a function of social form. This is another good reason to take seriously computational modeling of social forms.

References

Barabas, Chelsea, et al. “Interventions over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment.” arXiv preprint arXiv:1712.08238 (2017).

Deneen, Patrick J. Why Liberalism Failed. Yale University Press, 2018.

Gilman, Nils. “The Collapse of Racial Liberalism.” The American Interest (2018).

by Sebastian Benthall at March 02, 2018 09:09 PM

February 28, 2018

Ph.D. student

interesting article about business in China

I don’t know much about China, really, so I’m always fascinated to learn more.

This FT article, “Anbang arrests demonstrates hostility to business”, by Jamil Anderlini, provides some wonderful historical context to a story about the arrest of an insurance oligarch.

In ancient times, merchants were at the very bottom of the four official social classes, below warrior-scholars, farmers and artisans. Although some became very rich they were considered parasites in Chinese society.

Ever since the Han emperors established the state salt monopoly in the second century BCE (remnants of which remain to this day), large-scale business enterprises have been controlled by the state or completely reliant on the favour of the emperor and the bureaucrat class.

In the 20th century, the Communist emperor Mao Zedong effectively managed to stamp out all private enterprise for a while.

Until the party finally allowed “capitalists” to join its ranks in 2002, many of the business activities carried out by the resurgent merchant class were technically illegal.

China’s rich lists are populated by entrepreneurs operating in just a handful of industries — particularly real estate and the internet.

Tycoons like Mr Wu who emerge in state-dominated sectors are still exceedingly rare. They are almost always closely linked to one of the old revolutionary families exercising enormous power from the shadows.

Everything about this is interesting.

First, in Western scholarship we rarely give China credit for its history of bureaucracy in the absence of capitalism. In the well-known Weberian account, bureaucracy is an institutional invention that provides regular rule of law so that capitalism can thrive. But China’s history is statist “from ancient times”, with effective bureaucracy from the beginning. A managerialist history, perhaps.

Which makes the second point so unusual: why, given this long history of bureaucratic rule, are Internet companies operating in a comparatively unregulated way? This seems like a massive concession of power, not unlike how (arguably) the government of the United States conceded a lot of power to Silicon Valley under the Obama administration.

The article dramatically foreshadows a potential power struggle between Xi Jinping’s consolidated state and the tech giant oligarchs:

Now that Chinese President Xi Jinping has abolished his own term limits, setting the stage for him to rule for life if he wants to, the system of state patronage and the punishment of independent oligarchs is likely to expand. Any company or billionaire who offends the emperor or his minions will be swiftly dealt with in the same way as Mr Wu.

There is one group of Chinese companies with charismatic — some would say arrogant — founders that enjoy immense economic power in China today. They would seem to be prime candidates if the assault on private enterprise is stepped up.

Internet giants Alibaba, Tencent and Baidu are not only hugely profitable, they control the data that is the lifeblood of the modern economy. That is why Alibaba founder Jack Ma has repeatedly said, including to the FT, that he would gladly hand his company over to the state if Beijing ever asked him to. Investors in BABA can only hope it never comes to that.

That is quite the expression of feudal fealty from Jack Ma. Truly, a totally different business culture from that of the United States.

by Sebastian Benthall at February 28, 2018 03:18 PM

February 27, 2018

Ph.D. student

Notes on Deneen, “Why Liberalism Failed”, Foreword

I’ve begun reading the recently published book, Why Liberalism Failed (2018), by Patrick Deneen. It appears to be making some waves in the political theory commentary. The author claims that it was 10 years in the making but was finished three weeks before the 2016 presidential election, which suggests that the argument within it is prescient.

I’m not far in yet.

There is an intriguing foreword from James Davison Hunter and John M. Owen IV, the editors. Their framing of the book is surprisingly continental:

  • They declare that liberalism has arrived at its “legitimacy crisis”, a Habermasian term.
  • They claim that the core contention of the book is a critique of the contradictions within Immanuel Kant’s view of individual autonomy.
  • They compare Deenan with other “radical” critics of liberalism, of which they name: Marx, the Frankfurt School, Foucault, Nietzsche, Schmitt, and the Catholic Church.

In search of a litmus-test-like clue as to where in the political spectrum the book falls, I’ve found this passage in the Foreword:

Deneen’s book is disruptive not only for the way it links social maladies to liberalism’s first principles, but also because it is difficult to categorize along our conventional left-right spectrum. Much of what he writes will cheer social democrats and anger free-market advocates; much else will hearten traditionalists and alienate social progressives.

Well, well, well. If we are to fit Deneen’s book into the conceptual 2-by-2 provided in Fraser’s recent work, it appears that Deneen’s political theory is a form of reactionary populism, rejecting progressive neoliberalism. In other words, the foreword evinces that Deneen’s book is a high-brow political theory contribution that weighs in favor of the kind of politics that has heretofore been articulated only by intellectual pariahs.

by Sebastian Benthall at February 27, 2018 03:54 PM

February 26, 2018

MIMS 2012

On Mastery

Mastery

I completely agree with this view on mastery from American fashion designer, writer, television personality, entrepreneur, and occasional cabaret star Isaac Mizrahi:

I’m a person who’s interested in doing a bunch of things. It’s just what I like. I like it better than doing one thing over and over. This idea of mastery—of being the very best at just one thing—is not in my future. I don’t really care that much. I care about doing things that are interesting to me and that I don’t lose interest in.

Mastery – “being the very best at just one thing” – doesn’t hold much appeal for me. I’m a very curious person. I like jumping between various creative endeavors that “are interesting to me and that I don’t lose interest in.” Guitar, web design, coding, writing, hand lettering – these are just some of the creative paths I’ve gone down so far, and I know that list will continue to grow.

I’ve found that my understanding of one discipline fosters a deeper understanding of other disciplines. New skills don’t take away from each other – they only add.

So no, mastery isn’t for me. The more creative paths I go down, the better. Keep ‘em coming.


Update 4/2/18

Quartz recently profiled Charlie Munger, Warren Buffett’s billionaire deputy, who credits his investing success to not mastering just 1 field — investment theory — but instead “mastering the multiple models which underlie reality.” In other words, Munger is an expert-generalist. The term was coined by Orit Gadiesh, chairman of Bain & Co, who describes an expert-generalist as:

Someone who has the ability and curiosity to master and collect expertise in many different disciplines, industries, skills, capabilities, countries, and topics, etc. He or she can then, without necessarily even realizing it, but often by design:

  1. Draw on that palette of diverse knowledge to recognize patterns and connect the dots across multiple areas.
  2. Drill deep to focus and perfect the thinking.

The article goes on to describe the strength of this strategy:

Being an expert-generalist allows individuals to quickly adapt to change. Research shows that they:

  • See the world more accurately and make better predictions of the future because they are not as susceptible to the biases and assumptions prevailing in any given field or community.
  • Have more breakthrough ideas, because they pull insights that already work in one area into ones where they haven’t been tried yet.
  • Build deeper connections with people who are different than them because of understanding of their perspectives.
  • Build more open networks, which allows them to serve as a connector between people in different groups. According to network science research, having an open network is the #1 predictor of career success.

All of this sounds exactly right. I had never thought about the benefits of being an expert-generalist, nor did I deliberately set out to be one (my natural curiosity got me here), but reading these descriptions gave form to something that previously felt intuitively true.

Read the full article here: https://qz.com/1179027/mental-models-how-warren-buffetts-billionaire-deputy-became-an-expert-generalist/

by Jeff Zych at February 26, 2018 01:29 AM

February 20, 2018

MIMS 2012

Stay Focused on the User by Switching Between Maker Mode and Listener Mode

When writing music, ambient music composer Brian Eno makes music that’s pleasurable to listen to by switching between “maker” mode and “listener” mode. He says:

I just start something simple [in the studio]—like a couple of tones that overlay each other—and then I come back in here and do emails or write or whatever I have to do. So as I’m listening, I’ll think, It would be nice if I had more harmonics in there. So I take a few minutes to go and fix that up, and I leave it playing. Sometimes that’s all that happens, and I do my emails and then go home. But other times, it starts to sound like a piece of music. So then I start working on it.

I always try to keep this balance with ambient pieces between making them and listening to them. If you’re only in maker mode all the time, you put too much in. […] As a maker, you tend to do too much, because you’re there with all the tools and you keep putting things in. As a listener, you’re happy with quite a lot less.

In other words, Eno makes great music by experiencing it the way his listeners do: by listening to it.

This is also a great lesson for product development teams: to make a great product, regularly use your product.

By switching between “maker” and “listener” modes, you put yourself in your user’s shoes and see your work through their eyes, which helps prevent you from “put[ting] too much in.”

This isn’t a replacement for user testing, of course. We are not our users. But in my experience, it’s all too common for product development teams to rarely, if ever, use what they’re building. No shade – I’ve been there. We get caught on the treadmill of building new features, always moving on to the next without stopping to catch our breath and use what we’ve built. This is how products devolve into an incomprehensible pile of features.

Eno’s process is an important reminder to keep your focus on the user by regularly switching between “maker” mode and “listener” mode.

by Jeff Zych at February 20, 2018 08:20 PM

February 13, 2018

Ph.D. student

that time they buried Talcott Parsons

Continuing with what seems like a never-ending side project to get a handle on computational social science methods, I’m doing a literature review on ‘big data’ sociological methods papers. Recent reading has led to two striking revelations.

The first is that Tufekci’s 2014 critique of Big Data methodologies is the best thing on the subject I’ve ever read. What it does is very clearly and precisely lay out the methodological pitfalls of sourcing the data from social media platforms: use of a platform as a model organism; selecting on a dependent variable; not taking into account exogenous, ecological, or field factors; and so on. I suspect this is old news to people who have more rigorously surveyed the literature on this in the past. But I’ve been exposed to and distracted by literature that seems aimed mainly to discredit social scientists who want to work with this data, rather than helpfully engaging them on the promises and limitations of their methods.

The second striking revelation is that for the second time in my literature survey, I’ve found a reference to that time when the field of cultural sociology decided they’d had enough of Talcott Parsons. From (Bail, 2014):

The capacity to capture all – or nearly all – relevant text on a given topic opens exciting new lines of meso- and macro-level inquiry into what environments (Bail forthcoming). Ecological or functionalist interpretations of culture have been unpopular with cultural sociologists for some time – most likely because the subfield defined itself as an alternative to the general theory proposed by Talcott Parsons (Alexander 2006). Yet many cultural sociologists also draw inspiration from Mary Douglas (e.g., Alexander 2006; Lamont 1992; Zelizer 1985), who – like Swidler – insists upon the need for our subfield to engage broader levels of analysis. “For sociology to accept that no functionalist arguments work,” writes Douglas (1986, p. 43), “is like cutting off one’s nose to spite one’s face.” To be fair, cultural sociologists have recently made several programmatic statements about the need to engage functional or ecological theories of culture. Abbott (1995), for example, explains the formation of boundaries between professional fields as the result of an evolutionary process. Similarly, Lieberson (2000), presents an ecological model of fashion trends in child-naming practices. In a review essay, Kaufman (2004) describes such ecological approaches to cultural sociology as one of the three most promising directions for the future of the subfield.

I’m not sure what’s going on with all these references to Talcott Parsons. I gather that at one time he was a giant in sociology, but that then a generation of sociologists tried to bury him. Then the next generation of sociologists reinvented structural functionalism with new language–“ecological approaches”, “field theory”?

One wonders what Talcott Parsons did or didn’t do to inspire such a rebellion.

References

Bail, Christopher A. “The cultural environment: measuring culture with big data.” Theory and Society 43.3-4 (2014): 465-482.

Tufekci, Zeynep. “Big Questions for Social Media Big Data: Representativeness, Validity and Other Methodological Pitfalls.” ICWSM 14 (2014): 505-514.

by Sebastian Benthall at February 13, 2018 06:15 PM

February 12, 2018

Ph.D. student

What happens if we lose the prior for sparse representations?

Noting this nice paper by Giannone et al., “Economic predictions with big data: The illusion of sparsity.” It concludes:

Summing up, strong prior beliefs favouring low-dimensional models appear to be necessary to support sparse representations. In most cases, the idea that the data are informative enough to identify sparse predictive models might be an illusion.

This is refreshing honesty.

In my experience, most disciplinary social sciences have a strong prior bias towards pithy explanatory theses. In a normal social science paper, what you want is a single research question, a single hypothesis. This thesis expresses the narrative of the paper. It’s what makes the paper compelling.

In mathematical model fitting, the term for such a simple hypothesis is a sparse predictive model. These models will have relatively few independent variables predicting the dependent variable. In machine learning, this sparsity is often accomplished by a regularization step. While generally well-motivated, regularization for sparsity can be done for reasons that are more aesthetic, or that reflect a stronger prior than is warranted.
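For concreteness, here is a minimal sketch (my own toy example, not from the Giannone et al. paper) of how an L1 regularization penalty produces a sparse model where ordinary least squares does not; the data are synthetic:

    import numpy as np
    from sklearn.linear_model import Lasso, LinearRegression

    rng = np.random.default_rng(0)

    # Synthetic data: 50 candidate predictors, only 3 of which actually matter.
    X = rng.normal(size=(200, 50))
    y = 2 * X[:, 0] - 1.5 * X[:, 3] + 0.5 * X[:, 7] + rng.normal(scale=0.1, size=200)

    dense = LinearRegression().fit(X, y)
    sparse = Lasso(alpha=0.1).fit(X, y)

    print("nonzero coefficients, OLS:  ", np.sum(np.abs(dense.coef_) > 1e-6))   # typically all 50
    print("nonzero coefficients, lasso:", np.sum(np.abs(sparse.coef_) > 1e-6))  # only a handful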

A consequence of this preference for sparsity, in my opinion, is the prevalence of literature on power law distributions vs. log normal explanations. (See this note on disorganized heavy tail distributions.) A dense model in a log-linear regression will predict a heavy-tailed dependent variable without great error. But it will be unsatisfying from the perspective of scientific explanation.

What seems to be an open question in the social sciences today is whether the culture of social science will change as a result of the robust statistical analysis of new data sets. As I’ve argued elsewhere (Benthall, 2016), if the culture does change, it will mean that narrative explanation will be less highly valued.

References

Benthall, Sebastian. “Philosophy of computational social science.” Cosmos and History: The Journal of Natural and Social Philosophy 12.2 (2016): 13-30.

Giannone, Domenico, Michele Lenza, and Giorgio E. Primiceri. “Economic predictions with big data: The illusion of sparsity.” (2017).

by Sebastian Benthall at February 12, 2018 03:44 AM

February 10, 2018

Ph.D. student

The therapeutic ethos in progressive neoliberalism (Fraser and Furedi)

I’ve read two pieces recently that I found helpful in understanding today’s politics, especially today’s identity politics, in a larger context.

The first is Nancy Fraser’s “From Progressive Neoliberalism to Trump–and Beyond” (link). It portrays the present (American but also global) political moment as a “crisis of hegemony”, using Gramscian terms, for which the presidency of Donald Trump is a poster child. Its main contribution is to point out that the hegemony that’s been in crisis is a hegemony of progressive neoliberalism, which sounds like an oxymoron but, Fraser argues, isn’t.

Rather, Fraser explains a two-dimensional political spectrum: there are politics of distribution, and there are politics of recognition.

To these ideas of Gramsci, we must add one more. Every hegemonic bloc embodies a set of assumptions about what is just and right and what is not. Since at least the mid-twentieth century in the United States and Europe, capitalist hegemony has been forged by combining two different aspects of right and justice—one focused on distribution, the other on recognition. The distributive aspect conveys a view about how society should allocate divisible goods, especially income. This aspect speaks to the economic structure of society and, however obliquely, to its class divisions. The recognition aspect expresses a sense of how society should apportion respect and esteem, the moral marks of membership and belonging. Focused on the status order of society, this aspect refers to its status hierarchies.

Fraser’s argument is that neoliberalism is a politics of distribution–it’s about using the market to distribute goods. I’m just going to assume that anybody reading this has a working knowledge of what neoliberalism means; if you don’t, I recommend reading Fraser’s article about it. Progressivism is a politics of recognition that was advanced by the New Democrats. Part of its political potency has been its consistency with neoliberalism:

At the core of this ethos were ideals of “diversity,” women’s “empowerment,” and LGBTQ rights; post-racialism, multiculturalism, and environmentalism. These ideals were interpreted in a specific, limited way that was fully compatible with the Goldman Sachsification of the U.S. economy…. The progressive-neoliberal program for a just status order did not aim to abolish social hierarchy but to “diversify” it, “empowering” “talented” women, people of color, and sexual minorities to rise to the top. And that ideal was inherently class specific: geared to ensuring that “deserving” individuals from “underrepresented groups” could attain positions and pay on a par with the straight white men of their own class.

A less academic, more Wall Street Journal-reading member of the commentariat might be more comfortable with the terms “fiscal conservatism” and “social liberalism”. And indeed, Fraser’s argument seems mainly to be that the hegemony of the Obama era was fiscally conservative but socially liberal. In a sense, it was the true libertarians that were winning, which is an interesting take I hadn’t heard before.

The problem, from Fraser’s perspective, is that neoliberalism concentrates wealth and carries the seeds of its own revolution, allowing for Trump to run on a combination of reactionary politics of recognition (social conservatism) with a populist politics of distribution (economic liberalism: big spending and protectionism). He won, and then sold out to neoliberalism, giving us the currently prevailing combination of neoliberalism and reactionary social policy. Which, by the way, we would be calling neoconservatism if it were 15 years ago. Maybe it’s time to resuscitate this term.

Fraser thinks the world would be a better place if progressive populists could establish themselves as an effective counterhegemonic bloc.

The second piece I’ve read on this recently is Frank Furedi’s “The hidden history of identity politics” (link). Pairing Fraser with Furedi is perhaps unlikely because, to put it bluntly, Fraser is a feminist and Furedi, as far as I can tell from this one piece, isn’t. However, both are serious social historians and there’s a lot of overlap in the stories they tell. That is in itself interesting from a scholarly perspective, for one trying to triangulate an accurate account of political history.

Furedi’s piece is about “identity politics” broadly, including both its right-wing and left-wing incarnations. So, we’re talking about what Fraser calls the politics of recognition here. On a first pass, Furedi’s point is that Enlightenment universalist values have been challenged by both right- and left-wing identity politics since the late 18th-century Romantic nationalist movements in Europe, which led to the World Wars and the Holocaust. Maybe, Furedi’s piece suggests, abandoning Enlightenment universalist values was a bad idea.

Although expressed through a radical rhetoric of liberation and empowerment, the shift towards identity politics was conservative in impulse. It was a sensibility that celebrated the particular and which regarded the aspiration for universal values with suspicion. Hence the politics of identity focused on the consciousness of the self and on how the self was perceived. Identity politics was, and continues to be, the politics of ‘it’s all about me’.

Strikingly, Furedi’s argument is that the left took the “cultural turn” into recognition politics essentially because of its inability to maintain a left-wing politics of redistribution, and that this happened in the 70’s. But this in turn undermined the cause of the economic left. Why? Because economic populism requires social solidarity, while identity politics is necessarily a politics of difference. Solidarity within an identity group can cause gains for that identity group, but at the expense of political gains that could be won with an even more unified popular political force.

The emergence of different identity-based groups during the 1970s mirrored the lowering of expectations on the part of the left. This new sensibility was most strikingly expressed by the so-called ‘cultural turn’ of the left. The focus on the politics of culture, on image and representation, distracted the left from its traditional interest in social solidarity. And the most significant feature of the cultural turn was its sacralisation of identity. The ideals of difference and diversity had displaced those of human solidarity.

So far, Furedi is in agreement with Fraser that hegemonic neoliberalism has been the status quo since the 70’s, and that the main political battles have been over identity recognition. Furedi’s point, which I find interesting, is that these battles over identity recognition undermine the cause of economic populism. In short, neoliberals and neocons can use identity to divide and conquer their shared political opponents and keep things as neo- as possible.

This is all rather old news, though a nice schematic representation of it.

Where Furedi’s piece gets interesting is where it draws out the later movements in identity politics, which he describes as a shift away from political and economic conditions toward a politics first of victimhood and then of a specific therapeutic ethos.

The victimhood move grounded the politics of recognition in the authoritative status of the victim. While originally used for progressive purposes, this move was adopted outside of the progressive movement as early as the 1980s.

A pervasive sense of victimisation was probably the most distinct cultural legacy of this era. The authority of the victim was ascendant. Sections of both the left and the right endorsed the legitimacy of the victim’s authoritative status. This meant that victimhood became an important cultural resource for identity construction. At times it seemed that everyone wanted to embrace the victim label. Competitive victimhood quickly led to attempts to create a hierarchy of victims. According to a study by an American sociologist, the different movements joined in an informal way to ‘generate a common mood of victimisation, moral indignation, and a self-righteous hostility against the common enemy – the white male’ (5). Not that the white male was excluded from the ambit of victimhood for long. In the 1980s, a new men’s movement emerged insisting that men, too, were an unrecognised and marginalised group of victims.

This is interesting in part because there’s a tendency today to see the “alt-right” of reactionary recognition politics as a very recent phenomenon. According to Furedi, it isn’t; it’s part of the history of identity politics in general. We just thought it was dead because, as Fraser argues, progressive neoliberalism had attained hegemony.

Buried deep in the piece is arguably Furedi’s most controversial and pointedly argued claim, which is about the “therapeutic ethos” of identity politics since the 1970s, an ethos that resonates quite deeply today. The idea here is that principles from psychotherapy have become part of the repertoire of left-wing activism. A prescription against “blaming the victim” transformed into a prescription toward “believing the victim”, which in turn creates a culture where only those with lived experience of a human condition may speak with authority on it. This authority is ambiguous, because it is at once the moral authority of the victim and the authority one must give a therapeutic patient in describing their own experiences for the sake of their mental health.

The obligation to believe and not criticise individuals claiming victim identity is justified on therapeutic grounds. Criticism is said to constitute a form of psychological re-victimisation and therefore causes psychic wounding and mental harm. This therapeutically informed argument against the exercise of critical judgement and free speech regards criticism as an attack not just on views and opinions, but also on the person holding them. The result is censorious and illiberal. That is why in society, and especially on university campuses, it is often impossible to debate certain issues.

Furedi is concerned with how the therapeutic ethos in identity politics shuts down liberal discourse, which further erodes the social solidarity that would advance political populism. In therapy, your own individual self-satisfaction and validation is the most important thing. In the politics of solidarity, this is absolutely not the case. This is a subtle critique of Fraser’s claim that progressive populism is a potentially viable counterhegemonic bloc. We could imagine a synthetic point of view, which is that progressive populism is viable, but only if progressives drop the therapeutic ethos. Or, to put it another way, if “[f]rom their standpoint, any criticism of the causes promoted by identitarians is a cultural crime”, then that criminalizes the kind of discourse that’s necessary for political solidarity. That serves to advantage the neoliberal or neoconservative agenda.

This is, Furedi points out, easier to see in light of history:

Outwardly, the latest version of identity politics – which is distinguished by a synthesis of victim consciousness and concern with therapeutic validation – appears to have little in common with its 19th-century predecessor. However, in one important respect it represents a continuation of the particularist outlook and epistemology of 19th-century identitarians. Both versions insist that only those who lived in and experienced the particular culture that underpins their identity can understand their reality. In this sense, identity provides a patent on who can have a say or a voice about matters pertaining to a particular culture.

While I think they do a lot to frame the present political conditions, I don’t agree with everything in either of these articles. There are a few points of tension which I wish I knew more about.

The first is the connection made in some media today between the therapeutic needs of society’s victims and economic distributional justice. Perhaps it’s the nexus of these two political flows that makes the topic of workplace harassment and culture in its most symbolic forms such a hot topic today. It is, in a sense, the quintessential progressive neoliberal problem, in that it aligns the politics of distribution with the politics of recognition while employing the therapeutic ethos. The argument goes: since market logic is fair (the neoliberal position), if there is unfair distribution it must be because the politics of recognition are unfair (progressivism). That’s because if there is inadequate recognition, then the societal victims will feel invalidated, preventing them from asserting themselves effectively in the workplace (therapeutic ethos). To put it another way, distributional inequality is being represented as a consequence of a market externality, which is the psychological difficulty imposed by social and economic inequality. A progressive politics of recognition is a therapeutic intervention designed to alleviate this psychological difficulty, which corrects the meritocratic market logic.

One valid reaction to this is: so what? Furedi and Fraser are both essentially card-carrying socialists. If you’re a card-carrying socialist (maybe because you have a universalist sense of distributional justice), then you might see the emphasis on workplace harassment as a distraction from a broader socialist agenda. But most people aren’t card-carrying socialist academics; most people go to work and would prefer not to be harassed.

The other thing I would like to know more about is to what extent the demands of the therapeutic ethos are a political rhetorical convenience and to what extent they are a matter of ground truth. The sweeping therapeutic progressive narrative outlined by Furedi, wherein vast swathes of society (i.e., all women, all people of color, maybe all conservatives in liberal-dominant institutions, etc.) are so structurally victimized that therapy-grade levels of validation are necessary for them to function unharmed in universities and workplaces, is truly a tough pill to swallow. On the other hand, a theory of justice that discounts the genuine therapeutic needs of half the population can hardly be described as a “universalist” one.

Is there a resolution to this epistemic and political crisis? If I had to drop everything and look for one, it would be in the clinical psychological literature. What I want to know is how grounded the therapeutic ethos is in (a) scientific clinical psychology, and (b) the epidemiology of mental illness. Is it the case that structural inequality is so traumatizing (either directly or indirectly) that the fragmentation of epistemic culture is necessary as a salve for it? Or is this a political fiction? I don’t know the answer.

by Sebastian Benthall at February 10, 2018 04:48 PM

February 07, 2018

MIMS 2011

Wikipedia’s relationship to academia and academics

I was recently quoted in an article for Science News about the relationship between academia and Wikipedia by Bethany Brookshire. I was asked to comment on a recent paper by MIT Sloan‘s Neil Thompson and Douglas Hanley, who investigated the relationship between Wikipedia articles and scientific papers using examples from chemistry and econometrics. There are a bunch of studies on a similar topic (if you’re interested, here is a good place to start) and I’ve been working on this topic – but from a very different angle – for a qualitative study to be published soon. I thought I would share my answers to the interview questions here since many of them are questions that friends and colleagues ask regularly about citing Wikipedia articles and about quality issues on Wikipedia.

Have you ever edited Wikipedia articles?  What do you think of the process?

Some, yes. Being a successful editor on English Wikipedia is a complicated process, particularly if you’re writing about topics that are either controversial or outside the purview of the majority of Western editors. Editing is complicated not only because it is technical (even with the excellent new tools that have been developed to support editing without having to learn wiki markup) – most of the complications come with knowing the norms, the rules and the power dynamics at play.

You’ve worked previously with Wikipedia on things like verification practices. What are the verification practices currently?

That’s a big question 🙂 Verification practices involve a complicated set of norms, rules and technologies. Editors may (or may not) verify their statements by checking sources, but the power of Wikipedia’s claim-making practice lies in the norms of questioning  unsourced claims using the “citation needed” tag and by any other editor being able to remove claims that they believe to be incorrect. This, of course, does not guarantee that every claim on Wikipedia is factually correct, but it does enable the dynamic labelling of unverified claims and the ability to set verification tasks in an iterative fashion.

Many people in academia view Wikipedia as an unreliable source and do not encourage students to use it. What do you think of this?

Academic use of sources is a very contextual practice. We refer to sources in our own papers and publications not only when we are supporting the claims they contain, but also when we dispute them. That’s the first point: even if Wikipedia was generally unreliable, that is not a good reason for denying its use. The second point is that Wikipedia can be a very reliable source for particular types of information. Affirming the claims made in a particular article, if that was our goal in using it, would require verifying the information that we are reinforcing through citation and in citing the particular version (the “oldid” in Wikipedia terms) that we are referring to. Wikipedia can be used very soundly by academics and students – we just need to do so carefully and with an understanding of the context of citation – something we should be doing generally, not only on Wikipedia.

You work in a highly social media savvy field, what is the general attitude of your colleagues toward Wikipedia as a research resource? Do you think it differs from the attitudes of other academics?

I would say that Wikipedia is widely recognized by academics, including those of my colleagues who don’t specifically conduct Wikipedia research, as a source that is fine to visit but not to cite.

What did you think of this particular paper overall?

I thought that it was a really good paper. Excellent research design and very solid analysis. The only weakness, I would argue, would be that there are quite different results for chemistry and econometrics and that those differences aren’t adequately accounted for. More on that below.

The authors were attempting a causational study by adding Wikipedia articles (while leaving some written but unadded) and looking at how the phrases translated to the scientific literature six months later. Is this a long enough period of time?

This seems to be an appropriate amount of time to study, but there are probably quite important differences between fields of study that might influence results. The volume of publication (social scientists and humanities scholars tend to produce much lower volumes of publications, and publication thus tends to be spread over a longer time, than natural science and engineering subjects, for example), the volume of explanatory or definitional material in publications (requiring greater use of the literature), and the extent to which academics in the particular field consult and contribute to Wikipedia – all might affect how different fields of study influence and are influenced by Wikipedia articles.

Do you think the authors achieved evidence of causation here?

Yes. But again, causation in a single field, i.e. chemistry.

Is it important to know whether Wikipedia is influencing the scientific literature? Why or why not?

Yes. It is important to know whether Wikipedia is influencing scientific literature – particularly because we need to know where power to influence knowledge is located (in order to ensure that it is being fairly governed and maintained for the development of accurate and unbiased public knowledge).

Do you think papers like this will impact how scientists view and use Wikipedia?

As far as I know, this is the first paper that attributes a strong link between what is on Wikipedia and the development of science. I am sure that it will influence how scientists and other academics view and use Wikipedia – particularly in driving initiatives where scientists contribute to Wikipedia either directly or via initiatives such as PLoS’s Topic Pages.

Is there anything especially important to emphasize?

The most important thing to emphasize is the differences between fields, which I think need to be better explained. I definitely think that certain types of academic research are more in line with Wikipedia’s way of working, forms and styles of publication, and epistemology, and that Wikipedia will not have the same influence on other fields.

by Heather Ford at February 07, 2018 08:10 AM

February 06, 2018

Ph.D. student

Values, norms, and beliefs: units of analysis in research on culture

Much of the contemporary critical discussion about technology in society and ethical design hinges on the term “values”. Privacy is one such value, according to Mulligan, Koopman, and Doty (2016), drawing on Westin and Post. Contextual Integrity (Nissenbaum, 2009) argues that privacy is a function of norms, and that norms get their legitimacy from, among other sources, societal values. The Data and Society Research Institute lists “values” as one of the cross-cutting themes of its research. Richmond Wong (2017) has been working on eliciting values reflections as a tool in privacy by design. And so on.

As much as ‘values’ get emphasis in this literary corner, I have been unsatisfied with how these literatures represent values as either sociological or philosophical phenomena. How are values distributed in society? Are they stable under different methods of measurement? Do they really have ethical entailments, or are they really just a kind of emotive expression?

For only distantly related reasons, I’ve been looking into the literature on quantitative measurement of culture. I’m doing a bit of a literature review and need your recommendations! But an early hit is Marsden and Swingle’s “Conceptualizing and measuring culture in surveys: Values, strategies, and symbols” (1994), which is a straightforward social science methods piece apparently written before either rejections of positivism or Internet-based research became so destructively fashionable.

A useful passage comes early:

To frame our discussion of the content of the culture module, we have drawn on distinctions made in Peterson’s (1979: 137-138) review of cultural research in sociology. Peterson observes that sociological work published in the late 1940s and 1950s treated values – conceptualizations of desirable end-states – and the behavioral norms they specify as the principal explanatory elements of culture. Talcott Parsons (1951) figured prominently in this school of thought, and more recent survey studies of culture and cultural change in both the United States (Rokeach, 1973) and Europe (Inglehart, 1977) continue the Parsonsian tradition of examining values as a core concept.

This was a surprise! Talcott Parsons is not a name you hear every day in the world of sociology of technology. That’s odd, because as far as I can tell he’s one of these robust and straightforwardly scientific sociologists. The main complaint against him, if I’ve heard any, is that he’s dry. I’ve never heard, despite his being tied to structural functionalism, that his ideas have been substantively empirically refuted (unlike Durkheim, say).

So the mystery is…whatever happened to the legacy of Talcott Parsons? And how is it represented, if at all, in contemporary sociological research?

One reason why we don’t hear much about Parsons may be because the sociological community moved from measuring “values” to measuring “beliefs”. Marsden and Swingle go on:

Cultural sociologists writing since the late 1970s however, have accented other elements of culture. These include, especially, beliefs and expressive symbols. Peterson’s (1979: 138) usage of “beliefs” refers to “existential statements about how the world operates that often serve to justify value and norms”. As such, they are less to be understood as desirable end-states in and of themselves, but instead as habits or styles of thought that people draw upon, especially in unstructured situations (Swidler, 1986).

Intuitively, this makes sense. When we look at the contemporary seemingly mortal combat of partisan rhetoric and tribalist propaganda, a lot of what we encounter are beliefs and differences in beliefs. As suggested in this text, beliefs justify values and norms, meaning that even values (which you might have thought are the source of all justification) get their meaning from a kind of world-view, rather than being held in a simple way.

That makes a lot of sense. There’s often a lot more commonality in values than in ways those values should be interpreted or applied. Everybody cares about fairness, for example. What people disagree about, often vehemently, is what is fair, and that’s because (I’ll argue here) people have widely varying beliefs about the world and what’s important.

To put it another way, the Humean model where we have beliefs and values separately and then combine the two in an instrumental calculus is wrong, and we’ve known it’s wrong since the ’70s. Instead, we have complexes of normatively thick beliefs that reinforce each other into a worldview. When we’re asked about our values, we are abstracting in a derivative way from this complex of frames, rather than getting at a more core feature of personality or culture.

A great book on this topic is Hilary Putnam’s The collapse of the fact/value dichotomy (2002), just for example. It would be nice if more of this metaethical theory and sociology of values surfaced in the values in design literature, despite its being distinctly off-trend.

References

Marsden, Peter V., and Joseph F. Swingle. “Conceptualizing and measuring culture in surveys: Values, strategies, and symbols.” Poetics 22.4 (1994): 269-289.

Mulligan, Deirdre K., Colin Koopman, and Nick Doty. “Privacy is an essentially contested concept: a multi-dimensional analytic for mapping privacy.” Phil. Trans. R. Soc. A 374.2083 (2016): 20160118.

Nissenbaum, Helen. Privacy in context: Technology, policy, and the integrity of social life. Stanford University Press, 2009.

Putnam, Hilary. The collapse of the fact/value dichotomy and other essays. Harvard University Press, 2002.

Wong, Richmond Y., et al. “Eliciting Values Reflections by Engaging Privacy Futures Using Design Workbooks.” (2017).

by Sebastian Benthall at February 06, 2018 08:40 PM

January 26, 2018

Ph.D. student

Call for abstracts for critical data studies / human contexts and ethics track at the 2018 4S Annual Conference

4S 2018 Open Panel 101: Critical Data Studies: Human Contexts and Ethics

We’re pleased to be organizing one of the open panels at the 2018 Meeting of the Society for the Social Studies of Science (4S). Please submit an abstract!

Deadline: 1 February 2018, submit 250 word abstract here

Conference: 29 August - 1 September 2018, Sydney, Australia

Convenors:

Call for abstracts

In this continuation of the previous Critical Data Studies / Studying Data Critically tracks at 4S (see also Dalton and Thatcher 2014; Iliadis and Russo 2016), we invite papers that address the organizational, social, cultural, ethical, and otherwise human impacts of data science applications in areas like science, education, consumer products, labor and workforce management, bureaucracies and administration, media platforms, or families. Ethnographies, case studies, and theoretical works that take a situated approach to data work, practices, politics, and/or infrastructures in specific contexts are all welcome.

Datafication and autonomous computational systems and practices are producing significant transformations in our analytical and deontological frameworks, sometimes with objectionable consequences (O’Neil 2016; Barocas, Bradley, Honavar, and Provost 2017). Whether we’re looking at the ways in which new artefacts are constructed or at their social consequences, questions of value and valuation or objectivity and operationalization are indissociable from the processes of innovation and the principles of fairness, reliability, usability, privacy, social justice, and harm avoidance (Campolo, Sanfilippo, Whittaker, and Crawford, 2017).

By reflecting on situated unintended and objectionable consequences, we will gather a collection of works that illuminate one or several aspects of the unfolding of controversies and ethical challenges posed by these new systems and practices. We’re specifically interested in pieces that provide innovative theoretical insights about ethics and controversies, fieldwork, and reflexivity about the researcher’s positionality and her own ethical practices. We also encourage submissions from practitioners and educators who have worked to infuse ethical questions and concerns into a workflow, pedagogical strategy, collaboration, or intervention.

Submit a 250 word abstract here.

by R. Stuart Geiger at January 26, 2018 08:00 AM

January 24, 2018

MIMS 2014

A Possible Explanation why America does Nothing about Gun Control

Ever since the Las Vegas mass shooting last October, I’ve wanted to blog about gun control. But I also wanted to wait—to see whether that mass shooting, though the deadliest to date in U.S. history, would quickly slip into the dull recesses of the American public’s subconscious just like all the rest. It did, and once again we find ourselves in the same sorry cycle of inaction that by this point is painfully familiar to everyone.

I also recently came across a 2016 study, by Kalesan, Weinberg, and Galea, which found that on average, Americans are 99% likely to know someone either killed or injured by gun violence over the course of their lifetime. That made me wonder: how can it possibly be that Americans remain so paralyzed on this issue if it affects pretty much everyone?

It could be that the ubiquity of gun violence is actually the thing that causes the paralysis. That is, gun violence affects almost everyone, just as Kalesan et al argue, but the reactions Americans have to the experience are diametrically opposed to one another. These reactions result in hardened views that inform people’s voting choices, and since these choices more or less divide the country in half across partisan lines, the result is an equilibrium where nothing can ever get done on gun control. So on this reading, it’s not so much a paralysis of inaction as a tense political stalemate.

But it could also be something else. Kalesan et al calculate the likelihood of knowing someone killed or injured by general gun violence over the course of a lifetime, but they don’t focus on mass shootings in particular. Their methodology is based on basic principles of probability and some social network theory that posits people have an effective social network numbering a little fewer than 300 people. If you look at the Kalesan et al paper, it becomes clear that their methodology can also be used to calculate the likelihood of knowing someone killed or injured in a mass shooting. It’s just a matter of substituting the rate of mass shootings for the rate of general gun violence in their probability calculation.

It turns out that the probability of knowing someone killed/injured in a mass shooting is much, much lower than for gun violence more generally. Even with a relatively generous definition of what counts as a mass shooting (four or more people injured/killed not including the shooter, according to the Gun Violence Archive), this probability is about 10%. When you only include incidents that have received major national news media attention—based on a list compiled by Mother Jones—that probability drops to about 0.36%.
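To make the arithmetic concrete, here is a minimal sketch of that kind of calculation. The function name, the network size, and the per-person lifetime rates below are illustrative placeholders rather than the actual figures from Kalesan et al., and independence across network members is assumed for simplicity.

    # Sketch of the back-of-the-envelope probability described above.
    # The rates and the network size are hypothetical placeholders.
    def p_know_a_victim(per_person_lifetime_rate, network_size=300):
        # Probability that at least one of your ~300 contacts is a victim,
        # assuming each contact is affected independently.
        return 1 - (1 - per_person_lifetime_rate) ** network_size

    # A hypothetical ~1.5% lifetime rate of being killed or injured by any
    # gun violence yields roughly a 99% chance of knowing a victim...
    print(p_know_a_victim(0.015))    # ~0.99
    # ...while a much lower hypothetical mass-shooting rate yields ~10%.
    print(p_know_a_victim(0.00035))  # ~0.10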

So, it’s possible the reason Americans continue to drag their feet on gun control is that the problem just doesn’t personally affect enough people. Curiously, the even lower likelihood of knowing someone killed or injured in a terrorist attack doesn’t seem to hinder politicians from working aggressively to prevent further terrorist attacks. Still, if more people were personally affected by mass shootings, more might change their minds on gun control, like Caleb Keeter, the Josh Abbott Band guitarist who survived the Las Vegas shooting.

by dgreis at January 24, 2018 05:58 PM

January 21, 2018

Ph.D. student

It’s just like what happened when they invented calculus…

I’ve picked up this delightful book again: David Foster Wallace’s Everything and More: A Compact History of Infinity (2003). It is the David Foster Wallace (the brilliant and sadly dead writer and novelist you’ve heard of) writing a history of mathematics, starting with the Ancient Greeks and building up to the discovery of infinity by Georg Cantor.

It’s a brilliantly written book, written to educate its reader without any doctrinal baggage. Wallace doesn’t care if he’s a mathematician or a historian; he’s just a great writer. And what comes through in the book is truly a history of the idea of infinity, with all the ways that it was a reflection of the intellectual climate and preconceptions of the mathematicians working on it. The book is full of mathematical proofs that are blended seamlessly into the casual prose. The whole idea is to build up the excitement and wonder of mathematical discovery, and to convey just how hard it was to come to appreciate infinity in the way we understand it mathematically today. A lot of this development had to do with the way mathematicians and scientists thought about their relationship to abstraction.

It’s a wonderful book that, refreshingly, isn’t obsessed with how everything has been digitized. Rather (just as one gem), it offers a historical perspective on what was perhaps an even more profound change: that time in the 1700s when suddenly everything started to be looked at as an expression of mathematical calculus.

To quote the relevant passage:

As has been at least implied and will now be exposited on, the math-historical consensus is that the late 1600s mark the start of a modern Golden Age in which there are far more significant mathematical advances than anytime else in world history. Now things start moving really fast, and we can do little more than try to build a sort of flagstone path from early work on functions to Cantor’s infinicopia.

Two large-scale changes in the world of math to note very quickly. The first involves abstraction. Pretty much all math from the Greeks to Galileo is empirically based: math concepts are straightforward abstractions from real-world experience. This is one reason why geometry (along with Aristotle) dominated mathematical reasoning for so long. The modern transition from geometric to algebraic reasoning was itself a symptom of a larger shift. By 1600, entities like zero, negative integers, and irrationals are used routinely. Now start adding in the subsequent decades’ introductions of complex numbers, Napierian logarithms, higher-degree polynomials and literal coefficients in algebra–plus of course eventually the 1st and 2nd derivative and the integral–and it’s clear that as of some pre-Enlightenment date math has gotten so remote from any sort of real-world observation that we and Saussure can say verily it is now, as a system of symbols, “independent of the objects designated,” i.e. that math is now concerned much more with the logical relations between abstract concepts than with any particular correspondence between those concepts and physical reality. The point: It’s in the seventeenth century that math becomes primarily a system of abstractions from other abstractions instead of from the world.

Which makes the second big change seem paradoxical: math’s new hyperabstractness turns out to work incredibly well in real-world applications. In science, engineering, physics, etc. Take, for one obvious example, calculus, which is exponentially more abstract than any sort of ‘practical’ math before (like, from what real-world observation does one dream up the idea that an object’s velocity and a curve’s subtending area have anything to do with each other?), and yet it is unprecedentedly good for representing/explaining motion and acceleration, gravity, planetary movements, heat–everything science tells us is real about the real world. Not at all for nothing does D. Berlinski call calculus “the story this world first told itself as it became the modern world.” Because what the modern world’s about, what it is, is science. And it’s in the seventeenth century that the marriage of math and science is consummated, the Scientific Revolution both causing and caused by the Math Explosion because science–increasingly freed of its Aristotelian hangups with substance v. matter and potentiality v. actuality–becomes now essentially a mathematical enterprise in which force, motion, mass, and law-as-formula compose the new template for understanding how reality works. By the late 1600s, serious math is part of astronomy, mechanics, geography, civil engineering, city planning, stonecutting, carpentry, metallurgy, chemistry, hydraulics, optics, lens-grinding, military strategy, gun- and cannon-design, winemaking, architecture, music, shipbuilding, timekeeping, calendar-reckoning; everything.

We take these changes for granted now.

But once, this was a scientific revolution that transformed, as Wallace observed, everything.

Maybe this is the best historical analogy for the digital transformation we’ve been experiencing in the past decade.

by Sebastian Benthall at January 21, 2018 02:07 AM

January 19, 2018

Ph.D. student

May there be shared blocklists

A reminder:

Unconstrained media access to a person is indistinguishable from harassment.

It pains me to watch my grandfather suffer from surfeit of communication. He can't keep up with the mail he receives each day. Because of his noble impulse to charity and having given money to causes he supports (evangelical churches, military veterans, disadvantaged children), those charities sell his name for use by other charities (I use "charity" very loosely), and he is inundated with requests for money. Very frequently, those requests include a "gift", apparently in order to induce a sense of obligation: a small calendar, a pen and pad of paper, refrigerator magnets, return address labels, a crisp dollar bill. Those monetary ones surprised me at first, but they are common and if some small percentage of people feel an obligation to write a $50 check, then sending out a $1 to each person makes it worth their while (though it must not help the purported charitable cause very much, not a high priority). Many now include a handful of US coins stuck to the response card -- ostensibly to imply that just a few cents a day can make a difference, but, I suspect, to make it harder to recycle the mail directly because it includes metal as well as paper. (I throw these in the recycling anyway.) Some of these solicitations include a warning on the outside that I hadn't seen before, indicating that it's a federal criminal offense to open postal mail or to keep it from the recipient. Perhaps this is a threat to caregivers to discourage them from throwing away this junk mail for their family members; I suspect more likely, it encourages the suspicion in the recipient that someone might try to filter their mail, and that to do so would be unjust, even criminal, that anyone trying to help them by sorting their mail should not be trusted. It disgusts me.

But the mails are nothing compared to the active intrusiveness of other media. Take conservative talk radio, which my grandfather listened to for years as a way to keep sound in the house and fend off loneliness. It's often on in the house at a fairly low volume, but it's ever present, and it washes over the brain. I suspect most people could never genuinely understand Rush Limbaugh's rants, but coherent argument is not the point, it's just the repetition of a claim, not even a claim, just a general impression. For years, my grandfather felt conflicted, as many of his beloved family members (liberal and conservative) worked for the federal government, but he knew, in some quite vague but very deep way, that everyone involved with the federal government was a menace to freedom. He tells me explicitly that if you hear something often enough, you start to think it must be true.

And then there's the TV, now on and blaring 24 hours a day, whether he's asleep or awake. He watches old John Wayne movies or NCIS marathons. Or, more accurately, he watches endless loud commercials, with some snippets of quiet movies or television shows interspersed between them. The commercials repeat endlessly throughout the day and I start to feel confused, stressed and tired within a few hours of arriving at his house. I suspect advertisers on those channels are happy with the return they receive; with no knowledge of the source, he'll tell me that he "really ought to" get or try some product or another for around the house. He can't hear me, or other guests, or family he's talking to on the phone when a commercial is on, because they're so loud.

Compared to those media, email is clear and unintrusive, though its utility is still lost in inundation. Email messages that start with "Fw: FWD: FW: FW FW Fw:" cover most of his inbox; if he clicks on one and scrolls down far enough he can get to the message, a joke about Obama and monkeys, or a cute picture of a kitten. He can sometimes get to the link to photos of the great-grand-children, but after clicking the link he's faced with a moving pop-up box asking him to login, covering the faces of the children. To close that box, he must identify and click on a small "x" in very light grey on a white background. He can use the Web for his bible study and knows it can be used for other purposes, but ubiquitous and intrusive prompts (advertising or otherwise) typically distract him from other tasks.

My grandfather grew up with no experience with media of these kinds, and had no time to develop filters or practices to avoid these intrusions. At his age, it is probably too late to learn a new mindset to throw out mail without a second thought or immediately scroll down a webpage. With a lax regulatory environment and unfamiliar with filtering, he suffers -- financially and emotionally -- from these exploitations on a daily basis. Mail, email, broadcast video, radio and telephone could provide an enormous wealth of benefits for an elderly person living alone: information, entertainment, communication, companionship, edification. But those advantages are made mostly inaccessible.

Younger generations suffer other intrusions of media. Online harassment is widely experienced (its severity varies, by gender among other things); your social media account probably lets you block an account that sends you a threat or other unwelcome message, but it probably doesn't provide mitigations against dogpiling, where a malicious actor encourages their followers to pursue you. Online harassment is important because of the severity and chilling impact on speech, but an analogous problem of over-access exists with other attention-grabbing prompts. What fraction of smartphone users know how to filter the notifications that buzz or ring their phone? Notifications are typically on by default rather than opt-in with permission. Smartphone users can, even without the prompt of the numerous thinkpieces on the topic, describe the negative effects on their attention and well-being.

The capability to filter access to ourselves must be a fundamental principle of online communication: it may be the key privacy concern of our time. Effective tools that allow us to control the information we're exposed to are necessities for freedom from harassment; they are necessities for genuine accessibility of information and free expression. May there be shared blocklists, content warnings, notification silencers, readability modes and so much more.

by nick@npdoty.name at January 19, 2018 10:50 PM

January 15, 2018

Ph.D. student

social structure and the private sector

The Human Cell

Academic social scientists leaning towards the public intellectual end of the spectrum love to talk about social norms.

This is perhaps motivated by the fact that these intellectual figures are prominent in the public sphere. The public sphere is where these norms are supposed to solidify, and these intellectuals would like to emphasize their own importance.

I don’t exclude myself from this category of persons. A lot of my work has been about social norms and technology design (Benthall, 2014; Benthall, Gürses and Nissenbaum, 2017).

But I also work in the private sector, and it’s striking how differently things look from that perspective. It’s natural for academics who participate more in the public sphere than the private sector to be biased in their view of social structure. From the perspective of being able to accurately understand what’s going on, you have to think about both at once.

That’s challenging for a lot of reasons, one of which is that the private sector is a lot less transparent than the public sphere. In general the internals of actors in the private sector are not open to the scrutiny of commentariat onlookers. Information is one of the many resources traded in pairwise interactions; when it is divulged, it is divulged strategically, introducing bias. So it’s hard to get a general picture of the private sector, even though it accounts for a much larger proportion of social structure than the public sphere does. In other words, public spheres are highly over-represented in analyses of social structure due to the availability of public data about them. That is worrisome from an analytic perspective.

It’s well worth making the point that the public/private dichotomy is problematic. Contextual integrity theory (Nissenbaum, 2009) argues that modern society is differentiated among many distinct spheres, each bound by its own social norms. Nissenbaum actually has a quite different notion of norm formation from, say, Habermas. For Nissenbaum, norms evolve over social history, but may be implicit. Contrast this with Habermas’s view that norms are the result of communicative rationality, which is an explicit and linguistically mediated process. The public sphere is a big deal for Habermas. Nissenbaum, a scholar of privacy, rejects the idea of the ‘public sphere’ simpliciter. Rather, social spheres self-regulate and privacy, which she defines as appropriate information flow, is maintained when information flows according to these multiple self-regulatory regimes.

I believe Nissenbaum is correct on this point of societal differentiation and norm formation. This nuanced understanding of privacy as the differentiated management of information flow challenges any simplistic notion of the public sphere. Does it challenge a simplistic notion of the private sector?

Naturally, the private sector doesn’t exist in a vacuum. In the modern economy, companies are accountable to the law, especially contract law. They have to pay their taxes. They have to deal with public relations and are regulated as to how they manage information flows internally. Employees can sue their employers, etc. So just as the ‘public sphere’ doesn’t permit a total free-for-all of information flow (some kinds of information flow in public are against social norms!), so too does the ‘private sector’ not involve complete secrecy from the public.

As a hypothesis, we can posit that what makes the private sector different is that the relevant social structures are less open in their relations with each other than they are in the public sphere. We can imagine an autonomous social entity like a biological cell. Internally it may have a lot of interesting structure and organelles. Its membrane prevents this complexity leaking out into the aether, or plasma, or whatever it is that human cells float around in. Indeed, this membrane is necessary for the proper functioning of the organelles, which in turn allows the cell to interact properly with other cells to form a larger organism. Echoes of Francisco Varela.

It’s interesting that this may actually be a quantifiable difference. One way of modeling the difference between the internal and external-facing complexity of an entity is using information theory. The more complex internal state of the entity has higher entropy than the membrane. The fact that the membrane causally mediates interactions between the internals and the environment limits information flow between them; this is captured by the Data Processing Inequality. The reduced information flow between the system internals and externals is quantified as lower mutual information between the two domains. At zero mutual information, the two domains are statistically independent of each other.
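As a rough numerical illustration of this idea, here is a minimal sketch with made-up toy distributions: an internal state X is coarse-grained by a membrane M, which the environment Y reads noisily, so X → M → Y form a Markov chain and the Data Processing Inequality guarantees I(X;Y) ≤ I(X;M). All of the probabilities below are hypothetical placeholders.

    import numpy as np

    def mutual_information(joint):
        # I(A;B) in bits, computed from a joint probability table p(a, b).
        pa = joint.sum(axis=1, keepdims=True)
        pb = joint.sum(axis=0, keepdims=True)
        nz = joint > 0
        return float((joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz])).sum())

    # Hypothetical toy model: 4 internal states, coarse-grained into 2
    # membrane states, which the environment observes through noise.
    p_x = np.full(4, 0.25)                      # p(x): uniform internal state
    p_m_given_x = np.array([[0.9, 0.1],         # p(m|x): membrane coarse-graining
                            [0.9, 0.1],
                            [0.1, 0.9],
                            [0.1, 0.9]])
    p_y_given_m = np.array([[0.8, 0.2],         # p(y|m): noisy external reading
                            [0.2, 0.8]])

    joint_xm = p_x[:, None] * p_m_given_x       # p(x, m)
    joint_xy = joint_xm @ p_y_given_m           # p(x, y); Y depends on X only via M
    print(mutual_information(joint_xm), ">=", mutual_information(joint_xy))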

I haven’t worked out all the implications of this.

References

Benthall, Sebastian. (2015) Designing Networked Publics for Communicative Action. Jenny Davis & Nathan Jurgenson (eds.) Theorizing the Web 2014 [Special Issue]. Interface 1.1. (link)

Sebastian Benthall, Seda Gürses and Helen Nissenbaum (2017), “Contextual Integrity through the Lens of Computer Science”, Foundations and Trends® in Privacy and Security: Vol. 2: No. 1, pp 1-69. http://dx.doi.org/10.1561/3300000016

Nissenbaum, H. (2009). Privacy in context: Technology, policy, and the integrity of social life. Stanford University Press.

by Sebastian Benthall at January 15, 2018 09:01 PM

January 14, 2018

Ph.D. student

on university businesses

Suppose we wanted to know why there’s an “epistemic crisis” today. Suppose we wanted to talk about higher education’s role and responsibility towards that crisis, even though that may be just a small part of it.

That’s a reason why we should care about postmodernism in universities. The alternative, some people have argued, is a ‘modernist’ or even ‘traditional’ university which was based on a perhaps simpler and less flexible theory of knowledge. For the purpose of this post I’m going to assume the reader knows roughly what that’s all about. Since postmodernism rejects meta-narratives and instead admits that all we have to legitimize anything is a contest of narratives, it is really just asking for an epistemic crisis in which people use whatever narratives are most convenient for them, and then society collapses.

In my last post I argued that the question of whether universities should be structured around modernist or postmodernist theories of legitimation and knowledge has been made moot by the fact that universities have the option of operating solely on administrative business logic. I wasn’t being entirely serious, but it’s a point that’s worth exploring.

One reason why it’s not so terrible if universities operate according to business logic is because it may still, simply as a function of business logic, be in their strategic interest to hire serious scientists and scholars whose work is not directly driven by business logic. These scholars will be professionally motivated and in part directed by the demands of their scholarly fields. But that kicks the can of the inquiry down the road.

Suppose that there are some fields that are Bourdieusian sciences, which might be summarized as an artistic field structured by the distribution of symbolic capital to those who win points in the game of arbitration of the real. (Writing that all out now I can see why many people might find Bourdieu a little opaque.)

Then if a university business thinks it should hire from the Bourdieusian sciences, that’s great. But there are many other kinds of social fields it might be useful to hire from for, e.g., faculty positions. This seems to agree with the facts: many university faculty are not from Bourdieusian sciences!

This complicates, a lot actually, the story about the relationship between universities and knowledge. One thing that is striking from the ethnography of education literature (Jean Lave) is how much the social environment of learning is constitutive of what learning is (to put it one way). Society expects and to some extent enforces that when a student is in a classroom, what they are taught is knowledge. We have concluded that not every teacher in a university business is a Bourdieusian scientist, hence some of what students learn in universities is not Bourdieusian science, so it must be that a lot of what students are taught in universities isn’t real. But what is it then? It’s got to be knowledge!

The answer may be: it’s something useful. It may not be real or even approximating what’s real (by scientific standards), but it may still be something that’s useful to believe, express, or perform. If it’s useful to “know” even in this pragmatic and distinctly non-Platonic sense of the term, there’s probably a price at which people are willing to be taught it.

As a higher order effect, universities might engage in advertising in such a way that some prospective students are convinced that what they teach is useful to know even when it’s not really useful at all. This prospect is almost too cynical to even consider. But that’s why it’s important to consider why a university operating solely according to business logic would in fact be terrible! This would not just be the sophists teaching sophistry to students so that they can win in court. It would be sophists teaching bullshit to students because they can get away with being paid for it. In other words, charlatans.

Wow. You know I didn’t know where this was going to go when I started reasoning about this, but it’s starting to sound worse and worse!

It can’t possibly be that bad. University businesses have a reputation to protect, and they are subject to the court of public opinion. Even if not all fields are Bourdieusian science, each scholarly field has its own reputation to protect and so has an incentive to ensure that it, at least, is useful for something. It becomes, in a sense, a web of trust, where each link in the network is tested over time. As an aside, this is an argument for the importance of interdisciplinary work. It’s not just a nice-to-have because wouldn’t-it-be-interesting. It’s necessary as a check on the mutual compatibility of different fields, and it prevents disciplines from becoming exploitative of students and other resources in society.

Indeed, it’s possible that this process of establishing mutual trust among experts even across different fields is what allows a kind of coherentist, pragmatist truth to emerge. But that’s by no means guaranteed. And to be very clear, that process can happen among people whether or not they are involved in universities or higher education. Everybody is responsible for reality, in a sense. To wit, citizen science is still Bourdieusian science.

But see how the stature of the university has fallen. Under a modernist logic, the university was where one went to learn what is real. One would trust that learning it would be useful because universities were dedicated to teaching what was real. Under business logic, the university is a place to learn something that the university finds it useful to teach you. It cannot be trusted without lots of checking from the rest of society. Intellectual authority is now much more distributed.

The problem with the business university is that it finds itself in competition for intellectual authority, and hence society’s investment in education, with other kinds of institutions. These include employers, who can discount wages for jobs that give their workers valuable human capital (e.g. the free college internship). Moreover, absent its special dedication to science per se, there’s less of a reason to put society’s investment in basic research in its hands. This accords with Clark Kerr’s observation that the postwar era was golden for universities because the federal government kept them flush with funds for basic research, but those funds have since dwindled and now a lot more important basic research is done in the private sector.

So to the extent that the university is responsible for the ‘epistemic crisis’, it may be because universities began to adopt business logic as their guiding principle. This is not because they then began to teach garbage. It’s because they lost the special authority awarded to modernist universities, which we funded for a special mission in society. This opened the door for more charlatans, most of whom are not at universities. They might be on YouTube.

Note that this gets us back to something similar but not identical to postmodernism.* What’s at stake are not just narratives, but also practices and other forms of symbolic and social capital. But there are certainly many different ones, articulated differently, and in competition with each other. The university business winds up reflecting the many different kinds of useful knowledge across all society and reproducing it through teaching. Society at large can then keep universities in check.

This “society keeping university businesses in check” point is a case for abolishing tenure in university businesses. Tenure may be a great idea in universities with different purposes and incentive structures. But for university businesses, it’s not good–it makes them less good businesses.

The epistemic crisis is due to a crisis in epistemic authority. To the extent universities are responsible, it’s because universities lost their special authority. This may be because they abandoned the modernist model of the university. But it is not because they abandoned modernism for postmodernism. “Postmodern” and “modern” fields coexist symbiotically with the pragmatist model of the university as business. But losing modernism has been bad for the university business as a brand.

* Though it must be noted that Lyotard’s analysis of the postmodern condition is all about how legitimation by performativity is the cause of this new condition. I’m probably just recapitulating his points in this post.

by Sebastian Benthall at January 14, 2018 06:03 AM

January 12, 2018

Ph.D. student

STEM and (post-)modernism

There is an active debate in the academic social sciences about modernism and postmodernism. I’ll refer to my notes on Clark Kerr’s comments on the postmodern university as an example of where this topic comes up.

If postmodernism is the condition where society is no longer bound by a single unified narrative but rather is constituted by a lot of conflicting narratives, then, yeah, ok, we live in a postmodern society. This isn’t what the debate is really about though.

The debate is about whether we (anybody in intellectual authority) should teach people that we live in a postmodern society and how to act effectively in that world, or if we should teach people to believe in a metanarrative which allows for truth, progress, and so on.

It’s important to notice that this whole question of what narratives we do or do not teach our students is irrelevant to a lot of educational fields. STEM fields aren’t really about narratives. They are about skills or concepts or something.

Let me put it another way. Clark Kerr was concerned about the rise of the postmodern university–was the traditional, modernist university on its way out?

The answer, truthfully, was that neither the traditional modernist university nor the postmodern university became dominant. Probably the most dominant university in the United States today is Stanford; it has accomplished this through a winning combination of STEM education, proximity to venture capital, and private fundraising. You don’t need a metanarrative if you’re rich.

Maybe that indicates where education has to go. The traditional university believed that philosophy was at its center. Philosophy is no longer at the center of the university. Is there a center? If there isn’t, then postmodernism reigns. But something else seems to be happening: STEM is becoming the new center, because it’s the best funded of the disciplines. Maybe that’s fine! Maybe focusing on STEM is how to get modernism back.

by Sebastian Benthall at January 12, 2018 03:58 AM

January 09, 2018

Ph.D. student

The social value of an actually existing alternative — BLOCKCHAIN BLOCKCHAIN BLOCKCHAIN

When people get excited about something, they will often talk about it in hyperbolic terms. Some people will actually believe what they say, though this seems to drop off with age. The emotionally energetic framing of the point can be both factually wrong and contain a kernel of truth.

This general truth applies to hype about particular technologies. Does it apply to blockchain technologies and cryptocurrencies? Sure it does!

Blockchain boosters have offered utopian or radical visions about what this technology can achieve. We should be skeptical about these visions prima facie precisely in proportion to how utopian and radical they are. But that doesn’t mean that this technology isn’t accomplishing anything new or interesting.

Here is a summary of some dialectics around blockchain technology:

A: “Blockchains allow for fully decentralized, distributed, and anonymous applications. These can operate outside of the control of the law, and that’s exciting because it’s a new frontier of options!”

B1: “Blockchain technology isn’t really decentralized, distributed, or anonymous. It’s centralizing its own power into the hands of the few, and meanwhile traditional institutions have the power to crush it. Their anarchist mentality is naive and short-sighted.”

B2: “Blockchain technology enthusiasts will soon discover that they actually want all the legal institutions they designed their systems to escape. Their anarchist mentality is naive and short-sighted.”

While B1 and B2 are both critical of blockchain technology and see A as naive, it’s important to realize that they believe A is naive for contradictory reasons. B1 is arguing that it does not accomplish what it was purportedly designed to do, which is provide a foundation of distributed, autonomous systems that’s free from internal and external tyranny. B2 is arguing that nobody actually wants to be free of these kinds of tyrannies.

These are conservative attitudes that we would expect from conservative (in the sense of conservation, or “inhibiting change”) voices in society. These are probably demographically different people from person A. And this makes all the difference.

If what differentiates people is their relationship to different kinds of social institutions or capital (in the Bourdieusian sense), then it would be natural for some people to be incumbents in old institutions who would argue for their preservation and others to be willing to “exit” older institutions and join new ones. However imperfect the affordances of blockchain technology may be, they are different affordances than those of other technologies, and so they promise the possibility of new kinds of institutions with an alternative information and communications substrate.

It may well be that the pioneers in the new substrate will find that they have political problems of their own and need to reinvent some of the societal controls that they were escaping. But the difference will be that in the old system, the pioneers were relative outsiders, whereas in the new system, they will be incumbents.

The social value of blockchain technology therefore comes in two waves. The first wave is the value it provides to early adopters who use it instead of other institutions that were failing them. These people have made the choice to invest in something new because the old options were not good enough for them. We can celebrate their successes as people who have invented quite literally a new form of social capital, quite possibly literally a new form of wealth. When a small group of people create a lot of new wealth this almost immediately creates a lot of resentment from others who did not get in on it.

But there’s a secondary social value to the creation of actually existing alternative institutions and forms of capital (which are in a sense the same thing). This is the value of competition. The marginal person, who can choose how to invest themselves, can exit from one failing institution to a fresh new one if they believe it’s worth the risk. When an alternative increases the amount of exit potential in society, that increases the competitive pressure on institutions to perform. That should benefit even those with low mobility.

So, in conclusion, blockchain technology is good because it increases institutional competition. At the end of the day that reduces the power of entrenched incumbents to collect rents and gives everybody else more flexibility.

by Sebastian Benthall at January 09, 2018 11:37 PM

January 07, 2018

Ph.D. student

The economy of responsibility and credit in ethical AI; also, shameless self-promotion

Serious discussions about ethics and AI can be difficult because at best most people are trained in either ethics or AI, but not both. This leads to lots of confusion as a lot of the debate winds up being about who should take responsibility and credit for making the hard decisions.

Here are some of the flavors of outcomes of AI ethics discussions. Without even getting into the specifics of the content, each position serves a different constituency, despite all coming under the heading of “AI Ethics”.

  • Technical practitioners getting together to decide a set of professional standards by which to self-regulate their use of AI.
  • Ethicists getting together to decide a set of professional standards by which to regulate the practices of technical people building AI.
  • Computer scientists getting together to come up with a set of technical standards to be used in the implementation of autonomous AI so that the latter performs ethically.
  • Ethicists getting together to come up with ethical positions with which to critique the implementations of AI.

Let’s pretend for a moment that the categories used here of “computer scientists” and “ethicists” are valid ones. I’m channeling the zeitgeist here. The core motivation of “ethics in AI” is the concern that the AI that gets made will be bad or unethical for some reason. This is rumored to be because there are people who know how to create AI–the technical practitioners–who are not thinking through the ethical consequences of their work. There are supposed to be some people who are authorities on what outcomes are good and bad; I’m calling these ‘ethicists’, though I include sociologists of science and lawyers claiming an ethical authority in that term.

What are the dimensions along which these positions vary?

What is the object of the prescription? Are technical professionals having their behavior prescribed? Or is it the specification of the machine that’s being prescribed?

Who is creating the prescription? Is it “technical people” like programmers and computer scientists, or is it people ‘trained in ethics’ like lawyers and sociologists?

When is the judgment being made? Is the judgment being made before the AI system is being created as part of its production process, or is it happening after the fact when it goes live?

These dimensions are not independent from each other and in fact it’s their dependence on each other that makes the problem of AI ethics politically challenging. In general, people would like to pass on responsibility to others and take credit for themselves. Technicians love to pass responsibility to their machines–“the algorithm did it!”. Ethicists love to pass responsibility to technicians. In one view of the ideal world, ethicists would come up with a set of prescriptions, technologists would follow them, and nobody would have any ethical problems with the implementations of AI.

This would entail, more or less, that ethical requirements have been internalized into either technical design processes, engineering principles, or even mathematical specifications. This would probably be great for society as a whole. But the more ethical principles get translated into something that’s useful for engineers, the less ethicists can take credit for good technical outcomes. Some technical person has gotten into the loop and solved the problem. They get the credit, except that they are largely anonymous, and so the product, the AI system, gets the credit for being a reliable, trustworthy product. The more AI products are reliable, trustworthy, good, the less credible are the concerns of the ethicists, whose whole raison d’etre is to prevent the uninformed technologists from doing bad things.

The temptation for ethicists, then, is to sit safely where they can critique after the fact. Ethicists can write for the public condemning evil technologists without ever getting their hands dirty with the problems of implementation. There’s an audience for this and it’s a stable strategy for ethicists, but it’s not very good for society. It winds up putting public pressure on technologists to solve the problem themselves through professional self-regulation or technical specification. If they succeed, then the ethicists don’t have anything to critique, and so it is in the interest of ethicists to cast doubt on these self-regulation efforts without ever contributing to their success. Ethicists have the tricky job of pointing out that technologists are not listening to ethicists, and are therefore suspect, without ever engaging with technologists in such a way that would allow them to arrive at a bona fide ethical technical solution. This is, one must admit, not a very ethical thing to do.

There are exceptions to this bleak and cynical picture!

In fact, yours truly is an exception to this bleak and cynical picture, along with my brilliant co-authors Seda Gürses and Helen Nissenbaum! If you would like to see an honest attempt at translating ethics into computer science so that AI can be more ethical, look no further than:

Sebastian Benthall, Seda Gürses and Helen Nissenbaum (2017), “Contextual Integrity through the Lens of Computer Science”, Foundations and Trends® in Privacy and Security: Vol. 2: No. 1, pp 1-69. http://dx.doi.org/10.1561/3300000016

Contextual Integrity is an ethical framework. I’d go so far as to say that it’s a meta-ethical framework, as it provides a theory of where ethics comes from and why it is important. It’s a theory that’s developed by the esteemed ethicist and friend-of-computer-science Helen Nissenbaum.

In this paper, which you should definitely read, two researchers team up with Helen Nissenbaum to review all the computer science papers we can find that reference Contextual Integrity. One of those researchers is Seda Gürses, a computer scientist with deep background in privacy and security engineering. You essentially can’t find two researchers more credible than Helen and Seda, paired up, on the topic of how to engineer privacy (which is a subset of ethics).

I am also a co-author of this paper. You can certainly find more credible researchers on this topic than myself, but I have the enormous good fortune to have worked with such profoundly wise and respectable collaborators.

Probably the best part about this paper, in my view, is that we’ve managed to write a paper about ethics and computer science (and indeed, AI is a subset of what we are talking about in the paper) which is honestly trying to grapple with the technical challenges of designing ethical systems, while also contending with all the sociological complication of what ethics is. There’s a whole section where we refuse to let computer scientists off the hook from dealing with how norms (and therefore ethics) are the result of a situated and historical process of social adaptation. But then there’s a whole other section where we talk about how developing AI that copes responsibly with the situated and historical process of social adaptation is an open research problem in privacy engineering! There’s truly something for everybody!

by Sebastian Benthall at January 07, 2018 10:11 PM

January 02, 2018

Ph.D. student

Exit vs. Voice as Defecting vs. Cooperation as …

These dichotomies that are often thought of separately are actually the same.

Cooperation | Defection
Voice (Hirschman) | Exit (Hirschman)
Lifeworld (Habermas) | System (Habermas)
Power (Arendt) | Violence (Arendt)
Institutions | Markets

by Sebastian Benthall at January 02, 2018 04:56 PM

Why I will blog more about math in 2018

One reason to study and write about political theory is what Habermas calls the emancipatory interest of human inquiry: to come to better understand the social world one lives in, unclouded by ideology, in order to be more free from those ideological expectations.

This is perhaps counterintuitive since what is perhaps most seductive about political theory is that it is the articulation of so many ideologies. Indeed, one can turn to political theory because one is looking for an ideology that suits them. Having a secure world view is comforting and can provide a sense of purpose. I know that personally I’ve struggled with one after another.

Looking back on my philosophical ‘work’ over the past decade (as opposed to my technical and scientific work), I’d like to declare it an emancipatory success for at least one person, myself. I am happier for it, though at the cost that comes from learning the hard way.

A problem with this blog is that it is too esoteric. It has not been written with a particular academic discipline in mind. It draws rather too heavily from certain big name thinkers that not enough people have read. I don’t provide background material in these thinkers, and so many find this inaccessible.

One day I may try to edit this material into a more accessible version of its arguments. I’m not sure who would find this useful, because much of what I’ve been doing in this work is arriving at the conclusion that actually, truly, mathematical science is the finest way of going about understanding sociotechnical systems. I believe this follows even from deep philosophical engagement with notable critics of this view–and I have truly tried to engage with the best and most notable of these critics. There will always be more of them, but I think at this point I have to make a decision to not seek them out any more. I have tested these views enough to build on them as a secure foundation.

What follows then is a harder but I think more rewarding task of building out the mathematical theory that reflects my philosophical conclusions. This is necessary for, for example, building a technical implementation that expresses the political values that I’ve arrived at. Arguably, until I do this, I’ll have just been beating around the bush.

I will admit to being sheepish about blogging on technical and mathematical topics. This is because in my understanding technical and mathematical writing is held to a higher standard than normal writing. Errors are more clear, and more permanent.

I recognize this now as a personal inhibition and a destructive one. If this blog has been valuable to me as a tool for reading, writing, and developing fluency in obscure philosophical literature, why shouldn’t it also be a tool for reading, writing, and developing fluency in obscure mathematical and technical literature? And to do the latter, shouldn’t I have to take the risk of writing with the same courage, if not abandon?

This is my wish for 2018: to blog more math. It’s a riskier project, but I think I have to in order to keep developing these ideas.

by Sebastian Benthall at January 02, 2018 03:17 AM

December 31, 2017

MIMS 2012

Books Read in 2017

Books Read 2017 banner

This year I read 14 books, which is 8 fewer than the 22 I read last year (view last year’s list here). Lower than I was hoping, but it at least averages out to more than 1 per month. I’m not too surprised, though, since I traveled a lot and was busier socially this year. Once again, I was heavy on the non-fiction — I only read 2 fiction books this year. Just 2! I need to up that number in 2018.

Highlights

Service Design: From Insight to Implementation

by Andy Polaine, Lavrans Løvlie, and Ben Reason

This book really opened my eyes to the world of service design and thinking about a person’s experience beyond just the confines of the screen. Using the product is just one part of a person’s overall experience accomplishing their goal. This book is a great primer on the subject.

View on Amazon

Sol LeWitt: The Well-Tempered Grid

by Charles Haxthausen, Christianna Bonin, and Erica Dibenedetto

I’ve been quite taken by Sol LeWitt’s work after seeing his art at various museums, such as the SF MOMA. I finally bought a book to learn more about his work and approach to art. This inspired me to re-create his work programmatically.

View on Amazon

The Corrections

by Jonathan Franzen

This is the first Franzen book I’ve read, and I thoroughly enjoyed it. I’ve been interested in him for a long time because David Foster Wallace was a fan of his. A well-written, engaging tale of a family’s troubles, anxieties, and the “corrections” they need to make to keep their lives intact.

View on Amazon

Radical Candor

by Kim Scott

Great book on managing people. Highly recommended for anyone who manages or is interested in managing. Even if you’re an individual contributor it’s worth reading because it will help you be a better employee and have a better relationship with your boss.

View on Amazon

Emotional Design

by Don Norman

This companion to Norman’s The Design of Everyday Things is just as good as its better-known sibling. In this book he focuses on the emotional and aesthetic side of design, and why those elements are an important part of designing a successful product. He goes beyond fluffy, surface-level explanations, though, and explains the why behind these phenomena using science, psychology, and biology. This makes for a convincing argument for the importance of this aspect of design, which can often be written off as “nice-to-have” or self-indulgent.

View on Amazon

Full List of Books Read

All links are Amazon affiliate links.

by Jeff Zych at December 31, 2017 10:39 PM

December 22, 2017

Ph.D. student

technological determinism and economic determinism

If you are trying to explain society, politics, the history of the world, whatever, it’s a good idea to narrow the scope of what you are talking about to just the most important parts because there is literally only so much you could ever possibly say. Life is short. A principled way of choosing what to focus on is to discuss only those parts that are most significant in the sense that they played the most causally determinative role in the events in question. By widely accepted interventionist theories of causation, what makes something causally determinative of something else is the fact that in a counterfactual world in which the cause was made to be somehow different, the effect would have been different as well.

Since we basically never observe a counterfactual history, this leaves a wide open debate over the general theoretical principles one would use to predict the significance of certain phenomena over others.

One point of view on this is called technological determinism. It is the view that, for a given social phenomenon, what’s really most determinative of it is the technological substrate of it. Engineers-turned-thought-leaders love technological determinism because of course it implies that really the engineers shape society, because they are creating the technology.

Technological determinism is absolutely despised by academic social scientists who have to deal with technology and its role in society. I have a hard time understanding why. Sometimes it is framed as an objection to technologists who are avoiding responsibility for social problems they create because it’s the technology that did it, not them. But such a childish tactic really doesn’t seem to be what’s at stake if you’re critiquing technological determinism. Another way of framing the problem is to say that the way a technology affects society in San Francisco is going to be different from how it affects society in Beijing. Society has its role in a dialectic.

So there is a grand debate of “politics” versus “technology” which reoccurs everywhere. This debate is rather one-sided, since it is almost entirely constituted by political scientists or sociologists complaining that the engineers aren’t paying enough attention to politics, seeing how their work has political causes and effects. Meanwhile, engineers-turned-thought-leaders just keep spouting off whatever nonsense comes to their heads and they do just fine because, unlike the social scientist critics, engineers-turned-thought-leaders tend to be rich. That’s why they are thought leaders: because their company was wildly successful.

What I find interesting is that economic determinism is never part of this conversation. It seems patently obvious that economics drives both politics and technology. You can be anywhere on the political spectrum and hold this view. Once it was called “dialectical materialism”, and it was the foundation for left-wing politics for generations.

So what has happened? Here are a few possible explanations.

The first explanation is that if you’re an economic determinist, maybe you are smart enough to do something more productive with your time than get into debates about whether technology or politics is more important. You would be doing something more productive, like starting a business to develop a technology that manipulates political opinion to favor the deregulation of your business. Or trying to get a socialist elected so the government will pay off student debts.

A second explanation is… actually, that’s it. That’s the only reason I can think of. Maybe there’s another one?

by Sebastian Benthall at December 22, 2017 02:28 AM

December 18, 2017

Ph.D. student

The Data Processing Inequality and bounded rationality

I have long harbored the hunch that information theory, in the classic Shannon sense, and social theory are deeply linked. It has proven to be very difficult to find an audience for this point of view or an opportunity to work on it seriously. Shannon’s information theory is widely respected in engineering disciplines; many social theorists who are unfamiliar with it are loath to admit that something from engineering should carry essential insights for their own field. Meanwhile, engineers are rarely interested in modeling social systems.

I’ve recently discovered an opportunity to work on this problem through my dissertation work, which is about privacy engineering. Privacy is a subtle social concept but also one that has been rigorously formalized. I’m working on formal privacy theory now and have been reminded of a theorem from information theory: the Data Processing Theorem. What strikes me about this theorem is that it captures a point that comes up again and again in social and political problems, though it’s a point that’s almost never addressed head on.

The Data Processing Inequality (DPI) states that for three random variables, X, Y, and Z, arranged in a Markov chain such that X \rightarrow Y \rightarrow Z, we have I(X,Z) \leq I(X,Y), where I stands for mutual information. Mutual information is a measure of how much two random variables carry information about each other. If I(X,Y) = 0, the variables are independent, and I(X,Y) \geq 0 always–that’s just a mathematical fact about how it’s defined.

The implications of this for psychology, social theory, and artificial intelligence are I think rather profound. It provides a way of thinking about bounded rationality in a simple and generalizable way–something I’ve been struggling to figure out for a long time.

Suppose that there’s a big world out there, W, and there’s an organism, or a person, or a sociotechnical organization within it, Y. The world is big and complex, which implies that it has a lot of informational entropy, H(W). Through whatever sensory apparatus is available to Y, it acquires some kind of internal sensory state. Because this organism is much smaller than the world, its entropy is much lower. There are many fewer possible states that the organism can be in, relative to the number of states of the world. H(W) >> H(Y). This in turn bounds the mutual information between the organism and the world: I(W,Y) \leq H(Y).

Now let’s suppose the actions that the organism takes, Z, depend only on its internal state. It is an agent, reacting to its environment. Well, whatever these actions are, they can only be as calibrated to the world as the agent has capacity to absorb the world’s information. I.e., I(W,Z) \leq H(Y) << H(W). The implication is that the more limited the mental capacity of the organism, the more its actions will be approximately independent of the state of the world that precedes it.
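To make this concrete, here is a minimal numerical sketch (assuming Python with numpy; the state-space sizes and the randomly drawn distributions are arbitrary illustrations, not a model of anything in particular). It builds a chain W \rightarrow Y \rightarrow Z with a large world and a small organism, and checks both inequalities: I(W,Z) \leq I(W,Y) \leq H(Y).

    import numpy as np

    def entropy(p):
        """Shannon entropy in bits of a probability distribution (zeros ignored)."""
        p = np.asarray(p).ravel()
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))

    def mutual_information(joint):
        """I(A,B) in bits, computed from a 2-D joint distribution P(A, B)."""
        return entropy(joint.sum(axis=1)) + entropy(joint.sum(axis=0)) - entropy(joint)

    rng = np.random.default_rng(0)

    # A big "world" W, a small "organism" Y, and actions Z that depend only on Y.
    n_w, n_y, n_z = 64, 4, 16
    p_w = rng.dirichlet(np.ones(n_w))                    # P(W)
    p_y_given_w = rng.dirichlet(np.ones(n_y), size=n_w)  # P(Y | W), one row per state of W
    p_z_given_y = rng.dirichlet(np.ones(n_z), size=n_y)  # P(Z | Y), one row per state of Y

    joint_wy = p_w[:, None] * p_y_given_w  # P(W, Y)
    joint_wz = joint_wy @ p_z_given_y      # P(W, Z); Z is independent of W given Y

    print("H(Y)   =", entropy(joint_wy.sum(axis=0)))
    print("I(W,Y) =", mutual_information(joint_wy))
    print("I(W,Z) =", mutual_information(joint_wz))
    # Expected ordering: I(W,Z) <= I(W,Y) <= H(Y), with H(Y) <= log2(4) = 2 bits.

Whatever distributions you plug in, the organism’s four internal states cap the mutual information at two bits, no matter how much entropy the world has.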

There are a lot of interesting implications of this for social theory. Here are a few cases that come to mind.

I've written quite a bit here (blog links) and here (arXiv) about Bostrom’s superintelligence argument and why I’m generally not concerned with the prospect of an artificial intelligence taking over the world. My argument is that there are limits to how much an algorithm can improve itself, and these limits put a stop to exponential intelligence explosions. I’ve been criticized on the grounds that I don’t specify what the limits are, and that if the limits are high enough then maybe relative superintelligence is possible. The Data Processing Inequality gives us another tool for estimating the bounds of an intelligence based on the range of physical states it can possibly be in. How calibrated can a hegemonic agent be to the complexity of the world? It depends on the capacity of that agent to absorb information about the world; that can be measured in information entropy.

A related case is a rendering of Scott’s Seeing Like a State arguments. Why is it that “high modernist” governments failed to successfully control society through scientific intervention? One reason is that the complexity of the system they were trying to manage vastly exceeded the complexity of the centralized control mechanisms. Centralized control was very blunt, causing many social problems. Arguably, behavioral targeting and big data centers today equip controlling organizations with more informational capacity (more entropy), but they still get it wrong sometimes, causing privacy violations, because they can’t model the entirety of the messy world we’re in.

The Data Processing Inequality is also helpful for explaining why the world is so messy. There are a lot of different agents in the world, and each one only has so much bandwidth for taking in information. This means that most agents are acting almost independently from each other. The guiding principle of society isn’t signal, it’s noise. That explains why there are so many disorganized heavy-tailed distributions in social phenomena.

Importantly, if we let the world at any time slice be informed by the actions of many agents acting nearly independently from each other in the slice before, then that increases the entropy of the world. This increases the challenge for any particular agent to develop an effective controlling strategy. For this reason, we would expect the world to get more out of control the more intelligent agents are on average. The popularity of the personal computer perhaps introduced a lot more entropy into the world, distributed in an agent-by-agent way. Moreover, powerful controlling data centers may increase the world’s entropy, rather than reducing it. So even if, for example, Amazon were to try to take over the world, the existence of Baidu would be a major obstacle to its plans.

There are a lot of assumptions built into these informal arguments and I’m not wedded to any of them. But my point here is that information theory provides useful tools for thinking about agents in a complex world. There’s potential for using it for modeling sociotechnical systems and their limitations.

by Sebastian Benthall at December 18, 2017 10:34 PM

adjunct professor

The harmonics of 'entitlement'

A lot of the most effective political keywords derive their force from a maneuver akin to what H. W. Fowler called "legerdemain with two senses," which enables you to slip from one idea to another without ever letting on that you’ve changed the subject. Values oscillates between mores (which vary from one group to another) and morals (of which some people have more than others do). The polemical uses of elite blend power (as in the industrial elite) and pretension (as in the names of bakeries and florists). Bias suggests both a disposition and an activity (as in housing bias), and ownership society conveys both material possession and having a stake in something.

And then there's entitlement, one of the seven words and phrases that the administration has instructed policy analysts at the Centers for Disease Control to avoid in budget documents, presumably in an effort, as Mark put it in an earlier post, to create "a safe space where [congresspersons'] delicate sensibilities will not be affronted by such politically incorrect words and phrases." Though it's unlikely that the ideocrats who came up with the list thought it through carefully, I can see why this would lead them to discourage the use of items like diversity. But the inclusion of entitlement on the list is curious, since the right has been at pains over the years to bend that word to their own purposes.

I did a Fresh Air piece on entitlement back in 2012, when Romney's selection of Paul Ryan as his running mate opened up the issue of "entitlement spending."  Unlike most other political keywords, the polysemy from which this one profits is purely fortuitous. As I noted in that piece:

One sense of the word was an obscure political legalism until the advent of the Great Society programs that some economists called “uncontrollables.” Technically, entitlements are just programs that provide benefits that aren’t subject to budgetary discretion. But the word also implied that the recipients had a moral right to the benefits. As LBJ said in justifying Medicare: “By God, you can’t treat grandma this way. She’s entitled to it."

The negative connotations of the word arose in another, rather distant corner of the language, when psychologists began to use a different notion of entitlement as a diagnostic for narcissism. Both those words entered everyday usage in the late 1970s, with a big boost from Christopher Lasch’s 1979 bestseller The Culture of Narcissism, an indictment of the pathological self-absorption of American life. By the early eighties, you no longer had to preface “sense of entitlement” with “unwarranted” or “bloated.” That was implicit in the word entitlement itself, which had become the epithet of choice whenever you wanted to scold the baby boomers for their superficiality and selfishness….

But it’s only when critics get to the role of government that the two meanings of entitlement start to seep into each other…. When conservatives fulminate about the cost of government entitlements, there’s often an implicit modifier “unearned” lurking in the background. And that in turn makes it easier to think of those programs as the cause of a wider social malaise: they create a “culture of dependency,” or a class of “takers,” which is basically what the Victorians called the undeserving poor.

That isn’t a new argument. The early opponents of Social Security charged that it would discourage individual thrift and reduce Americans to the level of Europeans. But now the language itself helps make the argument by using the same word for the political cause and the cultural effects. You can deplore “the entitlement society” without actually having to say whether you mean the social or political sense of the word, or even acknowledging that there’s any difference. It’s a strategic rewriting of linguistic history, as if we call the programs benefits simply because people feel entitled to them.

But to make that linguistic fusion work, you have to bend the meanings of the words to fit. When people rail about the cost of government entitlements, they’re thinking of social benefit programs like Medicare, not the price supports or the tax breaks that some economists call hidden entitlements.

Entitlements is back in the headlines now that the Republicans are looking for ways to make up for the revenues to be lost in the tax bill. "We're going to have to get back next year at entitlement reform, which is how you tackle the debt and the deficit," Ryan said just last week, adding, "Frankly, it's the health care entitlements that are the big drivers of our debt… that's really where the problem lies, fiscally speaking." Others extend "entitlement reform" to restructuring social security and other programs. Leaving aside the policy implications of these moves, which are beyond the modest purview of Language Log, it's clear that entitlement is still doing exactly the kind of rhetorical work for Republicans that it was doing in the Reagan era. So why is it suddenly verbum non gratum at the CDC?

by Geoff Nunberg at December 18, 2017 04:55 AM

December 15, 2017

Ph.D. student

Net neutrality

What do I think of net neutrality?

I think ending it is bad for my personal self-interest. I am, economically, a part of the newer tech economy of software and data. I believe this economy benefits from net neutrality. I also am somebody who loves The Web as a consumer. I’ve grown up with it. It’s shaped my values.

From a broader perspective, I think ending net neutrality will revitalize U.S. telecom and give it leverage over the ‘tech giants’–Google, Facebook, Apple, Amazon—that have been rewarded by net neutrality policies. Telecom is a platform, but it had been turned into a utility platform. Now it can be a full-featured market player. This gives it an opportunity for platform envelopment, moving into the markets of other companies and bundling them in with ISP services.

Since this will introduce competition into the market and other players are very well-established, this could actually be good for consumers because it breaks up an oligopoly in the services that are most user-facing. On the other hand, since ISPs are monopolists in most places, we could also expect Internet-based service experience quality to deteriorate in general.

What this might encourage is a proliferation of alternatives to cable ISPs, which would be interesting. Ending net neutrality creates a much larger design space in products that provision network access. Mobile companies are in this space already. So we could see this regulation as a move in favor of the cell phone companies, not just the ISPs. This too could draw surplus away from the big four.

This probably means the end of “The Web”. But we’d already seen the end of “The Web” with the proliferation of apps as a replacement for Internet browsing. IoT provides yet another alternative to “The Web”. I loved the Web as a free, creative place where everyone could make their own website about their cat. It had a great moment. But it’s safe to say that it isn’t what it used to be. In fifteen years it may be that most people no longer visit web sites. They just use connected devices and apps. Ending net neutrality means that the connectivity necessary for these services can be bundled in with the service itself. In the long run, that should be good for consumers and even the possibility of market entry for new firms.

In the long run, I’m not sure “The Web” is that important. Maybe it was a beautiful disruptive moment that will never happen again. Or maybe, if there were many more kinds of alternatives, “The Web” would return to being the quirky, radically free and interesting thing it was before it got so mainstream. Remember when The Web was just The Well (which is still around), and only people who were really curious about it bothered to use it? I don’t, because that was well before my time. But it’s possible that the Internet in its browse-happy form will become something like that again.

I hadn’t really thought about net neutrality very much before, to be honest. Maybe there are some good rebuttals to this argument. I’d love to hear them! But for now, I think I’m willing to give the shuttering of net neutrality a shot.

by Sebastian Benthall at December 15, 2017 04:31 AM

December 14, 2017

Ph.D. student

Marcuse, de Beauvoir, and Badiou: reflections on three strategies

I have written in this blog about three different philosophers who articulated a vision of hope for a more free world, including in their account an understanding of the role of technology. I would like to compare these views because nuanced differences between them may be important.

First, let’s talk about Marcuse, a Frankfurt School thinker whose work was an effective expression of philosophical Marxism that catalyzed the New Left. Marcuse was, like other Frankfurt School thinkers, concerned about the role of technology in society. His proposed remedy was “the transcendent project“, which involves an attempt at advancing “the totality” through an understanding of its logic and action to transform it into something that is better, more free.

As I began to discuss here, there is a problem with this kind of Marxist aspiration for a transformation of all of society through philosophical understanding, which is this: the political and technical totality exists as it does in no small part to manage its own internal information flows. Information asymmetries and differentiation of control structures are a feature, not a bug. The convulsions caused by the Internet as it tears and repairs the social fabric have not created the conditions of unified enlightened understanding. Rather, they have exposed that given nearly boundless access to information, most people will ignore it and maintain, against all evidence to the contrary, the dignity of one who has a valid opinion.

The Internet makes a mockery of expertise, and makes no exception for the expertise necessary for the Marcusian “transcendent project”. Expertise may be replaced with the technological apparati of artificial intelligence and mass data collection, but the latter are a form of capital whose distribution is a part of the totality. If they are having their transcendent effect today, as the proponents of AI claim, this effect is in the hands of a very few. Their motivations are inscrutable. As they have their own opinions and courtiers, writing for them is futile. They are, properly speaking, a great uncertainty that shows that centralized control does not close down all options. It may be that the next defining moment in history is set by how Jeff Bezos decides to spend his wealth, and that is his decision alone. For “our” purposes–yours, my reader, and mine–this arbitrariness of power must be seen as part of the totality to be transcended, if that is possible.

It probably isn’t. And if it really isn’t, that may be the best argument for something like the postmodern breakdown of all epistemes. There are at least two strands of postmodern thought coming from the denial of traditional knowledge and university structure. The first is the phenomenological privileging of subjective experience. This approach has the advantage of never being embarrassed by the fact that the Internet is constantly exposing us as fools. Rather, it allows us to narcissistically and uncritically indulge in whatever bubble we find ourselves in. The alternative approach is to explicitly theorize about one’s finitude and the radical implications of it, to embrace a kind of realist skepticism or at least acknowledgement of the limitations of the human condition.

It’s this latter approach which was taken up by the existentialists in the mid-20th century. In particular, I keep returning to de Beauvoir as a hopeful voice that recognizes a role for science that is not totalizing, but nevertheless liberatory. De Beauvoir does not take aim, like Marcuse and the Frankfurt School, at societal transformation. Her concern is with individual transformation, which is, given the radical uncertainty of society, a far more tractable problem. Individual ethics are based in local effects, not grand political outcomes. The desirable local effects are personal liberation and liberation of those one comes in contact with. Science, like other activities, is a way of opening new possibilities, not limited to what is instrumental for control.

Such a view of incremental, local, individual empowerment and goodness seems naive in the face of pessimistic views of society’s corruptedness. Whether these be economic or sociological theories of how inequality and oppression are locked into society, and however emotionally compelling and widespread they may be in social media, it is necessary by our previous argument to remember that these views are always mere ideology, not scientific fact, because an accurate totalizing view of society is impossible given real constraints on information flow and use. Totalizing ideologies that are not rigorous in their acceptance of basic realistic points are a symptom of more complex social structure (i.e. the distribution of capitals, the reproduction of many habiti) not a definition of it.

It is consistent for a scientific attitude to deflate political ideology because this deflation is an opening of possibility against both utopian and dystopian trajectories. What’s missing is a scientific proof of this very point, comparable to a Halting Problem or Incompleteness Theorem, but for social understanding.

A last comment, comparing Badiou to de Beauvoir and Marcuse. Badiou’s theory of the Event as the moment that may be seized to effect a transformation is perhaps a synthesis of existentialist and Marxian philosophies. Badiou is still concerned with transcendence, i.e. the moment when, given one assumed structure to life or reality or psychology, one discovers an opening into a renewed life with possibilities that the old model did not allow. But (at least as far as I have read him, which is not enough) he sees the Event as something that comes from without. It cannot be predicted or anticipated within the system but is instead a kind of grace. Without breaking explicitly from professional secularism, Badiou’s work suggests that we must have faith in something outside our understanding to provide an opportunity for transcendence. This is opposed to the more muscular theories described above: Marcuse’s theory of transcendent political activism and de Beauvoir’s active individual projects are not as patient.

I am still young and strong and so prefer the existentialist position on these matters. I am politically engaged to some extent and so, as an extension of my projects of individual freedom, am in search of opportunities for political transcendence as well–a kind of Marcuse light, since politics, like science, is a field of contest that is reproduced as its games are played, and this is its structure. But life has taught me again and again to appreciate Badiou’s point as well, which is the appreciation of the unforeseen opportunity, the scientific and political anomaly.

What does this reflection conclude?

First, it acknowledges the situatedness and fragility of expertise, which deflates grand hopes for transcendent political projects. Pessimistic ideologies that characterize the totality as beyond redemption are false; indeed it is characteristic of the totality that it is incomprehensible. This is a realistic view, and transcendence must take it seriously.

Second, it acknowledges the validity of more localized liberatory projects despite the first point.

Third, it acknowledges that the unexpected event is a feature of the totality to be embraced, despite pessimistic ideologies to the contrary. The latter, far from encouraging transcendence, are blinders that prevent the recognition of events.

Because realism requires that we not abandon core logical principles despite our empirical uncertainty, you may permit one more deduction. To the extent that actors in society pursue the de Beauvoiran strategy of engaging in local liberatory projects that affect others, the probability of a Badiousian event in the life of another increases. Solipsism is false, and so (to put it tritely) “random acts of kindness” do have their effect on the totality, in aggregate. In fact, there may be no more radical political agenda than this opening up of spaces of local freedom, which shrugs off the depression of pessimistic ideology and suppression of technical control. Which is not a new view at all. What is perhaps surprising is how easy it may be.

by Sebastian Benthall at December 14, 2017 03:41 PM

December 13, 2017

Ph.D. student

transcending managerialism

What motivates my interest in managerialism?

It may be a bleak topic to study, but recent traffic to this post on Marcuse has reminded me of the terms to explain my intention.

For Marcuse, a purpose of scholarship is the transcendent project, whereby an earlier form of rationality and social totality are superseded by a new one that offers “a greater chance for the free development of human needs and faculties.” In order to accomplish this, it has to first “define[] the established totality in its very structure, basic tendencies, and relations”.

Managerialism, I propose, is a way of defining and articulating the established totality: the way everything in our social world (the totality) has been established. Once this is understood, it may be possible to identify a way of transcending that totality. But, the claim is, you can’t transcend what you don’t understand.

Marx had a deeply insightful analysis of capitalism and then used that to develop an idea of socialism. The subsequent century indeed saw the introduction of many socialistic ideas into the mainstream, including labor organizing and the welfare state. Now it is inadequate to consider the established totality through a traditional or orthodox Marxist lens. It doesn’t grasp how things are today.

Arguably, critiques of neoliberalism, enshrined in academic discourse since the 80’s, have the same problem. The world is different from how it was in the 80’s, and civil society has already given what it can to resist neoliberalism. So a critical perspective that uses the same tropes as those used in the 80’s is going to be part of the established totality, but not definitive of it. Hence, it will fail to live up to the demands of the transcendent project.

So we need a new theory of the totality that is adequate to the world today. It can’t look exactly like the old views.

Gilman’s theory of plutocratic insurgency is a good example of the kind of theorizing I’m talking about, but this obviously leaves a lot out. Indeed, the biggest challenge to defining the established totality is the complexity of the totality; this complexity could make the transcendent project literally impossible. But to stop there is a tremendous cop out.

Rather, what’s needed is an explicit theorization of the way societal complexity, and society’s response to it, shape the totality in systematic ways. “Complexity” can’t be used in a fuzzy way for this to work. It has to be defined in the mathematically precise ways that the institutions that manage and create this complexity think about it. That means–and this is the hardest thing for a political or social theorist to swallow–that computer science and statistics have to be included as part of the definition of totality. Which brings us back to the promise of computational social science if and when it incorporates its mathematical methodological concepts into its own vocabulary of theorization.

References

Benthall, Sebastian. “Philosophy of computational social science.” Cosmos and History: The Journal of Natural and Social Philosophy 12.2 (2016): 13-30.

Gilman, Nils. “The twin insurgency.” American Interest 15 (2014).

Marcuse, Herbert. One-dimensional man: Studies in the ideology of advanced industrial society. Routledge, 2013.

by Sebastian Benthall at December 13, 2017 06:54 PM

Notes on Clark Kerr’s “The ‘City of Intellect’ in a Century for Foxes?”, in The Uses of the University 5th Edition

I am in my seventh and absolutely, definitely last year of a doctoral program and so have many questions about the future of higher education and whether or not I will be a part of it. For insight, I have procured an e-book copy of Clark Kerr’s The Uses of the University (5th Edition, 2001). Clark Kerr was the twelfth president of the University of California and became famous among other things for his candid comments on university administration, which included such gems as

“I find that the three major administrative problems on a campus are sex for the students, athletics for the alumni and parking for the faculty.”

…and…

“One of the most distressing tasks of a university president is to pretend that the protest and outrage of each new generation of undergraduates is really fresh and meaningful. In fact, it is one of the most predictable controversies that we know. The participants go through a ritual of hackneyed complaints, almost as ancient as academe, while believing that what is said is radical and new.”

The Uses of the University is a collection of lectures on the topic of the university, most of which were given in the second half of the 20th century. The most recent edition contains a lecture given in the year 2000, after Kerr had retired from administration, but anticipating the future of the university in the 21st century. The title of the lecture is “The ‘City of Intellect’ in a Century for Foxes?”, and it is encouragingly candid and prescient.

To my surprise, Kerr approaches the lecture as a forecasting exercise. Intriguingly, Kerr employs the hedgehog/fox metaphor from Isaiah Berlin in a lecture about forecasting five years before the publication of Tetlock’s 2005 book Expert Political Judgment (review link), which used the fox/hedgehog distinction to cluster properties that were correlated with political experts’ predictive power. Kerr’s lecture is structured partly as the description of a series of future scenarios, reminiscent of scenario planning as a forecasting method. I didn’t expect any of this, and it goes to show perhaps how pervasive scenario thinking was as a 20th century rhetorical technique.

Kerr makes a number of warnings about the university in the 21st century, especially with respect to the glory of the university in the 20th century. He makes a historical case for this: universities in the 20th century thrived on new universal access to students, federal investment in universities as the sites of basic research, and general economic prosperity. He doesn’t see these guaranteed in the 21st century, though he also makes the point that in official situations, the only thing a university president should do is discuss the past with pride and the future with apprehension. He has a rather detailed analysis of the incentives guiding this rhetorical strategy as part of the lecture, which makes you wonder how much salt to take the rest of the lecture with.

What are the warnings Kerr makes? Some are a continuation of the problems universities experienced in the 20th century. Military and industrial research funding changed the roles of universities away from liberal arts education into research shops. This was not a neutral process. Undergraduate education suffered, and in 1963 Kerr predicted that this slackening of the quality of undergraduate education would lead to student protests. He was half right; students instead turned their attention externally to politics. Under these conditions, there grew to be a great tension between the “internal justice” of a university that attempted to have equality among its faculty and the permeation of external forces that made more of the professoriate face outward. A period of attempted reforms through “participatory democracy” was “a flash in the pan”, resulting mainly in “the creation of courses celebrating ethnic, racial, and gender diversities.” “This experience with academic reform illustrated how radical some professors can be when they look at the external world and how conservative when they look inwardly at themselves–a split personality.”

This turn to industrial and military funding and the shift of universities away from training in morality (theology), traditional professions (medicine, law), self-chosen intellectual interest for its own sake, and entrance into elite society towards training for the labor force (including business administration and computer science) is now quite old–at least 50 years. Among other things, Kerr predicts, this means that we will be feeling the effects of the hollowing out of the education system that happened as higher education deprioritized teaching in favor of research. The baby boomers who went through this era of vocational university education will become, in Kerr’s analysis, an enormous class of retirees by 2030, putting new strain on the economy at large. Meanwhile, without naming computers and the Internet, Kerr acknowledged that the “electronic revolution” is the first major change to affect universities for three hundred years, and could radically alter their role in society. He speaks highly of Peter Drucker, who in 1997 was already calling the university “a failure” that would be made obsolete by long-distance learning.

An intriguing comment on aging baby boomers, which Kerr discusses under the heading “The Methuselah Scenario”, is that the political contest between retirees and new workers will break down partly along racial lines: “Nasty warfare may take place between the old and the young, parents and children, retired Anglos and labor force minorities.” Almost twenty years later, this line makes me wonder how much current racial tensions are connected to age and aging. Have we seen the baby boomer retirees rise as a political class to vigorously defend the welfare state from plutocratic sabotage? Will we?

Kerr discusses the scenario of the ‘disintegration of the integrated university’. The old model of medicine, agriculture, and law integrated into one system is coming apart as external forces become controlling factors within the university. Kerr sees this in part as a source of ethical crises for universities.

“Integration into the external world inevitably leads to disintegration of the university internally. What are perceived by some as the injustices in the external labor market penetrate the system of economic rewards on campus, replacing policies of internal justice. Commitments to external interests lead to internal conflicts over the impartiality of the search for truth. Ideologies conflict. Friendships and loyalties flow increasingly outward. Spouses, who once held the academic community together as a social unit, now have their own jobs. “Alma Mater Dear” to whom we “sing a joyful chorus” becomes an almost laughable idea.”

A factor in this disintegration is globalization, which Kerr identifies with the mobility of those professors who are most able to get external funding. These professors have increased bargaining power and can use “the banner of departmental autonomy” to fight among themselves for industrial contracts. Without oversight mechanisms, “the university is helpless in the face of the combined onslaught of aggressive industry and entrepreneurial faculty members”.

Perhaps most fascinating for me, because it resonates with some of my more esoteric passions, is Kerr’s section on “The fractionalization of the academic guild“. Subject matter interest breaks knowledge into tiny disconnected topics–"Once upon a time, the entire academic enterprise originated in and remained connected to philosophy." The tension between "internal justice" and the "injustices of the external labor market" creates a conflict over monetary rewards. Poignantly, "fractionalization also increases over differing convictions about social justice, over whether it should be defined as equality of opportunity or equality of results, the latter often taking the form of equality of representation. This may turn out to be the penultimate ideological battle on campus."

And then:

The ultimate conflict may occur over models of the university itself, whether to support the traditional or the “postmodern” model. The traditional model is based on the enlightenment of the eighteenth century–rationality, scientific processes of thought, the search for truth, objectivity, “knowledge for its own sake and for its practical applications.” And the traditional university, to quote the Berkeley philosopher John Searle, “attempts to be apolitical or at least politically neutral.” The university of postmodernism thinks that all discourse is political anyway, and it seeks to use the university for beneficial rather than repressive political ends… The postmodernists are attempting to challenge certain assumptions about the nature of truth, objectivity, rationality, reality, and intellectual quality.”

… Any further politicization of the university will, of course, alienate much of the public at large. While most acknowledge that the traditional university was partially politicized already, postmodernism will further raise questions of whether the critical function of the university is based on political orientation rather than on nonpolitical scientific analysis.”

I could go on endlessly about this topic; I’ll try to be brief. First, as per Lyotard’s early analysis of the term, postmodernism is as much as result of the permeation of the university by industrial interests as anything else. Second, we are seeing, right now today in Congress and on the news etc., the eroded trust that a large portion of the public has of university “expertise”, as they assume (having perhaps internalized a reductivist version of the postmodern message despite or maybe because they were being taught by teaching assistants instead of professors) that the professoriate is politically biased. And now the students are in revolt over Free Speech again as a result.

Kerr entertains for a paragraph the possibility of a Hobbesian doomsday free-for-all over the university before considering more mundane possibilities such as a continuation of the status quo. Adapting to new telecommunications (including “virtual universities”), new amazing discoveries in biological sciences, and higher education as a step in mid-career advancement are all in Kerr’s more pragmatic view of the future. The permeability of the university can bring good as well as bad as it is influenced by traffic back and forth across its borders. “The drawbridge is now down. Who and what shall cross over it?”

Kerr counts five major wildcards determining the future of the university. The first is overall economic productivity; the second is fluctuations in the returns to higher education. The third is the United States’ role in the global economy “as other nations or unions of nations (for example, the EU) may catch up with and even surpass it. The quality of education and training for all citizens will be [central] to this contest. The American university may no longer be supreme.” Fourth, student unrest turning universities into the “independent critic”. And fifth, the battles within the professoriate, “over academic merit versus social justice in treatment of students, over internal justice in the professional reward system versus the pressures of external markets, over the better model for the university–modern or post-modern.”

He concludes with three wishes for the open-minded, cunning, savvy administrator of the future, the “fox”:

  1. Careful study of new information technologies and their role.
  2. “An open, in-depth debate…between the proponents of the traditional and the postmodern university instead of the sniper shots of guerilla warfare…”
  3. An “in-depth discussion…about the ethical systems of the future university”. “Now the ethical problems are found more in the flow of contacts between the academic and the external worlds. There have never been so many ethical problems swirling about as today.”

by Sebastian Benthall at December 13, 2017 01:28 AM

December 12, 2017

Ph.D. student

Re: a personal mission statement

Awesome. I hadn't considered a personal "mission statement" before now, even though I often consider and appreciate organizational mission statements. However, I do keep a yearly plan, including my personal goals.

Doty Plan 2017: https://npdoty.name/plan
Doty Plan 2016: https://npdoty.name/plan2016.html

I like that your categories let you provide a little more text than my bare-bones list of goals/areas/actions. I especially like the descriptions of role and mission; I feel like I both understand you more and I find those inspiring. That said, it also feels like a lot! Providing a coherent set of beliefs, values and strategies seems like more than I would be comfortable committing to. Is that what you want?

The other difference in my practice that I have found useful is the occasional updates: what is started, what is on track and what is at risk. Would it be useful for you to check in with yourself from time to time? I suppose I picked up that habit from Microsoft's project management practices, but despite its corporate origins, it helps me see where I'm doing well and where I need to re-focus or pick a new approach.

Cheers,
Nick

BCC my public blog, because I suppose these are documents that I could try to share with a wider group.

by nick@npdoty.name at December 12, 2017 02:40 AM

Ph.D. student

Contextual Integrity as a field

There was a nice small gathering of nearby researchers (and one important call-in) working on Contextual Integrity at Princeton’s CITP today. It was a nice opportunity to share what we’ve been working on and make plans for the future.

There was a really nice range of different contributions: systems engineering for privacy policy enforcement, empirical survey work testing contextualized privacy expectations, a proposal for a participatory design approach to identifying privacy norms in marginalized communities, a qualitative study on how children understand privacy, and an analysis of the privacy implications of the Cybersecurity Information Sharing Act, among other work.

What was great is that everybody was on the same page about what we were after: getting a better understanding of what privacy really is, so that we can design policies, educational tools, and technologies that preserve it. For one reason or another, the people in the room had been attracted to Contextual Integrity. Many of us have reservations about the theory in one way or another, but we all see its value and potential.

One note of consensus was that we should try to organize a workshop dedicated specifically to Contextual Integrity, and widening what we accomplished today to bring in more researchers. Today’s meeting was a convenience sample, leaving out a lot of important perspectives.

Another interesting thing that happened today was a general acknowledgment that Contextual Integrity is not a static framework. As a theory, it is subject to change as scholars critique and contribute to it through their empirical and theoretical work. A few of us are excited about the possibility of a Contextual Integrity 2.0, extending the original theory to fill theoretical gaps that have been identified in it.

I’d articulate the aspiration of the meeting today as being about letting Contextual Integrity grow from being a framework into a field–a community of people working together to cultivate something, in this case, a kind of knowledge.


by Sebastian Benthall at December 12, 2017 02:03 AM

December 10, 2017

Ph.D. student

Appearance, deed, and thing: meta-theory of the politics of technology

Flammarion engraving

Much is written today about the political and social consequences of technology. This writing often maintains that this inquiry into politics and society is distinct from the scientific understanding that informs the technology itself. This essay argues that this distinction is an error. Truly, there is only one science of technology and its politics.

Appearance, deed, and thing

There are worthwhile distinctions made between how our experience of the world feels to us directly (appearance), how we can best act strategically in the world (deed), and how the world is “in itself” or, in a sense, despite ourselves (individually) (thing).

Appearance

The world as we experience it has been given the name “phenomenon” (late Latin from Greek phainomenon ‘thing appearing to view’) and so “phenomenology” is the study of what we colloquially call today our “lived experience”. Some anthropological methods are a kind of social phenomenology, and some scholars will deny that there is anything beyond phenomenology. Those that claim to have a more effective strategy or truer picture of the world may have rhetorical power, powers that work on the lived experience of the more oppressed people because they have not been adequately debunked and shown to be situated, relativized. The solution to social and political problems, to these scholars, is more phenomenology.*

Deed

There are others that see things differently. A perhaps more normal attitude is that the outcomes of one’s actions are more important than how the world feels. Things can feel one way now and another way tomorrow; does it much matter? If one holds some beliefs that don’t work when practically applied, one can correct oneself. The name for this philosophical attitude is pragmatism (from Greek pragma, ‘deed’). There are many people, including some scholars, who find this approach entirely sufficient. The solution to social and political problems is more pragmatism. Sometimes this involves writing off impractical ideas and the people who hold them as either useless or mere pawns. It is their loss.

Thing

There are others that see things still differently. A perhaps diminishing portion of the population holds theories of how the world works that transcend both their own lived experience and individual practical applications. Scientific theories about the physical nature of the universe, though tested pragmatically and through the phenomena apparent to the scientists, are based in a higher claim about their value. As Bourdieu (2004) argues, the whole field of science depends on the accepted condition that scientists fairly contend for a “monopoly on the arbitration of the real”. Scientific theories are tested through contest, with a deliberate effort by all parties to prove their theory to be the greatest. These conditions of contest hold science to a more demanding standard than pragmatism, as results of applying a pragmatic attitude will depend on the local conditions of action. Scientific theories are, in principle, accountable to the real (from late Latin realis, from Latin res ‘thing’); these scientists may be called ‘realists’ in general, though there are many flavors of realism as, appropriately, theories of what is real and how to discover reality have come and gone (see post-positivism and critical realism, for example).

Realists may or may not be concerned with social and political problems. Realists may ask: What is a social problem? What do solutions to these problems look like?

By this account, these three foci and their corresponding methodological approaches are not equivalent to each other. Phenomenology concerns itself with documenting the multiplicity of appearances. Pragmatism introduces something over and above this: a sorting or evaluation of appearances based on some goals or desired outcomes. Realism introduces something over and above pragmatism: an attempt at objectivity based on the contest of different theories across a wide range of goals. ‘Disinterested’ inquiry, or equivalently inquiry that is maximally inclusive of all interests, further refines the evaluation of which appearances are valid.

If this account sounds disparaging of phenomenology as merely a part of higher and more advanced forms of inquiry, that is truly how it is intended. However, it is equally notable that to live up to its own standard of disinterestedness, realism must include phenomenology fully within itself.

Nature and technology

It would be delightful if we could live forever in a world of appearances that takes the shape that we desire of it when we reason about it critically enough. But this is not how any but the luckiest live.

Rather, the world acts on us in ways that we do not anticipate. Things appear to us unbidden; they are born, and sometimes this is called ‘nature’ (from Latin natura ‘birth, nature, quality,’ from nat- ‘born’). The first snow of Winter comes as a surprise after a long warm Autumn. We did nothing to summon it, it was always there. For thousands of years humanity has worked to master nature through pragmatic deeds and realistic science. Now, very little of nature has been untouched by human hands. The stars are still things in themselves. Our planetary world is one we have made.

“Technology” (from Greek tekhnologia ‘systematic treatment,’ from tekhnē ‘art, craft’) is what we call those things that are made by skillful human deed. A glance out the window into a city, or at the device one uses to read this blog post, is all one needs to confirm that the world is full of technology. Sitting in the interior of an apartment now, literally everything in my field of vision except perhaps my own two hands and the potted plant is a technological artifact.

Science and technology studies: political appearances

According to one narrative, Winner (1980) famously asked the galling question “Do artifacts have politics?” and spawned a field of study** that questions the social consequences of technology. Science and Technology Studies (STS) is, purportedly, this field. The insight this field claims as its own is that technology has social impacts that are politically interesting, that the specifics of a technology’s design determine these impacts, and that the social context of the design therefore influences the consequences of the technology. At its most ambitious, STS attempts to take the specifics of the technology out of the explanatory loop, showing instead how politics drives design and implementation to further political ends.

Anthropological methods are popular among STS scholars, who often commit themselves to revealing appearances that demonstrate the political origins and impacts of technology. The STS researcher might ask, rhetorically, “Did you know that this interactive console is designed and used for surveillance?”

We can nod sagely at these observations. Indeed, things appear to people in myriad ways, and critical analysis of those appearances does expose that there is a multiplicity of ways of looking at things. But what does one do with this picture?

The pragmatic turn back to realism

When one starts to ask the pragmatic question “What is to be done?”, one leaves the domain of mere appearances and begins to question the consequences of one’s deeds. This leads one to take actions and observe the unanticipated results. Suddenly, one is engaging in experimentation, and new kinds of knowledge are necessary. One needs to study organizational theory to understand the role of the technology within a firm, economics to understand how it interacts with the economy. One quickly leaves the field of study known as “science and technology studies” as soon as one begins to consider one’s practical effects.

Worse (!), the pragmatist quickly discovers that discovering the impact of one’s deeds requires an analysis of probabilities and the difficult techniques of sampling data and correcting for bias. These techniques have been proven through the vigorous contest of the realists, and the pragmatist discovers that many tools–technologies–have been invented and provisioned for them to make it easier to use these robust strategies. The pragmatist begins to use, without understanding them, all the fruits of science. Their successes are alienated from their narrow lived experience, which is not enough to account for the miracles the world–one others have invented for them–performs for them every day.

The pragmatist must draw the following conclusions. The world is full of technology, is constituted by it. The world is also full of politics. Indeed, the world is both politics and technology; politics is a technology; technology is a form of politics. The world that must be mastered, for pragmatic purposes, is this politico-technical*** world.

What is technical about the world is that it is a world of things created through deed. These things manifest themselves in appearances in myriad and often unpredictable ways.

What is political about the world is that it is a contest of interests. To the most naive student, it may be a shock that technology is part of this contest of interests, but truly this is the most extreme naivete. What adolescent is not exposed to some form of arms race, whether it be in sports equipment, cosmetics, transportation, recreation, etc.? What adult does not encounter the reality of technology’s role in their own business or home, and the choice of what to procure and use?

The pragmatist must be struck by the sheer obviousness of the observation that artifacts “have” politics, though they must also acknowledge that “things” are different from the deeds that create them and the appearances they create. There are, after all, many mistakes in design. The effects of technology may as often be due to incompetence as they are to political intent. And to determine the difference, one must contest the designer of the technology on their own terms, in the engineering discourse that has attempted to prove which qualities of a thing survive scrutiny across all interests. The pragmatist engaging the politico-technical world has to ask: “What is real?”

The real thing

“What is real?” This is the scientific question. It has been asked again and again for thousands of years for reasons not unlike those traced in this essay. The scientific struggle is the political struggle for mastery over our own politico-technical world, over the reality that is being constantly reinvented as things through human deeds.

There are no short cuts to answering this question. There are only many ways to cop out. These steps take one backward into striving for one’s local interest or, further, into mere appearance, with its potential for indulgence and delusion. This is the darkness of ignorance. Forward, far ahead, is a horizon, an opening, a strange new light.

* This narrow view of the ‘privilege of subjectivity’ is perhaps a cause of recent confusion over free speech on college campuses. Realism, as proposed in this essay, is a possible alternative to that.

** It has been claimed that this field of study does not exist, much to the annoyance of those working within it.

*** I believe this term is no uglier than the now commonly used “sociotechnical”.

References

Bourdieu, Pierre. Science of science and reflexivity. Polity, 2004.

Winner, Langdon. “Do artifacts have politics?.” Daedalus (1980): 121-136.


by Sebastian Benthall at December 10, 2017 05:22 PM

December 08, 2017

Ph.D. student

managerialism, continued

I’ve begun preliminary skimmings of Enteman’s Managerialism. It is a dense work of analytic philosophy, thick with argument. Sporadic summaries may not do it justice. That said, the principle of this blog is that the bar for ‘publication’ is low.

According to its introduction, Enteman’s Managerialism is written by a philosophy professor (Willard Enteman) who kept finding that the “great thinkers”–Adam Smith, Karl Marx–and the theories espoused in their writing kept getting debunked by his students. Contemporary examples showed that, contrary to conventional wisdom, the United States was not a capitalist country whose only alternative was socialism. In his observation, the United States in 1993 was neither strictly speaking capitalist, nor was it socialist. There was a theoretical gap that needed to be filled.

One of the concepts reintroduced by Enteman is Robert Dahl‘s concept of polyarchy, or “rule by many”. A polyarchy is neither a dictatorship nor a democracy, but rather a form of government in which many different people with different interests, though probably not everybody, are in charge. It represents some necessary but probably insufficient conditions for democracy.

This view of power seems evidently correct in most political units within the United States. Now I am wondering if I should be reading Dahl instead of Enteman. It appears that Dahl was mainly offering this political theory in contrast to a view that posited that political power was mainly held by a single dominant elite. In a polyarchy, power is held by many different kinds of elites in contest with each other. At its democratic best, these elites are responsive to citizen interests in a pluralistic way, and this works out despite the inability of most people to participate in government.

I certainly recommend the Wikipedia articles linked above. I find I’m sympathetic to this view, having come around to something like it myself but through the perhaps unlikely path of Bourdieu.

This still limits the discussion of political power in terms of the powers of particular people. Managerialism, if I’m reading it right, makes the case that individual power is not atomic but is due to organizational power. This makes sense; we can look at powerful individuals having an influence on government, but a more useful lens could look to powerful companies and civil society organizations, because these shape the incentives of the powerful people within them.

I should make a shift I’ve made just now explicit. When we talk about democracy, we are often talking about a formal government, like a sovereign nation or municipal government. But when we talk about powerful organizations in society, we are no longer just talking about elected officials and their appointees. We are talking about several different classes of organizations–businesses, civil society organizations, and governments among them–interacting with each other.

It may be that that’s all there is to it. Maybe Capitalism is an ideology that argues for more power to businesses, Socialism is an ideology that argues for more power to formal government, and Democracy is an ideology that argues for more power to civil society institutions. These are zero-sum ideologies. Managerialism would be a theory that acknowledges the tussle between these sectors at the organizational level, as opposed to at the atomic individual level.

The reason why this is a relevant perspective to engage with today is that there has probably in recent years been a transfer of power (I might say ‘control’) from government to corporations–especially Big Tech (Google, Amazon, Facebook, Apple). Frank Pasquale makes the argument for this in a recent piece. He writes and speaks with a particular policy agenda that is far better researched than this blog post. But a good deal of the work is framed around the surprise that ‘governance’ might shift to a private company in the first place. This is a framing that will always be striking to those who are invested in the politics of the state; the very word “govern” is used, unmarked, for formal government, and so it is surprising when applied to anything else.

Managerialism, then, may be a way of pointing to an option where more power is held by non-state actors. Crucially, though, managerialism is not the same thing as neoliberalism, because neoliberalism is based on laissez-faire market ideology and contemporary information infrastructure oligopolies look nothing like laissez-faire markets! Calling the transfer of power from government to corporation today neoliberalism is quite anachronistic and misleading, really!

Perhaps managerialism, like polyarchy, is a descriptive term of a set of political conditions that does not represent an ideal, but a reality with potential to become an ideal. In that case, it’s worth investigating managerialism more carefully and determining what it is and isn’t, and why it is on the rise.


by Sebastian Benthall at December 08, 2017 01:20 AM

December 06, 2017

Ph.D. student

beginning Enteman’s Managerialism

I’ve been writing about managerialism without having done my homework.

Today I got a new book in the mail, Willard Enteman’s Managerialism: The Emergence of a New Ideology, a work of analytic political philosophy that came out in 1993. The gist of the book is that none of the dominant world ideologies of the time–capitalism, socialism, and democracy–actually describe the world as it functions.

Enter Enteman’s managerialism, which considers a society composed of organizations, not individuals, and social decisions as a consequence of the decisions of organizational managers.

It’s striking that this political theory has been around for so long, though it is perhaps more relevant today because of large digital platforms.


by Sebastian Benthall at December 06, 2017 07:49 PM

Ph.D. student

Assembling Critical Practices Reading List Posted

At the Berkeley School of Information, a group of researchers interested in the areas of critically-oriented design practices, critical social theory, and STS have hosted a reading group called “Assembling Critical Practices,” bringing together literature from these fields, in part to track their historical continuities and discontinuities, as well as to see new opportunities for design and research when putting them in conversation together.
I’ve posted our reading list from our first iterations of this group. Sections 1-3 focus on critically-oriented HCI, early critiques of AI, and an introduction to critical theory through the Frankfurt School. This list comes from an I School reading group put together in collaboration with Anne Jonas and Jenna Burrell.

Section 4 covers a broader range of social theories. This comes from a reading group sponsored by the Berkeley Social Science Matrix organized by myself and Anne Jonas with topic contributions from Nick Merrill, Noura Howell, Anna Lauren Hoffman, Paul Duguid, and Morgan Ames (Feedback and suggestions are welcome! Send an email to richmond@ischool.berkeley.edu).

See the whole reading list on this page.

by Richmond at December 06, 2017 07:29 AM

December 02, 2017

Ph.D. student

How to promote employees using machine learning without societal bias

Though it may at first read as being callous, a managerialist stance on inequality in statistical classification can help untangle some of the rhetoric around this tricky issue.

Consider the example that’s been in the news lately:

Suppose a company begins to use an algorithm to make decisions about which employees to promote. It uses a classifier trained on past data about who has been promoted. Because of societal bias, women are systematically under-promoted; this is reflected in the data set. The algorithm, naively trained on the historical data, reproduces the historical bias.
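
To make the failure mode concrete, here is a minimal sketch in Python (hypothetical, simulated data; scikit-learn used only for illustration): generate a promotion history in which women are promoted less often at equal performance, train a classifier on that history, and observe that it recommends promotion for a woman at a lower rate than for a man with the same performance.

    # A minimal sketch with simulated data: a classifier naively trained on a
    # biased promotion history reproduces that bias in its recommendations.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    is_woman = rng.integers(0, 2, n)        # 1 = woman, 0 = man
    performance = rng.normal(0, 1, n)       # same distribution for both groups

    # Historical promotions: driven by performance, with a penalty for women.
    logit = 1.5 * performance - 1.0 * is_woman
    promoted = rng.random(n) < 1 / (1 + np.exp(-logit))

    # Naive model trained on the historical record, gender included as a feature.
    X = np.column_stack([performance, is_woman])
    model = LogisticRegression().fit(X, promoted)

    # Predicted promotion probability for an average performer, man vs. woman.
    probe = np.column_stack([np.zeros(2), [0, 1]])
    print(model.predict_proba(probe)[:, 1])  # the woman's probability comes out lower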

This example describes a bad situation. It is bad from a social justice perspective; by assumption, it would be better if men and women had equal opportunity in this work place.

It is also bad from a managerialist perspective. Why? Because the whole point of using an algorithm is to improve on human decision-making; if the algorithm is not expected to correct for the societal biases that introduce irrelevancies into the promotion decision, there is no managerial reason to change business practices over to using it. This is a poor match of an algorithm to a problem.

Unfortunately, what makes this example compelling is precisely what makes it a bad example of using an algorithm in this context. The only variables discussed in the example are the socially salient ones thick with political implications: gender, and promotion. What are more universal concerns than gender relations and socioeconomic status?!

But from a managerialist perspective, promotions should be issued based on a number of factors not mentioned in the example. What factors are these? That’s a great and difficult question. Promotions can reward hard work and loyalty. They can also be issued to those who demonstrate capacity for leadership, which can be a function of how well they get along with other members of the organization. There may be a number of features that predict these desirable qualities, most of which will have to do with working conditions within the company as opposed to qualities inherent in the employee (such as their past education, or their gender).

If one were to start to use machine learning intelligently to solve this problem, then one would go about solving it in a way entirely unlike the procedure in the problematic example. One would rather draw on soundly sourced domain expertise to develop a model of the relationship between relevant, work-related factors. For many of the key parts of the model, such as general relationships between personality type, leadership style, and cooperation with colleagues, one would look outside the organization for gold standard data that was sampled responsibly.

Once the organization has this model, then it can apply it to its own employees. For this to work, employees would need to provide significant detail about themselves, and the company would need to provide contextual information about the conditions under which employees work, as these may be confounding factors.

Part of the merit of building and fitting such a model would be that, because it is based on a lot of new and objective scientific considerations, it would produce novel results in recommending promotions. Again, if the algorithm merely reproduced past results, it would not be worth the investment in building the model.

When the algorithm is introduced, it ideally is used in a way that maintains traditional promotion processes in parallel so that the two kinds of results can be compared. Evaluation of the algorithm’s performance, relative to traditional methods, is a long, arduous process full of potential insights. Using the algorithm as an intervention at first allows the company to develop a causal understanding of its impact. Insights from the evaluation can be factored back into the algorithm, improving the latter.
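
A sketch of what that parallel track might look like in practice (the column names and outcome measure here are hypothetical, not from any real system): log both sets of recommendations for a cycle, then compare agreement and the later outcomes of the cases where the two tracks disagree.

    # Sketch of a parallel evaluation with hypothetical columns: compare the
    # model's recommendations against the traditional process before acting.
    import pandas as pd

    df = pd.DataFrame({
        "employee_id":        [1, 2, 3, 4, 5, 6],
        "model_recommends":   [1, 0, 1, 1, 0, 0],   # from the fitted model
        "manager_recommends": [1, 0, 0, 1, 0, 1],   # traditional process
        "next_year_rating":   [4.5, 3.1, 4.2, 4.8, 2.9, 3.3],  # later outcome
    })

    agreement = (df.model_recommends == df.manager_recommends).mean()
    print(f"agreement rate: {agreement:.0%}")

    # Where the two tracks disagree, which candidates later performed better?
    disagreements = df[df.model_recommends != df.manager_recommends]
    print(disagreements.groupby("model_recommends")["next_year_rating"].mean())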

In all these cases, the company must keep its business goals firmly in mind. If it does this, then the rest of the logic of its method falls out of data science best practices, which are grounded in mathematical principles of statistics. While the political implications of poorly managed machine learning are troubling, effective management of machine learning, which takes the precautions necessary to develop objectivity, is ultimately a corrective to social bias. This is a case where sound science, managerialist motives, and social justice are aligned.


by Sebastian Benthall at December 02, 2017 03:40 PM

Enlightening economics reads

Nils Gilman argues that the future of the world is wide open because neoliberalism has been discredited. So what’s the future going to look like?

Given that neoliberalism is for the most part an economic vision, and that competing theories have often also been economic visions (when they have not been political or theological theories), a compelling futurist approach is to look out for new thinking about economics. The three articles below have recently taught me something new about economics:

Dani Rodrik. “Rescuing Economics from Neoliberalism”, Boston Review. (link)

This article makes the case that the association frequently made between economics as a social science and neoliberalism as an ideology is overdrawn. Of course, probably the majority of economists are not neoliberals. Rodrik is defending a view of economics that keeps its options open. I think he overstates the point with the claim, “Good economists know that the correct answer to any question in economics is: it depends.” This is just simply incorrect, if questions have their assumptions bracketed well enough. But since Rodrik’s rhetorical point appears to be that economists should not be dogmatists, he can be forgiven this overstatement.

As an aside, there is something compelling but also dangerous to the view that a social science can provide at best narrowly tailored insights into specific phenomena. These kinds of ‘sciences’ wind up being unaccountable, because the specificity of particular events prevents the repeated testing of the theories that are used to explain them. There is a risk of too much nuance, which is akin to the statistical concept of overfitting.

A different kind of article is:

Seth Ackerman. “The Disruptors” Jacobin. (link)

An interview with J.W. Mason in the smart socialist magazine, Jacobin, that had the honor of a shout-out from Matt Levine’s popular “Money Stuff” Bloomberg column (column?). One of the interesting topics it raises is whether or not mutual funds, in which many people invest in a fund that then owns a wide portfolio of stocks, are in a sense socialist and anti-competitive because shareholders no longer have an interest in seeing competition in the market.

This is original thinking, and the endorsement by Levine is an indication that it’s not a crazy thing to consider even for the seasoned practical economists in the financial sector. My hunch at this point in life is that if you want to understand the economy, you have to understand finance, because they are the ones whose job it is to profit from their understanding of the economy. As a corollary, I don’t really understand the economy because I don’t have a great grasp of the financial sector. Maybe one day that will change.

Speaking of expertise being enhanced by having ‘skin in the game’, the third article is:

Nassim Nicholas Taleb. “Inequality and Skin in the Game,” Medium. (link)

I haven’t read a lot of Taleb though I acknowledge he’s a noteworthy and important thinker. This article confirmed for me the reputation of his style. It was also a strikingly fresh look at the economics of inequality, capturing a few of the important things mainstream opinion overlooks about inequality, namely:

  • Comparing people at different life stages is a mistake when analyzing inequality of a population.
  • A lot of the cause of inequality is randomness (as opposed to fixed population categories), and this inequality is inevitable.

He’s got a theory of what kinds of inequality people resent versus what they tolerate, which is a fine theory. It would be nice to see some empirical validation of it. He writes about the relationship between ergodicity and inequality, which is interesting. He is scornful of Piketty and everyone who was impressed by Piketty’s argument, which comes off as unfriendly.
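
The claim that randomness alone generates inequality is easy to demonstrate with a toy simulation (my sketch, not Taleb’s own model): give identical agents identical expected returns, apply random multiplicative shocks, and a heavily skewed wealth distribution emerges even though no agent is “better” than any other.

    # Toy sketch (not Taleb's model): identical agents subject to random
    # multiplicative returns end up with a heavily skewed wealth distribution.
    import numpy as np

    rng = np.random.default_rng(1)
    n_agents, n_years = 10_000, 50
    wealth = np.ones(n_agents)

    for _ in range(n_years):
        wealth *= rng.lognormal(mean=0.0, sigma=0.2, size=n_agents)

    top_1_percent_share = np.sort(wealth)[-n_agents // 100:].sum() / wealth.sum()
    print(f"share held by top 1%: {top_1_percent_share:.1%}")      # far above 1%
    print(f"median / mean wealth: {np.median(wealth) / wealth.mean():.2f}")  # well below 1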

Much of what Taleb writes about the need to understand the economy through a richer understanding of probability and statistics strikes me as correct. If it is indeed the case that mainstream economics has not caught up to this, there is an opportunity here!


by Sebastian Benthall at December 02, 2017 02:28 AM

November 28, 2017

Ph.D. student

mathematical discourse vs. exit; blockchain applications

Continuing my effort to tie together the work on this blog into a single theory, I should address the theme of an old post that I’d forgotten about.

The post discusses the discourse theory of law, attributed to the later, matured Habermas. According to it, the law serves as a transmission belt between legitimate norms established by civil society and a system of power, money, and technology. When the law is both efficacious and legitimate, society prospers. The blog post toys with the idea of normatively aligned algorithmic law established in a similar way: through the norms established by civil society.

I wrote about this in 2014 and I’m surprised to find myself revisiting these themes in my work today on privacy by design.

What this requires, however, is that civil society must be able to engage in mathematical discourse, or mathematized discussion of norms. In other words, there has to be an intersection of civil society and science for this to make sense. I’m reminded of how inspired I’ve felt by Nick Doty’s work on multistakeholderism in Internet standards as a model.

I am more skeptical of this model than I have been before, if only because in the short term I’m unsure if a critical mass of scientific talent can engage with civil society well enough to change the law. This is because scientific talent is a form of capital which has no clear incentive for self-regulation. Relatedly, I’m no longer as confident that civil society carries enough clout to change policy. I must consider other options.

The other option, besides voicing one’s concerns in civil society, is, of course, exit, in Hirschman‘s sense. Theoretically an autonomous algorithmic law could be designed such that it encourages exit from other systems into itself. Or, more ecologically, competing autonomous (or decentralized, …) systems can be regulated by an exit mechanism. This is in fact what happens now with blockchain technology and cryptocurrency. Whenever there is a major failure of one of these currencies, there is a fork.


by Sebastian Benthall at November 28, 2017 04:25 PM

November 27, 2017

Ph.D. student

Re: Tear down the new institutions

Hiya Ben,

And with enough social insight, you can build community standards into decentralized software.
https://words.werd.io

Yes! I might add, though, that community standards don't need to be enacted entirely in the source code, although code could certainly help. I was in New York earlier this month talking with Cornell Tech folks (for example, Helen Nissenbaum, a philosopher) about exactly this thing: there are "handoffs" between human and technical mechanisms to support values in sociotechnical systems.

What makes federated social networking like Mastodon most of interest to me is that different smaller communities can interoperate while also maintaining their own community standards. Rather than every user having to maintain massive blocklists or trying alone to encourage better behavior in their social network, we can support admins and moderators, self-organize into the communities we prefer and have some investment in, and still basically talk with everyone we want to.

As I understand it, one place to have this design conversation is the Social Web Incubator Community Group (SocialCG), which you can find on W3C IRC (#social) and Github (but no mailing list!), and we talked about harassment challenges at a small face-to-face Social Web meeting at TPAC a few weeks back. Or I'm @npd@octodon.social; there is a special value (in a Kelty recursive publics kind of way) in using a communication system to discuss its subsequent design decisions. I think, as you note, that working on mitigations for harassment and abuse (whether it's dogpiling or fake news distribution) in the fediverse is an urgent and important need.

In a way, then, I guess I'm looking to the creation of new institutions, rather than their dismantling. Or, as cwebber put it:

I'm not very interested in how to tear systems down nearly as much as what structure to replace them with (and how you realistically think we'll get there)
@cwebber@octodon.social

While I agree that the outsize power of large social networking platforms can be harmful even as it seemed to disrupt old gatekeepers, I do want to create new institutions, institutions that reflect our values and involve widespread participation from often underserved groups. The utopia that "everything would be free" doesn't really work for autonomy, free expression, and democracy; rather, we need to build the system we really want. We need institutions both in the sense of valued patterns of behavior and in the sense of community organizations.

If you're interested in helping or have suggestions of people that are, do let me know.
Cheers,
Nick

Some links:

by npdoty@ischool.berkeley.edu at November 27, 2017 11:55 PM

November 26, 2017

MIMS 2012

My Talk at Lean Kanban Central Europe 2017

On a chilly fall day a few weeks back, I gave a talk at the cozy Lean Kanban Central Europe in Hamburg, Germany. I was honored to be invited to give a reprise of the talk I gave with Keith earlier this year at Lean Kanban North America.

I spoke about Optimizely’s software development process, and how we’ve used ideas from Lean Kanban and ESP (Enterprise Service Planning) to help us ship faster, with higher quality, to better meet customer needs. Overall it went well, but I had too much content and rushed at the end. If I do this talk again, I would cut some slides and make the presentation more focused and concise. Watch the talk below.

Jeff Zych - From 20/20 Hindsight to ESP at Optimizely @ LKCE17 from Lean Kanban Central Europe on Vimeo.

Epilogue

One of the cool things this conference does is give the audience green, yellow, and red index cards they can use to give feedback to the speakers. Green indicates you liked the talk, red means you didn’t like it, and yellow is neutral.

I got just one red card, with the comment, “topic title not accurate (this is not ESP?!).” In retrospect, I realized this person is correct — my talk really doesn’t talk about ESP much. I touch on it, but that was what Keith covered. Since he dropped out, I mostly cut those sections of the presentation since I can’t speak as confidently about them. If I did this talk solo again, I would probably change the title. So thank you, anonymous commenter 🙏

I also got two positive comments on green cards:

Thanks for sharing. Some useful insights + good to see it used in industry. - Thanks.

And:

Thank you! Great examples, (maybe less slides next time?) but this was inspiring

I also got some good tweets, like this and this.

by Jeff Zych at November 26, 2017 10:57 PM

Ph.D. student

Recap

Sometimes traffic on this blog draws attention to an old post from years ago. This can be a reminder that I’ve been repeating myself, encountering the same themes over and over again. This is not necessarily a bad thing, because I hope to one day compile the ideas from this blog into a book. It’s nice to see what points keep resurfacing.

One of these points is that liberalism assumes equality, but this is challenged by society’s need for control structures, which creates inequality, which then undermines liberalism. This post calls in Charles Taylor (writing about Hegel!) to make the point. This post makes the point more succinctly. I’ve been drawing on Beniger for the ‘society needs control to manage its own integration’ thesis. I’ve pointed to the term managerialism as referring to an alternative to liberalism based on the acknowledgement of this need for control structures. Managerialism looks a lot like liberalism, it turns out, but it justifies things on different grounds and does not get so confused. As an alternative, more Bourdieusian view of the problem, I consider the relationship between capital, democracy, and oligarchy here. There are some useful names for what happens when managerialism goes wrong and people seem disconnected from each other–anomie–or from the control structures–alienation.

A related point I’ve made repeatedly is the tension between procedural legitimacy and getting people the substantive results that they want. That post about Hegel goes into this. But it comes up again in very recent work on antidiscrimination law and machine learning. What this amounts to is that attempts to come up with a fair, legitimate procedure are going to divide up the “pie” of resources, or be perceived to divide up the pie of resources, somehow, and people are going to be upset about it, however the pie is sliced.

A related theme that comes up frequently is mathematics. My contention is that effective control is a technical accomplishment that is mathematically optimized and constrained. There are mathematical results that reveal necessary trade-offs between values. Data science has been misunderstood as positivism when in fact it is a means of power. Technical knowledge and technology are forms of capital (Bourdieu again). Perhaps precisely because it is a rare form of capital, science is politically distrusted.

To put it succinctly: lack of mathematics education, due to lack of opportunity or mathophobia, leads to alienation and anomie in an economy of control. This is partly reflected in the chaotic disciplinarity of the social sciences, especially as they react to computational social science, at the intersection of social sciences, statistics, and computer science.

Lest this all seem like an argument for the mathematical certitude of totalitarianism, I have elsewhere considered and rejected this possibility of ‘instrumentality run amok‘. I’ve summarized these arguments here, though this appears to have left a number of people unconvinced. I’ve argued this further, and think there’s more to this story (a formalization of Scott’s arguments from Seeing Like a State, perhaps), but I must admit I don’t have a convincing solution to the “control problem” yet. However, it must be noted that the answer to the control problem is an empirical or scientific prediction, not a political inclination. Whether or not it is the most interesting or important question regarding technological control has been debated to a stalemate, as far as I can tell.

As I don’t believe singleton control is a likely or interesting scenario, I’m more interested in practical ways of offering legitimacy or resistance to control structures. I used to think the “right” political solution was a kind of “hacker class consciousness“; I don’t believe this any more. However, I still think there’s a lot to the idea of recursive publics as actually existing alternative power structures. Platform coops are interesting for the same reason.

All this leads me to admit my interest in the disruptive technology du jour, the blockchain.


by Sebastian Benthall at November 26, 2017 05:44 AM

November 24, 2017

Ph.D. student

Values in design and mathematical impossibility

Under pressure from the public and no doubt with sincere interest in the topic, computer scientists have taken up the difficult task of translating commonly held values into the mathematical forms that can be used for technical design. Commonly, what these researchers discover is some form of mathematical impossibility of achieving a number of desirable goals at the same time. This work has demonstrated the impossibility of having a classifier that is fair with respect to a social category without data about that very category (Dwork et al., 2012), of having a classifier that is both statistically well calibrated for the prediction of properties of persons and equalizes the false positive and false negative rates across partitions of that population (Kleinberg et al., 2016), of preserving the privacy of individuals after an arbitrary number of queries to a database, however obscured (Dwork, 2008), and of a coherent notion of proxy variable use in privacy and fairness applications that is based on program semantics (as opposed to syntax) (Datta et al., 2017).
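
To give a flavor of these results, here is a toy illustration of my own (not taken from the cited papers, and not a proof) of the Kleinberg et al. tension: the simplest well-calibrated score assigns each group its own base rate, and already then the generalized false positive rates of the two groups come apart whenever the base rates differ.

    # Toy illustration (not a proof) of the calibration vs. error-rate tension
    # in Kleinberg et al. (2016): a perfectly calibrated score over two groups
    # with different base rates yields different generalized false positive rates.
    base_rates = {"group_a": 0.5, "group_b": 0.2}

    for group, p in base_rates.items():
        # Simplest calibrated predictor: everyone in the group gets the score p,
        # and among people scored p, a fraction p are in fact positive.
        score = p
        # Generalized false positive rate: the average score given to true
        # negatives, which here is just p for every negative in the group.
        generalized_fpr = score
        print(f"{group}: base rate {p:.1f} -> generalized FPR {generalized_fpr:.1f}")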

These are important results. An important thing about them is that they transcend the narrow discipline in which they originated. As mathematical theorems, they will be true whether or not they are implemented on machines or in human behavior. Therefore, these theorems have a role comparable to other core mathematical theorems in social science, such as Arrow’s Impossibility Theorem (Arrow, 1950), a theorem about the impossibility of having a voting system with reasonable desiderata for determining social welfare.

There can be no question of the significance of this kind of work. It was significant a hundred years ago. It is perhaps of even more immediate, practical importance when so much public infrastructure is computational. For what computation is is automation of mathematics, full stop.

There are some scholars, even some ethicists, for whom this is an unwelcome idea. I have been recently told by one ethics professor that to try to mathematize core concepts in ethics is to commit a “category mistake”. This is refuted by the clearly productive attempts to do this, some of which I’ve cited above. This belief that scientists and mathematicians are on a different plane than ethicists is quite old: Hannah Arendt argued that scientists should not be trusted because their mathematical language prevented them from engaging in normal political and ethical discourse (Arendt, 1959). But once again, this recent literature (as well as much older literature in such fields as theoretical economics) demonstrates that this view is incorrect.

There are many possible explanations for the persistence of the view that mathematics and the hard sciences do not concern themselves with ethics, are somehow lacking in ethical education, or that engineers require non-technical people to tell them how to engineer things more ethically.

One reason is that the sciences are much broader in scope than the ethical results mentioned here. It is indeed possible to get a specialist’s education in a technical field without much ethical training, even in the mathematical ethics results mentioned above.

Another reason is that whereas understanding the mathematical tradeoffs inherent in certain kinds of design is an important part of ethics, it can be argued by others that what’s most important about ethics is some substantive commitment that cannot be mathematically defended. For example, suppose half the population believes that it is most ethical for members of the other half to treat them with special dignity and consideration, at the expense of the other half. It may be difficult to arrive at this conclusion from mathematics alone, but this group may advocate for special treatment out of ethical consideration nonetheless.

These two reasons are similar. The first states that mathematics includes many things that are not ethics. The second states that ethics potentially (and certainly in the minds of some people) includes much that is not mathematical.

I want to bring up a third reason, which is perhaps more profound than the other two, and it is this: what distinguishes mathematics as a field is its commitment to logical non-contradiction, which means that it is able to baldly claim when goals are impossible to achieve. Acknowledging tradeoffs is part of what mathematicians and scientists do.

Acknowledging tradeoffs is not something that everybody else is trained to do, and indeed many philosophers are apparently motivated by the ability to surpass limitations. Alain Badiou, who is one of the living philosophers that I find to be most inspiring and correct, maintains that mathematics is the science of pure Being, of all possibilities. Reality is just a subset of these possibilities, and much of Badiou’s philosophy is dedicated to the Event, those points where the logical constraints of our current worldview are defeated and new possibilities open up.

This is inspirational work, but it contradicts what many mathematicians do in fact, which is to identify impossibility. Science forecloses possibilities where a poet may see infinite potential.

Other ethicists, especially existentialist ethicists, see the limitation and expansion of possibility, especially in the possibility of personal accomplishment, as fundamental to ethics. This work is inspiring precisely because it states so clearly what it is we hope for and aspire to.

What mathematical ethics often tells us is that these hopes are fruitless. The desiderata cannot be met. Somebody will always get the short stick. Engineers, unable to triumph against mathematics, will always disappoint somebody, and whoever that somebody is can always argue that the engineers have neglected ethics, and demand justice.

There may be good reasons for making everybody believe that they are qualified to comment on the subject of ethics. Indeed, in a sense everybody is required to act ethically even when they are not ethicists. But the preceding argument suggests that perhaps mathematical education is an essential part of ethical education, because without it one can have unrealistic expectations of the ethics of others. This is a scary thought because mathematics education is so often so poor. We live today, as we have lived before, in a culture with great mathophobia (Papert, 1980) and this mathophobia is perpetuated by those who try to equate mathematical training with immorality.

References

Arendt, Hannah. The human condition:[a study of the central dilemmas facing modern man]. Doubleday, 1959.

Arrow, Kenneth J. “A difficulty in the concept of social welfare.” Journal of political economy 58.4 (1950): 328-346.

Benthall, Sebastian. “Philosophy of computational social science.” Cosmos and History: The Journal of Natural and Social Philosophy 12.2 (2016): 13-30.

Datta, Anupam, et al. “Use Privacy in Data-Driven Systems: Theory and Experiments with Machine Learnt Programs.” arXiv preprint arXiv:1705.07807 (2017).

Dwork, Cynthia. “Differential privacy: A survey of results.” International Conference on Theory and Applications of Models of Computation. Springer, Berlin, Heidelberg, 2008.

Dwork, Cynthia, et al. “Fairness through awareness.” Proceedings of the 3rd Innovations in Theoretical Computer Science Conference. ACM, 2012.

Kleinberg, Jon, Sendhil Mullainathan, and Manish Raghavan. “Inherent trade-offs in the fair determination of risk scores.” arXiv preprint arXiv:1609.05807 (2016).

Papert, Seymour. Mindstorms: Children, computers, and powerful ideas. Basic Books, Inc., 1980.


by Sebastian Benthall at November 24, 2017 11:09 PM

November 22, 2017

Ph.D. student

Pondering “use privacy”

I’ve been working carefully with Datta et al.’s “Use Privacy” work (link), which makes a clear case for how a programmatic, data-driven model may be statically analyzed for its use of a proxy of a protected variable, and repaired.

Their system has a number of interesting characteristics, among which are:

  1. The use of a normative oracle for determining which proxy uses are prohibited.
  2. A proof that there is no coherent definition of proxy use which has all of a set of very reasonable properties defined over function semantics.

Given (2), they continue with a compelling study of how a syntactic definition of proxy use, one based on the explicit contents of a function, can support a system of detecting and repairing proxies.

My question is to what extent the sources of normative restriction on proxies (those characterized by the oracle in (1)) are likely to favor syntactic proxy use restrictions, as opposed to semantic ones. Since ethicists and lawyers, who are the purported sources of these normative restrictions, are likely to consider any technical system a black box for the purpose of their evaluation, they will naturally be concerned with program semantics. It may be comforting for those responsible for a technical program to be able to, in a sense, avoid liability by assuring that their programs are not using a restricted proxy. But, truly, so what? Since these syntactic considerations do not make any semantic guarantees, will they really plausibly address normative concerns?
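
A toy contrast, my own and far simpler than the paper’s development, may make the distinction vivid: the two functions below are semantically identical on their inputs, yet only the first syntactically computes an intermediate value that serves as a proxy, so a purely semantic definition of proxy use cannot tell them apart while a syntactic one can.

    # Toy contrast (not from Datta et al.): two semantically equivalent programs,
    # only one of which explicitly computes a proxy for a protected attribute.
    def decide_with_proxy(salary: float, buys_magazine_x: bool) -> float:
        inferred_gender = 0.9 if buys_magazine_x else 0.1   # intermediate proxy value
        return 0.7 * salary / 100_000 - 0.3 * inferred_gender

    def decide_inlined(salary: float, buys_magazine_x: bool) -> float:
        # Algebraically identical output; no proxy variable appears in the code.
        return 0.7 * salary / 100_000 - (0.27 if buys_magazine_x else 0.03)

    for s, b in [(80_000, True), (80_000, False), (50_000, True)]:
        assert abs(decide_with_proxy(s, b) - decide_inlined(s, b)) < 1e-12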

A striking result from their analysis which has perhaps broader implications is the incoherence of a semantic notion of proxy use. Perhaps sadly but also substantively, this result shows that a certain plausible normative requirement is impossible for a system to fulfill in general. Only restricted conditions make such a thing possible. This seems to be part of a pattern in these rigorous computer science evaluations of ethical problems; see also Kleinberg et al. (2016) on how it’s impossible to meet several plausible definitions of “fairness” in risk-assessment scores across social groups except under certain conditions.

The conclusion for me is that what this nobly motivated computer science work reveals is that what people are actually interested in normatively is not the functioning of any particular computational system. They are rather interested in social conditions more broadly, which are rarely aligned with our normative ideals. Computational systems, by making realities harshly concrete, are disappointing, but it’s a mistake to make that a disappointment with the computing systems themselves. Rather, there are mathematical facts that are disappointing regardless of what sorts of systems mediate our social world.

This is not merely a philosophical consideration or sociological observation. Since the interpretation of laws is part of the process of informing normative expectations (as in a normative oracle), it is an interesting and perhaps open question how lawyers and judges, in their task of legal interpretation, make use of the mathematical conclusions about normative tradeoffs being offered up by computer scientists.

References

Datta, Anupam, et al. “Use Privacy in Data-Driven Systems: Theory and Experiments with Machine Learnt Programs.” arXiv preprint arXiv:1705.07807 (2017).

Kleinberg, Jon, Sendhil Mullainathan, and Manish Raghavan. “Inherent trade-offs in the fair determination of risk scores.” arXiv preprint arXiv:1609.05807 (2016).


by Sebastian Benthall at November 22, 2017 06:02 PM

Ph.D. student

Interrogating Biosensing Privacy Futures with Design Fiction (video)

 

I presented this talk in November 2017, at the Berkeley I School PhD Research Reception. The talk discusses findings from 2 of our papers:

Richmond Y. Wong, Ellen Van Wyk and James Pierce. (2017). Real-Fictional Entanglements: Using Science Fiction and Design Fiction to Interrogate Sensing Technologies. In Proceedings of the ACM Conference on Designing Interactive Systems (DIS ’17). https://escholarship.org/uc/item/7r229796

Richmond Y. Wong, Deirdre K. Mulligan, Ellen Van Wyk, James Pierce and John Chuang. (2017). Eliciting Values Reflections by Engaging Privacy Futures Using Design Workbooks. Proceedings of the ACM Human Computer Interaction (CSCW 2018 Online First). 1, 2, Article 111 (November 2017), 27 pages. https://escholarship.org/uc/item/78c2802k

More about this project and some of the designs can be found here: biosense.berkeley.edu/projects/sci-fi-design-fiction/

by Richmond at November 22, 2017 05:25 PM

November 19, 2017

Ph.D. student

On achieving social equality

When evaluating a system, we have a choice of evaluating its internal functions–the inside view–or evaluating its effects situated in a larger context–the outside view.

Decision procedures (whether they are embodied by people or performed in concert with mechanical devices–I don’t think this distinction matters here) for sorting people are just such a system. If I understand correctly, the question of which principles animate antidiscrimination law hinges on this difference between the inside and outside view.

We can look at a decision-making process and evaluate whether as a procedure it achieves its goals of e.g. assigning credit scores without bias against certain groups. Even including processes of the gathering of evidence or data in such a system, it can in principle be bounded and evaluated by its ability to perform its goals. We do seem to care about the difference between procedural discrimination and procedural nondiscrimination. For example, an overtly racist policy that ignores true talent and opportunity seems worse than a bureaucratic system that is indifferent to external inequality between groups that then gets reflected in decisions made according to other factors that are merely correlated with race.

The latter case has been criticized in the outside view. The criticism is captured by the phrasing that “algorithms can reproduce existing biases”. The supposedly neutral algorithm (which can, again, be either human or machine) is not neutral in its impact because its considerations of e.g. business interest are indifferent to the conditions outside it. The business is attracted to wealth and opportunity, which are held disproportionately by some part of the population, so the business is attracted to that population.

There is great wisdom in recognizing that institutions that are neutral in their inside view will often reproduce bias in the outside view. But it is incorrect to therefore conflate neutrality in the inside view with a biased inside view, even though their effects may under some circumstances be the same. When I say it is “incorrect”, I mean that they are in fact different because, for example, if the external conditions of a procedurally neutral institution change, then it will reflect those new conditions. A procedurally biased institution will not reflect those new conditions in the same way.

Empirically it is very hard to tell when an institution is being procedurally neutral and indeed this is the crux of an enormous amount of political tension today. The first line of defense of an institution accused of bias is to claim that its procedural neutrality is merely reflecting environmental conditions outside of its control. This is unconvincing for many politically active people. It seems to me that it is now much more common for institutions to avoid this problem by explicitly declaring their bias. Rather than try to accomplish the seemingly impossible task of defending their rigorous neutrality, it’s easier to declare where one stands on the issue of resource allocation globally and adjust one’s procedure accordingly.

I don’t think this is a good thing.

One consequence of evaluating all institutions based on their global, “systemic” impact as opposed to their procedural neutrality is that it hollows out the political center. The evidence is in: politics has become more and more polarized. This is inevitable if politics becomes so explicitly about maintaining or reallocating resources as opposed to about building neutrally legitimate institutions. When one party in Congress considers a tax bill which seems designed mainly to enrich its own constituencies at the expense of the other’s, things have gotten out of hand. The idea of a unified ‘good government’ has been all but abandoned.

An alternative is a commitment to procedural neutrality in the inside view of institutions, or at least some institutions. The fact that there are many different institutions that may have different policies is indeed quite relevant here. For while it is commonplace to say that a neutral institution will “reproduce existing biases”, “reproduction” is not a particularly helpful word here. Neither is “bias”. What we can say more precisely is that the operations of a procedurally neutral institution will not change the distribution of resources, even when that distribution is unequal.

But if we do not hold all institutions accountable for correcting the inequality of society, isn’t that the same thing as approving of the status quo, which is so unequal? A thousand times no.

First, there’s the problem that many institutions are not, currently, procedurally neutral. Procedural neutrality is a higher standard than what many institutions are currently held to. Consider what is widely known about human beings and their implicit biases. One good argument for transferring decision-making authority to machine learning algorithms, even standard ones not augmented for ‘fairness’, is that they will not have the same implicit, inside, biases as the humans that currently make these decisions.

Second, there’s the fact that responsibility for correcting social inequality can be taken on by some institutions that are dedicated to this task while others are procedurally neutral. For example, one can consistently believe in the importance of a progressive social safety net combined with procedurally neutral credit reporting. Society is complex and perhaps rightly has many different functioning parts; not all the parts have to reflect socially progressive values for the arc of history to bend towards justice.

Third, there is reason to believe that even if all institutions were procedurally neutral, there would eventually be social equality. This has to do with the mathematically bulletproof but often ignored phenomenon of regression towards the mean. When values are sampled from a process at random, their average will approach the mean of the distribution as more values are accumulated. In terms of the allocation of resources in a population, there is some random variation in the way resources flow. When institutions are fair, inequality in resource allocation will settle into an unbiased distribution. While there may continue to be some apparent inequality due to disorganized heavy tail effects, these will not be biased, in a political sense.
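
The statistical intuition can be sketched with a small simulation (mine, and of course far simpler than any real economy): when resources are handed out by a heavy-tailed but group-blind process, the average allocation in each group converges to the same population mean as draws accumulate, even though individual allocations remain very unequal.

    # Sketch: a group-blind random allocation, even a heavy-tailed one, gives
    # every group the same average share as draws accumulate.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 100_000
    group = rng.integers(0, 2, n)        # two groups; membership is ignored below
    allocation = rng.pareto(3.0, n)      # heavy-tailed, but blind to group

    for g in (0, 1):
        print(f"group {g}: mean allocation {allocation[group == g].mean():.3f}")
    print(f"population mean: {allocation.mean():.3f}")   # all three agree closely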

Fourth, there is the problem of political backlash. Whenever political institutions are weak enough to be modified towards what is purported to be a ‘substantive’ or outside view neutrality, that will always be because some political coalition has attained enough power to swing the pendulum in their favor. The more explicit they are about doing this, the more it will mobilize the enemies of this coalition to try to swing the pendulum back the other way. The result is war by other means, the outcome of which will never be fair, because in war there are many who wind up dead or injured.

I am arguing for a centrist position on these matters, one that favors procedural neutrality in most institutions. This is not because I don’t care about substantive, “outside view” inequality. On the contrary, it’s because I believe that partisan bickering that explicitly undermines the inside neutrality of institutions undermines substantive equality. Partisan bickering over the scraps within narrow institutional frames is a distraction from, for example, the way the most wealthy avoid taxes while the middle class pays even more. There is a reason why political propaganda that induces partisan divisions is a weapon. Agreement about procedural neutrality is a core part of civic unity that allows for collective action against the very most abusively powerful.

References

Zachary C. Lipton, Alexandra Chouldechova, Julian McAuley. “Does mitigating ML’s disparate impact require disparate treatment?” 2017


by Sebastian Benthall at November 19, 2017 06:03 PM

November 18, 2017

Ph.D. student

what to do about the blog

Initially, I thought, I needed to get bcc.npdoty.name to load over HTTPS. Previously I had been using TLS for part of the transit via Cloudflare, but I've moved away from that: I'd rather not have the additional service, it was only a partial solution, and I'm tired of seeing Certificate Transparency alerts from Facebook when Cloudflare creates a new cert every week for my domain name and a thousand others. But now I've heard that Google has announced good HTTPS support for custom domain names when using Google App Engine, so I should be good to go. HTTPS is important, and I should fix that before I post more on this blog.

I was plagued for weeks trying to use Google's new developer console, reading through various documentation that was out of date, confronted by the vaguest possible error messages. Eventually, I discover that there's just a bug for most or all long-time App Engine users who created custom domains on applications years ago using a different system; the issue is acknowledged; no timeline for a fix; no documentation; no workaround.* Just a penalty for being a particularly long-time customer. Meanwhile, Google is charging me for server time on the blog that sees no usage, for some other reason I haven't been able to nail down.

I start to investigate other blogging software: is Ghost the preferred customizable blogging platform these days? What about static-site generation, from Jekyll, or Hugo? Can I find something written in a language where I could comfortably customize it (JavaScript, Python) and still have a well-supported and simple infrastructure for creating static pages that I can easily host on my existing simple infrastructure? I go through enough of the process to actually set up a sample Ghost installation on WebFaction, before realizing (and I really credit the candor of their documentation here) that this is way too heavyweight for what I'm trying to do.

Ah, I fell into that classic trap! This isn't blogging. This isn't even working on building a new and better blogging infrastructure or social media system. This isn't writing prose, this isn't writing code. This is meta-crap, this is clicking around, comparing feature lists, being annoyed about technology. So, to answer the original small question to myself "what to do about the blog", how about, for now, "just fucking post on whatever infrastructure you've got".

—npd

* I see that at least one of the bugs has some updates now, and maybe using a different (command-line) tool I could unblock myself with that particular sub-issue.
https://issuetracker.google.com/issues/66984633
Maybe. Or maybe I would hit their next undocumented error message and get stuck again, having invested several more hours in it. And it does actually seem important to move away from this infrastructure; I'm not really sure to what extent Google is supporting it, but I do know that when I run into completely blocking issues that there is no way for me to contact Google's support team or get updates on issues (beyond, search various support forums for hours to reverse-engineer your problem, see if there's an open bug on their issue tracker, click Star), and that in the meantime they are charging me what I consider a significant amount of money.

by nick@npdoty.name at November 18, 2017 09:43 PM