# School of Information Blogs

## March 14, 2018

Ph.D. student

#### Artisanal production, productivity and automation, economic engines

I’m continuing to read Moretti’s The New Geography of Jobs (2012). Except for the occasional gushing over the revolutionary-ness of some new payments startup, a symptom no doubt of being so close to Silicon Valley, it continues to be an enlightening and measured read on economic change.

There are a number of useful arguments and ideas from the book, which are probably sourced more generally from economics, which I’ll outline here, with my comments:

Local, artisanal production can never substitute for large-scale manufacturing. Moretti argues that while in many places in the United States local artisanal production has cropped up, it will never replace the work done by large-scale production. Why? Because by definition, local artisanal production is (a) geographically local, and therefore unable to scale beyond a certain region, and (b) defined in part by its uniqueness, differentiating it from mainstream products. In other words, if your local small-batch shop grows to the point where it competes with large-scale production, it is no longer local and small-batch.

Interestingly, this argument about production scaling echoes work on empirical heavy tail distributions in social and economic phenomena. A world where small-scale production constituted most of production would have an exponentially bounded distribution of firm productivity. The world doesn’t look that way, and so we have very very big companies, and many many small companies, and they coexist.

Higher labor productivity in a sector results in both a richer society and fewer jobs in that sector. Productivity is how much a person’s labor produces. The idea here is that when labor productivity increases, the firm that hires those laborers needs fewer people working to satisfy its demand. But those people will be paid more, because their labor is worth more to the firm.

I think Moretti is hand-waving a bit when he argues that a society only gets richer through increased labor productivity. I don’t follow it exactly.

But I do find it interesting that Moretti calls “increases in productivity” what many others would call “automation”. Several related phenomena are viewed critically in the popular discourse on job automation: more automation causes people to lose jobs; more automation causes some people to get richer (they are higher paid); this means there is a perhaps pernicious link between automation and inequality. One aspect of this is that automation is good for capitalists. But another aspect of this is that automation is good for lucky laborers whose productivity and earnings increase as a result of automation. It’s a more nuanced story than one that is only about job loss.

The economic engine of an economy is what brings in money, it need not be the largest sector of the economy. The idea here is that for a particular (local) economy, the economic engine of that economy will be what pulls in money from outside. Moretti argues that the economic engine must be a “trade sector”, meaning a sector that trades (sells) its goods beyond its borders. It is the workers in this trade-sector economic engine that then spend their income on the “non-trade” sector of local services, which includes schoolteachers, hairdressers, personal trainers, doctors, lawyers, etc. Moretti’s book is largely about how the innovation sector is the new economic engine of many American economies.

One thing that comes to mind reading this point is that not all economic engines are engaged in commercial trade. I’m thinking about Washington, DC, and the surrounding area; the economic engine there is obviously the federal government. Another strange kind of economic engine is the top-tier research university, like Carnegie Mellon or UC Berkeley. Top-tier research universities, unlike many other educational institutions, are constantly selling their degrees to foreign students. This means that they can serve as an economic engine.

Overall, Moretti’s book is a useful guide to economic geography, one that clarifies the economic causes of a number of political tensions that are often discussed in a more heated and, to me, less useful way.

References

Moretti, Enrico. The new geography of jobs. Houghton Mifflin Harcourt, 2012.

## March 10, 2018

Ph.D. student

#### the economic construction of knowledge

We’ve all heard about the social construction of knowledge.

Here’s the story: Knowledge isn’t just in the head. Knowledge is a social construct. What we call “knowledge” is what it is because of social institutions and human interactions that sustain, communicate, and define it. Therefore all claims to absolute and unsituated knowledge are suspect.

There are many different social constructivist theories. One of the best, in my opinion, is Bourdieu’s, because he has one of the best social theories. For Bourdieu, social fields get their structure in part through the distribution of various kinds of social capital. Economic capital (money!) is one kind of social capital. Symbolic capital (the fact of having published in a peer-reviewed journal) is a different form of capital. What makes the sciences special, for Bourdieu, is that they are built around a particular mechanism for awarding symbolic capital that makes it (science) get the truth (the real truth). Bourdieu thereby harmonizes social constructivism with scientific realism, which is a huge relief for anybody trying to maintain their sanity in these trying times.

This is all super. What I’m beginning to appreciate more as I age, develop, and in some sense I suppose ‘progress’, is that economic capital is truly the trump card of all the forms of social capital, and that this point is underrated in social constructivist theories in general. What I mean by this is that flows of economic capital are a condition for the existence of the social fields (institutions, professions, etc.) in which knowledge is constructed. This is not to say that everybody engaged in the creation of knowledge is thinking about monetization all the time–to make that leap would be to commit the ecological fallacy. But at the heart of almost every institution where knowledge is created, there is somebody fundraising or selling.

Why, then, don’t we talk more about the economic construction of knowledge? It is a straightforward idea. To understand an institution or social field, you “follow the money”, seeing where it comes from and where it goes, and that allows you to situate the practice in its economic context and thereby determine its economic meaning.

## March 08, 2018

MIMS 2012

#### Why I Blog

The fable of the millipede and the songbird is a story about the difference between instinct and knowledge. It goes like this:

High above the forest floor, a millipede strolled along the branch of a tree, her thousand pairs of legs swinging in an easy gait. From the tree top, songbirds looked down, fascinated by the synchronization of the millipede’s stride. “That’s an amazing talent,” chirped the songbirds. “You have more limbs than we can count. How do you do it?” And for the first time in her life the millipede thought about this. “Yes,” she wondered, “how do I do what I do?” As she turned to look back, her bristling legs suddenly ran into one another and tangled like vines of ivy. The songbirds laughed as the millipede, in a panic of confusion, twisted herself in a knot and fell to earth below.

On the forest floor, the millipede, realizing that only her pride was hurt, slowly, carefully, limb by limb, unraveled herself. With patience and hard work, she studied and flexed and tested her appendages, until she was able to stand and walk. What was once instinct became knowledge. She realized she didn’t have to move at her old, slow, rote pace. She could amble, strut, prance, even run and jump. Then, as never before, she listened to the symphony of the songbirds and let music touch her heart. Now in perfect command of thousands of talented legs, she gathered courage, and, with a style of her own, danced and danced a dazzling dance that astonished all the creatures of her world. [1]

The lesson here is that conscious reflection on an unconscious action will impair your ability to do that action. But after you introspect and really study how you do what you do, it will transform into knowledge and you will have greater command of that skill.

That, in a nutshell, is why I blog. The act of introspection — of turning abstract thoughts into concrete words — strengthens my knowledge of that subject and enables me to dance a dazzling dance.

[1] I got this version of the fable from the book Story: Substance, Structure, Style and the Principles of Screenwriting by Robert McKee, but can’t find the original version of it anywhere (it’s uncredited in his book). The closest I can find is The Centipede’s Dilemma, but that version lacks the second half of the fable.

## March 06, 2018

Ph.D. student

#### Appealing economic determinism (Moretti)

I’ve started reading Enrico Moretti’s The New Geography of Jobs and am finding it very clear and persuasive (though I’m not far in).

Moretti is taking up the major theme of What The Hell Is Happening To The United States, which is being addressed by so many from different angles. But whereas many writers seem to have an agenda–e.g., Noble advocating for political reform regulating algorithms; Deneen arguing for return to traditional community values in some sense; etc.–or to focus on particularly scandalous or dramatic aspects of changing political winds–such as Gilman’s work on plutocratic insurgency and collapsing racial liberalism–Moretti is doing economic geography showing how long-term economic trends are shaping the distribution of prosperity within the U.S.

From the introduction, it looks like there are a few notable points.

The first is about what Moretti calls the Great Divergence, which has been going on since the 1980’s. This is the decline of U.S. manufacturing as jobs moved from Detroit, Michigan to Shenzhen, Guangdong, paired with the rise of an innovation economy where the U.S. takes the lead in high-tech and creative work. The needs of the high-tech industry–high-skilled workers, who may often be educated immigrants–change the demographics of the innovation hubs and result in the political polarization we’re seeing on the national stage. This is an account of the economic base determining the cultural superstructure which is so fraught right now, and exactly what I was getting at with my rant yesterday about the politics of business.

The second major point Moretti makes which is probably understated in more polemical accounts of the U.S. political economy is the multiplier effect of high-skilled jobs in innovation hubs. Moretti argues that every high-paid innovation job (like software engineer or scientist) results in four other jobs in the same city. These other jobs are in service sectors that are by their nature local and not able to be exported. The consequence is that the innovation economy does not, contrary to its greatest skeptics, only benefit the wealthy minority of innovators to the ruin of the working class. However, it does move the location of working class prosperity into the same urban centers where the innovating class is.

This gives one explanation for why the backlash against Obama-era economic policies was such a shock to the coastal elites. In the locations where the “winners” of the innovation economy were gathered, there was also growth in the service economy which by objective measures increased the prosperity of the working class in those cities. The problem was the neglected working class in those other locations, who felt left behind and struck back against the changes.

A consequence of this line of reasoning is that arguments about increasing political tribalism are really a red herring. Social tribes on the Internet are a consequence, not a cause, of divisions that come from material conditions of economy and geography.

Moretti even appears to have a constructive solution in mind. He argues that there are “three Americas”: the rich innovation hubs, the poor former manufacturing centers, and mid-sized cities that have not yet gone either way. His recipe for economic success in these middle cities is attracting high-skilled workers who are a kind of keystone species for prosperous economic ecosystems.

References

Deneen, Patrick J. Why Liberalism Failed. Yale University Press, 2018.

Gilman, Nils. “The twin insurgency.” American Interest 15 (2014): 3-11.

Gilman, Nils. “The Collapse of Racial Liberalism.” The American Interest (2018).

Moretti, Enrico. The new geography of jobs. Houghton Mifflin Harcourt, 2012.

Noble, Safiya Umoja. Algorithms of Oppression: How search engines reinforce racism. NYU Press, 2018.

MIMS 2014

#### I Googled Myself

As a huge enthusiast of A/B testing, I have been wanting to learn how to run A/B tests through Google Optimize for some time. However, it’s hard to do this without being familiar with all the different parts of the Google product ecosystem. So I decided it was time to take the plunge and finally Google myself. This post will cover my adventures with several products in the Google product suite including: Google Analytics (GA), Google Tag Manager (GTM), Google Optimize (GO), and Google Data Studio (GDS).

Of course, in order to do A/B testing, you have to have A) something to test, and B) sufficient traffic to drive significant results. Early on I counted out trying to A/B test this blog—not because I don’t have sufficient traffic—I got tons of it, believe me . . . (said in my best Trump voice). The main reason I didn’t try to do it with my blog is that I don’t host it, WordPress does, so I can’t easily access or manipulate the source code to implement an A/B test. It’s much easier if I host the website myself (which I can do locally using MAMP).

But how do I send traffic to a website I’m hosting locally? By simulating it, of course. Using a nifty python library called Selenium, I can be as popular as I want! I can also simulate any kind of behavior I want, and that gives me maximum control. Since I can set the expected outcomes ahead of time, I can more easily troubleshoot/debug whenever the results don’t square with expectations.
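A minimal sketch of the kind of Selenium visit loop this describes; the URL, element ids, and click probabilities here are hypothetical stand-ins, and the funnel logic is factored out so it can run against any object exposing Selenium’s `get`/`find_element` interface:

```python
import random

# Hypothetical click probabilities for each landing-page variant.
CLICK_RATE = {"green": 0.80, "red": 0.95}

def simulate_visit(driver, base_url, rng=random.random):
    """Send one simulated user to the landing page and maybe click the button.

    `driver` can be a real Selenium WebDriver or any stub with the same
    get()/find_element() interface; the element ids are assumptions
    about the test site's markup.
    """
    driver.get(base_url)
    variant = driver.find_element("id", "variant").text  # 'green' or 'red'
    if rng() < CLICK_RATE[variant]:
        driver.find_element("id", "cta-button").click()
        return "clicked"
    return "bounced"
```

Against the real site this would be driven with something like `simulate_visit(webdriver.Chrome(), "http://localhost:8888/")` in a loop (assuming selenium and a chromedriver install).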

### My Mini “Conversion Funnel”

When it came to designing my first A/B test, I wanted to keep things relatively simple while still mimicking the general flow of an e-commerce conversion funnel. I designed a basic website with two different landing page variants—one with a green button and one with a red button. I arbitrarily decided that users would be 80% likely to click on the button when it’s green and 95% likely to click on the button when it’s red (these conversion rates are unrealistically high, I know). Users who didn’t click on the button would bounce, while those who did would advance to the “Purchase Page”.

To make things a little more complicated, I decided to have 20% of ‘green’ users bounce after reaching the purchase page. The main reason for this was to test out GA’s funnel visualizations to see if they would faithfully reproduce the graphic above (they did). After the purchase page, users would reach a final “Thank You” page with a button to claim their gift. There would be no further attrition at this point; all users who arrived on this page would click the “Claim Your Gift” button. This final action was the conversion (or ‘Goal’ in GA-speak) that I set as the objective for the A/B test.
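Since the simulated behavior is fully specified up front, the whole funnel can be sketched as a small Monte Carlo simulation (probabilities restated from above; the function names are mine):

```python
import random

# Probabilities as defined for the test: landing-page click, then
# purchase-page survival (20% of 'green' users bounce at purchase).
P_CLICK = {"green": 0.80, "red": 0.95}
P_PURCHASE = {"green": 0.80, "red": 1.00}

def run_user(variant, rng=random.random):
    """Return the furthest funnel step one simulated user reaches."""
    if rng() >= P_CLICK[variant]:
        return "bounced_landing"
    if rng() >= P_PURCHASE[variant]:
        return "bounced_purchase"
    return "converted"  # everyone who gets this far clicks 'Claim Your Gift'

def conversion_rate(variant, n=100_000):
    return sum(run_user(variant) == "converted" for _ in range(n)) / n
```

End to end, the expected conversion rates are 0.80 × 0.80 = 64% for green and 0.95 × 1.00 = 95% for red, which is what the GA goal should report.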

With GA, I jumped straight into the deep end, adding gtag.js snippets to all the pages of my site. Then I implemented a few custom events and dimensions via javascript. In retrospect, I would have done the courses offered by Google first (Google Analytics for Beginners & Advanced Google Analytics) . These courses give you a really good lay of the land of what GA is capable of, and it’s really impressive. If you have a website, I don’t see how you can get away with not having it plugged into GA.

In terms of features, the real time event tracking is a fantastic resource for debugging GA implementations. However, the one feature I wasn’t expecting GA to have was the benchmarking feature. It allows you to compare the traffic on your site with websites in similar verticals. This is really great because even if you’re totally out of ideas on what to analyze (which you shouldn’t be given the rest of the features in GA), you can use the benchmarking feature as a starting point for figuring out the weak points in your site.

The other great thing about the two courses I mentioned is that they’re free, and at the end you can take the GA Individual Qualification exam to certify your knowledge about GA (which I did). If you’re gonna put in the time to learn the platform, it’s nice to have a little endorsement at the end.

### Google Tag Manager

After implementing everything in gtag.js, I did it all again using GTM. I can definitely see the appeal of GTM as a way to deploy GA; it abstracts away all of that messy javascript and replaces it with a clean user interface and a handy debug tool. The one drawback of GTM seems to be that it doesn’t send events to GA quite as reliably as gtag.js. Specifically, in my GA reports for the ‘red button’ variant of my A/B test, I saw more conversions for the “Claim Your Gift” button than conversions for the initial click to get off the landing page. Given the attrition rates I defined, that’s impossible. I tried to configure the tag to wait until the event was sent to GA before the next page was loaded, but there still seemed to be some data meant to be sent to GA that got lost in the mix.

Before trying out GO, I implemented my little A/B test through Google’s legacy system, Content Experiments. I can definitely see why GO is the way of the future. There’s a nifty tool that lets you edit visual DOM elements right in the browser while you’re defining your variants. In Content Experiments, you have to either provide two separate pages for the A and B variants or implement the expected changes on your end. It’s a nice thing to not have to worry about, especially if you’re not a pro front-end developer.

Also, it’s clear that GO has more powerful decision features. For one thing, it has Bayesian decision logic, which is more comprehensible for business stakeholders and is gaining steam in online A/B testing. It also supports multivariate testing, which is a great addition, though I didn’t use that functionality for this test.

The one thing that was a bit irritating with GO was setting it up to run on localhost. It took a few hours of yak shaving to get the different variants to actually show up on my computer. It boiled down to 1) editing my /etc/hosts file with an extra line in accordance with this post on the Google Advertiser Community forum and 2) making sure the Selenium driver navigated to `localhost.domain` instead of just `localhost`.
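For reference, the extra hosts-file line looks like this (`localhost.domain` is just the non-bare hostname I happened to use; any name that resolves locally should work):

```
# /etc/hosts
127.0.0.1   localhost.domain
```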

### Google Data Studio

Nothing is worth doing unless you can make a dashboard at the end of it, right? While GA has some powerful report-generating capabilities, it can feel somewhat rigid in terms of customizability. GDS is a relatively new program that gives you way more options to visualize the data sitting in GA. But while GDS has an advantage over GA, it does have some frustrating limitations which I hope they resolve soon. In particular, I hope they’ll let you show percent differences between two scorecards. As someone who’s done a lot of A/B test reports, I know that the thing stakeholders are most interested in seeing is the % difference, or lift, caused by one variant versus another.

Here is a screenshot of the ultimate dashboard (or a link if you want to see it live):

The dashboard was also a good way to do a quick check to make sure everything in the test was working as expected. For example, the expected conversion rate for the “Claim Your Gift” button was 64% versus 95%, and we see more or less those numbers in the first bar chart on the left. The conditional conversion rate (the conversion rate of users conditioned on clicking off the landing page) is also close to what was expected: 80% vs. 100%.

### Notes about Selenium

So I really like Selenium, and after this project I have a little personal library for automated tests that I can apply in the future to any website, not just this little dinky one I ran locally on my machine.

When you’re writing code dealing with Selenium, one thing I’ve realized is that it’s important to write highly fault-tolerant code. Anything that depends on the internet has many ways to go wrong—the wifi in the cafe you’re in might go down, or resources might randomly fail to load. But if you’ve written fault-tolerant code, hitting one of these snags won’t cause your program to stop running.
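In practice this mostly means wrapping every network-touching step in a retry; a minimal sketch of that pattern (the names are mine, not from any library):

```python
import time

def with_retries(action, attempts=3, delay=1.0, exceptions=(Exception,)):
    """Call action(), retrying with linear backoff so a transient failure
    (dropped wifi, a resource that fails to load) doesn't kill the run."""
    last_error = None
    for attempt in range(1, attempts + 1):
        try:
            return action()
        except exceptions as error:
            last_error = error
            if attempt < attempts:
                time.sleep(delay * attempt)
    raise last_error
```

A Selenium page load would then look like `with_retries(lambda: driver.get(url))`.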

Along with fault-tolerant code, it’s a good idea to write good logs. When stuff does go wrong, this helps you figure out what it was. In this particular case, logs also served as a good source of ground truth to compare against the numbers I was seeing in GA.
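One timestamped line per simulated visit is enough to reconcile against GA later; a sketch (the logger name and format are my choices):

```python
import logging
import sys

def make_logger(stream=sys.stdout):
    """Logger that emits one timestamped line per event, usable as ground truth."""
    logger = logging.getLogger("ab_test")
    logger.setLevel(logging.INFO)
    logger.handlers.clear()  # avoid duplicate lines on re-runs
    handler = logging.StreamHandler(stream)
    handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
    logger.addHandler(handler)
    return logger

def log_visit(logger, variant, outcome):
    logger.info("variant=%s outcome=%s", variant, outcome)
```

Counting `outcome=converted` lines per variant in the log then gives a number to check GA’s conversion reports against.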

### The End! (for now…)

I think I’ll be back soon with another post about AdWords and Advanced E-Commerce in GA…

## March 05, 2018

Ph.D. student

#### politics of business

This post is an attempt to articulate something that’s on the tip of my tongue, so bear with me.

Fraser has made the point that the politics of recognition and the politics of distribution are not the same. In her view, the conflict in the U.S. over recognition (i.e., of women, racial minorities, LGBTQ people, etc. on the progressive side, and of the straight white male ‘majority’ on the reactionary side) has overshadowed the politics of distribution, which has been at a steady neoliberal status quo for some time.

First, it’s worth pointing out that in between these two political contests is a politics of representation, which may be more to the point. The claim here is that if a particular group is represented within a powerful organization–say, the government, or within a company with a lot of power such as a major financial institution or tech company–then that organization will use its power in a way that is responsive to the needs of the represented group.

Politics of representation are the link between recognition and distribution: the idea is that if “we” recognize a certain group, then through democratic or social processes members of that group will be lifted into positions of representative power, which then will lead to (re)distribution towards that group in the longer run.

I believe this is the implicit theory of social change at the heart of a lot of democratish movements today. It’s an interesting theory in part because it doesn’t seem to have any room for “good governance”, or broadly beneficial governance, or technocracy. There’s nothing deliberative about this form of democracy; it’s a tribal war-by-other-means. It is also not clear that this theory of social change based on demographic representation is any more effective at changing distributional outcomes than a pure politics of recognition, which we have reason to believe is ineffectual.

Who do we expect to have power over distributional outcomes in our (and probably other) democracies? Realistically, it’s corporations. Businesses comprise most of the economic activity; businesses have the profits needed to reinvest in lobbying power for the sake of economic capture. So maybe if what we’re interested in is politics of distribution, we should stop trying to parse out the politics of recognition, with its deep dark rabbit hole of identity politics, historical injustice, and Jungian archetypal conflicts over the implications of the long arc of sexual maturity. These conversations do not seem to be getting anyone anywhere! It is, perhaps, fake news: not because the contents are fake, but because the idea that these issues are new is fake. They are perhaps just a lot of old issues stirred to conflagration by the feedback loops between social and traditional media.

If we are interested in the politics of distribution, let’s talk about something else, something that we all know must be more relevant, when it comes down to it, than the politics of recognition. I’m talking about the politics of business.

We have a rather complex economy with many competing business interests. Let’s assume that one of the things these businesses compete over is regulatory capture–their ability to influence economic policy in their favor.

When academics talk about neoliberal economic policy, they are often talking about those policies that benefit the financial sector and big businesses. But these big businesses are not always in agreement.

Take, for example, the steel tariff proposed by the Trump administration. There is no blunter example of a policy that benefits some business interests–U.S. steelmakers–and not others–U.S. manufacturers of steel-based products.

It’s important from the perspective of electoral politics to recognize that the U.S. steelmakers are a particular set of people who live in particular voting districts with certain demographics. That’s because, probably, if I am a U.S. steelworker, I will vote in the interest of my industry. Just as if I am a U.S. based urban information worker at an Internet company, I will vote in the interest of my company, which in my case would mean supporting net neutrality. If I worked for AT&T, I would vote against net neutrality, which today means I would vote Republican.

It’s an interesting fact that AT&T employs a lot more people than Google and (I believe this is the case, though I don’t know where to look up the data) that they are much more geographically distributed than Google because, you know, wires and towers and such. Which means that AT&T employees will be drawn from more rural, less diverse areas, giving them an additional allegiance to Republican identity politics.

You must see what I’m getting at. Assume that the main driver of U.S. politics is not popular will (which nobody really believes, right?) and is in fact corporate interests (which basically everybody admits, right?). In that case the politics of recognition will not be determining anything; rather it will be a symptom, an epiphenomenon, of an underlying politics of business. Immigration of high-talent foreigners then becomes a proxy issue for the economic battle between coastal tech companies and, say, old energy companies which have a much less geographically mobile labor base. Nationalism, or multinationalism, becomes a function of trade relations rather than a driving economic force in its own right. (Hence, Russia remains an enemy of the U.S. largely because Putin paid off all its debt to the U.S. and doesn’t owe it any money, unlike many of its other allies around the world.)

I would very much like to devote myself better to the understanding of politics of business because, as I’ve indicated, I think the politics of recognition have become a huge distraction.

## March 02, 2018

Ph.D. student

#### Moral individualism and race (Barabas, Gilman, Deneen)

One of my favorite articles presented at the recent FAT* 2018 conference was Barabas et al. on “Interventions over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment” (link). To me, this was the correct response to recent academic debate about the use of actuarial risk-assessment in determining criminal bail and parole rates. I had a position on this before the conference which I drafted up here; my main frustration with the debate had been that it had gone unquestioned why bail and parole rates are based on actuarial prediction of recidivism in the first place, given that rearrest rates are so contingent on social structural factors such as whether or not police are racist.

Barabas et al. point out that there’s an implicit theory of crime behind the use of actuarial risk assessments. In that theory of crime, there are individual “bad people” and “good people”. “Bad people” are more likely to commit crimes because of their individual nature, and the goal of the criminal policing system is to keep bad people from committing crimes by putting them in prison. This is the sort of theory that, even if it is a little bit true, is also deeply wrong, and so we should probably reassess the whole criminal justice system as a result. Even leaving aside the important issue of whether “recidivism” is interpreted as reoffense or rearrest rate, it is socially quite dangerous to see probability of offense as due to the specific individual moral character of a person. One reason why this is dangerous is that if the conditions for offense are correlated with the conditions for some sort of unjust desperation, then we risk falsely justifying an injustice with the idea that the bad things are only happening to bad people.

I’d like to juxtapose this position with a couple others that may on the surface appear to be in tension with it.

Nils Gilman’s new piece on “The Collapse of Racial Liberalism” is a helpful account of how we got where we are as an American polity. True to the title, Gilman’s point is that there was a centrist consensus on ‘racial liberalism’ that reached its apotheosis in the election of Obama and then collapsed under its own contradictions, getting us where we are today.

“By racial liberalism, I mean the basic consensus that existed across the mainstream of both political parties since the 1970s, to the effect that, first, bigotry of any overt sort would not be tolerated, but second, that what was intolerable was only overt bigotry—in other words, white people’s definition of racism. Institutional or “structural” racism—that is, race-based exclusions that result from deep social habits such as where people live, who they know socially, what private organizations they belong to, and so on—were not to be addressed. The core ethic of the racial liberal consensus was colorblind individualism.”

Bill Clinton was good at toeing the line of racial liberalism, and Obama, as a black meritocratic elected president, was its culmination. But:

“Obama’s election marked at once the high point and the end of a particular historical cycle: a moment when the realization of a particular ideal reveals the limits of that ideal.”

The limit of the ideal is, of course, that all the things not addressed–“race-based exclusions that result from deep social habits such as where people live, who they know socially, what private organizations they belong to, and so on”–matter, and result in, for example, innocent black guys getting shot disproportionately by police even when there is a black meritocratic sitting as president.

An interesting juxtaposition here is that in both cases discussed so far, we have a case of a system that is reaching its obsolescence due to the contradictions of individualism. In the case of actuarial policing (as it is done today; I think a properly sociological version of actuarial policing could be great), there’s the problem of considering criminals as individuals whose crimes are symptoms of their individual moral character. The solution to crime is to ostracize and contain the criminals by, e.g., putting them in prison. In the case of racial liberalism, there’s the problem of considering bigotry a symptom of individual moral character. The solution to the bigotry is to ostracize and contain the bigots by teaching them that it is socially unacceptable to express bigotry and keeping the worst bigots out of respectable organizations.

Could it be that our broken theories of both crime and bigotry both have the same problem, which is the commitment to moral individualism, by which I mean the theory that it’s individual moral character that is the cause of and solution to these problems? If a case of individual crime and individual bigotry is the result of, instead of an individual moral failing, a collective action problem, what then?

I still haven’t looked carefully into Deneen’s argument (see notes here), but I’m intrigued that his point may be that the crisis of liberalism may be, at its root, a crisis of individualism. Indeed, Kantian views of individual autonomy are really nice but they have not stood the test of time; I’d say the combined works of Habermas, Foucault, and Bourdieu have each from very different directions developed Kantian ideas into a more sociological frame. And that’s just on the continental grand theory side of the equation. I have not followed up on what Anglophone liberal theory has been doing, but I suspect that it has been going the same way.

I am wary, as I always am, of giving too much credit to theory. I know, as somebody who has read altogether too much of it, what little use it actually is. However, the notion of political and social consensus is one that tangibly affects my life these days. For this reason, it’s a topic of great personal interest.

One last point, that’s intended as constructive. It’s been argued that the appeal of individualism is due in part to the methodological individualism of rational choice theory and neoclassical economic theory. Because we can’t model economic interactions on anything but an individualistic level, we can’t design mechanisms or institutions that treat individual activity as a function of social form. This is another good reason to take seriously computational modeling of social forms.
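To make the point about modeling social forms concrete, here is a minimal sketch (in Python, with illustrative numbers of my own choosing) of Granovetter’s classic threshold model of collective behavior, in which each person acts once enough others already have:

```python
import numpy as np

def cascade_size(thresholds):
    """Granovetter-style cascade: an agent joins the collective action
    once the number of current participants meets their personal threshold."""
    t = np.sort(np.asarray(thresholds))
    joined = 0
    while joined < len(t) and t[joined] <= joined:
        joined += 1
    return joined

# 100 agents with thresholds 0, 1, 2, ..., 99: a full cascade.
uniform = np.arange(100)
print(cascade_size(uniform))    # 100: everyone participates

# Change a single agent's threshold from 1 to 2: the cascade dies.
perturbed = uniform.copy()
perturbed[1] = 2
print(cascade_size(perturbed))  # 1: only the instigator acts
```

The two populations are nearly indistinguishable agent by agent, yet their collective outcomes differ completely: the outcome is a property of the social configuration, not of any individual’s moral character, which is exactly what a purely individualistic model cannot express.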

References

Barabas, Chelsea, et al. “Interventions over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment.” arXiv preprint arXiv:1712.08238 (2017).

Deneen, Patrick J. Why Liberalism Failed. Yale University Press, 2018.

Gilman, Nils. “The Collapse of Racial Liberalism.” The American Interest (2018).

## February 28, 2018

Ph.D. student

#### interesting article about business in China

I don’t know much about China, really, so I’m always fascinated to learn more.

This FT article, “Anbang arrests demonstrates hostility to business”, by Jamil Anderlini, provides some wonderful historical context to a story about the arrest of an insurance oligarch.

In ancient times, merchants were at the very bottom of the four official social classes, below warrior-scholars, farmers and artisans. Although some became very rich they were considered parasites in Chinese society.

Ever since the Han emperors established the state salt monopoly in the second century BCE (remnants of which remain to this day), large-scale business enterprises have been controlled by the state or completely reliant on the favour of the emperor and the bureaucrat class.

In the 20th century, the Communist emperor Mao Zedong effectively managed to stamp out all private enterprise for a while.

Until the party finally allowed “capitalists” to join its ranks in 2002, many of the business activities carried out by the resurgent merchant class were technically illegal.

China’s rich lists are populated by entrepreneurs operating in just a handful of industries — particularly real estate and the internet.

Tycoons like Mr Wu who emerge in state-dominated sectors are still exceedingly rare. They are almost always closely linked to one of the old revolutionary families exercising enormous power from the shadows.

First, in Western scholarship we rarely give China credit for its history of bureaucracy in the absence of capitalism. In the well-known Weberian account, bureaucracy is an institutional invention that provides regular rule of law so that capitalism can thrive. China’s history, by contrast, is statist “from ancient times,” yet with effective bureaucracy from the beginning. A managerialist history, perhaps.

Which makes the second point so unusual: why, given this long history of bureaucratic rule, are Internet companies operating in a comparatively unregulated way? This seems like a massive concession of power, not unlike how (arguably) the government of the United States conceded a lot of power to Silicon Valley under the Obama administration.

The article dramatically foreshadows a potential power struggle between Xi Jinping’s consolidated state and the tech giant oligarchs:

Now that Chinese President Xi Jinping has abolished his own term limits, setting the stage for him to rule for life if he wants to, the system of state patronage and the punishment of independent oligarchs is likely to expand. Any company or billionaire who offends the emperor or his minions will be swiftly dealt with in the same way as Mr Wu.

There is one group of Chinese companies with charismatic — some would say arrogant — founders that enjoy immense economic power in China today. They would seem to be prime candidates if the assault on private enterprise is stepped up.

Internet giants Alibaba, Tencent and Baidu are not only hugely profitable, they control the data that is the lifeblood of the modern economy. That is why Alibaba founder Jack Ma has repeatedly said, including to the FT, that he would gladly hand his company over to the state if Beijing ever asked him to. Investors in BABA can only hope it never comes to that.

That is quite the expression of feudal fealty from Jack Ma. Truly, a totally different business culture from that of the United States.

## February 27, 2018

Ph.D. student

#### Notes on Deneen, “Why Liberalism Failed”, Foreword

I’ve begun reading the recently published book, Why Liberalism Failed (2018), by Patrick Deneen. It appears to be making some waves in the political theory commentary. The author claims that it was 10 years in the making but was finished three weeks before the 2016 presidential election, which suggests that the argument within it is prescient.

I’m not far in yet.

There is an intriguing foreword from James Davison Hunter and John M. Owen IV, the editors. Their framing of the book is surprisingly continental:

• They declare that liberalism has arrived at its “legitimacy crisis”, a Habermasian term.
• They claim that the core contention of the book is a critique of the contradictions within Immanuel Kant’s view of individual autonomy.
• They compare Deneen with other “radical” critics of liberalism, of which they name: Marx, the Frankfurt School, Foucault, Nietzsche, Schmitt, and the Catholic Church.

In search of a litmus-test-like clue as to where in the political spectrum the book falls, I’ve found this passage in the Foreword:

Deneen’s book is disruptive not only for the way it links social maladies to liberalism’s first principles, but also because it is difficult to categorize along our conventional left-right spectrum. Much of what he writes will cheer social democrats and anger free-market advocates; much else will hearten traditionalists and alienate social progressives.

Well, well, well. If we are to fit Deneen’s book into the conceptual 2-by-2 provided in Fraser’s recent work, it appears that Deneen’s political theory is a form of reactionary populism, rejecting progressive neoliberalism. In other words, the Foreword evinces that Deneen’s book is a high-brow political theory contribution that weighs in favor of the kind of politics that has heretofore been articulated only by intellectual pariahs.

## February 26, 2018

MIMS 2012

#### On Mastery

I completely agree with this view on mastery from American fashion designer, writer, television personality, entrepreneur, and occasional cabaret star Isaac Mizrahi:

I’m a person who’s interested in doing a bunch of things. It’s just what I like. I like it better than doing one thing over and over. This idea of mastery—of being the very best at just one thing—is not in my future. I don’t really care that much. I care about doing things that are interesting to me and that I don’t lose interest in.

Mastery – “being the very best at just one thing” – doesn’t hold much appeal for me. I’m a very curious person. I like jumping between various creative endeavors that “are interesting to me and that I don’t lose interest in.” Guitar, web design, coding, writing, hand lettering – these are just some of the creative paths I’ve gone down so far, and I know that list will continue to grow.

I’ve found that my understanding of one discipline fosters a deeper understanding of other disciplines. New skills don’t take away from each other – they only add.

So no, mastery isn’t for me. The more creative paths I go down, the better. Keep ‘em coming.

## February 20, 2018

MIMS 2012

#### Stay Focused on the User by Switching Between Maker Mode and Listener Mode

When writing music, ambient music composer Brian Eno makes music that’s pleasurable to listen to by switching between “maker” mode and “listener” mode. He says:

I just start something simple [in the studio]—like a couple of tones that overlay each other—and then I come back in here and do emails or write or whatever I have to do. So as I’m listening, I’ll think, It would be nice if I had more harmonics in there. So I take a few minutes to go and fix that up, and I leave it playing. Sometimes that’s all that happens, and I do my emails and then go home. But other times, it starts to sound like a piece of music. So then I start working on it.

I always try to keep this balance with ambient pieces between making them and listening to them. If you’re only in maker mode all the time, you put too much in. […] As a maker, you tend to do too much, because you’re there with all the tools and you keep putting things in. As a listener, you’re happy with quite a lot less.

In other words, Eno makes great music by experiencing it the way his listeners do: by listening to it.

This is also a great lesson for product development teams: to make a great product, regularly use your product.

By switching between “maker” and “listener” modes, you put yourself in your users’ shoes and see your work through their eyes, which helps prevent you from “put[ting] too much in.”

This isn’t a replacement for user testing, of course. We are not our users. But in my experience, it’s all too common for product development teams to rarely, if ever, use what they’re building. No shade – I’ve been there. We get caught on the treadmill of building new features, always moving on to the next without stopping to catch our breath and use what we’ve built. This is how products devolve into an incomprehensible pile of features.

Eno’s process is an important reminder to keep your focus on the user by regularly switching between “maker” mode and “listener” mode.

## February 13, 2018

Ph.D. student

#### that time they buried Talcott Parsons

Continuing with what seems like a never-ending side project to get a handle on computational social science methods, I’m doing a literature review on ‘big data’ sociological methods papers. Recent reading has led to two striking revelations.

The first is that Tufekci’s 2014 critique of Big Data methodologies is the best thing on the subject I’ve ever read. What it does is very clearly and precisely lay out the methodological pitfalls of sourcing the data from social media platforms: use of a platform as a model organism; selecting on a dependent variable; not taking into account exogenous, ecological, or field factors; and so on. I suspect this is old news to people who have more rigorously surveyed the literature on this in the past. But I’ve been exposed to and distracted by literature that seems aimed mainly to discredit social scientists who want to work with this data, rather than helpfully engaging them on the promises and limitations of their methods.

The second striking revelation is that for the second time in my literature survey, I’ve found a reference to that time when the field of cultural sociology decided they’d had enough of Talcott Parsons. From (Bail, 2014):

The capacity to capture all – or nearly all – relevant text on a given topic opens exciting new lines of meso- and macro-level inquiry into what environments (Bail forthcoming). Ecological or functionalist interpretations of culture have been unpopular with cultural sociologists for some time – most likely because the subfield defined itself as an alternative to the general theory proposed by Talcott Parsons (Alexander 2006). Yet many cultural sociologists also draw inspiration from Mary Douglas (e.g., Alexander 2006; Lamont 1992; Zelizer 1985), who – like Swidler – insists upon the need for our subfield to engage broader levels of analysis. “For sociology to accept that no functionalist arguments work,” writes Douglas (1986, p. 43), “is like cutting off one’s nose to spite one’s face.” To be fair, cultural sociologists have recently made several programmatic statements about the need to engage functional or ecological theories of culture. Abbott (1995), for example, explains the formation of boundaries between professional fields as the result of an evolutionary process. Similarly, Lieberson (2000), presents an ecological model of fashion trends in child-naming practices. In a review essay, Kaufman (2004) describes such ecological approaches to cultural sociology as one of the three most promising directions for the future of the subfield.

I’m not sure what’s going on with all these references to Talcott Parsons. I gather that at one time he was a giant in sociology, but that then a generation of sociologists tried to bury him. Then the next generation of sociologists reinvented structural functionalism with new language–“ecological approaches”, “field theory”?

One wonders what Talcott Parsons did or didn’t do to inspire such a rebellion.

References

Bail, Christopher A. “The cultural environment: measuring culture with big data.” Theory and Society 43.3-4 (2014): 465-482.

Tufekci, Zeynep. “Big Questions for Social Media Big Data: Representativeness, Validity and Other Methodological Pitfalls.” ICWSM 14 (2014): 505-514.

## February 12, 2018

Ph.D. student

#### What happens if we lose the prior for sparse representations?

Noting this nice paper by Giannone et al., “Economic predictions with big data: The illusion of sparsity.” It concludes:

Summing up, strong prior beliefs favouring low-dimensional models appear to be necessary to support sparse representations. In most cases, the idea that the data are informative enough to identify sparse predictive models might be an illusion.

This is refreshing honesty.

In my experience, most disciplinary social sciences have a strong prior bias towards pithy explanatory theses. In a normal social science paper, what you want is a single research question, a single hypothesis. This thesis expresses the narrative of the paper. It’s what makes the paper compelling.

In mathematical model fitting, the term for such a simple hypothesis is a sparse predictive model. These models have relatively few independent variables predicting the dependent variable. In machine learning, this sparsity is often accomplished by a regularization step. While generally well-motivated, regularization for sparsity can be done for reasons that are more aesthetic or that reflect a stronger prior than is warranted.
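As an illustration of how a regularization step manufactures sparsity (a hand-rolled sketch with simulated data, not any particular paper’s method), here is L1-penalized least squares fit by iterative soft-thresholding, on data whose true generating process really is sparse:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]            # the true process is sparse
y = X @ beta_true + rng.normal(scale=0.5, size=n)

def lasso_ista(X, y, lam, n_iter=1000):
    """Minimize 0.5*||y - X b||^2 + lam*||b||_1 by iterative
    soft-thresholding (ISTA): a gradient step, then shrink toward zero."""
    beta = np.zeros(X.shape[1])
    step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        z = beta - step * X.T @ (X @ beta - y)
        beta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return beta

sparse_fit = lasso_ista(X, y, lam=40.0)
dense_fit = np.linalg.lstsq(X, y, rcond=None)[0]  # ordinary least squares, no penalty

print(np.count_nonzero(sparse_fit), "of", p, "coefficients survive the L1 penalty")
print(np.count_nonzero(np.abs(dense_fit) > 1e-6), "of", p, "are nonzero under OLS")
```

The penalized fit recovers the few real effects and zeroes out the rest, while the unpenalized fit spreads small nonzero weight across every variable. The point in the text stands: whether sparsity reflects the world or just the penalty depends on the prior that chose `lam`.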

A consequence of this preference for sparsity, in my opinion, is the prevalence of literature on power-law vs. log-normal explanations of heavy-tailed distributions. (See this note on disorganized heavy tail distributions.) A dense model in a log-linear regression will predict a heavy-tailed dependent variable without great error. But it will be unsatisfying from the perspective of scientific explanation.
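The claim about dense log-linear models is easy to check by simulation (the parameters here are arbitrary, chosen by me for illustration): many predictors, each with a small effect and no sparsity anywhere, still generate a heavy-tailed (log-normal) outcome:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100_000, 40
X = rng.normal(size=(n, p))
beta = rng.normal(scale=0.2, size=p)  # dense: every predictor matters a little
y = np.exp(X @ beta)                  # log-linear model: log(y) is Gaussian

# The outcome is heavy-tailed even though no single effect is large:
print("mean/median:", np.mean(y) / np.median(y))  # well above 1 (symmetric data gives ~1)
print("max/median: ", np.max(y) / np.median(y))   # the top observation dwarfs the typical one
```

No single coefficient tells a story here, which is exactly why such a model fits well yet feels explanatorily unsatisfying.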

What seems to be an open question in the social sciences today is whether the culture of social science will change as a result of the robust statistical analysis of new data sets. As I’ve argued elsewhere (Benthall, 2016), if the culture does change, it will mean that narrative explanation will be less highly valued.

References

Benthall, Sebastian. “Philosophy of computational social science.” Cosmos and History: The Journal of Natural and Social Philosophy 12.2 (2016): 13-30.

Giannone, Domenico, Michele Lenza, and Giorgio E. Primiceri. “Economic predictions with big data: The illusion of sparsity.” (2017).

## February 10, 2018

Ph.D. student

#### The therapeutic ethos in progressive neoliberalism (Fraser and Furedi)

I’ve read two pieces recently that I found helpful in understanding today’s politics, especially today’s identity politics, in a larger context.

The first is Nancy Fraser’s “From Progressive Neoliberalism to Trump–and Beyond” (link). It portrays the present (American but also global) political moment as a “crisis of hegemony”, using Gramscian terms, for which the presidency of Donald Trump is a poster child. Its main contribution is to point out that the hegemony that’s been in crisis is a hegemony of progressive neoliberalism, which sounds like an oxymoron but, Fraser argues, isn’t.

Rather, Fraser explains a two-dimensional political spectrum: there are politics of distribution, and there are politics of recognition.

To these ideas of Gramsci, we must add one more. Every hegemonic bloc embodies a set of assumptions about what is just and right and what is not. Since at least the mid-twentieth century in the United States and Europe, capitalist hegemony has been forged by combining two different aspects of right and justice—one focused on distribution, the other on recognition. The distributive aspect conveys a view about how society should allocate divisible goods, especially income. This aspect speaks to the economic structure of society and, however obliquely, to its class divisions. The recognition aspect expresses a sense of how society should apportion respect and esteem, the moral marks of membership and belonging. Focused on the status order of society, this aspect refers to its status hierarchies.

Fraser’s argument is that neoliberalism is a politics of distribution–it’s about using the market to distribute goods. I’m just going to assume that anybody reading this has a working knowledge of what neoliberalism means; if you don’t, I recommend reading Fraser’s article about it. Progressivism is a politics of recognition that was advanced by the New Democrats. Part of its political potency has been its consistency with neoliberalism:

At the core of this ethos were ideals of “diversity,” women’s “empowerment,” and LGBTQ rights; post-racialism, multiculturalism, and environmentalism. These ideals were interpreted in a specific, limited way that was fully compatible with the Goldman Sachsification of the U.S. economy…. The progressive-neoliberal program for a just status order did not aim to abolish social hierarchy but to “diversify” it, “empowering” “talented” women, people of color, and sexual minorities to rise to the top. And that ideal was inherently class specific: geared to ensuring that “deserving” individuals from “underrepresented groups” could attain positions and pay on a par with the straight white men of their own class.

A less academic, more Wall Street Journal-reading member of the commentariat might be more comfortable with the terms “fiscal conservatism” and “social liberalism”. And indeed, Fraser’s argument seems mainly to be that the hegemony of the Obama era was fiscally conservative but socially liberal. In a sense, it was the true libertarians that were winning, which is an interesting take I hadn’t heard before.

The problem, from Fraser’s perspective, is that neoliberalism concentrates wealth and carries the seeds of its own revolution, allowing Trump to run on a combination of a reactionary politics of recognition (social conservatism) and a populist politics of distribution (economic liberalism: big spending and protectionism). He won, and then sold out to neoliberalism, giving us the currently prevailing combination of neoliberalism and reactionary social policy. Which, by the way, we would be calling neoconservatism if it were 15 years ago. Maybe it’s time to resuscitate this term.

Fraser thinks the world would be a better place if progressive populists could establish themselves as an effective counterhegemonic bloc.

The second piece I’ve read on this recently is Frank Furedi’s “The hidden history of identity politics” (link). Pairing Fraser with Furedi is perhaps unlikely because, to put it bluntly, Fraser is a feminist and Furedi, as far as I can tell from this one piece, isn’t. However, both are serious social historians and there’s a lot of overlap in the stories they tell. That is in itself interesting from the scholarly perspective of one trying to triangulate an accurate account of political history.

Furedi’s piece is about “identity politics” broadly, including both its right-wing and left-wing incarnations. So, we’re talking about what Fraser calls the politics of recognition here. On a first pass, Furedi’s point is that Enlightenment universalist values have been challenged by both right- and left-wing identity politics since the late-18th-century Romantic nationalist movements in Europe, which led to world wars and the Holocaust. Maybe, Furedi’s piece suggests, abandoning Enlightenment universalist values was a bad idea.

Although expressed through a radical rhetoric of liberation and empowerment, the shift towards identity politics was conservative in impulse. It was a sensibility that celebrated the particular and which regarded the aspiration for universal values with suspicion. Hence the politics of identity focused on the consciousness of the self and on how the self was perceived. Identity politics was, and continues to be, the politics of ‘it’s all about me’.

Strikingly, Furedi’s argument is that the left took the “cultural turn” into recognition politics essentially because of its inability to maintain a left-wing politics of redistribution, and that this happened in the 70’s. But this in turn undermined the cause of the economic left. Why? Because economic populism requires social solidarity, while identity politics is necessarily a politics of difference. Solidarity within an identity group can cause gains for that identity group, but at the expense of political gains that could be won with an even more unified popular political force.

The emergence of different identity-based groups during the 1970s mirrored the lowering of expectations on the part of the left. This new sensibility was most strikingly expressed by the so-called ‘cultural turn’ of the left. The focus on the politics of culture, on image and representation, distracted the left from its traditional interest in social solidarity. And the most significant feature of the cultural turn was its sacralisation of identity. The ideals of difference and diversity had displaced those of human solidarity.

So far, Furedi is in agreement with Fraser that hegemonic neoliberalism has been the status quo since the 70’s, and that the main political battles have been over identity recognition. Furedi’s point, which I find interesting, is that these battles over identity recognition undermine the cause of economic populism. In short, neoliberals and neocons can use identity to divide and conquer their shared political opponents and keep things as neo- as possible.

This is all rather old news, though a nice schematic representation of it.

Where Furedi’s piece gets interesting is where it draws out the next movements in identity politics, which he describes as the shift from it being about political and economic conditions into a politics of first victimhood and then a specific therapeutic ethos.

The victimhood move grounded the politics of recognition in the authoritative status of the victim. While originally used for progressive purposes, this move was adopted outside of the progressive movement as early as the 1980s.

A pervasive sense of victimisation was probably the most distinct cultural legacy of this era. The authority of the victim was ascendant. Sections of both the left and the right endorsed the legitimacy of the victim’s authoritative status. This meant that victimhood became an important cultural resource for identity construction. At times it seemed that everyone wanted to embrace the victim label. Competitive victimhood quickly led to attempts to create a hierarchy of victims. According to a study by an American sociologist, the different movements joined in an informal way to ‘generate a common mood of victimisation, moral indignation, and a self-righteous hostility against the common enemy – the white male’ (5). Not that the white male was excluded from the ambit of victimhood for long. In the 1980s, a new men’s movement emerged insisting that men, too, were an unrecognised and marginalised group of victims.

This is interesting in part because there’s a tendency today to see the “alt-right” of reactionary recognition politics as a very recent phenomenon. According to Furedi, it isn’t; it’s part of the history of identity politics in general. We just thought it was dead because, as Fraser argues, progressive neoliberalism had attained hegemony.

Buried deep in the piece is arguably Furedi’s most controversial and most pointed argument, which concerns the “therapeutic ethos” of identity politics since the 1970s, an ethos that resonates quite deeply today. The idea here is that principles from psychotherapy have become part of the repertoire of left-wing activism. A prescription against “blaming the victim” transformed into a prescription toward “believing the victim”, which in turn creates a culture where only those with lived experience of a human condition may speak with authority on it. This authority is ambiguous, because it is at once the moral authority of the victim and the authority one must give a therapeutic patient in describing their own experiences for the sake of their mental health.

The obligation to believe and not criticise individuals claiming victim identity is justified on therapeutic grounds. Criticism is said to constitute a form of psychological re-victimisation and therefore causes psychic wounding and mental harm. This therapeutically informed argument against the exercise of critical judgement and free speech regards criticism as an attack not just on views and opinions, but also on the person holding them. The result is censorious and illiberal. That is why in society, and especially on university campuses, it is often impossible to debate certain issues.

Furedi is concerned with how the therapeutic ethos in identity politics shuts down liberal discourse, which further erodes social solidarity which would advance political populism. In therapy, your own individual self-satisfaction and validation is the most important thing. In the politics of solidarity, this is absolutely not the case. This is a subtle critique of Fraser’s argument, which argues that progressive populism is a potentially viable counterhegemonic bloc. We could imagine a synthetic point of view, which is that progressive populism is viable but only if progressives drop the therapeutic ethos. Or, to put it another way, if “[f]rom their standpoint, any criticism of the causes promoted by identitarians is a cultural crime”, then that criminalizes the kind of discourse that’s necessary for political solidarity. That serves to advantage the neoliberal or neoconservative agenda.

This is, Furedi points out, easier to see in light of history:

Outwardly, the latest version of identity politics – which is distinguished by a synthesis of victim consciousness and concern with therapeutic validation – appears to have little in common with its 19th-century predecessor. However, in one important respect it represents a continuation of the particularist outlook and epistemology of 19th-century identitarians. Both versions insist that only those who lived in and experienced the particular culture that underpins their identity can understand their reality. In this sense, identity provides a patent on who can have a say or a voice about matters pertaining to a particular culture.

While I think they do a lot to frame the present political conditions, I don’t agree with everything in either of these articles. There are a few points of tension which I wish I knew more about.

The first is the connection made in some media today between the therapeutic needs of society’s victims and economic distributional justice. Perhaps it’s the nexus of these two political flows that makes the topic of workplace harassment and culture in its most symbolic forms such a hot topic today. It is, in a sense, the quintessential progressive neoliberal problem, in that it aligns the politics of distribution with the politics of recognition while employing the therapeutic ethos. The argument goes: since market logic is fair (the neoliberal position), if there is unfair distribution it must be because the politics of recognition are unfair (progressivism). That’s because if there is inadequate recognition, then the societal victims will feel invalidated, preventing them from asserting themselves effectively in the workplace (therapeutic ethos). To put it another way, distributional inequality is being represented as a consequence of a market externality, which is the psychological difficulty imposed by social and economic inequality. A progressive politics of recognition is a therapeutic intervention designed to alleviate this psychological difficulty, which corrects the meritocratic market logic.

One valid reaction to this is: so what? Furedi and Fraser are both essentially card-carrying socialists. If you’re a card-carrying socialist (maybe because you have a universalist sense of distributional justice), then you might see the emphasis on workplace harassment as a distraction from a broader socialist agenda. But most people aren’t card-carrying socialist academics; most people go to work and would prefer not to be harassed.

The other thing I would like to know more about is to what extent the demands of the therapeutic ethos are a political rhetorical convenience and to what extent they are a matter of ground truth. The sweeping therapeutic progressive narrative outlined by Furedi, wherein vast swathes of society (i.e., all women, all people of color, maybe all conservatives in liberal-dominant institutions, etc.) are so structurally victimized that therapy-grade levels of validation are necessary for them to function unharmed in universities and workplaces, is truly a tough pill to swallow. On the other hand, a theory of justice that discounts the genuine therapeutic needs of half the population can hardly be described as a “universalist” one.

Is there a resolution to this epistemic and political crisis? If I had to drop everything and look for one, it would be in the clinical psychological literature. What I want to know is how grounded the therapeutic ethos is in (a) scientific clinical psychology, and (b) the epidemiology of mental illness. Is it the case that structural inequality is so traumatizing (either directly or indirectly) that the fragmentation of epistemic culture is necessary as a salve for it? Or is this a political fiction? I don’t know the answer.

## February 06, 2018

Ph.D. student

#### Values, norms, and beliefs: units of analysis in research on culture

Much of the contemporary critical discussion about technology in society and ethical design hinges on the term “values”. Privacy is one such value, according to Mulligan, Koopman, and Doty (2016), drawing on Westin and Post. Contextual Integrity (Nissenbaum, 2009) argues that privacy is a function of norms, and that norms get their legitimacy from, among other sources, societal values. The Data and Society Research Institute lists “values” as one of the cross-cutting themes of its research. Richmond Wong (2017) has been working on eliciting values reflections as a tool in privacy by design. And so on.

As much as ‘values’ get emphasis in this literary corner, I have been unsatisfied with how these literatures represent values as either sociological or philosophical phenomena. How are values distributed in society? Are they stable under different methods of measurement? Do they really have ethical entailments, or are they really just a kind of emotive expression?

For only distantly related reasons, I’ve been looking into the literature on quantitative measurement of culture. I’m doing a bit of a literature review and need your recommendations! But an early hit is Marsden and Swingle’s “Conceptualizing and measuring culture in surveys: Values, strategies, and symbols” (1994), which is a straightforward social science methods piece apparently written before either rejections of positivism or Internet-based research became so destructively fashionable.

A useful passage comes early:

To frame our discussion of the content of the culture module, we have drawn on distinctions made in Peterson’s (1979: 137-138) review of cultural research in sociology. Peterson observes that sociological work published in the late 1940s and 1950s treated values – conceptualizations of desirable end-states – and the behavioral norms they specify as the principal explanatory elements of culture. Talcott Parsons (1951) figured prominently in this school of thought, and more recent survey studies of culture and cultural change in both the United States (Rokeach, 1973) and Europe (Inglehart, 1977) continue the Parsonsian tradition of examining values as a core concept.

This was a surprise! Talcott Parsons is not a name you hear every day in the world of sociology of technology. That’s odd, because as far as I can tell he’s one of these robust and straightforwardly scientific sociologists. The main complaint against him, if I’ve heard any, is that he’s dry. I’ve never heard, despite his being tied to structural functionalism, that his ideas have been substantively empirically refuted (unlike Durkheim, say).

So the mystery is…whatever happened to the legacy of Talcott Parsons? And how is it represented, if at all, in contemporary sociological research?

One reason why we don’t hear much about Parsons may be because the sociological community moved from measuring “values” to measuring “beliefs”. Marsden and Swingle go on:

Cultural sociologists writing since the late 1970s however, have accented other elements of culture. These include, especially, beliefs and expressive symbols. Peterson’s (1979: 138) usage of “beliefs” refers to “existential statements about how the world operates that often serve to justify value and norms”. As such, they are less to be understood as desirable end-states in and of themselves, but instead as habits or styles of thought that people draw upon, especially in unstructured situations (Swidler, 1986).

Intuitively, this makes sense. When we look at the contemporary seemingly mortal combat of partisan rhetoric and tribalist propaganda, a lot of what we encounter are beliefs and differences in beliefs. As suggested in this text, beliefs justify values and norms, meaning that even values (which you might have thought are the source of all justification) get their meaning from a kind of world-view, rather than being held in a simple way.

That makes a lot of sense. There’s often a lot more commonality in values than in ways those values should be interpreted or applied. Everybody cares about fairness, for example. What people disagree about, often vehemently, is what is fair, and that’s because (I’ll argue here) people have widely varying beliefs about the world and what’s important.

To put it another way, the Humean model where we have beliefs and values separately and then combine the two in an instrumental calculus is wrong, and we’ve known it’s wrong since the 70’s. Instead, we have complexes of normatively thick beliefs that reinforce each other into a worldview. When we’re asked about our values, we are abstracting in a derivative way from this complex of frames, rather than getting at a more core feature of personality or culture.

A great book on this topic is Hilary Putnam’s The collapse of the fact/value dichotomy (2002), just for example. It would be nice if more of this metaethical theory and sociology of values surfaced in the values in design literature, despite its being distinctly off-trend.

References

Marsden, Peter V., and Joseph F. Swingle. “Conceptualizing and measuring culture in surveys: Values, strategies, and symbols.” Poetics 22.4 (1994): 269-289.

Mulligan, Deirdre K., Colin Koopman, and Nick Doty. “Privacy is an essentially contested concept: a multi-dimensional analytic for mapping privacy.” Phil. Trans. R. Soc. A 374.2083 (2016): 20160118.

Nissenbaum, Helen. Privacy in context: Technology, policy, and the integrity of social life. Stanford University Press, 2009.

Putnam, Hilary. The collapse of the fact/value dichotomy and other essays. Harvard University Press, 2002.

Wong, Richmond Y., et al. “Eliciting Values Reflections by Engaging Privacy Futures Using Design Workbooks.” (2017).

Ph.D. student

## 4S 2018 Open Panel 101: Critical Data Studies: Human Contexts and Ethics

We’re pleased to be organizing one of the open panels at the 2018 Meeting of the Society for the Social Studies of Science (4S). Please submit an abstract!

### Call for abstracts

In this continuation of the previous Critical Data Studies / Studying Data Critically tracks at 4S (see also Dalton and Thatcher 2014; Iliadis and Russo 2016), we invite papers that address the organizational, social, cultural, ethical, and otherwise human impacts of data science applications in areas like science, education, consumer products, labor and workforce management, bureaucracies and administration, media platforms, or families. Ethnographies, case studies, and theoretical works that take a situated approach to data work, practices, politics, and/or infrastructures in specific contexts are all welcome.

Datafication and autonomous computational systems and practices are producing significant transformations in our analytical and deontological framework, sometimes with objectionable consequences (O’Neil 2016; Barocas, Bradley, Honavar, and Provost 2017). Whether we’re looking at the ways in which new artefacts are constructed or at their social consequences, questions of value and valuation or objectivity and operationalization are indissociable from the processes of innovation and the principles of fairness, reliability, usability, privacy, social justice, and harm avoidance (Campolo, Sanfilippo, Whittaker, and Crawford, 2017).

By reflecting on situated unintended and objectionable consequences, we will gather a collection of works that illuminate one or several aspects of the unfolding of controversies and ethical challenges posed by these new systems and practices. We’re specifically interested in pieces that provide innovative theoretical insights about ethics and controversies, fieldwork, and reflexivity about the researcher’s positionality and her own ethical practices. We also encourage submissions from practitioners and educators who have worked to infuse ethical questions and concerns into a workflow, pedagogical strategy, collaboration, or intervention.

## January 24, 2018

MIMS 2014

#### A Possible Explanation why America does Nothing about Gun Control

Ever since the Las Vegas mass shooting last October, I’ve wanted to blog about gun control. But I also wanted to wait—to see whether that mass shooting, though the deadliest to date in U.S. history, would quickly slip into the dull recesses of the American public subconsciousness just like all the rest. It did, and once again we find ourselves in the same sorry cycle of inaction that by this point is painfully familiar to everyone.

I also recently came across a 2016 study, by Kalesan, Weinberg, and Galea, which found that on average, Americans are 99% likely to know someone either killed or injured by gun violence over the course of their lifetime. That made me wonder: how can it possibly be that Americans remain so paralyzed on this issue if it affects pretty much everyone?

It could be that the ubiquity of gun violence is the very thing that causes the paralysis. That is, gun violence affects almost everyone just as Kalesan et al argue, but the reactions Americans have to the experience are diametrically opposed to one another. These reactions result in hardened views that inform people’s voting choices, and since these choices more or less divide the country in half across partisan lines, the result is an equilibrium where nothing can ever get done on gun control. So on this reading, it’s not so much a paralysis of inaction as a tense political stalemate.

But it could also be something else. Kalesan et al calculate the likelihood of knowing someone killed or injured by general gun violence over the course of a lifetime, but they don’t focus on mass shootings in particular. Their methodology is based on basic principles of probability and some social network theory that posits people have an effective social network numbering a little fewer than 300 people. If you look at the Kalesan et al paper, it becomes clear that their methodology can also be used to calculate the likelihood of knowing someone killed or injured in a mass shooting. It’s just a matter of substituting the rate of mass shootings for the rate of general gun violence in their probability calculation.

It turns out that the probability of knowing someone killed/injured in a mass shooting is much, much lower than for gun violence more generally. Even with a relatively generous definition of what counts as a mass shooting (four or more people injured/killed not including the shooter, according to the Gun Violence Archive), this probability is about 10%. When you only include incidents that have received major national news media attention—based on a list compiled by Mother Jones—that probability drops to about 0.36%.
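The underlying model is easy to sketch: if each of the n people in your effective social network independently has lifetime probability q of being killed or injured, the chance of knowing at least one victim is 1 − (1 − q)^n. Here is a minimal sketch of that calculation; the network size and the per-person rates below are my own illustrative assumptions, reverse-engineered to land near the magnitudes quoted above, not figures copied from the paper:

```python
# Sketch of a Kalesan et al.-style calculation. The probability of knowing
# at least one victim among n independent network members, each with
# per-person lifetime victimization probability q, is 1 - (1 - q)**n.
# NETWORK_SIZE and the q values are illustrative assumptions.

NETWORK_SIZE = 291  # "a little fewer than 300 people" (assumed exact value)

def p_know_victim(q, n=NETWORK_SIZE):
    """Probability that at least one of n network members is a victim."""
    return 1 - (1 - q) ** n

# Assumed per-person lifetime victimization probabilities:
scenarios = {
    "general gun violence": 0.022,
    "mass shootings (broad definition)": 0.00036,
    "mass shootings (major news media)": 0.0000124,
}

for label, q in scenarios.items():
    print(f"{label}: {p_know_victim(q):.2%}")
```

With these assumed rates the three scenarios come out near 99.8%, 10%, and 0.36% respectively; the qualitative point, that the “knowing a victim” probability collapses as the underlying rate falls, holds under any reasonable parameter choices.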

So, it’s possible the reason Americans continue to drag their feet on gun control is that the problem just doesn’t personally affect enough people. Curiously, the even lower likelihood of knowing someone killed or injured in a terrorist attack doesn’t seem to hinder politicians from working aggressively to prevent further terrorist attacks. Still, if more people were personally affected by mass shootings, more might change their minds on gun control like Caleb Keeter, the Josh Abbott Band guitarist who survived the Las Vegas shooting.

## January 21, 2018

Ph.D. student

#### It’s just like what happened when they invented calculus…

I’ve picked up this delightful book again: David Foster Wallace’s Everything and More: A Compact History of Infinity (2003). It is the David Foster Wallace (the brilliant and sadly dead writer and novelist you’ve heard of) writing a history of mathematics, starting with the Ancient Greeks and building up to the discovery of infinity by Georg Cantor.

It’s a brilliantly written book written to educate its reader without any doctrinal baggage. Wallace doesn’t care if he’s a mathematician or a historian; he’s just a great writer. And what comes through in the book is truly a history of the idea of infinity, with all the ways that it was a reflection of the intellectual climate and preconceptions of the mathematicians working on it. The book is full of mathematical proofs that are blended seamlessly into the casual prose. The whole idea is to build up the excitement and wonder of mathematical discovery, just how hard it was to come to appreciate infinity in the way we understand it mathematically today. A lot of this development had to do with the way mathematicians and scientists thought about their relationship to abstraction.

It’s a wonderful book that, refreshingly, isn’t obsessed with how everything has been digitized. Rather (just as one gem), it offers a historical perspective on what was perhaps even a more profound change: that time in the 1700’s when suddenly everything started to be looked at as an expression of mathematical calculus.

To quote the relevant passage:

As has been at least implied and will now be exposited on, the math-historical consensus is that the late 1600s mark the start of a modern Golden Age in which there are far more significant mathematical advances than anytime else in world history. Now things start moving really fast, and we can do little more than try to build a sort of flagstone path from early work on functions to Cantor’s infinicopia.

Two large-scale changes in the world of math to note very quickly. The first involves abstraction. Pretty much all math from the Greeks to Galileo is empirically based: math concepts are straightforward abstractions from real-world experience. This is one reason why geometry (along with Aristotle) dominated mathematical reasoning for so long. The modern transition from geometric to algebraic reasoning was itself a symptom of a larger shift. By 1600, entities like zero, negative integers, and irrationals are used routinely. Now start adding in the subsequent decades’ introductions of complex numbers, Napierian logarithms, higher-degree polynomials and literal coefficients in algebra–plus of course eventually the 1st and 2nd derivative and the integral–and it’s clear that as of some pre-Enlightenment date math has gotten so remote from any sort of real-world observation that we and Saussure can say verily it is now, as a system of symbols, “independent of the objects designated,” i.e. that math is now concerned much more with the logical relations between abstract concepts than with any particular correspondence between those concepts and physical reality. The point: It’s in the seventeenth century that math becomes primarily a system of abstractions from other abstractions instead of from the world.

Which makes the second big change seem paradoxical: math’s new hyperabstractness turns out to work incredibly well in real-world applications. In science, engineering, physics, etc. Take, for one obvious example, calculus, which is exponentially more abstract than any sort of ‘practical’ math before (like, from what real-world observation does one dream up the idea that an object’s velocity and a curve’s subtending area have anything to do with each other?), and yet it is unprecedentedly good for representing/explaining motion and acceleration, gravity, planetary movements, heat–everything science tells us is real about the real world. Not at all for nothing does D. Berlinski call calculus “the story this world first told itself as it became the modern world.” Because what the modern world’s about, what it is, is science. And it’s in the seventeenth century that the marriage of math and science is consummated, the Scientific Revolution both causing and caused by the Math Explosion because science–increasingly freed of its Aristotelian hangups with substance v. matter and potentiality v. actuality–becomes now essentially a mathematical enterprise in which force, motion, mass, and law-as-formula compose the new template for understanding how reality works. By the late 1600s, serious math is part of astronomy, mechanics, geography, civil engineering, city planning, stonecutting, carpentry, metallurgy, chemistry, hydraulics, optics, lens-grinding, military strategy, gun- and cannon-design, winemaking, architecture, music, shipbuilding, timekeeping, calendar-reckoning; everything.

We take these changes for granted now.

But once, this was a scientific revolution that transformed, as Wallace observed, everything.

Maybe this is the best historical analogy for the digital transformation we’ve been experiencing in the past decade.

## January 19, 2018

Ph.D. student

#### May there be shared blocklists

A reminder:

Unconstrained media access to a person is indistinguishable from harassment.

It pains me to watch my grandfather suffer from surfeit of communication. He can't keep up with the mail he receives each day. Because of his noble impulse to charity and having given money to causes he supports (evangelical churches, military veterans, disadvantaged children), those charities sell his name for use by other charities (I use "charity" very loosely), and he is inundated with requests for money. Very frequently, those requests include a "gift", apparently in order to induce a sense of obligation: a small calendar, a pen and pad of paper, refrigerator magnets, return address labels, a crisp dollar bill. Those monetary ones surprised me at first, but they are common and if some small percentage of people feel an obligation to write a $50 check, then sending out a $1 to each person makes it worth their while (though it must not help the purported charitable cause very much, not a high priority). Many now include a handful of US coins stuck to the response card -- ostensibly to imply that just a few cents a day can make a difference, but, I suspect, to make it harder to recycle the mail directly because it includes metal as well as paper. (I throw these in the recycling anyway.) Some of these solicitations include a warning on the outside that I hadn't seen before, indicating that it's a federal criminal offense to open postal mail or to keep it from the recipient. Perhaps this is a threat to caregivers to discourage them from throwing away this junk mail for their family members; I suspect more likely, it encourages the suspicion in the recipient that someone might try to filter their mail, and that to do so would be unjust, even criminal, that anyone trying to help them by sorting their mail should not be trusted. It disgusts me.

But the mails are nothing compared to the active intrusiveness of other media. Take conservative talk radio, which my grandfather listened to for years as a way to keep sound in the house and fend off loneliness. It's often on in the house at a fairly low volume, but it's ever present, and it washes over the brain. I suspect most people could never genuinely understand Rush Limbaugh's rants, but coherent argument is not the point, it's just the repetition of a claim, not even a claim, just a general impression. For years, my grandfather felt conflicted, as many of his beloved family members (liberal and conservative) worked for the federal government, but he knew, in some quite vague but very deep way, that everyone involved with the federal government was a menace to freedom. He tells me explicitly that if you hear something often enough, you start to think it must be true.

And then there's the TV, now on and blaring 24 hours a day, whether he's asleep or awake. He watches old John Wayne movies or NCIS marathons. Or, more accurately, he watches endless loud commercials, with some snippets of quiet movies or television shows interspersed between them. The commercials repeat endlessly throughout the day and I start to feel confused, stressed and tired within a few hours of arriving at his house. I suspect advertisers on those channels are happy with the return they receive; with no knowledge of the source, he'll tell me that he "really ought to" get or try some product or another for around the house. He can't hear me, or other guests, or family he's talking to on the phone when a commercial is on, because they're so loud.

Compared to those media, email is clear and unintrusive, though its utility is still lost in inundation. Email messages that start with "Fw: FWD: FW: FW FW Fw:" cover most of his inbox; if he clicks on one and scrolls down far enough he can get to the message, a joke about Obama and monkeys, or a cute picture of a kitten. He can sometimes get to the link to photos of the great-grand-children, but after clicking the link he's faced with a moving pop-up box asking him to login, covering the faces of the children. To close that box, he must identify and click on a small "x" in very light grey on a white background. He can use the Web for his bible study and knows it can be used for other purposes, but ubiquitous and intrusive prompts (advertising or otherwise) typically distract him from other tasks.

My grandfather grew up with no experience with media of these kinds, and had no time to develop filters or practices to avoid these intrusions. At his age, it is probably too late to learn a new mindset to throw out mail without a second thought or immediately scroll down a webpage. In a lax regulatory environment, and unfamiliar with filtering, he suffers -- financially and emotionally -- from these exploitations on a daily basis. Mail, email, broadcast video, radio and telephone could provide an enormous wealth of benefits for an elderly person living alone: information, entertainment, communication, companionship, edification. But those advantages are made mostly inaccessible.

Younger generations suffer other intrusions of media. Online harassment is widely experienced (its severity varies, by gender among other things); your social media account probably lets you block an account that sends you a threat or other unwelcome message, but it probably doesn't provide mitigations against dogpiling, where a malicious actor encourages their followers to pursue you. Online harassment is important because of the severity and chilling impact on speech, but an analogous problem of over-access exists with other attention-grabbing prompts. What fraction of smartphone users know how to filter the notifications that buzz or ring their phone? Notifications are typically on by default rather than opt-in with permission. Smartphone users can, even without the prompt of the numerous thinkpieces on the topic, describe the negative effects on their attention and well-being.

The capability to filter access to ourselves must be a fundamental principle of online communication: it may be the key privacy concern of our time. Effective tools that allow us to control the information we're exposed to are necessities for freedom from harassment; they are necessities for genuine accessibility of information and free expression. May there be shared blocklists, content warnings, notification silencers, readability modes and so much more.

## January 15, 2018

Ph.D. student

#### social structure and the private sector

The Human Cell

Academic social scientists leaning towards the public intellectual end of the spectrum love to talk about social norms.

This is perhaps motivated by the fact that these intellectual figures are prominent in the public sphere. The public sphere is where these norms are supposed to solidify, and these intellectuals would like to emphasize their own importance.

I don’t exclude myself from this category of persons. A lot of my work has been about social norms and technology design (Benthall, 2014; Benthall, Gürses and Nissenbaum, 2017).

But I also work in the private sector, and it’s striking how differently things look from that perspective. It’s natural for academics who participate more in the public sphere than the private sector to be biased in their view of social structure. From the perspective of being able to accurately understand what’s going on, you have to think about both at once.

That’s challenging for a lot of reasons, one of which is that the private sector is a lot less transparent than the public sphere. In general the internals of actors in the private sector are not open to the scrutiny of commentariat onlookers. Information is one of the many resources traded in pairwise interactions; when it is divulged, it is divulged strategically, introducing bias. So it’s hard to get a general picture of the private sector, even though it accounts for a much larger proportion of social structure than the public sphere does. In other words, public spheres are highly over-represented in analyses of social structure due to the availability of public data about them. That is worrisome from an analytic perspective.

It’s well worth making the point that the public/private dichotomy is problematic. Contextual integrity theory (Nissenbaum, 2009) argues that modern society is differentiated among many distinct spheres, each bound by its own social norms. Nissenbaum actually has a quite different notion of norm formation from, say, Habermas. For Nissenbaum, norms evolve over social history, but may be implicit. Contrast this with Habermas’s view that norms are the result of communicative rationality, which is an explicit and linguistically mediated process. The public sphere is a big deal for Habermas. Nissenbaum, a scholar of privacy, rejects the idea of the ‘public sphere’ simpliciter. Rather, social spheres self-regulate and privacy, which she defines as appropriate information flow, is maintained when information flows according to these multiple self-regulatory regimes.

I believe Nissenbaum is correct on this point of societal differentiation and norm formation. This nuanced understanding of privacy as the differentiated management of information flow challenges any simplistic notion of the public sphere. Does it challenge a simplistic notion of the private sector?

Naturally, the private sector doesn’t exist in a vacuum. In the modern economy, companies are accountable to the law, especially contract law. They have to pay their taxes. They have to deal with public relations and are regulated as to how they manage information flows internally. Employees can sue their employers, etc. So just as the ‘public sphere’ doesn’t permit a total free-for-all of information flow (some kinds of information flow in public are against social norms!), so too does the ‘private sector’ not involve complete secrecy from the public.

As a hypothesis, we can posit that what makes the private sector different is that the relevant social structures are less open in their relations with each other than they are in the public sphere. We can imagine an autonomous social entity like a biological cell. Internally it may have a lot of interesting structure and organelles. Its membrane prevents this complexity leaking out into the aether, or plasma, or whatever it is that human cells float around in. Indeed, this membrane is necessary for the proper functioning of the organelles, which in turn allows the cell to interact properly with other cells to form a larger organism. Echoes of Francisco Varela.

It’s interesting that this may actually be a quantifiable difference. One way of modeling the difference between the internal and external-facing complexity of an entity is using information theory. The more complex internal state of the entity has higher entropy than the membrane. The fact that the membrane causally mediates interactions between the internals and the environment limits information flow between them; this is captured by the Data Processing Inequality. The restricted information flow between the system internals and externals is quantified as lower mutual information between the two domains. At zero mutual information, the two domains are statistically independent of each other.
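As a toy numerical sketch of this idea (my own construction, with made-up distributions): let internal state X pass through a membrane reading M before reaching an environment observation Y, so that X → M → Y is a Markov chain. The Data Processing Inequality then guarantees I(X;Y) ≤ I(X;M).

```python
import math

# Toy illustration of the Data Processing Inequality for a "cell":
# internals X -> membrane M -> environment Y is a Markov chain, so the
# environment can learn no more about the internals than the membrane
# exposes: I(X;Y) <= I(X;M). All distributions here are invented.

def mutual_information(joint):
    """I(A;B) in bits, from a dict {(a, b): probability}."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    return sum(p * math.log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

# X: four equally likely internal states; the membrane deterministically
# coarsens them to two states; the environment sees a noisy copy of M.
p_x = {x: 0.25 for x in range(4)}
def membrane(x):
    return x // 2
p_y_given_m = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.1, 1: 0.9}}

joint_xm = {(x, membrane(x)): p for x, p in p_x.items()}
joint_xy = {}
for x, px in p_x.items():
    for y, pym in p_y_given_m[membrane(x)].items():
        joint_xy[(x, y)] = joint_xy.get((x, y), 0.0) + px * pym

i_xm = mutual_information(joint_xm)  # 1 bit: the membrane exposes one bit
i_xy = mutual_information(joint_xy)  # strictly less, due to channel noise
print(f"I(X;M) = {i_xm:.3f} bits, I(X;Y) = {i_xy:.3f} bits")
```

Making the membrane noisier pushes I(X;Y) toward zero, the regime where internals and environment are statistically independent.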

I haven’t worked out all the implications of this.

References

Benthall, Sebastian. (2015) Designing Networked Publics for Communicative Action. Jenny Davis & Nathan Jurgenson (eds.) Theorizing the Web 2014 [Special Issue]. Interface 1.1. (link)

Sebastian Benthall, Seda Gürses and Helen Nissenbaum (2017), “Contextual Integrity through the Lens of Computer Science”, Foundations and Trends® in Privacy and Security: Vol. 2: No. 1, pp 1-69. http://dx.doi.org/10.1561/3300000016

Nissenbaum, H. (2009). Privacy in context: Technology, policy, and the integrity of social life. Stanford University Press.

## January 14, 2018

Ph.D. student

#### on university businesses

Suppose we wanted to know why there’s an “epistemic crisis” today. Suppose we wanted to talk about higher education’s role and responsibility towards that crisis, even though that may be just a small part of it.

That’s a reason why we should care about postmodernism in universities. The alternative, some people have argued, is a ‘modernist’ or even ‘traditional’ university which was based on a perhaps simpler and less flexible theory of knowledge. For the purpose of this post I’m going to assume the reader knows roughly what that’s all about. Since postmodernism rejects meta-narratives and instead admits that all we have to legitimize anything is a contest of narratives, that is really just asking for an epistemic crisis where people just use whatever narratives are most convenient for them and then society collapses.

In my last post I argued that the question of whether universities should be structured around modernist or postmodernist theories of legitimation and knowledge has been made moot by the fact that universities have the option of operating solely on administrative business logic. I wasn’t being entirely serious, but it’s a point that’s worth exploring.

One reason why it’s not so terrible if universities operate according to business logic is because it may still, simply as a function of business logic, be in their strategic interest to hire serious scientists and scholars whose work is not directly driven by business logic. These scholars will be professionally motivated and in part directed by the demands of their scholarly fields. But that kicks the can of the inquiry down the road.

Suppose that there are some fields that are Bourdieusian sciences, which might be summarized as an artistic field structured by the distribution of symbolic capital to those who win points in the game of arbitration of the real. (Writing that all out now I can see why many people might find Bourdieu a little opaque.)

Then if a university business thinks it should hire from the Bourdieusian sciences, that’s great. But there are many other kinds of social fields it might be useful to hire from for, e.g., faculty positions. This seems to agree with the facts: many university faculty are not from Bourdieusian sciences!

This complicates, a lot actually, the story about the relationship between universities and knowledge. One thing that is striking from the ethnography of education literature (Jean Lave) is how much the social environment of learning is constitutive of what learning is (to put it one way). Society expects and to some extent enforces that when a student is in a classroom, what they are taught is knowledge. We have concluded that not every teacher in a university business is a Bourdieusian scientist, hence some of what students learn in universities is not Bourdieusian science, so it must be that a lot of what students are taught in universities isn’t real. But what is it then? It’s got to be knowledge!

The answer may be: it’s something useful. It may not be real or even approximating what’s real (by scientific standards), but it may still be something that’s useful to believe, express, or perform. If it’s useful to “know” even in this pragmatic and distinctly non-Platonic sense of the term, there’s probably a price at which people are willing to be taught it.

As a higher order effect, universities might engage in advertising in such a way that some prospective students are convinced that what they teach is useful to know even when it’s not really useful at all. This prospect is almost too cynical to even consider. But that’s why it’s important to consider why a university operating solely according to business logic would in fact be terrible! This would not just be the sophists teaching sophistry to students so that they can win in court. It would be sophists teaching bullshit to students because they can get away with being paid for it. In other words, charlatans.

Wow. You know I didn’t know where this was going to go when I started reasoning about this, but it’s starting to sound worse and worse!

It can’t possibly be that bad. University businesses have a reputation to protect, and they are subject to the court of public opinion. Even if not all fields are Bourdieusian science, each scholarly field has its own reputation to protect and so has an incentive to ensure that it, at least, is useful for something. It becomes, in a sense, a web of trust, where each link in the network is tested over time. As an aside, this is an argument for the importance of interdisciplinary work. It’s not just a nice-to-have because wouldn’t-it-be-interesting. It’s necessary as a check on the mutual compatibility of different fields. It prevents disciplines from becoming exploitative of students and other resources in society.

Indeed, it’s possible that this process of establishing mutual trust among experts, even across different fields, is what allows a kind of coherentist, pragmatist truth to emerge. That’s by no means guaranteed, though. And to be very clear, that process can happen among people whether or not they are involved in universities or higher education. Everybody is responsible for reality, in a sense. To wit, citizen science is still Bourdieusian science.

But see how the stature of the university has fallen. Under a modernist logic, the university was where one went to learn what is real. One would trust that learning it would be useful because universities were dedicated to teaching what was real. Under business logic, the university is a place to learn something that the university finds it useful to teach you. It cannot be trusted without lots of checks from the rest of society. Intellectual authority is now much more distributed.

The problem with the business university is that it finds itself in competition for intellectual authority, and hence society’s investment in education, with other kinds of institutions. These include employers, who can discount wages for jobs that give their workers valuable human capital (e.g. the free college internship). Moreover, absent its special dedication to science per se, there’s less of a reason to put society’s investment in basic research in its hands. This accords with Clark Kerr‘s observation that the postwar era was golden for universities because the federal government kept them flush with funds for basic research; those funds have since dwindled, and now a lot more important basic research is done in the private sector.

So to the extent that the university is responsible for the ‘epistemic crisis’, it may be because universities began to adopt business logic as their guiding principle. This is not because they then began to teach garbage. It’s because they lost the special authority awarded to modernist universities, which were funded for a special mission in society. This opened the door for more charlatans, most of whom are not at universities. They might be on YouTube.

Note that this gets us back to something similar but not identical to postmodernism.* What’s at stake are not just narratives, but also practices and other forms of symbolic and social capital. But there’s certainly many different ones, articulated differently, and in competition with each other. The university business winds up reflecting the many different kinds of useful knowledge across all society and reproducing it through teaching. Society at large can then keep universities in check.
This “society keeping university businesses in check” point is a case for abolishing tenure in university businesses. Tenure may be a great idea in universities with different purposes and incentive structures. But for university businesses, it’s not good–it makes them less good businesses.

The epistemic crisis is due to a crisis in epistemic authority. To the extent universities are responsible, it’s because universities lost their special authority. This may be because they abandoned the modernist model of the university. But it is not because they abandoned modernism for postmodernism. “Postmodern” and “modern” fields coexist symbiotically with the pragmatist model of the university as business. But losing modernism has been bad for the university business as a brand.

* Though it must be noted that Lyotard’s analysis of the postmodern condition is all about how legitimation by performativity is the cause of this new condition. I’m probably just recapitulating his points in this post.

## January 12, 2018

Ph.D. student

#### STEM and (post-)modernism

There is an active debate in the academic social sciences about modernism and postmodernism. I’ll refer to my notes on Clark Kerr’s comments on the postmodern university as an example of where this topic comes up.

If postmodernism is the condition where society is no longer bound by a single unified narrative but rather is constituted by a lot of conflicting narratives, then, yeah, ok, we live in a postmodern society. This isn’t what the debate is really about though.

The debate is about whether we (anybody in intellectual authority) should teach people that we live in a postmodern society and how to act effectively in that world, or if we should teach people to believe in a metanarrative which allows for truth, progress, and so on.

It’s important to notice that this whole question of what narratives we do or do not teach our students is irrelevant to a lot of educational fields. STEM fields aren’t really about narratives. They are about skills or concepts or something.

Let me put it another way. Clark Kerr was concerned about the rise of the postmodern university–was the traditional, modernist university on its way out?

The answer, truthfully, was that neither the traditional modernist university nor the postmodern university became dominant. Probably the most dominant university in the United States today is Stanford; it has accomplished this through a winning combination of STEM education, proximity to venture capital, and private fundraising. You don’t need a metanarrative if you’re rich.

Maybe that indicates where education has to go. The traditional university believed that philosophy was at its center. Philosophy is no longer at the center of the university. Is there a center? If there isn’t, then postmodernism reigns. But something else seems to be happening: STEM is becoming the new center, because it’s the best funded of the disciplines. Maybe that’s fine! Maybe focusing on STEM is how to get modernism back.

## January 09, 2018

Ph.D. student

#### The social value of an actually existing alternative — BLOCKCHAIN BLOCKCHAIN BLOCKCHAIN

When people get excited about something, they will often talk about it in hyperbolic terms. Some people will actually believe what they say, though this seems to drop off with age. The emotionally energetic framing of the point can be both factually wrong and contain a kernel of truth.

This general truth applies to hype about particular technologies. Does it apply to blockchain technologies and cryptocurrencies? Sure it does!

Blockchain boosters have offered utopian or radical visions about what this technology can achieve. We should be skeptical about these visions prima facie precisely in proportion to how utopian and radical they are. But that doesn’t mean that this technology isn’t accomplishing anything new or interesting.

Here is a summary of some dialectics around blockchain technology:

A: “Blockchains allow for fully decentralized, distributed, and anonymous applications. These can operate outside of the control of the law, and that’s exciting because it’s a new frontier of options!”

B1: “Blockchain technology isn’t really decentralized, distributed, or anonymous. It’s centralizing its own power into the hands of the few, and meanwhile traditional institutions have the power to crush it. Their anarchist mentality is naive and short-sighted.”

B2: “Blockchain technology enthusiasts will soon discover that they actually want all the legal institutions they designed their systems to escape. Their anarchist mentality is naive and short-sighted.”

While B1 and B2 are both critical of blockchain technology and see A as naive, it’s important to realize that they believe A is naive for contradictory reasons. B1 is arguing that it does not accomplish what it was purportedly designed to do, which is provide a foundation of distributed, autonomous systems that’s free from internal and external tyranny. B2 is arguing that nobody actually wants to be free of these kinds of tyrannies.

These are conservative attitudes that we would expect from conservative (in the sense of conservation, or “inhibiting change”) voices in society. These are probably demographically different people from person A. And this makes all the difference.

If what differentiates people is their relationship to different kinds of social institutions or capital (in the Bourdieusian sense), then it would be natural for some people to be incumbents in old institutions who would argue for their preservation and others to be willing to “exit” older institutions and join new ones. However imperfect the affordances of blockchain technology may be, they are different affordances than those of other technologies, and so they promise the possibility of new kinds of institutions with an alternative information and communications substrate.

It may well be that the pioneers in the new substrate will find that they have political problems of their own and need to reinvent some of the societal controls that they were escaping. But the difference will be that in the old system, the pioneers were relative outsiders, whereas in the new system, they will be incumbents.

The social value of blockchain technology therefore comes in two waves. The first wave is the value it provides to early adopters who use it instead of other institutions that were failing them. These people have made the choice to invest in something new because the old options were not good enough for them. We can celebrate their successes as people who have invented, quite literally, a new form of social capital, and quite possibly a new form of wealth. When a small group of people create a lot of new wealth, this almost immediately creates a lot of resentment from others who did not get in on it.

But there’s a secondary social value to the creation of actually existing alternative institutions and forms of capital (which are in a sense the same thing). This is the value of competition. The marginal person, who can choose how to invest themselves, can exit from one failing institution to a fresh new one if they believe it’s worth the risk. When an alternative increases the amount of exit potential in society, that increases the competitive pressure on institutions to perform. That should benefit even those with low mobility.

So, in conclusion, blockchain technology is good because it increases institutional competition. At the end of the day that reduces the power of entrenched incumbents to collect rents and gives everybody else more flexibility.

## January 07, 2018

Ph.D. student

#### The economy of responsibility and credit in ethical AI; also, shameless self-promotion

Serious discussions about ethics and AI can be difficult because at best most people are trained in either ethics or AI, but not both. This leads to lots of confusion as a lot of the debate winds up being about who should take responsibility and credit for making the hard decisions.

Here are some flavors of outcomes of AI ethics discussions. Without even getting into the specifics of the content, each position serves a different constituency, despite all coming under the heading of “AI Ethics”.

• Technical practitioners getting together to decide a set of professional standards by which to self-regulate their use of AI.
• Ethicists getting together to decide a set of professional standards by which to regulate the practices of technical people building AI.
• Computer scientists getting together to come up with a set of technical standards to be used in the implementation of autonomous AI so that the latter performs ethically.
• Ethicists getting together to come up with ethical positions with which to critique the implementations of AI.

Let’s pretend for a moment that the categories used here of “computer scientists” and “ethicists” are valid ones. I’m channeling the zeitgeist here. The core motivation of “ethics in AI” is the concern that the AI that gets made will be bad or unethical for some reason. This is rumored to be because there are people who know how to create AI–the technical practitioners–who are not thinking through the ethical consequences of their work. There are supposed to be some people who are authorities on what outcomes are good and bad; I’m calling these ‘ethicists’, though I include sociologists of science and lawyers claiming an ethical authority in that term.

What are the dimensions along which these positions vary?

What is the object of the prescription? Are technical professionals having their behavior prescribed? Or is it the specification of the machine that’s being prescribed?

Who is creating the prescription? Is it “technical people” like programmers and computer scientists, or is it people ‘trained in ethics’ like lawyers and sociologists?

When is the judgment being made? Is the judgment being made before the AI system is being created as part of its production process, or is it happening after the fact when it goes live?

These dimensions are not independent from each other and in fact it’s their dependence on each other that makes the problem of AI ethics politically challenging. In general, people would like to pass on responsibility to others and take credit for themselves. Technicians love to pass responsibility to their machines–“the algorithm did it!”. Ethicists love to pass responsibility to technicians. In one view of the ideal world, ethicists would come up with a set of prescriptions, technologists would follow them, and nobody would have any ethical problems with the implementations of AI.

This would entail, more or less, that ethical requirements have been internalized into either technical design processes, engineering principles, or even mathematical specifications. This would probably be great for society as a whole. But the more ethical principles get translated into something that’s useful for engineers, the less ethicists can take credit for good technical outcomes. Some technical person has gotten into the loop and solved the problem. They get the credit, except that they are largely anonymous, and so the product, the AI system, gets the credit for being a reliable, trustworthy product. The more AI products are reliable, trustworthy, good, the less credible are the concerns of the ethicists, whose whole raison d’etre is to prevent the uninformed technologists from doing bad things.

The temptation for ethicists, then, is to sit safely where they can critique after the fact. Ethicists can write for the public condemning evil technologists without ever getting their hands dirty with the problems of implementation. There’s an audience for this and it’s a stable strategy for ethicists, but it’s not very good for society. It winds up putting public pressure on technologists to solve the problem themselves through professional self-regulation or technical specification. If they succeed, then the ethicists don’t have anything to critique, and so it is in the interest of ethicists to cast doubt on these self-regulation efforts without ever contributing to their success. Ethicists have the tricky job of pointing out that technologists are not listening to ethicists, and are therefore suspect, without ever engaging with technologists in such a way that would allow them to arrive at a bona fide ethical technical solution. This is, one must admit, not a very ethical thing to do.

There are exceptions to this bleak and cynical picture!

In fact, yours truly is an exception to this bleak and cynical picture, along with my brilliant co-authors Seda Gürses and Helen Nissenbaum! If you would like to see an honest attempt at translating ethics into computer science so that AI can be more ethical, look no further than:

Sebastian Benthall, Seda Gürses and Helen Nissenbaum (2017), “Contextual Integrity through the Lens of Computer Science”, Foundations and Trends® in Privacy and Security: Vol. 2: No. 1, pp 1-69. http://dx.doi.org/10.1561/3300000016

Contextual Integrity is an ethical framework. I’d go so far as to say that it’s a meta-ethical framework, as it provides a theory of where ethics comes from and why it is important. It’s a theory developed by the esteemed ethicist and friend-of-computer-science Helen Nissenbaum.

In this paper, which you should definitely read, two researchers team up with Helen Nissenbaum to review all the computer science papers we can find that reference Contextual Integrity. One of those researchers is Seda Gürses, a computer scientist with deep background in privacy and security engineering. You essentially can’t find two researchers more credible than Helen and Seda, paired up, on the topic of how to engineer privacy (which is a subset of ethics).

I am also a co-author of this paper. You can certainly find more credible researchers on this topic than myself, but I have the enormous good fortune to have worked with such profoundly wise and respectable collaborators.

Probably the best part about this paper, in my view, is that we’ve managed to write a paper about ethics and computer science (and indeed, AI is a subset of what we are talking about in the paper) that is honestly trying to grapple with the technical challenges of designing ethical systems, while also contending with all the sociological complication of what ethics is. There’s a whole section where we refuse to let computer scientists off the hook from dealing with how norms (and therefore ethics) are the result of a situated and historical process of social adaptation. But then there’s a whole other section where we talk about how developing AI that copes responsibly with the situated and historical process of social adaptation is an open research problem in privacy engineering! There’s truly something for everybody!

## January 02, 2018

Ph.D. student

#### Exit vs. Voice as Defecting vs. Cooperation as …

These dichotomies that are often thought of separately are actually the same.

| Cooperation | Defection |
| --- | --- |
| Voice (Hirschman) | Exit (Hirschman) |
| Lifeworld (Habermas) | System (Habermas) |
| Power (Arendt) | Violence (Arendt) |
| Institutions | Markets |

#### Why I will blog more about math in 2018

One reason to study and write about political theory is what Habermas calls the emancipatory interest of human inquiry: to come to better understand the social world one lives in, unclouded by ideology, in order to be more free from those ideological expectations.

This is perhaps counterintuitive since what is perhaps most seductive about political theory is that it is the articulation of so many ideologies. Indeed, one can turn to political theory because one is looking for an ideology that suits them. Having a secure world view is comforting and can provide a sense of purpose. I know that personally I’ve struggled with one after another.

Looking back on my philosophical ‘work’ over the past decade (as opposed to my technical and scientific work), I’d like to declare it an emancipatory success for at least one person, myself. I am happier for it, though at the cost that comes from learning the hard way.

A problem with this blog is that it is too esoteric. It has not been written with a particular academic discipline in mind. It draws rather too heavily from certain big name thinkers that not enough people have read. I don’t provide background material in these thinkers, and so many find this inaccessible.

One day I may try to edit this material into a more accessible version of its arguments. I’m not sure who would find this useful, because much of what I’ve been doing in this work is arriving at the conclusion that actually, truly, mathematical science is the finest way of going about understanding sociotechnical systems. I believe this follows even from deep philosophical engagement with notable critics of this view–and I have truly tried to engage with the best and most notable of these critics. There will always be more of them, but I think at this point I have to make a decision to not seek them out any more. I have tested these views enough to build on them as a secure foundation.

What follows then is a harder but I think more rewarding task of building out the mathematical theory that reflects my philosophical conclusions. This is necessary for, for example, building a technical implementation that expresses the political values that I’ve arrived at. Arguably, until I do this, I’ll have just been beating around the bush.

I will admit to being sheepish about blogging on technical and mathematical topics. This is because, in my understanding, technical and mathematical writing is held to a higher standard than normal writing. Errors are clearer, and more permanent.

I recognize this now as a personal inhibition and a destructive one. If this blog has been valuable to me as a tool for reading, writing, and developing fluency in obscure philosophical literature, why shouldn’t it also be a tool for reading, writing, and developing fluency in obscure mathematical and technical literature? And to do the latter, shouldn’t I have to take the risk of writing with the same courage, if not abandon?

This is my wish for 2018: to blog more math. It’s a riskier project, but I think I have to in order to keep developing these ideas.

## December 31, 2017

MIMS 2012

#### Books Read in 2017

This year I read 14 books, which is 8 fewer than the 22 I read last year (view last year’s list here). Lower than I was hoping, but it at least averages out to more than 1 per month. I’m not too surprised, though, since I traveled a lot and was busier socially this year. Once again, I was heavy on the non-fiction — I only read 2 fiction books this year. Just 2! I need to up that number in 2018.

## Highlights

### Service Design: From Insight to Implementation

by Andy Polaine, Lavrans Løvlie, and Ben Reason

This book really opened my eyes to the world of service design and thinking about a person’s experience beyond just the confines of the screen. Using the product is just one part of a person’s overall experience accomplishing their goal. This book is a great primer on the subject.

View on Amazon

### Sol LeWitt: The Well-Tempered Grid

by Charles Haxthausen, Christianna Bonin, and Erica Dibenedetto

I’ve been quite taken by Sol LeWitt’s work after seeing his art at various museums, such as the SF MOMA. I finally bought a book to learn more about his work and approach to art. This inspired me to re-create his work programmatically.

View on Amazon

### The Corrections

by Jonathan Franzen

This is the first Franzen book I’ve read, and I thoroughly enjoyed it. I’ve been interested in him for a long time because David Foster Wallace is a fan of his. A well-written, engaging tale of a family’s troubles, anxieties, and the “corrections” they need to make to keep their lives intact.

View on Amazon

### Radical Candor

by Kim Scott

Great book on managing people. Highly recommended for anyone who manages or is interested in managing. Even if you’re an individual contributor it’s worth reading because it will help you be a better employee and have a better relationship with your boss.

View on Amazon

### Emotional Design

by Don Norman

This companion to Norman’s The Design of Everyday Things is just as good as its better-known sibling. In this book he focuses on the emotional and aesthetic side of design, and why those elements are an important part of designing a successful product. He goes beyond fluffy, surface-level explanations, though, and explains the why behind these phenomena using science, psychology, and biology. This makes for a convincing argument behind the importance of this aspect of design, which can often be written off as “nice-to-have” or self-indulgent.

View on Amazon

## December 22, 2017

Ph.D. student

#### technological determinism and economic determinism

If you are trying to explain society, politics, the history of the world, whatever, it’s a good idea to narrow the scope of what you are talking about to just the most important parts because there is literally only so much you could ever possibly say. Life is short. A principled way of choosing what to focus on is to discuss only those parts that are most significant in the sense that they played the most causally determinative role in the events in question. By widely accepted interventionist theories of causation, what makes something causally determinative of something else is the fact that in a counterfactual world in which the cause was made to be somehow different, the effect would have been different as well.

Since we basically never observe a counterfactual history, this leaves a wide open debate over the general theoretical principles one would use to predict the significance of certain phenomena over others.

One point of view on this is called technological determinism. It is the view that, for a given social phenomenon, what’s really most determinative of it is the technological substrate of it. Engineers-turned-thought-leaders love technological determinism because of course it implies that really the engineers shape society, because they are creating the technology.

Technological determinism is absolutely despised by academic social scientists who have to deal with technology and its role in society. I have a hard time understanding why. Sometimes it is framed as an objection to technologists who avoid responsibility for the social problems they create because “the technology did it, not them.” But such a childish tactic really doesn’t seem to be what’s at stake if you’re critiquing technological determinism. Another way of framing the problem is to say that the way a technology affects society in San Francisco is going to be different from how it affects society in Beijing. Society has its role in a dialectic.

So there is a grand debate of “politics” versus “technology” which reoccurs everywhere. This debate is rather one sided, since it is almost entirely constituted by political scientists or sociologists complaining that the engineers aren’t paying enough attention to politics, seeing how their work has political causes and effects. Meanwhile, engineers-turned-thought-leaders just keep spouting off whatever nonsense comes to their head and they do just fine because, unlike the social scientist critics, engineers-turned-thought-leaders tend to be rich. That’s why they are thought leaders: because their company was wildly successful.

What I find interesting is that economic determinism is never part of this conversation. It seems patently obvious that economics drives both politics and technology. You can be anywhere on the political spectrum and hold this view. Once it was called “dialectical materialism”, and it was the foundation for left-wing politics for generations.

So what has happened? Here are a few possible explanations.

The first explanation is that if you’re an economic determinist, maybe you are smart enough to do something more productive with your time than get into debates about whether technology or politics is more important. You would be doing something more productive, like starting a business to develop a technology that manipulates political opinion to favor the deregulation of your business. Or trying to get a socialist elected so the government will pay off student debts.

A second explanation is… actually, that’s it. That’s the only reason I can think of. Maybe there’s another one?

## December 18, 2017

Ph.D. student

#### The Data Processing Inequality and bounded rationality

I have long harbored the hunch that information theory, in the classic Shannon sense, and social theory are deeply linked. It has proven to be very difficult to find an audience for this point of view or an opportunity to work on it seriously. Shannon’s information theory is widely respected in engineering disciplines; many social theorists who are unfamiliar with it are loath to admit that something from engineering should carry essential insights for their own field. Meanwhile, engineers are rarely interested in modeling social systems.

I’ve recently discovered an opportunity to work on this problem through my dissertation work, which is about privacy engineering. Privacy is a subtle social concept but also one that has been rigorously formalized. I’m working on formal privacy theory now and have been reminded of a theorem from information theory: the Data Processing Inequality. What strikes me about this theorem is that it captures a point that comes up again and again in social and political problems, though it’s a point that’s almost never addressed head on.

The Data Processing Inequality (DPI) states that for three random variables, X, Y, and Z, arranged in a Markov chain such that $X \rightarrow Y \rightarrow Z$, we have $I(X,Z) \leq I(X,Y)$, where $I$ stands for mutual information. Mutual information is a measure of how much two random variables carry information about each other. If $I(X,Y) = 0$, that means the variables are independent. $I(X,Y) \geq 0$ always–that’s just a mathematical fact about how it’s defined.
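The inequality is easy to check numerically. Here is a minimal sketch, assuming NumPy is available; the alphabet sizes and the Dirichlet-sampled distributions are arbitrary illustrative choices, not anything canonical. It builds a random Markov chain $X \rightarrow Y \rightarrow Z$ and confirms that $I(X,Z) \leq I(X,Y)$:

```python
import numpy as np

def mutual_information(pxy):
    """Mutual information I(X;Y) in bits, from a joint distribution matrix pxy."""
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X, as a column
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y, as a row
    mask = pxy > 0                        # skip zero-probability cells (0 log 0 = 0)
    return float((pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])).sum())

# A toy Markov chain X -> Y -> Z over small finite alphabets.
rng = np.random.default_rng(0)
px = rng.dirichlet(np.ones(4))                      # marginal of X
p_y_given_x = rng.dirichlet(np.ones(3), size=4)     # channel X -> Y (rows sum to 1)
p_z_given_y = rng.dirichlet(np.ones(3), size=3)     # channel Y -> Z (rows sum to 1)

pxy = px[:, None] * p_y_given_x   # joint of (X, Y)
pxz = pxy @ p_z_given_y           # joint of (X, Z), valid because Z depends only on Y

i_xy = mutual_information(pxy)
i_xz = mutual_information(pxz)
print(f"I(X;Y) = {i_xy:.4f} bits, I(X;Z) = {i_xz:.4f} bits")
assert i_xz <= i_xy + 1e-9        # the Data Processing Inequality
```

Any choice of the three distributions here gives the same ordering; the second stage of processing can only destroy information about $X$, never create it.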

The implications of this for psychology, social theory, and artificial intelligence are I think rather profound. It provides a way of thinking about bounded rationality in a simple and generalizable way–something I’ve been struggling to figure out for a long time.

Suppose that there’s a big world out there, $W$, and there’s an organism, or a person, or a sociotechnical organization within it, $Y$. The world is big and complex, which implies that it has a lot of informational entropy, $H(W)$. Through whatever sensory apparatus is available to $Y$, it acquires some kind of internal sensory state. Because this organism is much smaller than the world, its entropy is much lower. There are many fewer possible states that the organism can be in, relative to the number of states of the world: $H(W) >> H(Y)$. This in turn bounds the mutual information between the organism and the world: $I(W,Y) \leq H(Y)$.

Now let’s suppose the actions that the organism takes, $Z$, depend only on its internal state. It is an agent, reacting to its environment. Whatever these actions are, they can only be as calibrated to the world as the agent had capacity to absorb the world’s information. I.e., $I(W,Z) \leq H(Y) << H(W)$. The implication is that the more limited the mental capacity of the organism, the more its actions will be approximately independent of the state of the world that precedes it.
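The bottleneck argument above can be illustrated the same way. In this sketch (again assuming NumPy; the state-space sizes are arbitrary choices for illustration), a 64-state "world" $W$ is sensed by a 4-state "organism" $Y$, whose actions $Z$ depend only on $Y$. The mutual information between the world and the actions never exceeds the organism's entropy:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector p."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mutual_information(pxy):
    """Mutual information in bits from a joint distribution matrix."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float((pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])).sum())

# A big world W (64 states), a small organism Y (4 states), actions Z (8 states),
# arranged as the Markov chain W -> Y -> Z.
rng = np.random.default_rng(1)
pw = rng.dirichlet(np.ones(64))
p_y_given_w = rng.dirichlet(np.ones(4), size=64)
p_z_given_y = rng.dirichlet(np.ones(8), size=4)

pwy = pw[:, None] * p_y_given_w   # joint of (W, Y)
pwz = pwy @ p_z_given_y           # joint of (W, Z); Z depends on W only through Y
py = pwy.sum(axis=0)              # marginal of Y

print(f"H(W) = {entropy(pw):.2f} bits, H(Y) = {entropy(py):.2f} bits")
print(f"I(W;Z) = {mutual_information(pwz):.4f} bits <= H(Y)")
assert mutual_information(pwy) <= entropy(py) + 1e-9   # I(W,Y) <= H(Y)
assert mutual_information(pwz) <= entropy(py) + 1e-9   # I(W,Z) <= H(Y), via the DPI
```

However rich the world is, the organism's actions can carry at most $H(Y) \leq \log_2 4 = 2$ bits about it, which is the bounded-rationality point in miniature.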

There are a lot of interesting implications of this for social theory. Here are a few cases that come to mind.

I've written quite a bit here (blog links) and here (arXiv) about Bostrom’s superintelligence argument and why I’m generally not concerned with the prospect of an artificial intelligence taking over the world. My argument is that there are limits to how much an algorithm can improve itself, and these limits put a stop to exponential intelligence explosions. I’ve been criticized on the grounds that I don’t specify what the limits are, and that if the limits are high enough then maybe relative superintelligence is possible. The Data Processing Inequality gives us another tool for estimating the bounds of an intelligence based on the range of physical states it can possibly be in. How calibrated can a hegemonic agent be to the complexity of the world? It depends on the capacity of that agent to absorb information about the world; that can be measured in information entropy.

A related case is a rendering of Scott’s Seeing Like a State arguments. Why is it that “high modernist” governments failed to successfully control society through scientific intervention? One reason is that the complexity of the system they were trying to manage vastly outsized the complexity of the centralized control mechanisms. Centralized control was very blunt, causing many social problems. Arguably, behavioral targeting and big data centers today equip controlling organizations with more informational capacity (more entropy), but they still get it wrong sometimes, causing privacy violations, because they can’t model the entirety of the messy world we’re in.

The Data Processing Inequality is also helpful for explaining why the world is so messy. There are a lot of different agents in the world, and each one only has so much bandwidth for taking in information. This means that most agents are acting almost independently from each other. The guiding principle of society isn’t signal, it’s noise. That explains why there are so many disorganized heavy tail distributions in social phenomena.

Importantly, if we let the world at any time slice be informed by the actions of many agents acting nearly independently from each other in the slice before, then that increases the entropy of the world. This increases the challenge for any particular agent to develop an effective controlling strategy. For this reason, we would expect the world to get more out of control the more intelligent agents are on average. The popularity of the personal computer perhaps introduced a lot more entropy into the world, distributed in an agent-by-agent way. Moreover, powerful controlling data centers may increase the world’s entropy, rather than reducing it. So even if, for example, Amazon were to try to take over the world, the existence of Baidu would be a major obstacle to its plans.
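A back-of-the-envelope sketch of the entropy claim, assuming a toy model in which a “world state” is just the joint configuration of n agents, each choosing among k discrete actions (the numbers and model are invented for illustration):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)

n, k = 10, 4  # n agents, each choosing among k actions
marginals = rng.dirichlet(np.ones(k), size=n)  # each agent's action distribution

# For independent agents, the entropy of the whole configuration is the
# sum of the individual entropies: H(X1, ..., Xn) = sum_i H(Xi).
joint_entropy_independent = sum(entropy(p) for p in marginals)

# If instead every agent copied a single leader, the joint entropy would
# collapse to that of one agent alone.
joint_entropy_coordinated = entropy(marginals[0])

assert joint_entropy_independent > joint_entropy_coordinated
```

Independence makes the aggregate entropy scale with the number of agents, which is the sense in which adding more independently acting agents makes the world harder for any one controller to model.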

There are a lot of assumptions built into these informal arguments and I’m not wedded to any of them. But my point here is that information theory provides useful tools for thinking about agents in a complex world. There’s potential for using it for modeling sociotechnical systems and their limitations.

#### The harmonics of 'entitlement'

A lot of the most effective political keywords derive their force from a maneuver akin to what H. W. Fowler called "legerdemain with two senses," which enables you to slip from one idea to another without ever letting on that you’ve changed the subject. Values oscillates between mores (which vary from one group to another) and morals (of which some people have more than others do). The polemical uses of elite blend power (as in the industrial elite) and pretension (as in the names of bakeries and florists). Bias suggests both a disposition and an activity (as in housing bias), and ownership society conveys both material possession and having a stake in something.

And then there's entitlement, one of the seven words and phrases that the administration has instructed policy analysts at the Centers for Disease Control to avoid in budget documents, presumably in an effort, as Mark put it in an earlier post, to create "a safe space where [congresspersons'] delicate sensibilities will not be affronted by such politically incorrect words and phrases." Though it's unlikely that the ideocrats who came up with the list thought it through carefully, I can see why this would lead them to discourage the use of items like diversity. But the inclusion of entitlement on the list is curious, since the right has been at pains over the years to bend that word to their own purposes.

I did a Fresh Air piece on entitlement back in 2012, when Romney's selection of Paul Ryan as his running mate opened up the issue of "entitlement spending."  Unlike most other political keywords, the polysemy from which this one profits is purely fortuitous. As I noted in that piece:

One sense of the word was an obscure political legalism until the advent of the Great Society programs that some economists called “uncontrollables.” Technically, entitlements are just programs that provide benefits that aren’t subject to budgetary discretion. But the word also implied that the recipients had a moral right to the benefits. As LBJ said in justifying Medicare: “By God, you can’t treat grandma this way. She’s entitled to it."

The negative connotations of the word arose in another, rather distant corner of the language, when psychologists began to use a different notion of entitlement as a diagnostic for narcissism. Both those words entered everyday usage in the late 1970s, with a big boost from Christopher Lasch’s 1979 bestseller The Culture of Narcissism, an indictment of the pathological self-absorption of American life. By the early eighties, you no longer had to preface “sense of entitlement” with “unwarranted” or “bloated.” That was implicit in the word entitlement itself, which had become the epithet of choice whenever you wanted to scold the baby boomers for their superficiality and selfishness….

But it’s only when critics get to the role of government that the two meanings of entitlement start to seep into each other…. When conservatives fulminate about the cost of government entitlements, there’s often an implicit modifier “unearned” lurking in the background. And that in turn makes it easier to think of those programs as the cause of a wider social malaise: they create a “culture of dependency,” or a class of “takers,” which is basically what the Victorians called the undeserving poor.

That isn’t a new argument. The early opponents of Social Security charged that it would discourage individual thrift and reduce Americans to the level of Europeans. But now the language itself helps make the argument by using the same word for the political cause and the cultural effects. You can deplore “the entitlement society” without actually having to say whether you mean the social or political sense of the word, or even acknowledging that there’s any difference. It’s a strategic rewriting of linguistic history, as if we call the programs benefits simply because people feel entitled to them.

But to make that linguistic fusion work, you have to bend the meanings of the words to fit. When people rail about the cost of government entitlements, they’re thinking of social benefit programs like Medicare, not the price supports or the tax breaks that some economists call hidden entitlements.

Entitlements is back in the headlines now that the Republicans are looking for ways to make up for the revenues to be lost in the tax bill. "We're going to have to get back next year at entitlement reform, which is how you tackle the debt and the deficit," Ryan said just last week, adding, "Frankly, it's the health care entitlements that are the big drivers of our debt… that's really where the problem lies, fiscally speaking." Others extend "entitlement reform" to restructuring Social Security and other programs. Leaving aside the policy implications of these moves, which are beyond the modest purview of Language Log, it's clear that entitlement is still doing exactly the kind of rhetorical work for Republicans that it was doing in the Reagan era. So why is it suddenly verbum non gratum at the CDC?

## December 15, 2017

Ph.D. student

#### Net neutrality

What do I think of net neutrality?

I think ending it is bad for my personal self-interest. I am, economically, a part of the newer tech economy of software and data. I believe this economy benefits from net neutrality. I also am somebody who loves The Web as a consumer. I’ve grown up with it. It’s shaped my values.

From a broader perspective, I think ending net neutrality will revitalize U.S. telecom and give it leverage over the ‘tech giants’–Google, Facebook, Apple, Amazon—that have been rewarded by net neutrality policies. Telecom is a platform, but it had been turned into a utility platform. Now it can be a full-featured market player. This gives it an opportunity for platform envelopment, moving into the markets of other companies and bundling them in with ISP services.

Since this will introduce competition into a market whose other players are very well established, it could actually be good for consumers, because it breaks up an oligopoly in the services that are most user-facing. On the other hand, since ISPs are monopolists in most places, we could also expect the quality of Internet-based services to deteriorate in general.

What this might encourage is a proliferation of alternatives to cable ISPs, which would be interesting. Ending net neutrality creates a much larger design space in products that provision network access. Mobile companies are in this space already. So we could see this regulation as a move in favor of the cell phone companies, not just the ISPs. This too could draw surplus away from the big four.

This probably means the end of “The Web”. But we’d already seen the end of “The Web” with the proliferation of apps as a replacement for Internet browsing. IoT provides yet another alternative to “The Web”. I loved the Web as a free, creative place where everyone could make their own website about their cat. It had a great moment. But it’s safe to say that it isn’t what it used to be. In fifteen years it may be that most people no longer visit web sites. They just use connected devices and apps. Ending net neutrality means that the connectivity necessary for these services can be bundled in with the service itself. In the long run, that should be good for consumers and even the possibility of market entry for new firms.

In the long run, I’m not sure “The Web” is that important. Maybe it was a beautiful disruptive moment that will never happen again. Or maybe, if there were many more kinds of alternatives, “The Web” would return to being the quirky, radically free and interesting thing it was before it got so mainstream. Remember when The Web was just The Well (which is still around), and only people who were really curious about it bothered to use it? I don’t, because that was well before my time. But it’s possible that the Internet in its browse-happy form will become something like that again.

I hadn’t really thought about net neutrality very much before, to be honest. Maybe there are some good rebuttals to this argument. I’d love to hear them! But for now, I think I’m willing to give the shuttering of net neutrality a shot.

## December 14, 2017

Ph.D. student

#### Marcuse, de Beauvoir, and Badiou: reflections on three strategies

I have written in this blog about three different philosophers who articulated a vision of hope for a more free world, including in their account an understanding of the role of technology. I would like to compare these views because nuanced differences between them may be important.

First, let’s talk about Marcuse, a Frankfurt School thinker whose work was an effective expression of philosophical Marxism that catalyzed the New Left. Marcuse was, like other Frankfurt School thinkers, concerned about the role of technology in society. His proposed remedy was “the transcendent project”, which involves an attempt at advancing “the totality” through an understanding of its logic and action to transform it into something that is better, more free.

As I began to discuss here, there is a problem with this kind of Marxist aspiration for a transformation of all of society through philosophical understanding, which is this: the political and technical totality exists as it does in no small part to manage its own internal information flows. Information asymmetries and differentiation of control structures are a feature, not a bug. The convulsions caused by the Internet as it tears and repairs the social fabric have not created the conditions of unified enlightened understanding. Rather, they have exposed that given nearly boundless access to information, most people will ignore it and maintain, against all evidence to the contrary, the dignity of one who has a valid opinion.

The Internet makes a mockery of expertise, and makes no exception for the expertise necessary for the Marcusian “transcendental project”. Expertise may be replaced with the technological apparatuses of artificial intelligence and mass data collection, but the latter are a form of capital whose distribution is a part of the totality. If they are having their transcendent effect today, as the proponents of AI claim, this effect is in the hands of a very few. Their motivations are inscrutable. As they have their own opinions and courtiers, writing for them is futile. They are, properly speaking, a great uncertainty that shows that centralized control does not close down all options. It may be that the next defining moment in history is set by how Jeff Bezos decides to spend his wealth, and that is his decision alone. For “our” purposes–yours, my reader, and mine–this arbitrariness of power must be seen as part of the totality to be transcended, if that is possible.

It probably isn’t. And if it really isn’t, that may be the best argument for something like the postmodern breakdown of all epistemes. There are at least two strands of postmodern thought coming from the denial of traditional knowledge and university structure. The first is the phenomenological privileging of subjective experience. This approach has the advantage of never being embarrassed by the fact that the Internet is constantly exposing us as fools. Rather, it allows us to narcissistically and uncritically indulge in whatever bubble we find ourselves in. The alternative approach is to explicitly theorize about one’s finitude and the radical implications of it, to embrace a kind of realist skepticism or at least acknowledgement of the limitations of the human condition.

It’s this latter approach which was taken up by the existentialists in the mid-20th century. In particular, I keep returning to de Beauvoir as a hopeful voice that recognizes a role for science that is not totalizing, but nevertheless liberatory. De Beauvoir does not take aim, like Marcuse and the Frankfurt School, at societal transformation. Her concern is with individual transformation, which is, given the radical uncertainty of society, a far more tractable problem. Individual ethics are based in local effects, not grand political outcomes. The desirable local effects are personal liberation and liberation of those one comes in contact with. Science, and other activities, is a way of opening new possibilities, not limited to what is instrumental for control.

Such a view of incremental, local, individual empowerment and goodness seems naive in the face of pessimistic views of society’s corruption. Whether these be economic or sociological theories of how inequality and oppression are locked into society, and however emotionally compelling and widespread they may be in social media, it is necessary by our previous argument to remember that these views are always mere ideology, not scientific fact, because an accurate totalizing view of society is impossible given real constraints on information flow and use. Totalizing ideologies that are not rigorous in their acceptance of basic realistic points are a symptom of more complex social structure (i.e. the distribution of capitals, the reproduction of many habitus), not a definition of it.

It is consistent for a scientific attitude to deflate political ideology because this deflation is an opening of possibility against both utopian and dystopian trajectories. What’s missing is a scientific proof of this very point, comparable to a Halting Problem or Incompleteness Theorem, but for social understanding.

A last comment, comparing Badiou to de Beauvoir and Marcuse. Badiou’s theory of the Event as the moment that may be seized to effect a transformation is perhaps a synthesis of existentialist and Marxian philosophies. Badiou is still concerned with transcendence, i.e. the moment when, given one assumed structure to life or reality or psychology, one discovers an opening into a renewed life with possibilities that the old model did not allow. But (at least as far as I have read him, which is not enough) he sees the Event as something that comes from without. It cannot be predicted or anticipated within the system but is instead a kind of grace. Without breaking explicitly from professional secularism, Badiou’s work suggests that we must have faith in something outside our understanding to provide an opportunity for transcendence. This is opposed to the more muscular theories described above: Marcuse’s theory of transcendent political activism and de Beauvoir’s active individual projects are not as patient.

I am still young and strong and so prefer the existentialist position on these matters. I am politically engaged to some extent and so, as an extension of my projects of individual freedom, am in search of opportunities for political transcendence as well–a kind of Marcuse light, since politics, like science, is a field of contest that is reproduced as its games are played, and this is its structure. But life has taught me again and again to appreciate Badiou’s point as well, which is the appreciation of the unforeseen opportunity, the scientific and political anomaly.

What does this reflection conclude?

First, it acknowledges the situatedness and fragility of expertise, which deflates grand hopes for transcendent political projects. Pessimistic ideologies that characterize the totality as beyond redemption are false; indeed it is characteristic of the totality that it is incomprehensible. This is a realistic view, and transcendence must take it seriously.

Second, it acknowledges the validity of more localized liberatory projects despite the first point.

Third, it acknowledges that the unexpected event is a feature of the totality to be embraced, contrary to pessimistic ideologies. The latter, far from encouraging transcendence, are blinders that prevent the recognition of events.

Because realism requires that we not abandon core logical principles despite our empirical uncertainty, you may permit one more deduction. To the extent that actors in society pursue the de Beauvoiran strategy of engaging in local liberatory projects that affect others, the probability of a Badiousian event in the life of another increases. Solipsism is false, and so (to put it tritely) “random acts of kindness” do have their effect on the totality, in aggregate. In fact, there may be no more radical political agenda than this opening up of spaces of local freedom, which shrugs off the depression of pessimistic ideology and suppression of technical control. Which is not a new view at all. What is perhaps surprising is how easy it may be.

## December 13, 2017

Ph.D. student

#### transcending managerialism

What motivates my interest in managerialism?

It may be a bleak topic to study, but recent traffic to this post on Marcuse has reminded me of the terms to explain my intention.

For Marcuse, a purpose of scholarship is the transcendent project, whereby an earlier form of rationality and social totality are superseded by a new one that offers “a greater chance for the free development of human needs and faculties.” In order to accomplish this, it has to first “define[] the established totality in its very structure, basic tendencies, and relations”.

Managerialism, I propose, is a way of defining and articulating the established totality: the way everything in our social world (the totality) has been established. Once this is understood, it may be possible to identify a way of transcending that totality. But, the claim is, you can’t transcend what you don’t understand.

Marx had a deeply insightful analysis of capitalism and then used that to develop an idea of socialism. The subsequent century indeed saw the introduction of many socialistic ideas into the mainstream, including labor organizing and the welfare state. Now it is inadequate to consider the established totality through a traditional or orthodox Marxist lens. It doesn’t grasp how things are today.

Arguably, critiques of neoliberalism, enshrined in academic discourse since the 80’s, have the same problem. The world is different from how it was in the 80’s, and civil society has already given what it can to resist neoliberalism. So a critical perspective that uses the same tropes as those used in the 80’s is going to be part of the established totality, but not definitive of it. Hence, it will fail to live up to the demands of the transcendent project.

So we need a new theory of the totality that is adequate to the world today. It can’t look exactly like the old views.

Gilman’s theory of plutocratic insurgency is a good example of the kind of theorizing I’m talking about, but this obviously leaves a lot out. Indeed, the biggest challenge to defining the established totality is the complexity of the totality; this complexity could make the transcendent project literally impossible. But to stop there is a tremendous cop out.

Rather, what’s needed is an explicit theorization of the way societal complexity, and society’s response to it, shape the totality in systematic ways. “Complexity” can’t be used in a fuzzy way for this to work. It has to be defined in the mathematically precise ways that the institutions that manage and create this complexity think about it. That means–and this is the hardest thing for a political or social theorist to swallow–that computer science and statistics have to be included as part of the definition of totality. Which brings us back to the promise of computational social science if and when it incorporates its mathematical methodological concepts into its own vocabulary of theorization.

References

Benthall, Sebastian. “Philosophy of computational social science.” Cosmos and History: The Journal of Natural and Social Philosophy 12.2 (2016): 13-30.

Gilman, Nils. “The twin insurgency.” American Interest 15 (2014).

Marcuse, Herbert. One-dimensional man: Studies in the ideology of advanced industrial society. Routledge, 2013.

#### Notes on Clark Kerr’s “The ‘City of Intellect’ in a Century for Foxes?”, in The Uses of the University 5th Edition

I am in my seventh and absolutely, definitely last year of a doctoral program and so have many questions about the future of higher education and whether or not I will be a part of it. For insight, I have procured an e-book copy of Clark Kerr’s The Uses of the University (5th Edition, 2001). Clark Kerr was president of the University of California system and became famous among other things for his candid comments on university administration, which included such gems as

“I find that the three major administrative problems on a campus are sex for the students, athletics for the alumni and parking for the faculty.”

…and…

“One of the most distressing tasks of a university president is to pretend that the protest and outrage of each new generation of undergraduates is really fresh and meaningful. In fact, it is one of the most predictable controversies that we know. The participants go through a ritual of hackneyed complaints, almost as ancient as academe, while believing that what is said is radical and new.”

The Uses of the University is a collection of lectures on the topic of the university, most of which were given in the second half of the 20th century. The most recent edition contains a lecture given in the year 2000, after Kerr had retired from administration, but anticipating the future of the university in the 21st century. The title of the lecture is “The ‘City of Intellect’ in a Century for Foxes?”, and it is encouragingly candid and prescient.

To my surprise, Kerr approaches the lecture as a forecasting exercise. Intriguingly, Kerr employs the hedgehog/fox metaphor from Isaiah Berlin in a lecture about forecasting five years before the publication of Tetlock’s 2005 book Expert Political Judgment (review link), which used the fox/hedgehog distinction to cluster properties that were correlated with political experts’ predictive power. Kerr’s lecture is structured partly as the description of a series of future scenarios, reminiscent of scenario planning as a forecasting method. I didn’t expect any of this, and it goes to show perhaps how pervasive scenario thinking was as a 20th century rhetorical technique.

Kerr makes a number of warnings about the university in the 21st century, especially with respect to the glory of the university in the 20th century. He makes a historical case for this: universities in the 20th century thrived on new universal access to students, federal investment in universities as the sites of basic research, and general economic prosperity. He doesn’t see these guaranteed in the 21st century, though he also makes the point that in official situations, the only thing a university president should do is discuss the past with pride and the future with apprehension. He has a rather detailed analysis of the incentives guiding this rhetorical strategy as part of the lecture, which makes you wonder how much salt to take the rest of the lecture with.

What are the warnings Kerr makes? Some are a continuation of the problems universities experienced in the 20th century. Military and industrial research funding changed the roles of universities away from liberal arts education into research shops. This was not a neutral process. Undergraduate education suffered, and in 1963 Kerr predicted that this slackening of the quality of undergraduate education would lead to student protests. He was half right; students instead turned their attention externally to politics. Under these conditions, there grew to be a great tension between the “internal justice” of a university that attempted to have equality among its faculty and the permeation of external forces that made more of the professoriate face outward. A period of attempted reforms through “participatory democracy” was “a flash in the pan”, resulting mainly in “the creation of courses celebrating ethnic, racial, and gender diversities.” “This experience with academic reform illustrated how radical some professors can be when they look at the external world and how conservative when they look inwardly at themselves–a split personality”.

This turn to industrial and military funding and the shift of universities away from training in morality (theology), traditional professions (medicine, law), self-chosen intellectual interest for its own sake, and entrance into elite society towards training for the labor force (including business administration and computer science) is now quite old–at least 50 years. Among other things, Kerr predicts, this means that we will be feeling the effects of the hollowing out of the education system that happened as higher education deprioritized teaching in favor of research. The baby boomers who went through this era of vocational university education become, in Kerr’s analysis, an enormous class of retirees by 2030, putting new strain on the economy at large. Meanwhile, without naming computers and the Internet, Kerr acknowledged that the “electronic revolution” is the first major change to affect universities for three hundred years, and could radically alter their role in society. He speaks highly of Peter Drucker, who in 1997 was already calling the university “a failure” that would be made obsolete by long-distance learning.

An intriguing comment on aging baby boomers, which Kerr discusses under the heading “The Methuselah Scenario”, is that the political contest between retirees and new workers will break down partly along racial lines: “Nasty warfare may take place between the old and the young, parents and children, retired Anglos and labor force minorities.” Almost twenty years later, this line makes me wonder how much current racial tensions are connected to age and aging. Have we seen the baby boomer retirees rise as a political class to vigorously defend the welfare state from plutocratic sabotage? Will we?

Kerr discusses the scenario of the ‘disintegration of the integrated university’. The old model of medicine, agriculture, and law integrated into one system is coming apart as external forces become controlling factors within the university. Kerr sees this in part as a source of ethical crises for universities.

“Integration into the external world inevitably leads to disintegration of the university internally. What are perceived by some as the injustices in the external labor market penetrate the system of economic rewards on campus, replacing policies of internal justice. Commitments to external interests lead to internal conflicts over the impartiality of the search for truth. Ideologies conflict. Friendships and loyalties flow increasingly outward. Spouses, who once held the academic community together as a social unit, now have their own jobs. “Alma Mater Dear” to whom we “sing a joyful chorus” becomes an almost laughable idea.”

A factor in this disintegration is globalization, which Kerr identifies with the mobility of those professors who are most able to get external funding. These professors have increased bargaining power and can use “the banner of departmental autonomy” to fight among themselves for industrial contracts. Without oversight mechanisms, “the university is helpless in the face of the combined onslaught of aggressive industry and entrepreneurial faculty members”.

Perhaps most fascinating for me, because it resonates with some of my more esoteric passions, is Kerr’s section on “The fractionalization of the academic guild”. Subject matter interest breaks knowledge into tiny disconnected topics–"Once upon a time, the entire academic enterprise originated in and remained connected to philosophy." The tension between "internal justice" and the "injustices of the external labor market" creates a conflict over monetary rewards. Poignantly, "fractionalization also increases over differing convictions about social justice, over whether it should be defined as equality of opportunity or equality of results, the latter often taking the form of equality of representation. This may turn out to be the penultimate ideological battle on campus."

And then:

“The ultimate conflict may occur over models of the university itself, whether to support the traditional or the “postmodern” model. The traditional model is based on the enlightenment of the eighteenth century–rationality, scientific processes of thought, the search for truth, objectivity, “knowledge for its own sake and for its practical applications.” And the traditional university, to quote the Berkeley philosopher John Searle, “attempts to be apolitical or at least politically neutral.” The university of postmodernism thinks that all discourse is political anyway, and it seeks to use the university for beneficial rather than repressive political ends… The postmodernists are attempting to challenge certain assumptions about the nature of truth, objectivity, rationality, reality, and intellectual quality.”

“… Any further politicization of the university will, of course, alienate much of the public at large. While most acknowledge that the traditional university was partially politicized already, postmodernism will further raise questions of whether the critical function of the university is based on political orientation rather than on nonpolitical scientific analysis.”

I could go on endlessly about this topic; I’ll try to be brief. First, as per Lyotard’s early analysis of the term, postmodernism is as much a result of the permeation of the university by industrial interests as anything else. Second, we are seeing, right now today in Congress and on the news etc., the eroded trust that a large portion of the public has in university “expertise”, as they assume (having perhaps internalized a reductivist version of the postmodern message despite or maybe because they were being taught by teaching assistants instead of professors) that the professoriate is politically biased. And now the students are in revolt over Free Speech again as a result.

Kerr entertains for a paragraph the possibility of a Hobbesian doomsday free-for-all over the university before considering more mundane possibilities such as a continuation of the status quo. Adapting to new telecommunications (including “virtual universities”), new amazing discoveries in biological sciences, and higher education as a step in mid-career advancement are all in Kerr’s more pragmatic view of the future. The permeability of the university can bring good as well as bad as it is influenced by traffic back and forth across its borders. “The drawbridge is now down. Who and what shall cross over it?”

Kerr counts five major wildcards determining the future of the university. The first is overall economic productivity; the second is fluctuations in the returns to a higher education. The third is the United States’ role in the global economy “as other nations or unions of nations (for example, the EU) may catch up with and even surpass it. The quality of education and training for all citizens will be [central] to this contest. The American university may no longer be supreme.” Fourth, student unrest turning universities into the “independent critic”. And fifth, the battles within the professoriate, “over academic merit versus social justice in treatment of students, over internal justice in the professional reward system versus the pressures of external markets, over the better model for the university–modern or post-modern.”

He concludes with three wishes for the open-minded, cunning, savvy administrator of the future, the “fox”:

1. Careful study of new information technologies and their role.
2. “An open, in-depth debate…between the proponents of the traditional and the postmodern university instead of the sniper shots of guerilla warfare…”
3. An “in-depth discussion…about the ethical systems of the future university”. “Now the ethical problems are found more in the flow of contacts between the academic and the external worlds. There have never been so many ethical problems swirling about as today.”

## December 12, 2017

Ph.D. student

#### Re: a personal mission statement

Awesome. I hadn't considered a personal "mission statement" before now, even though I often consider and appreciate organizational mission statements. However, I do keep a yearly plan, including my personal goals.

Doty Plan 2017: https://npdoty.name/plan
Doty Plan 2016: https://npdoty.name/plan2016.html

I like that your categories let you provide a little more text than my bare-bones list of goals/areas/actions. I especially like the descriptions of role and mission; I feel like I both understand you more and I find those inspiring. That said, it also feels like a lot! Providing a coherent set of beliefs, values and strategies seems like more than I would be comfortable committing to. Is that what you want?

The other difference in my practice that I have found useful is the occasional updates: what is started, what is on track and what is at risk. Would it be useful for you to check in with yourself from time to time? I suppose I picked up that habit from Microsoft's project management practices, but despite its corporate origins, it helps me see where I'm doing well and where I need to re-focus or pick a new approach.

Cheers,
Nick

BCC my public blog, because I suppose these are documents that I could try to share with a wider group.

Ph.D. student

#### Contextual Integrity as a field

There was a nice small gathering of nearby researchers (and one important call-in) working on Contextual Integrity at Princeton’s CITP today. It was a nice opportunity to share what we’ve been working on and make plans for the future.

There was a really nice range of different contributions: systems engineering for privacy policy enforcement, empirical survey work testing contextualized privacy expectations, a proposal for a participatory design approach to identifying privacy norms in marginalized communities, a qualitative study on how children understand privacy, and an analysis of the privacy implications of the Cybersecurity Information Sharing Act, among other work.

What was great is that everybody was on the same page about what we were after: getting a better understanding of what privacy really is, so that we can design policies, educational tools, and technologies that preserve it. For one reason or another, the people in the room had been attracted to Contextual Integrity. Many of us have reservations about the theory in one way or another, but we all see its value and potential.

One note of consensus was that we should try to organize a workshop dedicated specifically to Contextual Integrity, widening what we accomplished today by bringing in more researchers. Today’s meeting was a convenience sample, leaving out a lot of important perspectives.

Another interesting thing that happened today was a general acknowledgment that Contextual Integrity is not a static framework. As a theory, it is subject to change as scholars critique and contribute to it through their empirical and theoretical work. A few of us are excited about the possibility of a Contextual Integrity 2.0, extending the original theory to fill theoretical gaps that have been identified in it.

I’d articulate the aspiration of the meeting today as being about letting Contextual Integrity grow from being a framework into a field–a community of people working together to cultivate something, in this case, a kind of knowledge.

## December 10, 2017

Ph.D. student

#### Appearance, deed, and thing: meta-theory of the politics of technology

Flammarion engraving

Much is written today about the political and social consequences of technology. This writing often maintains that this inquiry into politics and society is distinct from the scientific understanding that informs the technology itself. This essay argues that this distinction is an error. Truly, there is only one science of technology and its politics.

#### Appearance, deed, and thing

There are worthwhile distinctions made between how our experience of the world feels to us directly (appearance), how we can best act strategically in the world (deed), and how the world is “in itself” or, in a sense, despite ourselves (individually) (thing).

##### Appearance

The world as we experience it has been given the name “phenomenon” (late Latin from Greek phainomenon ‘thing appearing to view’), and so “phenomenology” is the study of what we colloquially call today our “lived experience”. Some anthropological methods are a kind of social phenomenology, and some scholars will deny that there is anything beyond phenomenology. Those who claim to have a more effective strategy or a truer picture of the world may wield rhetorical power–power that works on the lived experience of more oppressed people because it has not been adequately debunked and shown to be situated, relativized. The solution to social and political problems, to these scholars, is more phenomenology.*

##### Deed

There are others that see things differently. A perhaps more normal attitude is that the outcomes of one’s actions are more important than how the world feels. Things can feel one way now and another way tomorrow; does it much matter? If one holds beliefs that don’t work when practically applied, one can correct oneself. The name for this philosophical attitude is pragmatism (from Greek pragma, ‘deed’). There are many people, including some scholars, who find this approach entirely sufficient. The solution to social and political problems is more pragmatism. Sometimes this involves writing off impractical ideas and the people who hold them as either useless or mere pawns. It is their loss.

##### Thing

There are others that see things still differently. A perhaps diminishing portion of the population holds theories of how the world works that transcend both their own lived experience and individual practical applications. Scientific theories about the physical nature of the universe, though tested pragmatically and through the phenomena apparent to the scientists, are based in a higher claim about their value. As Bourdieu (2004) argues, the whole field of science depends on the accepted condition that scientists fairly contend for a “monopoly on the arbitration of the real”. Scientific theories are tested through contest, with a deliberate effort by all parties to prove their theory to be the greatest. These conditions of contest hold science to a more demanding standard than pragmatism, as results of applying a pragmatic attitude will depend on the local conditions of action. Scientific theories are, in principle, accountable to the real (from late Latin realis, from Latin res ‘thing’); these scientists may be called ‘realists’ in general, though there are many flavors of realism as, appropriately, theories of what is real and how to discover reality have come and gone (see post-positivism and critical realism, for example).

Realists may or may not be concerned with social and political problems. Realists may ask: What is a social problem? What do solutions to these problems look like?

By this account, these three foci and their corresponding methodological approaches are not equivalent to each other. Phenomenology concerns itself with documenting the multiplicity of appearances. Pragmatism introduces something over and above this: a sorting or evaluation of appearances based on some goals or desired outcomes. Realism introduces something over and above pragmatism: an attempt at objectivity based on the contest of different theories across a wide range of goals. ‘Disinterested’ inquiry, or equivalently inquiry that is maximally inclusive of all interests, further refines the evaluation of which appearances are valid.

If this account sounds disparaging of phenomenology as merely a part of higher and more advanced forms of inquiry, that is truly how it is intended. However, it is equally notable that to live up to its own standard of disinterestedness, realism must include phenomenology fully within itself.

#### Nature and technology

It would be delightful if we could live forever in a world of appearances that takes the shape that we desire of it when we reason about it critically enough. But this is not how any but the luckiest live.

Rather, the world acts on us in ways that we do not anticipate. Things appear to us unbidden; they are born, and sometimes this is called ‘nature’ (from Latin natura ‘birth, nature, quality,’ from nat- ‘born’). The first snow of Winter comes as a surprise after a long warm Autumn. We did nothing to summon it; it was always there. For thousands of years humanity has worked to master nature through pragmatic deeds and realistic science. Now very little of nature remains untouched by human hands. The stars are still things in themselves. Our planetary world is one we have made.

“Technology” (from Greek tekhnologia ‘systematic treatment,’ from tekhnē ‘art, craft’) is what we call those things that are made by skillful human deed. A glance out the window into a city, or at the device one uses to read this blog post, is all one needs to confirm that the world is full of technology. Sitting in the interior of an apartment now, literally everything in my field of vision except perhaps my own two hands and the potted plant are technological artifacts.

#### Science and technology studies: political appearances

According to one narrative, Winner (1980) famously asked the galling question “Do artifacts have politics?” and spawned a field of study** that questions the social consequences of technology. Science and Technology Studies (STS) is, purportedly, this field.

The insight this field claims as its own is that technology has social impacts that are politically interesting, that the specifics of a technology’s design determine these impacts, and that the social context of the design therefore influences the consequences of the technology. At its most ambitious, STS attempts to take the specifics of the technology out of the explanatory loop, showing instead how politics drives design and implementation to further political ends.

Anthropological methods are popular among STS scholars, who often commit themselves to revealing appearances that demonstrate the political origins and impacts of technology. The STS researcher might ask, rhetorically, “Did you know that this interactive console is designed and used for surveillance?”

We can nod sagely at these observations. Indeed, things appear to people in myriad ways, and critical analysis of those appearances does expose that there is a multiplicity of ways of looking at things. But what does one do with this picture?

#### The pragmatic turn back to realism

When one starts to ask the pragmatic question “What is to be done?”, one leaves the domain of mere appearances and begins to question the consequences of one’s deeds. This leads one to take actions and observe the unanticipated results. Suddenly, one is engaging in experimentation, and new kinds of knowledge are necessary. One needs to study organizational theory to understand the role of technology within a firm, and economics to understand how it interacts with the economy. One quickly leaves the field of study known as “science and technology studies” as soon as one begins to consider one’s practical effects.

Worse (!), the pragmatist quickly discovers that determining the impact of one’s deeds requires an analysis of probabilities and the difficult techniques of sampling data and correcting for bias. These techniques have been proven through the vigorous contest of the realists, and the pragmatist discovers that many tools–technologies–have been invented and provisioned for them to make it easier to use these robust strategies. The pragmatist begins to use, without understanding them, all the fruits of science. Their successes are alienated from their narrow lived experience, which is not enough to account for the miracles the world–one others have invented for them–performs for them every day.

The pragmatist must draw the following conclusions. The world is full of technology; it is constituted by it. The world is also full of politics. Indeed, the world is both politics and technology; politics is a technology; technology is a form of politics. The world that must be mastered, for pragmatic purposes, is this politico-technical*** world.

What is technical about the world is that it is a world of things created through deed. These things manifest themselves in appearances in myriad and often unpredictable ways.

What is political about the world is that it is a contest of interests. To the most naive student, it may be a shock that technology is part of this contest of interests, but truly this is the most extreme naivete. What adolescent is not exposed to some form of arms race, whether it be in sports equipment, cosmetics, transportation, recreation, etc.? What adult does not encounter the reality of technology’s role in their own business or home, and the choice of what to procure and use?

The pragmatist must be struck by the sheer obviousness of the observation that artifacts “have” politics, though they must also acknowledge that “things” are different from the deeds that create them and the appearances they create. There are, after all, many mistakes in design. The effects of technology may as often be due to incompetence as they are to political intent. And to determine the difference, one must contest the designer of the technology on their own terms, in the engineering discourse that has attempted to prove which qualities of a thing survive scrutiny across all interests. The pragmatist engaging the politico-technical world has to ask: “What is real?”

#### The real thing

“What is real?” This is the scientific question. It has been asked again and again for thousands of years for reasons not unlike those traced in this essay. The scientific struggle is the political struggle for mastery over our own politico-technical world, over the reality that is being constantly reinvented as things through human deeds.

There are no short cuts to answering this question. There are only many ways to cop out. These steps take one backward into striving for one’s local interest or, further, into mere appearance, with its potential for indulgence and delusion. This is the darkness of ignorance. Forward, far ahead, is a horizon, an opening, a strange new light.

* This narrow view of the ‘privilege of subjectivity’ is perhaps a cause of recent confusion over free speech on college campuses. Realism, as proposed in this essay, is a possible alternative to that.

** It has been claimed that this field of study does not exist, much to the annoyance of those working within it.

*** I believe this term is no uglier than the now commonly used “sociotechnical”.

References

Bourdieu, Pierre. Science of science and reflexivity. Polity, 2004.

Winner, Langdon. “Do artifacts have politics?.” Daedalus (1980): 121-136.

## December 08, 2017

Ph.D. student

#### managerialism, continued

I’ve begun preliminary skimmings of Enteman’s Managerialism. It is a dense work of analytic philosophy, thick with argument. Sporadic summaries may not do it justice. That said, the principle of this blog is that the bar for ‘publication’ is low.

According to its introduction, Enteman’s Managerialism is written by a philosophy professor (Willard Enteman) who kept finding that the “great thinkers”–Adam Smith, Karl Marx–and the theories espoused in their writing kept getting debunked by his students. Contemporary examples showed that, contrary to conventional wisdom, the United States was not a capitalist country whose only alternative was socialism. In his observation, the United States in 1993 was neither strictly speaking capitalist, nor was it socialist. There was a theoretical gap that needed to be filled.

One of the concepts reintroduced by Enteman is Robert Dahl’s concept of polyarchy, or “rule by many”. A polyarchy is neither a dictatorship nor a democracy, but rather a form of government in which many different people with different interests–though probably not everybody–are in charge. It represents some necessary but probably insufficient conditions for democracy.

This view of power seems evidently correct in most political units within the United States. Now I am wondering if I should be reading Dahl instead of Enteman. It appears that Dahl was mainly offering this political theory in contrast to a view that posited that political power was mainly held by a single dominant elite. In a polyarchy, power is held by many different kinds of elites in contest with each other. At its democratic best, these elites are responsive to citizen interests in a pluralistic way, and this works out despite the inability of most people to participate in government.

I certainly recommend the Wikipedia articles linked above. I find I’m sympathetic to this view, having come around to something like it myself but through the perhaps unlikely path of Bourdieu.

This still limits the discussion of political power in terms of the powers of particular people. Managerialism, if I’m reading it right, makes the case that individual power is not atomic but is due to organizational power. This makes sense; we can look at powerful individuals having an influence on government, but a more useful lens could look to powerful companies and civil society organizations, because these shape the incentives of the powerful people within them.

I should make a shift I’ve made just now explicit. When we talk about democracy, we are often talking about a formal government, like a sovereign nation or municipal government. But when we talk about powerful organizations in society, we are no longer just talking about elected officials and their appointees. We are talking about several different classes of organizations–businesses, civil society organizations, and governments among them–interacting with each other.

It may be that that’s all there is to it. Maybe Capitalism is an ideology that argues for more power to businesses, Socialism is an ideology that argues for more power to formal government, and Democracy is an ideology that argues for more power to civil society institutions. These are zero-sum ideologies. Managerialism would be a theory that acknowledges the tussle between these sectors at the organizational level, as opposed to at the atomic individual level.

The reason why this is a relevant perspective to engage with today is that there has probably in recent years been a transfer of power (I might say ‘control’) from government to corporations–especially Big Tech (Google, Amazon, Facebook, Apple). Frank Pasquale makes the argument for this in a recent piece. He writes and speaks with a particular policy agenda that is far better researched than this blog post. But a good deal of the work is framed around the surprise that ‘governance’ might shift to a private company in the first place. This framing will always be striking to those who are invested in the politics of the state; the very word “govern” is used unmarkedly for formal government and becomes surprising when used to refer to anything else.

Managerialism, then, may be a way of pointing to an option where more power is held by non-state actors. Crucially, though, managerialism is not the same thing as neoliberalism, because neoliberalism is based on laissez-faire market ideology and contemporary information infrastructure oligopolies look nothing like laissez-faire markets! Calling the transfer of power from government to corporation today neoliberalism is quite anachronistic and misleading, really!

Perhaps managerialism, like polyarchy, is a descriptive term of a set of political conditions that does not represent an ideal, but a reality with potential to become an ideal. In that case, it’s worth investigating managerialism more carefully and determining what it is and isn’t, and why it is on the rise.

## December 06, 2017

Ph.D. student

#### beginning Enteman’s Managerialism

I’ve been writing about managerialism without having done my homework.

Today I got a new book in the mail, Willard Enteman’s Managerialism: The Emergence of a New Ideology, a work of analytic political philosophy that came out in 1993. The gist of the book is that none of the dominant world ideologies of the time–capitalism, socialism, and democracy–actually describe the world as it functions.

Enter Enteman’s managerialism, which considers a society composed of organizations, not individuals, and social decisions as a consequence of the decisions of organizational managers.

It’s striking that this political theory has been around for so long, though it is perhaps more relevant today because of large digital platforms.

Ph.D. student

#### Assembling Critical Practices Reading List Posted

At the Berkeley School of Information, a group of researchers interested in the areas of critically-oriented design practices, critical social theory, and STS have hosted a reading group called “Assembling Critical Practices,” bringing together literature from these fields, in part to track their historical continuities and discontinuities, as well as to see new opportunities for design and research when putting them in conversation together.

I’ve posted our reading list from our first iterations of this group. Sections 1-3 focus on critically-oriented HCI, early critiques of AI, and an introduction to critical theory through the Frankfurt School. This list comes from an I School reading group put together in collaboration with Anne Jonas and Jenna Burrell.

Section 4 covers a broader range of social theories. This comes from a reading group sponsored by the Berkeley Social Science Matrix organized by myself and Anne Jonas with topic contributions from Nick Merrill, Noura Howell, Anna Lauren Hoffman, Paul Duguid, and Morgan Ames (Feedback and suggestions are welcome! Send an email to richmond@ischool.berkeley.edu).

## December 02, 2017

Ph.D. student

#### How to promote employees using machine learning without societal bias

Though it may at first read as being callous, a managerialist stance on inequality in statistical classification can help untangle some of the rhetoric around this tricky issue.

Consider the example that’s been in the news lately:

Suppose a company begins to use an algorithm to make decisions about which employees to promote. It uses a classifier trained on past data about who has been promoted. Because of societal bias, women are systematically under-promoted; this is reflected in the data set. The algorithm, naively trained on the historical data, reproduces the historical bias.
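The mechanism can be sketched in a few lines of code. Everything here is synthetic and hypothetical: the merit scores, the biased promotion bar, and the naive per-cell classifier are my constructions for illustration, not anything from a real company. The point is only that a model trained to match history will reproduce history’s gap.

```python
import random
from collections import defaultdict

random.seed(0)

# Hypothetical historical data: each record is (score, is_woman, promoted).
# By construction, equally qualified women faced a higher promotion bar --
# the societal bias described above.
def make_history(n=10000):
    data = []
    for _ in range(n):
        score = random.random()           # "true" merit in [0, 1)
        is_woman = random.random() < 0.5
        bar = 0.65 if is_woman else 0.5   # the biased bar
        data.append((score, is_woman, score > bar))
    return data

# A naive classifier: for each (score bin, gender) cell, predict the
# majority label observed in the historical data.
def fit_naive(data):
    counts = defaultdict(lambda: [0, 0])
    for score, is_woman, promoted in data:
        counts[(round(score, 1), is_woman)][promoted] += 1
    return {cell: yes > no for cell, (no, yes) in counts.items()}

def predicted_rate(model, data, gender):
    preds = [model.get((round(s, 1), w), False)
             for s, w, _ in data if w == gender]
    return sum(preds) / len(preds)

history = make_history()
model = fit_naive(history)
# Trained only to match history, the model reproduces the gap:
print(predicted_rate(model, history, gender=False))  # men
print(predicted_rate(model, history, gender=True))   # women: lower
```

Nothing in the fitting procedure "knows" about the bias; it simply learns the biased labels as ground truth.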

This example describes a bad situation. It is bad from a social justice perspective; by assumption, it would be better if men and women had equal opportunity in this work place.

It is also bad from a managerialist perspective. Why? Because if the algorithm does not correct for the societal biases that introduce irrelevancies into the promotion decision, then it makes no managerial sense to change business practices over to using it. The whole point of using an algorithm is to improve on human decision-making. This is a poor match of an algorithm to a problem.

Unfortunately, what makes this example compelling is precisely what makes it a bad example of using an algorithm in this context. The only variables discussed in the example are the socially salient ones thick with political implications: gender, and promotion. What are more universal concerns than gender relations and socioeconomic status?!

But from a managerialist perspective, promotions should be issued based on a number of factors not mentioned in the example. What factors are these? That’s a great and difficult question. Promotions can reward hard work and loyalty. They can also be issued to those who demonstrate capacity for leadership, which can be a function of how well they get along with other members of the organization. There may be a number of features that predict these desirable qualities, most of which will have to do with working conditions within the company as opposed to qualities inherent in the employee (such as their past education, or their gender).

If one were to start to use machine learning intelligently to solve this problem, then one would go about solving it in a way entirely unlike the procedure in the problematic example. One would rather draw on soundly sourced domain expertise to develop a model of the relationship between relevant, work-related factors. For many of the key parts of the model, such as general relationships between personality type, leadership style, and cooperation with colleagues, one would look outside the organization for gold standard data that was sampled responsibly.

Once the organization has this model, then it can apply it to its own employees. For this to work, employees would need to provide significant detail about themselves, and the company would need to provide contextual information about the conditions under which employees work, as these may be confounding factors.

Part of the merit of building and fitting such a model would be that, because it is based on a lot of new and objective scientific considerations, it would produce novel results in recommending promotions. Again, if the algorithm merely reproduced past results, it would not be worth the investment in building the model.

When the algorithm is introduced, it is ideally used in a way that maintains traditional promotion processes in parallel so that the two kinds of results can be compared. Evaluation of the algorithm’s performance, relative to traditional methods, is a long, arduous process full of potential insights. Using the algorithm as an intervention at first allows the company to develop a causal understanding of its impact. Insights from the evaluation can be factored back into the algorithm, improving the latter.
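A parallel run of the two processes might look something like the following minimal harness. All names, thresholds, and decision rules here are hypothetical stand-ins; the point is the comparison structure, where disagreements between the legacy process and the model are surfaced for closer qualitative review.

```python
import random

random.seed(1)

# Legacy committee judgment: merit score plus some human noise.
def legacy_decision(score):
    return score + random.gauss(0, 0.1) > 0.7

# The new model's recommendation (a placeholder rule).
def model_decision(score):
    return score > 0.65

# Run both processes side by side on the same (synthetic) employees.
employees = [{"id": i, "score": random.random()} for i in range(500)]
for e in employees:
    e["legacy"] = legacy_decision(e["score"])
    e["model"] = model_decision(e["score"])

agreement = sum(e["legacy"] == e["model"] for e in employees) / len(employees)
disagreements = [e["id"] for e in employees if e["legacy"] != e["model"]]

print(round(agreement, 2))   # how often the two processes agree
print(len(disagreements))    # cases worth a closer, qualitative look
```

The disagreement set is where the "potential insights" live: each case is either the model correcting a human bias or the model making an error, and distinguishing the two is the evaluation work.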

In all these cases, the company must keep its business goals firmly in mind. If it does this, then the rest of the logic of the method falls out of data science best practices, which are grounded in mathematical principles of statistics. While the political implications of poorly managed machine learning are troubling, effective management of machine learning, which takes the precautions necessary to develop objectivity, is ultimately a corrective to social bias. This is a case where sound science, managerialist motives, and social justice are aligned.

#### Enlightening economics reads

Nils Gilman argues that the future of the world is wide open because neoliberalism has been discredited. So what’s the future going to look like?

Given that neoliberalism is for the most part an economic vision, and that competing theories have often also been economic visions (when they have not been political or theological theories), a compelling futurist approach is to look out for new thinking about economics. The three articles below have recently taught me something new about economics:

Dani Rodrik. “Rescuing Economics from Neoliberalism”, Boston Review. (link)

This article makes the case that the association frequently made between economics as a social science and neoliberalism as an ideology is overdrawn. Of course, probably the majority of economists are not neoliberals. Rodrik is defending a view of economics that keeps its options open. I think he overstates the point with the claim, “Good economists know that the correct answer to any question in economics is: it depends.” This is simply incorrect, if questions have their assumptions bracketed well enough. But since Rodrik’s rhetorical point appears to be that economists should not be dogmatists, he can be forgiven this overstatement.

As an aside, there is something compelling but also dangerous to the view that a social science can provide at best narrowly tailored insights into specific phenomena. These kinds of ‘sciences’ wind up being unaccountable, because the specificity of particular events prevent the repeated testing of the theories that are used to explain them. There is a risk of too much nuance, which is akin to the statistical concept of overfitting.

A different kind of article is:

Seth Ackerman. “The Disruptors” Jacobin. (link)

An interview with J.W. Mason in the smart socialist magazine Jacobin, which had the honor of a shout-out from Matt Levine’s popular “Money Talk” Bloomberg column (column?). One of the interesting topics it raises is whether mutual funds, in which many people invest in a fund that then owns a wide portfolio of stocks, are in a sense socialist and anti-competitive, because shareholders no longer have an interest in seeing competition in the market.

This is original thinking, and the endorsement by Levine is an indication that it’s not a crazy thing to consider even for the seasoned practical economists in the financial sector. My hunch at this point in life is that if you want to understand the economy, you have to understand finance, because they are the ones whose job it is to profit from their understanding of the economy. As a corollary, I don’t really understand the economy because I don’t have a great grasp of the financial sector. Maybe one day that will change.

Speaking of expertise being enhanced by having ‘skin in the game’, the third article is:

Nassim Nicholas Taleb. “Inequality and Skin in the Game,” Medium. (link)

I haven’t read a lot of Taleb, though I acknowledge he’s a noteworthy and important thinker. This article confirmed for me the reputation of his style. It was also a strikingly fresh look at the economics of inequality, capturing a few of the important things mainstream opinion overlooks about inequality, namely:

• Comparing people at different life stages is a mistake when analyzing inequality of a population.
• A lot of the cause of inequality is randomness (as opposed to fixed population categories), and this inequality is inevitable.
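The second point can be illustrated with a toy simulation (my construction, not Taleb’s): give everyone identical starting wealth and identical rules, apply purely random multiplicative shocks, and substantial inequality emerges anyway, with no fixed population categories involved.

```python
import random

random.seed(42)

# Everyone starts equal; each year, each person's wealth is multiplied by a
# random shock drawn from the same distribution for all.
def simulate(n_people=1000, n_years=40):
    wealth = [1.0] * n_people
    for _ in range(n_years):
        wealth = [w * random.choice([0.8, 1.3]) for w in wealth]
    return wealth

# Gini coefficient: 0 means perfect equality, values near 1 mean extreme
# concentration.
def gini(xs):
    xs = sorted(xs)
    n = len(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * sum(xs)) - (n + 1) / n

wealth = simulate()
print(round(gini(wealth), 2))  # well above zero despite identical rules
```

Because the shocks compound multiplicatively, wealth becomes roughly lognormal, which is also a way into the ergodicity point: the time-average experience of one person differs from the ensemble average across people.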

He’s got a theory of what kinds of inequality people resent versus what they tolerate, which is a fine theory. It would be nice to see some empirical validation of it. He writes about the relationship between ergodicity and inequality, which is interesting. He is scornful of Piketty and everyone who was impressed by Piketty’s argument, which comes off as unfriendly.

Much of what Taleb writes about the need to understand the economy through a richer understanding of probability and statistics strikes me as correct. If it is indeed the case that mainstream economics has not caught up to this, there is an opportunity here!

## November 28, 2017

Ph.D. student

#### mathematical discourse vs. exit; blockchain applications

Continuing my effort to tie together the work on this blog into a single theory, I should address the theme of an old post that I’d forgotten about.

The post discusses the discourse theory of law, attributed to the later, matured Habermas. According to it, the law serves as a transmission belt between legitimate norms established by civil society and a system of power, money, and technology. When this transmission is efficacious and legitimate, society prospers. The blog post toys with the idea of normatively aligned algorithmic law established in a similar way: through the norms established by civil society.

I wrote about this in 2014 and I’m surprised to find myself revisiting these themes in my work today on privacy by design.

What this requires, however, is that civil society must be able to engage in mathematical discourse, or mathematized discussion of norms. In other words, there has to be an intersection of civil society and science for this to make sense. I’m reminded of how inspired I’ve felt by Nick Doty’s work on multistakeholderism in Internet standards as a model.

I am more skeptical of this model than I have been before, if only because in the short term I’m unsure if a critical mass of scientific talent can engage with civil society well enough to change the law. This is because scientific talent is a form of capital which has no clear incentive for self-regulation. Relatedly, I’m no longer as confident that civil society carries enough clout to change policy. I must consider other options.

The other option, besides voicing one’s concerns in civil society, is, of course, exit, in Hirschman’s sense. Theoretically an autonomous algorithmic law could be designed such that it encourages exit from other systems into itself. Or, more ecologically, competing autonomous (or decentralized, …) systems can be regulated by an exit mechanism. This is in fact what happens now with blockchain technology and cryptocurrency. Whenever there is a major failure of one of these currencies, there is a fork.

## November 27, 2017

Ph.D. student

#### Re: Tear down the new institutions

Hiya Ben,

And with enough social insight, you can build community standards into decentralized software.
https://words.werd.io

Yes! I might add, though, that community standards don't need to be enacted entirely in the source code, although code could certainly help. I was in New York earlier this month talking with Cornell Tech folks (for example, Helen Nissenbaum, a philosopher) about exactly this thing: there are "handoffs" between human and technical mechanisms to support values in sociotechnical systems.

What makes federated social networking like Mastodon most interesting to me is that different smaller communities can interoperate while also maintaining their own community standards. Rather than every user having to maintain massive blocklists or trying alone to encourage better behavior in their social network, we can support admins and moderators, self-organize into the communities we prefer and have some investment in, and still basically talk with everyone we want to.

As I understand it, one place to have this design conversation is the Social Web Incubator Community Group (SocialCG), which you can find on W3C IRC (#social) and Github (but no mailing list!), and we talked about harassment challenges at a small face-to-face Social Web meeting at TPAC a few weeks back. Or I'm @npd@octodon.social; there is a special value (in a Kelty recursive publics kind of way) in using a communication system to discuss its subsequent design decisions. I think, as you note, that working on mitigations for harassment and abuse (whether it's dogpiling or fake news distribution) in the fediverse is an urgent and important need.

In a way, then, I guess I'm looking to the creation of new institutions, rather than their dismantling. Or, as cwebber put it:

I'm not very interested in how to tear systems down nearly as much as what structure to replace them with (and how you realistically think we'll get there)
@cwebber@octodon.social

While I agree that the outsize power of large social networking platforms can be harmful even as it seemed to disrupt old gatekeepers, I do want to create new institutions, institutions that reflect our values and involve widespread participation from often underserved groups. The utopia that "everything would be free" doesn't really work for autonomy, free expression and democracy; rather, we need to build the system we really want. We need institutions both in the sense of valued patterns of behavior and in the sense of community organizations.

If you're interested in helping or have suggestions of people that are, do let me know.
Cheers,
Nick

## November 26, 2017

MIMS 2012

#### My Talk at Lean Kanban Central Europe 2017

On a chilly fall day a few weeks back, I gave a talk at the cozy Lean Kanban Central Europe in Hamburg, Germany. I was honored to be invited to give a reprise of the talk I gave with Keith earlier this year at Lean Kanban North America.

I spoke about Optimizely’s software development process, and how we’ve used ideas from Lean Kanban and ESP (Enterprise Service Planning) to help us ship faster, with higher quality, to better meet customer needs. Overall it went well, but I had too much content and rushed at the end. If I do this talk again, I would cut some slides and make the presentation more focused and concise. Watch the talk below.

## Epilogue

One of the cool things this conference does is give the audience green, yellow, and red index cards they can use to give feedback to the speakers. Green indicates you liked the talk, red means you didn’t like it, and yellow is neutral.

I got just one red card, with the comment, “topic title not accurate (this is not ESP?!).” In retrospect, I realized this person is correct — my talk really doesn’t cover ESP much. I touch on it, but that was what Keith covered. Since he dropped out, I mostly cut those sections of the presentation since I can’t speak as confidently about them. If I did this talk solo again, I would probably change the title. So thank you, anonymous commenter 🙏

I also got two positive comments on green cards:

Thanks for sharing. Some useful insights + good to see it used in industry. - Thanks.

And:

Thank you! Great examples, (maybe less slides next time?) but this was inspiring

I also got some good tweets, like this and this.

Ph.D. student

#### Recap

Sometimes traffic on this blog draws attention to an old post from years ago. This can be a reminder that I’ve been repeating myself, encountering the same themes over and over again. This is not necessarily a bad thing, because I hope to one day compile the ideas from this blog into a book. It’s nice to see what points keep resurfacing.

One of these points is that liberalism assumes equality, but this is challenged by society’s need for control structures, which create inequality, which then undermines liberalism. This post calls in Charles Taylor (writing about Hegel!) to make the point. This post makes the point more succinctly. I’ve been drawing on Beniger for the ‘society needs control to manage its own integration’ thesis. I’ve pointed to the term managerialism as referring to an alternative to liberalism based on the acknowledgement of this need for control structures. Managerialism looks a lot like liberalism, it turns out, but it justifies things on different grounds and does not get so confused. As an alternative, more Bourdieusian view of the problem, I consider the relationship between capital, democracy, and oligarchy here. There are some useful names for what happens when managerialism goes wrong and people seem disconnected from each other–anomie–or from the control structures–alienation.

A related point I’ve made repeatedly is the tension between procedural legitimacy and getting people the substantive results that they want. That post about Hegel goes into this. But it comes up again in very recent work on antidiscrimination law and machine learning. What this amounts to is that attempts to come up with a fair, legitimate procedure are going to divide up the “pie” of resources, or be perceived to divide up the pie of resources, somehow, and people are going to be upset about it, however the pie is sliced.

A related theme that comes up frequently is mathematics. My contention is that effective control is a technical accomplishment that is mathematically optimized and constrained. There are mathematical results that reveal necessary trade-offs between values. Data science has been misunderstood as positivism when in fact it is a means of power. Technical knowledge and technology are forms of capital (Bourdieu again). Perhaps precisely because it is a rare form of capital, science is politically distrusted.

To put it succinctly: lack of mathematics education, due to lack of opportunity or mathophobia, leads to alienation and anomie in an economy of control. This is partly reflected in the chaotic disciplinarity of the social sciences, especially as they react to computational social science, at the intersection of social sciences, statistics, and computer science.

Lest this all seem like an argument for the mathematical certitude of totalitarianism, I have elsewhere considered and rejected this possibility of ‘instrumentality run amok’. I’ve summarized these arguments here, though this appears to have left a number of people unconvinced. I’ve argued this further, and think there’s more to this story (a formalization of Scott’s arguments from Seeing Like a State, perhaps), but I must admit I don’t have a convincing solution to the “control problem” yet. However, it must be noted that the answer to the control problem is an empirical or scientific prediction, not a political inclination. Whether or not it is the most interesting or important question regarding technological control has been debated to a stalemate, as far as I can tell.

As I don’t believe singleton control is a likely or interesting scenario, I’m more interested in practical ways of offering legitimacy or resistance to control structures. I used to think the “right” political solution was a kind of “hacker class consciousness”; I don’t believe this any more. However, I still think there’s a lot to the idea of recursive publics as actually existing alternative power structures. Platform coops are interesting for the same reason.

All this leads me to admit my interest in the disruptive technology du jour, the blockchain.

## November 24, 2017

Ph.D. student

#### Values in design and mathematical impossibility

Under pressure from the public, and no doubt with sincere interest in the topic, computer scientists have taken up the difficult task of translating commonly held values into the mathematical forms that can be used for technical design. Commonly, what these researchers discover is some form of mathematical impossibility of achieving a number of desirable goals at the same time. This work has demonstrated the impossibility of having a classifier that is fair with respect to a social category without data about that very category (Dwork et al., 2012), of having a fair classifier that is both statistically well calibrated for the prediction of properties of persons and equalizes the false positive and false negative rates across partitions of that population (Kleinberg et al., 2016), of preserving the privacy of individuals after an arbitrary number of queries to a database, however obscured (Dwork, 2008), or of a coherent notion of proxy variable use in privacy and fairness applications that is based on program semantics (as opposed to syntax) (Datta et al., 2017).
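The calibration-versus-error-rate tension can be seen in a few lines of arithmetic. Here is a toy illustration (my own construction, not an example from the cited papers): a scorer that assigns each person their group's base rate is perfectly calibrated, yet its error rates across the two groups are as unequal as possible.

```python
# Toy illustration (my own construction, not from the papers cited above):
# a perfectly calibrated scorer can still have very unequal error rates
# across groups whose base rates differ.

def error_rates(labels, preds):
    negatives = [p for l, p in zip(labels, preds) if l == 0]
    positives = [p for l, p in zip(labels, preds) if l == 1]
    fpr = sum(negatives) / len(negatives)           # negatives wrongly flagged
    fnr = sum(1 - p for p in positives) / len(positives)  # positives missed
    return fpr, fnr

# Each person is scored with their group's base rate, so the scorer is
# calibrated: among people scored s, a fraction s are in fact positive.
labels_a = [1] * 6 + [0] * 4   # group A, base rate 0.6
labels_b = [1] * 3 + [0] * 7   # group B, base rate 0.3

threshold = 0.5
preds_a = [int(0.6 >= threshold)] * len(labels_a)  # everyone in A flagged
preds_b = [int(0.3 >= threshold)] * len(labels_b)  # no one in B flagged

fpr_a, fnr_a = error_rates(labels_a, preds_a)
fpr_b, fnr_b = error_rates(labels_b, preds_b)
print(fpr_a, fnr_a)  # 1.0 0.0
print(fpr_b, fnr_b)  # 0.0 1.0
```

The deck is stacked here for clarity, but the Kleinberg et al. result says the tension is generic: whenever base rates differ, calibration and equal error rates cannot both hold except in degenerate cases.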

These are important results. An important thing about them is that they transcend the narrow discipline in which they originated. As mathematical theorems, they will be true whether or not they are implemented on machines or in human behavior. Therefore, these theorems have a role comparable to other core mathematical theorems in social science, such as Arrow’s Impossibility Theorem (Arrow, 1950), a theorem about the impossibility of having a voting system with reasonable desiderata for determining social welfare.

There can be no question of the significance of this kind of work. It was significant a hundred years ago. It is perhaps of even more immediate, practical importance now that so much public infrastructure is computational. For what computation is is the automation of mathematics, full stop.

There are some scholars, even some ethicists, for whom this is an unwelcome idea. I have been recently told by one ethics professor that to try to mathematize core concepts in ethics is to commit a “category mistake”. This is refuted by the clearly productive attempts to do this, some of which I’ve cited above. This belief that scientists and mathematicians are on a different plane than ethicists is quite old: Hannah Arendt argued that scientists should not be trusted because their mathematical language prevented them from engaging in normal political and ethical discourse (Arendt, 1959). But once again, this recent literature (as well as much older literature in such fields as theoretical economics) demonstrates that this view is incorrect.

There are many possible explanations for the persistence of the view that mathematics and the hard sciences do not concern themselves with ethics, are somehow lacking in ethical education, or that engineers require non-technical people to tell them how to engineer things more ethically.

One reason is that the sciences are much broader in scope than the ethical results mentioned here. It is indeed possible to get a specialist’s education in a technical field without much ethical training, even in the mathematical ethics results mentioned above.

Another reason is that whereas understanding the mathematical tradeoffs inherent in certain kinds of design is an important part of ethics, it can be argued by others that what’s most important about ethics is some substantive commitment that cannot be mathematically defended. For example, suppose half the population believes that it is most ethical for members of the other half to treat them with special dignity and consideration, at the expense of the other half. It may be difficult to arrive at this conclusion from mathematics alone, but this group may advocate for special treatment out of ethical consideration nonetheless.

These two reasons are similar. The first states that mathematics includes many things that are not ethics. The second states that ethics potentially (and certainly in the minds of some people) includes much that is not mathematical.

I want to bring up a third reason, which is perhaps more profound than the other two, which is this: what distinguishes mathematics as a field is its commitment to logical non-contradiction, which means that it is able to baldly claim when goals are impossible to achieve. Acknowledging tradeoffs is part of what mathematicians and scientists do.

Acknowledging tradeoffs is not something that everybody else is trained to do, and indeed many philosophers are apparently motivated by the ability to surpass limitations. Alain Badiou, who is one of the living philosophers that I find to be most inspiring and correct, maintains that mathematics is the science of pure Being, of all possibilities. Reality is just a subset of these possibilities, and much of Badiou’s philosophy is dedicated to the Event, those points where the logical constraints of our current worldview are defeated and new possibilities open up.

This is inspirational work, but it contradicts what many mathematicians in fact do, which is to identify impossibility. Science forecloses possibilities where a poet may see infinite potential.

Other ethicists, especially existentialist ethicists, see the limitation and expansion of possibility, especially in the possibility of personal accomplishment, as fundamental to ethics. This work is inspiring precisely because it states so clearly what it is we hope for and aspire to.

What mathematical ethics often tells us is that these hopes are fruitless. The desiderata cannot be met. Somebody will always get the short stick. Engineers, unable to triumph against mathematics, will always disappoint somebody, and whoever that somebody is can always argue that the engineers have neglected ethics, and demand justice.

There may be good reasons for making everybody believe that they are qualified to comment on the subject of ethics. Indeed, in a sense everybody is required to act ethically even when they are not ethicists. But the preceding argument suggests that perhaps mathematical education is an essential part of ethical education, because without it one can have unrealistic expectations of the ethics of others. This is a scary thought because mathematics education is so often so poor. We live today, as we have lived before, in a culture with great mathophobia (Papert, 1980) and this mathophobia is perpetuated by those who try to equate mathematical training with immorality.

References

Arendt, Hannah. The Human Condition. Doubleday, 1959.

Arrow, Kenneth J. “A difficulty in the concept of social welfare.” Journal of political economy 58.4 (1950): 328-346.

Benthall, Sebastian. “Philosophy of computational social science.” Cosmos and History: The Journal of Natural and Social Philosophy 12.2 (2016): 13-30.

Datta, Anupam, et al. “Use Privacy in Data-Driven Systems: Theory and Experiments with Machine Learnt Programs.” arXiv preprint arXiv:1705.07807 (2017).

Dwork, Cynthia. “Differential privacy: A survey of results.” International Conference on Theory and Applications of Models of Computation. Springer, Berlin, Heidelberg, 2008.

Dwork, Cynthia, et al. “Fairness through awareness.” Proceedings of the 3rd Innovations in Theoretical Computer Science Conference. ACM, 2012.

Kleinberg, Jon, Sendhil Mullainathan, and Manish Raghavan. “Inherent trade-offs in the fair determination of risk scores.” arXiv preprint arXiv:1609.05807 (2016).

Papert, Seymour. Mindstorms: Children, computers, and powerful ideas. Basic Books, Inc., 1980.

## November 22, 2017

Ph.D. student

#### Pondering “use privacy”

I’ve been working carefully with Datta et al.’s “Use Privacy” work (link), which makes a clear case for how a programmatic, data-driven model may be statically analyzed for its use of a proxy of a protected variable, and repaired.

Their system has a number of interesting characteristics, among which are:

• The use of a normative oracle for determining which proxy uses are prohibited.
• A proof that there is no coherent definition of proxy use which has all of a set of very reasonable properties defined over function semantics.

Given (2), they continue with a compelling study of how a syntactic definition of proxy use, one based on the explicit contents of a function, can support a system of detecting and repairing proxies.

My question is to what extent the sources of normative restriction on proxies (those characterized by the oracle in (1)) are likely to favor syntactic proxy use restrictions, as opposed to semantic ones. Since ethicists and lawyers, who are the purported sources of these normative restrictions, are likely to consider any technical system a black box for the purpose of their evaluation, they will naturally be concerned with program semantics. It may be comforting for those responsible for a technical program to be able to, in a sense, avoid liability by assuring that their programs are not using a restricted proxy. But, truly, so what? Since these syntactic considerations do not make any semantic guarantees, will they really plausibly address normative concerns?
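The syntax-versus-semantics distinction at issue here can be made concrete. Below is a toy sketch (my own construction, not Datta et al.'s formalism; the proxy table and thresholds are hypothetical): two programs with identical input-output behavior, one of which mentions a proxy variable in its text while the other does not. A syntactic analysis flags only the first, even though a black-box evaluator cannot tell them apart.

```python
# Toy sketch (my own construction, not Datta et al.'s formalism):
# two programs with the same semantics (input-output behavior) can differ
# in whether a proxy appears in their syntax (program text).

# Hypothetical proxy: share of a protected group by zip code.
RACE_SHARE_BY_ZIP = {z: z / 10 for z in range(10)}

def score_syntactic(zip_code, income):
    proxy = RACE_SHARE_BY_ZIP[zip_code]   # proxy use visible in the syntax
    return income > 50_000 and proxy < 0.5

def score_semantic(zip_code, income):
    # Identical behavior (z/10 < 0.5 iff z < 5), but no proxy is named.
    return income > 50_000 and zip_code < 5

# The two functions are extensionally equal on every input:
for z in range(10):
    for inc in (40_000, 60_000):
        assert score_syntactic(z, inc) == score_semantic(z, inc)
```

This is exactly why a purely syntactic repair offers no semantic guarantee: rewriting `score_syntactic` to remove the proxy leaves `score_semantic`, which behaves identically.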

A striking result from their analysis, which has perhaps broader implications, is the incoherence of a semantic notion of proxy use. Perhaps sadly but also substantively, this result shows that a certain plausible normative property is impossible for a system to fulfill in general. Only restricted conditions make such a thing possible. This seems to be part of a pattern in these rigorous computer science evaluations of ethical problems; see also Kleinberg et al. (2016) on how it’s impossible to meet several plausible definitions of “fairness” in risk-assessment scores across social groups except under certain conditions.

The conclusion for me is that what this nobly motivated computer science work reveals is that what people are actually interested in normatively is not the functioning of any particular computational system. They are rather interested in social conditions more broadly, which are rarely aligned with our normative ideals. Computational systems, by making realities harshly concrete, are disappointing, but it’s a mistake to make that a disappointment with the computing systems themselves. Rather, there are mathematical facts that are disappointing regardless of what sorts of systems mediate our social world.

This is not merely a philosophical consideration or sociological observation. Since the interpretation of laws is part of the process of informing normative expectations (as in a normative oracle), it is an interesting and perhaps open question how lawyers and judges, in their task of legal interpretation, make use of the mathematical conclusions about normative tradeoffs being offered up by computer scientists.

References

Datta, Anupam, et al. “Use Privacy in Data-Driven Systems: Theory and Experiments with Machine Learnt Programs.” arXiv preprint arXiv:1705.07807 (2017).

Kleinberg, Jon, Sendhil Mullainathan, and Manish Raghavan. “Inherent trade-offs in the fair determination of risk scores.” arXiv preprint arXiv:1609.05807 (2016).

Ph.D. student

#### Interrogating Biosensing Privacy Futures with Design Fiction (video)

I presented this talk in November 2017, at the Berkeley I School PhD Research Reception. The talk discusses findings from 2 of our papers:

Richmond Y. Wong, Ellen Van Wyk and James Pierce. (2017). Real-Fictional Entanglements: Using Science Fiction and Design Fiction to Interrogate Sensing Technologies. In Proceedings of the ACM Conference on Designing Interactive Systems (DIS ’17). https://escholarship.org/uc/item/7r229796

Richmond Y. Wong, Deirdre K. Mulligan, Ellen Van Wyk, James Pierce and John Chuang. (2017). Eliciting Values Reflections by Engaging Privacy Futures Using Design Workbooks. Proceedings of the ACM Human Computer Interaction (CSCW 2018 Online First). 1, 2, Article 111 (November 2017), 27 pages. https://escholarship.org/uc/item/78c2802k

More about this project and some of the designs can be found here: biosense.berkeley.edu/projects/sci-fi-design-fiction/

## November 19, 2017

Ph.D. student

#### On achieving social equality

When evaluating a system, we have a choice of evaluating its internal functions–the inside view–or evaluating its effects situated in a larger context–the outside view.

Decision procedures (whether they are embodied by people or performed in concert with mechanical devices–I don’t think this distinction matters here) for sorting people are just such a system. If I understand correctly, the question of which principles animate antidiscrimination law hinges on this difference between the inside and outside view.

We can look at a decision-making process and evaluate whether as a procedure it achieves its goals of e.g. assigning credit scores without bias against certain groups. Even including processes of the gathering of evidence or data in such a system, it can in principle be bounded and evaluated by its ability to perform its goals. We do seem to care about the difference between procedural discrimination and procedural nondiscrimination. For example, an overtly racist policy that ignores true talent and opportunity seems worse than a bureaucratic system that is indifferent to external inequality between groups, which then gets reflected in decisions made according to other factors that are merely correlated with race.

The latter case has been criticized in the outside view. The criticism is captured by the phrasing that “algorithms can reproduce existing biases”. The supposedly neutral algorithm (which can, again, be either human or machine) is not neutral in its impact because its considerations of e.g. business interest are indifferent to the conditions outside it. The business is attracted to wealth and opportunity, which are held disproportionately by some part of the population, so the business is attracted to that population.

There is great wisdom in recognizing that institutions that are neutral in their inside view will often reproduce bias in the outside view. But it is incorrect to therefore conflate neutrality in the inside view with a biased inside view, even though their effects may under some circumstances be the same. When I say it is “incorrect”, I mean that they are in fact different because, for example, if the external conditions of a procedurally neutral institution change, then it will reflect those new conditions. A procedurally biased institution will not reflect those new conditions in the same way.
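The distinction can be made concrete with a minimal sketch (the rules, names, and numbers here are my own hypothetical choices, not drawn from any cited work): a procedurally neutral rule tracks changed external conditions, while a procedurally biased rule does not.

```python
# Minimal sketch (hypothetical rules and numbers): a procedurally neutral
# institution reflects changed external conditions; a procedurally biased
# one does not, even when their outputs initially coincide.

def neutral_approve(income, group):
    return income > 50                     # group plays no role in the procedure

def biased_approve(income, group):
    return income > 50 and group != "B"    # overt bias inside the procedure

before = [(60, "A"), (70, "A"), (30, "B"), (40, "B")]   # group B poorer
after  = [(60, "A"), (70, "A"), (60, "B"), (70, "B")]   # B's incomes rise

def approvals(rule, applicants, group):
    return sum(rule(i, g) for i, g in applicants if g == group)

# Before the change, both rules approve zero group-B applicants.
print(approvals(neutral_approve, before, "B"), approvals(biased_approve, before, "B"))  # 0 0
# After external conditions improve, only the neutral rule reflects it.
print(approvals(neutral_approve, after, "B"), approvals(biased_approve, after, "B"))    # 2 0
```

The "before" column is the conflation the paragraph warns against: identical outcomes from procedurally different institutions. The "after" column is what distinguishes them.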

Empirically it is very hard to tell when an institution is being procedurally neutral, and indeed this is the crux of an enormous amount of political tension today. The first line of defense of an institution accused of bias is to claim that its procedural neutrality is merely reflecting environmental conditions outside of its control. This is unconvincing for many politically active people. It seems to me that it is now much more common for institutions to avoid this problem by explicitly declaring their bias. Rather than try to accomplish the seemingly impossible task of defending their rigorous neutrality, it’s easier to declare where one stands on the issue of resource allocation globally and adjust one’s procedure accordingly.

I don’t think this is a good thing.

One consequence of evaluating all institutions based on their global, “systemic” impact as opposed to their procedural neutrality is that it hollows out the political center. The evidence is in: politics has become more and more polarized. This is inevitable if politics becomes so explicitly about maintaining or reallocating resources as opposed to building neutrally legitimate institutions. When one party in Congress considers a tax bill which seems designed mainly to enrich its own constituencies at the expense of the other’s, things have gotten out of hand. The idea of a unified idea of ‘good government’ has been all but abandoned.

An alternative is a commitment to procedural neutrality in the inside view of institutions, or at least some institutions. The fact that there are many different institutions that may have different policies is indeed quite relevant here. For while it is commonplace to say that a neutral institution will “reproduce existing biases”, “reproduction” is not a particularly helpful word here. Neither is “bias”. What we can say more precisely is that the operations of a procedurally neutral institution will not change the distribution of resources, even when that distribution is unequal.

But if we do not hold all institutions accountable for correcting the inequality of society, isn’t that the same thing as approving of the status quo, which is so unequal? A thousand times no.

First, there’s the problem that many institutions are not, currently, procedurally neutral. Procedural neutrality is a higher standard than what many institutions are currently held to. Consider what is widely known about human beings and their implicit biases. One good argument for transferring decision-making authority to machine learning algorithms, even standard ones not augmented for ‘fairness’, is that they will not have the same implicit, inside, biases as the humans that currently make these decisions.

Second, there’s the fact that responsibility for correcting social inequality can be taken on by some institutions that are dedicated to this task while others are procedurally neutral. For example, one can consistently believe in the importance of a progressive social safety net combined with procedurally neutral credit reporting. Society is complex and perhaps rightly has many different functioning parts; not all the parts have to reflect socially progressive values for the arc of history to bend towards justice.

Third, there is reason to believe that even if all institutions were procedurally neutral, there would eventually be social equality. This has to do with the mathematically bulletproof but often ignored phenomenon of regression towards the mean. When values are sampled from a process at random, their average will approach the mean of the distribution as more values are accumulated. In terms of the allocation of resources in a population, there is some random variation in the way resources flow. When institutions are fair, inequality in resource allocation will settle into an unbiased distribution. While there may continue to be some apparent inequality due to disorganized heavy tail effects, these will not be biased, in a political sense.
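The statistical claim in the middle of that paragraph, that the average of randomly sampled values approaches the distribution's mean as samples accumulate, can be checked in a few lines. This is a sketch of only that averaging claim (the law of large numbers), not of the full resource-allocation dynamics; the distribution parameters are arbitrary choices of mine.

```python
# Sketch of the averaging claim above (law of large numbers): the running
# average of i.i.d. draws approaches the distribution's mean. Parameters
# are arbitrary illustrative choices.
import random

random.seed(42)  # fixed seed for reproducibility
true_mean = 10.0
samples = [random.gauss(true_mean, 3.0) for _ in range(100_000)]

running_mean = sum(samples) / len(samples)
# With 100,000 draws the standard error is 3/sqrt(100000) ≈ 0.0095,
# so the running mean lands very close to 10.
print(round(running_mean, 3))
```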

Fourth, there is the problem of political backlash. Whenever political institutions are weak enough to be modified towards what is purported to be a ‘substantive’ or outside-view neutrality, that will always be because some political coalition has attained enough power to swing the pendulum in their favor. The more explicit they are about doing this, the more it will mobilize the enemies of this coalition to try to swing the pendulum back the other way. The result is war by other means, the outcome of which will never be fair, because in war there are many who wind up dead or injured.

I am arguing for a centrist position on these matters, one that favors procedural neutrality in most institutions. This is not because I don’t care about substantive, “outside view” inequality. On the contrary, it’s because I believe that partisan bickering that explicitly undermines the inside neutrality of institutions undermines substantive equality. Partisan bickering over the scraps within narrow institutional frames is a distraction from, for example, the way the most wealthy avoid taxes while the middle class pays even more. There is a reason why political propaganda that induces partisan divisions is a weapon. Agreement about procedural neutrality is a core part of civic unity that allows for collective action against the very most abusively powerful.

References

Zachary C. Lipton, Alexandra Chouldechova, Julian McAuley. “Does mitigating ML’s disparate impact require disparate treatment?” 2017

## November 18, 2017

Ph.D. student

#### what to do about the blog

Initially, I thought, I needed to get bcc.npdoty.name to load over HTTPS. Previously I had been using TLS for part of the transit via Cloudflare, but I've moved away from that: I'd rather not have the additional service, it was only a partial solution, and I'm tired of seeing Certificate Transparency alerts from Facebook when Cloudflare creates a new cert every week for my domain name and a thousand others. But now I've heard that Google has announced good HTTPS support for custom domain names when using Google App Engine, so I should be good to go. HTTPS is important, and I should fix that before I post more on this blog.

I was plagued for weeks trying to use Google's new developer console, reading through various documentation that was out of date, confronted by the vaguest possible error messages. Eventually, I discover that there's just a bug for most or all long-time App Engine users who created custom domains on applications years ago using a different system; the issue is acknowledged; no timeline for a fix; no documentation; no workaround.* Just a penalty for being a particularly long-time customer. Meanwhile, Google is charging me for server time on the blog that sees no usage, for some other reason I haven't been able to nail down.

I start to investigate other blogging software: is Ghost the preferred customizable blogging platform these days? What about static-site generation, from Jekyll, or Hugo? Can I find something written in a language where I could comfortably customize it (JavaScript, Python) and still have a well-supported and simple infrastructure for creating static pages that I can easily host on my existing simple infrastructure? I go through enough of the process to actually set up a sample Ghost installation on WebFaction, before realizing (and I really credit the candor of their documentation here) that this is way too heavyweight for what I'm trying to do.

Ah, I fell into that classic trap! This isn't blogging. This isn't even working on building a new and better blogging infrastructure or social media system. This isn't writing prose, this isn't writing code. This is meta-crap, this is clicking around, comparing feature lists, being annoyed about technology. So, to answer the original small question to myself "what to do about the blog", how about, for now, "just fucking post on whatever infrastructure you've got".

—npd

* I see that at least one of the bugs has some updates now, and maybe using a different (command-line) tool I could unblock myself with that particular sub-issue.
Maybe. Or maybe I would hit their next undocumented error message and get stuck again, having invested several more hours in it. And it does actually seem important to move away from this infrastructure; I'm not really sure to what extent Google is supporting it, but I do know that when I run into completely blocking issues there is no way for me to contact Google's support team or get updates on issues (beyond searching various support forums for hours to reverse-engineer my problem, seeing if there's an open bug on their issue tracker, and clicking Star), and that in the meantime they are charging me what I consider a significant amount of money.

## November 15, 2017

Ph.D. student

#### Notes on fairness and nondiscrimination in machine learning

There has been a lot of work done lately on “fairness in machine learning” and related topics. It cannot be a coincidence that this work has paralleled a rise in political intolerance that is sensitized to issues of gender, race, citizenship, and so on. I more or less stand by my initial reaction to this line of work. But very recently I’ve done a deeper and more responsible dive into this literature and it’s proven to be insightful beyond the narrow problems which it purports to solve. These are some notes on the subject, ordered so as to get to the point.

The subject of whether and to what extent computer systems can enact morally objectionable bias goes back at least as far as Friedman and Nissenbaum’s 1996 article, in which they define “bias” as systematic unfairness. They mean this very generally, not specifically in a political sense (though inclusive of it). Twenty years later, Kleinberg et al. (2016) prove that there are multiple, competing notions of fairness in machine classification which generally cannot be satisfied all at once; they must be traded off against each other. In particular, a classifier that uses all available information to optimize accuracy–one that achieves what these authors call calibration–cannot also have equal false positive and false negative rates across population groups (read: race, sex), properties that Hardt et al. (2016) call “equal opportunity”. This is no doubt inspired by a now very famous ProPublica article asserting that a particular kind of commercial recidivism prediction software was “biased against blacks” because it had a higher false positive rate for black defendants than for white defendants. Because bail and parole rates are set according to predicted recidivism, this led to cases where a non-recidivist was denied bail because they were black, which sounds unfair to a lot of people, including myself.
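The tension Kleinberg et al. prove can be made concrete with a toy calculation (all numbers here are invented for illustration, not drawn from any real dataset): a score that is perfectly calibrated within each group can still produce very different false positive rates when the groups have different base rates.

```python
# Toy sketch: within each score bin, a fraction equal to the score is a true
# positive -- i.e., the score is calibrated -- yet thresholding it yields
# unequal false positive rates across groups with different base rates.

def make_group(bins):
    """bins: {score: n_people}. Returns a list of (score, label) pairs,
    with round(score * n) positives per bin (calibration by construction)."""
    people = []
    for score, n in bins.items():
        n_pos = round(score * n)
        people += [(score, 1)] * n_pos + [(score, 0)] * (n - n_pos)
    return people

def false_positive_rate(people, threshold=0.5):
    negatives = [score for score, label in people if label == 0]
    return sum(1 for s in negatives if s >= threshold) / len(negatives)

# Group A skews toward high scores (higher base rate), group B toward low scores.
group_a = make_group({0.2: 20, 0.8: 80})
group_b = make_group({0.2: 80, 0.8: 20})

print(false_positive_rate(group_a))  # 0.5
print(false_positive_rate(group_b))  # ~0.059
```

Both groups see the same calibrated scores and the same threshold, yet group A's non-recidivists are flagged far more often, which is essentially the disparity ProPublica reported.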

While I understand that there is a lot of high quality and well-intentioned research on this subject, I haven’t found anybody who could tell me why the solution to this problem was to stop using predicted recidivism to set bail, as opposed to futzing around with a recidivism prediction algorithm which seems to have been doing its job (Dieterich et al., 2016). Recidivism rates are actually correlated with race (Hartney and Vuong, 2009). This is probably because of centuries of systematic racism. If you are serious about remediating historical inequality, the least you could do is cut black people some slack on bail.

This gets to what for me is the most baffling aspect of this whole research agenda, one that I didn’t have the words for before reading Barocas and Selbst (2016). A point well-made by them is that the interpretation of anti-discrimination law, which motivates a lot of this research, is fraught with tensions that complicate its application to data mining.

“Two competing principles have always undergirded anti-discrimination law: nondiscrimination and antisubordination. Nondiscrimination is the narrower of the two, holding that the responsibility of the law is to eliminate the unfairness individuals experience at the hands of decisionmakers’ choices due to membership in certain protected classes. Antisubordination theory, in contrast, holds that the goal of antidiscrimination law is, or at least should be, to eliminate status-based inequality due to membership in those classes, not as a matter of procedure, but substance.” (Barocas and Selbst, 2016)

More specifically, these two principles motivate different interpretations of the two pillars of anti-discrimination law, disparate treatment and disparate impact. I draw on Barocas and Selbst for my understanding of each:

A judgment of disparate treatment requires either formal disparate treatment (across protected groups) of similarly situated people, or an intent to discriminate. Since in a large data mining application protected group membership will be proxied by many other factors, it’s not clear the ‘formal’ requirement makes much sense here. And since machine learning applications only very rarely have racist intent, that option seems challengeable as well. While there are interpretations of these criteria that are tougher on decision-makers (e.g., treating unconscious intent as intent), these seem to be motivated by antisubordination rather than the weaker nondiscrimination principle.

A judgment of disparate impact is perhaps more straightforward, but it can be mitigated in cases of “business necessity”, which (to get to the point) is vague enough to plausibly include optimization in a technical sense. Once again, there is nothing to see here from a nondiscrimination standpoint, though an antisubordinationist would rather that these decision-makers have to take correcting for historical inequality into account.

I infer from their writing that Barocas and Selbst believe that antisubordination is an important principle for anti-discrimination law. In any case, they maintain that making the case for applying nondiscrimination laws to data mining effectively requires a commitment to “substantive remediation”. This is insightful!

Just to put my cards on the table: as much as I may like the idea of substantive remediation in principle, I personally don’t think that every application of nondiscrimination law needs to be animated by it. For many institutions, narrow nondiscrimination seems to be adequate if not preferable. I’d prefer remediation to occur through other specific policies, such as more public investment in schools in low-income districts. Perhaps for this reason, I’m not crazy about “fairness in machine learning” as a general technical practice. It seems to me to be trying to solve social problems with a technical fix, which despite being quite technical myself I don’t always see as a good idea. It seems like in most cases you could have a machine learning mechanism based on normal statistical principles (the learning step) and then use a decision procedure separately that achieves your political ends.

I wish that this research community (and here I mean the qualitative research community surrounding it more than the technical community, which tends to define its terms carefully) would be more careful about the ways it talks about “bias”, because often it seems to encourage a conflation between statistical or technical senses of bias and political senses. The latter carry so much political baggage that it can be intimidating to try to wade in and untangle the two senses. And it’s important to do this untangling, because while bad statistical bias can lead to political bias, it can, depending on the circumstances, lead to either “good” or “bad” political bias. But it’s important, for the sake of numeracy (mathematical literacy), to understand that even if a statistically bad process has a politically “good” outcome, that is still, statistically speaking, bad.

My sense is that there are interpretations of nondiscrimination law that make it illegal to make certain judgments taking into account certain facts about sensitive properties like race and sex. There are also theorems showing that if you don’t take into account those sensitive properties, you are going to discriminate against them by accident because those sensitive variables are correlated with anything else you would use to judge people. As a general principle, while being ignorant may sometimes make things better when you are extremely lucky, in general it makes things worse! This should be a surprise to nobody.
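That proxy effect can be sketched in a few lines (the feature names and numbers are entirely hypothetical): a rule that never sees the sensitive attribute still sorts people by group, because a correlated variable carries the same information.

```python
from collections import Counter

# Each record: (group, neighborhood, outcome). The "model" never sees group.
records = (
    [("A", "north", 1)] * 70 + [("A", "north", 0)] * 30 +
    [("B", "south", 1)] * 30 + [("B", "south", 0)] * 70
)

# A naive rule learned only from neighborhood: predict the majority outcome there.
by_hood = {}
for group, hood, outcome in records:
    by_hood.setdefault(hood, []).append(outcome)
rule = {hood: Counter(ys).most_common(1)[0][0] for hood, ys in by_hood.items()}

# Positive-prediction rate per group, even though group was never an input:
for g in ("A", "B"):
    preds = [rule[hood] for grp, hood, _ in records if grp == g]
    print(g, sum(preds) / len(preds))
# Group A is flagged 100% of the time and group B 0%, purely via the proxy.
```

Here group membership and neighborhood coincide perfectly, which exaggerates the effect, but any nonzero correlation reproduces it in degree.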

References

Barocas, Solon, and Andrew D. Selbst. “Big data’s disparate impact.” (2016).

Dieterich, William, Christina Mendoza, and Tim Brennan. “COMPAS risk scales: Demonstrating accuracy equity and predictive parity.” Northpoint Inc (2016).

Friedman, Batya, and Helen Nissenbaum. “Bias in computer systems.” ACM Transactions on Information Systems (TOIS) 14.3 (1996): 330-347.

Hardt, Moritz, Eric Price, and Nati Srebro. “Equality of opportunity in supervised learning.” Advances in Neural Information Processing Systems. 2016.

Hartney, Christopher, and Linh Vuong. “Created equal: Racial and ethnic disparities in the US criminal justice system.” (2009).

Kleinberg, Jon, Sendhil Mullainathan, and Manish Raghavan. “Inherent trade-offs in the fair determination of risk scores.” arXiv preprint arXiv:1609.05807 (2016).

## November 10, 2017

Ph.D. student

#### Stewart+Brown Production Management

Style Archive: Early in my Stewart+Brown career, I was the production assistant, and part of my job was tracking samples and production. I kept a list in Excel of all the styles we had ever made, and one of my first programs ever was putting that list online. I just looked at some of the other code on the site and, after figuring out what an array was, made it work for me. Over the years I built more and more functions into the system, and by the time I left it had a life of its own and was responsible for tracking nearly every aspect of design, development and production. It was so efficient that we even opted out of purchasing an expensive out-of-the-box system that was really popular within the industry. I like to pat myself on the back for that, and for the fact that even a year after I've left, the system is still up and running with no major problems or errors. What you see to the left is a list of all the styles for a season, with boxes representing every season they have been produced in.

Style Info Page: This is the page you get to when you click through from the previous page. It displays all of the product information and is continually updated as the product is developed. Available colors are specific to the season (as tracked by another tool) and to each delivery. This information is used everywhere this style is shown on the site, so updating information is as easy as updating it in this single location.
Autogenerated Line Sheets: The information in the style archive populates a line sheet that is sent to buyers and showrooms to place their orders. Previously, we built these files in Illustrator; they took forever to make and were always filled with errors, since information is constantly changing. Since all of the information online was the most up to date, I proposed creating the line sheet online instead. I had to fight for it, since others were afraid that it would compromise the design and layout. I created the layout based on an existing line sheet and made it fully customizable. You can control what styles go on which page, the order of the styles, and even which distributors could see which styles. To create the PDF, one just hits print and prints to a PDF without all the editing marks. Works like a charm.
Style Orders Page: This shows the order grids for this style. Orders are input online in a common area and are used by the system to calculate the amount of fabric and materials to order, in addition to providing a central location in which to view and share information. Before this, all orders were on paper; revisions were lost and mistakes rampant. Even when we were just testing the system, we found a discrepancy between orders, huzzah! Orders are also subdivided for delivery and tracked by projections and actuals.
Style Materials: This tab on the style shows the materials and amounts of materials used based on the order grids.  This aids the production team in tracking orders and pricing.
Color Archive: Similar to the style archive, another archive exists for each color Stewart+Brown has developed, showing which seasons it was used in, as well as a swatch that is used in every other place on the site where this color is shown. When adding a color, the system checks to make sure the color code or name hasn’t been used.
Pattern Specs Tracking: Before a style can be sampled there has to be an approved pattern for the style. It works as follows: the designer comes up with an idea for a garment. The pattern maker (whose craft is amazingly interesting to me) makes a pattern for the idea. The pattern is sewn and fit, and the pattern is adjusted for a better fit. This database tracks these revisions, and all the files are hosted on the server so they can be downloaded and shared at any time. Status updates are also applied to let others know when the pattern is approved or what adjustments it needs.

Fabric Usage Chart: This chart is an extension to our production tracking system. I love how colorful it is; I like to think it makes looking at the information a little more fun. What we are showing in this chart is how much of each fabric it takes to make one of a certain style. Styles are ordered by fabrication (i.e. organic jersey, hemp jersey, fleece), and totals are shown at the bottom of each fabrication, as well as a grand total grid at the bottom of the sheet. This helps the development team get an accurate picture of how much fabric they will have to order for a given season. Before orders are actually placed, they can get a rough idea by style; after orders are input, they get an even more accurate number, because for each production run the system will automatically calculate how much of each style was ordered and multiply it by the yield. This tool also references our “Spec Archive”, where the development team and the pattern maker upload specs for each style; if the spec is approved, the style shows up in yellow.
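The yardage calculation described above boils down to a small sum: for each fabrication, total fabric needed is the sum over its styles of units ordered times the yield per unit. Here is a minimal sketch (style names, yields, and order quantities are all hypothetical, not actual Stewart+Brown data):

```python
# Hypothetical styles: (style, fabrication, yield in yards per garment).
styles = [
    ("hoodie",   "organic-jersey", 2.5),
    ("tee",      "organic-jersey", 1.2),
    ("pullover", "fleece",         3.0),
]
orders = {"hoodie": 120, "tee": 300, "pullover": 80}  # units ordered per style

# Total yardage per fabrication = sum of (units ordered x yield) for its styles.
totals = {}
for style, fabrication, yards_per_unit in styles:
    totals[fabrication] = totals.get(fabrication, 0.0) + orders[style] * yards_per_unit

print(totals)  # {'organic-jersey': 660.0, 'fleece': 240.0}
```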

The Buyers Area is a password-protected area on the Stewart+Brown site where retailers can preview the incoming season early. The design was based on our e-store but formatted so that the buyer can see each fabrication on its own page. Currently, the buyers area is linked into a number of back-end tools that manage the production and development process. If someone in production decides we’re not going to run a color in a style, they can update it in the back-end and it will automatically update in the Buyers Area as well. This has really helped us cut down on communication errors, and it ensures that buyers are always getting the most up-to-date information.

Buyers Area Style Selection: I built this tool to guide users through the process of adding a new season to the buyers area. Each link on the left is a step, in the order it should be performed, and the first step is selecting the styles that you want shown.
Edit Fabrications and Sidebar Ordering: In the buyers area, styles are organized by fabrication. This interface allows you to edit the images shown in the header for each fabrication, the text used to describe it, as well as the order of the fabrications on the sidebar.

Buyers Area Editing Sandbox: I realize that using admin tools isn’t the ideal way for many people to add, edit and view information, so I built this “sandbox” to use for editing. It is an exact replica of the buyers area, except that each editable field has a link to the place that information can be edited.

Admin Documentation: I set up another WordPress blog to serve as a help area for anyone using the system. Every tool I built has its own “how to” page, and since it was built in WordPress, it came with search functionality, categorization, and comment capabilities. The comments have been used as a way to add to or comment on the help instructions.

## November 09, 2017

Center for Technology, Society & Policy

#### Data for Good Competition — Call for Proposals

See the people and projects that advanced to the seed grant phase.

The Center for Technology, Society & Policy (CTSP) seeks proposals for a Data for Good Competition. The competition will be hosted and promoted by CTSP in coordination with the UC Berkeley School of Information IMSA, and made possible through funds provided by Facebook.

Team proposals will apply data science skills to address a social good problem with public open data. The objective of the Data for Good Competition is to incentivize students from across the UC Berkeley campus to apply their data science skills towards a compelling public policy or social justice issue.

The competition is intended to encourage the creation of data tools or analyses of open data. Open datasets may be local, state, national, or international so long as they are publicly accessible. The data tool or analysis may include, but is not limited to:

1. integration or combination of two or more disparate datasets, including integration with private datasets;
2. data conversions into more accessible formats;
3. visualization of data graphically, temporally, and/or spatially;
4. data validations or verifications with other open data sources;
5. platforms that help citizens access and/or manipulate data without coding experience; etc.

Issues that may be relevant and addressed via this competition include environmental issues, civic engagement (e.g., voting), government accountability, land use (e.g., housing challenges, agriculture), criminal justice, access to health care, etc. CTSP suggests that teams should consider using local or California state data since there may be additional opportunities for access and collaboration with agencies who produce and maintain these datasets.

The competition will consist of three phases:

• an initial proposal phase when teams work on developing proposals
• seed grant execution phase when selected teams execute on their proposals
• final competition and presentation of completed projects at an event in early April 2018

Teams selected for the seed grant must be able to complete a working prototype or final product ready for demonstration at the final competition and presentation event. It is acceptable for submitted proposals to already have some groundwork completed or to serve as a substantial extension of an existing project, but we are looking to fund something novel, not already-completed work.

# Initial Proposal Phase

The initial proposal phase ends at 11:59pm (PST) on January 28th, 2018 when proposals are due. Proposals will then be considered against the guidelines below. CTSP will soon announce events to support teams in writing proposals and to share conversations on data for good and uses of public open data.

Note: This Data for Good Competition is distinct from the CTSP yearlong fellowship RFP.

## Proposal Guidelines

Each team proposal (approximately 2-3 pages) is expected to answer the following questions:

### Project Title and Team Composition

• What is the title of your project, and the names, department affiliations, student classification (undergraduate/graduate), and email contact information?

### Problem

• What is the social good problem?
• How do you know it is a real problem?
• If you are successful how will your data science approach address this problem?  Who will use the data and how will they use it to address the problem?

### Data

• What public open data will you be using?

### Output & Projected Timeframe

• What will your output be? How may this be used by the public, stakeholders, or otherwise used to address your social good problem?
• Outline a timeframe of how the project will be executed in order to become a finished product or working prototype by the April competition. Will any additional resources be needed in order to achieve the outlined goal?

### Privacy Risks and Social Harms

• What, if any, are the potential negative consequences of your project and how do you propose to minimize them? For example, does your project create new privacy risks?  Are there other social harms?  Is the risk higher for any particular group?  Alternatively, does your project aim to address known privacy risks, social harms, and/or aid open data practitioners in assessing risks associated with releasing data publicly?

Proposals will be submitted through the CTSP website. Successful projects will demonstrate knowledge of the proposed subject area by explaining expertise and qualifications of team members and/or citing sources that validate claims presented. This should be a well-developed proposal, and the team should be prepared to execute the project in a short timeframe before the competition. Please include all relevant information needed for CTSP evaluation–a bare bones proposal is unlikely to advance to the seed funding stage.

Four to six teams will advance to the seed grant phase. This will be announced in February 2018. Each member of an accepted project proposal team becomes a CTSP Data for Good grantee, and each team will receive $800 to support development of their project. If you pass to the seed grant phase we will be working with you to connect you with stakeholder groups and other resources to help improve the final product. CTSP will not directly provide teams with hardware, software, or data.

# Final Competition and Presentation Phase

This phase consists of an April evening of public presentation before judges from academia, Facebook, and the public sector, and a decision on the competition winner. The top team will receive $5000 and the runner-up will receive $2000.

Note: The presentation of projects will support the remote participation of distance-learning Berkeley students, including Master of Information and Data Science (MIDS) students in the School of Information.

## Final Judging Criteria

In addition to continued consideration of the project proposal guidelines, final projects will be judged by the following criteria, and those judgments are final:

• Quality of the application of data science skills
• Demonstration of how the proposal or project addresses a social good problem
• Advancing the use of public open data

# After the Competition

Materials from the final event (e.g., video) and successful projects will be hosted on a public website for use by policymakers, citizens, and students. Teams will be encouraged to publish a blogpost on CTSP’s Citizen Technologist Blog sharing their motivation, process, and lessons learned.

# General Rules

• Open to current UC Berkeley students (undergraduate and graduate) from all departments (Teams with outside members will not be considered. However, teams that have a partnership with an external organization who might use the tool or analysis will be considered.)
• Teams must have a minimum of two participants
• Participants must use data sets that are considered public or open.

# Code of Conduct

This code of conduct has been adapted from the 2017 Towards Inclusive Tech conference held at the UC Berkeley School of Information:

The organizers of this competition are committed to principles of openness and inclusion. We value the participation of every participant and expect that we will show respect and courtesy to one another during each phase and event in the competition. We aim to provide a harassment-free experience for everyone, regardless of gender, sexual orientation, disability, physical appearance, body size, race, or religion. Attendees who disregard these expectations may be asked to leave the competition. Thank you for helping make this a respectful and collaborative event for all.

# Questions

Please direct all questions about the application or competition process to CTSP@berkeley.edu.

# Apply

Ph.D. student

#### Personal data property rights as privacy solution. Re: Cofone, 2017

I’m working my way through Ignacio Cofone’s “The Dynamic Effect of Information Privacy Law” (2017) (link), which is an economic analysis of privacy. Without doing justice to the full scope of the article, it must be said that it is a thorough discussion of previous information economics literature and a good case for property rights over personal data. In a nutshell, one can say that markets are good for efficient and socially desirable resource allocation, but they are only good at this when there are well crafted property rights to the goods involved. Personal data, like intellectual property, is a tricky case because of the idiosyncrasies of data–it has zero-ish marginal cost, it seems to get more valuable when it’s aggregated, etc. But like intellectual property, we should expect under normal economic rationality assumptions that the more we protect the property rights of those who create personal data, the more they will be incentivized to create it.

I am very warm to this kind of argument because I feel there’s been a dearth of good information economics in my own education, though I have been looking for it! I do believe there are economic laws and that they are relevant for public policy, let alone business strategy.

I have concerns about Cofone’s argument specifically, which are these:

First, I have my doubts that seeing data as a good in any classical economic sense is going to work. Ontologically, data is just too weird for a lot of earlier modeling methods. I have been working on a different way of modeling information flow economics that tries to capture how much of what we’re concerned with are information services, not information goods.

My other concern is that Cofone’s argument gives users/data subjects credit for being rational agents, capable of addressing the risks of privacy and acting accordingly. Hoofnagle and Urban (2014) show that this is empirically not the case. In fact, if you take the average person who is not that concerned about their privacy on-line and start telling them facts about how their data is being used by third-parties, etc., they start to freak out and get a lot more worried about privacy.

This throws a wrench in the argument that stronger personal data property rights would lead to more personal data creation, therefore (I guess it’s implied) more economic growth. People seem willing to create personal data and give it away, despite actual adverse economic incentives, because cat videos are just so damn appealing. Or something. It may generally be the case that economic modeling is used by information businesses but not information policy people because average users are just so unable to act rationally; it really is a domain better suited to behavioral economics and usability research.

I’m still holding out though. Just because big data subjects are not homo economicus doesn’t mean that an economic analysis of their activity is pointless. It just means we need a more sophisticated economic model, one that takes into account how there are many different classes of user that are differently informed. This kind of economic modeling, and empirically fitting it to data, is within our reach. We have the technology.

References

Cofone, Ignacio N. “The Dynamic Effect of Information Privacy Law.” Minn. JL Sci. & Tech. 18 (2017): 517.

Hoofnagle, Chris Jay, and Jennifer M. Urban. “Alan Westin’s privacy homo economicus.” (2014).

## November 07, 2017

Ph.D. student

#### Why managerialism: it acknowledges political role of internal corporate policies

One modern difficulty with political theory in contemporary times is the confusion between government and corporate policy. This is due in no small part to the extent to which large corporations now mediate social life. Telecommunications, the Internet, mobile phones, and social media all depend on layers and layers of operating organizations. The search engine, which didn’t exist thirty years ago, now is arguably an essential cultural and political facility (Pasquale, 2011), which sharpens the concerns that have been raised about their politics (Introna and Nissenbaum, 2000; Bracha and Pasquale, 2007).

Corporate policies influence customers when those policies drive product design or are put into contractual agreements. They can also govern employees and shape corporate culture. Sometimes these two kinds of policies are not easily demarcated. For example, Uber has an internal privacy policy about who can access which users’ information, like most companies with a lot of user data. The privacy features that Uber implicitly guarantees to their customers are part of their service. But their ability to provide this service is only as good as their company culture is reliable.

Classically, there are states, which may or may not be corrupt, and there are markets, which may or may not be competitive. With competitive markets, corporate policies are part of what make firms succeed or fail. One point of success is a company’s ability to attract and maintain customers. This should in principle drive companies to improve their policies.

An interesting point made recently by Robert Post is that in some cases, corporate policies can adopt positions that would be endorsed by some legal scholars even if the actual laws state otherwise. His particular example was a case enforcing the right to be forgotten in Spain against Google.

Since European law is statute driven, the judgments of its courts are not as amenable to creative legal reasoning as they are in the United States. Post criticizes the EU’s judgment in this case because of its rigid interpretation of data protection directives. Post argues that a different legal perspective on privacy is better at balancing other social interests. But putting aside the particulars of the law, Post makes the point that Google’s internal policy matches his own legal and philosophical framework (which prefers dignitary privacy over data privacy) more than EU statutes do.

One could argue that we should not trust the market to make Google’s policies just. But we could also argue that Google’s market share, which is significant, depends so much on its reputation and its users’ trust that it is in fact under great pressure to adjudicate disputes with its users wisely. It is a company that must set its own policies, which do have political significance. Compared to the state, it has the benefits of more direct control over the way these policies get interpreted and enforced, faster feedback on whether the policies are successful, and a less chaotic legislative process for establishing policy in the first place.

Political liberals would dismiss this kind of corporate control as just one commercial service among many, or else wring their hands with concern over a company coming to have such power over the public sphere. But managerialists would see the emergence of search engines as an organization among others, comparable to other private entities that have been part of the public sphere, such as newspapers.

But a sound analysis of the politics of search engines need not depend on analogies with past technologies; that style of analogy is a function of legal reasoning. Managerialism, which is perhaps more a descendant of business reasoning, would ask how, in fact, search engines make policy decisions and how this affects political outcomes. It does not prima facie assume that a powerful or important corporate policy is wrong. It does ask what the best corporate policy is, given a particular sector.

References

Bracha, Oren, and Frank Pasquale. “Federal Search Commission-Access, Fairness, and Accountability in the Law of Search.” Cornell L. Rev. 93 (2007): 1149.

Introna, Lucas D., and Helen Nissenbaum. “Shaping the Web: Why the politics of search engines matters.” The information society 16.3 (2000): 169-185.

Pasquale, Frank A. “Dominant search engines: an essential cultural & political facility.” (2011).

## November 06, 2017

Ph.D. student

#### Why managerialism: it’s tolerant and meritocratic

In my last post, I argued that we should take managerialism seriously as a political philosophy. A key idea in managerialism (as I’m trying to define it) is that it acknowledges that sociotechnical organizations are relevant units of political power, and is concerned with the relationship between these organizations. These organizations can be functionally specific. They can have hierarchical, non-democratic control in limited, not totalitarian ways. They check and balance each other, probably. Managerialism tends to think that organizations can be managed well, and that good management matters, politically.

This is as opposed to liberalism, which is grounded in rights of the individual, which then becomes a foundation for democracy. It’s also opposed to communitarianism, which holds the political unit of interest to be a family unit or other small community. I’m positioning managerialism as a more cybernetic political idea, as well as one more adapted to present economic conditions.

It may sound odd to hear somebody argue in favor of managerialism. I’ll admit that I am doing so tentatively, to see what works and what doesn’t. Given that a significant percentage of American political thought now is considering such baroque alternatives to liberalism as feudalism and ethnic tribalism, perhaps because liberalism everywhere has been hijacked by plutocracy, it may not be crazy to discuss alternatives.

One reason why somebody might be attracted to managerialism is that it is (I’d argue) essentially tolerant and meritocratic. Sociotechnical organizations that are organized efficiently to perform their main function need not make a lot of demands of their members besides whatever protocols are necessary for the functioning of the whole. In many cases, this should lead to a basic indifference to race, gender, and class background, from the internal perspective of the organization. As there’s good research indicating that diversity leads to greater collective intelligence in organizations, there’s a good case for tolerant policies in managerial institutions. Merit, defined relative to the needs of the particular organization, would be the privileged personal characteristic here.

I’d like to distinguish managerialism from technocracy in the following sense, which may be a matter of my own terminological invention. Technocracy is the belief that experts should run the state. It offers an expansion of centralized power. Managerialism is, I want to argue, not compatible with centralized state control. Rather, it recognizes many different spheres of life that nevertheless need to be organized to be effective. These spheres or sectors will be individually managed, perhaps by competing organizations, but regulate each other more than they require central regulation.

The way these organizations can regulate each other is Exit, in Hirschman’s sense. While the ideas of Exit, Loyalty, and Voice are most commonly used to discuss how individuals can affect the organizations they are a part of, similar ideas can function at higher scales of analysis, as organizations interact with each other. Think about international trade agreements, and sanctions.

The main reason to support managerialism is not that it is particularly just or elegant. It’s that it is more or less the case that the political structures in place now are some assemblage of sociotechnical organizations interacting with each other. Those people who have power are those with power within one or more of these organizations. And to whatever extent there is a shared ideological commitment among people, it is likely because a sociotechnical organization has been turned to the effect of spreading that ideology. This is a somewhat abstract way of saying what lots of people say in a straightforward way all the time: that certain media institutions are used to propagate certain ideologies. This managerialist framing is just intended to abstract away from the particulars in order to develop a political theory.

## November 05, 2017

Ph.D. student

#### Managerialism as political philosophy

Technologically mediated spaces and organizations are frequently described by their proponents as alternatives to the state. From David Clark’s maxim of Internet architecture, “We reject: kings, presidents and voting. We believe in: rough consensus and running code”, to cyberanarchist efforts to bypass the state via blockchain technology, to the claims that Google and Facebook, as they mediate between billions of users, are relevant non-state actors in international affairs, to Lessig’s (1999) ever-prescient claim that “Code is Law”, there is undoubtedly something going on with technology’s relationship to the state which is worth paying attention to.

There is an intellectual temptation (one that I myself am prone to) to take seriously the possibility of a fully autonomous technological alternative to the state. Something like a constitution written in source code has an appeal: it would be clear, precise, and presumably based on something like a consensus of those who participate in its creation. It is also an idea that can be frightening (Give up all control to the machines?) or ridiculous. The example of The DAO, the Ethereum ‘distributed autonomous organization’ that raised millions of dollars only to have them stolen in a technical hack, demonstrates the value of traditional legal institutions which protect the parties that enter contracts with processes that ensure fairness in their interpretation and enforcement.

It is more sociologically accurate, in any case, to consider software, hardware, and data collection not as autonomous actors but as parts of sociotechnical systems that maintain and modify them. This is obvious to practitioners, who spend their lives negotiating the social systems that create technology. For those for whom it is not obvious, there are reams of literature on the social embeddedness of “algorithms” (Gillespie, 2014; Kitchin, 2017). These themes are recited again in recent critical work on Artificial Intelligence; there are those who wisely point out that a functioning artificially intelligent system depends on a lot of labor (those who created and cleaned data, those who built the systems it is implemented on, those who monitor the system as it operates) (Kelkar, 2017). So rather than discussing the role of particular technologies as alternatives to the state, we should shift our focus to the great variety of sociotechnical organizations.

One thing that is apparent, when taking this view, is that states, as traditionally conceived, are themselves sociotechnical organizations. This is, again, an obvious point well illustrated in economic histories such as Beniger (1986). Communications infrastructure is necessary for the control and integration of society, let alone effective military logistics. The relationship between the state and the industrial actors developing this infrastructure, whether building roads, running a postal service, or laying rail, telegraph wires, telephone wires, satellites, Internet protocols, and now social media, has always been interesting: a story of great fortunes and shifts in power.

What is apparent after a serious look at this history is that political theory, especially liberal political theory as it developed from the 1700s onward as a theory of individuals bound by social contract, emerging from nature to develop a just state, leaves out essential facts about how society has ever actually been governed. Control of communications and of control infrastructure has never been equally dispersed and has always been a source of power. Late modern rearticulations of liberal theory and reactions against it (Rawls and Nozick both) leave out technical constraints on the possibility of governance, and even on the constitution of the subject on which a theory of justice would have its ground.

Were political theory to begin from a more realistic foundation, it would need to acknowledge the existence of sociotechnical organizations as a political unit. There is a term for this view, “managerialism”, which, as far as I can tell, is used somewhat pejoratively, like “neoliberalism”. As an “-ism”, it’s implied that managerialism is an ideology. When we talk about ideologies, what we are doing is looking from an external position onto an interdependent set of beliefs in their social context and identifying, through genealogical method or logical analysis, how those beliefs are symptoms of underlying causes that are not precisely as represented within those beliefs themselves. For example, one critiques neoliberal ideology, which purports that markets are the best way to allocate resources and advocates for the expansion of market logic into more domains of social and political life, by pointing out that markets are great at reallocating resources to capitalists, who bankroll neoliberal ideologues, but that many people who are subject to neoliberal policies do not benefit from them. While this is a bit of a parody of both neoliberalism and the critiques of it, you’ll catch my meaning.

We might avoid the pitfalls of an ideological managerialism (I’m not sure what those would be, exactly, having not read the critiques) by taking from it, to begin with, only the urgency of describing social reality in terms of organization and management, without assuming any particular normative stake. It will be argued that this is not a neutral stance, because to posit that there is organization, and that there is management, is to offend certain kinds of (mainly academic) thinkers. I get the sense that this offendedness is similar to the offense taken by certain critical scholars to the idea that there is such a thing as scientific knowledge, especially social scientific knowledge. Namely, it is offense taken at the idea that a patently obvious fact entails one’s own ignorance of otherwise very important expertise. This is encouraged by the institutional incentives of social science research. Social scientists are required to maintain an aura of expertise even when their particular sub-discipline excludes from its analysis the very systems of bureaucratic and technical management that its university depends on. University bureaucracies are, strangely, in the business of hiding their managerialist reality from their own faculty, though alternative avenues of research inquiry are of course compelling in their own right. When managerialism cannot be contested on epistemic grounds (because the bluff has been called), it can be rejected on aesthetic grounds: managerialism is not “interesting” to a discipline, perhaps because it does not engage with the personal and political motivations that constitute it.

What sets managerialism aside from other ideologies, however, is that when we examine its roots in social context, we do not discover a contradiction. Managerialism is not, as far as I can tell, successful as a popular ideology. Managerialism is attractive only to that rare segment of the population that works closely with bureaucratic management. It is here that the technical constraints of information flow and its potential uses, the limits of autonomy especially as it confronts the autonomies of others, the persistence of hierarchy despite the purported flattening of social relations, and so on become unavoidable features of life. And though one discovers in these situations plenty of managerial incompetence, one also comes to terms with why that incompetence is a necessary feature of the organizations that maintain it.

Little of what I am saying here is new, of course. It is only new in relation to more popular or appealing forms of criticism of the relationship between technology, organizations, power, and ethics. So often the political theory implicit in these critiques is a form of naive egalitarianism that sees any differential in power as an ethical red flag. Since technology can give organizations a lot of power, this generates a lot of heat around technology ethics. Starting from the perspective of an ethicist, one sees an uphill battle against an increasingly inscrutable and unaccountable sociotechnical apparatus. What I am proposing is that we look at things a different way. If we start from general principles about technology and its role in organizations, the kinds of principles one would get from an analysis of microeconomic theory, artificial intelligence as a mathematical discipline, and so on, one can try to formulate the managerial constraints that truly confront society. These constraints are part of how subjects are constituted and should inform what we see as “ethical”. If we can broker between these hard constraints and the societal values at stake, we might come up with a principle of justice that, if unpopular, may at least be realistic. This would be a contribution, at the end of the day, to political theory, not as an ideology, but as a philosophical advance.

References

Beniger, James R. The Control Revolution: Technological and Economic Origins of the Information Society. Harvard University Press, 1986.

Bird, Sarah, et al. “Exploring or Exploiting? Social and Ethical Implications of Autonomous Experimentation in AI.” (2016).

Gillespie, Tarleton. “The relevance of algorithms.” Media technologies: Essays on communication, materiality, and society 167 (2014).

Kelkar, Shreeharsh. “How (Not) to Talk about AI.” Platypus, 12 Apr. 2017, blog.castac.org/2017/04/how-not-to-talk-about-ai/.

Kitchin, Rob. “Thinking critically about and researching algorithms.” Information, Communication & Society 20.1 (2017): 14-29.

Lessig, Lawrence. “Code is law.” The Industry Standard 18 (1999).

## November 02, 2017

Ph.D. student

#### Robert Post on Data vs. Dignitary Privacy

I was able to see Robert Post present his article, “Data Privacy and Dignitary Privacy: Google Spain, the Right to Be Forgotten, and the Construction of the Public Sphere”, today. My other encounter with Post’s work was quite positive, and I was very happy to learn more about his thinking at this talk.

Post’s argument was based on the facts of the Google Spain SL v. Agencia Española de Protección de Datos (“Google Spain”) case in the EU, which set off a lot of discussion about the right to be forgotten.

I’m not trained as a lawyer, and will leave the legal analysis to the verbatim text. But there were some broader philosophical themes that resonate with topics I’ve discussed on this blog and in my other research. These I wanted to note.

If I follow Post’s argument correctly, it is something like this:

• According to EU Directive 95/46/EC, there are two kinds of privacy. Data privacy rules govern personal data, establishing control over and limitations on its use. The emphasis is on the data itself, which is reasoned about analogously to property. Dignitary privacy is about maintaining appropriate communications between people and restricting those communications that may degrade, humiliate, or mortify them.
• EU rules about data privacy are governed by rules specifying the purpose for which data is used, thereby implying that the use of this data must be governed by instrumental reason.
• But there’s the public sphere, which must not be governed by instrumental reason, for Habermasian reasons. The public sphere is, by definition, the domain of communicative action, where actions must be taken with the ambiguous purpose of open dialogue. That is why free expression is constitutionally protected!
• Data privacy, formulated as an expression of instrumental reason, is incompatible with the free expression of the public sphere.
• The Google Spain case used data privacy rules to justify the right to be forgotten, and in this it developed an unconvincing and sloppy precedent.
• Dignitary privacy is in tension with free expression, but not incompatible with it. This is because it is based not on instrumental reason, but rather on norms of communication (which are contextual).
• Future right to be forgotten decisions should be made on the basis of dignitary privacy. This will result in more cogent decisions.

I found Post’s argument very appealing. I have a few notes.

First, I had never made the connection between what Hildebrandt (2013, 2014) calls “purpose binding” in EU data protection regulation and instrumental reason, but there it is. There is a sense in which these purpose clauses are about optimizing something that is externally and specifically defined before the privacy judgment is made (cf. Tschantz, Datta, and Wing, 2012, for a formalization).
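To make the purpose-binding idea concrete, here is a minimal sketch in the spirit of the Tschantz, Datta, and Wing formalization, though their actual model is far richer (it uses planning semantics). The purpose hierarchy, names, and check function below are my own invented illustration, not taken from their paper or from the Directive.

```python
# Illustrative sketch of purpose binding: a use of personal data complies
# if its declared purpose is subsumed by a purpose the data was collected
# for. The hierarchy here is hypothetical.

PURPOSE_HIERARCHY = {
    "marketing": ["email-campaign", "ad-targeting"],
    "service-provision": ["billing", "account-recovery"],
}

def subsumes(general: str, specific: str) -> bool:
    """True if `specific` is `general` itself or one of its sub-purposes."""
    if general == specific:
        return True
    return any(subsumes(child, specific)
               for child in PURPOSE_HIERARCHY.get(general, []))

def use_permitted(collected_for: list, used_for: str) -> bool:
    """A use satisfies purpose binding if some collection purpose subsumes it."""
    return any(subsumes(p, used_for) for p in collected_for)

# Data collected for service provision may support billing, not ad targeting.
assert use_permitted(["service-provision"], "billing")
assert not use_permitted(["service-provision"], "ad-targeting")
```

The point of the sketch is only that the permitted uses are fixed in advance of any particular privacy judgment, which is exactly the externally defined optimization target that makes purpose binding an expression of instrumental reason.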

This approach seems generally in line with the view of a government as a bureaucracy primarily involved in maintaining control over a territory or population. I don’t mean this in a bad way, but in a literal way of considering control as feedback into a system that steers it to some end. I’ve discussed the pervasive theme of ‘instrumentality run amok’ in questions of AI superintelligence here. It’s a Frankfurt School trope that appears to have made its way in a subtle way into Post’s argument.

The public sphere is not, in Habermasian theory, supposed to be dictated by instrumental reason, but rather by communicative rationality. This has implications for the technical design of networked publics that I’ve scratched the surface of in this paper. By pointing to the tension between instrumental/purpose/control based data protection and the free expression of the public sphere, I believe Post is getting at a deep point about how we can’t have the public sphere be too controlled lest we lose the democratic property of self-governance. It’s a serious argument that probably should be addressed by those who would like to strengthen rights to be forgotten. A similar argument might be made for other contexts whose purposes seem to transcend circumscription, such as science.

Post’s point is not, I believe, to weaken these rights to be forgotten, but rather to put the arguments for them on firmer footing: dignitary privacy, or the norms of communication and the awareness of the costs of violating them. Indeed, the facts behind right to be forgotten cases I’ve heard of (there aren’t many) all seem to fall under these kinds of concerns (humiliation, etc.).

What’s very interesting to me is that the idea of dignitary privacy as consisting of appropriate communication according to contextually specific norms feels very close to Helen Nissenbaum’s theory of Contextual Integrity (2009), with which I’ve become very familiar in the past year through my work with Prof. Nissenbaum. Contextual integrity posits that privacy is about adherence to norms of appropriate information flow. Is there a difference between information flow and communication? Isn’t Shannon’s information theory a “mathematical theory of communication”?
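Contextual integrity lends itself to a similarly schematic rendering: a flow is described by sender, receiver, subject, information type, and transmission principle, and a flow is appropriate when it matches an informational norm of the context. The parameters follow Nissenbaum’s five-part scheme, but the “healthcare” context and its norms below are invented for illustration.

```python
from dataclasses import dataclass

# A minimal rendering of contextual integrity: an information flow is
# appropriate iff some norm of the context matches its five parameters.
# The healthcare norms here are hypothetical examples.

@dataclass(frozen=True)
class Flow:
    sender: str
    receiver: str
    subject: str
    info_type: str
    transmission_principle: str

CONTEXT_NORMS = {
    "healthcare": [
        Flow("patient", "physician", "patient", "symptoms", "confidentiality"),
        Flow("physician", "specialist", "patient", "diagnosis", "referral"),
    ],
}

def respects_contextual_integrity(context: str, flow: Flow) -> bool:
    return flow in CONTEXT_NORMS.get(context, [])

ok = Flow("patient", "physician", "patient", "symptoms", "confidentiality")
bad = Flow("physician", "advertiser", "patient", "diagnosis", "sale")
assert respects_contextual_integrity("healthcare", ok)
assert not respects_contextual_integrity("healthcare", bad)
```

Contrasting this sketch with the purpose-binding one makes Hildebrandt’s comparison vivid: purpose binding evaluates a use against a pre-declared end, while contextual integrity evaluates a flow against the norms of a context, only one parameter of which reflects the context’s purpose.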

The question of whether and under what conditions information flow is communication and/or data are quite deep, actually. More on that later.

For now though it must be noted that there’s a tension, perhaps a dialectical one, between purposes and norms. For Habermas, the public sphere needs to be a space of communicative action, as opposed to instrumental reason. This is because communicative action is how norms are created: through the agreement of people who bracket their individual interests to discuss collective reasons.

Nissenbaum also has a theory of norm formation, but it does not depend so tightly on the rejection of instrumental reason. In fact, it accepts the interests of stakeholders as among several factors that go into the determination of norms. Other factors include societal values, contextual purposes, and the differentiated roles associated with the context. Because contexts, for Nissenbaum, are defined in part by their purposes, this has led Hildebrandt (2013) to make direct comparisons between purpose binding and Contextual Integrity. They are similar, she concludes, but not the same.

It would be easy to say that the public sphere is a context in Nissenbaum’s sense, with a purpose, which is the formation of public opinion (which seems to be Post’s position). Properly speaking, social purposes may be broad or narrow, and specially defined social purposes may be self-referential (why not?), and indeed these self-referential social purposes may be the core of society’s “self-consciousness”. Why shouldn’t there be laws to ensure the freedom of expression within a certain context for the purpose of cultivating the kinds of public opinions that would legitimize laws and cause them to adapt democratically? We could possibly make these frameworks more precise if we could make them a little more formal and could lose some of the baggage; that would be useful theory building in line with Nissenbaum and Post’s broader agendas.

A test of this perhaps more nuanced but still teleological framework (indeed instrumental, though maybe more properly speaking pragmatic, à la Dewey, in that it can blend several different metaethical categories) is to see whether one can motivate a right to be forgotten in the public sphere by appealing to the need for communicative action, and thereby to the especially appropriate communication norms around it, and to dignitary privacy.

This doesn’t seem like it should be hard to do at all.

References

Hildebrandt, Mireille. “Slaves to big data. Or are we?.” (2013).

Hildebrandt, Mireille. “Location Data, Purpose Binding and Contextual Integrity: What’s the Message?.” Protection of Information and the Right to Privacy-A New Equilibrium?. Springer International Publishing, 2014. 31-62.

Nissenbaum, Helen. Privacy in context: Technology, policy, and the integrity of social life. Stanford University Press, 2009.

Post, Robert, Data Privacy and Dignitary Privacy: Google Spain, the Right to Be Forgotten, and the Construction of the Public Sphere (April 15, 2017). Duke Law Journal, Forthcoming; Yale Law School, Public Law Research Paper No. 598. Available at SSRN: https://ssrn.com/abstract=2953468 or http://dx.doi.org/10.2139/ssrn.2953468

Tschantz, Michael Carl, Anupam Datta, and Jeannette M. Wing. “Formalizing and enforcing purpose restrictions in privacy policies.” Security and Privacy (SP), 2012 IEEE Symposium on. IEEE, 2012.

## October 30, 2017

MIMS 2012

#### My Progress with Hand Lettering

I started hand lettering about a year and a half ago, and I thought it would be fun to see the progress I’ve made by comparing my early, crappy work to my recent work. I started hand lettering because a coworker of mine is a great letterer and I was inspired by the drawings he would make. I tried a few of his pens and found that trying to recreate the words he drew forced me to focus on the shape of the letter and the movement of the pen, which was intoxicating and meditative. So I bought a few pens and started practicing. Here’s the progress I’ve made so far.

## Early Shiz

A bunch of shitty G’s. Poor control of the pen; poor letter shapes.

Goldsmiths. Better pen control and shapes, but still pretty bad.

A lot of really inconsistent and shaky “a’s” and “n’s”.

Happy Holidays. Better, but still some pretty poor loops and letter spacing.

Some really shitty looking M’s.