School of Information Blogs

October 18, 2017

Ph.D. student

“To be great is to be misunderstood.”

A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines. With consistency a great soul has simply nothing to do. He may as well concern himself with his shadow on the wall. Speak what you think now in hard words, and to-morrow speak what to-morrow thinks in hard words again, though it contradict every thing you said to-day. — 'Ah, so you shall be sure to be misunderstood.' — Is it so bad, then, to be misunderstood? Pythagoras was misunderstood, and Socrates, and Jesus, and Luther, and Copernicus, and Galileo, and Newton, and every pure and wise spirit that ever took flesh. To be great is to be misunderstood.
— Emerson, Self-Reliance

Lately, in my serious scientific work, I've again found myself bumping up against the limits of intelligibility. This time the limit lies within a technical community: one group of scientists is, I've been advised, unfamiliar with a different technical formalism. As a new entrant, I believe that formalism would be useful for understanding the first group's domain. But pursuing it, especially in the context of funders (who need to explain things to their own bosses in very concrete terms), would be unproductive, a waste of precious time.

Recent traffic reminded me of some notes I wrote long ago in frustration with Hannah Arendt, and I now find something apt in her comments. Science in the mode of what Kuhn calls "normal science" must be intelligible to itself and its benefactors. But that is all. It need not be generally intelligible to other scientists; it need not understand other scientists. It need only be a specialized and self-sustaining practice, a discipline.

Programming (which I still study) is actually quite different from science in this respect. Because software code is a medium of communication among programmers, yet is foremost interpreted by a compiler, a programmer relates to other programmers differently from the way scientists relate to one another. To some extent the productive formal work has moved over into software, leaving science less formal and more empirical. This is, in my anecdotal experience, now true even in computer science, once a bastion of formalism.

Arendt's criticism of scientists, that they should be politically distrusted because "they move in a world where speech has lost its power", is therefore not precisely true, because scientific operations certainly are mediated by language.

But this is normal science. Perhaps the scientists whom Arendt distrusted politically were not normal scientists, but rather the sorts of scientists responsible for scientific revolutions. These scientists must not have used language that was readily understood by their peers, at least initially, because they were creating new concepts, new ideas.

Perhaps these kinds of scientists are better served by existentialism, as in Nietzsche’s brand, as an alternative to politics. Or by Emerson’s transcendentalism, which Sloterdijk sees as very spiritually kindred to Nietzsche but more balanced.


by Sebastian Benthall at October 18, 2017 03:14 AM

October 17, 2017

Ph.D. student

A quick recap: from political to individual reasoning about ends

So to recap:

Horkheimer warned in Eclipse of Reason that formalized subjective reason, which optimizes means, was going to eclipse "objective reason" about social harmony, the good life, the "ends" that really matter. Technical efficacy, which is capitalism, which is AI, would expose how objective reason is based in mythology, and so society would be senseless and miserable forever.

There was at one point a critical reaction against formal, technical reason, the Science Wars of the '90s; though it continues to have intellectual successors, it has been for the most part self-defeating and powerless. Technical reasoning is powerful because it is true, not true because it is powerful.

It remains an open question whether it's possible to have a society that steers itself according to something like objective reason. One could argue that Habermas's project of establishing communicative action as the grounds for legitimate pluralistic democracy was an attempt to show the possibility of objective reason after all. This is, for some reason, an unpopular view in the United States, where democracy is often seen as a way of mediating agonistic interests rather than finding common ones.

But Horkheimer's Frankfurt School is just one particularly depressing and insightful view. Maybe there is some other way to go. For example, one could decide that society has always been disappointing, and that determining one's true "ends" is an individual, rather than collective, endeavor. Existentialism is one such body of work that posits a substantive moral theory (or at least works at one) that is distrustful of political as opposed to individual solutions.


by Sebastian Benthall at October 17, 2017 03:29 AM

MIMS 2014

Front Page Clues

They say the best place to hide a dead body is on page two of the Google search results. I’d argue that a similar rule applies to reading the news, especially online. If a story is not on the landing page of whatever news site I’m looking at, chances are I’m not gonna find it. All this is to say: news outlets wield considerable power to direct our attention where they want it simply by virtue of how they organize content on their sites.

During presidential elections, the media is often criticized for giving priority to the political horse race between dueling candidates, preoccupying us with pageantry over policy. But to what extent is this true? And if it is true, which specific policy issues suffer during election cycles? Do some suffer more than others? What are we missing out on because we are too busy keeping up with the horse race instead?

If you want to go straight to the answers to these questions (and some other interesting stuff), skip down to the Findings section of the post. But for anyone who’s interested in the how behind the results (both in terms of data and methodology), the next two sections are for you.

Data

Lucky for us (or maybe just me), the New York Times generously makes a ton of its data available online for free, easily retrievable via calls to a REST API (specifically, their Archive API). Just a few dozen calls and I was in business. This amazing resource not only has information going back to 1851 (!!), it also includes keywords from each article as part of its metadata. Even better, since 2006, they have ranked the keywords in each article by their importance. This means that for any article that is keyword-ranked, you can easily extract its main topic—whatever person, place, or subject it might be.
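For anyone who wants to pull the same data, the mechanics are simple: one GET request per month of the archive. Here's a minimal sketch in R of how such a call could look; the endpoint and response fields reflect my reading of the Archive API, so check them (and supply your own API key) against the current documentation.

```r
# Sketch: fetch one month of NYT article metadata from the Archive API.
# Assumes an API key from developer.nytimes.com; field layout per my
# understanding of the API -- verify against the current docs.
library(httr)
library(jsonlite)

get_archive_month <- function(year, month, api_key) {
  url <- sprintf("https://api.nytimes.com/svc/archive/v1/%d/%d.json", year, month)
  resp <- GET(url, query = list(`api-key` = api_key))
  stop_for_status(resp)
  # 'docs' comes back as one row per article, with keywords as a list-column
  fromJSON(content(resp, as = "text", encoding = "UTF-8"),
           flatten = TRUE)$response$docs
}

# e.g., all of 2016, pausing between calls to respect the rate limit:
# docs_2016 <- lapply(1:12, function(m) { Sys.sleep(6); get_archive_month(2016, m, key) })
```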

Having ranked keywords makes this analysis much easier. For one thing, we don’t have to sift through words from mountains of articles in order to surmise what each article is about using fuzzy or inexact NLP methods. And since they’ve been ranking keywords since 2006, this gives us three presidential elections to include as part of our analysis (2008, 2012, and 2016).

The other crucial dimension included in the NYT article metadata is the print page. Personally, I don’t ever read the NYT on paper anymore (or any newspaper, for that matter—they’re just too unwieldy), so you might argue that the print page is irrelevant. Possibly, but unfortunately we don’t have data about placement on the NYT’s website. And moreover, I would argue that the print page is a good proxy for this. It gets at the essence of what we’re trying to measure, which is the importance NYT editors place on a particular topic over others.
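To make that concrete, here is roughly how the per-edition variables could be assembled from the metadata: a page-one flag for the topic of interest and, for everything else on page one, counts by news desk. This is an illustrative sketch rather than my exact pipeline; the field names (keywords, rank, print_page, news_desk, pub_date), the healthcare keyword string, and the election window are stand-ins to be checked against the real data.

```r
library(dplyr)
library(tidyr)

docs <- bind_rows(docs_2016)  # stack the monthly pulls from the sketch above

# One row per article: date, page-one flag, and the rank-1 keyword as main topic.
articles <- docs %>%
  mutate(pub_date   = as.Date(pub_date),
         front_page = !is.na(print_page) & as.character(print_page) == "1",
         main_topic = vapply(keywords, function(kw) {
           if (is.null(kw) || NROW(kw) == 0) return(NA_character_)
           kw$value[which.min(kw$rank)]
         }, character(1)))

# Counts of page-one articles by originating news desk, one row per edition.
desk_counts <- articles %>%
  filter(front_page) %>%
  count(pub_date, news_desk) %>%
  pivot_wider(names_from = news_desk, values_from = n,
              values_fill = 0, names_prefix = "desk_")

# Edition-level data: was topic t on page one, plus desk counts and a 2016 dummy.
editions <- articles %>%
  group_by(pub_date) %>%
  summarise(topic_on_p1 = as.integer(any(front_page &
              main_topic == "Health Insurance and Managed Care", na.rm = TRUE))) %>%
  left_join(desk_counts, by = "pub_date") %>%
  mutate(elec_2016 = as.integer(pub_date >= as.Date("2016-07-01") &
                                pub_date <= as.Date("2016-11-08")))
# (elec_2008 and elec_2012 are built the same way; is_election is their union.)
```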

Model

\mathrm{logit}(\pi_t) = \log\left(\frac{\pi_t}{1-\pi_t}\right) = \alpha + \sum_{k=1}^{K} \beta_{k} \cdot Desk_k + \beta \cdot is\_election

A logistic regression model underpins the analysis here. The log-odds that a topic \textit{t} will appear on the front page of the NYT is modeled as a function of the other articles appearing on the front page (the Desk variables, more on those below), as well as a dummy variable indicating whether or not the paper was published during an election cycle.

Modeling the other articles on the front page is essential since they have obvious influence over whether topic \textit{t} will make the front page on a given day. But in modeling these other articles, a choice is made to abstract from the topics of the articles to the news desk from which they originated. Using the topics themselves unfortunately leads to two problems: sparsity and singularity. Singularity is a problem that arises when your data has too many variables and too few observations. Fortunately, there are statistical methods to overcome this issue—namely penalized regression. Penalized regression is often applied to machine learning problems, but recent developments in statistics have extended the methodology of significance testing to penalized models like ridge regression. This is great since we are actually concerned with interpreting our model rather than just pure prediction—the more common aim in machine learning applications.

Ultimately though, penalized methods do not overcome the sparsity problem. Simply put, there are too many other topics that might appear (and appear too infrequently) on the front page to get a good read on the situation. Therefore as an alternative, we aggregate the other articles on the front page according to the news desk they came from (things like Foreign, Style, Arts & Culture, etc). Doing so allows our model to be readily interpretable while retaining information about the kinds of articles that might be crowding out topic \textit{t}.

The  is\_election variable is a dummy variable indicating whether or not the paper was published in an election season. This is determined via a critical threshold illustrated by the red line in the graph below. The same threshold was applied across all three elections.

[Figure: time series of 2016 election coverage, with the red line marking the election-season threshold]

In some specifications, the is\_election variable might be broken into separate indicators, one for each election. In other specifications, these indicators might be interacted with one or several news desk variables—though only when the interactions add explanatory value to the overall model as determined by an analysis of deviance.
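In code, these specifications are ordinary glm() fits on the edition-level data sketched above (the desk and election column names are carried over from that hypothetical sketch, and the particular desks shown are just examples):

```r
# Baseline: front-page desk counts plus a single election-season dummy.
fit_base <- glm(topic_on_p1 ~ desk_National + desk_Foreign + desk_Sports +
                  desk_Culture + is_election,
                data = editions, family = binomial)

# Richer specification: one dummy per election and a 2016 x National interaction.
fit_full <- glm(topic_on_p1 ~ desk_National + desk_Foreign + desk_Sports +
                  desk_Culture + elec_2008 + elec_2012 + elec_2016 +
                  elec_2016:desk_National,
                data = editions, family = binomial)

# Keep the interaction only if it adds explanatory value (analysis of deviance).
fit_noint <- update(fit_full, . ~ . - elec_2016:desk_National)
anova(fit_noint, fit_full, test = "Chisq")
summary(fit_full)
```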

Two other modeling notes. First, for some topics, the model might suffer from quasi or complete separation. This occurs when, for example, all instances of topic \textit{t} appearing on page one occur when there are also fewer than two Sports desk articles appearing on page one. Separation can mess up logistic regression coefficient estimates, but fortunately, a guy named Firth (not Colin, le sigh) came up with a clever workaround, which is known as Firth Regression. In cases where separation is an issue, I switch out the standard logit model for Firth's alternative. This is easily done using R's logistf package, and reinforces why I favor R over python when it comes to doing serious stats.
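For topics where separation shows up, the swap is nearly mechanical, since logistf mirrors glm's formula interface (again using the hypothetical column names from above):

```r
# Firth's penalized-likelihood logistic regression for separated topics.
library(logistf)

fit_firth <- logistf(topic_on_p1 ~ desk_National + desk_Foreign + desk_Sports +
                       desk_Culture + elec_2016,
                     data = editions)
summary(fit_firth)
```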

Second, it should be pointed out that our model does run somewhat afoul of one of the basic assumptions of logistic regression—namely, independence. In regression models, it is regularly assumed that the observations are independent of (i.e. don’t influence) each other. That is probably not true in this case, since news cycles can stretch over the span of several newspaper editions. And whether a story makes front page news is likely influenced by whether it was on the front page the day before.

Model-wise, this is a tough nut to crack since the data is not steadily periodic, as it would be with regular time series data. It might be one, two, or sixty days between appearances of a given topic. In the absence of a completely different approach, I test the robustness of my findings by including an additional variable in my specification—a dummy indicating whether or not topic \textit{t} appeared on the front page the day before.
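Concretely, the robustness check just adds a lagged indicator to the specification. One caveat baked into the sketch below: lag() runs over consecutive editions rather than calendar days, which for a daily paper amounts to the same thing.

```r
# Was topic t on page one in the previous edition?
editions <- editions %>%
  arrange(pub_date) %>%
  mutate(on_p1_yesterday = lag(topic_on_p1, default = 0L))

fit_lagged <- update(fit_full, . ~ . + on_p1_yesterday)
summary(fit_lagged)
```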

Findings

For this post, I focused on several topics I believe are consistently relevant to national debate, but which I suspected might get less attention during a presidential election cycle. It appears the 2016 election cycle was particularly rough on healthcare coverage. The model finds a statistically significant effect (\beta = -4.519; p = 0.003), which means that for the average newspaper, the probability that a healthcare article made the front page dropped by 60% during the 2016 election season—from a probability of 0.181 to 0.071. This calculation is made by comparing the predicted values with and without the 2016 indicator activated—while holding all other variables fixed at their average levels during the 2016 election season.
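For the curious, that comparison boils down to two predict() calls on an "average 2016 election-season edition", toggling the 2016 indicator while everything else stays at its in-season mean. A sketch, reusing the hypothetical column names from earlier:

```r
# Average front-page composition during the 2016 election season.
avg_2016 <- editions %>%
  filter(elec_2016 == 1) %>%
  summarise(across(c(starts_with("desk_"), on_p1_yesterday), mean),
            elec_2008 = 0, elec_2012 = 0)

p_in  <- predict(fit_lagged, newdata = mutate(avg_2016, elec_2016 = 1), type = "response")
p_out <- predict(fit_lagged, newdata = mutate(avg_2016, elec_2016 = 0), type = "response")

c(in_season = unname(p_in), out_of_season = unname(p_out),
  relative_drop = unname(1 - p_in / p_out))
```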

Another interesting finding is the significant coefficient (p = 0.045) found on the interaction term between the 2016 election and articles from the NYT’s National desk, which is actually positive (\beta = 0.244).  The National desk is one of the top-five story-generating news desks at the New York Times, so you would expect that more National stories would come at the expense of just about any other story. And this is indeed the case outside of the 2016 election season, where the probability healthcare will make the front page drops 45% when an additional National article is run on the front page of the average newspaper. During the 2016 election season, however, the probability actually increases by 25%.  These findings were robust to whether or not healthcare was front page news the day before.

The coefficient flip on National coverage here is curious, and raises the question as to why it might be happening. Perhaps NYT editors had periodic misgivings about the adequacy of their National coverage during the 2016 election and decided to make a few big pushes to give more attention to domestic issues including healthcare. Finding the answer requires more digging. In the end however, even if healthcare coverage was buoyed by other National desk articles, it still suffered overall during the 2016 election.

The other topic strongly associated with election cycles is gun control (\beta = -3.822; p = 0.007). Articles about gun control are 33% less likely to be run on the front page of the average newspaper during an election cycle. One thing that occurred to me about gun control, however, is that it generally receives major coverage boosts in the wake of mass shootings. It's possible that the association here is being driven by a dearth of mass shootings during presidential elections, but I haven't looked more closely to see whether a drop-off in mass shootings during election cycles actually exists.

Surprisingly, coverage about the U.S. economy is not significantly impacted by election cycles, which ran against my expectations. However, coverage about the economy was positively associated with coverage about sports, which raises yet more interesting questions. For example, does our attention naturally turn to sports when the economic going is good?

Unsurprisingly, elections don’t make a difference to coverage about terrorism. However, when covering stories about terrorism in foreign countries, other articles from the Foreign desk significantly influence whether the story will make the front page cut (\beta = -1.268; p = 8.49e-12). Starting with zero Foreign stories on page one, just one other Foreign article will lower the chances that an article about foreign terrorism will appear on page one by 40%. In contrast, no news desk has any systematic influence on whether stories about domestic terrorism make it on to page one.

Finally, while elections don’t make a difference to front page coverage about police brutality and misconduct, interestingly, articles from the NYT Culture desk do. There is a significant and negative effect (\beta = -0.364; p = 0.033), which for the average newspaper, means a roughly 17% drop in the probability that a police misconduct article will make the front page with an additional Culture desk article present. Not to knock the Culture desk or nothing, but this prioritization strikes me as somewhat problematic.

In closing, while I have managed to unearth several insights in this blog post, many more may be surfaced using this rich data source from the New York Times. Even if some of these findings raise questions about how the NYT does its job, it is a testament to the paper as an institution that they are willing to open themselves to meta-analyses like this. Such transparency enables an important critical discussion about the way we consume our news. More informed debate—backed by hard numbers—can hopefully serve the public good in an era when facts in the media are often under attack.



by dgreis at October 17, 2017 01:53 AM

October 16, 2017

Ph.D. student

Notes on Sloterdijk’s “Nietzsche Apostle”

Fascisms, past and future, are politically nothing than insurrections of energy-charged losers, who, for a time of exception, change the rules in order to appear as victors.
— Peter Sloterdijk, Nietzsche Apostle

Speaking of existentialism, today I finished reading Peter Sloterdijk’s Semiotext(e) issue, “Nietzsche Apostle”. A couple existing reviews can better sum it up than I can. These are just some notes.

Sloterdijk has a clear-headed, modern view of the media and cultural complexes around writing and situates his analysis of Nietzsche within these frames. He argues that Nietzsche created an “immaterial product”, a “brand” of individualism that was a “market maker” because it anticipated what people would crave when they realized they were allowed to want. He does this through a linguistic innovation: blatant self-aggrandizement on a level that had been previously taboo.

One of the most insightful parts of this analysis is Sloterdijk's understanding of the "eulogistic function" of writing, something about which I have been naive. He's pointing to the way writing increases its authority by referencing other authorities and borrowing some of their social capital. This was once done, in ancient times, through elaborate praises of kings and ancestors. There have been and continue to be (sub)cultures where references to God or gods or prophets or scriptures give a text authority. In the modern West, among the highly educated, this is no longer the case. However, in the academy, citations of earlier scholars serve some of this function: citing a classic work still gives scholarship some gravitas, though I've noted this seems to be less and less the case all the time. Most academic work these days serves its 'eulogistic function' in a much more localized way, mutually honoring peers within a discipline and the still living and active professors who might have influence over one's hiring, grants, and/or tenure.

Sloterdijk's points about the historical significance of Nietzsche are convincing, and he succeeds in building an empathetic case for the controversial and perhaps troubled figure. Sloterdijk also handles most gracefully the dangerous aspects of Nietzsche's legacy, most notably when his work, in a redacted and revised version, was co-opted by the Nazis. Partly through references to Nietzsche's text and partly by illustrating the widespread phenomenon of self-serving redactionist uses of hallowed texts (he goes into depth about Jefferson's bible, for example), he shows that any use of Nietzsche's work to support a movement of nationalist resentment is a blatant misappropriation.

Indeed, Sloterdijk's discussion of Nietzsche and fascism is prescient for U.S. politics today (I've read this volume was based on a lecture in 2000). For Sloterdijk, both far right and far left politics are often "politics of resentment", which is why it is surprisingly easy for people to switch from one side to the other when the winds and opportunities change. Nietzsche famously denounced "herd morality" as that system of morality that deplores the strong and maintains the moral superiority of the weak. In Nietzsche's day, this view was represented by Christianity. Today, it is (perhaps) represented by secular political progressivism, though it may just as well be represented by those reactionary movements that feed on resentment towards coastal progressive elites. All these political positions that are based on arguments about who is entitled to what and who isn't getting their fair share are the same for Sloterdijk's Nietzsche. They miss the existential point.

Rather, Nietzsche advocates for an individualism that is free to pursue self-enhancement despite social pressures to the contrary. Nietzsche is anti-egalitarian, at least in the sense of not prioritizing equality for its own sake. Rather, he proposes a morality that is libertarian without any need for communal justification through social contract or utilitarian calculus. If there is social equality to be had, it is through the generosity of those who have excelled.

This position is bound to annoy the members of any political movement whose modus operandi is mobilization of resentful solidarity. It is a rejection of that motive and tactic in favor of more joyful and immediate freedom. It may not be universally accessible; it does not brand itself that way. Rather, it’s a lifestyle option for “the great”, and it’s left open who may self-identify as such.

Without judging its validity, it must be noted that it is a different morality than those based on resentment or high-minded egalitarianism.


by Sebastian Benthall at October 16, 2017 01:35 AM

October 10, 2017

MIDS student

Privacy matters of nations… conclusion

Apologies for the delay in posting this piece.

Espionage

Espionage, or spying, stands in stark contrast with intelligence. Intelligence is essentially the gathering of information, public or private in nature, whereas espionage involves obtaining classified information through human or other sources. Stephen Grey, in his wonderfully written masterpiece "The New Spymasters", calls spies "the best ever liars".

Espionage, by definition, may violate a number of international treaties concerning human rights as well as civil liberties, such as the right to privacy, among others. There are many national laws, such as the Espionage Act of 1917/Sedition Act (United States), that intend to tame incidents of a nation's sensitive information being leaked to other nations while remaining silent about that nation's own collection of information about others. Edward Snowden, the NSA whistleblower, was charged under this act. Recently, Germany passed a controversial espionage act that allows its intelligence agency (BND) to spy under ambiguous conditions such as "early interception of dangers". In fact, none of the international laws have been able to address espionage as a matter of direct concern. As a 2007 paper on espionage and international law states, "Espionage and international law are not in harmony." Diplomatic missions, under the protection of diplomatic immunity, are a common way of carrying out such clandestine activities. In fact, espionage is probably the only institutionalised clandestine activity carried out for political or military gains. It should be noted that spying occurs not only against rogue nations or known enemy states but also against so-called allies.

As mentioned earlier, I will not be focussing on internal surveillance carried out by governments on their own citizens.

This field of spying was once the domain of government agencies, but the equation is now more fluid and complex, with private players like WikiLeaks coming into play.

The question is why espionage has become a necessary part of a nation's policy. Thomas Fingar, in his 2011 publication, points out that the short answer is "to reduce uncertainty". Reducing uncertainty involves research and analysis to gain "new knowledge", i.e. better understanding and new insights derived from existing information in one's possession, or efforts to substantiate or disconfirm a hunch regarding another nation. Historical origins aside, all countries feel that they may be left behind if they do not have inside information about what their enemies and allies are up to at all times. In this sense, espionage acts as a deterrent along the same lines as the acquisition of nuclear weapons. It also acts as an equaliser between countries of uneven economic might, and it seems to be the only way to get reliable information about "rogue nations" such as North Korea.

From the days of its origin till the very recent past, the intelligence community in the US was focussed on the actions of other nation states (such as the former USSR during the Cold War era). The September 11, 2001 attacks brought attention to non-state entities that seemed to be causing more damage and disruption. This evolution of the sources of national threat convinced nations to be more vigilant, i.e. to increase spying.

However, it is important to understand that more espionage does not imply a more secure nation. The field of international espionage is rife with examples of failures. One such case was the Iraq invasion by the USA on the basis of the Weapons of Mass Destruction National Intelligence Estimates produced in 2002. Chemical weapons analysts mistook a fire truck for a "special" truck for the transfer of munitions. This was a very costly mistake in both financial and human terms.

It is easy to see that the issues in the field of espionage, especially given its pervasiveness in today's technologically connected world, are not a simple choice of black or white but span all possible hues of grey.

Ethical Spy

The classification of moral and immoral acts in the field of spying is difficult to achieve due to the lack of clarity in the law. As an example, the National Intelligence Strategy of the United States of America says very little about the ethical code to be followed by agents in the field. It states that the members of the Intelligence Community (IC) need to uphold the "Principles of Professional Ethics for the Intelligence Community", which include respect for civil liberties and integrity. These seem applicable only to domestic matters and appear contrary to the job requirements of a spy placed in a foreign country by, for example, the United States. More details can be found here. It is an impossible task to list all possible scenarios that a spy may face when on duty, especially in relation to a foreign nation. However, it is imperative to acknowledge that in practice the guiding principles do not remain the same in the two situations. Once this is acknowledged, drafting basic guidelines for conduct becomes an easier task. Such a moral framework is essential to contain the harm done to international relationships by an agent's on-field actions based on his or her own moral compass. Author Bruce Schneier, in his book "Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World" (2015), suggests that the NSA, for example, should in fact be split into two divisions – one focussed on surveillance and the other on espionage – to clearly demarcate guidelines and duties.

Such a framework can be built by leaning on principles used in related fields such as competitive intelligence. The Society for Competitive Intelligence lists detailed guidelines and a code of conduct to differentiate competitive intelligence from corporate espionage. The laws for corporate espionage are not very well developed, and hence companies need to fall back on their own codes of ethics. Since the scope and effects of corporate espionage are much smaller than those of national espionage activities, it is imperative that guidelines for the latter be built with a sense of urgency. However, as with any other global initiative, this will succeed only if ratified by all nations – a monumental if not an impossible task.

Reuse & Recycle?

Is it possible to reuse the privacy laws and frameworks that work for the protection of an individual's privacy to protect the privacy of a nation against peeping toms? It would be useful to see if the current privacy definitions and laws work well at aggregated levels beyond the individual.

Extensive work has been done to provide frameworks for the definition and protection of an individual's privacy. In fact, the laws have become very specialized and focussed – e.g. the law for the protection of children's privacy (Children's Online Privacy Protection Act of 1998). These laws cover a wide range of fields such as healthcare, social media, and finance, among others.

As the guiding concerns and effects are the same, these principles can be easily extended to higher aggregates such as a family unit. For example, the concerns raised in Google's "Wi-Fi sniffing debacle" were linked to the tracking of the wi-fi payloads of various homes as the Street View cars were being driven around. The payload was linked to the computer and not necessarily to an individual. The Federal Communications Commission made references to the federal Electronic Communications Privacy Act (ECPA) in its report. Similar concerns were raised elsewhere in the world in relation to this unconsented collection of data. Another incident which highlighted the concerns for addressing family-level privacy was the famous HeLa genome study. Henrietta Lacks was a woman from Baltimore suffering from cervical cancer. Her cells were taken in 1951 without her consent. Scientists have since been studying her genome sequences to solve some challenging medical problems. By publishing the genome sequence of her cells, the scientists had inadvertently exposed this private aspect of everyone connected to Henrietta by genes, i.e. her family. The study had to be taken down when it became clear that the family's consent had not been sought. These cases highlight the fact that the guidelines that protect individuals can also be used as guiding principles in the context of families as a unit.

As the next level of aggregation, we look at society as a unit. Society, as a concept, can be quite ambiguous. We assume that any group of people bound together by a common thread, such as residents of a given neighbourhood or consumers of a certain product, can be thought of as belonging to a society. For example, in the case of the data breach at the website Ashley Madison, the whole user group's privacy (or, in this case, secrecy) was at stake. Hackers had threatened to release private information of many of its users unless the website was shut down. While this was related to the personally identifiable information of each individual, the issue escalated drastically as it affected a majority of the 36 million users of the website. The Privacy Commissioner of Canada stated that the Toronto-based company had in fact breached many privacy laws in Canada and elsewhere. Thus, any privacy violation that is not specific to one particular individual but to a much larger group of which the individual is a member is also viewed through the lens of the same privacy laws. There are many other instances of "us vs the nosy corporates" that have been spoken about recently. For example, due to the privacy setup and the inherent nature of the product, the location of all users of Foursquare can be tracked in real time. Additionally, the concepts of society and privacy are quite intertwined, as pointed out by sociologist Barrington Moore (1984): "the need for privacy is a socially created need. Without society there would be no need for privacy." As an interesting observation, Dan Solove states, "Society is fraught with conflict and friction. Individuals, institutions, and governments can all engage in activities that have problematic effects on the lives of others."

Let us now turn our attention to the next higher level of aggregation – nations. The scale of impact of any privacy violation here is enormous, as it affects not just the nation's population but, depending on the nature of the violation, also its allies and enemy states, and it can eventually have a global impact (e.g. the Weapons of Mass Destruction "discovery" in Iraq). Additional complications arise from the fact that a nation's privacy affects economic development, defence strategies, regional power imbalances, and other worldwide concerns. Thus, while we can take inspiration from existing frameworks, the scale of impact makes it imperative to modify them dramatically.

Summary

As Sun Microsystems' CEO Scott McNealy said in 1999, "You have zero privacy anyway. Get over it." The nations of the world seem to have accepted this as a reality and are in a race to outdo each other. A new challenge in this arena is the rise of non-state entities, such as terrorist organizations and private players like WikiLeaks, which are forcing a rethink of the level of cooperation required between nations against such "outsiders". In spite of the popular notion fostered partly by spy thrillers in mainstream cinema, intelligence agencies prefer to use publicly available data due to its low risk and low cost. It may be combined with some clandestinely acquired information to improve its accuracy. There are only a few cases, such as terrorist activities, where the agencies have to rely exclusively on the latter. More details can be found here. Taking a cue from Raab and Wright's four-level Privacy Impact Assessment (PIA) from their 2012 work, the inclusion of a PIA within the relevant intelligence organization will influence its culture, structure, and behaviour – helping make the necessity of an espionage operation more palpable, even though the PIA cannot act as a panacea for all types of privacy violations. A PIA, in its usual form, can only assess the impact that a given functionality has on an individual's privacy. However, for a measure with widespread potential impact like espionage, the PIA needs to be done at multiple levels. These levels are built incrementally on top of one another. In the suggested approach, the following four levels are a must in order for the PIA to be effective.

Level 1: PIA1. This follows the common wisdom of assessing the impact of a spying operation on any individual who may have been a subject of it, at a personal level.

Level 2: PIA2. This covers the impacts from PIA1 and additionally the effect such an operation will have on the individual's social and political standing and relationships.

Level 3: PIA3. This includes the PIA2 impacts as well as the impact on any groups or categories the individual may belong to, for example "vulnerable populations" such as children and adults who are not currently in a decision-making capacity.

Level 4: PIA4. This includes the PIA3 impacts as well as the impact of spying on the workings of society and the political system per se.

It is important to note that different spying activities will affect the PIAs in different ways. It is vital to understand the context before making any recommendations based on the PIA. As expected, the severity of effects on privacy will differ across the spectrum of espionage activities. However, it is necessary to have this structure in place to provide a common thread of assessment across different agents and departments, bringing uniformity of structure and ease of transfer of information across sister agencies.

Imbalances of economic power between countries also bring in an additional level of fear, i.e. the fear of having non-existent bargaining power in any bilateral or multilateral dispute. This fear can cause countries to behave irrationally, and hence it is imperative for economically advanced nations to be more active in region-based or commerce-based organisations. This will help ease concerns for the smaller nations in the group. Additionally, more efforts are being made to establish regional cooperation on intelligence exercises, especially for battling issues like terrorism.

 


by arvinsahni at October 10, 2017 10:52 AM

October 06, 2017

Ph.D. student

Existentialism in Design: Comparison with “Friendly AI” research

Turing Test [xkcd]

I made a few references to Friendly AI research in my last post on Existentialism in Design. I positioned existentialism as an ethical perspective that contrasts with the perspective taken by the Friendly AI research community, among others. This prompted a response by a pseudonymous commenter (in a sadly condescending way, I must say) who linked me to a post, "Complexity of Value", on what I suppose you might call the elite rationalist forum Arbital. I'll take this as an invitation to elaborate on how I think existentialism offers an alternative to the Friendly AI perspective on ethics in technology, and particularly the ethics of artificial intelligence.

The first and most significant point of departure between my work on this subject and Friendly AI research is that I emphatically don’t believe the most productive way to approach the problem of ethics in AI is to consider the problem of how to program a benign Superintelligence. This is for reasons I’ve written up in “Don’t Fear the Reaper: Refuting Bostrom’s Superintelligence Argument”, which sums up arguments made in several blog posts about Nick Bostrom’s book on the subject. This post goes beyond the argument in the paper to address further objections I’ve heard from Friendly AI and X-risk enthusiasts.

What superintelligence gives researchers is a simplified problem. Rather than deal with many of the inconvenient contingencies of humanity’s technically mediated existence, superintelligence makes these irrelevant in comparison to the limiting case where technology not only mediates, but dominates. The question asked by Friendly AI researchers is how an omnipotent computer should be programmed so that it creates a utopia and not a dystopia. It is precisely because the computer is omnipotent that it is capable of producing a utopia and is in danger of creating a dystopia.

If you don’t think superintelligences are likely (perhaps because you think there are limits to the ability of algorithms to improve themselves autonomously), then you get a world that looks a lot more like the one we have now. In our world, artificial intelligence has been incrementally advancing for maybe a century now, starting with the foundations of computing in mathematical logic and electrical engineering. It proceeds through theoretical and engineering advances in fits and starts, often through the application of technology to solve particular problems, such as natural language processing, robotic control, and recommendation systems. This is the world of “weak AI”, as opposed to “strong AI”.

It is also a world where AI is not the great source of human bounty or human disaster. Rather, it is a form of economic capital with disparate effects throughout the total population of humanity. It can be a source of inspiring serendipity, banal frustration, and humor.

Let me be more specific, using the post that I was linked to. In it, Eliezer Yudkowsky posits that a (presumably superintelligent) AI will be directed to achieve something, which he calls "value". The post outlines a "Complexity of Value" thesis. Roughly, this means that the things that we want AI to do cannot be easily compressed into a brief description. For an AI to not be very bad, it will need to either contain a lot of information about what people really want (more than can be easily described) or collect that information as it runs.

That sounds reasonable to me. There’s plenty of good reasons to think that even a single person’s valuations are complex, hard to articulate, and contingent on their circumstances. The values appropriate for a world dominating supercomputer could well be at least as complex.

But so what? Yudkowsky argues that this thesis, if true, has implications for other theoretical issues in superintelligence theory. But does it address any practical questions of artificial intelligence problem solving or design? That it is difficult to mathematically specify the whole of value or normativity, and that to attempt to do so one would need a lot of data about humanity in its particularity, is a point that has been apparent to ethical philosophy for a long time. It's a surprise or perhaps a disappointment only to those who must mathematize everything. Articulating this point in terms of Kolmogorov complexity does not particularly add to the insight so much as translate it into an idiom used by particular researchers.

Where am I departing from this with “Existentialism in Design”?

Rather than treat “value” as a wholly abstract metasyntactic variable representing the goals of a superintelligent, omniscient machine, I’m approaching the problem more practically. First, I’m limiting myself to big sociotechnical complexes wherein a large number of people have some portion of their interactions mediated by digital networks and data centers and, why not, smartphones and even the imminent dystopia of IoT devices. This may be setting my work up for obsolescence, but it also grounds the work in potential action. Since these practical problems rely on much of the same mathematical apparatus as the more far-reaching problems, there is a chance that a fundamental theorem may arise from even this applied work.

That restriction on hardware may seem banal; but it’s a particular philosophical question that I am interested in. The motivation for considering existentialist ethics in particular is that it suggests new kinds of problems that are relevant to ethics but which have not been considered carefully or solved.

As I outlined in a previous post, many ethical positions are framed either in terms of consequentialism, evaluating the utility of a variety of outcomes, or deontology, concerned with the consistency of behavior with more or less objectively construed duties. Consequentialism is attractive to superintelligence theorists because they imagine their AIs to have the ability to cause any consequence. The critical question is how to give it a specification that leads to the best or adequate consequences for humanity. This is a hard problem, under their assumptions.

Deontology is, as far as I can tell, less interesting to superintelligence theorists. This may be because deontology tends to be an ethics of human behavior, and for superintelligence theorists human behavior is rendered virtually insignificant by superintelligent agency. But deontology is attractive as an ethics precisely because it is relevant to people’s actions. It is intended as a way of prescribing duties to a person like you and me.

With Existentialism in Design (a term I may go back and change in all these posts at some point; I’m not sure I love the phrase), I am trying to do something different.

I am trying to propose an agenda for creating a more specific goal function for a limited but still broad-reaching AI, assigning something to its ‘value’ variable, if you will. Because the power of the AI to bring about consequences is limited, its potential for success and failure is also more limited. Catastrophic and utopian outcomes are not particularly relevant; performance can be evaluated in a much more pedestrian way.

Moreover, the valuations internalized by the AI are not to be done in a directly consequentialist way. I have suggested that an AI could be programmed to maximize the meaningfulness of its choices for its users. This is introducing a new variable, one that is more semantically loaded than “value”, though perhaps just as complex and amorphous.

Particular to this variable, "meaningfulness", is that it is a feature of the subjective experience of the user, or human interacting with the system. It is only secondarily or derivatively an objective state of the world that can be evaluated for utility. To unpack it into a technical specification, we will require a model (perhaps a provisional one) of the human condition and what makes life meaningful. This very well may include such things as autonomy, or the ability to make one's own choices.

I can anticipate some objections along the lines that what I am proposing still looks like a special case of more general AI ethics research. Is what I’m proposing really fundamentally any different than a consequentialist approach?

I will punt on this for now. I’m not sure of the answer, to be honest. I could see it going one of two different ways.

The first is that yes, what I'm proposing can be thought of as a narrow special case of a more broadly consequentialist approach to AI design. However, I would argue that the specificity matters because of the potency of existentialist moral theory. The project of specifying the latter as a kind of utility function suitable for programming into an AI is in itself a difficult and interesting problem, without it necessarily overturning the foundations of AI theory itself. It is worth pursuing at the very least as an exercise and beyond that as an ethical intervention.

The second case is that there may be something particular about existentialism that makes encoding it different from encoding a consequentialist utility function. I suspect, but leave to be shown, that this is the case. Why? Because existentialism (which I haven’t yet gone into much detail describing) is largely a philosophy about how we (individually, as beings thrown into existence) come to have values in the first place and what we do when those values or the absurdity of circumstances lead us to despair. Existentialism is really a kind of phenomenological metaethics in its own right, one that is quite fluid and resists encapsulation in a utility calculus. Most existentialists would argue that at the point where one externalizes one’s values as a utility function as opposed to living as them and through them, one has lost something precious. The kinds of things that existentialism derives ethical imperatives from, such as the relationship between one’s facticity and transcendence, or one’s will to grow in one’s potential and the inevitability of death, are not the kinds of things a (limited, realistic) AI can have much effect on. They are part of what has been perhaps quaintly called the human condition.

To even try to describe this research problem, one has to shift linguistic registers. The existentialist and AI research traditions developed in very divergent contexts. This is one reason to believe that their ideas are new to each other, and that a synthesis may be productive. In order to accomplish this, one needs a charitably considered, working understanding of existentialism. I will try to provide one in my next post in this series.


by Sebastian Benthall at October 06, 2017 01:15 PM

October 03, 2017

Ph.D. student

“The Microeconomics of Complex Economies”

I’m dipping into The microeconomics of complex economies: Evolutionary, institutional, neoclassical, and complexity perspectives, by Elsner, Heinrich, and Schwardt, all professors at the University of Bremen.

It is a textbook, as one would teach a class from. It is interesting because it is self-consciously written as a break from neoclassical microeconomics. According to the authors, this break had been a long time coming but the last straw was the 2008 financial crisis. This at last, they claim, showed that neoclassical faith in market equilibrium was leaving something important out.

Meanwhile, “heterodox” economics has been maturing for some time in the economics blogosphere, while complex systems people have been interested in economics since the emergence of the field. What Elsner, Heinrich, and Schwardt appear to be doing with this textbook is providing a template for an undergraduate level course on the subject, legitimizing it as a discipline. They are not alone. They cite Bowles’s Microeconomics as worthy competition.

I have not yet read the chapter of the Elsner, Heinrich, and Schwardt book that covers philosophy of science and its relationship to the validity of economics. It looks from a glance at it very well done. But I wanted to note my preliminary opinion on the matter given my recent interest in Shapiro and Varian's information economics and their claim to be describing 'laws of economics' that provide a reliable guide to business strategy.

In brief, I think Shapiro and Varian are right: they do outline laws of economics that provide a reliable guide to business strategy. This is in fact what neoclassical economics is good for.

What neoclassical economics is not always great at is predicting aggregate market behavior in a complex world. It’s not clear if any theory could ever be good at predicting aggregate market behavior in a complex world. It is likely that if there were one, it would be quickly gamed by investors in a way that would render it invalid.

Given vast information asymmetries, it seems the best one could hope for is a theory of the market being able to assimilate the available information and respond wisely. This is the Hayekian view, and it's not mainstream. It suffers the difficulty that it is hard to empirically verify that a market has performed optimally, given that no one actor, including the person attempting to verify Hayekian economic claims, has all the information to begin with. Meanwhile, it seems that there is no sound a priori reason to believe this is the case. Epstein and Axtell (1996) have some computational models where they test when agents capable of trade wind up in an equilibrium with market-clearing prices, and in their models this happens only under very particular and unrealistic conditions.

That said, predicting aggregate market outcomes is a vastly different problem than providing strategic advice to businesses. This is the point where academic critiques of neoclassical economics miss the mark. Since phenomena concerning supply and demand, pricing and elasticity, competition and industrial organization, and so on are part of the lived reality of somebody working in industry, formalizations of these aspects of economic life can be tested and propagated by many more kinds of people than the phenomena of total market performance. The latter is actionable only for a very rare class of policy-maker or financier.

References

Bowles, S. (2009). Microeconomics: behavior, institutions, and evolution. Princeton University Press.

Elsner, W., Heinrich, T., & Schwardt, H. (2014). The microeconomics of complex economies: Evolutionary, institutional, neoclassical, and complexity perspectives. Academic Press.

Epstein, J. M., & Axtell, R. (1996). Growing artificial societies: Social science from the bottom up. Brookings Institution Press.


by Sebastian Benthall at October 03, 2017 02:41 PM

September 29, 2017

Center for Technology, Society & Policy

Join CTSP for social impact Un-Pitch Day on October 27th

Are you a local nonprofit or community organization that has a pressing challenge that you think technology might be able to address, but you don’t know where to start?

If so, join us and the UC Berkeley School of Information's IMSA (Information Management Student Association) for Un-Pitch Day on October 27th from 4 – 7pm, where graduate students will offer their technical expertise to help address your organization's pressing technology challenges. During the event, we'll have you introduce your challenge(s) and desired impact, then partner you with grad students in activities designed to explore your challenge(s) and develop refined questions that push the conversation forward.

You’d then have the opportunity to pitch your challenge(s) with the goal of potentially matching with a student project group to adopt your project. By attending Un-Pitch day, you would gain a more defined sense of how to address your technology challenge, and, potentially, a team of students interested in working with your org to develop a prototype or a research project to address it.

Our goal is to help School of Information grad students (and other UCB grad students) identify potential projects they can adopt for the 2017-2018 academic year (ending in May). Working in collaboration with your organization, our students can help develop a technology-focused project or conduct technology-related research to aid your organization.

There is also the possibility of qualifying for funding ($2000 per project team member) for technology projects with distinct public interest/public policy goals through the Center for Technology, Society & Policy (funding requires submitting an application to the Center, due in late November). Please note that we cannot guarantee that each project presented at Un-Pitch Day will match with an interested team.

Event Agenda

Friday, October 27th from 4 – 7pm at South Hall on the UC Berkeley campus

Light food & drinks will be provided for registered attendees.

Registration is required for this event; click here to register.

4:00 – 4:45pm Social impact organization introductions and un-pitches of challenges

4:45 – 5:00pm CTSP will present details about public interest project funding opportunities and deadlines.

5:00 – 6:00pm Team up with grad students through “speed dating” activities to break the ice and explore challenge definitions and develop fruitful questions from a range of diverse perspectives.

6:00 – 7:00pm Open house for students and organizations to mingle and connect over potential projects. Appetizers and refreshments provided by CTSP.

by Daniel Griffin at September 29, 2017 05:45 PM

September 27, 2017

Ph.D. student

New article about algorithmic systems in Wikipedia and going ‘beyond opening up the black box’

I'm excited to share a new article, "Beyond opening up the black box: Investigating the role of algorithmic systems in Wikipedian organizational culture" (open access PDF here). It is published in Big Data & Society as part of a special issue on "Algorithms in Culture," edited by Morgan Ames, Jason Oakes, Massimo Mazzotti, Marion Fourcade, and Gretchen Gano. The special issue came out of a fantastic workshop of the same name held last year at UC-Berkeley, where we presented and workshopped our papers, which were all taking some kind of socio-cultural approach to algorithms (broadly defined). This was originally a chapter of my dissertation based on my ethnographic research into Wikipedia, and it has gone through many rounds of revision across a few publications, as I've tried to connect what I see in Wikipedia to broader conversations about the role of highly-automated, data-driven systems across platforms and domains.

I use the case of Wikipedia's unusually open algorithmic systems to rethink the "black box" metaphor, which has become a standard way to think about ethical, social, and political issues around artificial intelligence, machine learning, expert systems, and other automated, data-driven decision-making processes. Entire conferences are being held on these topics, like Fairness, Accountability, and Transparency in Machine Learning (FATML) and Governing Algorithms. In much current scholarship and policy advocacy, there is often an assumption that we are after some internal logic embedded into the codebase (or "the algorithm") itself, which has been hidden from us under reasons of corporate or state secrecy. Many times this is indeed the right goal, but scholars are increasingly raising broader and more complex issues around algorithmic systems, such as work from Nick Seaver (PDF), Tarleton Gillespie (PDF), Kate Crawford (link), and Jenna Burrell (link), which I build on in the case of Wikipedia. What happens when the kind of systems that are kept under tight lock-and-key at Google, Facebook, Uber, the NSA, and so on are not just open sourced in Wikipedia, but also typically designed and developed in an open, public process in which developers have to explain their intentions and respond to questions and criticism?

In the article, I discuss these algorithmic systems as being a part of Wikipedia's particular organizational culture, focusing on how becoming and being a Wikipedian involves learning not just traditional cultural norms, but also familiarity with various algorithmic systems that operate across the site. In Wikipedia's unique setting, we see how the questions of algorithmic transparency and accountability subtly shift away from asking if such systems are open to an abstract, aggregate "public." Based on my experiences in Wikipedia, I instead ask: For whom are these systems open, transparent, understandable, interpretable, negotiable, and contestable? And for whom are they as opaque, inexplicable, rigid, bureaucratic, and even invisible as the jargon, rules, routines, relationships, and ideological principles of any large-scale, complex organization? Like all cultures, Wikipedian culture can be quite opaque, hard to navigate, difficult to fully explain, constantly changing, and has implicit biases – even before we consider the role of algorithmic systems. In looking to approaches to understanding culture from the humanities and the interpretive social sciences, we get a different perspective on what it means for algorithmic systems to be open, transparent, accountable, fair, and explainable.


I should say that I'm a huge fan and advocate of work on "opening the black box" in the more traditional information-theory approach, which tries to audit and/or reverse engineer how Google search results are ranked, how Facebook news feeds are filtered, how Twitter's trending topics are identified, or similar kinds of systems that are making (or helping make) decisions about setting bail for a criminal trial, who gets a loan, or who is a potential terrorist threat. So many of these systems that make decisions about the public are opaque to the public, protected as trade secrets or for reasons of state security. There is a huge risk that such systems have deeply problematic biases built in (unintentionally or otherwise), and many people are trying to reverse engineer or otherwise audit such systems, as well as looking at issues like biases in the underlying training data used for machine learning. For more on this topic, definitely look through the proceedings of FATML, read books like Frank Pasquale's The Black Box Society and Cathy O'Neil's Weapons of Math Destruction, and check out the Critical Algorithm Studies reading list.

Yet when I read this kind of work and hear these kinds of conversations, I often feel strangely out of place. I've spent many years investigating the role of highly-automated algorithmic systems in Wikipedia, whose community has strong commitments to openness and transparency. And now I'm in the Berkeley Institute for Data Science, an interdisciplinary academic research institute where open source, open science, and reproducibility are not only core values many people individually hold, but also a major focus area for the institute's work.

So I'm not sure how to make sense of my own position in the "algorithms studies" sub-field when I hear of heroic (and sometimes tragic) efforts to try and pry open corporations and governmental institutions that are increasingly relying on new forms of data-driven, automated decision-making and classification. If anything, I have the opposite problem: in the spaces I tend to spend time in, the amount of code and data I can examine is so vast that it is overwhelming to navigate. There are so many people in academic research and the open source / free culture movements who want a fresh pair of eyes on the work they've done, which often uses many of the same fundamental approaches and technologies that concern us when hidden away by corporations and governments.

Wikipedia has received very little attention from those who focus on issues around algorithmic opacity and interpretability (even less so than scientific research, but that's a different topic). Like almost all the major user-generated content platforms, Wikipedia deeply relies on automated systems for reviewing and moderating the massive number of contributions made to Wikipedia articles every day. Yet almost all of the code and most of the data keeping Wikipedia running is open sourced, including the state-of-the-art machine learning classifiers trained to distinguish good contributions from bad ones (for different definitions of good and bad).

The design, development, deployment, and discussion of such systems generally takes place in public forums, including wikis, mailing lists, chat rooms, code repositories, and issue/bug trackers. And this is not just a one-way mirror into the organization, as volunteers can and do participate in these debates and discussions. In fact, the people who are paid staff at the Wikimedia Foundation tasked with developing and maintaining these systems are often recruiting volunteers to help, since the Foundation is a non-profit that doesn't have the resources that a large company or even a smaller startup has.


From all this, Wikipedia may appear to be the utopia of algorithmic transparency and accountability that many scholars, policymakers, and even some industry practitioners are calling for in other major platforms and institutions. So for those of us who are concerned with black-boxed algorithmic systems, I ask: are open source, open data, and open process the solution to all our problems? Or more constructively: when secrecy is not merely removed by some external fiat, but is something that the people designing, developing, and deploying such systems strongly oppose on ideological grounds, what will our next challenge be?

In trying to work through my understanding of this issue, I argue we need to take an expanded micro-sociological view of algorithmic systems as deeply entwined with particular facets of culture. We need to look at algorithmic systems not just in terms of how they make decisions or recommendations by transforming inputs into outputs, but also in terms of how they transform what it means to participate in a particular socio-technical space. Wikipedia is a great place to study that, and many Wikipedia researchers have focused on related topics. For example, newcomers to Wikipedia must learn that in order to properly participate in the community, they have to directly and indirectly interact with various automated systems, such as tagging requests with machine-readable codes so that they are properly circulated to others in the community. And in terms of newcomer socialization, it probably isn't wise to tell someone how to properly use these machine-readable templates by just pointing them to the code repository for the bot that parses these templates to assist with the task at hand.

It certainly makes sense that newcomers to a place like Wikipedia have to learn its organizational culture to fully participate. I'm not arguing that these barriers to entry are inherently bad and should be dismantled as a matter of principle. Over time, Wikipedians have developed a specific organizational culture through various norms, jargon, rules, processes, standards, communication platforms beyond the wiki, routinized co-located events, as well as bots, semi-automated tools, browser extensions, dashboards, scripted templates, and code directly built into the platform. This is a serious accomplishment and it is a crucial part of the story about how Wikipedia became one of the most widely consulted sources of knowledge today, rather than the frequently-ridiculed curiosity I remember it being in the early 2000s. And it is an even greater accomplishment that virtually all of this work is done in ways that are, in principle, accessible to the general public.


But what does that openness of code and development mean in practice? Who can meaningfully make use of what, even to a long-time Wikipedian like me, often feels like an overwhelming amount of openness? My argument isn't that open source, open code, and open process somehow don't make a difference. They clearly do in many different ways, but Wikipedia shows us that we should be asking: when, where, and for whom does openness make more or less of a difference? Openness is not equally distributed, because it takes certain kinds of work, expertise, self-efficacy, time, and autonomy to properly take advantage of it, as Nate Tkacz has noted with Wikipedia in general. For example, I reference Eszter Hargittai's work on digital divides, in which she argues that just giving people access to the Internet isn't enough; we also have to teach people how to use and take advantage of the Internet, and these "second-level digital divides" are often where demographic gaps widen even more.

There is also an analogy here with Jo Freeman's famous piece The Tyranny of Structurelessness, in which she argues that documented, formalized rules and structures can be far more inclusive than informal, unwritten rules and structures. Newcomers can more easily learn what is openly documented and formalized, while it is often only possible to learn the informal, unwritten rules and structures by either having a connection to an insider or accidentally breaking them and being sanctioned. But there is also a problem with the other extreme, when the rules and structures grow so large and complex that they become a bureaucratic labyrinth that is just as hard for the newcomer to learn and navigate.

So for veteran Wikipedians, highly-automated workflows like speedy deletion can be a powerful way to navigate and act within Wikipedia at scale, in a similar way that Wikipedia's dozens of policies make it easy for veterans to speak volumes just by saying that an article is a CSD#A7, for example. For its intended users, it sinks into the background and becomes second nature, like all good infrastructure does. The veteran can also foreground the infrastructure and participate in complex conversations and collective decisions about how these tools should change based on various ideas about how Wikipedia should change – as Wikipedians frequently do. But for the newcomer, the exact same system – which is in principle almost completely open and contestable to anyone who opens up a ticket on Phabricator – can look and feel quite different. And just knowing "how to code" in the abstract isn't enough, as newcomers must learn how code operates in Wikipedia's unique organizational culture, which has many differences from other large-scale open source software projects.


So this article might seem on the surface to be a critique of Wikipedia, but it is more a critique of my wonderful, brilliant, dedicated colleagues who are doing important work to try and open up (or at least look inside) the proprietary algorithmic systems that are playing important roles in major platforms and institutions. Make no mistake: despite my critiques of the information theory metaphor of the black box, their work within this paradigm is crucial, because there can be many serious biases and inequalities that are intentionally or unintentionally embedded in and/or reinforced through such systems.

However, we must also do research in the tradition of the interpretive social sciences to understand the broader cultural dynamics around how people learn, navigate, and interpret algorithmic systems, alongside all of the other cultural phenomena that remain as "black boxed" as the norms, discourses, practices, procedures, and ideological principles present in all cultures. I'm not the first to raise these kinds of concerns, and I also want to highlight work like that of Motahhare Eslami et al. (PDF1, PDF2) on people's various "folk theories" of opaque algorithmic systems in social media sites. The case of Wikipedia shows that when such systems are quite open, it is perhaps even more important to understand how these differences make a difference.

by R. Stuart Geiger at September 27, 2017 07:00 AM

September 24, 2017

Ph.D. student

Existentialism in Design: Motivation

There has been a lot of recent work on the ethics of digital technology. This is a broad area of inquiry, but it includes such topics as:

  • The ethics of Internet research, including the Facebook emotional contagion study and the Encore anti-censorship study.
  • Fairness, accountability, and transparency in machine learning.
  • Algorithmic price-gouging.
  • Autonomous car trolley problems.
  • Ethical (Friendly?) AI research? This last one is maybe on the fringe…

If you’ve been reading this blog, you know I’m quite passionate about the intersection of philosophy and technology. I’m especially interested in how ethics can inform the design of digital technology, and how it can’t. My dissertation is exploring this problem in the privacy engineering literature.

I have some dissatisfactions with this field which I don’t expect will make it into my dissertation. One is that the privacy engineering literature, and academic “ethics of digital technology” more broadly, tend to be heavily informed by the law, in the sense of courts, legislatures, and states. This is motivated by the important consideration that technology, and especially technologists, should in a lot of cases be compliant with the law. As a practical matter, it certainly spares technologists the trouble of getting sued.

However, being compliant with the law is not precisely the same thing as being ethical. There’s a long ethical tradition of civil disobedience (certain non-violent protest activities, for example) which is not strictly speaking legal, though it has certainly had an impact on what is considered legal later on. Meanwhile, the point has been made, but maybe not often enough, that legal language often looks like ethical language but really shouldn’t be interpreted that way. This is a point made by Oliver Wendell Holmes, Jr. in his notable essay, “The Path of the Law”.

When the ethics of technology are not being framed in terms of legal requirements, they are often framed in terms of one of two prominent ethical frameworks. One framework is consequentialism: ethics is a matter of maximizing the beneficial consequences and minimizing the harmful consequences of one’s actions. One variation of consequentialist ethics is utilitarianism, which attempts to solve ethical questions by reducing them to a calculus over “utility”, or benefit as it is experienced or accrued by individuals. A lot of economics takes this ethical stance. Another, less quantitative variation of consequentialist ethics is present in the research ethics principle that research should maximize benefits and minimize harms to participants.

The other major ethical framework used in discussions of ethics and technology is deontological ethics. These are ethics that are about rights, duties, and obligations. Justifying deontological ethics can be a little trickier than justifying consequentialist ethics. Frequently this is done by invoking social norms, as in the case of Nissenbaum’s contextual integrity theory. Another variation of a deontological theory of ethics is Habermas’s theory of transcendental pragmatics and legitimate norms developed through communicative action. In the ideal case, these norms become encoded into law, though it is rarely true that laws are ideal.

Consequentialist considerations probably make the world a better place in some aggregate sense. Deontological considerations probably make the world a fairer, or at least more socially agreeable, place, as in their modern formulations they tend to result from social truces or compromises. I’m quite glad that these frameworks are taken seriously by academic ethicists and by the law.

However, as I’ve said I find these discussions dissatisfying. This is because I find both consequentialist and deontological ethics to be missing something. They both rely on some foundational assumptions that I believe should be questioned in the spirit of true philosophical inquiry. A more thorough questioning of these assumptions, and tentative answers to them, can be found in existentialist philosophy. Existentialism, I would argue, has not had its due impact on contemporary discourse on ethics and technology, and especially on the questions surrounding ethical technical design. This is a situation I intend to one day remedy. Though Zach Weinersmith has already made a fantastic start:

[Comic: “Self Driving Car Ethics”, by Zach Weinersmith (SMBC: Autonomous vehicle ethics)]

What kinds of issues would be raised by existentialism in design? Let me try out a few examples of points made in contemporary ethics of technology discourse and a preliminary existentialist response to them.

Ethical charge: A superintelligent artificial intelligence could, if improperly designed, result in the destruction or impairment of all human life. This catastrophic risk must be avoided. (Bostrom, 2014)
Existentialist response: We are all going to die anyway. There is no catastrophic risk; there is only catastrophic certainty. We cannot make an artificial intelligence that prevents this outcome. We must instead design artificial intelligence that makes life meaningful despite its finitude.

Ethical charge: Internet experiments must not direct the browsers of unwitting people to test the URLs of politically sensitive websites. Doing this may lead to those people being harmed for being accidentally associated with the sensitive material. Researchers should not harm people with their experiments. (Narayanan and Zevenbergen, 2015)
Existentialist response: To be held responsible by a state’s criminal justice system for the actions taken by one’s browser, controlled remotely from America, is absurd. This absurdity, which pervades all life, is the real problem, not the suffering potentially caused by the experiment (because suffering in some form is inevitable, whether it is from painful circumstance or from ennui). What’s most important is the exposure of this absurdity and the potential liberation from false moralistic dogmas that limit human potential.

Ethical charge: Use of Big Data to sort individual people, for example in the case of algorithms used to choose among applicants for a job, may result in discrimination against historically disadvantaged and vulnerable groups. Care must be taken to tailor machine learning algorithms to adjust for the political protection of certain classes of people. (Barocas and Selbst, 2016)
Existentialist response: The egalitarian tendency in ethics which demands that the greatest should invest themselves in the well-being of the weakest is a kind of herd morality, motivated mainly by ressentiment of the disadvantaged who blame the powerful for their frustrations. This form of ethics, which is based on base emotions like pity and envy, is life-negating because it denies the most essential impulse of life: to overcome resistance and to become great. Rather than restrict Big Data’s ability to identify and augment greatness, it should be encouraged. The weak must be supported out of a spirit of generosity from the powerful, not from a curtailment of power.

As a first cut at existentialism’s response to ethical concerns about technology, it may appear that existentialism is more permissive about the use and design of technology than consequentialism and deontology. It is possible that this conclusion will be robust to further investigation. There is a sense in which existentialism may be the most natural philosophical stance for the technologist because a major theme in existentialist thought is the freedom to choose one’s values and the importance of overcoming the limitations on one’s power and freedom. I’ve argued before that Simone de Beauvoir, who is perhaps the most clear-minded of the existentialists, has the greatest philosophy of science because it respects this purpose of scientific research. There is a vivacity to existentialism that does not sweat the small stuff and thinks big while at the same time acknowledging that suffering and death are inevitable facts of life.

On the other hand, existentialism is a morally demanding line of inquiry precisely because it does not use either easy metaethical heuristics (such as consequentialism or deontology) or the bald realities of the human condition as a stopgap. It demands that we tackle all the hard questions, sometimes acknowledging that they are unanswerable or answerable only in the negative, and muddle on despite the hardest truths. Its aim is to provide a truer, better morality than the alternatives.

Perhaps this is best illustrated by some questions implied by my earlier “existentialist responses” that address the currently nonexistent field of existentialism in design. These are questions I haven’t yet heard asked by scholars at the intersection of ethics and technology.

  • How could we design an artificial intelligence (or, to make it simpler, a recommendation system) that makes the most meaningful choices for its users?
  • What sort of Internet intervention would be most liberatory for the people affected by it?
  • What technology can best promote generosity from the world’s greatest people as a celebration of power and life?

These are different questions from any that you read about in the news or in the ethical scholarship. I believe they are nevertheless important ones, maybe more important than the ethical questions that are more typically asked. The theoretical frameworks employed by most ethicists make assumptions that obscure what everybody already knows about the distribution of power and its abuses, the inevitability of suffering and death, life’s absurdity and especially the absurdity of moralizing sentiment in the face of the cruelty of reality, and so on. At best, these ethical discussions inform the interpretation and creation of law, but law is not the same as morality, and to confuse the two robs morality of perhaps its most essential component, which is that it is grounded meaningfully in the experience of the subject.

In future posts (and, ideally, eventually in a paper derived from those posts), I hope to flesh out more concretely what existentialism in design might look like.

References

Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact.

Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. OUP Oxford.

Narayanan, A., & Zevenbergen, B. (2015). No Encore for Encore? Ethical questions for web-based censorship measurement.

Weinersmith, Z. “Self Driving Car Ethics”. Saturday Morning Breakfast Cereal.


by Sebastian Benthall at September 24, 2017 03:19 AM

September 20, 2017

Ph.D. student

Market segments and clusters of privacy concerns

One result from earlier economic analysis is that in the cases where personal information is being used to judge the economic value of an agent (such as when they are going to be hired, or offered a loan), the market is divided between those that would prefer more personal information to flow (because they are highly qualified, or highly credit-worthy), and those that would rather information not flow.

I am naturally concerned about whether this microeconomic modeling has any sort of empirical validity. However, there is some corroborating evidence in the literature on privacy attitudes. Several surveys (see references) have discovered that people’s privacy attitudes cluster into several groups: those only “marginally concerned”, the “pragmatists”, and the “privacy fundamentalists”. These groups have, respectively, stronger and stronger views on the restriction of their flow of personal information.

It would be natural to suppose that some of the variation in privacy attitudes has to do with expected outcomes of information flow. I.e., if people are worried that their personal information will make them ineligible for a job, they are more likely to be concerned about this information flowing to potential employers.

I need to dig deeper into the literature to see whether factors like income have been shown to be correlated with privacy attitudes.

References

Ackerman, M. S., Cranor, L. F., & Reagle, J. (1999, November). Privacy in e-commerce: examining user scenarios and privacy preferences. In Proceedings of the 1st ACM conference on Electronic commerce (pp. 1-8). ACM.

B. Berendt et al., “Privacy in E-Commerce: Stated Preferences versus Actual Behavior,” Comm. ACM, vol. 48, no. 4, pp. 101-106, 2005.

K.B. Sheehan, “Toward a Typology of Internet Users and Online Privacy Concerns,” The Information Soc., vol. 18, pp. 21-32, 2002.


by Sebastian Benthall at September 20, 2017 03:50 PM

September 18, 2017

Ph.D. student

Economic costs of context collapse

One motivation for my recent studies on information flow economics is that I’m interested in what the economic costs are when information flows across the boundaries of specific markets.

For example, there is a folk theory of why it’s important to have data protection laws in certain domains. Health care, for example. The idea is that it’s essential to have health care providers maintain the confidentiality of their patients because if they didn’t then (a) the patients could face harm due to this information getting into the wrong hands, such as those considering them for employment, and (b) this would disincentivize patients from seeking treatment, which causes them other harms.

In general, a good approximation of general expectations of data privacy is that data should not be used for purposes besides those for which the data subjects have consented. Something like this was encoded in the 1973 Fair Information Practices, for example. A more modern take on this from contextual integrity (Nissenbaum, 2004) argues that privacy is maintained when information flows appropriately with respect to the purposes of its context.

A widely acknowledged phenomenon in social media, context collapse (Marwick and boyd, 2011; Davis and Jurgenson, 2014), is when multiple social contexts in which a person is involved begin to interfere with each other because members of those contexts use the same porous information medium. Awkwardness and sometimes worse can ensue. These are some of the major ways the world has become aware of what a problem the Internet is for privacy.

I’d like to propose that an economic version of context collapse happens when different markets interfere with each other through network-enabled information flow. The bogeyman of Big Brother through Big Data, the company or government that has managed to collect data about everything about you in order to infer everything else about you, has as much to do with the ways information is being used in cross-purposed ways as it has to do with the quantity or scope of data collection.

It would be nice to get a more formal grip on the problem. Since we’ve already used it as an example, let’s try to model the case where health information is disclosed (or not) to a potential employer. We already have the building blocks for this case in our model of expertise markets and our model of labor markets.

There are now two uncertain variables of interest. First, let’s consider a set of health treatments J, with m = \vert J \vert. Health conditions in society are distributed such that the utility of a random person i receiving a treatment j is w_{i,j}. Utility for one treatment is not independent of utility for another, so in general \vec{w} \sim W, meaning a person’s utilities for all treatments are sampled from an underlying joint distribution W.

There is also the uncertain variable of how effective somebody will be at a job they are interested in. We’ll say this is distributed according to X, and that a person’s aptitude for the job is x_i \sim X.

We will also say that W and X are not independent from each other. In this model, there are certain health conditions that are disabling with respect to a job, and this has an effect on expected performance.

I must note here that I am not taking any position on whether or not employers should take disabilities into account when hiring people. I don’t even know for sure the consequences of this model yet. You could imagine this scenario taking place in a country which does not have the Americans with Disabilities Act and other legislation that affects situations like this.

As per the models that we are drawing from, let’s suppose that normal people don’t know how much they will benefit from different medical treatments; i doesn’t know \vec{w}_i. They may or may not know x_i (I don’t yet know if this matters). What i does know is their symptoms, y_i \sim Y.

Let’s say person i goes to the doctor, reporting y_i, on the expectation that the doctor will prescribe them the treatment \hat{j} that maximizes their welfare:

\hat j = arg \max_{j \in J} E[W_j \vert y_i]

Now comes the tricky part. Let’s say the doctor is corrupt and willing to sell the medical records of her patients to her patient’s potential employers. By assumption y_i reveals information both about w_i and x_i. We know from our earlier study that information about x_i is indeed valuable to the employer. There must be some price (at least within our neoclassical framework) that the employer is willing to pay the corrupt doctor for information about patient symptoms.

We also know that having potential employers know more about your aptitudes is good for highly qualified applicants and bad for not as qualified applicants. The more information employers know about you, the more likely they will be able to tell if you are worth hiring.

The upshot is that there may be some patients who are more than happy to have their medical records sold off to their potential employers because those particular symptoms are correlated with high job performance. These will be attracted to systems that share their information across medical and employment purposes.

But for those with symptoms correlated with lower job performance, there is now a trickier decision. If doctors are corrupt, patients may choose not to reveal their symptoms accurately (or at all) because this information might hurt their chances of employment.

A few more wrinkles here. Suppose it’s true that fewer people will go to corrupt doctors because they suspect or know that information will leak to their employers. If there are people who suspect or know that the information that leaks to their employers will reflect on them favorably, that creates a selection effect on who goes to the doctor. This means that the information that i has gone to the doctor, or not, is a signal employers can use to discriminate between potential applicants. So to some extent the harms of the corrupt doctors fall on the less able even if they opt out of health care. They can’t opt out entirely of the secondary information effects.

We can also add the possibility that not all doctors are corrupt. Only some are. But if it’s unknown which doctors are corrupt, the possibility of corruption still affects the strategies of patients/employees in a similar way, only now in expectation. Just as in the Akerlof market for lemons, a few corrupt doctors ruin the market.
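
To make the selection effect a little more tangible, here is a minimal simulation sketch in Python. The specific assumptions (Gaussian aptitude, a single binary symptom that is more likely for low-aptitude people, and symptomatic patients simply avoiding doctors they believe to be corrupt) are illustrative choices of mine, not consequences of the model; the point is only that non-participation itself becomes informative.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Toy assumptions: aptitude x_i ~ N(0, 1); a binary symptom y_i whose
    # probability is higher for low-aptitude people (a disabling condition).
    x = rng.normal(0, 1, n)
    p_symptom = 1 / (1 + np.exp(2 * x))        # decreasing in aptitude
    y = rng.random(n) < p_symptom

    # Selection effect: symptomatic patients avoid doctors they suspect are
    # corrupt, forgoing treatment; asymptomatic patients have nothing to hide.
    sees_doctor = ~y

    print("share forgoing treatment:    ", y.mean())
    print("mean aptitude, sees doctor:  ", x[sees_doctor].mean())
    print("mean aptitude, avoids doctor:", x[~sees_doctor].mean())
    # The gap between the last two numbers is what lets employers treat
    # "has no medical record" as a negative signal.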

I have not made these arguments mathematically specific. I leave that to a later date. But for now I’d like to draw some tentative conclusions about what mandating the protection of health information, as in HIPAA, means for the welfare outcomes in this model.

If doctors are prohibited from selling information to employers, then the two markets do not interfere with each other. Doctors can solicit symptoms in a way that optimizes benefits to all patients. Employers can make informed choices about potential candidates through an independent process. The latter will serve to select more promising applicants from less promising applicants.

But if doctors can sell health information to employers, several things change.

  • Employers will benefit from information about employee health and offer to pay doctors for the information.
  • Some doctors will discreetly do so.
  • The possibility of corrupt doctors will scare off those patients who are afraid their symptoms will reveal a lack of job aptitude.
  • These patients no longer receive treatment.
  • This reduces the demand for doctors, shrinking the health care market.
  • The most able will continue to see doctors. If their information is shared with employers, they will be more likely to be hired.
  • Employers may take having medical records available to be bought from corrupt doctors as a signal that the patient is hiding something that would reveal poor aptitude.

In sum, without data protection laws, there are fewer people receiving beneficial treatment and fewer jobs for doctors providing beneficial treatment. Employers are able to make more advantageous decisions, and the most able employees are able to signal their aptitude through the corrupt health care system. Less able employees may wind up being identified anyway through their non-participation in the medical system. If that’s the case, they may wind up returning to doctors for treatment anyway, though they would need to have a way of paying for it besides employment.

That’s what this model says, anyway. The biggest surprise for me is the implication that data protection laws serve the interests of service providers by expanding their customer base. That is a point that is not made enough! Too often, the need for data protection laws is framed entirely in terms of the interests of the consumer. This is perhaps a politically weaker argument, because consumers are not united in their political interest (some consumers would be helped, not harmed, by weaker data protection).

References

Akerlof, G. A. (1970). The market for” lemons”: Quality uncertainty and the market mechanism. The quarterly journal of economics, 488-500.

Davis, J. L., & Jurgenson, N. (2014). Context collapse: theorizing context collusions and collisions. Information, Communication & Society, 17(4), 476-485.

Marwick, A. E., & Boyd, D. (2011). I tweet honestly, I tweet passionately: Twitter users, context collapse, and the imagined audience. New media & society, 13(1), 114-133.

Nissenbaum, H. (2004). Privacy as contextual integrity. Wash. L. Rev., 79, 119.


by Sebastian Benthall at September 18, 2017 02:08 AM

September 13, 2017

Ph.D. student

Credit scores and information economics

The recent Equifax data breach brings up credit scores and their role in the information economy. Credit scoring is a controversial topic in the algorithmic accountability community. Frank Pasquale, for example, writes about it in The Black Box Society. Most of the critical writing on the subject points to how credit scoring might be done in a discriminatory or privacy-invasive way. As interesting as those critiques are from a political and ethical perspective, it’s worth reviewing what credit scores are for in the first place.

Let’s model this as we have done in other cases of information flow economics.

There’s a variable of interest, the likelihood that a potential borrower will not default on a loan, X. Note that any value x sampled from this distribution lies in the interval [0,1] because it is a probability.

There’s a decision to be made by a bank: whether or not to provide a random borrower a loan.

To keep things very simple, let’s suppose that the bank gets a payoff of 1 if the borrower is given a loan and does not default and gets a payoff of -1 if the borrower gets the loan and defaults. The borrower gets a payoff of 1 if he gets the loan and 0 otherwise. The bank’s strategy is to avoid giving loans that lead to negative expected payoff. (This is a gross oversimplification of, but is essentially consistent with, the model of credit used by Blöchlinger and Leippold (2006).)

Given a particular x, the expected utility of the bank is:

x (1) + (1 - x) (-1) = 2x - 1

Given the domain of [0,1], this function ranges from -1 to 1, hitting 0 when x = .5.

We can now consider welfare outcomes under conditions of no information flow, total information flow, and partial information flow.

Suppose the bank has no insight into x besides the prior distribution X. Then the bank’s expected payoff upon offering the loan is E[2X - 1] = 2E[X] - 1. If this is above zero, the bank will offer the loan and the borrower gets a positive payoff. If it is below zero, the bank will not offer the loan and both the bank and the potential borrower get zero payoff. The outcome depends entirely on the prior probability of loan default and either rewards all borrowers or none of them, depending on that distribution.

If the bank has total insight into x, then the outcomes are different. The bank can reject borrowers for whom x is less than .5 and accept those for whom x is greater than .5. If we see the game as repeated over many borrowers whose chances of paying off their loan are all sampled from X, then the bank’s additional knowledge creates two classes of potential borrowers: one that gets loans and one that does not. This increases inequality among borrowers.

It also increases the utility of the bank. This is perhaps best illustrated with a simple example. Suppose the distribution X is uniform over the unit interval [0,1]. Then the expected value of the bank’s payoff under complete information is

\int_{.5}^{1} (2x - 1) dx = 0.25

which is a significant improvement over the expected payoff of 0 in the uninformed case.
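
A quick Monte Carlo check of both regimes, keeping the assumption that X is uniform on the unit interval, confirms these numbers:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.random(1_000_000)     # repayment probabilities x ~ Uniform[0, 1]
    payoff = 2 * x - 1            # bank's expected payoff per loan, given x

    # No information: the bank can only act on the prior E[2X - 1] = 0,
    # so it is indifferent between lending to everyone and lending to no one.
    print("uninformed expected payoff per applicant:", payoff.mean())

    # Total information: lend only when 2x - 1 > 0, i.e. when x > 0.5.
    informed = np.where(x > 0.5, payoff, 0.0)
    print("informed expected payoff per applicant:  ", informed.mean())  # ~0.25
    print("share of applicants who get loans:       ", (x > 0.5).mean()) # ~0.5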

Putting off an analysis of the partial information case for now, suffice it to say that we expect partial information (such as a credit score) to lead to an intermediate result, improving bank profits and differentiating borrowers with respect to the bank’s choice to loan.

What is perhaps most interesting about this analysis is the similarity between it and Posner’s employment market. In both cases, the variable of interest X concerns a person’s prospects for improving the welfare of the principal decision-maker upon being selected, where selection also implies a benefit to the subject. Uncertainty about these prospects leads to equal treatment of prospective persons and reduced benefit to the principal. More information leads to differentiated impacts on the prospects and greater benefit to the principal.

References

Blöchlinger, A., & Leippold, M. (2006). Economic benefit of powerful credit scoring. Journal of Banking & Finance, 30(3), 851-873.


by Sebastian Benthall at September 13, 2017 06:01 PM

September 12, 2017

Ph.D. alumna

Data & Society’s Next Stage

In March 2013, in a flurry of days, I decided to start a research institute. I’d always dreamed of doing so, but it was really my amazing mentor and boss – Jennifer Chayes – who put the fire under my toosh. I’d been driving her crazy about the need to have more people deeply interrogating how data-driven technologies were intersecting with society. Microsoft Research didn’t have the structure to allow me to move fast (and break things). University infrastructure was even slower. There were a few amazing research centers and think tanks, but I wanted to see the efforts scale faster. And I wanted to build the structures to connect research and practices, convene conversations across sectors, and bring together a band of what I loved to call “misfit toys.”  So, with the support of Jennifer and Microsoft, I put pen to paper. And to my surprise, I got the green light to help start a wholly independent research institute.

I knew nothing about building an organization. I had never managed anyone, didn’t know squat about how to put together a budget, and couldn’t even create a check list of to-dos. So I called up people smarter than I to help learn how other organizations worked and figure out what I should learn to turn a crazy idea into reality. At first, I thought that I should just go and find someone to run the organization, but I was consistently told that I needed to do it myself, to prove that it could work. So I did. It was a crazy adventure. Not only did I learn a lot about fundraising, management, and budgeting, but I also learned all sorts of things about topics I didn’t even know I would learn to understand – architecture, human resources, audits, non-profit law. I screwed up plenty of things along the way, but most people were patient with me and helped me learn from my mistakes. I am forever grateful to all of the funders, organizations, practitioners, and researchers who took a chance on me.

Still, over the next four years, I never lost that nagging feeling that someone smarter and more capable than me should be running Data & Society. I felt like I was doing the organization a disservice by not focusing on research strategy and public engagement. So when I turned to the board and said, it’s time for an executive director to take over, everyone agreed. We sat down and mapped out what we needed – a strategic and capable leader who’s passionate about building a healthy and sustainable research organization to be impactful in the world. Luckily, we had hired exactly that person to drive program and strategy a year before when I was concerned that I was flailing at managing the fieldbuilding and outreach part of the organization.

I am overwhelmingly OMG ecstatically bouncing for joy to announce that Janet Haven has agreed to become Data & Society’s first executive director. You can read more about Janet through the formal organizational announcement here.  But since this is my blog and I’m telling my story, what I want to say is more personal. I was truly breaking when we hired Janet. I had bitten off more than I could chew. I was hitting rock bottom and trying desperately to put on a strong face to support everyone else. As I see it, Janet came in, took one look at the duct tape upon which I’d built the organization and got to work with steel, concrete, and wood in her hands. She helped me see what could happen if we fixed this and that. And then she started helping me see new pathways for moving forward. Over the last 18 months, I’ve grown increasingly confident that what we’re doing makes sense and that we can build an organization that can last. I’ve also been in awe watching her enable others to shine.

I’m not leaving Data & Society. To the contrary, I’m actually taking on the role that my title – founder and president – signals. And I’m ecstatic. Over the last 4.5 years, I’ve learned what I’m good at and what I’m not, what excites me and what makes me want to stay in bed. I built Data & Society because I believe that it needs to exist in this world. But I also realize that I’m the classic founder – the crazy visionary that can kickstart insanity but who isn’t necessarily the right person to take an organization to the next stage. Lucky for me, Janet is. And together, I can’t wait to take Data & Society to the next level!

by zephoria at September 12, 2017 02:34 PM

September 11, 2017

Ph.D. student

Information flow in economics

We have formalized three different cases of information economics:

  • Posner’s model of employers selecting job applicants
  • price discrimination based on personal information
  • the economics of expertise and information services

What we discovered is that each of these cases has, to some extent, a common form. That form is this:

There is a random variable of interest, x \sim X (that is, a value x sampled from a probability distribution X), that has a direct effect on the welfare outcomes of decisions made by agents in the economy. In our cases this was the aptitude of job applicants, consumers’ willingness to pay, and the utility of receiving a range of different expert recommendations, respectively.

In the extreme cases, the agent at the focus of the economic model could act with extreme ignorance of x, or extreme knowledge of it. Generally, the agent’s situation improves the more knowledgeable they are about x. The outcomes for the subjects of X vary more widely.

We also considered the possibility that the agent has access to partial information about X through the observation of a different variable y \sim Y. Upon observation of y, they can make their judgments based on an improved subjective expectation of the unknown variable, P(x \vert y). We assumed that the agent was a Bayesian reasoner and so capable of internalizing evidence according to Bayes rule, hence they are able to compute:

P(X \vert Y) \propto P(Y \vert X) P(X)

However, this depends on two very important assumptions.

The first is that the agent knows the distribution X. This is the prior in their subjective calculation of the Bayesian update. In our models, we have been perhaps sloppy in assuming that this prior probability corresponds to the true probability distribution from which the value x is drawn. We are somewhat safe in this assumption because for the purposes of determining strategy, only subjective probabilities can be taken into account and we can relax the distribution to encode something close to zero knowledge of the outcome if necessary. In more complex models, the difference between agents with different knowledge of X may be more strategically significant, but we aren’t there yet.

The second important assumption is that the agent knows the likelihood function P(Y | X). This is quite a strong assumption, as it implies that the agent knows truly how Y covaries with X, allowing them to “decode” the message y into useful information about x.

It may be best to think of access and usage of the likelihood function as a rare capability. Indeed, in our model of expertise, the assumption was that the service provider (think doctor) knew more about the relationship between X (appropriate treatment) and Y (observable symptoms) than the consumer (patient) did. In the case of companies that use data science, the idea is that some combination of data and science gives the company an edge in knowing the true value of some uncertain property than its competitors.

What we are discovering is that it’s not just the availability of y that matters, but also the ability to interpret y with respect to the probability of x. Data does not speak for itself.

This incidentally ties in with a point we have perhaps glossed over too quickly in the present discussion: what is information, really? This may seem like a distraction in a discussion about economics, but it is a question that’s come up in my own idiosyncratic “disciplinary” formation. One of the best intuitive definitions of information is provided by philosopher Fred Dretske (1981; 1983). I once made a presentation of Dretske’s view on information and its relationship to epistemological skepticism and Shannon information theory; you can find that presentation here. But for present purposes I want to call attention to his definition of what it means for a message to carry information, which is:

[A] message carries the information that X is a dingbat, say, if and only if one could learn (come to know) that X is a dingbat from the message.

When I say that one could learn that X was a dingbat from the message, I mean, simply, that the message has whatever reliable connection with dingbats is required to enable a suitably equipped, but otherwise ignorant receiver, to learn from it that X is a dingbat.

This formulation is worth mentioning because it supplies a kind of philosophical validation for our Bayesian formulation of information flow in the economy. We are modeling situations where Y is a signal that is reliably connected with X such that instantiations of Y carry information about the value of the X. We might express this in terms of conditional entropy:

H(X|Y) < H(X)

While this is sufficient for Y to carry information about X, it is not sufficient for any observer of Y to consequently know X. An important part of Dretske's definition is that the receiver must be suitably equipped to make the connection.

In our models, the “suitably equipped” condition is represented as the ability to compute the Bayesian update using a realistic likelihood function P(Y \vert X). This is a difficult demand. A lot of computational statistics has to do with the difficulty of tractably estimating the likelihood function, let alone computing it perfectly.
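
A small discrete example makes the “suitably equipped” condition concrete (the particular numbers are illustrative choices of mine, nothing more): an agent who knows both the prior P(X) and the likelihood P(Y \vert X) can compute the posterior by Bayes’ rule, and the resulting conditional entropy H(X \vert Y) is lower than H(X), so Y carries information about X for that agent.

    import numpy as np

    # Toy discrete model: X has two states, Y has two signal values.
    p_x = np.array([0.7, 0.3])                 # prior P(X)
    p_y_given_x = np.array([[0.9, 0.1],        # P(Y | X = 0)
                            [0.2, 0.8]])       # P(Y | X = 1)

    p_xy = p_x[:, None] * p_y_given_x          # joint P(X, Y)
    p_y = p_xy.sum(axis=0)                     # marginal P(Y)
    p_x_given_y = p_xy / p_y                   # posterior P(X | Y), Bayes' rule

    def entropy(p):
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    h_x = entropy(p_x)
    # H(X | Y) = sum_y P(y) * H(X | Y = y)
    h_x_given_y = sum(p_y[j] * entropy(p_x_given_y[:, j]) for j in range(len(p_y)))

    print("posterior P(X | Y):\n", p_x_given_y)
    print("H(X)   =", round(h_x, 3))
    print("H(X|Y) =", round(h_x_given_y, 3))   # strictly smaller: Y is informative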

References

Dretske, F. I. (1983). The epistemology of belief. Synthese, 55(1), 3-19.

Dretske, F. (1981). Knowledge and the Flow of Information.


by Sebastian Benthall at September 11, 2017 09:42 PM

Economics of expertise and information services

We have now considered two models of how information affects welfare outcomes.

In the first model, inspired by an argument from Richard Posner, there are many producers (employees, in the specific example, but it could just as well be cars, etc.) and a single consumer. When the consumer knows nothing about the quality of the producers, the consumer gets an average quality producer and the producers split the expected utility of the consumer’s purchase equally. When the consumer is informed, she benefits and so does the highest quality producer, to the detriment of the other producers.

In the second example, inspired by Shapiro and Varian’s discussion of price differentiation in the sale of information goods, there was a single producer and many consumers. When the producer knows nothing about the “quality” of the consumers–their willingness to pay–the producer charges all consumers a profit-maximizing price. This price leaves many customers out of reach of the product, and many others getting a consumer surplus because the product is cheap relative to their demand. When the producer is more informed, they make more profit by selling at personalized prices. This lets the previously unreached customers in on the product at a compellingly low price. It also allows the producer to charge higher prices to willing customers; they capture what was once consumer surplus for themselves.

In both these cases, we have assumed that there is only one kind of good in play. It can vary numerically in quality, which is measured in the same units as cost and utility.

In order to bridge from theory of information goods to theory of information services, we need to take into account a key feature of information services. Consumers buy information when they don’t know what it is they want, exactly. Producers of information services tailor what they provide to the specific needs of the consumers. This is true for information services like search engines but also other forms of expertise like physician’s services, financial advising, and education. It’s notable that these last three domains are subject to data protection laws in the United States (HIPAA, GLBA, and FERPA) respectively, and on-line information services are an area where privacy and data protection are a public concern. By studying the economics of information services and expertise, we may discover what these domains have in common.

Let’s consider just a single consumer and a single producer. The consumer has a utility function \vec{x} \sim X (that is, sampled from random variable X) specifying the value they get for the consumption of each of m = \vert J \vert products. We’ll denote with x_j the utility awarded to the consumer for the consumption of product j \in J.

The catch is that the consumer does not know X. What they do know is y \sim Y, which is correlated with X in some way that is unknown to them. The consumer tells the producer y, and the producer’s job is to recommend to them the j \in J that will most benefit them. We’ll assume that the producer is interested in maximizing consumer welfare in good faith because, for example, they are trying to promote their professional reputation and this is roughly in proportion to customer satisfaction. (Let’s assume they pass on the costs of providing the product to the consumer.)

As in the other cases, let’s consider first the case where the acting party has no useful information about the particular customer. In this case, the producer has to choose their recommendation \hat j based on their knowledge of the underlying probability distribution X, i.e.:

\hat j = arg \max_{j \in J} E[X_j]

where X_j is the probability distribution over x_j implied by X.

In the other extreme case, the producer has perfect information of the consumer’s utility function. They can pick the truly optimal product:

\hat j = arg \max_{j \in J} x_j

How much better off the consumer is in the second case, as opposed to the first, depends on the specifics of the distribution X. Suppose X_j are all independent and identically distributed. Then an ignorant producer would be indifferent to the choice of \hat j, leaving the expected outcome for the consumer E[X_j], whereas the higher the number of products m the more \max_{j \in J} x_j will approach the maximum value of X_j.
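
To put a number on that gap, suppose (purely for illustration) that the X_j are i.i.d. uniform on [0,1]. Then the ignorant recommendation is worth E[X_j] = 0.5 regardless of m, while the perfectly informed recommendation is worth E[\max_j x_j] = m/(m+1), which approaches 1 as the product menu grows. A short simulation sketch:

    import numpy as np

    rng = np.random.default_rng(0)
    n_consumers = 200_000

    for m in [1, 2, 5, 10, 50]:
        x = rng.random((n_consumers, m))       # x_ij ~ Uniform[0, 1], i.i.d.
        ignorant = x[:, 0].mean()              # any fixed j is as good as another
        informed = x.max(axis=1).mean()        # producer picks arg max_j x_ij
        print(f"m={m:3d}  ignorant={ignorant:.3f}  informed={informed:.3f}  "
              f"theory m/(m+1)={m/(m+1):.3f}")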

In the intermediate cases where the producer knows y which carries partial information about \vec{x}, they can choose:

\hat j = arg \max_{j \in J} E[X_j \vert y] =

arg \max_{j \in J} \sum x_j P(x_j = X_j \vert y) =

arg \max_{j \in J} \sum x_j P(y \vert x_j = X_j) P(x_j = X_j)

The precise values of the terms here depend on the distributions X and Y. What we can know in general is that the more informative y is about x_j, the more the likelihood term P(y \vert x_j = X_j) dominates the prior P(x_j = X_j) and the condition of the consumer improves.

Note that in this model, it is the likelihood function P(y \vert x_j = X_j) that is the special information that the producer has. Knowledge of how evidence (a search query, a description of symptoms, etc.) is caused by an underlying desire or need is the expertise consumers are seeking out. This begins to tie the economics of information to theories of statistical information.
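
Here is a toy version of the intermediate case, under illustrative assumptions of my own: each consumer has one truly best product t, the signal y reports t correctly with probability q and is otherwise uniform noise, and the producer knows this likelihood. With a uniform prior, the Bayesian update puts posterior mass q on j = y and (1-q)/(m-1) on every other product, so the expert’s best recommendation is simply \hat j = y, and consumer welfare interpolates between the ignorant and fully informed cases.

    import numpy as np

    rng = np.random.default_rng(0)
    m, q, n = 10, 0.6, 200_000          # products, signal accuracy, consumers

    t = rng.integers(0, m, n)           # latent "best product" for each consumer
    noise = rng.integers(1, m, n)       # used to corrupt the signal
    y = np.where(rng.random(n) < q, t, (t + noise) % m)   # noisy signal

    # Producer's Bayesian update with a uniform prior over types:
    # P(t = j | y) = q if j == y else (1 - q) / (m - 1).
    # The expected utility of recommending j is exactly this posterior, so the
    # partially informed recommendation is j_hat = y (since q > (1 - q)/(m - 1)).
    def welfare(recommendation):
        return (recommendation == t).mean()   # consumer gets 1 iff best product chosen

    print("ignorant producer (fixed j=0): ", welfare(np.zeros(n, dtype=int)))  # ~1/m
    print("partial information (uses y):  ", welfare(y))                       # ~q
    print("perfect information (knows t): ", welfare(t))                       # 1.0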


by Sebastian Benthall at September 11, 2017 01:25 AM

September 09, 2017

Ph.D. student

Formalizing welfare implications of price discrimination based on personal information

In my last post I formalized Richard Posner’s 1981 argument concerning the economics of privacy. This is just one case of the economics of privacy. A more thorough analysis of the economics of privacy would consider the impact of personal information flow in more aspects of the economy. So let’s try another one.

One major theme of Shapiro and Varian’s Information Rules (1999) is the importance of price differentiation when selling information goods and how the Internet makes price differentiation easier than ever. Price differentiation likely motivates much of the data collection on the Internet, though it’s a practice that long predates the Internet. Shapiro and Varian point out that the “special offers” one gets from magazines for an extension to a subscription may well offer a personalized price based on demographic information. What’s more, this personalized price may well be an experiment, testing for the willingness of people like you to pay that price. (See Acquisti and Varian, 2005 for a detailed analysis of the economics of conditioning prices on purchase history.)

The point of this post is to analyze how a firm’s ability to differentiate its prices is a function of the knowledge it has about its customers and hence outcomes change with the flow of personal information. This makes personalized price differentiation a sub-problem of the economics of privacy.

To see this, let’s assume there are a number of customers, i \in I, where the number of customers is n = \left\vert{I}\right\vert. Let’s say each has a willingness to pay for the firm’s product, x_i. Their willingness to pay is sampled from an underlying probability distribution x_i \sim X.

Note two things about how we are setting up this model. The first is that it closely mirrors our formulation of Posner’s argument about hiring job applicants. Whereas before the uncertain personal variable was aptitude for a job, in this case it is willingness to pay.

The second thing to note is that whereas it is typical to analyze price differentiation according to a model of supply and demand, here we are modeling the distribution of demand as a random variable. This is because we are interested in modeling information flow in a specific statistical sense. What we will find is that many of the more static economic tools translate well into a probabilistic domain, with some twists.

Now suppose the firm knows X but does not know any specific x_i. Knowing nothing to differentiate the customers, the firm will choose to offer the product at the same price z to everybody. Each customer will buy the product if x_i > z, and otherwise won’t. Each customer that buys the product contributes z to the firm’s utility (we are assuming an information good with near zero marginal cost). Hence, the firm will pick \hat z according to the following function:

\hat z = arg \max_z E[\sum_i z [x_i > z]]

= arg \max_z \sum_i E[z [x_i > z]]

= arg \max_z \sum_i z E[[x_i > z]]

= arg \max_z \sum_i z P(x_i > z)

= arg \max_z \sum_i z P(X > z)

Where [x_i > z] is a function with value 1 if x_i > z and 0 otherwise; this is using Iverson bracket notation.

This is almost identical to the revenue-optimizing strategy of price selection more generally, and it has a number of similar properties. One property is that for every customer for whom x_i > z, there is a consumer surplus of utility x_i - z, that feeling of joy the customer gets for having gotten something valuable for less than they would have been happy to pay for it. There is also the deadweight loss of customers for whom z > x_i. These customers get 0 utility from the product and pay nothing to the producer despite their willingness to pay.

Now consider the opposite extreme, wherein the producer knows the willingness to pay of each customer x_i and can pick a personalized price z_i accordingly. The producer can price z_i = x_i - \epsilon, effectively capturing the entire demand \sum_i x_i as producer surplus, while reducing all consumer surplus and deadweight loss to zero.

What are the welfare implications of the lack of consumer privacy?

Like in the case of Posner’s employer, the real winner here is the firm, who is able to capture all the value added to the market by the increased flow of information. In both cases we have assumed the firm is a monopoly, which may have something to do with this result.

As for consumers, there are two classes of impact. For those with x_i > \hat z, having their personal willingness to pay revealed to the firm means that they lose their consumer surplus. Their welfare is reduced.

For those consumers with x_i < \hat z, the situation is different: they discover that they can now afford the product, as it is priced just below their willingness to pay.

Unlike in Posner’s case, “the people” here are more equal when their personal information is revealed to the firm, because now the firm is extracting every spare ounce of joy it can from each of them, whereas before some consumers were able to enjoy low prices relative to their idiosyncratically high appreciation for the good.

What if the firm has access to partial information about each consumer y_i that is a clue to their true x_i without giving it away completely? Well, since the firm is a Bayesian reasoner they now have the subjective belief P(x_i \vert y_i) and will choose each z_i in a way that maximizes their expected profit from each consumer.

z_i = \arg\max_z E[z [x_i > z] \vert y_i] = \arg\max_z z P(x_i > z \vert y_i)

The specifics of the distributions X, Y, and P(Y | X) all matter for the particular outcomes here, but intuitively one would expect the results of partial information to fall somewhere between the extremes of undifferentiated pricing and perfect price discrimination.
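As a sketch of this intermediate case, the following Python snippet is not from the original post and makes specific distributional assumptions: willingness to pay is normal, and the firm observes a noisy Gaussian signal y_i = x_i + noise, so the posterior P(x_i \vert y_i) has a simple conjugate form. Each personalized price maximizes z P(x_i > z \vert y_i) over a grid.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Illustrative assumptions: X ~ Normal(mu, sigma^2), signal Y = X + Normal(0, tau^2).
mu, sigma, tau = 50.0, 15.0, 10.0
n = 5_000
x = rng.normal(mu, sigma, size=n)
y = x + rng.normal(0.0, tau, size=n)

# Conjugate normal posterior P(x_i | y_i): Normal(post_mean_i, post_sd).
w = sigma**2 / (sigma**2 + tau**2)
post_mean = w * y + (1 - w) * mu
post_sd = np.sqrt(w * tau**2)

# Personalized price: z_i = argmax_z z * P(x_i > z | y_i), searched on a shared grid.
grid = np.linspace(0.0, x.max(), 500)
expected_revenue = grid * norm.sf((grid[None, :] - post_mean[:, None]) / post_sd)
z = grid[np.argmax(expected_revenue, axis=1)]

buyers = x > z
print(f"realized revenue with partial information: {z[buyers].sum():.0f}")
print(f"upper bound (perfect price discrimination): {x[x > 0].sum():.0f}")
```

Varying tau moves the outcome between the two extremes discussed above: as the signal noise shrinks, realized revenue approaches the perfect-discrimination upper bound.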

Perhaps the more interesting consequence of this analysis is that the firm has, for each consumer, a subjective probabilistic distribution of that consumer’s demand. Their best strategy for choosing the personalized price is similar to that of choosing a price for a large uncertain consumer demand base, only now the uncertainty is personalized. This probabilistic version of classic price differentiation theory may be more amenable to Bayesian methods, data science, etc.

References

Acquisti, A., & Varian, H. R. (2005). Conditioning prices on purchase history. Marketing Science, 24(3), 367-381.

Shapiro, C., & Varian, H. R. (1998). Information rules: a strategic guide to the network economy. Harvard Business Press.


by Sebastian Benthall at September 09, 2017 02:15 PM

September 07, 2017

Ph.D. student

Formalizing Posner’s economics of privacy argument

I’d like to take a more formal look at Posner’s economics of privacy argument, in light of other principles in economics of information, such as those in Shapiro and Varian’s Information Rules.

By “formal”, what I mean is that I want to look at the mathematical form of the argument. This is intended to strip out some of the semantics of the problem, which in the case of economics of privacy can lead to a lot of distracting anxieties, often for legitimate ethical reasons. However, there are logical realities that one must face despite the ethical conundrums they cause. Indeed, if there weren’t logical constraints on what is possible, then ethics would be unnecessary. So, let’s approach the blackboard, shall we?

In our interpretation of Posner’s argument, there are a number of applicants for a job, i \in I, where the number of candidates is n = \left\vert{I}\right\vert. Let’s say each is capable of performing at a certain level based on their background and aptitude, x_i. Their aptitude is sampled from an underlying probability distribution x_i \sim X.

There is an employer who must select an applicant for the job. Let’s assume that their capacity to pay for the job is fixed, for simplicity, and that all applicants are willing to accept the wage. The employer must pick an applicant i and gets utility x_i for their choice. Given no information on which to base her choice, she chooses a candidate randomly, which is equivalent to sampling once from X. Her expected value, given no other information on which to make the choice, is E[X]. The expected welfare of each applicant is their utility from getting the job (let’s say it’s 1 for simplicity) times their probability of being picked, which comes to \frac{1}{n}.

Now suppose the other extreme: the employer has perfect knowledge of the abilities of the applicants. Since she is able to pick the best candidate, her utility is \max x_i. Let \hat i = arg\max_{i \in I} x_i. Then the utility for applicant \hat i is 1, and it is 0 for the other applicants.

Some things are worth noting about this outcome. There is more inequality: all expected utility from the less qualified applicants has moved to the most qualified applicant. There is also an expected surplus of (\max x_i) - E[X] that accrues to the totally informed employer. One wonders whether a “safety net” could be provided to those who lose out in this change; if so, it would presumably be funded from this surplus. If the surplus were entirely taxed and redistributed among the applicants who did not get the job, it would provide each rejected applicant with \frac{(\max x_i) - E[X]}{n-1} utility. Adding a little complexity to the model, we could be more precise by computing the wage paid to the worker and identifying whether redistribution could recover the losses of the weaker applicants.
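A minimal simulation makes the size of this surplus concrete. This sketch is not from the original post; the uniform aptitude distribution and the number of applicants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumption: aptitude X ~ Uniform(0, 1); n applicants, many simulated hires.
n, trials = 10, 100_000
x = rng.uniform(0.0, 1.0, size=(trials, n))

no_info = x[:, 0].mean()          # random pick, approximates E[X]
full_info = x.max(axis=1).mean()  # informed pick, approximates E[max_i x_i]
surplus = full_info - no_info     # employer's gain from full information

# If the surplus were taxed and split among the n-1 rejected applicants:
per_rejected = surplus / (n - 1)

print(f"E[X] (no information):         {no_info:.3f}")
print(f"E[max x_i] (full information): {full_info:.3f}")
print(f"employer surplus:              {surplus:.3f}")
print(f"redistribution per rejected applicant: {per_rejected:.4f}")
```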

What about intermediate conditions? These get more analytically complex. Suppose that each applicant i produces an application y_i which is reflective of their abilities. When the employer makes her decision, her posterior belief about the performance of each applicant is

P(x_i \vert y_i) \propto P(y_i \vert x_i)P(x_i)

because naturally the employer is a Bayesian reasoner. She makes her decision by maximizing her expected gain, based on this evidence:

\hat i = \arg\max_i E[x_i \vert y_i]

= \arg\max_i \sum_{x_i} x_i p(x_i \vert y_i)

= \arg\max_i \sum_{x_i} x_i \frac{p(y_i \vert x_i) p(x_i)}{p(y_i)}

The particulars of the distributions X and Y, and especially P(Y \vert X), matter a great deal to the outcome. But from the expanded form of the equation we can see that the more revealing y_i is about x_i, the more the likelihood term p(y_i \vert x_i) will overcome the prior expectations. It would be nice to capture the impact of this additional information in a general way. One would think that providing limited information about applicants to the employer would result in an intermediate outcome: under reasonable assumptions, more qualified applicants would be more likely to be hired and the employer would accrue more value from the work.
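One way to see the intermediate regime is to simulate it under specific assumptions. The sketch below is not from the original post: it assumes aptitude is standard normal and the application is a Gaussian-noised signal of aptitude, so the posterior mean has a closed form. As the signal noise grows, the expected aptitude of the hired applicant falls from E[\max x_i] toward E[X].

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions: aptitude X ~ Normal(0, 1); application signal Y = X + noise.
n, trials = 10, 50_000

def expected_hire_quality(noise_sd):
    x = rng.normal(0.0, 1.0, size=(trials, n))
    y = x + rng.normal(0.0, noise_sd, size=(trials, n))
    # Posterior mean E[x_i | y_i] under the conjugate normal model.
    post_mean = y / (1.0 + noise_sd**2)
    picked = np.argmax(post_mean, axis=1)
    return x[np.arange(trials), picked].mean()

for noise_sd in [0.1, 1.0, 3.0, 10.0]:
    print(f"signal noise {noise_sd:>4}: expected hired aptitude "
          f"{expected_hire_quality(noise_sd):.3f}")
```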

What this goes to show is that one’s evaluation of Posner’s argument about the economics of privacy really has little to do with the way one feels about privacy and much more to do with how one feels about equality and economic surplus. I’ve heard that a similar result has been discovered by Solon Barocas, though I’m not sure where in his large body of work to find it.


by Sebastian Benthall at September 07, 2017 10:21 PM

September 06, 2017

Ph.D. student

From information goods to information services

Continuing to read through Information Rules, by Shapiro and Varian (1999), I’m struck once again by its clear presentation and precise wisdom. Many of the core principles resonate with my experience in the software business when I left it in 2011 for graduate school. I think it’s fair to say that Shapiro and Varian anticipated the following decade of  the economics of content and software distribution.

What they don’t anticipate, as far as I can tell, is what has come to dominate the decade after that, this decade. There is little in Information Rules that addresses the contemporary phenomena of cloud computing and information services, such as Software-as-a-Service, Platforms-as-a-Service, and Infrastructure-as-a-Service. Yet these are clearly the kinds of services that have come to dominate the tech market.

That’s an opening. According to a business manager in 2014, there’s no book yet on how to run an SaaS company. While I’m sure that if I were slightly less lazy I would find several, I wonder if they are any good. By “any good”, I mean: would they hold up to scientific standards in their elucidation of economic law, as opposed to being, you know, business books?

One of the challenges of working on this, which has bothered me since I first became curious about these problems, is that there is no good, elegant formalism available for representing competition between computing agents. The best that’s out there is probably in the AI literature. But that literature is quite messy.

Working up from something like Information Rules might be a more promising way of getting at some of these problems. For example, Shapiro and Varian start from the observation that information goods have high fixed (often sunk) costs and low marginal costs of reproduction. This leads them to the conclusion that the market cannot look like a traditional competitive market with multiple firms selling similar goods; rather, it must either be dominated by a single firm or consist of many similar but differentiated products.

The problem here is that most information services, even “simple” ones like a search engine, are not delivering a good. They are being responsive to some kind of query. The specific content and timing of the query, along with the state of the world at the time of the query, are unique. Consumers may make the same query with varying demand. The value-adding activity is not so much creating the good as it is selecting the right response to the query. And who can say how costly this is, marginally?

On the other hand, this framing obscures something important about information goods, which is that all information goods are, in a sense, a selection of bits from the wide range of possible bits one might send or receive. This leads to my other frustration with information economics, which is that it is insufficiently tied to the statistical definition of information and the modeling tools that have been built around it. This is all the more frustrating because I suspect that in advanced industrial settings these connections have been made and are used with confidence. However, they have been slow to make it into mainstream understanding. There’s another opportunity here.


by Sebastian Benthall at September 06, 2017 01:37 AM

September 02, 2017

MIMS 2014

Movies! (Now with More AI!!)

Earlier, I played around with topic modeling/recommendation engines in Apache Spark. Since then, I’ve been curious to see if I could make any gains by adopting another text processing approach in place of topic modeling—word2vec. For those who don’t know word2vec, it takes individual words and maps them into a vector space where the vector weights are determined by a neural network that trains on a corpus of text documents.

I won’t go into major depth on neural networks here (a gentle introduction for those who are interested), except to say that they are considered by many to be the bleeding edge of artificial intelligence. Personally, I like word2vec because you don’t necessarily have to train the vectors yourself. Google has pre-trained vectors derived from a massive corpus of news documents they’ve indexed. These vectors are rich in semantic meaning, so it’s pretty cool that you can leverage their value with no extra work. All you have to do is download the (admittedly large, 1.5 gig) file onto your computer and you’re good to go.

Almost. Originally, I had wanted to do this on top of my earlier spark project, using the same pseudo-distributed docker cluster on my old-ass laptop. But when I tried to load the pre-trained Google word vectors into memory, I got a big fat MemoryError, which I actually thought was pretty generous because it was nice enough to tell me exactly what it was.

I had three options: commandeer some computers in the cloud on Amazon, try to finagle spark’s configuration like I did last time, or finally, try running Spark in local mode. Since I am still operating on the cheap, I wasn’t gonna go with option one. And since futzing around with Spark’s configuration put me in a dead end last time, I decided to ditch the pseudo-cluster and try running Spark in local mode.

Although local mode was way slower on some tasks, it could still load Google’s pre-trained word2vec model, so I was in business. Similar to my approach with topic modeling, I created a representative vector (or ‘profile’) for each user in the Movielens dataset. But whereas in the topic model, I created a profile vector by taking the max value in each topic across a user’s top-rated movies, here I instead averaged the vectors I derived from each movie (which were themselves averages of word vectors).

Let’s make this a bit more clear. First you take a plot summary scraped from Wikipedia, and then you remove common stop words (‘the’, ‘a’, ‘my’, etc.). Then you pass those words through the pre-trained word2vec model. This maps each word to a vector of length 300 (a word vector can in principle be of any length, but Google’s are of length 300). Now you have D vectors of length 300, where D is the number of words in a plot summary. If you average the values in those D vectors, you arrive at a single vector that represents one movie’s plot summary.
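For readers who want to try this themselves, here is a minimal sketch of the averaging step using gensim rather than the original Spark pipeline. It assumes you have downloaded the pre-trained GoogleNews-vectors-negative300.bin file; the toy stop word list and the movie_vector helper are illustrative, not the post’s actual code.

```python
import numpy as np
from gensim.models import KeyedVectors

# Assumption: the pre-trained Google News vectors have been downloaded locally.
kv = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)

STOPWORDS = {"the", "a", "an", "my", "of", "and", "to", "in"}  # toy stop word list

def movie_vector(plot_summary: str) -> np.ndarray:
    """Average the word2vec vectors of the non-stop-words in a plot summary."""
    words = [w for w in plot_summary.lower().split()
             if w not in STOPWORDS and w in kv]
    if not words:
        return np.zeros(kv.vector_size)
    return np.mean([kv[w] for w in words], axis=0)  # one 300-dim vector per movie
```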

Note: there are other ways of aggregating word vectors into a single document representation (including doc2vec), but I proceeded with averages because I was curious to see whether I could make any gains by using the most dead simple approach.

Once you have an average vector for each movie, you can get a profile vector for each user by averaging (again) across a user’s top-rated movies. At this point, recommendations can be made by ranking the cosine similarity between a user’s profile and the average vectors for each movie. This could power a recommendation engine on its own, or supplement explicit ratings for (user, movie) pairs that aren’t observed in the training data.
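The profile-and-ranking step might look something like the following sketch. The movie_vectors dictionary and top_rated_ids list are hypothetical inputs standing in for the Movielens-derived data; this is an illustration of the approach, not the post’s actual code.

```python
import numpy as np

def user_profile(movie_vectors: dict, top_rated_ids: list) -> np.ndarray:
    """Average the movie vectors of a user's top-rated movies."""
    return np.mean([movie_vectors[m] for m in top_rated_ids], axis=0)

def recommend(profile: np.ndarray, movie_vectors: dict, k: int = 10) -> list:
    """Rank movies by cosine similarity between the user profile and each movie vector."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    scores = {m: cosine(profile, v) for m, v in movie_vectors.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```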

Cognizant of the hardware limitations I ran up against last time, I opted for the same approach I adopted then, which was to pretend I knew less about users and their preferences than I really did. My main goal was to see whether word2vec could beat out the topic modeling approach, and in fact it did. With 25% of the data covered up, the two algorithms performed roughly the same against the covered-up data. But with 75% of the data covered up, word2vec resulted in an 8% performance boost (as compared with the 3% gained from topic modeling).

So with very little extra work (simple averaging and pre-trained word vectors), word2vec has pretty encouraging out-of-the-box performance. It definitely makes me eager to use word2vec in the future.

Also a point in word2vec’s favor: when I sanity-checked the cosine similarity scores of word2vec’s average vectors across different movies, The Ipcress File shot to the top of the list of movies most similar to The Bourne Ultimatum. Still don’t know what The Ipcress File is? Then I don’t feel bad re-using the same joke as a meme sign-off.



by dgreis at September 02, 2017 08:23 PM

August 28, 2017

Ph.D. student

Shapiro and Varian: scientific “laws of economics”

I’ve been amiss in not studying Shapiro and Varian’s Information Rules: A Strategic Guide to the Network Economy (1998, link) more thoroughly. In my years in the tech industry and academic study, I have encountered few sources that deal with the practical realities of technology and society as clearly as Shapiro and Varian do. As I now turn my attention more towards the rationale for various forms of information law, and find how much of it is driven by considerations of economics, I have to wonder why I haven’t given this more emphasis in my graduate study so far.

The answer that comes immediately to mind is that throughout my academic study of the past few years I’ve encountered a widespread hostility to economics from social scientists of other disciplines. This hostility resembles, though is somewhat different from, the hostility social scientists of other stripes have had (in my experience) for engineers. The critiques have been along the lines that economists are powerful disproportionately to the insight provided by the field, that economists are focused too narrowly on certain aspects of social life to the exclusion of others that are just as important, that economists are arrogant in their belief that their insights about incentives apply to other areas of social life besides the narrow concerns of the economy, that economists mistakenly think their methods are more scientific or valid than those of other social scientists, that economics is in the business of enshrining legal structures into place that give their conclusions more predictive power than they would have in other legal regimes and, as of the most recent news cycle, that the field of economics is hostile to women.

This is a strikingly familiar pattern of disciplinary critique, as it seems to be the same one levied at any field that aims to “harden” inquiry into social life. The encroachment of engineering disciplines and physicists into social explanation has come with similar kinds of criticism. These criticisms, it must be noted, contain at least one contradiction: should economists be concerned about issues besides the economy, or not? But the key issue, as with most disciplinary spats, is the politics of a lot of people feeling dismissed or unheard or unfunded.

Putting all this aside, what’s interesting about the opening sections of Shapiro and Varian’s book is their appeal to the idea of laws of economics, as if there were such laws analogous to laws of physics. The idea is that trends in the technology economy are predictable according to these laws, which have been learned through observation and formalized mathematically, and that these laws should therefore be taught for the benefit of those who would like to participate successfully in that economy.

This is an appealing idea, though one that comes under criticism, you know, from the critics, with a predictability that almost implies a social scientific law. This has been a debate going back to discussions of Marx and communism. Early theorists of the market declared themselves to have discovered economic laws. Marx, incidentally, also declared that he had discovered (different) economic laws, albeit according to the science of dialectical materialism. But the latter declared that the former economic theories hide the true scientific reality of the social relations underpinning the economy. These social relations allowed for the possibility of revolution in a way that an economy of goods and prices abstracted from society did not.

As one form of the story goes, the 20th century had its range of experiments with ways of running an economy. Those most inspired by Marxism had mass famines and other unfortunate consequences. Those that took their inspiration from the continually evolving field of increasingly “neo”-classical economics, with its variations of Keynesianism, monetarism, and the rest, had some major bumps (most recently the 2008 financial crisis) but tend to improve over time with historical understanding and the discovery of, indeed, laws of economics. And this is why Janet Yellen and Mario Draghi are now warning against removing the post-crisis financial market regulations.

This offers an anecdotal counter to the narrative that all economists ever do is justify more terrible deregulation at the expense of the lived experience of everybody else. The discovery of laws of economics can, indeed, be the basis for economic regulation; in fact this is often the case. In point of fact, it may be that this is one of the things that tacitly motivates the undermining of economic epistemology: the fact that if the laws of economics were socially determined to be true, like the laws of physics, such that everybody ought to know them, it would lead to democratic will for policies that would be opposed to the interests of those who have heretofore enjoyed the advantage of their privileged (i.e., not universally shared) access to the powerful truth about markets, technology, etc.

Which is all to say: I believe that condemnations of economics as a field are quite counterproductive, socially, and that the scientific pursuit of the discovery of economic laws is admirable and worthy. Those that criticize economics for this ambition, and teach their students to do so, imperil everyone else and should stop.


by Sebastian Benthall at August 28, 2017 05:12 PM

August 25, 2017

Ph.D. student

Reason returns to Berkeley

I’ve been struck recently by a subtle shift in messaging at UC Berkeley since Carol T. Christ became the university’s Chancellor. Incidentally, she is the first woman chancellor of the university, with a research background in Victorian literature. I think both of these things may have something to do with the bold choice she’s made in recent announcements: the inclusion of reason among the University’s core values.

Notably, the word has made its appearance next to three other terms that have had much more prominence in the university in recent years: equity, inclusion, and diversity. For example, in the following statements:

In “Thoughts on Charlottesville”:

We must now come together to oppose what are dangerous threats to the values we hold dear as a democracy and as a nation. Our shared belief in reason, diversity, equity, and inclusion is what animates and supports our campus community and the University’s academic mission. Now, more than ever, those values are under assault; together we must rise to their defense.

And, strikingly, this message on “Free Speech”:

Nonetheless, defending the right of free speech for those whose ideas we find offensive is not easy. It often conflicts with the values we hold as a community—tolerance, inclusion, reason and diversity. Some constitutionally-protected speech attacks the very identity of particular groups of individuals in ways that are deeply hurtful. However, the right response is not the heckler’s veto, or what some call platform denial. Call toxic speech out for what it is, don’t shout it down, for in shouting it down, you collude in the narrative that universities are not open to all speech. Respond to hate speech with more speech.

The above paragraph comes soon after this one, in which Chancellor Christ defends Free Speech on Millian philosophical grounds:

The philosophical justification underlying free speech, most powerfully articulated by John Stuart Mill in his book On Liberty, rests on two basic assumptions. The first is that truth is of such power that it will always ultimately prevail; any abridgement of argument therefore compromises the opportunity of exchanging error for truth. The second is an extreme skepticism about the right of any authority to determine which opinions are noxious or abhorrent. Once you embark on the path to censorship, you make your own speech vulnerable to it.

This slight change in messaging strikes me as fundamentally wise. In the past year, the university has been wracked by extreme passions and conflicting interests, resulting in bad press externally and I imagine discomfort internally. But this was not unprecedented; the national political bifurcation could take hold at Berkeley precisely because it had for years been, with every noble intention, emphasizing inclusivity and equity without elevating a binding agent that makes diversity meaningful and productive. This was partly due to the influence of late 20th century intellectual trends that burdened “reason” with the historical legacy of those regimes that upheld it as a virtue, which tended to be white and male. There was a time when “reason” was so associated with these powers that the term was used for the purposes of exclusion–i.e. with the claim that new entrants to political and intellectual power were being “unreasonable”.

Times have changed precisely because the exclusionary use of “reason” was a corrupt one; reason in its true sense is impersonal and transcends individual situation even as it is immanent in it. This meaning of reason would be familiar to one steeped in an older literature.

Carol Christ’s wording reflects a 21st century theme which to me gives me profound confidence in Berkeley’s future: the recognition that reason does not oppose inclusion, but rather demands it, just as scientific logic demands properly sampled data. Perhaps the new zeitgeist at Berkeley has something to do with the new Data Science undergraduate curriculum. Given the state of the world, I’m proud to see reason make a comeback.


by Sebastian Benthall at August 25, 2017 02:52 PM

August 24, 2017

Center for Technology, Society & Policy

Preparing for Blockchain

by Ritt Keerati, CTSP Fellow | Permalink

Policy Considerations and Challenges for Financial Regulators (Part I)

Blockchain―a distributed ledger technology that maintains a continuously-growing list of records―is an emerging technology that has captured the imagination and investment of Silicon Valley and Wall Street. The technology has propelled the invention of virtual currencies such as Bitcoin and now holds promise to revolutionize a variety of industries including, most notably, the financial sector. Accompanying its disruptive potential, blockchain also carries significant implications and raises questions for policymakers. How will blockchain change the ways financial transactions are conducted? What risks will that pose to consumers and the financial system? How should the new technology be regulated? What roles should the government play in promoting and managing the technology?

Blockchain represents a disruptive technology because it enables the creation of a “trustless network.” The technology enables parties lacking pre-existing trust to transact with one another without the need for intermediaries or central authority. It may revolutionize how financial transactions are conducted, eliminate certain roles of existing institutions, improve transaction efficiencies, and reduce costs.

Despite its massive potential, blockchain is still in its early innings in terms of deployment. So far, the adoption of blockchain within the financial industry has been to facilitate business-to-business transactions or to improve record-keeping processes of existing financial institutions. Besides Bitcoin, direct-to-consumer applications remain limited, and such applications still rely on the existing financial infrastructure. For instance, although blockchain has the potential to disintermediate banks and enable customers to transfer money directly between each other, money transfer applications using blockchain are still linked to bank accounts. As a result, financial institutions still serve as gatekeepers, helping ensure regulatory compliance and consumer protection.

With that said, new use-cases of blockchain are emerging rapidly, and accompanying these developments are risks and challenges. From the regulators’ perspectives, below are some of the key risks that financial regulators must consider in dealing with the emergence of blockchain:

  • Lack of Clarity on Compliance Requirements: New use-cases of blockchain—such as digital token and decentralized payment system—raise questions about applicability of the existing regulatory requirements. For instance, how should an application created by a community of developers to facilitate transfer of digital currencies be regulated? Who should be regulated, given that the software is created by a group of independent developers? How should state-level regulations be applied, especially if the states cannot identify the actual users given blockchain anonymity? Such lack of clarity could lead to the failure to comply and/or higher costs of compliance.
  • Difficulty in Adjusting Regulations to Handle Industry Changes: The lack of effective engagement by the regulators could prevent them from acquiring sufficient knowledge about the technology to be able to issue proper rules and responses or to assist Congress in devising appropriate legislation. For instance, there remains a disagreement on how digital tokens should be treated: as currencies, commodities, or securities?
  • Risks from Industry Front-running Policymakers: The lack of clarity on the existing regulatory framework, coupled with the possible emergence of new regulations, could incentivize some industry players to “front-run” the regulators by rolling out their products before new guidelines emerge, in the hope of forcing the regulators to yield to industry demand. The most evident comparison is Uber, which has continued to operate in violation of labor laws.
  • Challenges arising from New Business Models: Blockchain will propel several new business models, some of which could pose regulatory challenges and unknown consequences. For instance, the emergence of a decentralized transaction system raises questions about how such a system should be regulated, how to confirm the identities of relevant parties, how to prevent fraud and money laundering, who to be responsible in the case of fraud and errors, and more.
  • Potential Technical Issues: Blockchain is a new technology—it has been in existence for less than a decade. Therefore, the robustness of the technology has not yet been proven. In fact, there remain several issues to be resolved even with Bitcoin―the most recognized blockchain application―such as scalability, lag time, and other technical glitches. Moreover, features such as identity verification, privacy, and security also have not been fully integrated. Finally, the use of blockchain to upgrade the technical infrastructure also raises questions about interoperability, technology transition, and system robustness.
  • Potential New Systemic Risks: Blockchain has the potential to transform the nature of the transaction network from a centralized to a decentralized system. In addition, it enhances the speed of transaction settlement and clearing and improves transaction visibility. Questions remain whether these features will increase or undermine the stability of the financial system. For instance, given the transaction expediency enabled by blockchain, will the regulators be able to analyze transactional data in real-time, and will they be able to respond quickly to prevent a potential disaster?
  • Risks from Bad Actors: Any financial system is exposed to risks from bad actors; unfortunately, frauds, pyramid schemes, and scams are bound to happen. Because blockchain and digital currencies are new, such risks are potentially heightened as consumers, companies, and regulators are less familiar with the technology. The fact that blockchain changes the way people do business also raises questions about who should be responsible in the case of frauds, whether the damaged parties should be protected and compensated, and who should bear the responsibility of preventing such events and safeguarding consumers.
  • Other Potential Challenges and Opportunities: Blockchain’s revolutionary potential could unveil other policy and societal challenges, not only in the financial industry but also to the society at large. For instance, blockchain could alter the roles of some financial intermediaries, such as banks and brokers, leading to job shrinkage and displacement. At the same time, it could provide other opportunities that would benefit society.

Because blockchain has the potential to transform several industries and because the technology is evolving rapidly, unified and consistent engagement by financial regulators is crucial. However, based on the current dynamics, there is a lack of unified and effective engagement by regulators and legislators in the development and deployment of blockchain technology. The regulators, therefore, must find better ways to interact with the financial and technology industries, balancing between (1) regulating too loosely and thereby introducing risks into the financial system, and (2) regulating too tightly and thereby stifling innovation. Such engagement should aim to help the government monitor activities within industry, learn about the technology and its use-cases, collaborate with industry players, and lead the industry to produce public benefits. Policy alternatives that would facilitate such engagement should aim to achieve the following three objectives:

  • Engage Policymakers in Discussions on Blockchain in Unified and Effective Manners: The policy should promote collaboration between the regulators and industry participants as well as coordination across regulatory agencies. It should create a platform that allows the regulators to (1) convey clear and consistent messages to industry participants, (2) learn from such interaction and use the lessons learned to adjust their rules and responses, (3) provide appropriate recommendations to legislators to help them adjust the policy frameworks, if necessary.
  • Allow Policymakers to Ensure Regulatory Compliance and Maintain Stability of the Financial System: Second, the policy should enable the regulators to ensure industry compliance. More importantly, it should preserve the stability of the financial system. This means that the policy should allow the regulators to anticipate and respond quickly to potential risks that may be introduced by the technology into the financial system.
  • Promote Technological Innovation in Blockchain / distributed ledger technology: Finally, while the policy should aim to enhance the regulators’ understanding of the technology, it should refrain from undermining the industry’s incentives to innovate and utilize the technology. While regulatory compliance and consumer protection are crucial, they should not come at the price of innovation.

Part II of this series will discuss potential alternatives that policymakers may utilize to enhance collaboration among various regulatory agencies and to improve interactions with industry participants.


Policy Alternatives for Financial Regulators and Policymakers (Part II)

In Part I, we discussed potential regulatory concerns arising from the emergence of blockchain technology. Such issues include lack of clarity on compliance requirements, challenges in regulating new business models, potential technical glitches, potential new systemic risks, and challenges in controlling bad actors.

To mitigate these issues, effective interaction between regulators and industry participants is crucial. Currently, there is a lack of unified and effective engagement by regulators and policymakers in the development and deployment of blockchain. Soundly addressing these matters will require better collaboration among regulators and more frequent interaction with industry participants. Rather than maintaining the status quo, policymakers may choose among these alternatives to enhance collaboration between the regulators and industry participants:

  • Adjustment of Existing Regulatory Framework: Under this approach, the regulators either modify the existing laws or issue new laws to facilitate the emergence of the new technology. Examples of this approach include (1) the plan by the Office of the Comptroller of the Currency (OCC) to issue fintech charter to technology companies offering financial services and (2) the enactment of BitLicense regulation by the State of New York. Essentially, this policy alternative allows financial regulators to create a “derivative framework” based on existing regulations.
  • Issuance of Regulatory Guideline: Because some regulations are ambiguous when applied to blockchain-based businesses, regulatory agencies may choose to provide preliminary perspectives on how they plan to regulate the new technology. This may come in the form of a statement specifying how the regulators plan to manage blockchain applications, how active or passive the regulators will engage with industry players, how strict or flexible the rules will be, what the key priorities are, and how the regulators plan to use the technology themselves. Such a guideline will provide industry participants with added clarity, while offering them flexibility and autonomy for self-regulation.
  • Creation of Multi-Party Working Group: A multi-party working group represents an effort by regulatory agencies and industry participants to collaborate and arrive at a standard framework or shared best practices for technology development and regulation. Under this approach, various regulatory agencies would work together to formulate and issue a single policy framework for the industry. They may also collaborate with industry participants to learn from their experiences and take their feedback into account when adjusting their policies.
  • Establishment of Regulatory Sandbox: Several foreign regulators—such as the United Kingdom, Singapore, Australia, Hong Kong, France, and Canada—have established regulatory sandboxes to manage the emergence of blockchain. A sandbox essentially provides a well-defined space in which companies can experiment with new technology and business models in a relaxed regulatory environment and in some cases with support of the regulators for a period of time. This leads to several potential benefits, including: reduced time-to-market of new technology, reduced cost, better access to financing for companies, and more innovative products reaching the market.

Each of the aforementioned policy alternatives has different advantages and disadvantages. For instance, while the status quo is clearly the easiest to implement, it fails to solve many policy problems arising from the existing regulatory framework. On the other hand, although a regulatory sandbox would be the most effective at promoting innovation while protecting consumers, it would also be the most difficult to implement and the costliest to scale. Given the trade-offs between these alternatives and the fact that they are not mutually exclusive, the best solution will likely be a combination of some or all of the above approaches. Specifically, this report recommends a three-pronged approach:

  • Issuance of Regulatory Guideline: Financial regulators should provide a general guideline of how they plan to regulate blockchain-based applications. Such guideline should include details such as: key priorities from the regulators’ perspectives (such as consumer protection and overall financial stability), the nature of engagement between the regulators and industry players (such as how active the regulators plan to monitor companies’ activities and how much leeway the industry will have for self-regulation), how the regulators plan to address potential issues that may arise (such as those arising from the incompatibility between the new business models and the existing regulations), and how industry players may correspond with the regulators to avoid noncompliance. To the extent that such an indication could come from the President, it would also provide consistency in the framework across agencies.
  • Creation of Public-Private Working Group: The regulators should establish a public-private working group that would allow various financial regulatory agencies and industry players to interact, share insights and best practices, and brainstorm ideas to promote innovation and effective regulation. Participants in the working group will include representatives from various financial regulatory agencies as well as industry players. The working group will aim to promote knowledge sharing, while the actual authority will remain with each regulatory agency. It will also serve as a central point of contact when interacting with foreign and international agencies. Note that although similar working groups, such as the Blockchain Alliance, exist currently, they are typically spearheaded by the industry and geared toward promoting industry’s preferences. The regulators should instead create their own platform that would allow them to learn about the technology, discuss emerging risks and potential options, and explore potential policy options in an unbiased fashion.
  • Enactment of Suitable Safe Harbor: Although blockchain may expose consumers and the financial system to some risks, regulators may not need to regulate every minute aspect of these new use-cases, particularly if the risks are small. Hence, under certain conditions, the regulators may consider creating a safe harbor that would allow industry players to experiment with their ideas without being overly concerned with the regulatory burden, while also limiting the risks to consumers and the financial system. For instance, with respect to money transfer applications, FinCEN may consider creating a safe harbor for transactions below a certain amount.

This recommendation essentially aims to promote a prudent and flexible market-based solution. The recommendation affords industry players the freedom to operate within the existing regulatory environment, while also giving them greater clarity on the applicability of the regulations and enabling productive interaction with the regulators. It also allows the regulators to protect consumers and the financial system without stifling innovation. Lastly, this solution is viable within the existing political context and despite the complex regulatory regime that exists currently.

For policymakers, the most important near-term goal should be to ensure that regulators are well educated about blockchain and that they understand its trends and implications. With respect to regulatory compliance, policymakers should be attentive to the adoption of the technology by existing financial institutions, particularly in the area of money transfer, clearing and settlement of assets, and trade finance. Longer-term, Congress also ought to find ways to reform the existing financial regulatory framework and to consolidate both regulatory agencies and regulations in order to reduce cross-jurisdictional complexity and promote innovation and efficiency.

The emergence of blockchain and distributed ledger technology represents a potential pivot point in the ongoing global effort to apply technology to improve the financial system. The United States has the opportunity to strengthen its leadership in the world of global finance by pursuing supportive policies that promote financial technology innovation, while making sure that consumers are protected and the financial system remains sound. This will require a policy framework that balances an open-market approach with circumspect supervision. The next 5-10 years represent an opportune time for U.S. policymakers to evaluate their approaches toward financial regulation, pursue necessary reform and adjustment efforts, and work together with technology companies and financial institutions to make the United States both a global innovation hub and an international financial center.


Link: Preparing for Blockchain Whitepaper

by Rohit Raghavan at August 24, 2017 02:50 AM

August 23, 2017

Ph.D. student

Notes on Posner’s “The Economics of Privacy” (1981)

Lately my academic research focus has been privacy engineering, the design of information processing systems that preserve their users’ privacy. I have been looking at the problem particularly through the lens of Contextual Integrity, a theory of privacy developed by Helen Nissenbaum (2004, 2009). According to this theory, privacy is defined as appropriate information flow, where “appropriateness” is determined relative to social spheres (such as health, education, finance, etc.) that have evolved norms based on their purpose in society.

To my knowledge, most existing scholarship on Contextual Integrity consists of applications of a heuristic process, associated with the theory, that evaluates the privacy impact of new technology. In this process, one starts by identifying a social sphere (or context, but I will use the term social sphere as I think it’s less ambiguous) and its normative structure. For example, if one is evaluating the role of a new kind of education technology, one would identify the roles of the education sphere (teachers, students, guardians of students, administrators, etc.), the norms of information flow that hold in the sphere, and the disruptions to these norms the technology is likely to cause.

I’m coming at this from a slightly different direction. I have a background in enterprise software development, data science, and social theory. My concern is with the ways that technology is now part of the way social spheres are constituted. For technology to not just address existing norms but deal adequately with how it self-referentially changes how new norms develop, we need to focus on the parts of Contextual Integrity that have heretofore been in the background: the rich social and metaethical theory of how social spheres and their normative implications form.

Because the ultimate goal is the engineering of information systems, I am leaning towards mathematical modeling methods that trade well between social scientific inquiry and technical design. Mechanism design, in particular, is a powerful framework from mathematical economics that looks at how different kinds of structures change the outcomes for actors participating in “games” that involve strategic action and information flow. While mathematical economic modeling has been heavily critiqued over the years, for example on the basis that people do not act with the unbounded rationality such models can imply, these models can be a valuable first step in a technical context, especially as they establish the limits of a system’s manipulability by non-human actors such as AI. This latter standard makes this sort of model more relevant than it has ever been.

This is my roundabout way of beginning to investigate the fascinating field of privacy economics. I am a new entrant. So I found what looks like one of the earliest highly cited articles on the subject written by the prolific and venerable Richard Posner, “The Economics of Privacy”, from 1981.

Richard Posner, from Wikipedia

Wikipedia reminds me that Posner is politically conservative, though apparently he has recently changed his mind about gay marriage (which he now supports) and, since the 2008 financial crisis, about the laissez-faire rational choice economic model that underlies his legal theory. As I have mainly learned about privacy scholarship from more left-wing sources, it was interesting to read an article that comes from a different perspective.

Posner’s opening position is that the most economically interesting aspect of privacy is the concealment of personal information, and that this is interesting mainly because privacy is bad for market efficiency. He raises examples of employers and employees searching for each other and potential spouses searching for each other. In these cases, “efficient sorting” is facilitated by perfect information on all sides. Privacy is foremost a way of hiding disqualifying information–such as criminal records–from potential business associates and spouses, leading to a market inefficiency. I do not know why Posner does not cite Akerlof (1970) on the “market for ‘lemons'” in this article, but it seems to me that this is the economic theory most reflective of this economic argument. The essential question raised by this line of argument is whether there’s any compelling reason why the market for employees should be any different from the market for used cars.

Posner raises and dismisses each objection he can find. One objection is that employers might heavily weight factors they should not, such as mental illness, gender, or homosexuality. He claims that there’s evidence to show that people are generally rational about these things and that there’s no reason to think the market can’t make these decisions efficiently despite fear of bias. I assume this point has been hotly contested from the left since the article was written.

Posner then looks at the objection that privacy provides a kind of social insurance to those with “adverse personal characteristics” who would otherwise not be hired. He doesn’t like this argument because he sees it as allocating the costs of that person’s adverse qualities to a small group that has to work with that person, rather than spreading the cost very widely across society.

Whatever one thinks about whose interests Posner seems to side with and why, it is refreshing to read an article that at the very least establishes the trade-offs around privacy somewhat clearly. Yes, discrimination of many kinds is economically inefficient. We can expect the best performing companies to have progressive hiring policies because that would allow them to find the best talent. That’s especially true if there are large social biases otherwise unfairly skewing hiring.

On the other hand, the whole idea of “efficient sorting” assumes a policy-making interest that I’m pretty sure logically cannot serve the interests of everyone so sorted. It implies a somewhat brutally Darwinist stratification of personnel. It’s quite possible that this is not healthy for an economy in the long term. Then again, in this article Posner seems open to other redistributive measures that would compensate for opportunities lost due to the revelation of personal information.

There’s an empirical part of the paper in which Posner shows that the percentages of black and Hispanic residents in a state are significantly correlated with the existence of state-level privacy statutes relating to credit, arrest, and employment history. He tries to spin this as an explanation of privacy statutes as the result of strongly organized black and Hispanic political organizations successfully continuing to lobby in their interest on top of existing anti-discrimination laws. I would say that the article does not provide enough evidence to strongly support this causal theory. It would be a stronger argument if the regression had taken into account the racial differences in credit, arrest, and employment history state by state, rather than just assuming that this connection is so strong it supports this particular interpretation of the data. However, it is interesting that this variable was more strongly correlated with the existence of privacy statutes than several other variables of interest. It was probably my own ignorance that made me not consider how strongly privacy statutes are part of a social justice agenda, broadly speaking. Considering that disparities in credit, arrest, and employment history could well be the result of other unjust biases, privacy winds up mitigating the anti-signal that these injustices send in the employment market. In other words, it’s not hard to get from Posner’s arguments to a pro-privacy position based, of all things, on market efficiency.

It would be nice to model that more explicitly, if it hasn’t been done yet already.

Posner is quite bullish on privacy tort, thinking that it is generally not so offensive from an economic perspective largely because it’s about preventing misinformation.

Overall, the paper is a valuable starting point for further study in economics of privacy. Posner’s economic lens swiftly and clearly puts the trade-offs around privacy statutes in the light. It’s impressively lucid work that surely bears directly on arguments about privacy and information processing systems today.

References

Akerlof, G. A. (1970). The market for “lemons”: Quality uncertainty and the market mechanism. The Quarterly Journal of Economics, 84(3), 488-500.

Nissenbaum, H. (2004). Privacy as contextual integrity. Wash. L. Rev., 79, 119.

Nissenbaum, H. (2009). Privacy in context: Technology, policy, and the integrity of social life. Stanford University Press.

Posner, R. A. (1981). The economics of privacy. The American economic review, 71(2), 405-409. (jstor)


by Sebastian Benthall at August 23, 2017 07:41 PM

August 22, 2017

Ph.D. student

Ulanowicz on thermodynamics as phenomenology

I’ve finally worked my way back to Ulanowicz, whose work so intrigued me when I first encountered it over four years ago. Reading a few of his papers on theoretical ecology gave the impression that he is both a serious scientist and onto something profound. Now I’m reading Growth and Development: Ecosystems Phenomenology (1986), which looked to be the most straightforwardly mathematical introduction to ecosystem ascendancy, his theory of how ecosystems can grow and develop over time.

I am eager to get to the hard stuff, where he cashes out the theory in terms of matrix multiplication representing networks of energy flows. I see several parallels to my own work and I’m hoping there are hints in here about how I can best proceed.

But first I must note a few interesting ways in which Ulanowicz positions his argument.

One important one is that he uses the word “phenomenology” in the title and in the opening argument about the nature of thermodynamics. Thermodynamics, he argues, is unlike many other, more reductionist parts of physics because it draws general statistical laws about macroscopically observed systems that can be realized by many different configurations of microphenomena. This gives it a kind of empirical weakness compared to the lower-level laws; nevertheless, there is a compelling universality to its descriptive power that informs the application of so many other, more specialized sciences.

This resonates with many of the themes I’ve been exploring through my graduate study. Ulanowicz never cites Francisco Varela though the latter is almost a contemporary and similarly interested in combining principles of cybernetics with the life sciences (in Varela’s case, biology). Both Ulanowicz and Varela come to conclusions about the phenomenological nature of the life sciences which are unusual in the hard sciences.

Naturally, the case has been made that the social sciences are phenomenological as well, though generally these claims are made without a hope of making a phenomenological social science as empirically rigorous as ecology, let alone biology. Nevertheless Ulanowicz does hint, as does Varela, at the possibility of extending his models to social systems.

This is of course fascinating given the difficult problem of the “macro-micro link” (see Sawyer). Ecosystem size and the properties Ulanowicz derives about them are “emergent” properties of an ecosystem; his theory is I gather an attempt at a universal description of how these properties emerge.

Somehow, Ulanowicz manages to take on these problems without ever invoking the murky language of “complex adaptive systems”. This is, I suspect, a huge benefit to his work as he seems to write strictly as a scientist and does not mystify things by using undefined language of ‘complexity’.

It is a deeper technical dive than I’ve been used to for some time, but I’m very gratefully in a more technical academic milieu now than I’ve been in for several years. More soon.

References

Ulanowicz, R. E. (1986). Growth and development: A phenomenological perspective.


by Sebastian Benthall at August 22, 2017 04:29 PM

August 17, 2017

Center for Technology, Society & Policy

Bodily Integrity in the Age of Dislocated Human Eggs

by Allyn Benintendi, CTSP Fellow | Permalink

In late October of 2012, soon after the American Society for Reproductive Medicine (ASRM) lifted the experimental label from human egg freezing, the good news spread like wildfire (Frappier 2012). Egg freezing is a medical procedure that harvests and removes a female’s mature oocytes (eggs) from her body for rapid freezing and storage for later use. Even though the ASRM report deliberately warned against healthy women freezing their eggs for the sole purpose of delaying childbearing, some saw with egg freezing a world-changing opportunity. This opportunity rested in the idea that the institutional failures that females faced as both laborers and eventual mothers could be relieved by a medical procedure. Bloomberg Businessweek aptly identified the solution and the problem in a 2014 headline, “Freeze Your Eggs, Free Your Career.” For tech giants Facebook and Apple, egg freezing is now a part of professional benefits packages.

The purpose of this article is to explore the entangled, and valuable, arguments happening at the forefront of debates about the ethical merits of this procedure. This article explores the implications of meeting the institutional failures that women experience in labor and maternity, which largely fall in the categories of wages, policy, and distribution, with solutions in the medical, physical, and surgical realm. What is the impact of this procedure on the bodily integrity of the female patients implicated, and to what end will we see such physical distortion in the name of labor? Justice Cardozo famously promised the right to bodily integrity in the case of Schloendorff v. Society of New York Hospital (1914): “Every human being of adult years and sound mind has a right to determine what shall be done with his own body; and a surgeon who performs an operation without his patient’s consent, commits an assault, for which he is liable in damages.” The question here is by what social processes we will put the female body under the surgeon’s knife, remove her fertility and put it in the freezer, and have it be considered socially acceptable.

The notion of a disjointed body, or the removal of body pieces, parts, and bits for sale is disturbing. Medical anthropologists and ethicists have long been exploring what makes a body and why we make rules, regulations, and moral codes to protect its integrity. Most agree that in order to understand the fragmentation and commodification of the body, we must inevitably define the body and what it is (Sharp 2000). In order to understand what “the body” is, one must delve into the particularized context of a human being, and its world, to understand how boundaries of the body are invented, shifted, and made to matter.

As it emerged, the medical procedure for egg freezing was marketed to promote female liberation. Campaigns encouraging perfectly healthy women to undergo egg freezing happened almost immediately after Sheryl Sandberg famously published Lean In: Women, Work, and the Will to Lead (2013). The book’s release launched global campaigns that foregrounded conversations about gender and the workplace, critiqued bad policies for working mothers, and initiated public dialogue. Lean In faced critique, notably by Pulitzer Prize-winner Susan Faludi and celebrated author bell hooks. These writers argued that Sandberg’s message was corporate-driven, encouraged self-objectification at the mercy of capitalism, discouraged solidarity among women, and was ignorant of race and intersectionality. The medical technology for egg freezing could not have been timelier.

My earliest impressions of elective human egg freezing were of those that I had seen depicted in the media: “the great equalizer” (Time), “liberating…for professional women,” (The Guardian), and the key to “offering women a chance” (Vogue). The insistence on freedom that came with this new medical technology became a site of important dialogue by and among women. Revelations about women in the corporate world that surrounded the emergence of egg freezing made the procedure one that could be something of a solution: the deeply institutionalized, exclusionary consequences of capitalism could be fixed for an individual, on the individual level of her body. Would this individual-level solution remedy the problems being articulated by working women? Or would it fall prey to the same institutionalized systems that marginalize them? The answers invite nuance. One way to answer these questions is to use bioethical frameworks.

On the individual level lies the most obvious quandary within medical ethics debates: the question of ‘rights,’ and whether or not women should have the ‘right to’ this procedure. A female’s eggs are in her body, and are part of her body. All of us, to some extent, share a common sense of a right over our own bodies; our bodies and our body parts are not just resources to be allocated according to normal principles of public policy (Wilkinson 2011). The debates over ‘rights’ and ‘should’ offer us insight into the suffering and adversity that created the grounds on which such a gendered procedure could emerge, and be contested.

Other debates bring to light concerns for the safety and efficacy of the procedure, and the dismally poor record of doctors obtaining informed consent from their patients. Concerns for research ethics shed light on disparities between the demographic of research subjects, and the demographic being marketed the procedure. The subjects that established influential data to prove that egg freezing could be successful were “young donors, (whose eggs were) frozen for a relatively short time and used for IVF cycles in patients younger than thirty five years of age” (Linkeviciute 2015). The large majority of women seeking egg freezing procedures are in their late thirties (Nekkebroeck et al. 2010). In fact, the National Summary Report in 2014 by the Society for Assisted Reproductive Technology did not even report the cumulative outcomes per intended egg retrieval for women younger than 35 (SART 2014).

Additionally, studies show that egg freezing is least likely to work for women in the late-thirties age bracket (Pelin Cil et al. 2013). Women who do choose to freeze their eggs in their late thirties for later reimplantation of a fertilized embryo are made vulnerable to a whole host of possible risks associated with advanced maternal age, including gestational diabetes, preeclampsia (Von Wolff et al. 2015), placentation defects (Jackson et al. 2015), etc. This may suggest that the physical distortion of the body is not just in the removal of eggs but in the way the body responds to the procedure, adjusts to its side effects, and how it may heal or suffer.

Most offensively, a study published in 2014 found that out of 147 clinics that offer elective egg freezing in the United States of America, only 7 present all of the relevant information to their prospective clients, and 119 clinics fail to provide the necessary and sufficient information for informed consent to be present (Avraham et al. 2014). Informed consent is at the heart of modern medical ethics (Wilkinson 2011). So long as the patient of this procedure does not have an informed consent transaction, we cannot do justice to understanding the complexity of the choices she makes with respect to her fertility. This may require a trip back to the drawing board with respect to the information we think should cultivate this consent process, especially for a procedure at the frontiers of reproductive, elective, and preventative medicine.

Testimonies, narratives, and even quantitative data on the suffering of infertility reveal it to be comparable to the suffering of other serious medical conditions, including cancer (Domar et al. 1993), principally in the anxiety and depression that accompany it. Infertility is frequently compared to cancer, which is ironic because the biomedical research originally developing elective human egg freezing was on behalf of youth cancer patients, who were facing infertility due to cancer treatment. Cancer, in this case, is caught in the “trappings of metaphor,” revealing infertility, as the subject of this metaphor, as a condition which is “ill-omened, abominable, (and) repugnant to the senses” (Sontag 1978).

Infertility is widely understood to invoke great pain for those who suffer its reality. Egg freezing therefore purports to heal this ailment of infertility that occurs when women cannot have children due to aging. Campaigns for egg freezing argue that women miss a crucial window in their fertility because they are too busy working or pursuing higher education. Emerging research is revealing that despite campaigns marketing egg freezing as a way to help free time for women to pursue their education and career, women actually seeking the procedure are doing so because they do not yet have a partner (Kyweluk 2017).

These campaigns argue, perhaps rightfully so, that the “biological clock” makes women a poor fit for employment institutions that rely on a system of “working your way up.” Women are perhaps a poor fit because these employment institutions were not designed around their inclusion. Removing her eggs allows a woman to work and learn free of the pressure to have children within a strict window of time. This procedure claims to protect, and even enhance, a woman’s fertility with the decision to remove her eggs. Therefore, as a means of circumventing the pain of infertility while empowering women, egg freezing is complexly framed in a way that makes the surgical techniques, the physical distortion and modification of the body, and reimplantation, a way to keep the pregnant female body intact. As this procedure promises to maintain the integrity of the pregnant female body, we can see it holding together the very social fabric that finds itself threatened by the career woman, and by the complex circumstances she encounters.

How can we measure the implications of a medical procedure that fragments the body both literally, by surgically dislocating human eggs, and metaphorically, in marketing, imaging, and popular media? Most practically, what are the implications of delineating a boundary between a female person and her reproductive, material body parts, and making this boundary the site of object-formation and commodification? Reproductive technologies, like egg freezing, have the potential to change all that we know to have biological truth, certainty, and reality (Franklin 1997). We must strive toward new modes of analysis to meet our changing world. We must focus on how new medical technologies test our moral sensibilities with attention to the social contexts that give rise to their emergence and application. Bioethical analyses of elective human egg freezing must consider political theory, economics, and more to unveil the ties that this medical technology has to object-formation and identity. Contemporary bioethics and biomedical ethics “speak a language of individual rights and a freedom for individual patients to make choices regarding their treatments in the absence of undue external pressures” (Moazam 2006). Modes of bioethical inquiry that do not attempt to break down or work through the Anglo-American paradigm that underlies what questions we ask will never be enough. What is an individual’s right to determination, as a matter of what is ethical, if we have not thought through how her body became a commodity in the first place?

Framing the questions we ask with respect to controversial medical procedures, within the language, processes, and frameworks that give rise to their emergence, is not enough. If we don’t push these boundaries, we allow marginalized bodies to be caught in impossible debates about whether or not they “should have the right” to certain medical interventions. We become distracted by this question without realizing the subjectivity of medicine itself, as a moral endeavor deeply tied to our moral sensibilities, and to our taken-for-granted processes which place value on lives themselves. Policy should not intervene until social science research meets these demands. Before we decide what is or is not medically sound, we must demand answers to the most obvious of questions: Why this, why now?

by Rohit Raghavan at August 17, 2017 06:55 PM

August 09, 2017

Ph.D. student

Differing ethnographic accounts of the effectiveness of technology

I’m curious as I compare two recent papers, one by Christin [2017] and one by Levy [2015], both about the role of technology in society and backed by ethnographic data.

What interests me is that the two papers both examine the use of algorithms in practice, but they differ in their account of the effectiveness of the algorithms used. Christin emphasizes the way web journalists and legal professionals deliberately undermine the impact of algorithms. Levy discusses how electronic monitoring achieves central organizational control over truckers.

I’m interested in the different framings because, as Christin points out, a central point of contention in the critical scholarship around data and algorithms is the effectiveness of the technology, especially “in practice”. Implicitly if not explicitly, if the technology is not as effective as its advocates say it is, then it is overhyped and this debunking is an accomplishment of the critical and often ethnographic field.

On the other hand, if the technology is effective at control, as Levy’s article argues that it is, then it poses a much more real managerialist threat to worker’s autonomy. Identifying that this is occurring is also a serious accomplishment of the ethnographic field.

What must be recognized, however, is that these two positions contradict each other, at least as general perspectives on data-collection and algorithmic decision-making. The use of a particular technology in a particular place cannot be both so ineffective as to be overhyped and so effective as to constitute a managerialist threat. The substance of the two critiques is at odds with each other, and they call for different pragmatic responses. The former suggests a rhetorical strategy of further debunking, the latter demands a material strategy of changing working conditions.

I have seen both strategies used in critical scholarship, sometimes even in the same article, chapter, or book. I have never seen critical scholars attempt to resolve this difference between themselves using their shared assumptions and methods. I’d like to see more resolution in the ethnographic field on this point.

Correction, 8/10/17:

The apparent tension is resolved on a closer reading of Christin (2017). The argument there is that technology (in the managerialist use common to both papers) is ineffective when its intended use is resisted by those being managed by it.

That shifts the ethnographic challenge to technology away from an attack on the technical quality of the work, that is, on its ability to accomplish what it is designed to do (which is a non-starter), and toward the uncontroversial proposition that the effectiveness of a technology depends in part on assumptions about how it will be used, and that these assumptions can be violated.

The political question of to what extent these new technologies should be adopted can then be addressed straightforwardly in terms of whether they are fully and properly adopted, or only partially and improperly adopted. Using language like this would be helpful in bridging technical and ethnographic fields.

References

Christin, 2017. “Algorithms in practice: Comparing journalism and criminal justice.” (link)

Levy, 2015. “The Contexts of Control: Information, Power, and Truck-Driving Work.” (link)


by Sebastian Benthall at August 09, 2017 06:25 PM

August 06, 2017

MIMS 2018

Don’t Let Vegetarian Guilt Get You Down

Refraining from eating meat is just a means to an end.

Vegetarianism isn’t just about rabbit food either, but these colors sure do pop. Source: Flickr.

Every time I go to a barbecue, I find myself talking smack about vegetarians. It’s not just because as a vegetarian myself, I’m grumpy to be eating a dry, mulch-colored hockey puck while everyone else eats succulent, reddish-colored hockey pucks. It’s also a way to separate myself in the eyes of others from those who see vegetarianism as an ideology.

Vegetarianism is not really an ism. An ism is a belief in something. Vegetarianism is not the belief that it is inherently good to refrain from eating animals. Rather, it’s a means to realize other isms.

Some vegetarians believe in a moral imperative against killing animals. Others, like myself, believe in environmentalism. Still others believe in religions like Hinduism and Rastafarianism. Even people who do it for health reasons believe that personal sacrifice now is worth future health outcomes.

Unfortunately, many non-meat eaters treat vegetarianism or veganism as the end rather than the means. They stick so ardently to their diet’s mores that they burn out, or worse, become insufferable. The latter, though it seems noble, harms any ism that gains from wider adoption. A vegetarian whose easygoing attitude convinces someone else to only eat meat at dinner has done more for environmentalism than the vegan who doesn’t sit on leather seats.

Burning out comes from the same fallacy. When a newly minted vegetarian succumbs to the smell of bacon (and who hasn’t?) they often think that they have strayed from their belief. The seal has been broken, so they may as well go back to eating meat. But eating a strip of bacon does not negate the belief that killing animals is wrong! No one argues that committing adultery negates belief in the Judeo-Christian God.

What I’m really saying is, have some chill, folks. Militant vegetarians and vegans: remember your personal ism and don’t hurt others’ isms by making us all seem like assholes. Hesitant meat eaters: it doesn’t take a blood oath to eat less meat. Yes, rules make it easier, but it’s more important that the rules are sustainable. Sometimes, that means they need to be flexible. Fervent anti-vegetarians: 2006 called, it wants its shitty attitude back.

by Gabe Nicholas at August 06, 2017 06:07 PM

Ph.D. student

legitimacy in peace; legitimacy in war

I recently wrote a reflection on the reception of Habermas in the United States and argued that the lack of intellectual uptake of his later work has been a problem with politics here. Here’s what I wrote, admittedly venting a bit:

In my experience, it is very difficult to find support in academia for the view that rational consensus around democratic institutions is a worthwhile thing to study or advocate for. Identity politics and the endless contest of perspectives is much more popular among students and scholars coming out of places like UC Berkeley. In my own department, students were encouraged to read Habermas’s early work in the context of the identity politics critique, but never exposed to the later work that reacted to these critiques constructively to build a theory that was specifically about pluralism, which is what identity politics need in order to unify as a legitimate state. There’s a sense in which the whole idea that one should continue a philosophical argument to the point of constructive agreement, despite the hard work and discipline that this demands, was abandoned in favor of an ideology of intellectual diversity that discouraged scrutiny and rigor across boundaries of identity, even in the narrow sense of professional or disciplinary identity.

Tapan Parikh succinctly made the point that Habermas’s philosophy may be too idealistic to ever work out:

“I still don’t buy it without taking history, race, class and gender into account. The ledger doesn’t start at zero I’m afraid, and some interests are fundamentally antagonistic.”

This objection really is the crux of it all, isn’t it? There is a contradiction between agreement, necessary for a legitimate pluralistic state, and antagonistic interests of different social identities, especially as they are historically and presently unequal. Can there ever be a satisfactory resolution? I don’t know. Perhaps the dialectical method will get us somewhere. (This is a blog after all; we can experiment here).

But first, a note on intellectual history, as part of the fantasy of this argument is that intellectual history matters for actual political outcomes. When discussing the origins of contemporary German political theory, we should acknowledge that post-War Germany has been profoundly interested in peace because it has experienced the worst of war. The roots of German theories of peace are in Immanuel Kant’s work on “perpetual peace”, the hypothetical situation in which states are no longer at war. He wrote an essay about it in 1795, which by the way begins with this wonderful preface:

PERPETUAL PEACE

Whether this satirical inscription on a Dutch innkeeper’s sign upon which a burial ground was painted had for its object mankind in general, or the rulers of states in particular, who are insatiable of war, or merely the philosophers who dream this sweet dream, it is not for us to decide. But one condition the author of this essay wishes to lay down. The practical politician assumes the attitude of looking down with great self-satisfaction on the political theorist as a pedant whose empty ideas in no way threaten the security of the state, inasmuch as the state must proceed on empirical principles; so the theorist is allowed to play his game without interference from the worldly-wise statesman. Such being his attitude, the practical politician–and this is the condition I make–should at least act consistently in the case of a conflict and not suspect some danger to the state in the political theorist’s opinions which are ventured and publicly expressed without any ulterior purpose. By this clausula salvatoria the author desires formally and emphatically to deprecate herewith any malevolent interpretation which might be placed on his words.

When the old masters are dismissed as being irrelevant or dense, it denies them the credit for being very clever.

That said, I haven’t read this essay yet! But I have a somewhat informed hunch that more contemporary work that deals directly with the problems it raises makes good headway on the problem of political unity. For example, this article by Bennington (2012), “Kant’s Open Secret”, is good and relevant to discussions of technical design and algorithmic governance. Cederman, who has been discussed here before, builds a computational simulation of peace inspired by Kant.

Here’s what I can sketch out, perhaps ignorantly. What’s at stake is whether antagonistic actors can resolve their differences and maintain peace. The proposed mechanism for this peace is some form of federated democracy. So to paint a picture: what I think Habermas is after is a theory of how governments can be legitimate in peace. What that requires, in his view, is some form of collective deliberation where actors put aside their differences and agree on some rules: the law.

What about when race and class interests are, as Parikh suggests, “fundamentally antagonistic”, and the unequal ledger of history gives cause for grievances?

Well, all too often, these are the conditions for war.

In the context of this discussion, which started with a concern about the legitimacy of states and especially the United States, it struck me that there’s quite a difference between how states legitimize themselves at peace versus how they legitimize themselves while at war.

War, in essence, allows some actors in the state to ignore the interests of other actors. There’s no need for discursive, democratic, cosmopolitan balancing of interests. What’s required is that an alliance of interests maintain the necessary power over rivals to win the war. War legitimizes autocracy and deals with dissent by getting rid of it rather than absorbing and internalizing it. Almost by definition, wars challenge the boundaries of states and the way underlying populations legitimize them.

So to answer Parikh, the alternative to peaceful rule of law is war. And there certainly have been serious race wars and class wars. As an example, last night I went to an art exhibit at the Brooklyn Museum entitled “The Legacy of Lynching: Confronting Racial Terror in America”. The phrase “racial terror” is notable because of how it positions racist lynching as a form of terrorism, which we have been taught to treat as the activity of rogue, non-state actors threatening national security. This is deliberate, as it frames black citizens as in need of national protection from white terrorists who are in a sense at war with them. Compare and contrast this with right-wing calls for “securing our borders” from allegedly dangerous immigrants, and you can see how both “left” and “right” wing political organizations in the United States today are legitimized in part by the rhetoric of war, as opposed to the rhetoric of peace.

To take a cynical view of the current political situation in the United States, which may be the most realistic view, the problem appears to be that we have a two party system in which the two parties are essentially at war, whether rhetorically or in terms of their actions in Congress. The rhetoric of the current president has made this uncomfortable reality explicit, but it is not a new state of affairs. Rather, one of the main talking points in the previous administration and the last election was the insistence by the Democratic leadership that the United States is a democracy that is at peace with itself, and so cooperation across party lines was a sensible position to take. The efforts by the present administration and Republican leadership to dismantle anything of the prior administration’s legacy make the state of war all too apparent.

I don’t mean “war” in the sense of open violence, of course. I mean it in the sense of defection and disregard for the interests of those outside of one’s political alliance. The whole question of whether and how foreign influence in the election should be considered is dependent in part on whether one sees the contest between political parties in the United States as warfare or not. It is natural for different sides in a war to seek foreign allies, even and almost especially if they are engaged in civil war or regime change. The American Revolution was backed by the French. The Bolshevik Revolution in Russia was backed by Germany. That’s just how these things go.

As I write this, I become convinced that this is really what it comes down to in the United States today. There are “two Americas”. To the extent that there is stability, it’s not a state of peace, it’s a state of equilibrium or gridlock.


by Sebastian Benthall at August 06, 2017 05:40 PM

August 04, 2017

Ph.D. student

The meaning of gridlock in governance

I’ve been so intrigued by this article, “Dems Can Abandon the Center — Because the Center Doesn’t Exist”, by Eric Levitz in NY Mag. The gist of the article is that most policies that we think of as “centrist” are actually very unrepresentative of the U.S. population’s median attitude on any particular subject, and are held only by a small minority that Levitz associates with former Mayor Bloomberg of New York City. It’s a great read and cites much more significant research on the subject.

One cool thing the article provides is this nice graphic showing the current political spectrum in the U.S.:

The U.S. political spectrum, from Levitz, 2017.

In comparison to that, this blog post is your usual ramble of no consequence.

Suppose there’s an organization whose governing body doesn’t accomplish anything, despite being controversial and well-publicized, and despite apparently not performing satisfactorily. What does that mean?

From an outside position (somebody being governed by such a body), what it means is sustained dissatisfaction and the perception that the governing body is dys- or non-functional. This spurs the dissatisfied party to invest resources or take action to change the situation.

However, if the governing body is responsive to the many and conflicting interests of the governed, the stasis of the government could mean one of at least two things.

One thing it could mean is that the mechanism through which the government changes is broken.

Another thing it could mean is that the mechanism through which the government changes is working, and the state of governance reflects the equilibrium of the powers that contest for control of the government.

The latter view is not a politically exciting view and indeed it is politically self-defeating for whoever holds it. If we see government as something responding to the activity of many interests, mediating between them and somehow achieving their collective agenda, then the problem with seeing a government in gridlock as having achieved a “happy” equilibrium, or a “correct” view, is that it discourages partisan or interested engagement. If one side stops participating in the (expensive, exhausting) arm wrestle, then the other side gains ground.

On the other hand, the stasis should not in itself be considered cause for alarm, apart from the dissatisfaction resulting from one’s particular perspective on the total system.

Another angle on this is that from every point in the political spectrum, and especially those points at the extremes, the procedural mechanisms of government are going to look broken because they don’t result in satisfying outcomes. (Consider the last election, where both sides argued that the system was rigged when they thought they were losing or had lost.) But, of course, these mechanisms are always already part of the governance system itself and subject to being governed by it, so pragmatically one will approve of them just insofar as they give one’s own position influence over outcomes (here I’m assuming strict proceduralists are somewhere on the multidimensional political spectrum themselves and are motivated by, e.g., the appeal of stability or legitimacy in some sense).


by Sebastian Benthall at August 04, 2017 10:37 PM

Habermas seems quaint right now, but shouldn’t

By chance I was looking up Habermas’s later philosophical work today, like Between Facts and Norms (1992), which has been said to be the culmination of the project he began with The Structural Transformation of the Public Sphere in 1962. In it, he argues that the law is what gives pluralistic states their legitimacy, because the law enshrines the consent of the governed. Power cannot legitimize itself; democratic law is the foundation for the legitimate state.

Habermas’s later work is widely respected in the European Union, which by and large has functioning pluralistic democratic states. Habermas emerged from the Frankfurt School to become a theorist of modern liberalism and was good at it. While it is an empirical question how much education in political theory is tied to the legitimacy and stability of the state, anecdotally we can say that Habermas is a successful theorist and the German-led European Union is, presently, a successful government. For the purposes of this post, let’s assume that this is at least in part due to the fact that citizens are convinced, through the education system, of the legitimacy of their form of government.

In the United States, something different happened. Habermas’s earlier work (such as the The Structural Transformation of the Public Sphere) was introduced to United States intellectuals through a critical lens. Craig Calhoun, for example, argued in 1992 that the politics of identity was more relevant or significant than the politics of deliberation and democratic consensus.

That was over 25 years ago, and that moment was influential in the way political thought has unfolded in Europe and the United States. In my experience, it is very difficult to find support in academia for the view that rational consensus around democratic institutions is a worthwhile thing to study or advocate for. Identity politics and the endless contest of perspectives is much more popular among students and scholars coming out of places like UC Berkeley. In my own department, students were encouraged to read Habermas’s early work in the context of the identity politics critique, but never exposed to the later work that reacted to these critiques constructively to build a theory that was specifically about pluralism, which is what political identities need in order to unify as a legitimate state. There’s a sense in which the whole idea that one should continue a philosophical argument to the point of constructive agreement, despite the hard work and discipline that this demands, was abandoned in favor of an ideology of intellectual diversity that discouraged scrutiny and rigor across boundaries of identity, even in the narrow sense of professional or disciplinary identity.

The problem with this approach to intellectualism is that it is fractious and undermines itself. When these qualities are taken as intellectual virtues, it is no wonder that boorish overconfidence can take advantage of them in an open contest. And indeed the political class in the United States today has been undermined by its inability to justify its own power and institutions in anything but the fragmented arguments of identity politics.

It is a sad state of affairs. I can’t help but feel my generation is intellectually ill-equipped to respond to the very prominent challenges to the legitimacy of the state that are being leveled at it every day. Not to put too fine a point on it, I blame the intellectual laziness of American critical theory and its inability to absorb the insights of Habermas’s later theoretical work.

Addendum 8/7/17a:

It has come to my attention that this post is receiving a relatively large amount of traffic. This seems to happen when I hit a nerve, specifically when I recommend Habermas over identitarianism in the context of UC Berkeley. Go figure. I respectfully ask for comments from any readers. Some have already helped me further my thinking on this subject. Also, I am aware that a Wikipedia link is not the best way to spread understanding of Habermas’s later political theory. I can recommend this book review (Chriss, 1998) of Between Facts and Norms as well as the Habermas entry in the Stanford Encyclopedia of Philosophy which includes a section specifically on Habermasian cosmopolitanism, which seems relevant to the particular situation today.

Addendum 8/7/17b:

I may have guessed wrong. The recent traffic has come from Reddit. Welcome, Redditors!

 


by Sebastian Benthall at August 04, 2017 06:24 PM

August 02, 2017

Ph.D. alumna

How “Demo-or-Die” Helped My Career

I left the Media Lab 15 years ago this week. At the time, I never would’ve predicted that I learned one of the most useful skills in my career there: demo-or-die.

(Me debugging an exhibit in 2002)

The culture of “demo-or-die” has been heavily critiqued over the years. In doing so, most folks focus on the words themselves. Sure, the “or-die” piece is definitely an exaggeration, but the important message there is the notion of pressure. But that’s not what most people focus on. They focus on the notion of a “demo.”

To the best that anyone can recall, the term stems from the early days at the Media Lab, most likely because of Nicholas Negroponte’s dismissal of “publish-or-perish” in academia. So the idea was to focus not on writing words but on producing artifacts. In mocking what it was that the Media Lab produced, many critics focused on the way in which the Lab had a tendency to create vaporware, performed to visitors through the demo. In 1987, Stewart Brand called this “handwaving.” The historian Molly Steenson has a more nuanced view so I can’t wait to read her upcoming book. But the mockery of the notion of a demo hasn’t died. Given this, it’s not surprising that the current Director (Joi Ito) has pushed people to stop talking about demoing and start thinking about deploying. Hence, “deploy-or-die.”

I would argue that what makes “demo-or-die” so powerful has absolutely nothing to do with the production of a demo. It has to do with the act of doing a demo. And that distinction is important because that’s where the skill development that I relish lies.

When I was at the Lab, we regularly received an onslaught of visitors. I was a part of the “Sociable Media Group,” run by Judith Donath. From our first day in the group, we were trained to be able to tell the story of the Media Lab, the mission of our group, and the goal of everyone’s research projects. Furthermore, we had to actually demo their quasi functioning code and pray that it wouldn’t fall apart in front of an important visitor. We were each assigned a day where we were “on call” to do demos to any surprise visitor. You could expect to have at least one visitor every day, not to mention hundreds of visitors on days that were officially sanctioned as “Sponsor Days.”

The motivations and interests of visitors ranged wildly. You’d have tour groups of VIP prospective students, dignitaries from foreign governments, Hollywood types, school teachers, engineers, and a whole host of different corporate actors. If you were lucky, you knew who was visiting ahead of time. But that was rare. Often, someone would walk in the door with someone else from the Lab and introduce you to someone for whom you’d have to drum up a demo in very short order with limited information. You’d have to quickly discern what this visitor was interested in, figure out which of the team’s research projects would be most likely to appeal, determine how to tell the story of that research in a way that connected to the visitor, and be prepared to field any questions that might emerge. And oy vay could the questions run the gamut.

I *hated* the culture of demo-or-die. I felt like a zoo animal on display for others’ benefit. I hated the emotional work that was needed to manage stupid questions, not to mention the requirement to smile and play nice even when being treated like shit by a visitor. I hated the disruptions and the stressful feeling when a demo collapsed. Drawing on my experience working in fast food, I developed a set of tricks for staying calm. Count how many times a visitor said a certain word. Nod politely while thinking about unicorns. Experiment with the wording of a particular demo to see if I could provoke a reaction. Etc.

When I left the Media Lab, I was ecstatic to never have to do another demo in my life. Except, that’s the funny thing about learning something important… you realize that you are forever changed by the experience.

I no longer produce demos, but as I developed in my career, I realized that “demo-or-die” wasn’t really about the demo itself. At the end of the day, the goal wasn’t to pitch the demo — it was to help the visitor change their perspective of the world through the lens of the demo. In trying to shift their thinking, we had to invite them to see the world differently. The demo was a prop. Everything about what I do as a researcher is rooted in the goal of using empirical work to help challenge people’s assumptions and generate new frames that people can work with. I have to understand where they’re coming from, appreciate their perspective, and then strategically engage them to shift their point of view. Like my days at the Media Lab, I don’t always succeed and it is indeed frustrating, especially because I don’t have a prop that I can rely on when everything goes wrong. But spending two years developing that muscle has been so essential for my work as an ethnographer, researcher, and public speaker.

I get why Joi reframed it as “deploy-or-die.” When it comes to actually building systems, impact is everything. But I really hope that the fundamental practice of “demo-or-die” isn’t gone. Those of us who build systems or generate knowledge day in and day out often have too little experience explaining ourselves to the wide array of folks who showed up to visit the Media Lab. It’s easy to explain what you do to people who share your ideas, values, and goals. It’s a lot harder to explain your contributions to those who live in other worlds. Impact isn’t just about deploying a system; it’s about understanding how that system or idea will be used. And that requires being able to explain your thinking to anyone at any moment. And that’s the skill that I learned from the “demo-or-die” culture.

by zephoria at August 02, 2017 01:50 AM

July 26, 2017

MIMS 2010

Writing pull requests your coworkers might enjoy reading

Programmers like writing code but few love reviewing it. Although code review is mandatory at many companies, enjoying it is not. Here are some tips I’ve accumulated for getting people to review your code. The underlying idea behind these suggestions is that the person asking for review should spend extra time and effort making the pull request easy to review. In general, you can do this by discussing code changes beforehand, making them small, and describing them clearly.

At this point you may be wondering who died and made me king of code review (spoiler: nobody). This advice is based on my experience doing code review for other engineers at Twitter. I’ve reviewed thousands of pull requests, posted hundreds of my own, and observed what works and doesn’t across several teams. Some of the tips may apply to pull requests to open-source projects, but I don’t have much experience there so no guarantees.

I primarily use Phabricator and ReviewBoard, but I use the term “pull request” because I think that’s a well understood term for code proposed for review.

Plan the change before you make a pull request

If you talk to the people who own code before you make a change, they’ll be more likely to review it. This makes sense purely from a social perspective: they become invested in your change and doing a code review is just the final step in the process. You’ll save time in review because these people will already have some context on what you’re trying to do. You may even save time before review because you can consider different designs before you implement one.

The problem with skipping this step is that you lose the chance to separate the design of the change from its implementation. Once you post code for review you generally have a strong bias towards the design that you just implemented. It’s hard to hear “start over” and it’s hard for reviewers to say it as well.

Pick reviewers who are relevant to the change

Figure out why you are asking people to review this code.

  • Is it something they worked on?
  • Is it related to something they are working on?
  • Do you think they understand the thing you’re changing?

If the answer to these questions is no, find better people to review your change.

Tell reviewers what is going on

Write a good summary and description of the change. Long is not the same as good; absent is usually not good. Reviewers need to understand the context of the pull request. Explain why you are making this change. Reading through the commits associated with the request usually doesn’t say enough. If there is a bug, issue, or ticket that provides context for the change, link to it.

Ideally you have written clear, readable code with adequate documentation, but that doesn’t necessarily get you off the hook here. How your change does what it says it does may still not be obvious. Give your readers a guide. What parts of the change should they look at first? What part is the most important? For example, “The main change is adding a UTF-8 reader to class XYZ. Everything else is updating callers to use the new method.” This focuses readers’ attention on the meat of the change immediately.

You may find it helpful to write the description of your pull request while tests are running, or code is compiling, or another time when you would otherwise check email. I often keep a running description of the change open while I am writing the code. If I make a decision that I think will strike reviewers as unusual, I add a brief explanation to that doc and use it to write the pull request.

Finagle uses a Problem/Solution format for pull requests that I find pleasant. It can also be fun to misuse on occasion. I don’t recommend that, but I do plenty of things I don’t recommend.
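To make the format concrete, here is a hypothetical description in that style, reusing the UTF-8 reader example from above; the class name and details are invented for illustration, not taken from any real project:

    Problem
    Callers of class XYZ assume ASCII input, so data containing
    multi-byte characters is silently corrupted when it is read.

    Solution
    Add a UTF-8 reader to XYZ and update all callers to use it.
    Behavior for ASCII-only input is unchanged.

A description like this answers “why” before “what”, which is most of what a reviewer needs to get oriented.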

Make the change as small as possible while still being understandable

Sometimes fixing a bug or creating a new feature requires changes to a dozen-odd files. This alone can be tricky to follow before you mix in other refactorings, clean-ups, and changes. Fixing unrelated things makes it harder to understand the pull request as a whole. Correcting a typo here or there is fine; fixing a different bug, or a heavy refactoring is not. (Teams will, of course, have different tolerances for this, but inasmuch as possible it’s nice to separate review of these parts.)

Even if you have a branch where you change a bunch of related things, you may want to extract isolated parts that can be reviewed and merged independently. Aim for a change that has a single, well-scoped answer to the question “What does this change do?”. Note that this is more about the change being conceptually small rather than small in the actual number of files modified. If you change a class and have to update usages in 50 other files, that might still count as small.
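As a rough sketch of one way to do that extraction with git (the branch names and commit hash are placeholders, and your team’s workflow may differ):

    # start a fresh branch from the mainline for just the isolated piece
    git checkout -b extract-reader-refactor master
    # pull over only the commit(s) that contain the refactoring
    git cherry-pick <sha-of-refactoring-commit>
    # push it and open a separate, smaller pull request
    git push origin extract-reader-refactor

The larger feature branch can then be rebased on top of the extracted change once it merges, so nothing is reviewed twice.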

Of course there are caveats: having 20 small pull requests, each building on the previous, isn’t ideal either so you have to strike some balance between size and frequency. Sometimes splitting things up makes it harder to understand. Rely on your reviewers for feedback about how they prefer changes.

Send your pull request when it’s ready to review

Is your change actually ready to merge when reviewers OK it? Have you verified that the feature you have added works, or that the bug you fixed is actually fixed? Does the code compile? Do tests and linters pass? If not, you are going to waste reviewers’ time when you have to change things and ask for another review. Some of these checks can be automated—maybe tests are run against your branch; use a checklist for ones that can’t, as in the sketch below. (One obvious exception to this is an RFC-style pull request where you are seeking input before you implement everything—one way to “Plan the change”).
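One way to keep yourself honest is a small pre-review checklist run from the command line; the build, test, and lint commands here are placeholders for whatever your project actually uses:

    git diff master --stat   # sanity-check which files the pull request touches
    make build               # substitute your project's compile step
    make test                # run the test suite reviewers will expect to pass
    make lint                # run any linters or formatters your team requires

If any of these fail, fix them before asking anyone to spend time on the review.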

Once you have enough feedback from reviewers and have addressed the relevant issues, don’t keep updating the request with new changes. Merge it! It’s time for a new branch.

Closing thoughts

Not all changes need to follow these tips. You probably don’t need peer buy-in before you update some documentation, you may not have time to provide a review guide for an emergency fix, and sometimes it’s just really convenient to lump a few changes together. In general, though, I find that discussing changes ahead of time, keeping them small, and connecting the dots for your readers is worthwhile. Going the extra mile to help people reviewing your pull requests will result in faster turnaround, more focused feedback, and happier teammates. No guarantees, but it’s possible they’ll even enjoy it.

Thanks to Goran Peretin and Sarah Brown for reviewing this post and their helpful suggestions. Cross-posted at Medium.

by Ryan at July 26, 2017 03:01 PM

July 23, 2017

adjunct professor

Unmasking Slurs

I'm sympathetic to many of the arguments offered in a guest post by Robert Henderson, Peter Klecha, and Eric McCready (HK&M) in response to Geoff Pullum's post on "nigger in the woodpile," no doubt because they are sympathetic to some of the things I said in my reply to Geoff. But I have to object when they scold me for spelling out the word nigger rather than rendering it as n****r. It seems to me that "masking" the letters of slurs with devices such as this is an unwise practice—it reflects a misunderstanding of the taboos surrounding these words, it impedes serious discussion of their features, and most important, it inadvertently creates an impression that works to the advantage of certain racist ideologies. I have to add that it strikes me that HK&M's arguments, like a good part of the linguistic and philosophical literature on slurs, suffer from a certain narrowness of focus, a neglect both of the facts of actual usage of these words and the complicated discourses that they evoke. So, are you sitting comfortably?

HK&M say of nigger (or as they style it, n****r):

The word literally has as part of its semantic content an expression of racial hate, and its history has made that content unavoidably salient. It is that content, and that history, that gives this word (and other slurs) its power over and above other taboo expressions. It is for this reason that the word is literally unutterable for many people, and why we (who are white, not a part of the group that is victimized by the word in question) avoid it here.

Yes, even here on Language Log. There seems to be an unfortunate attitude — even among those whose views on slurs are otherwise similar to our own — that we as linguists are somehow exceptions to the facts surrounding slurs discussed in this post. In Geoffrey Nunberg’s otherwise commendable post on July 13, for example, he continues to mention the slur (quite abundantly), despite acknowledging the hurt it can cause. We think this is a mistake. We are not special; our community includes members of oppressed groups (though not nearly enough of them), and the rest of us ought to respect and show courtesy to them.

This position is a version of the doctrine that Luvell Anderson and Ernie Lepore call "silentism" (see also here). It accords with the widespread view that the word nigger is phonetically toxic: simply to pronounce it is to activate it, and it isn’t detoxified by placing it in quotation marks or other devices that indicate that the word is being mentioned rather than used, even in written news reports or scholarly discussions. In that way, nigger and words like it seem to resemble strong vulgarities. Toxicity, that is, is a property that’s attached to the act of pronouncing a certain phonetic shape, rather than to an act of assertion, which is why some people are disconcerted when all or part of the word appears as a segment of other words, as in niggardly or even denigrate.

Are Slurs Nondisplaceable?

This is, as I say, a widespread view, and HK&M apparently hold that that is reason enough to avoid the unmasked utterance of the word (written or spoken), simply out of courtesy. It doesn't matter whether the insistence on categorical avoidance reflects only the fact that “People have had a hard time wrapping their heads around the fact that referring to the word is not the same as using it,” as John McWhorter puts it—people simply don't like to hear it spoken or see it written, so just don't.

But HK&M also suggest that the taboo on mentioning slurs has a linguistic basis:

There is a consensus in the semantic/pragmatic and philosophical literature on the topic that slurs aggressively attach to the speaker, committing them to a racist attitude even in embedded contexts. Consider embedded slurs; imagine Ron Weasley says “Draco thought that Harry was a mudblood”, where attributing the thought to Draco isn’t enough to absolve Ron of expressing the attitudes associated with the slur. Indeed, even mentioning slurs is fraught territory, which is why the authors of most papers on these issues are careful to distance themselves from the content expressed.

The idea here is that slurs, like other expressives, are always speaker-oriented. A number of semanticists have made this claim, but always on the basis of intuitions about spare constructed examples—in the present case, one involving an imaginary slur: “imagine Ron Weasley says “Draco thought that Harry was a mudblood.” This is always a risky method in getting at the features of socially charged words, and particularly with these, since most of the people who write about slurs are not native speakers of them, and their intuitions are apt to be shaped by their preconceptions. The fact is that people routinely produce sentences in which the attitudes implicit in a slur are attributed to someone other than the speaker. The playwright Harvey Fierstein produced a crisp example on MSNBC, “Everybody loves to hate a homo.” Here are some others:

In fact We lived, in that time, in a world of enemies, of course… but beyond enemies there were the Micks, and the spics, and the wops, and the fuzzy-wuzzies. A whole world of people not us… (edwardsfrostings.com)

So white people were given their own bathrooms, their own water fountains. You didn’t have to ride on public conveyances with niggers anymore. These uncivilized jungle bunnies, darkies.…You had your own cemetery. The niggers will have theirs over there, and everything will be just fine. (Ron Daniels in Race and Resistance: African Americans in the 21st Century)

All Alabama governors do enjoy to troll fags and lesbians as both white and black Alabamians agree that homos piss off the almighty God. (Encyclopedia Dramatica)

[Marcus Bachmann] also called for more funding of cancer and Alzheimer’s research, probably cuz all those homos get all the money now for all that AIDS research. (Maxdad.com)

And needless to say, slurs are not speaker-oriented when they're quoted. When the New York Times reports that “Kaepernick was called a nigger on social media,” no one would assume that the Times endorses the attitudes that the word conveys.

I make this point not so much because it's important here, but because it demonstrates the perils of analyzing slurs without actually looking at how people use them or regard them—a point I'll come back to in a moment.

Toxicity in Speech and Writing

The assimilation of slurs to vulgarities obscures several important differences between the two. For one thing, mentioning slurs is less offensive in writing than in speech. That makes slurs different from vulgarisms like fucking. The New York Times has printed the latter word only twice, most recently in its page one report of Trump’s Access Hollywood tapes. But it has printed nigger any number of times, presumably with the approval of its African American executive editor Dean Baquet (though in recent years it tends to avoid the word in headlines):

The rhymes include the one beginning, “Eeny, meeny, miney mo, catch a nigger by the toe,” and another one that begins, “Ten little niggers …” May 8, 2014

The Word 'Nigger' Is Part of Our Lexicon Jan. 8, 2011

I live in a city where I probably hear the word “nigger” 50 times a day from people of all colors and ages… Jan 6, 2011

In fan enclaves across the web, a subset of Fifth Harmony followers called Ms. Kordei “Normonkey,” “coon,” and “nigger” Aug 12, 2016

Gwen [Ifill] came to work one day to find a note in her work space that read “Nigger, go home.” Nov. 11, 2016

… on the evening of July 7, 2007, Epstein "bumped into a black woman" on the street in the Georgetown section of Washington … He "called her a 'nigger,' and struck her in the head with an open hand." Charles M. Blow, June 6, 2009.

By contrast, the word is almost never heard in broadcast or free cable (when it does occur, e.g., in a recording, it is invariably bleeped). When I did a Nexis search several years ago on broadcast and cable news transcripts for the year 2012, I found it had been spoken only three times, in each instance by blacks recalling the insults they endured in their childhoods.

To HK&M, this might suggest only that the Times is showing insufficient courtesy to African Americans by printing nigger in full. And it's true that other media are more scrupulous about masking the word than the Times is, notably the New York Post and Fox News and its outlets:

Walmart was in hot water on Monday morning after a product’s description of “N___ Brown” was found on their website. Fox32news, 2017

After Thurston intervened, Artiles continued on and blamed "six n——" for letting Negron rise to power. Fox13news.com, April 19, 2017

In a 2007 encounter with his best friend’s wife, Hogan unleashed an ugly tirade about his daughter Brooke’s black boyfriend.“I mean, I’d rather if she was going to f–k some n—-r, I’d rather have her marry an 8-foot-tall n—-r worth a hundred million dollars! Like a basketball player! I guess we’re all a little racist. F—ing n—-r,” Hogan said, according to a transcript of the recording. New York Post May 2, 2016

"Racism, we are not cured of it," Obama said. "And it's not just a matter of it not being polite to say n***** in public." Foxnews.com June 22, 2015

One might conclude from this, following HK&M's line of argument, that the New York Post and Fox News are demonstrating a greater degree of racial sensitivity than the Times. Still, given the ideological bent of these outlets, one might also suspect that masking is doing a different kind of social work.

Slurs in Scholarship

As an aside, I should note that the deficiencies of the masking approach are even more obvious when we turn to the mention of these words in linguistic or philosophical discussions of slurs and derogative terms, which often involve numerous mentions of a variety of terms. In my forthcoming paper “The Social Life of Slurs,” I discuss dozens of derogative terms, including not just racial, religious, and ethnic slurs, but political derogatives (libtard, commie), geographical derogations (cracker, It. terrone), and derogations involving disability (cripple, spazz, retard), class (pleb, redneck), sexual orientation (faggot, queer, poofter), and nonconforming gender (tranny). I'm not sure how HK&M would suggest I decide which of these called out for masking with asterisks—just the prototypical ones like nigger and spic, or others that may be no less offensive to the targeted group? Cast the net narrowly and you seem to be singling out certain forms of bigotry for special attention; cast it widely and the text starts to look like a circus poster. Better to assume that the readers of linguistics and philosophy journals—and linguistics blogs—are adult and discerning enough to deal with the unexpurgated forms.

What's Wrong with Masking?

The unspoken assumption behind masking taboo words is that they’re invested with magical powers—like a conjuror’s spell, they are inefficacious unless they are pronounced or written just so. This is how we often think of vulgarisms of course—that writing fuck as f*ck or fug somehow denatures it, even though the reader knows perfectly well what the word is. That's what has led a lot of people in recent years to assimilate racial slurs to vulgarisms—referring to them with the same kind of initialized euphemism used for shit and fuck and describing them with terms like “obscenity” and “curse word” with no sense of speaking figuratively.

But the two cases are very different. Vulgarities rely for their effect on a systematic hypocrisy: we officially stigmatize them in order to preserve their force when they are used transgressively. (Learning to swear involves both being told to avoid the words and hearing them used, ideally by the same people.) But that’s exactly the effect that we want to avoid with slurs: we don’t want their utterers to experience the flush of guilty pleasure or the sense of complicity that comes of violating a rule of propriety—we don't want people ever to use the words, or even think them. Yet that has been one pernicious effect of the toxification of certain words.

It should give us pause to realize that the assimilation of nigger to naughty words has been embraced not just by many African Americans, but also by a large segment of the cultural and political right. Recall the reactions when President Obama remarked in an interview with Marc Maron’s "WTF" podcast that curing racism was “not just a matter of it not being polite to say ‘nigger’ in public.” Some African Americans were unhappy with the remark—the president of the Urban League said the word "ought to be retired from the English language." Others thought it was appropriate.

But the response from many on the right was telling. They, too, disapproved of Obama’s use of the word, but only because it betrayed his crudeness. A commentator on Fox News wrote:

And then there's the guy who runs the "WTF" podcast — an acronym for a word I am not allowed to write on this website. President Obama agreed to a podcast interview with comedian Marc Maron — a podcast host known for his crude language. But who knew the leader of the free world would be more crude than the host?

The Fox News host Elisabeth Hasselbeck also referenced the name of Maron’s podcast and said,

I think many people are wondering if it’s only there that he would say it, and not, perhaps, in a State of the Union or more public address.

Also on Fox News, the conservative African American columnist Deneen Borelli said that Obama “has really dragged in the gutter speak of rap music. So now he is the first president of rap, of street?”

It’s presumably not an accident that Fox News’s online reports of this story all render nigger as n****r. It reflects the "naughty word" understanding of the taboo that led members of a fraternity at the University of Oklahoma riding on a charter bus to chant, “There will never be a nigger at SAE/You can hang him from a tree, but he'll never sign with me,” with the same gusto that male college students of my generation would have brought to a sing-along of “Barnacle Bill the Sailor.”

That understanding of nigger as a dirty word also figures in the rhetorical move that some on the right have made, in shifting blame for the usage from white racists to black hip hop artists—taking the reclaimed use of the word as a model for white use. That in turn enables them to assimilate nigger—which they rarely distinguish from nigga—to the vulgarities that proliferate in hip hop. Mika Brzezinski and Joe Scarborough of Morning Joe blamed the Oklahoma incident on hip hop, citing the songs of Waka Flocka Flame, who had canceled a concert at the university; as Brzezinski put it:

If you look at every single song, I guess you call these, that he’s written, it’s a bunch of garbage. It’s full of n-words, it’s full of f-words. It’s wrong. And he shouldn’t be disgusted with them, he should be disgusted with himself.

On the same broadcast, Bill Kristol added that “popular culture has become a cesspool,” again subsuming the use of racist slurs, via hip hop, under the heading of vulgarity and obscenity in general.

I don’t mean to suggest that Brzezinski, Scarborough and Kristol aren’t genuinely distressed by the use of racial slurs (I have my doubts about some of the Fox News hosts). But for the respectable sectors of the cultural right—I mean as opposed to the unreconstructed bigots who have no qualms about using nigger at Trump rallies or on Reddit forums—the essential problem with powerful slurs is that they’re vulgar and coarse, and only secondarily that they’re the instruments of social oppression. And the insistence on categorically avoiding unmasked mentions of the words is very easy to interpret as supporting that view. In a way, it takes us back to the disdain for the word among genteel nineteenth-century Northerners. A contributor to an 1894 number of the Century Magazine wrote that “An American feels something vulgar in the word ‘nigger’. A ‘half-cut’ [semi-genteel] American, though he might use it in speech, would hardly print it.” And a widely repeated anecdote had William Seward saying of Stephen Douglas that the American people would never elect as president “[a] man who spells negro with two g’s,” since “the people always mean to elect a gentleman for president.” (That expression, "spelling negro with two g's," was popular at the time, a mid-nineteenth-century equivalent to the form n*****r.)

This all calls for care, of course. There are certainly contexts in which writing nigger in full is unwise. But in serious written discussions of slurs and their use, we ought to be able to spell the words out, in the reasonable expectation that our readers will discern our purpose.

As John McWhorter put this point in connection with the remarks Obama made on the Marc Maron podcast:

Obama should not have to say “the N-word” when referring to the word, and I’m glad he didn’t. Whites shouldn’t have to either, if you ask me. I am now old enough to remember when the euphemism had yet to catch on. In a thoroughly enlightened 1990s journalistic culture, one could still say the whole word when talking about it.… What have we gained since then in barring people from ever uttering the word even to discuss it—other than a fake, ticklish nicety that seems almost designed to create misunderstandings?

by Geoff Nunberg at July 23, 2017 01:36 AM

July 20, 2017

Ph.D. student

What are the right metrics for evaluating the goodness of government?

Let’s assume for a moment that any politically useful language (“freedom”, “liberalism”, “conservatism”, “freedom of speech”, “free markets”, “fake news”, “democracy”, “fascism”, “theocracy”, “radicalism”, “diversity”, etc.) will get coopted by myriad opposed political actors that are either ignorant or uncaring of its original meaning and twisted to reflect only the crudest components of each ideology.

It follows from this assumption that an evaluation of a government based on these terms is going to be fraught to the point of being useless.

To put it another way: the rapidity and multiplicity of framings available for the understanding of politics, and the speed with which framings can assimilate and cleverly reverse each other, makes this entire activity a dizzying distraction from substantive evaluation of the world we live in.

Suppose that nevertheless we are interested in justice, broadly defined as the virtue of good government or of a well-crafted state.

It’s not going to be helpful to frame this argument, as it has been classically, in the terminology that political ideological battles have been fought in for centuries.

For domestic policy, legal language provides some kind of anchoring of political language. But legal language still accommodates drift (partly by design) and it does not translate well internationally.

It would be better to use an objective, scientific approach for this sort of thing.

That raises the interesting question: if one were to try to measure justice, what would one measure? Assuming one could observe and quantify any relevant mechanism in society, which ones would be the ones worth tracking and optimizing to make society more just?


by Sebastian Benthall at July 20, 2017 02:25 AM

July 19, 2017

Ph.D. student

Glass Enterprise Edition Doesn’t Seem So Creepy

Google Glass has returned — as Glass Enterprise Edition. The company’s website suggests that it can be used in professional settings, such as manufacturing, logistics, and healthcare, for specific work applications, such as accessing training videos, annotated images, hands-free checklists, or sharing your viewpoint with an expert collaborator. This is a very different imagined future with Glass than in the 2012 “One Day” concept video where a dude walks around New York City taking pictures and petting dogs. In fact, the idea of using this type of product in a professional working space, collaborating with experts from your point of view sounds a lot like the original Microsoft HoloLens concept video (mirror).

This is not to say one company followed or copied another (and in fact HoloLens’ more augmented-reality-like interface and Glass’ more heads-up-display-like interface will likely be used for different types of applications). It is, however, a great example of how a product’s creepiness is partly related to whether it’s envisioned as a device to be used in constrained contexts or not. In a great opening line which I think sums this up well, Levi Sumagaysay at Silicon Beat says:

Now Google Glass is productive, not creepy.

As I’ve previously written with Deirdre Mulligan [open access version] about the future worlds imagined by the original video presentations of Glass and HoloLens, Glass’ original portrayal of being always-on (and potentially always recording), invisible to others, taking information from one social context and using it in another, used in public spaces, made it easier to see it as a creepy and privacy-infringing device. (It didn’t help that the first Glass video also only showed the viewpoint of a single imagined user, a 20-something-year-old white man). Its goal seemed to be to capture information about a person’s entire life — from riding the subway to getting coffee with friends, to shopping, to going on dates. And a lot of people reacted negatively to Glass’ initial explorer edition, with Glass bans in some bars and restaurants, campaigns against it, and the rise of the colloquial term “glasshole.” In contrast, HoloLens was depicted as a very visible and very bulky device that can be easily seen, and its use was limited to a few familiar, specific places and contexts — at work or at home, so it’s not portrayed as a device that could record anything at any time. Notably, the HoloLens video also avoided showing the device in public spaces. HoloLens was also presented as a productivity tool to help complete specific tasks in new ways (such as CAD, helping someone complete a task by sharing their point of view, and the ever exciting file sharing), rather than a device that could capture everything about a user’s life. And there were few public displays of concern over privacy. (If you’re interested in more, I have another blog entry with more detail). 

Whether explicit or implicit, the presentation of Glass Enterprise Edition seems to recognize some of the lessons about constraining the use of such an expansive set of capabilities to particular contexts and roles. Using Glass’ sensing, recording, sharing, and display capabilities within the confines of professionals doing manufacturing, healthcare, or other work on the whole helps position the device as something that will not violate people’s privacy in public spaces. (Though it is perhaps still to be seen what types of privacy problems related to Glass will emerge in workplaces, and how those might be addressed through design, use rules, training, and so forth). What is perhaps more broadly interesting is how the same technology can take on different meanings with regards to privacy based on how it’s situated, used, and imagined within particular contexts and assemblages.


by Richmond at July 19, 2017 06:30 PM

July 14, 2017

adjunct professor

Polysemous Pejoratives

Geoff Pullum suggests that the flap over an MP’s use of nigger in the woodpile is overdone:

Anne Marie Morris, the very successful Conservative MP for Newton Abbot in the southwestern county of Devon, did not call anyone a nigger.…
Ms. Morris used a fixed phrase with its idiomatic meaning, and it contained a word which, used in other contexts, can be a decidedly offensive way of denoting a person of negroid racial type, or an outright insult or slur. Using such a slur — referring to a black person as a nigger — really would be a racist act. But one ill-advised use of an old idiom containing the word, in a context where absolutely no reference to race was involved, is not.

Oh, dear. As usual, Geoff's logic is impeccable, but in this case it's led him terribly astray.

As it happens, I addressed this very question in a report I wrote on behalf of the petitioners who asked the Trademark Board to cancel the mark of the Washington Redskins on the grounds that it violated the Lanham Act’s disparagement clause. The team argued, among other things, that “the fact that the term ‘redskin,’ used in singular, lower case form, references an ethnic group does not automatically render it disparaging when employed as a proper noun in the context of sports.” The idea here is that the connotations of a pejorative word do not persist when it acquires a transferred meaning—as the team’s lead attorney put it, “It’s what our word means.” In fact, they added, the use of the name as team name has only positive associations.

I responded, in part:

Nigger has distinct denotations when it is used for a black person, a shade of dark brown, or in phrases like nigger chaser, nigger fish, or niggertoe (a Brazil nut), and in phrases like nigger in the woodpile. All of those expressions are “different words” from the slurring ethnonym nigger from which they are derived, but each of them necessarily inherits its disparaging connotations. The OED now labels all of them as "derogatory" or "offensive." On consideration, it’s obvious why these connotations should persist when an expression acquires a transferred meaning—for more-or-less the same reason the connotations of fuck persist when it's incorporated in fuckwad. The power of a slur is derived from its history of use, a point that Langston Hughes made powerfully in a passage from his 1940 memoir The Big Sea:

The word nigger sums up for us who are colored all the bitter years of insult and struggle in America: the slave-beatings of yesterday, the lynchings of today, the Jim Crow cars…the restaurants where you may not eat, the jobs you may not have, the unions you cannot join. The word nigger in the mouths of little white boys at school, the word nigger in the mouth of the foreman at the job, the word nigger across the whole face of America! Nigger! Nigger!

When one uses a slur like nigger, that is, one is "making linguistic community with a history of speakers,” as Judith Butler puts it. One speaks with their voice and evokes their attitudes toward the target, which is why the force of the word itself trumps the speaker’s individual beliefs or intentions. Whoever it was who decided to name a color nigger brown or to call a slingshot a nigger shooter could only have been someone who already used the word to denote black people and who presumed that that usage was common in his community. (Someone who was diffident about using the word in its literal meaning would hardly be comfortable using it metaphorically.) To continue to use those expressions, accordingly, is to set oneself in the line of those who have used the term as a racial slur in the past. Slurs keep their force even when they’re detached from their original reference. That’s why, in 1967, the US Board on Geographic Names removed Nigger from 167 place names. People may have formed agreeable associations in the past around a place called Nigger Beach, or a company called Nigger Lake Holidays, but they don’t redeem the word.

Tony Thorne says that, as late as the 1960s, it was possible to use the expression nigger in the woodpile “without having a conscious racist intention,” and Geoff argues that Morris’s utterance was not a racist act. That depends on what a "racist act" comes down to. It’s fair to assume that she didn’t utter the phrase with any deliberate intention of manifesting her contempt for blacks. But intention or no, anyone who uses any expression containing the word nigger in this day and age is culpably obtuse—all the more since nigger, more than other slurs, has become so phonetically toxic that people are reluctant even to mention it, in the philosophical sense, at least in speech. “Racially insensitive” doesn’t begin to say it.

It's that same obtuseness, I’d argue, that makes the Washington NFL team’s use of redskin objectionable, despite the insistence of the owners and many fans that they intend only to show “reverence toward the proud legacy and traditions of Native Americans” (even if the name of their team is a wholly different word). If that word seems different from nigger, it’s only because the romanticized redskin is at a remove from the facts of history. Say “redskin” and what comes to mind is a sanitized and reassuring image of the victims of a long and brutal genocidal war, familiar from a hundred movie Westerns: the fierce, proud primitives, hopelessly outmatched by the forces of civilization, who nonetheless resisted courageously and died like men. (As Pat Buchanan put it in defending the team’s use of the name, “These were people who stood, fought and died and did not whimper.”)

In fact the most deceptive slurs aren’t the ones that express unmitigated contempt for their targets, like nigger and spic. They’re the ones that are tinged with sentimentality, condescension, pity, or exoticism, which are no less reductive or dehumanizing but are much easier to justify to ourselves. Recall the way the hipsters and hippies used spade as what Ken Kesey described as “a term of endearment.” Think of Oriental or cripple, or a male executive’s description of his secretary as “my gal.” Did that usage become sexist only when feminists pointed it out? Was it sexist only to women who objected to it? That's the thing about obtuseness: you can look deep in your heart and come up clean.

[Note: Just to anticipate a potential red herring, the recent Supreme Court decision invalidating the relevant clause of the Lanham Act didn't bear on the Redskins' claim that their name was not disparaging. The Court simply said that disparagement wasn't grounds for denying registration of a mark. The most recent judicial determination in this matter was that of the Court of Appeals, which upheld the petitioners' case.]

by Geoff Nunberg at July 14, 2017 02:06 AM

July 12, 2017

MIMS 2014

Which Countries are the Most Addicted to Coffee?

This is my last blog post about coffee (I promise). Ever since stumbling upon this Atlantic article about which countries consume the most coffee per capita, I’ve pondered a deeper question—not just who drinks the most coffee, but who’s most addicted to the stuff. You might argue that these questions are one and the same, but when it comes to studying addiction, it actually makes more sense to look at it through the lens of elasticity rather than gross consumption.

You might recall elasticity from an earlier blog post. Generally speaking, elasticity is the economic concept relating sensitivity of consumption to changes in another variable (in my earlier post, that variable was income). When it comes to studying addiction, economists focus on price elasticity—i.e. % change in quantity divided by the % change in price. And it makes sense. If you want to know how addicted people are to something, why not see how badly they need it after you jack up the price? If they don’t react very much, you can surmise that they are more addicted than if they do. Focusing on elasticity rather than gross consumption allows for a richer understanding of addiction. That’s why economists regularly employ this type of analysis when it comes to designing public policy around cigarette taxes.

I would never want to tax coffee, but I am interested in applying the same approach to calculate the price elasticity of coffee across different countries. While price elasticity is not a super complicated idea to grasp, in practice it is actually quite difficult to calculate. In the rest of this blog post, I’ll discuss these challenges in detail.

Gettin’ Da Data

The first problem for any data analysis is locating suitable data; in this case, the most important data is information for the two variables that make up the definition of elasticity: price and quantity. Thanks to the International Coffee Organization (ICO), finding data about retail coffee prices was surprisingly easy for 26 different countries reaching (in some cases) back to 1990.

Although price data was remarkably easy to find, there were still a few wrinkles to deal with. First, for a few countries (e.g. the U.S.), there was missing data in some years. To remedy this, I used the handy R package imputeTS to generate reasonable values for the gaps in the affected time series.
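For concreteness, here is a minimal sketch of that gap-filling step, assuming a numeric vector of annual prices with missing years; the toy values are invented, and na_interpolation() is one of imputeTS's basic imputation functions (older releases of the package used dot-separated names).

```r
# Minimal sketch of the imputation step, with invented toy prices.
library(imputeTS)

price <- c(3.10, 3.25, NA, 3.40, NA, NA, 3.95, 4.10)  # nominal retail prices, two gaps
price_filled <- na_interpolation(price)               # interpolates across the NAs
price_filled
```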

The other wrinkle related to inflation. I searched around ICO’s website to see if their prices were nominal or whether they controlled for the effects of inflation. Since I couldn’t find any mention of inflation, I assumed the prices were nominal. Thus, I had to make a quick stop at the World Bank to grab inflation data so that I could deflate nominal prices to real prices.

While I was at the Bank, I also grabbed data on population and real GDP. The former is needed to get variables on a per capita basis (where appropriate), while the latter is needed as a control in our final model. Why? If you want to see how people react to changes in the price of coffee, it is important to hold constant any changes in their income. We’ve already seen how positively associated coffee consumption is with income, so this is definitely a variable you want to control for.

Getting price data might have been pretty easy, but quantity data posed more of a challenge. In fact, I wasn’t able to find any publicly available data about country-level coffee consumption. What I did find, however, was data about coffee production, imports and exports (thanks to the UN’s Food and Agriculture Organization). So, using a basic accounting logic (i.e. production – exports + imports), I was able to back into a net quantity of coffee left in a country in a given year.
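In code, that accounting step is just the identity above applied year by year; this toy sketch uses made-up numbers for a hypothetical non-producing country.

```r
# Net quantity proxy for consumption: production - exports + imports.
# All values are hypothetical (say, thousands of 60 kg bags per year).
production <- c(0, 0, 0)
exports    <- c(150, 160, 170)
imports    <- c(1200, 1250, 1300)

net_quantity <- production - exports + imports
net_quantity   # the proxy for coffee consumed in each year
```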

There are obvious problems with this approach. For one thing, it assumes that all coffee left in a country after accounting for imports and exports is actually consumed. Although coffee is a perishable good, it is likely that at least in some years, quantity is carried over from one year to another. And unfortunately, the UN’s data gives me no way to account for this. The best I can hope for is that this net quantity I calculate is at least correlated with consumption. Since elasticity is chiefly concerned with the changes in consumption rather than absolute levels, if they are correlated, then my elasticity estimates shouldn’t be too severely biased. In all, the situation is not ideal. But if I couldn’t figure out some sort of workaround for the lack of publicly available coffee consumption data, I would’ve had to call it a day.

Dogged by Endogeneity

Once you have your data ducks in a row, the next step is to estimate your statistical model. There are several ways to model addiction, but the simplest is the following:

log(cof\_cons\_pc) = \alpha + \beta_{1} * log(price) + \beta_{2} * log(real\_gdp\_pc) + \beta_{3} * year + \varepsilon

The above linear regression equation models per capita coffee consumption as a function of price while controlling for the effects of income and time. The regression is taken in logs mostly as a matter of convenience, since logged models allow the coefficient on price, \beta_1, to be your estimate of elasticity. But there is a major problem with estimating the above equation, as economists will tell you, and that is the issue of endogeneity.

Endogeneity can mean several different things, but in this context, it refers to the fact that you can’t isolate the effect of price on quantity because the data you have is the sum total of all the shocks that shift both supply and demand over the course of a year. Shocks can be anything that affect supply/demand apart from price itself—from changing consumer tastes to a freak frostbite that wipes out half the annual Colombian coffee crop. These shocks are for the most part unobserved, but all together they define the market dynamics that jointly determine the equilibrium quantity and price.

To isolate the effect of price, you have to locate the variation in price that is not also correlated with the unobserved shocks in a given year. That way, the corresponding change in quantity can safely be attributed to the change in price alone. This strategy is known as using an instrumental variable (IV). In the World Bank Tobacco Toolkit, one suggested IV is lagged price (of cigarettes in their case, though the justification is the same for coffee). The rationale is that shocks from one year are not likely to carry over to the next, while at the same time, lagged price remains a good predictor of price in the current period. This idea has its critics (which I’ll mention later), but has obvious appeal since it doesn’t require any additional data.

To implement the IV strategy, you first run the model:

 

price_t = \alpha + \beta_{1} * price_{t-1} + \beta_{2} * real\_gdp\_pc_t + \beta_{3} * year_t + \varepsilon_t

You then use predicted values of price_t from this model as the values for price in the original elasticity model I outlined above. This is commonly known as Two Stage Least Squares regression.
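Here is a minimal sketch of that two-stage procedure in R for a single country. The data frame df and its column names (cons_pc, price, gdp_pc, year) are assumptions of mine, not the actual dataset, and df is assumed to be sorted by year.

```r
# Stage 0: construct the instrument, price lagged by one year
df$lag_price <- c(NA, head(df$price, -1))

# Stage 1: regress current price on lagged price plus the controls
stage1 <- lm(price ~ lag_price + gdp_pc + year, data = df)
df$price_hat <- predict(stage1, newdata = df)

# Stage 2: the elasticity model, with fitted prices standing in for price;
# the coefficient on log(price_hat) is the elasticity estimate
stage2 <- lm(log(cons_pc) ~ log(price_hat) + log(gdp_pc) + year, data = df)
summary(stage2)
```

Note that running the two stages by hand like this gives the right point estimates but not the right standard errors; a dedicated IV routine (for instance ivreg in the AER package) handles that correction.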

Next Station: Non-Stationarity

Endogeneity is a big pain, but it’s not the only issue that makes it difficult to calculate elasticity. Since we’re relying on time series data (i.e. repeated observations of country level data) as our source of variation, we open ourselves to the various problems that often pester inference in time series regression as well.

Perhaps the most severe problem posed by time series data is the threat of non-stationarity. What is non-stationarity? Well, when a variable is stationary, its mean and variance remain constant over time. Having stationary variables in linear regression is important because when they’re not, it can cause the coefficients you estimate in the model to be spurious—i.e. meaningless. Thus, finding a way to make sure your variables are stationary is rather important.

This is all made more complicated by the fact that there are several different flavors of stationarity. A series might be trend stationary, which means that it’s stationary around a trend line. Or it might be difference stationary. That means that it’s stationary after you difference the series, where differencing is to subtract away the value of the series from the year before, so you’re just left with the change from year to year. A series could also have structural breaks, like an outlier or a lasting shift in the mean (or multiple shifts). And finally, if two or more series are non-stationary but related to one another by way of something called co-integration, then you have to apply a whole different analytical approach.

At this point, a concrete example might help to illustrate the type of adjustments that need to be made to ensure stationarity of a series. Take a look at this log-scaled time series of coffee consumption in The Netherlands:

[Figure: log-scaled time series of coffee consumption in the Netherlands, 1990 onward]

It seems like overall there is a slightly downward trend from 1990 through 2007 (with a brief interruption in 1998/99). Then, in 2008 through 2010, there was a mean shift downward, followed by another shift down in 2011. But starting in 2011, there seems to be a strong upward trend. All of these quirks in the series are accounted for in my elasticity model for The Netherlands using dummy variables—interacted with year when appropriate to allow for different slopes in different epochs described above.

This kind of fine-grained analysis had to be done on three variables per model—across twenty-six. different. models. . . Blech. Originally, I had hoped to automate much of this stage of the analysis, but the idiosyncrasies of each series made this impossible. The biggest issue was the structural breaks, which easily throw off the Augmented Dickey-Fuller test, the workhorse for detecting statistically whether or not a series is stationary.
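As a rough sketch of those checks, assuming a numeric vector cons holding the logged annual series for one country, the ADF test is available as adf.test() in the tseries package:

```r
library(tseries)

adf.test(cons)          # test the level series for a unit root
adf.test(diff(cons))    # test the first-differenced series

# Detrending is just regressing on time and keeping the residuals
t <- seq_along(cons)
adf.test(residuals(lm(cons ~ t)))
```

Structural breaks are the part this cannot see on its own, which is why the dummy-variable surgery described above still had to be done by hand.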

This part of the project definitely took the longest to get done. It also involved a fair number of judgment calls—when a series should be de-trended, or differenced, or how to separate the epochs when structural breaks were present. All this lends credence to the critique of time series analysis that it can often be more of an art than a science. The work was tedious, but at the very least, it gave me confidence that it might be a while before artificial intelligence replaces humans for this particular task. In prior work, I actually implemented a structural break detection algorithm I once found in a paper, but I wasn’t impressed with its performance, so I wasn’t going to go down that rabbit hole again (for this project, at least).

Other Complications

Even after you’ve dealt with stationarity, there are still other potential problem areas. Serial correlation is one of them. What is serial correlation? Well, one of the assumptions in linear regression is that the error term, or \varepsilon, as it appears in the elasticity model above, is independent across different observations. Since you observe the same entity multiple times in time series data, the observations in your model are by definition dependent, or correlated. A little serial correlation isn’t a big deal, but a lot of serial correlation can cause your standard errors to become biased, and you need those for fun stuff like statistical inference (confidence intervals/hypothesis testing).

Another problem that can plague your \varepsilon‘s is heteroskedasticity, which is a complicated word that means the variance of your errors is not constant over time. Fortunately, both heteroskedasticity and serial correlation can be controlled for using robust covariance calculations known as Newey-West estimators. These methods are easily accessible in R via the sandwich package, and I used it whenever I observed heteroskedasticity or serial correlation in my models.
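Concretely, the robust-inference step looks something like the following, reusing the stage2 model object from the 2SLS sketch above; NeweyWest() comes from the sandwich package and coeftest() from lmtest.

```r
library(sandwich)
library(lmtest)

nw_vcov <- NeweyWest(stage2)        # HAC covariance matrix for the fitted model
coeftest(stage2, vcov = nw_vcov)    # coefficient tests with serial-correlation-
                                    # and heteroskedasticity-robust standard errors
```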

 

A final issue is the problem of multicollinearity. Multicollinearity is not strictly a time series related issue; it occurs whenever the covariates in your model are highly correlated with one another. When this happens, your beta estimates become highly unreliable and unstable. This occurred in the models for Belgium and Luxembourg between the IV price variable and GDP. There are not many good options when your model suffers from multicollinearity. Keep the troublesome covariate, and you’re left with some really weird coefficient values. Throw it out, and your model could suffer from omitted variable bias. In the end, I excluded GDP from these models because the estimated coefficients looked less strange.

Results/Discussion

In the end, only two of the elasticities I estimated ended up being statistically significant—or reliable—estimates of elasticity (for Lithuania and Poland). The rest were statistically insignificant (at \alpha = 0.05), which means that positive values of elasticity are among the plausible values for a majority of the estimates. From an economic theory standpoint this makes little sense, since it is a violation of the law of demand. Higher prices should lead to a fall in demand, not a rise. Economists have a name for goods that violate the law of demand—Giffen goods—but they are very rare to encounter in the real world (some say that rice in China is one of them); I’m pretty sure coffee is a normal, well-behaved good.

Whenever you wind up with insignificant results, there is always a question of how to present them (if at all). Since the elasticity models produce actual point estimates for each country, I could have just put those point estimates in descending order and spit out that list as a ranking of the countries most addicted to coffee. But that would be misleading. Instead, I think it’s better to display entire confidence intervals—particularly to demonstrate the variance in certainty (i.e. width of the interval) across the different country estimates. The graphic below ranks countries from top to bottom in descending order by width of confidence interval. The vertical line at -1.0% is a reference for the threshold between goods considered price elastic ( \epsilon < -1.0% ) versus price inelastic ( -1.0% < \epsilon < 0.0% ).

[Figure: estimated price elasticity of coffee demand by country, shown as confidence intervals ordered by interval width]

When looking at the graphic above, it is important to bear in mind that apart from perhaps a few cases, it is not possible to draw conclusions about the differences in elasticities between individual countries. You cannot, for example, conclude that coffee is more elastic in the United States relative to Spain. To generate a ranking of elasticities across all countries (and arrive at an answer to the question posed by the post title), we would need to perform a battery of pairwise comparisons between all the different countries ([26*25]/2 = 325 in total). Based on the graphic above, however, I am not convinced this would be worth the effort. Given the degree of overlap across confidence intervals—and the fact that the significance-level correction to account for multiple comparisons would only make this problem worse—I think the analysis would just wind up being largely inconclusive.

In the end, I’m left wondering what might be causing the unreliable estimates. In some cases, it could just be a lack of data; perhaps with access to more years—or more granular data taken at monthly or quarterly intervals—confidence intervals would shrink toward significance. In other cases, I might have gotten unlucky in terms of the variation of a given sample. But I am also not supremely confident in the fidelity of my two main variables, quantity and price, since both variables have artificial qualities to them. Quantity is based on values I synthetically backed into rather than coming from a concrete, vetted source, and price is derived from IV estimation. Although I trusted the World Bank when it said lagged price was a valid IV, I subsequently read some literature that said it may not solve the endogeneity issue after all. Specifically, it argues the assumption that shocks are not serially correlated is problematic.

If lagged price is not a valid IV, then another variable must be found that is correlated with price, but not with shocks to demand. Upon another round of Googling, I managed to find data with global supply-side prices through the years. It would be interesting to compare the results using these two different IVs. But then again, I did promise that this would be my last article about coffee… Does that mean the pot is empty?



by dgreis at July 12, 2017 11:23 PM

Ph.D. student

Overdetermined outcomes in social science

One of the reasons why it’s important to think explicitly about downward causation in society is how it interacts with considerations of social and economic justice.

Purely bottom-up effects can seem to have a different social valence than top-down effects.

One example, as noted by David Massad, has to do with segregation in housing. Famously, the Schelling segregation model shows how segregation in housing could be the result of autonomous individual decisions by people with a small preference for being with others like themselves (homophily). But historically in the United States, one factor influencing segregation was redlining, a top-down legal process.
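A toy one-dimensional version of the model (my own illustration in the spirit of Schelling, not his original grid formulation, with arbitrary parameters) shows the bottom-up mechanism in a few lines of R:

```r
# Two types of agents on a line relocate to a random empty cell whenever fewer
# than half of their occupied neighbors share their type.
set.seed(1)
n <- 100
world <- sample(c(0, 1, 2), n, replace = TRUE, prob = c(0.2, 0.4, 0.4))  # 0 = empty cell
init <- world

unhappy <- function(w, i, tol = 0.5) {
  if (w[i] == 0) return(FALSE)
  window <- setdiff(max(1, i - 2):min(length(w), i + 2), i)  # small neighborhood around i
  nbrs <- w[window]
  nbrs <- nbrs[nbrs != 0]
  if (length(nbrs) == 0) return(FALSE)
  mean(nbrs == w[i]) < tol
}

for (step in 1:5000) {
  i <- sample(n, 1)
  if (unhappy(world, i)) {
    empty <- which(world == 0)
    j <- empty[sample.int(length(empty), 1)]
    world[j] <- world[i]
    world[i] <- 0
  }
}

# Average run length of same-type agents: it typically grows relative to the
# random starting configuration, i.e. mild local preferences produce clustering.
mean(rle(init[init != 0])$lengths)
mean(rle(world[world != 0])$lengths)
```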

Today, there is no question that there is great inequality in society. But the mechanism behind that inequality is unknown (at least to me, in my current informal investigation of the topic). One explanation, no doubt overly simplified, would be to say that wealth distribution is just a disorganized heavy tail distribution. A more specific account from Piketty would frame the problem as an organized heavy tail distribution based on the feedback effect of the relative difference in rate of return on capital versus labor. Naidu would argue that this difference in the rate of return is due to political agency on the part of capitalists, which would imply a downward causation mechanism from capitalist class interest to individual wealth distributions.

The key thing to note here is that the mere fact of inequality does not give us a lot to distinguish empirically between these competing hypotheses.

It is possible that the specific distribution (i.e. cumulative distribution function) of inequality can shed light on which, if any, of these hypotheses hold. To work this out, we would need to come up with a likelihood function for the probability of the wealth distributions occurring under each hypothesis. Likely the result would be subtle: the difference in the likelihood functions would concern not whether but how much inequality results, and whether and in what ways the wealth distribution is stratified.

Of course, another approach would be to collect other data besides the wealth distribution that bears on the problem. But what would that be? The legal record of the tax code, perhaps. But this does not straightforwardly solve our problem. Whatever the laws are and however they have changed, we cannot be sure of their effect on economic outcomes without testing them somehow against the empirical distribution again.

Another challenge to teasing these hypotheses apart is that they are not entirely distinct from each other. A disorganized heavy tail distribution posits a large number of contributing factors. Difference in rate of return on capital may be one important factor. But is it everything? Need it be everything to be an important social scientific theory?

A principled way of going about the problem would be to regress the total distribution against a number of potential factors, including capital returns and income and whatever other factors come to mind. This is the approach naturally taken in data science and machine learning. The result would be the identification of a vector of coefficients that would indicate the relative importance of different factors on total wealth.
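As a sketch of what that would look like with simulated data (everything below is invented for illustration): twenty weak multiplicative factors produce a heavy-tailed "wealth" outcome, and the regression dutifully returns a vector of coefficients in which no single factor dominates.

```r
set.seed(1)
n <- 5000
factors <- matrix(runif(n * 20, 0.8, 1.2), nrow = n)         # 20 weak, independent factors
wealth  <- apply(factors, 1, prod) * rlnorm(n, sdlog = 0.1)  # multiplicative outcome plus noise

fit <- lm(log(wealth) ~ factors)
summary(fit)$coefficients[2:6, 1:2]   # a few of the 20 coefficients: all comparable in size
```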

Suppose there are 20 such factors, any one of which can be removed with minimal impact on the overall outcome. What then?


by Sebastian Benthall at July 12, 2017 03:56 PM

July 11, 2017

Ph.D. student

Why disorganized heavy tail distributions?

I wrote too soon.

Miller and Page (2009) do indeed address “fat tail” distributions explicitly in the same chapter on Emergence discussed in my last post.

However, they do not touch on the possibility that fat tail distributions might be log normal distributions generated by the Central Limit Theorem, as is well-documented by Mitzenmacher (2004).

Instead, they explicitly make a different case. They argue that there are two kinds of complexity:

  • disorganized complexity, complexity where extreme values balance each other out to create average aggregate behavior according to the Law of Large Numbers and Central Limit Theorem.
  • organized complexity, where positive and negative feedback can result in extreme outcomes, best characterized by power law or “heavy tail” distributions. Preferential attachment is an example of a feedback based mechanism for generating power law distributions (in the specific case of network degrees).

Indeed, this rough breakdown of possible scientific explanations (the relatively orderly null-hypothesis world of normal distributions, and the chaotic, more accurately rendered world of heavy tail distributions) was the one I had before I started studying complex systems and statistics more seriously in grad school.

Only later did I come to the conclusion that this is a pervasive error, because of the ease with which log normal distributions (which may be “disorganized”) can be confused with power law distributions (which tend to be explained by “organized” processes). I am a bit disappointed that Miller and Page repeat this error, but then again their book was written in 2009. I wonder whether the methodological realization (which I assume I’m not alone in, as I hear it confirmed informally in conversations with smart people sometimes) is relatively recent.

Because this is something so rarely discussed in focus, I think it may be worth pondering exactly why disorganized heavy tail distributions are not favored in the literature. There are several reasons I can think of, which I’ll offer informally here as possibilities or hypotheses.

One reason that I’ve argued for before here is that organized processes are more satisfying as explanations than disorganized processes. Most people are not very good at thinking about probabilities (Tetlock and Gardner (2016) have a great, accessible discussion of why this is the case). So to the extent that the Law of Large Numbers or Central Limit Theorem have true explanatory power, it may not be the kind of explanation most people are willing to entertain. This apparently includes scientists. Rather, a simple explanation in terms of feedback may be the kind of thing that feels like a robust scientific finding, even if there’s something spurious about it when viewed rigorously. (This is related, I think, to arguments about the end of narrative in social science.)

Another reason why disorganized heavy tail distributions may be underutilized as scientific explanations is that it is counter-intuitive that a disorganized process can produce such extreme inequality in outcomes.

This has to do with the key transformation that is the difference between a normal and a log normal distribution. A normal distribution is a bell-shaped distribution one gets when one adds a large number of independent random variables.

The log normal distribution is a heavy tail distribution one gets by multiplying a large number of positively valued independent random variables. While it does have a bell or hump, the top of the bell is not at the arithmetic mean, because the sides of the bell are skewed in size. But this is not necessarily because of the dominance of any particular factor (as would be expected if, for example, a single factor were involved in a positive feedback loop). Rather, it is the mathematical fact of many factors multiplied creating extraordinarily high values which creates the heavy right-hand side of the bell.
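A quick simulation (arbitrary parameters, purely illustrative) makes the contrast concrete: summing the same independent factors gives a symmetric, roughly normal outcome, while multiplying them gives a skewed, heavy-tailed one.

```r
set.seed(1)
n_people  <- 10000
n_factors <- 30
factors <- matrix(runif(n_people * n_factors, 0.5, 1.5), nrow = n_people)

additive       <- rowSums(factors)           # approximately normal, by the CLT
multiplicative <- apply(factors, 1, prod)    # approximately log normal

quantile(additive, c(0.5, 0.99, 0.999))        # the top is close to the median
quantile(multiplicative, c(0.5, 0.99, 0.999))  # the top is many times the median
```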

One way to put it is that rather than having a “deep” positive feedback loop where a single factor amplifies itself many times over, disorganized heavy tails have “shallow” positive feedback where each of many factors has a single and simultaneous amplifying effect on the impact of all the others. This amplification effect is, like multiplication itself, commutative, which means that no single factor can be considered to be causally prior to the others.

Once again, this defies specificity in an explanation, which may be for some people an explanatory desideratum.

But these extreme values are somehow ones that people demand specific explanations for. This is related, I believe, to the desire for a causal lever with which people can change outcomes, especially their own personal outcomes.

There’s an important political question implicated by all this, which is: why is wealth and power concentrated in the hands of the very few?

One explanation that must be considered is the possibility that society is accumulated history, and over thousands of years an innumerable number of independent factors have affected the distribution of wealth and power. Though rather disorganized, these factors amplify each other multiplicatively, resulting in the distribution that we see today.

The problem with this explanation is that it seems there is little to be done about this state of affairs. A person can affect a handful of the factors that contribute to their own wealth or the wealth of another, but if there are thousands of them then it’s hard to get a grip. One must view the other as simply lucky or unlucky. How can one politically mobilize around that?

References

Miller, John H., and Scott E. Page. Complex adaptive systems: An introduction to computational models of social life. Princeton University Press, 2009.

Mitzenmacher, Michael. “A brief history of generative models for power law and lognormal distributions.” Internet mathematics 1.2 (2004): 226-251.

Tetlock, Philip E., and Dan Gardner. Superforecasting: The art and science of prediction. Random House, 2016.


by Sebastian Benthall at July 11, 2017 03:12 PM

July 10, 2017

Ph.D. student

The Law: Miller and Page on Emergence, and statistics in social science

I’m working now through Complex Adaptive Systems by Miller and Page and have been deeply impressed with the clarity with which they lay out key scientific principles.

In their chapter on “Emergence”, they discuss the key problem in science of accounting for how some phenomena emerge from lower level phenomena. In the hard sciences, examples include how the laws and properties of chemistry emerge from the laws and properties of particles as determined by physics. It has been suggested that the psychological states of the mind emerge from the physical states of the brain. In social sciences, there is the open question of how social forms emerge from individual behavior.

Miller and Page acknowledge that “unfortunately, emergence is one of those complex systems ideas that exists in a well-trodden, but relatively untracked, bog of discussions”. Epstein’s (2006) treatment of it is particularly aggressive, as he takes aim at early emergence theorists who used the term in a kind of mystifying sense and then attempts to replace this usage with his own much more concrete one.

So far in my reading on the subject there has been a lack of mathematical rigor in its treatment, but I’ve been impressed now with what Miller and Page specifically bring to bear on the problem.

Miller and Page provide two clear criteria for an emergent phenomenon:

  • “Emergence is a phenomenon whereby well-formulated aggregate behavior arises from localized, individual behavior.”
  • “Such aggregate behavior should be immune to reasonable variations in the individual behavior.”

Significantly, their first example of such an effect comes from statistics: it’s the Law of Large Numbers and related theorems like the Central Limit Theorem.

These are basic theorems in statistics about the properties of a sample of random variables. The Law of Large Numbers states that the average of a large number of samples will converge on the expected value of a single sample. The Central Limit Theorem states that the distribution of the sum of many independent and identically distributed random variables (with finite variance) tends towards a normal (or Gaussian) distribution, whatever the distribution of the underlying variables.
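Both Laws are easy to check numerically; the sketch below, in base R with arbitrary choices of distribution, is the whole demonstration.

```r
set.seed(1)

# Law of Large Numbers: the running average of i.i.d. draws approaches the
# expected value (0.5 for uniform draws on [0, 1]).
x <- runif(100000)
cumsum(x)[c(10, 1000, 100000)] / c(10, 1000, 100000)

# Central Limit Theorem: sums of many i.i.d. draws look Gaussian even when the
# underlying distribution (here exponential) is strongly skewed.
sums <- replicate(10000, sum(rexp(50)))
hist(sums, breaks = 50, main = "Sums of 50 exponential draws")
```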

Though these are mathematically statements about random variables and their aggregate value, Miller and Page correctly generalize from them to say that these Laws apply to the relationship between individual behavior and aggregate patterns. The emergent phenomena here (the mean or distribution of outcomes) fulfill their criteria for emergent properties: they are well formed and depend less and less on individual behavior the more individuals are involved.

These Laws are taught in Statistics 101. What is under-emphasized, in my experience, is the extent to which these Laws are determinative of social phenonema. Miller and Page cite an intriguing short story by Robert Coates, entitled “The Law” (1956), that explores the idea of what would happen if the Law of Large Numbers gave out. Suddenly traffic patterns would be radically unpredictable as the number of people on the road, or in a shopping mall, or outdoors enjoying nature, would be far from average far more often than we’re used to. Absurdly, the short story ends when the statistical law is at last adopted by Congress. This is absurd because of course this is one Law that affects all social and physical reality all the time.

Where this fact crops up less frequently than it should is in discussions of the origins of distributions of wide inequality. Physicists have for a couple decades been promoting the idea that the highly unequal “long tail” distributions found in society are likely power law distributions. Clauset, Shalizi, and Newman have developed a statistical test which, when applied, demonstrates that the empirical support for many of these claims isn’t truly there. Often these distributions are empirically closer to a log normal distribution, which can be explained by the Central Limit Theorem when one combines variables through multiplication rather than addition. My own small and flawed contribution to this long and significant line of research is here.
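One way to see why the confusion is so easy to make: over much of its range, a log normal sample traces out something close to a straight line on the log-log survival plot that is usually offered as evidence of a power law. A minimal sketch with purely simulated data:

```r
set.seed(1)
x <- sort(rlnorm(100000, meanlog = 0, sdlog = 2))
ccdf <- 1 - (seq_along(x) - 1) / length(x)   # empirical P(X >= x)

plot(x, ccdf, log = "xy", type = "l",
     xlab = "value", ylab = "P(X >= x)",
     main = "Log normal sample on log-log axes")
```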

As far as explanatory hypotheses go, the immutable laws of statistics have advantages and disadvantages. Their advantage is that they are always correct. The disadvantage of these Laws in particular is that they do not lend themselves to narrative explanation, which means they are in principle excluded from those social sciences that hold themselves to argument via narration. Narration, it is argued, is more interesting and compelling for audiences not well-versed in the general science of statistics. Since many social sciences are interested in discussion of inequality in society, this seems to put these disciplines at odds with each other. Some disciplines, the ones converging now into computational social science, will use these Laws and be correct, but uninteresting. Other disciplines will ignore these laws and be incorrect but more compelling to popular audiences.

This is a disturbing conclusion, one that I believe strikes deeply at the heart of the epistemic crisis affecting politics today. No wonder we have “post-truth” media and “fake news” when our social scientists can’t even bring themselves to accept the inconvenience of learning basic statistics. I’m not speaking out of abstract concern here. I’ve encountered this problem personally and quite dramatically myself through my early dissertation work. Trying to make this very point proved so anathema to the way social sciences have been constructed that I had to abandon the project for lack of comprehending faculty support. This is despite The Law, as Coates refers to it whimsically, being well known and “on the books” for a very, very long time.

It is perhaps disconcerting to social scientists that their fields of expertise may be characterized well by the same kind of laws, grounded in mathematics, that determine chemical interactions and the evolution of biological ecosystems. And indeed there is a strong discourse around downward causation in social systems that discusses the ways in which individuals in society may be different from individual random variables in a large sample. However, a clear understanding of statistical generative processes must be brought to bear on the understanding of social phenomena as a kind of null hypothesis. These statistical laws are due a high prior probability, in the Bayesian sense. I hope to discover one day how to formalize this intuitively clear conclusion in more authoritative, mathematical terms.

References

Benthall, S. “Testing Generative Models of Online Collaboration with BigBang.” Proceedings of the 14th Python in Science Conference, 2015, pp. 182–189. Available at https://conference.scipy.org/proceedings/scipy2015/sebastian_benthall.html.

Benthall, Sebastian. “Philosophy of computational social science.” Cosmos and History: The Journal of Natural and Social Philosophy 12.2 (2016): 13-30.

Coates, Robert M. 1956. “The Law.” In The World of Mathematics, Vol. 4, edited by James R. Newman, 2268-71. New York: Simon and Schuster.

Clauset, Aaron, Cosma Rohilla Shalizi, and Mark EJ Newman. “Power-law distributions in empirical data.” SIAM review 51.4 (2009): 661-703.

Epstein, Joshua M. Generative social science: Studies in agent-based computational modeling. Princeton University Press, 2006.

Miller, John H., and Scott E. Page. Complex adaptive systems: An introduction to computational models of social life. Princeton University Press, 2009.

Sawyer, R. Keith. “Simulating emergence and downward causation in small groups.” Multi-agent-based simulation. Springer Berlin Heidelberg, 2000. 49-67.


by Sebastian Benthall at July 10, 2017 05:07 PM

adjunct professor

July 06, 2017

Ph.D. student

Capital, democracy, and oligarchy

1. Capital

Bourdieu nicely lays out a taxonomy of forms of capital (1986), including economic capital (wealth) which we are all familiar with, as well as cultural capital (skills, elite tastes) and social capital (relationships with others, especially other elites). By saying that all three categories are forms of capital, what he means is that each “is accumulated labor (in its materialized form or its ‘incorporated,’ embodied form) which, when appropriated on a private, i.e., exclusive, basis by agents or groups of agents, enables them to appropriate social energy in the form of reified or living labor.” In his account, capital in all its forms are what give society its structure, including especially its economic structure.

[Capital] is what makes the games of society – not least, the economic game – something other than simple games of chance offering at every moment the possibility of a miracle. Roulette, which holds out the opportunity of winning a lot of money in a short space of time, and therefore of changing one’s social status quasi-instantaneously, and in which the winning of the previous spin of the wheel can be staked and lost at every new spin, gives a fairly accurate image of this imaginary universe of perfect competition or perfect equality of opportunity, a world without inertia, without accumulation, without heredity or acquired properties, in which every moment is perfectly independent of the previous one, every soldier has a marshal’s baton in his knapsack, and every prize can be attained, instantaneously, by everyone, so that at each moment anyone can become anything. Capital, which, in its objectified or embodied forms, takes time to accumulate and which, as a potential capacity to produce profits and to reproduce itself in identical or expanded form, contains a tendency to persist in its being, is a force inscribed in the objectivity of things so that everything is not equally possible or impossible. And the structure of the distribution of the different types and subtypes of capital at a given moment in time represents the immanent structure of the social world, i.e. , the set of constraints, inscribed in the very reality of that world, which govern its functioning in a durable way, determining the chances of success for practices.

Bourdieu is clear in his writing that he does not intend this to be taken as unsubstantiated theoretical posture. Rather, it is a theory he has developed through his empirical research. Obviously, it is also informed by many other significant Western theorists, including Kant and Marx. There is something slightly tautological about the way he defines his terms: if capital is posited to explain all social structure, then any social structure may be explained according to a distribution of capital. This leads Bourdieu to theorize about many forms of capital less obvious than wealth, such as the symbolic capital, like academic degrees.

The cost of such a theory is that it demands that one begin the difficult task of enumerating the different forms of capital and, importantly, the ways in which some forms of capital can be converted into others. It is a framework which, in principle, could be used to adequately explain social reality in a properly scientific way, as opposed to other frameworks that seem more intended to maintain the motivation of a political agenda or academic discipline. Indeed there is something “interdisciplinary” about the very proposal to address symbolic and economic power in a way that deals responsibly with their commensurability.

So it has to be posited simultaneously that economic capital is at the root of all the other types of capital and that these transformed, disguised forms of economic capital, never entirely reducible to that definition, produce their most specific effects only to the extent that they conceal (not least from their possessors) the fact that economic capital is at their root, in other words – but only in the last analysis – at the root of their effects. The real logic of the functioning of capital, the conversions from one type to another, and the law of conservation which governs them cannot be understood unless two opposing but equally partial views are superseded: on the one hand, economism, which, on the grounds that every type of capital is reducible in the last analysis to economic capital, ignores what makes the specific efficacy of the other types of capital, and on the other hand, semiologism (nowadays represented by structuralism, symbolic interactionism, or ethnomethodology), which reduces social exchanges to phenomena of communication and ignores the brutal fact of universal reducibility to economics.

[I must comment that after years in an academic environment where sincere intellectual effort seemed effectively boobytrapped by disciplinary trip wires around ethnomethodology, quantification, and so on, this Bourdieusian perspective continues to provide me fresh hope. I’ve written here before about Bourdieu’s Science of Science and Reflexivity (2004), which was a wake up call for me that led to my writing this paper. That has been my main entrypoint into Bourdieu’s thought until now. The essay I’m quoting from now was published at least fifteen years prior and by its 34k citations appears to be a classic. Much of what’s written here will no doubt come across as obvious to the sophisticated reader. It is a symptom of a perhaps haphazard education that leads me to write about it now as if I’ve discovered it; indeed, the personal discovery is genuine for me, and though it is not a particularly old work, reading it and thinking it over carefully does untangle some of the knots in my thinking as I try to understand society and my role in it. Perhaps some of that relief can be shared through writing here.]

Naturally, Bourdieu’s account of capital is more nuanced and harder to measure than an economist’s. But it does not preclude an analysis of economic capital such as Piketty‘s. Indeed, much of the economists’ discussion of human capital, especially technological skill, and its relationship to wages can be mapped to a discussion of a specific form of cultural capital and how it can be converted into economic capital. A helpful aspect of this shift is that it allows one to conceptualize the effects of class, gender, and racial privilege in the transmission of technical skills. Cultural capital is, explicitly in Bourdieu’s account, labor-intensive to transmit and often transmitted informally. Cultural tendencies to transmit this kind of capital preferentially to men instead of women in the family home become a viable explanation for the gender gap in the tech industry. While this is perhaps not a novel explanation, it is a significant one, and Bourdieu’s theory helps us formulate it in a specific and testable way that transcends, as he says, both economism and semiologism, which seems productive when one is discussing society in a serious way.

One could also use a Bourdieusian framework to understand innovation spillover effects, as economists like to discuss, or the rise of Silicon Valley’s “Regional Advantage” (Saxenian, 1996), to take a specific case. One of Saxenian’s arguments (as I gloss it) is that Silicon Valley was more economically effective as a region than Route 128 in Massachusetts because the influx of engineers experimenting with new business models and reinvesting their profits into other new technology industries created a confluence of relevant cultural capital (technical skill) and economic capital (venture capital) that allowed the economic capital to be deployed more effectively. In other words, it wasn’t that the engineers in Silicon Valley were better engineers than the engineers in Route 128; it was that Silicon Valley’s economic capital was being deployed in a way that was better informed by technical knowledge. [Incidentally, if this argument is correct, then in some ways it undermines an argument put forward recently for setting up a “cyber workforce incubator” for the Federal Government in the Bay Area based on the idea that it’s necessary to tap into the labor pool there. If what makes Silicon Valley is smart capital rather than smart engineers, then that explains why there are so many engineers there (they are following the money) but also suggests that the price of technical labor there may be inflated. Engineers elsewhere may be just as good at being part of a cyber workforce. Which is just to say that when Bourdieusian theory is taken seriously, it can have practical policy implications.]

One must imagine, when considering society thus, that one could in principle map out the whole of society and the distribution of capitals within it. I believe Bourdieu does something like this in Distinction (1979), which I haven’t read; in the United States it is, sadly, referred to as the kind of book that is too dense to read. This is too bad.

But I was going to talk about…

2. Democracy

There are at least two great moments in history when democracy flourished. They have something in common.

One is Ancient Greece. The account of the polis in Hannah Arendt’s The Human Condition (1; cf. 2, 3) makes the familiar point that the citizens of the Ancient Greek city-state were masters of economically independent households. It was precisely the independence of politics (polis – city) from household economic affairs (oikos – house) that defined political life. Owning capital, in this case land and maybe slaves, was a condition for democratic participation. The democracy, such as it was, was the political unity of otherwise free capital holders.

The other historical moment is the rise of the mercantile class and the emergence of the democratic public sphere, as detailed by Habermas. If the public sphere Habermas described (and to some extent idealized) has been critiqued as being “bourgeois masculinist” (Fraser), that critique is telling. The bourgeoisie were precisely those who were owners of newly activated forms of economic capital–ships, mechanizing technologies, and the like.

If we look at the public sphere in its original form realistically, through the disillusionment of criticism, rational discourse among capital holders was strategically necessary for the bourgeoisie to make collective decisions about how to allocate their economic capital. Viewed through the objective lens of information processing and pure strategy, the public sphere was an effective means of economic coordination that complemented the rise of the Weberian bureaucracy, which provided a predictable state and also created new demand for legal professionals and the early information workers: clerks and scriveners and such.

The diversity of professions necessary for the functioning of the modern mercantile state created a diversity of forms of cultural capital that could be exchanged for economic capital. Hence, capital diffused from its concentration in the aristocracy into the hands of the widening class of the bourgeoisie.

Neither the Ancient Greek nor the mercantile democracies were particularly inclusive. Perhaps there is no historical precedent for a fully inclusive democracy. Rather, there is precedent for egalitarian alliances of capital holders in cases where that capital is broadly enough distributed to constitute citizenship as an economic class. Moreover, I must insert here that the Bourdieusian model suggests that citizenship could extend through the diffusion of non-economic forms of capital as well. For example, membership in the clergy was a form of capital taken on by some of the gentry; this came, presumably, with symbolic and social capital. The public sphere created opportunities for the public socialite that were distinct from those of the courtier or courtesan. And so on.

However exclusive these democracies were, Fraser’s account of subaltern publics and counterpublics is of course very significant. What about the early workers’ and women’s movements? Arguably these too can be understood in Bourdieusian terms. There were other forms of (social and cultural, if not economic) capital that workers and women in particular had available that provided the basis for their shared political interest and political participation.

What I’m suggesting is that:

  • Historically, the democratic impulse has been about uniting the interests of freeholders of capital.
  • A Bourdieusian understanding of capital allows us to maintain this (analytically helpful) understanding of democracy while also acknowledging the complexity of social structure, through the many forms of capital.
  • The complexity of society, through the proliferation of forms of capital, is one of the main mechanisms (if not the main mechanism) of expanding effective citizenship, which is still conditioned on capital ownership even though we like to pretend it’s not.

Which leads me to my last point, which is about…

3. Oligarchy

If a democracy is a political unity of many different capital holders, what then is oligarchy in contrast?

Oligarchy is rule of the few, especially the rich few.

We know, through Bourdieu, that there are many ways to be rich (not just economic ways). Nevertheless, capital (in its many forms) is very unevenly distributed, which accounts for social structure.

To some extent, it is unrealistic to expect the flattening of this distribution. Society is accumulated history and there has been a lot of history and most of it has been brutally unkind.

However, there have been times when capital (in its many forms) has diffused because of the terms of capital exchange, broadly speaking. The functional separation of different professions was one way in which capital was fragmented into many differently exchangeable forms of cultural, social, and economic capitals. A more complex society is therefore a more democratic one, because of the diversity of forms of capital required to manage it. [I suspect there’s a technically specific way to make this point but don’t know how to do it yet.]

There are some consequences of this.

  1. Inequality in the sense of a very skewed distribution of capital and especially economic capital does in fact undermine democracy. You can’t really be a citizen unless you have enough capital to be able to act (use your labor) in ways that are not fully determined by economic survival. And of course this is not all or nothing; quantity of capital and relative capital do matter even beyond a minimum threshold.
  2. The second is that (1) can’t be the end of the story. Rather, to judge whether the capital distribution of, e.g., a nation can sustain a democracy, you need to account for many kinds of capital, not just economic capital, and see how these are distributed and exchanged. In other words, it’s necessary to look at the political economy broadly speaking. (But, I think, it’s helpful to do so in terms of ‘forms of capital’.)

One example, which I learned only recently, is this. In the United States, we have an independent judiciary, a third branch of government. This is different from other polities that are allegedly oligarchies, notably Russia, but also Rhode Island before 2004. One could ask: is this separation of powers important for democracy? The answer is intuitively “yes”, and though I’m sure very smart things have been written to answer the question “why”, I haven’t read them, because I’ve been too busy blogging….

Instead, I have an answer for you based on the preceding argument. It was a new idea for me. It was this: what separation of powers does is construct a form of cultural capital, associated with professional lawyers, which is less exchangeable for economic and other forms of capital than in places where the non-independence of the judiciary leads to more regular bribery, graft, and preferential treatment. Because it mediates economic exchanges, this has a massively distorting effect on the ability of economic capital to bulldoze other forms of capital, and the accompanying social structures (and social strictures) that bind it. It also creates a new professional class who can own this kind of capital and thereby attain citizenship.

Coda

In this blog post, I’ve suggested that not everybody who, for example, legally has suffrage in a nominally democratic state is, in an effective sense, a citizen. Only capital owners can be citizens.

This is not intended in any way to be a normative statement about who should or should not be a citizen. Rather, it is a descriptive statement about how power is distributed in nominal democracies. To be an effective citizen, you need to have some kind of surplus of social power; capital is the objectification of that social power.

The project of expanding democracy, if it is to be taken seriously, needs to be understood as the project of expanding capital ownership. This can include the redistribution of economic capital. It can also mean changing the institutions that ground cultural and social capitals so that other forms of capital are distributed more widely. Diversifying professional roles is one way of doing this.

Nothing I’ve written here is groundbreaking, for sure. It is for me a clearer way to think about these issues than I have had before.


by Sebastian Benthall at July 06, 2017 09:08 PM

July 05, 2017

Ph.D. alumna

Tech Culture Can Change

We need: Recognition, Repentance, Respect, and Reparation.

To be honest, what surprises me most about the current conversation about the inhospitable nature of tech for women is that people are surprised. To say that discrimination, harassment, and sexual innuendos are an open secret is an understatement. I don’t know a woman in tech who doesn’t have war stories. Yet, for whatever reason, we are now in a moment where people are paying attention. And for that, I am grateful.

Like many women in tech, I’ve developed strategies for coping. I’ve had to in order to stay in the field. I’ve tried to be “one of the guys,” pretending to blend into the background as sexist speech was jockeyed about in the hopes that I could just fit in. I’ve tried to be the kid sister, the freaky weirdo, the asexual geek, etc. I’ve even tried to use my sexuality to my advantage in the hopes that maybe I could recover some of the lost opportunity that I faced by being a woman. It took me years to realize that none of these strategies would make me feel like I belonged. Many even made me feel worse.

For years, I included Ani DiFranco lyrics in every snippet of code I wrote, as well as my signature. I’ve maintained a lyrics site since I was 18 because her words give me strength for coping with the onslaught of commentary and gross behavior. “Self-preservation is a full-time occupation.” I can’t tell you how often I’ve sat in a car during a conference or after a meeting singing along off-key at full volume with tears streaming down my face, just trying to keep my head together.

What’s at stake is not about a few bad actors. There’s also a range of behaviors getting lumped together, resulting in folks asking if inescapable sexual overtures are really that bad compared to assault. That’s an unproductive conversation because the fundamental problem is the normalization of atrocious behavior that makes room for a wide range of inappropriate actions. Fundamentally, the problem with systemic sexism is that it’s not the individual people who are the problem. It’s the culture. And navigating the culture is exhausting and disheartening. It’s the collection of particles of sand that quickly becomes a mountain that threatens to bury you.

It’s having to constantly stomach sexist comments with a smile, having to work twice as hard to be heard in a meeting, having to respond to people who ask if you’re on the panel because they needed a woman. It’s about going to conferences where deals are made in the sauna but being told that you have to go to the sauna with “the wives” (a pejoratively constructed use of the word). It’s about people assuming you’re sleeping with whoever said something nice about you. It’s being told “you’re kinda smart for a chick” when you volunteer to help a founder. It’s knowing that you’ll receive sexualized threats for commenting on certain topics as a blogger. It’s giving a talk at a conference and being objectified by the audience. It’s building whisper campaigns among women to indicate which guys to avoid. It’s using Dodgeball/Foursquare to know which parties not to attend based on who has checked in. It’s losing friends because you won’t work with a founder who you watched molest a woman at a party (and then watching Justin Timberlake portray that founder’s behavior as entertainment).

Lots of people in tech have said completely inappropriate things to women. I also recognize that many of those guys are trying to fit into the sexist norms of tech too, trying to replicate the culture that they see around them because they too are struggling for status. But that’s the problem. Once guys receive power and status within the sector, they don’t drop their inappropriate language. They don’t change their behavior or call out others on how insidious it is. They let the same dynamics fester as though it’s just part of the hazing ritual.

For women who succeed in tech, the barrage of sexism remains. It just changes shape as we get older.

On Friday night, after reading the NYTimes article on tech industry harassment, I was deeply sad. Not because the stories were shocking — frankly, those incidents are minor compared to some of what I’ve seen. I was upset because stories like this typically polarize and prompt efforts to focus on individuals rather than the culture. There’s an assumption that these are one-off incidents. They’re not.

I appreciate that Dave and Chris owned up to their role in contributing to a hostile culture. I know that it’s painful to hear that something you said or did hurt someone else when you didn’t intend that to be the case. I hope that they’re going through a tremendous amount of soul-searching and self-reflection. I appreciate Chris’ willingness to take to Medium to effectively say “I screwed up.” Ideally, they will both come out of this willing to make amends and right their wrongs.

Unfortunately, most people don’t actually respond productively when they’re called out. Shaming can often backfire.

One of the reasons that most people don’t speak up is that it’s far more common for guys who are called out on their misdeeds to respond the way that Marc Canter appeared to do, by justifying his behavior and demonizing the woman who accused him of sexualizing her. Given my own experiences with his sexist commentary, I decided to tweet out in solidarity by publicly sharing how he repeatedly asked me for a threesome with his wife early on in my career. At the time, I was young and I was genuinely scared of him; I spent a lot of time and emotional energy avoiding him, and struggled with how to navigate him at various conferences. I wasn’t the only one who faced his lewd comments, often framed as being sex-positive even when they were an abuse of power. My guess is that Marc has no idea how many women he’s made feel uncomfortable, ashamed, and scared. The question is whether or not he will admit that to himself, let alone to others.

I’m not interested in calling people out for sadistic pleasure. I want to see the change that most women in tech long for. At its core, the tech industry is idealistic and dreamy, imagining innovations that could change the world. Yet, when it comes to self-reflexivity, tech is just as regressive as many other male-dominated sectors. Still, I fully admit that I hold it to a higher standard in no small part because of the widespread commitment in tech to change the world for the better, however flawed that fantastical idealism is.

Given this, what I want from men in tech boils down to four Rs: Recognition. Repentance. Respect. Reparation.

Recognition. I want to see everyone — men and women — recognize how contributing to a culture of sexism takes us down an unhealthy path, not only making tech inhospitable for women but also undermining the quality of innovation and enabling the creation of tech that does societal harm. I want men in particular to reflect on how the small things that they do and say that they self-narrate as part of the game can do real and lasting harm, regardless of what they intended or what status level they have within the sector. I want those who witness the misdeeds of others to understand that they’re contributing to the problem.

Repentance. I want guys in tech — and especially those founders and funders who hold the keys to others’ opportunity — to take a moment and think about those that they’ve hurt in their path to success and actively, intentionally, and voluntarily apologize and ask for forgiveness. I want them to reach out to someone they said something inappropriate to, someone whose life they made difficult and say “I’m sorry.”

Respect. I want to see a culture of respect actively nurtured and encouraged alongside a culture of competition. Respect requires acknowledging others’ struggles, appreciating each others’ strengths and weaknesses, and helping each other through hard times. Many of the old-timers in tech are nervous that tech culture is being subsumed by financialization. Part of resisting this transformation is putting respect front and center. Long-term success requires thinking holistically about society, not just focusing on current capitalization.

Reparation. Every guy out there who wants to see tech thrive owes it to the field to actively seek out and mentor, support, fund, open doors for, and otherwise empower women and people of color. No excuses, no self-justifications, no sexualized bullshit. Just behavior change. Plain and simple. If our sector is about placing bets, let’s bet on a better world. And let’s solve for social equity.

I have a lot of respect for the women who are telling their stories, but we owe it to them to listen to the culture that they’re describing. Sadly, there are so many more stories that are not yet told. I realize that these stories are more powerful when people are named. My only hope is that those who are risking the backlash to name names will not suffer for doing so. Ideally, those who are named will not try to self-justify but acknowledge and accept that they’ve caused pain. I strongly believe that changing the norms is the only path forward. So while I want to see people held accountable, I especially want to see the industry work towards encouraging and supporting behavior change. At the end of the day, we will not solve the systemic culture of sexism by trying to weed out bad people, but we can work towards rendering bad behavior permanently unacceptable.

by zephoria at July 05, 2017 07:55 PM

June 26, 2017

Ph.D. student

Framing Future Drone Privacy Concerns through Amazon’s Concept Videos

This blog post is a version of a talk that I gave at the 2016 4S conference and describes work that has since been published in an article in The Journal of Human-Robot Interaction co-authored with Deirdre Mulligan entitled “These Aren’t the Autonomous Drones You’re Looking for: Investigating Privacy Concerns Through Concept Videos.” (2016). [Read online/Download PDF]

Today I’ll discuss an analysis of 2 of Amazon’s concept videos depicting their future autonomous drone service, how they frame privacy issues, and how these videos can be viewed in conversation with privacy laws and regulation.

As a privacy researcher with a human-computer interaction background, I’ve become increasingly interested in how processes of imagination about emerging technologies contribute to narratives about the privacy implications of those technologies. Today I’m discussing some thoughts emerging from a project looking at Amazon’s drone delivery service. In 2013, Amazon – the online retailer – announced Prime Air, a drone-based package delivery service. When they made their announcement, the actual product was not ready for public launch – and it’s still not available as of today. But what’s interesting is that at the time the announcement was made, Amazon also released a video that showed what the world might look like with this service of automated drones. And they released a second similar video in 2015. We call these videos concept videos.

These videos are one way that companies are strategically framing emerging technologies – what they will do, where, for whom, by what means; they’re beginning to associate values and narratives with these technologies. To surface values and narratives related to privacy present in these videos, we did a close reading of Amazon’s videos.

We’re generally interested in the time period after a technology is announced but before it is publicly released. During this period, most people only interact with these technologies through fictional representations of the future – in videos, advertisements, media, and so on. Looking at products during this period helps us understand the role that these videos play in framing technologies so that they become associated with certain values and narratives around privacy.

Background: Concept Videos & Design Fiction

Creating representations of future concepts and products has a long history, including concept cars and videos or dioramas of future technologies. Concept videos in particular, as we’re conceptualizing them, are short videos created by a company, showing a device or product that is not yet available for public purchase, though it might be in the short-term future. Concept videos depict what the world might be like in a few years if that device or product exists, and how people might interact with it or use it – we’ve written about this in some prior work looking at concept videos for augmented reality products.

When we are looking at the videos, we are primarily using the lens of design fiction, a concept from design researchers. Design fictions often show future scenarios, but more importantly, artifacts presented through design fiction exist within a narrative world, story, or fictional reality so that we can confront and think about artifacts in relation to a social and cultural environment. By creating fictional worlds and yet-to-be-realized design concepts, it tries to understand possible alternative futures. Design fictions also interact with broader social discourses outside the fiction. If we place corporate concept videos as design fictions, it suggests that the videos are open to interpretation and that such videos are best considered in dialogue with broader social discourses – for example those about privacy.

Yet we also have to recognize the corporate source of the videos.  The concept videos also share qualities with “vision videos,” corporate research videos that show a company’s research vision. Concept videos also contain elements of corporate advertising.

And they contain elements of video prototyping, which often shows short use scenarios of a technology, or simulates the use of a technology, although such prototypes are typically used internally within a company. In contrast, concept videos are public-facing artifacts.

Analyzing Amazon’s Concept Videos

Amazon released 2 concept videos – one in 2013, and a second at the end of 2015 – so we can track changes in the way they frame their service. We did a close reading of the Amazon drone videos to understand how they frame and address privacy concerns.

Below is Amazon’s first 2013 video, and let’s pay attention to how the physical drone looks, and how the video depicts its flying patterns.

So the drone has 8 rotors, is black and looks roughly like other commercially available hobbyist drones that might hold camera equipment. It then delivers the package flying from the Amazon warehouse to the recipient’s house where it’s able to land on its own.

Below is Amazon’s second 2015 video, so this time let’s pay attention again to how the physical drone looks and how the video depicts its flying patterns which we can compare against the first one.

This video’s presentation is a little more flashy – and narrated by Top Gear host Jeremy Clarkson. You might also have noticed that the word “privacy” is never used in either video. Yet several changes between the videos focusing on how the physical drone looks and its flying patterns can be read as efforts by Amazon to conceptualize and address privacy concerns.

The depiction of the drone’s physical design changes

First off, the physical design of the drone changed in shape and color. The first video shows a generic black 8-rotor drone, whereas the second video shows a more square-shaped drone with a design unique to Amazon and bright, bold Amazon branding. This addresses a potential privacy concern – that people may be uncomfortable if they see an unmarked drone near them, because they don’t know what it’s doing or who it belongs to. It might conjure questions such as “Is it the neighbor taking pictures?” or “Who is spying on me?” The unique design and color palette in the later video provide a form of transparency, clearly identifying who the drone belongs to and what its purpose is.

The 2015 video depicts a vertical takeoff as the drone’s first flight phase

The second part is about its flying patterns. The first video just shows the drone flying from the warehouse to the user’s house. The second video breaks this down into 3 distinct flying phases. First is a vertical, helicopter-like takeoff mode, with the narrator describing the drone flying straight up to 400 feet, suggesting that the drone will be high enough not to surveil or look closely at people, and that it won’t fly over people’s homes when taking off.

In the 2015 video, the drone enters a second horizontal flight phase

The second is a horizontal flight mode, which the narrator compares to an airplane. The airplane metaphor downplays surveillance concerns – most people aren’t concerned about people in an airplane watching them in their backyards or in public space. The “drone’s-eye-view” camera in this part of the video reinforces the airplane metaphor – it only shows a horizontal view from the drone like you would out of a plane, as if suggesting the drone only sees straight ahead while it flies horizontal, and isn’t capturing video or data about people directly below it.

The 2015 video depicts a third landing phase

The third is the vertical landing phase, in which the drone’s-eye-view camera switches to look directly down. But this video only shows the house and property of the package recipient within the camera frame – suggesting that the drone only visually scans the property of the package recipient, not adjacent property, and only uses its downward-facing camera in vertical mode. Together these parts of the video try to frame Amazon’s drones as using cameras in a way consistent with privacy expectations.

Policy Considerations

Beyond differences between the two videos’ framing, it’s interesting to consider the policy discourse occurring when these videos were released. In between the two videos, the US Federal Aviation Administration issued draft regulations about unmanned aerial vehicles, including stipulations that they could fly at a maximum of 400 feet. Through 2014 and 2015 a number of US state laws were passed addressing privacy, citing drones trespassing in the airspace over private property as a privacy harm. Other policy organizations have noted the need for transparency about who is operating a drone to enable individuals to protect themselves from potential privacy invasions.

We can also think of these videos as a kind of policy fiction. The technology shown in the videos exists, but the story they tell is not a legal reality. The main thing preventing this service is that the Federal Aviation Administration currently requires a human operator within the line of sight of these types of drones.

In this light, we can read the shift in Amazon’s framing of their delivery service as something more than just updates to their design – it’s also a response to particular types of privacy concerns raised in the ongoing policy discourse around drones. Perhaps Amazon is trying to create a sense of goodwill over privacy issues, so that the regulations can be changed in a way that allows the service to operate. This suggests that corporate framing through concept videos is not necessarily static, but can shift and evolve throughout the design process in conversation with multiple stakeholders as an ongoing negotiation. Amazon uses these videos to frame the technology for multiple audiences – potential customers, as well as the legislators and regulators whose concerns they acknowledge.

Concluding Thoughts

A few ideas have emerged from the project. First, we think that close readings of concept videos are a useful activity to surface the ways companies frame privacy values in relation to their products. It provides some insight into the strategies companies use to frame their products to multiple stakeholder groups (like consumers and regulators here) – and shows that this process of strategic framing is an ongoing negotiation.

Second, these videos present one particular vision of the future. But they may also present opportunities to keep the future more open by contesting the corporate vision or creating alternative futures. We as researchers can ask what the videos don’t show: technical details about how the drone works, what data it collects, or how it works in an urban setting. Stakeholders can also put forth alternate futures – such as parody concept videos (indeed there have been parody concept videos presenting alternate views of the future: people shooting down drones, stolen and dropped packages, Amazon making you buy package insurance or only offering delivery for expensive items, drones using their cameras to spy on people, drone delivery in a bathroom, and a reimagining of the service as Netflix DVD delivery).

Third, we think that there may be some potential in using concept videos as a more explicit type of communication tool between companies and regulators and are looking for ways we might explore that in the future.


by Richmond at June 26, 2017 04:10 PM

June 15, 2017

Ph.D. student

Using design fiction and science fiction to interrogate privacy in sensing technologies

This post is a version of a talk I gave at DIS 2017 based on my paper with Ellen Van Wyk and James Pierce, Real-Fictional Entanglements: Using Science Fiction and Design Fiction to Interrogate Sensing Technologies in which we used a science fiction novel as the starting point for creating a set of design fictions to explore issues around privacy.  Find out more on our project page, or download the paper: [PDF link ] [ACM link]

Many emerging and proposed sensing technologies raise questions about privacy and surveillance. For instance new wireless smarthome security cameras sound cool… until we’re using them to watch a little girl in her bedroom getting ready for school, which feels creepy, like in the tweet below.

Or consider the US Department of Homeland Security’s imagined future security system. Starting around 2007, they were trying to predict criminal behavior, pre-crime, like in Minority Report. They planned to use thermal sensing, computer vision, eye tracking, gait sensing, and other physiological signals. And supposedly it would “avoid all privacy issues.”  And it’s pretty clear that privacy was not adequately addressed in this project, as found in an investigation by EPIC.

Image from publicintelligence.net. Note the middle bullet point in the middle column – “avoids all privacy issues.”

A lot of these types of products or ideas are proposed or publicly released – but somehow it seems like privacy hasn’t been adequately thought through beforehand. However, parallel to this, we see works of science fiction which often imagine social changes and effects related to technological change – and do so in situational, contextual, rich world-building ways. This led us to our starting hunch for our work:

perhaps we can leverage science fiction, through design fiction, to help us think through the values at stake in new and emerging technologies.

Designing for provocation and reflection might allow us to do a similar type of work through design that science fiction often does.

So we created a set of visual design fictions, inspired by a set of fictional technologies from the 2013 science fiction novel The Circle by Dave Eggers to explore privacy issues in emerging sensing technologies. By doing this we tap into an author’s already existing, richly imagined world, rather than creating our own imagined world from scratch.

Design Fiction and our Design Process

Our work builds on past connections drawn among fiction, design, research, and public imagination, specifically, design fiction. Design fiction has been described as an authorial practice between science fiction and science fact and as diegetic prototypes. In other words, artifacts created through design fiction help imply or create a narrative world, or fictional reality, in which they exist. By creating yet-to-be-realized design concepts, design fiction tries to understand possible alternative futures. (Here we draw on work by Bleecker, Kirby, Linehan et al, and Lindley & Coulton).

In the design research and HCI communities, design fiction has been used in predominantly 1 of 2 ways. One focuses on creating design fiction artifacts in formats such as text, visuals, video, and other materials. A second way uses design fiction as an analytical lens to understand fictional worlds created by others – including films, practices, and advertisements – although perhaps most relevant to us are Tanenbaum et al’s analysis of the film Mad Max: Fury Road and Lindley et al’s analysis of the film Her as design fictions.

In our work, we combine these 2 ways of using design fiction: We think about the fictional technologies introduced by Eggers in his novel using the lens of design fiction, and we used those to create our own new design fictions.

Obviously there’s a long history of science fiction in film, television, and literature. There’s a lot that we could have used to inspire our designs, but The Circle was interesting to us for a few reasons.

Debates about literary quality aside, as a mass-market book, it presents an opportunity to look at a contemporary and popular depiction of sensing technologies. It reflects timely concerns about privacy and increasing data collection. The book and its fictional universe are accessible to a broad audience – it was a New York Times bestseller and a movie adaptation was released in May 2017. (While we knew that a film version was in production when we did our design work, we created our designs before the film was released).

The novel is set in a near future and focuses on a powerful tech company called The Circle, which keeps introducing new sensing products that supposedly provide greater user value, but to the reader, they seem increasingly invasive of privacy. The novel utilizes a dark humor to portray this, satirizing the rhetoric and culture of today’s technology companies.

It’s set in a near future that’s still very much recognizable – it starts to blur boundaries between fiction and reality in a way that we wanted to explore using design fiction. We used Gaver’s design workbook technique to generate a set of design fictions through several iterative rounds of design, several of which are discussed in this post.

We made a lot of designs – excerpts from our design workbook can be found on our project page

Our set of design fictions draws from 3 technologies from the novel, and 1 non-fictional technology that is being developed but seems like it could fit in the world of The Circle, again playing with this idea blurring fiction and reality. We’ll discuss 2 of them in this post, both of which originate from the Eggers novel (no major plot points are spoiled here!).

The first is SeeChange, which is the most prominent technology in the novel. It’s a wireless camera, about the size of a lollipop. It can record and stream live HD video online, and these live streams can be shared with anyone. Its battery lasts for years, it can be used indoors or outdoors, and it can be mounted discreetly or worn on the body. It’s introduced to monitor conditions at outdoor sporting locations, or to monitor spaces to prevent crimes. Later, it’s worn by characters who share their lives through a constant live personal video stream.

The second is ChildTrack, which is part of an ongoing project at the company. It’s a small chip implanted into the bone of a child’s body, allowing parents to monitor their child’s location at all times for safety. Later in the story it’s suggested that these chips can also store a child’s educational records, homework, reading, attendance, and test scores so that parents can access all their child’s information in “one place”.

Adapting The Circle

We’re going to look at some different designs that we created that are variations on SeeChange and ChildTrack. Some designs may seem more real or plausible, while others may seem more fictional. Sometimes the technologies might seem fictional, while other times the values and social norms expressed might seem fictional. These are all things that we were interested in exploring through our design process.

SeeChange Beach

For us, a natural starting point was that the novel doesn’t have any illustrations. So we started by going through the book’s descriptions of SeeChange and tried to interpret the first scene in which it appears. In this scene, a company executive demos SeeChange by showing an audience live images of several beaches from hidden cameras, ostensibly to monitor surfing conditions. Our collage of images felt surprisingly believable after we made it, and slightly creepy as it put us in the position of feeling like we were surveilling people at the beach.

ChildTrack App Interface

We did the same thing for ChildTrack, looking at how it was described in the book and then imagining what the interface might look like. We wanted to use the perspective of a parent using an app looking at their child’s data, portraying parental surveillance of one’s child as a type of care or moral responsibility.

The Circle in New Contexts

With our approach, we wanted to use The Circle as a starting point to think through a series of privacy and surveillance concerns. After our initial set of designs we began thinking about how the same set of technologies might be used in other situations within the world of the novel, but not depicted in Eggers’ story; and how that might lead to new types of privacy concerns. We did this by creating variations on our initial set of designs.

SeeChange being “sold” on Amazon.com

From other research, we know that privacy is experienced differently based on one’s subject position. We wanted to think about how much of SeeChange’s surveillance concerns stem from its technical capabilities versus who uses it or who gets recorded. We made 3 Amazon.com pages to market SeeChange as three different products, targeting different groups. We were inspired by debates about police-citizen interactions in the U.S. and imagined SeeChange as a live streaming police body camera. Like Eggers’ book, we satirize the rhetoric of technological determinism, writing that cameras provide “objective” evidence of wrongdoing – we obviously know that cameras aren’t objective. We also leave ambiguity about whether the police officer or the citizen is doing the wrongdoing. Thinking about using cameras for activist purposes – like how PETA uses undercover cameras, or how documentarians sometimes use hidden cameras – we frame SeeChange as a small, hidden, wearable camera for activist groups. Inspired by political debates in the U.S., we thought about how people who are suspicious of the Federal Government might want to monitor political opponents, so we also market SeeChange to this audience as a camera “For Independence, Freedom, and Survival.” Some of these framings seem more worrisome when thinking about who gets to use the camera, while others seem more worrisome when thinking about who gets recorded by the camera.

Ubiquitous SeeChange cameras from many angles. Image © Ellen Van Wyk, used with permission.

We also thought about longer term effects within the world of The Circle. What might it be like once these SeeChange cameras become ubiquitous, always recording and broadcasting? It could be nice to be able to re-watch crime scenes from multiple angles. But it might be creepy to use multiple angles to watch a person doing daily activities, which we depict here as a person sitting in a conference room using his computer. The bottom picture, looking between the blinds to simulate a small camera attached to the window, is particularly creepy to me – and suggests capabilities that go beyond today’s closed-circuit TV cameras.

New Fictions and New Realities

After our second round of designs, we began thinking about privacy concerns that were not particularly present in the novel or our existing designs. The following designs, while inspired by the novel, are imagined to exist in worlds beyond The Circle’s.

User interface of an advanced location-tracking system. Image © Ellen Van Wyk, used with permission.

The Circle never really discusses government surveillance, which we think is important to consider. All the surveillance in the book is done by private companies or by individuals. So, we created a scenario putting SeeChange in the hands of the police or government intelligence agencies, to track people and vehicles. Here, SeeChange might overcome some of the barriers that provide privacy protection for us today: police could easily use the search bar to find anybody’s location history without need for a warrant or any oversight – suggesting a new social or legal reality.

Truwork – “An integrated solution for your office or workplace!”

 Similarly, we wanted to think about issues of workplace surveillance. Here’s a scenario advertising a workplace implantable tracking device. Employers can subscribe to the service and make their employees implant these devices to keep track of their whereabouts and work activities to improve efficiency.

In a fascinating twist, a version of this actually occurred at a Swedish company about 6 months after we did our design work, where employees are inserting RFID chips into their hands to open doors, make payments, and so forth.

Childtrack for advertisers infographic. Image © Ellen Van Wyk, used with permission.

The Circle never discusses data sharing with 3rd parties like advertisers, so we imagined a service built on top of ChildTrack aimed at advertisers to leverage all the data collected about a child to target them with advertisements. This represents a legal fiction, as it would likely be illegal to do this in the US and EU under various child data protection laws and regulations.

This third round of designs interrogates the relationship between privacy and personal data from the viewpoints of different stakeholders.

Reflections

After creating these designs, we have a series of reflections that fall broadly into 4 areas, though I’ll mention 2 of them here.

Analyzing Privacy Utilizing External Frameworks

Given our interest in the privacy implications of these technologies, we looked to privacy research before starting our design work. Contemporary approaches to privacy view it as contextual, dependent on one’s subject position and on specific situations. Mulligan et al suggest that rather than trying to define privacy, it’s more productive to map how various aspects of privacy are represented in particular situations along 5 dimensions.

After each round of our design iterations, we analyzed the designs through this framework. This gave us a way to map how broadly we were exploring our problem space. For instance, in our first round of designs we stayed really close to the novel. The privacy harms that occurred with SeeChange were caused by other individual consumers using the cameras, and the harms that occurred with ChildTrack were parents violating their kids’ privacy. In the later designs we created, we went beyond the ways that Eggers discussed privacy harms by looking at harms stemming from 3rd party data sharing, or government surveillance.

This suggests that design fictions can be designed for and analyzed using frameworks for specific empirical topics (such as privacy) as a way to reflect on how we’re exploring a design space.

Blurring Real and Fictional

The second reflection we have is about how our design fictions blurred the real and fictional. After viewing the images, you might be slightly confused about what’s real and what’s fictional – and that is a boundary and a tension that we tried to explore through these designs. And after creating our designs we were surprised to find how some products we had imagined as fiction were close to being realized as “real” (such as the news about Swedish workers getting implanted chips – or Samsung’s new Gear 360 camera looking very much like our lollipop-inspired image of SeeChange). Rather than trying to draw boundaries between real and fictional, we find it useful to blur those boundaries, to recognize the real and fictional as inherently entangled and co-constructed. This lets us draw a myriad of connections that help us see these technologies and designs in a new light. SeeChange isn’t just a camera in Eggers’ novel; it’s related to established products like GoPro cameras, to experimental ideas like Google Glass, linked to cameras in other fiction like Orwell’s 1984, and linked to current sociopolitical debates like the role of cameras in policing and surveillance in public spaces. We can use fictional technical capabilities, fictional legal worlds, or fictional social worlds to explore and reflect on how privacy is situated both in the present and how it might be in the future.

Conclusions

In summary, we created a set of design fictions inspired by the novel The Circle that engaged in the blurring of real and fictional to explore and analyze privacy implications of emerging sensing technologies.

Perhaps more pragmatically, we find that tapping into an author’s existing fictional universe provides a concrete starting point to begin design fiction explorations, so that we do not have to create a fictional world from scratch.

Find out more on our project page, or download the paper: [PDF link ] [ACM link]


by Richmond at June 15, 2017 09:02 PM

MIMS 2012

How to Say No to Your CEO Without Saying No

Shortly after I rolled out Optimizely’s Discovery kanban process last year, one of its benefits became immediately obvious: using it as a tool to say No.

This is best illustrated with a story. One day, I was at my desk, minding my own business 🙃, when our CEO came up to me and asked, “Hey, is there a designer who could work on <insert special CEO pet project>?” In my head, I knew it wasn’t a priority. Telling him that directly, though, would have led to us arguing over why we thought the project was or was not important, without grounding the argument in the reality of whether it was higher priority than current work-in-progress. And since he’s the CEO, I would have lost that argument.

So instead of doing that, I took him to our Discovery kanban board and said, “Let’s review what each person is doing and see if there’s anything we should stop doing to work on your project.” I pointed to each card on the board and said why we were doing it: “We’re doing this to reach company goal X… that’s important for customer Y,” and so on.

"Optimizely's Discovery kanban board in action" Optimizely’s Discovery kanban board in action

When we got to the end of the board, he admitted, “Yeah, those are all the right things to be doing,” and walked away. I never heard about the project again. And just like that, I said No to our CEO without saying No.

by Jeff Zych at June 15, 2017 05:19 AM

June 04, 2017

MIMS 2014

Do Hangovers Make Us Drink More Coffee?

After finishing my last blog post, I grew curious about the relationship between coffee and another beverage I’ve noticed is quite popular amongst backpacker folk: alcohol. Are late-night ragers (and their accompanying brutal hangovers) associated with greater levels of coffee consumption? Or is the idea about as dumb as another Hangover sequel?

When you look at a simple scatter plot associating per capita alcohol and coffee consumption on a national level, you might think that yes, alcohol does fuel coffee consumption (based on the apparent positive correlation).

[Image: scatter plot of per capita coffee vs. alcohol consumption]

But does this apparent relationship hold up to closer scrutiny? In my last article, we discovered that variables like country wealth could explain away much of the observed variation in coffee consumption. Could the same thing be happening here as well? That is, do richer countries generally consume more coffee and alcohol just because they can afford to do so?

The answer seems to be: not as much as I would have thought. The thing to notice in the graphs above is how much less sensitive alcohol consumption is to income compared to coffee. In other words, it don’t matter how much money is in your wallet, you gonna get your alcohol on no matter what. But for coffee, things are different. This is evident from the shapes of the data clouds in the respective graphs. That bunch of data points in the top left of the alcohol graph? You don’t see a similar shape in the coffee chart. And that means that consumption of alcohol depends much less on income, relative to coffee.

It’s not too surprising that alcohol consumption is less sensitive to income changes than coffee, but it’s always cool to see intuition borne out in real-life data that you randomly pull off the internet. By looking at what economists call income elasticity of demand—the % change in consumption of a good divided by the % change in income—we can more thoroughly quantify what we’re seeing. Using a log-log model, standard linear regression can be used to get a rough estimate* of the income elasticity of demand. In these models, the beta coefficient on log(income) ends up being the elasticity estimate.
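To make that concrete, here is a minimal sketch of what such a log-log estimation might look like in Python with statsmodels, assuming the WHO, Conference Board, and Euromonitor figures have been merged into one country-level table. The file name and column names are hypothetical stand-ins, not the actual dataset's, and this is not the original analysis code.

```python
# Rough income-elasticity estimates via log-log OLS (illustrative sketch only).
# In log(q) = a + b*log(income) + e, the coefficient b equals
# d log(q) / d log(income) = (dq/q) / (d income / income),
# i.e. the income elasticity of demand described above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical merged country-level dataset; column names are illustrative.
df = pd.read_csv("coffee_alcohol_income.csv")

coffee_fit = smf.ols("np.log(coffee_per_capita) ~ np.log(income_per_capita)", data=df).fit()
alcohol_fit = smf.ols("np.log(alcohol_per_capita) ~ np.log(income_per_capita)", data=df).fit()

print("coffee income elasticity: ", coffee_fit.params["np.log(income_per_capita)"])
print("alcohol income elasticity:", alcohol_fit.params["np.log(income_per_capita)"])
```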

When the elasticity of a good is less than 1, it is considered an inelastic good, i.e. a good that is not very sensitive to changes in income. Inelastic goods are also sometimes referred to as necessity goods. By contrast, goods with elasticity greater than 1 are considered elastic goods, or luxury goods. Sure enough, when you fit a log-log model to the data, the estimated elasticity for coffee is greater than 1 (1.08), while the estimated elasticity for alcohol is less than 1 (0.54). Hmm, so in the end, alcohol is more of a ‘necessity’ than coffee. Perhaps this settles any debate over which beverage is more addictive.

When it comes to drinking however (either coffee or alcohol), one cannot ignore the role that culture plays in driving the consumption of both beverages. Perhaps cultures that drink a lot of one drink a lot of the other, too. Or perhaps a culture has a taboo against alcohol, like we find in predominantly Muslim countries. To control for this, I included region-of-the-world controls in my final model relating alcohol and coffee consumption.

Unfortunately for my initial hypothesis, once you account for culture, any statistically significant relationship between alcohol and coffee consumption vanishes. To be sure that controlling for culture in my model was the right call, I performed a nested model analysis—a statistical method that basically helps make sure you’re not over-complicating things. The nested model analysis concluded that yes, culture does add value to the overall model, so I can’t just ignore it.
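For the curious, here is a rough sketch of how that nested-model check might look, continuing the hypothetical statsmodels setup from the sketch above; the region column name is again an assumption rather than the actual variable.

```python
# Nested-model (partial F) test: do region-of-the-world dummies belong in the
# model relating coffee and alcohol consumption? (Illustrative sketch only.)
from statsmodels.stats.anova import anova_lm

reduced = smf.ols(
    "np.log(coffee_per_capita) ~ np.log(alcohol_per_capita) + np.log(income_per_capita)",
    data=df).fit()
full = smf.ols(
    "np.log(coffee_per_capita) ~ np.log(alcohol_per_capita) + np.log(income_per_capita)"
    " + C(region)",
    data=df).fit()

# A small p-value on the comparison means the region dummies add real
# explanatory power and should stay in the model.
print(anova_lm(reduced, full))

# With culture controlled for, check whether the alcohol term is still significant.
print(full.pvalues["np.log(alcohol_per_capita)"])
```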

Echoing my last article, this is not the final word on the subject, as again, more granular data (at the individual level) could show a significant link between the two. Instead what this analysis says is that if a relationship does exist, it is not potent enough to show up in data at the national level. Oh well, it was worth a shot. Whether or not alcohol and coffee consumption are legit linked to one another, one fact remains indisputably true: hangovers suck.


Data Links:

  • Alcohol – World Health Organization – link
  • Economic Data – Conference Board – link
  • Coffee – Euromonitor (via The Atlantic) – link

* “rough” because ideally, you would look at changes in income in a single country to estimate elasticity rather than look at differences in income across different countries.

 

 


by dgreis at June 04, 2017 07:32 PM

May 22, 2017

Ph.D. student

hard realism about social structure

Sawyer’s (2000) investigations into the theory of downward causation of social structure are quite subtle. He points out several positions in the sociological debate about social structure:

  • Holists, who believe social structures have real, independent causal powers, sometimes through internalization by individuals.
  • Subjectivists, who believe that social structures are epiphenomenal, reducible to individuals.
  • Interactionists, who see patterns of interaction as primary, not the agents or the structures that may produce the interactions.
  • Hybrid theorists, who see an interplay between social structure and independent individual agency.

I’m most interested at the moment in the holist, subjectivist, and hybrid positions. This is not because I don’t see interaction as essential–I do. But I think that recognizing that interactions are the medium if not material of social life does not solve the question of why social interactions seem to be structured the way they do. Or, more positively, the interactionist contributes to the discussion by opening up process theory and generative epistemology (cf. Cederman, 2005) as a way of getting at an answer to the question. It is up to us to take it from there.

The subjectivists, in positing only the observable individuals and their actions, have Occam’s Razor on their side. To posit the unobservable entities of social forms is to “multiply entities unnecessarily”. This perhaps accounts for the durability of the subjectivist thesis. The scientific burden of proof is, in a significant sense, on the holist or hybrid theorist to show why the positing of social forms and structures offers in explanatory power what it lacks in parsimony.

Another reason for the subjectivist position is that it does ideological work. Margaret Thatcher famously once said, "There is no such thing as society", as a condemnation of the socialist government that she would dismantle in favor of free markets. Margaret Thatcher was highly influenced by Friedrich Hayek, who argued that free markets lead to more intelligent outcomes than planned economies because they are better at using local and distributed information in society. Whatever you think of the political consequences of his work, Hayek was an early theorist of society as a system of multiple agents with "bounded rationality". A similar model to Hayek's is developed and tested by Epstein and Axtell (1996).

On the other hand, our natural use of language, social expectations, and legal systems all weigh in favor of social forms, institutions, and other structures. These are, naturally, all "socially constructed" but these social constructs undeniably reproduce themselves; otherwise, they would not continue to exist. This process of self-reproduction is named autopoiesis (from 'auto-' (self-) and '-poiesis' (-creation)) by Maturana and Varela (1991). The concept has been taken up by Luhmann (1995) in social theory and Brier (2008) in Library and Information Sciences (LIS). As these later theorists argue, the phenomenon of language itself can be explained only as an autopoietic social system.

There is a gap between the positions of autopoiesis theorists and the sociological holists discussed by Sawyer. Autopoiesis is, in Varela’s formulation, a general phenomenon about the organization of matter. It is, in his view, the principle of organization of life on the cellular level.

Contrast this with the ‘holist’ social theorist who sees social structures as being reproduced by the “internalization” of the structure by the constituent agents. Social structures, in this view, depend at least in part on their being understood or “known” by the agents participating in them. This implies that the agents have certain cognitive powers that, e.g., strands of organic chemicals do not. [Sawyer refers to Castelfranchi, 1998 on this point; I have yet to read it.] Arguably, social norms are only norms because they are understood by agents involved. This is the position of Habermas (1985) for example, whose whole ethical theory depends on the rational acceptance of norms in free discussion. (This is the legacy of Immanuel Kant.)

What I am arguing for is that there is, in actuality, another position, not identified by Sawyer (2000), on the emergence of social structure that does not depend on internalization but that nevertheless has causal reality. Social forms may arise from individual activity in the same way that biological organization arises from unconscious chemical interactions. I suppose this is a form of holism.

I'd like to call this view the "hard realist" view of social structure, to contrast with "soft realist" views of social structure that depend on internalization by agents. I don't mean for this to be taken aggressively; rather, I have a very concrete distinction in mind. If social structure depends on internalization by agents, then that means (by definition, really) that there exists an intervention on the beliefs of agents that could dissolve the social structure and transform it into something else. For example, a left-wing anarchist might argue that money only has value because we all believe it has value. If we were to just all stop valuing money, we could have a free and equal society at last.

If social structures exist even in spite of the recognition of them by social actors, then the story is quite different. This means (by definition) that interventions on the beliefs of actors will not dissolve the structure. In other words, just because something is a social construct does not mean that it can be socially deconstructed by a process of reversal. Some social structures may truly have a life of their own. (I would expect this to be truer the more we delegate social moderation to technology.)

This story is complicated by the fact that social actors vary in their cognitive capacities and this heterogeneity can materially impact social outcomes. Axtell and Epstein (2006) have a model of the formation of retirement age norms in which a small minority of actors make their decision rationally based on expected outcomes and the rest adopt the behavior of the majority of their neighbors. This results in dynamic adjustments to behavior that, under certain parameters, make society as a whole look more individually rational than its members are in fact. This is encouraging to those of us who sometimes feel our attempts to rationally understand the world are insignificant in the face of broader social inertia.
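
To give a flavor of this kind of model, here is a toy sketch in Python of an imitation dynamic in the spirit of Axtell and Epstein's setup, not a reproduction of their actual model: a small rational minority adopts an 'optimal' retirement age while everyone else copies the local majority on a ring network. All the parameter values are made up for illustration.

```python
import random

N = 1000                  # number of agents on a ring network
RATIONAL_FRACTION = 0.05  # small minority who decide rationally
K = 5                     # neighbors on each side
ROUNDS = 50
OPTIMAL, LEGACY = 65, 62  # made-up retirement ages

random.seed(0)
rational = [random.random() < RATIONAL_FRACTION for _ in range(N)]
age = [LEGACY] * N        # everyone starts on the old norm

def neighbors(i):
    return [(i + d) % N for d in range(-K, K + 1) if d != 0]

for t in range(ROUNDS):
    new_age = list(age)
    for i in range(N):
        if rational[i]:
            # Rational agents choose the individually optimal age.
            new_age[i] = OPTIMAL
        else:
            # Imitators copy the most common choice among their neighbors.
            votes = [age[j] for j in neighbors(i)]
            new_age[i] = max(set(votes), key=votes.count)
    age = new_age
    share = sum(a == OPTIMAL for a in age) / N
    print(f"round {t}: share retiring at {OPTIMAL} = {share:.2f}")
```

Whether the optimal age spreads through the whole population or stalls with the rational minority depends on parameters like the neighborhood size and the size of the rational minority, which is exactly the "under certain parameters" caveat above.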

But it also makes it difficult to judge empirically whether a "soft realist" or "hard realist" view of social structure is more accurate. It also makes the empirical distinction between the holist and subjectivist positions difficult, for that matter. Surveying individuals about their perceptions of their social world will tell you nothing about hard realist social structures. If there are heterogeneous views about what the social order actually is, that may or may not impact the actual social structure that's there. Real social structure may indeed create systematic blindnesses in the agents that compose it.

Therefore, the only way to test for hard realist social structure is to look at aggregate social behavior (perhaps on the interactionist level of analysis) and identify where its regularities can be attributed to generative mechanisms. Multi-agent systems and complex adaptive systems look like the primary tools in the toolkit for modeling these kinds of dynamics. So far I haven't seen an adequate discussion of how these theories can be empirically confirmed using real data.

References

Axtell, Robert L., and Joshua M. Epstein. "Coordination in transient social networks: An agent-based computational model of the timing of retirement." Generative social science: Studies in agent-based computational modeling (2006): 146.

Brier, Søren. Cybersemiotics: Why information is not enough!. University of Toronto Press, 2008.

Castelfranchi, Cristiano. “Simulating with cognitive agents: The importance of cognitive emergence.” International Workshop on Multi-Agent Systems and Agent-Based Simulation. Springer Berlin Heidelberg, 1998.

Cederman, Lars-Erik. “Computational models of social forms: Advancing generative process theory 1.” American Journal of Sociology 110.4 (2005): 864-893.

Epstein, Joshua M., and Robert Axtell. Growing artificial societies: social science from the bottom up. Brookings Institution Press, 1996.

Habermas, Jürgen. The theory of communicative action. Vol. 2. Trans. Thomas McCarthy. Beacon Press, 1985.

Hayek, Friedrich August. “The use of knowledge in society.” The American economic review (1945): 519-530.

Luhmann, Niklas. Social systems. Stanford University Press, 1995.

Maturana, Humberto R., and Francisco J. Varela. Autopoiesis and cognition: The realization of the living. Vol. 42. Springer Science & Business Media, 1991.

Sawyer, R. Keith. “Simulating emergence and downward causation in small groups.” Multi-agent-based simulation. Springer Berlin Heidelberg, 2000. 49-67.


by Sebastian Benthall at May 22, 2017 03:16 PM

May 19, 2017

MIMS 2016

Why is it asking for gender and age? Not sure how that relates to recipes, but I also don’t cook…


I do think that if gender is absolutely necessary, it should give gender neutral options as well.

by Andrew Huang at May 19, 2017 01:30 AM

May 18, 2017

Ph.D. student

WannaCry as an example of the insecurity of legacy systems

CLTC’s Steve Weber and Betsy Cooper have written an Op-Ed about the recent WannaCry epidemic. The purpose of the article is clear: to argue that a possible future scenario CLTC developed in 2015, in which digital technologies become generally distrusted rather than trusted, is relevant and prescient. They then go on to elaborate on this scenario.

The problem with the Op-Ed is that the connection between WannaCry and their scenario is spurious. Here's how they make the connection:

The latest widespread ransomware attack, which has locked up computers in nearly 150 countries, has rightfully captured the world’s attention. But the focus shouldn’t be on the scale of the attack and the immediate harm it is causing, or even on the source of the software code that enabled it (a previous attack against the National Security Agency). What’s most important is that British doctors have reverted to pen and paper in the wake of the attacks. They’ve given up on insecure digital technologies in favor of secure but inconvenient analog ones.

This “back to analog” moment isn’t just a knee-jerk, stopgap reaction to a short-term problem. It’s a rational response to our increasingly insecure internet, and we are going to see more of it ahead.

If you look at the article that they link to from The Register, which is the only empirical evidence they use to make their case, it does indeed reference the use of pen and paper by doctors.

Doctors have been reduced to using pen and paper, and closing A&E to non-critical patients, amid the tech blackout. Ambulances have been redirected to other hospitals, and operations canceled.

There is a disconnect between what the article says and what Weber and Cooper are telling us. The article is quite clear that doctors are using pen and paper amid the tech blackout. Which is to say, because their computers are currently being locked up by ransomware, doctors are using pen and paper.

Does that mean that "They've given up on insecure digital technologies in favor of secure but inconvenient analog ones."? No. It means that since they are waiting to be able to use their computers again, they have no other recourse but to use pen and paper. Does the evidence warrant the claim that "This "back to analog" moment isn't just a knee-jerk, stopgap reaction to a short-term problem. It's a rational response to our increasingly insecure internet, and we are going to see more of it ahead."? No, not at all.

In their eagerness to show the relevance of their scenario, Weber and Cooper rush to say where the focus should be (on CLTC's future scenario planning) and ignore the specifics of WannaCry, most of which do not help their case. For example, there's the issue that the vulnerability exploited by WannaCry had been publicly known for two months before the attack, and that Microsoft had already published a patch for the problem. The systems that were still vulnerable either did not apply the software update or were using an unsupported older version of Windows.

This paints a totally different picture of the problem than Weber and Cooper provide. It's not that "new" internet infrastructure is insecure and "old" technologies are proven. Much of computing and the internet is already "old". But there's a life cycle to technology. "New" systems are more resilient (able to adapt to an attack or discovered vulnerability) and are smaller targets. Older legacy systems with a large installed base, like Windows 7, become more globally vulnerable if their weaknesses are discovered and not addressed. And if they are in widespread use, that presents a bigger target.

This isn’t just a problem for Windows. In this research paper, we show how similar principles are at work in the Python ecosystem. The riskiest projects are precisely those that are old, assumed to be secure, but no longer being actively maintained while the technical environment changes around them. The evidence of the WannaCry case further supports this view.


by Sebastian Benthall at May 18, 2017 02:20 PM

May 17, 2017

Ph.D. student

Sawyer on downward causation in social systems

The work of R. Keith Sawyer (2000) is another example of computational social science literature that I wish I had encountered ten years ago. Sawyer’s work from the early ’00’s is about the connections between sociological theory and multi-agent simulations (MAS).

Sawyer uses an example of an improvisational theater skit to demonstrate how emergence and downward causation work in a small group setting. Two actors in the skit exchange maybe ten lines, each building on the expectations set by the prior actions. The first line establishes the scene is a store, and one of the actors is the owner. The second actor approaches; the first greets her as if she is a customer. She acts in a childlike way and speaks haltingly, establishing that she needs assistance.

What changes in each step of the dialogue is the shared “frame” (in Sawyer’s usage) which defines the relationships and setting of the activity. Perhaps because it is improvisational theater, the frame is carefully shared between the actors. The “Yes, And…” rule applies and nobody is contradicted. This creates the illusion of a social reality, shared by the audience.

Reading this resonated with other reading and thinking I’ve done on ideology. I think about situations where I’ve been among people with a shared vision of the world, or where that vision of the world has been contested. Much of what is studied as framing in media studies is about codifying the relations between actors and the interpretation of actions.

Surely, for some groups to survive, they must maintain a shared frame among their members. This both provides a guide for collective action and also a motivation for cohesion. An example is an activist group at a protest. If one doesn’t share some kind of frame about the relationships between certain actors and the strategies being used, it doesn’t make sense to be part of that protest. The same is true for some (but maybe not all) academic disciplines. A shared social subtext, the frame, binds together members of the discipline and gives activity within it meaning. It also motivates the formation of boundaries.

I suppose the reification of Weird Twitter was an example of a viral framing. Or should I say enframing?! (Heidegger joke).

Getting back to Sawyer, his focus is on a particularly thorny aspect of social theory, the status of social structures and their causal efficacy. How do macro- social forms emerge from individual actors (or actions), and how do those macro- forms have micro- influence over individuals (if they do at all)? Broadly speaking in terms of theoretical poles, there are historically holists, like Durkheim and Parsons, who maintain that social structures are real and have causal power through, in one prominent variation, the internalization of the structure by individuals; subjectivists, like Max Weber, who see social structure as epiphenomenal and reduce it to individual subjective states; and interactionists, who focus on the symbolic interactions between agents and the patterns of activity. There are also hybrid theories that combine two or more of these views, most notably Giddens, who combines holist and subjectivist positions in his theory of structuration.

After explaining all this very clearly and succinctly, he goes on to talk about which paradigms of agent based modeling correspond to which classes of sociological theory.

References

Sawyer, R. Keith. “Simulating emergence and downward causation in small groups.” Multi-agent-based simulation. Springer Berlin Heidelberg, 2000. 49-67.


by Sebastian Benthall at May 17, 2017 07:06 PM

May 16, 2017

Ph.D. student

Similarities between the cognitive science/AI and complex systems/MAS fields

One of the things that made the research traditions of cognitive science and artificial intelligence so great was the duality between them.

Cognitive science tried to understand the mind at the same time that artificial intelligence tried to discover methods for reproducing the functions of cognition artificially. Artificial intelligence techniques became hypotheses for how the mind worked, and empirically confirmed theories of how the mind worked inspired artificial intelligence techniques.

There was a lot of criticism of these fields at one point. Writers like Hubert Dreyfus, Lucy Suchman, and Winograd and Flores critiqued especially heavily one paradigm that’s now called “Good Old Fashioned AI”–the kind of AI that used static, explicit representations of the world instead of machine learning.

That was a really long time ago and now machine learning and cognitive psychology (including cognitive neuroscience) are in happy conversation, with much more successful models of learning that by and large have absorbed the critiques of earlier times.

Some people think that these old critiques still apply to modern methods in AI. Isn't AI still AI? I believe the main confusion is that lots of people don't know that "computable" means something very precisely mathematical: it means a function that can be computed by a Turing machine or, equivalently, expressed as a partial recursive function. It just so happens that computers, the devices we know and love, can compute any computable function.

So what changed in AI was not that they were using computation to solve problems, but the way they used computation. Similarly, while there was a period where cognitive psychology tried to model mental processes using a particular kind of computable representation, and these models are now known to be inaccurate, that doesn’t mean that the mind doesn’t perform other forms of computation.

A similar kind of relationship is going on between the study of complex systems, especially complex social systems, and the techniques of multi-agent system modeling. Multi-agent system modeling is, as Epstein clarifies, about generative modeling of social processes that is computable in the mathematical sense, but the fact that physical computers are involved is incidental. Multi-agent systems are supposed to be a more realistic way of modeling agent interactions than, say, neoclassical game theory, in the same way that machine learning is a more realistic way of modeling cognition than GOFAI.

Given that, despite (or, more charitably, because of) the critiques leveled against them, cognitive science and artificial intelligence have developed into widely successful and highly respected fields, we should expect complex systems/multi-agent systems research to follow a similar trajectory.


by Sebastian Benthall at May 16, 2017 09:03 PM

May 13, 2017

Ph.D. student

Varian taught Miller

“The emerging tapestry of complex systems research is being formed by localized individual efforts that are becoming subsumed as part of a greater pattern that holds a beauty and coherence that belies the lack of an omniscient designer.” – John H. Miller and Scott Page, Complex Adaptive Systems: An Introduction to Computational Models of Social Life

I’ve been giving myself an exhilarating crash course in the complex systems literature. Through reading several books and articles on the matter, one gets a sense of the different authors, their biases and emphasis. Cederman works carefully to ground his work in a deeper sociological tradition. Epstein is no-nonsense about the connection between mathematicity and computation and social scientific method. Holland is clear that social systems are, in his view, a special case of a more generalized object of scientific study, complex adaptive systems.

Perhaps the greatest challenge to any system, let alone social system, is self-reference. The capacity of social science as a system (or systems) to examine themselves is the subject of much academic debate and public concern. Miller and Page, in their Complex Adaptive Systems: An Introduction to Computational Models of Social Life, begin with their own comment on the emergence of complex systems research using a symbolic vocabulary drawn from their own field. They are conscious of their work as a self-reflective thesis that forms the basis of a broader and systematic education in their field of research.

As somebody who has attempted social scientific investigation of scientific fields (in my case, open source scientific software communities, along with some quasi-ethnographic work), my main emotions when reacting to this literature are an excitement about its awesome potential and a frustration that I did not start studying it sooner. I have been intellectually hungry for this material while studying at Berkeley, but it wasn't in the zeitgeist of the places I was a part of to take this kind of work as the basis for study.

I think it’s fair to say that most of the professors there have heard of this line of work but are not experts in it. It is a relatively new field and UC Berkeley is a rather conservative institution. To some extent this explains this intellectual gap.

So then I discovered in the acknowledgements section of Miller and Page that Hal Varian taught John H. Miller when both were at University of Michigan. Hal Varian would then go on to be the first dean of my own department, the School of Information, before joining Google as their “chief economist” in 2002.

Google in 2002. I believe he helped design the advertising auction system, which was the basis of their extraordinary business model.

I’ve had the opportunity to study a little of Varian’s work. It’s really good. Microeconomic theory pertinent to the information economy. It included theory relevant to information security, as Ross Anderson’s recent piece in Edge discusses. This was highly useful stuff that is at the foundation of the modern information economy, at the very least to the extent that Google is at the foundation of the modern information economy, which it absolutely is.

This leaves me with a few burning questions. The first is why isn’t Varian’s work taught to everybody in the School of Information like it’s the f—ing gospel? Here we have a person who founded the department and by all evidence discovered and articulated knowledge of great importance to any information enterprise or professional. So why is it not part of the core curriculum of a professional school aimed at preparing people for Silicon Valley management jobs?

The second question is why isn't work descending from Varian's held in higher esteem at Berkeley? Why is it that neoclassical economic modeling, however useful, is seen as passé, and complex systems work almost unheard of? It does not, it seems to me, reflect the lack of prestige awarded the field nationally. I'm seeing Carnegie Mellon, University of Michigan, the Brookings Institution, Johns Hopkins, and Princeton all represented among the scholars studying complex systems. Berkeley is precisely the sort of place you would expect this work to flourish. But I know of only one professor there who teaches it with seriousness, a relatively new hire in the Geography department (who I in no way intend to diminish by writing this post; on the contrary).

One explanation is, to put it bluntly, brain drain. Hal Varian left Berkeley for Google in 2002. That must have been a great move for him. Perhaps he assumed his legacy would be passed on through the education system he helped to found, but that is not exactly what happened. Rather, it seems he left a vacuum for others to fill. Those left to fill it were those with less capacity to join the leadership of the booming technology industry: qualitative researchers. Latourians. The eager ranks of the social studier. (Note the awkwardness of the rendering of 'Studies' as a discipline to its practitioner, a studier.) Engineering professors stayed on, and so the university churns out capable engineers who go on to lucrative careers. But something, some part of the rigorous strategic vision, was lost.

That’s a fable, of course. But one has to engage in some kind of sense-making to get through life. I wonder what somebody with a closer relationship to the administration of these institutions would say to any of this. For now, I have my story and know what it is I’m studying.


by Sebastian Benthall at May 13, 2017 02:03 PM

May 12, 2017

Ph.D. student

Hurray! Epstein’s ‘generative’ social science is ‘recursive’ or ‘effectively computable’ social science!

I’m finding recent reading on agent-based modeling profoundly refreshing. I’ve been discovering a number of writers with a level of sanity about social science and computation that I have been trying to find for years.

I’ve dipped into Joshua Epstein’s Generative Social Science: Studies in Agent-Based Computational Modeling (2007), which the author styles as a sequel to the excellent Growing Artificial Societies: Social Science from the Bottom Up (1996). Epstein explains that while the first book was a kind of “call to arms” for generative social science, the later book is a firmer and more mature theoretical argument, in the form of a compilation of research offering generative explanations for a wide variety of phenomena, including such highly pertinent ones as the emergence of social classes and norms.

What is so refreshing about reading this book is, I’ll say it again, the sanity of it.

First, it compares generative social science to other mathematical social sciences that use game theory. It notes that, though there are exceptions, the problem with these fields is their tendency to see explanation in terms of Nash equilibria of unboundedly rational agents. There are lots of interesting social phenomena that are not in such an equilibrium–the phenomenon might itself be a dynamic one–and no social phenomenon worth mentioning has unboundedly rational agents.

This is a correct critique of naive mathematical economic modeling. But Epstein does not throw the baby out with the bathwater. He’s advocating for agent-based modeling through computer simulations.

This leads him to respond preemptively to objections. One of these responses is "The Computer is not the point". Yes, computers are powerful tools and simulations in particular are powerful instruments. But it's not important to the content of the social science that the simulations are being run on computers. That's incidental. What's important is that the simulations are fundamentally translatable into mathematical equations. This follows from basic theory of computation: every computer program, when run, computes some mathematical function. Hence, "generative social science" might as well be called "recursive social science" or "effectively computable social science", he says; he took the term "generative" from Chomsky (i.e. "generative grammar").

Compare this with Cederman’s account of ‘generative process theory‘ in sociology. For Cederman, generative process theory is older than the theory of computation. He locates its origin in Simmel, a contemporary of Max Weber. The gist of it is that you try to explain social phenomena by explaining the process that generates it. This is a triumphant position to take because it doesn’t have all the problems of positivism (theoretical blinders) or phenomenology (relativism).

So there is a sense in which the only thing Epstein is adding on top of this is the claim that proposed generative processes be computable. This is methodologically very open-ended, since computability is a very general mathematical property. Naturally the availability of computers for simulation makes this methodological requirement attractive, just as 'analytic tractability' was so important for neoclassical economic theory. But on top of its methodological attractiveness, there is also an ontological attractiveness to the theory. If one accepts what Charles Bennett calls the "physical Church's thesis"–the idea that the Church-Turing thesis applies not just to formal systems of computation but to all physical systems–then the foundational assumption of Epstein's generative social science holds not just as a methodological assumption but as an ontological claim about social reality itself.

This was all written in 2007, two years before Lazer et al.’s “Life in the network: the coming age of computational social science“. “Computational social science”, in their view, is about the availability of data, the Internet, and the ability to look at society with a new rigor known to the hard sciences. Naturally, this is an important phenomenon. But somehow in the hype this version of computational social science became about the computers, while the underlying scientific ambition to develop a generative theory of society was lost. Computability was an essential feature of the method, but the discovery (or conjecture) that society itself is computation was lost.

But it need not be. Even from just a short dip into it, Epstein's Generative Social Science is a fine, accessible book. All we need to do is get everybody to read it so we can all get on the same page.

References

Cederman, Lars-Erik. “Computational models of social forms: Advancing generative process theory 1.” American Journal of Sociology 110.4 (2005): 864-893.

Epstein, Joshua M., and Robert Axtell. Growing artificial societies: Social science from the bottom up. Brookings Institution Press, 1996.

Epstein, Joshua M. Generative social science: Studies in agent-based computational modeling. Princeton University Press, 2006.

Lazer, David, et al. "Life in the network: the coming age of computational social science." Science 323.5915 (2009): 721.


by Sebastian Benthall at May 12, 2017 01:57 AM

May 05, 2017

Ph.D. student

Society as object of Data Science, as Multi-Agent System, and/or Complex Adaptive System

I’m drilling down into theory about the computational modeling of social systems. In just a short amount of time trying to take this task seriously, I’ve already run into some interesting twists.

A word about my trajectory so far: my background, such as it is, has been in cognitive science and artificial intelligence, and then software engineering. For the past several years I have been training to be a 'data scientist', and have been successful at that. This means getting a familiarity with machine learning techniques (a subset of AI), the underlying mathematical theory, software tooling, and research methodology to get valuable insights out of unstructured or complex observational data. The data sets I'm interested in are, as a rule, generated by some sort of sociotechnical process.

As much as the techniques of data science lead to rigorous understanding of data at hand, there's been something missing from my toolbox, which is the appropriate modeling language for social processes that can encode the kinds of implicit theories that my analysis surfaces. Hence the transition I am attempting: from being a data scientist, a diluted term, to being a computational social scientist.

The difficulty, navigating as I am out of a very odd intellectual niche, is acquiring the theoretical vocabulary that bridges the gap between social theory and computational theory. In my training at Berkeley’s School of Information, frequently computational theory and social theory have been assumed to be at odds with each other, applying to distinct domains of inquiry. I gather that this is true elsewhere as well. I have found this division intellectually impossible to swallow myself. So now I am embarking on an independent expedition into the world of computational social theory.

One of the pieces grounding my study, as I've mentioned, is Cederman's work outlining the relationship between generative process theory, multi-agent simulations (MAS), and computational sociology. It is great work for connecting more recent developments in computational sociology with earlier forms of sociology proper. Cederman cites interesting works by R. Keith Sawyer, who goes into depth about how MAS can shed light on some of the key challenges of social theory: how does social order happen? The tricky part here is the relationship between the 'macro' level 'social forms' and the 'micro' level individual actions. I disagree with some of Sawyer's analysis, but I think he does a great job of setting up the problem and its relationship to other sociological work, such as Giddens's work on structuration.

This is, so far, all theory. As a concrete example of this method, I’ve been reading Epstein and Axtell’s Growing Artificial Societies (1996), which I gather is something of a classic in the field. Their Sugarscape model is very flexible and their simulations shed light on timeless questions of the relationship between economic activity and inequality. Their presentation is also inspiring.

As a rule I’m finding the literature in this space far more accessible than I would have expected. It’s often written in very plain language and depends more on the power of illustration than scientific terminology laden with intellectual authority. What I have encountered so far is, perhaps as a consequence, a little unsatisfying intellectually. But it’s all quite promising.

Based on these leads, I was recommended Daniel Little's recent blog post about complexity in social science. He's quite critical of the bolder claims of these scientists; I'd like to revisit these arguments later. But what was most valuable for me were his references. One was a book by Epstein, who I gather has gone on to do a lot more work since co-authoring Growing Artificial Societies. This seems to continue in the vein of 'generative' modeling shared by Cederman.

But Little references two other sources: John Holland’s Complexity: A Very Short Introduction and Miller and Page’s Complex Adaptive Systems: An Introduction to Computational Models of Social Life.

This is actually a twist. Holland as well as Miller and Page appear to be concerned mainly with complex adaptive systems (CAS), which appear to be more general than MAS. At least, in Holland's rendition, which I'm now reading. MAS, Cederman and Sawyer both argue, is inspired in part by Object Oriented Programming (OOP), a programming paradigm that truly does lend itself to certain kinds of simulations. But Holland's work seems more ambitious, tying CAS back to contributions made by von Neumann and Noam Chomsky. Holland is after a general scientific theory of complexity, not a specific science of modeling social phenomena. Perhaps for this reason his work echoes some work I've seen in systems ecology on autocatalysis and Varela's work on autopoiesis.

Indeed the thread of Varela may well lead to where I'm going. One paper I've seen ties computational sociology to Luhmann's theory of communication; Luhmann drew on Varela's ideas of autopoiesis explicitly. So there is likely a firm foundation for social theory somewhere in here.

These are fruitful investigations. What I’m wondering now is to what extent the literatures on MAS and CAS are divergent.

 

 


by Sebastian Benthall at May 05, 2017 02:38 PM

May 03, 2017

Ph.D. student

Responding to Kelkar on the study and politics of artificial intelligence

I quite like Shreeharsh Kelkar’s recent piece on artificial intelligence as a thoughtful comment on the meaning of the term today and what Science and Technology Studies (STS) has to offer the public debate about it.

When AI researchers (and today this includes people who label themselves machine learning researchers, data scientists, even statisticians) debate what AI really means, their purpose is clear: to legitimate particular programs of research. What agenda do we—as non-participants, yet interested bystanders—have in this debate, and how might it be best expressed through boundary work? STS researchers have argued that contemporary AI is best viewed as an assemblage that embodies a reconfigured version of human-machine relations where humans are constructed, through digital interfaces, as flexible inputs and/or supervisors of software programs that in turn perform a wide-variety of small-bore high-intensity computational tasks (involving primarily the processing of large amounts of data and computing statistical similarities). It is this reconfigured assemblage that promises to change our workplaces, rather than any specific technological advance. The STS agenda has been to concentrate on the human labor that makes this assemblage function, and to argue that it is precisely the invisibility of this labor that allows the technology to seem autonomous. And of course, STS scholars have argued that the particular AI assemblage under construction is disproportionately tilted towards benefiting Silicon Valley capitalists.

This is a compelling and well-stated critique. There’s just a few ways in which I would contest Kelkar’s argument.

The first is to argue that the political thrust of the critique, that artificial intelligence often involves a reconfiguration of the relationship between labor and machines, is not in general one made firmly by STS scholars. In Kelkar's own characterization, STS researchers are "non-participants, yet interested bystanders" in the debate about AI. This distancing maneuver by STS researchers brackets off how their own workplaces, as white collar information workers, are constantly being reconfigured by artificial intelligence, while their funding is tied up with larger forces in the information economy. Therefore there's always something disingenuous to the STS researcher's claim to be a bystander, a posturing which allows them to be provocative but take no responsibility for the consequences of the provocation.

In contrast, one could consider the work of Nick Land, who is as far as I can tell not taken seriously by STS researchers though he’s by now a well-known theorist on similar subjects. I haven’t studied Land’s work much myself; I get my understanding mainly through S.C. Hickman’s excellent blogging. I also cannot really speak to Land’s connection with the alt-right; I just don’t know much about it. What I believe Land has done is tried to develop social theory that takes into account the troubling relationship between artificial intelligence and labor, articulated the relationship, and become not just a bystander but a participant in the debate.

Essentially what I’m arguing is that if STS researchers don’t activate the authentic political tendency in their own work, which often is either a flavor of accelerationism or a reaction to it, they are being, to use an old phrase for which I can find no immediate substitute, namby pamby. If one has a sophomore-level understanding of Marxist theory and can make the connection between artificial intelligence and capital, it’s not clear what is added by the STS perspective besides a lot of particularization of the theory.

The other criticism of Kelkar's argument is that it isn't at all charitable to AI researchers. Somehow it collapses all discussion of AI into a "contemporary" debate with an underlying economic anxiety. Even the AI researchers are, in this narrative, driven by economic anxiety, as their own articulation of their research agenda exists only for its own legitimization. The natural tendency of STS researchers is to see scientists as engaged primarily in rhetorical practices aimed at legitimizing their own research. This tends to obscure any actual technological advances made by scientists. AI researchers are no exception. Let's assume that artificial intelligence does indeed reconfigure the relationship between labor and capital, rendering much labor invisible and giving the illusion of autonomy to machines capable of intense computational tasks, for the ultimate benefit of Silicon Valley capitalists. STS researchers, at least those characterized by Kelkar, downplay that there are specific technical advances that make that reconfiguration possible, and that these accomplishments are expensive and require an enormous amount of technical labor, and moreover that there are fundamental mathematical principles underlying the development of this technology. But these are facts of the matter that are extremely important to anybody who is an actual participant in the debates around AI, let alone the economy that AI is always already reconfiguring.

The claim that AI researchers are mainly legitimizing themselves through the rhetoric of calling their work “artificial intelligence”, as opposed to accomplishing scientific and engineering feats, is totally unhelpful if one is interested in the political consequences of artificial intelligence. In my academic experience, this move is primarily one of projection: STS researchers are constantly engaged in rhetorical practices legitimizing themselves, so why shouldn’t scientists be as well? As long as one is a “bystander”, having no interest in praxis, there is no contest except rhetorical contest for legitimacy of research agendas. This is entirely a product of the effete conditions of academic research disengaged from all reality except courting funding agencies. If STS scholars turned themselves towards the task of legitimizing themselves through actual political gains, their understanding of artificial intelligence would be quite different indeed.


by Sebastian Benthall at May 03, 2017 09:06 PM

May 02, 2017

Ph.D. student

Civil liberties and liberalism in the EU’s General Data Protection Regulation (GDPR)

I’ve been studying the EU’s General Data Protection Regulation and reading the news.

In the news, I'm reading all the time about how the European Union is the last bastion of the "post-war liberal order", threatened on all sides by ethnonationalism, including from the United States. Some writers have argued that the U.S. has simply moved on from the historical conditions of liberalism, with liberals as a class just having trouble getting over it. Brexit is somehow also framed as an ethnonationalist project. Whether scapegoat or actual change agent, new attention is on Russia, which has never been liberal and justified its action to take back Crimea based on the ethnic Russian-ness of that territory.

Despite being normal in many parts of the world, one thing that's upsetting to liberalism about ethnonationalism is the idea that the nation is rooted in an ethnicity, which is a form of social collective bound by genetic and family ties, and not in individual autonomy. From here it is a short step to having that ethnicity empowered in its command of the nation-state. And as we have been taught in history, when you have states acting on behalf of certain ethnicities, those states often treat other ethnicities in ways that are, from a liberal perspective, unjust. Among the first things to go are freedoms, especially political freedoms (the kinds of freedoms that lead directly or indirectly to political power).

This is just a preface, not intended in any particular way, to explain why I'm interested in some of the language in the General Data Protection Regulation (GDPR). I'm studying GDPR because I'm studying privacy engineering: how to design technical systems that preserve people's privacy. For practical reasons this requires some review of the relevant legislation. Compliance with the law is, if nothing else, a business concern, and this makes it relevant to technologists. But the GDPR, which is one of the strongest privacy regulations on the horizon, is actually thick with political intent which goes well beyond the pragmatic and mundane concerns of technical design. Here is section 51 of the Recitals, which discuss the motivation of the regulation and are intended to be used in interpretation of the legally binding Articles in the second section (emphasis mine):

(51) Personal data which are, by their nature, particularly sensitive in relation to fundamental rights and freedoms merit specific protection as the context of their processing could create significant risks to the fundamental rights and freedoms.

Those personal data should include personal data revealing racial or ethnic origin, whereby the use of the term ‘racial origin’ in this Regulation does not imply an acceptance by the Union of theories which attempt to determine the existence of separate human races.

The processing of photographs should not systematically be considered to be processing of special categories of personal data as they are covered by the definition of biometric data only when processed through a specific technical means allowing the unique identification or authentication of a natural person.

Such personal data should not be processed, unless processing is allowed in specific cases set out in this Regulation, taking into account that Member States law may lay down specific provisions on data protection in order to adapt the application of the rules of this Regulation for compliance with a legal obligation or for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller.

In addition to the specific requirements for such processing, the general principles and other rules of this Regulation should apply, in particular as regards the conditions for lawful processing.

Derogations from the general prohibition for processing such special categories of personal data should be explicitly provided, inter alia, where the data subject gives his or her explicit consent or in respect of specific needs in particular where the processing is carried out in the course of legitimate activities by certain associations or foundations the purpose of which is to permit the exercise of fundamental freedoms.

It’s not light reading. What I find most significant about Recital 51 is that it explicitly makes the point that data concerning somebody’s racial and ethnic origin are particularly pertinent to “fundamental rights and freedoms” and potential risks to them. This is despite the fact that the EU is denying any theory of racial realism. Recital 51 is in effect saying that race is a social construct but that even though it’s just a social construct it’s so sensitive an issue that processing data about anybody’s race is prima facie seen as creating a risk for their fundamental rights and freedoms. Ethnicity, not denied in the same way as race, is treated similarly.

There are tons of legal exceptions to these prohibitions in the GDPR and I expect that the full range of normal state activities are allowed once all those exceptions are taken into account. But it is curious that revealing race and ethnic origin is considered dangerous by the EU’s GDPR at the same time when there’s this narrative that ethnonationalists want to break up the EU in order to create states affording special privileges to national ethnicities. What it speaks to, among other things, is the point that the idea of a right to privacy is not politically neutral with respect to these questions of nationalism and globalism which seem to define the most important dimensions of political difference today.

Assuming I'm right and the GDPR encodes a political liberalism that opposes ethnonationalism, this raises interesting questions for how it affects geopolitical outcomes once it comes to be enforced. Because of the extra-territorial jurisdiction of the GDPR, it imposes on businesses all over the world policies that respect its laws even if those businesses only operate partially in the EU. Suppose the EU holds together in some form while in other places some moderate form of ethnonationalism takes over. Would the GDPR and its enforcement be strong enough to normalize liberalism into technical and business design globally even while ethnonationalist political forces erode civil liberties with respect to the state?


by Sebastian Benthall at May 02, 2017 06:47 PM

April 28, 2017

Ph.D. student

Highlights of Algorithms and Explanations (NYU April 27-28) #algoexpla17

I’ve attended the Algorithms and Explanations workshop at NYU this week. In general, it addressed the problems raised by algorithmic opacity in decision-making. I wasn’t able to attend all the panels; in this post I’ll cover some highlights of what I found especially insightful or surprising.

Overall, I was impressed by the work presented. All of it rose above the naive positions on the related issues; much of it was targeted at debunking these naive positions. This may have been a function of the venue: hosted by the Information Law Institute at NYU Law, the intellectual encounter was primarily between lawyers and engineers. This focuses the conversation. It was not a conference on technology criticism, in a humanities or popular style, which is often too eager to conflate itself with technology policy. In my opinion, this conflation leads to the kinds of excesses Adam Elkus has addressed in his essay on technology policy, which I recommend. For the most part one did not get the sense that the speakers were in the business of creating problems; they were in the business of solving them.

At least this was the tone set by the first panel I attended, which was a collection of computer scientists, statisticians, and engineers who presented tools or conceptualizations that gave algorithmic systems legibility. Of these, I found Anupam Datta's Quantitative Input Influence measure best motivated from a statistical perspective. I do believe that this measure essentially solves the problem that most vexes people when it comes to the opacity of machine learning systems, by giving a clear score for which inputs affect decision outcomes.
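
To give a sense of what such a measure looks like, here is a rough sketch of the underlying intuition on synthetic data: randomize one input while holding the others fixed and count how often the model's decisions flip. This is a simplification, not the exact measure Datta presented (QII also handles sets of inputs and aggregates their influence more carefully), and every name in the snippet is made up for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 3))                  # three synthetic features
y = (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)   # feature 2 is pure noise
model = LogisticRegression().fit(X, y)

def influence(model, X, feature, n_samples=2000):
    """Fraction of decisions that flip when `feature` is randomized."""
    idx = rng.choice(len(X), size=n_samples, replace=False)
    base = model.predict(X[idx])
    X_int = X[idx].copy()
    X_int[:, feature] = rng.permutation(X_int[:, feature])  # intervene
    return float(np.mean(model.predict(X_int) != base))

for f in range(X.shape[1]):
    print(f"feature {f}: decision influence = {influence(model, X, f):.3f}")
```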

I also enjoyed the presentation of Foster Provost, partly for the debunking force of the talk. He drew on his 25+ years of experience designing and deploying decision support systems and pointed out that ever since people started building these tools, the questions of interpretability and accountability have been a part of the job. As a person with technical and industry background who encountered the surge of ‘algorithmic accountability’ in an academic stage, I’ve found many of the questions that have been raised by the field to be baffling largely because the solutions have seemed either obvious or ingrained in engineering culture as among the challenges of dealing with clients. (This tree swing cartoon is a classic illustration of this).

Alexandra Chouldechova gave a very interesting talk on model comparison as a way of identifying bias in black-box algorithms which was new material for me.

In the next panel, dealing specifically with regulation, Deven Desai provided a related historical perspective: there's a preexisting legal literature in bureaucratic transparency that is relevant to regulatory questions about algorithmic transparency. This awareness is shared, I believe, by those who hold what may be called a physicalist understanding of computation, or what Charles Bennett has called "physical Church's thesis": the position that the Church-Turing thesis, which is about how all formal computational systems are reducible to each other and share certain limits as to their power, applies to all physical information processing systems. In particular, this thesis leads to the conclusion that human bureaucratic and information technological systems are essentially up to the same thing when it comes to information processing (this is also the position of Beniger).

But the most galvanizing talk in the regulatory panel was by Sandra Wachter, who presented material relevant to her paper "Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation". Companies and privacy scholars in the U.S. turn to the GDPR as a leading and challenging new regulation. It's bold to show up at a conference on Algorithms and Explanation with an argument that the explainability of algorithms isn't relevant to the next generation of privacy regulations. This is a space to watch.

The second day’s talks focused on algorithmic explainability in specific sectors. Of these I found the intellectually richest to be the panel on Health Care. Rich Caruana gave a warm and technically focused talk on how the complexity of functions used by a learning system can support or undermine its intelligibility, a topic I personally see as the crux of the problem.

I was especially charmed, however, by Federico Cabitza's discussion of decision support in the medical context. I wish I could point to any associated papers, but do not have them handy. What was most compelling about the talk was the way it made the case for needing to study algorithmic decision making in vivo, as part of a decision procedure that involves human experts and that learns, as a socio-technical system, over time. In my opinion, too often the perils of opacity of algorithms are framed in terms of a specific judgment or the faults of a specific model. As I try to steer my own work more towards sociological process theory, I'm on the lookout for technologists who see technology as part of a sociotechnical, evolutionary process and not in isolation. With this complex ontology in mind, Cabitza was then able to unpack "explanation" into dimensions that targeted different aspects of the decision making process: scrutability, comprehensibility, and interpretability. There was far too much in the talk for me to cover here.

The next panel was on algorithms in consumer credit. All three speakers were very good, though their talks worked in different directions and the tensions between them were never resolved in the questions. Dan Raviv of Lendbuzz explained how his company was bringing credit to those who otherwise have not had access to it: immigrants to the U.S. with firm professional qualifications but no U.S. credit history. Lendbuzz has essentially identified a prime credit population ignored by current FICO scores, and has started a bank to lend to them.

That’s an interesting business and technical accomplishment. Unfortunately, it was largely overlooked as attention moved to later talks in this section. Aaron Rieke of Upturn gave a very realistic picture of the use of big data in credit scoring (it isn’t used much in the U.S.; they mainly use conventional data sources like credit history). What he’s looking for, rather humbly, is ways to be a better advocate, especially for those who are adversely affected by the enormous disparity in credit access.

This disparity was the background to Frank Pasquale’s talk, which was broad in scope. I’m glad he dug into social science theory, presenting some material from “Two Narratives of Platform Capitalism“, which I wish I had read earlier. We seem to share an interest in alternative theories of social scientific explanation and its relationship to the tech economy. It was, as is typical of Pasquale’s work, rather polemical, calling for a critical examination of credit scoring and financial regulation with the aim of exposing exploitation. This exploitation reveals itself in the invasions of privacy suffered by those in poverty as well as the inability of those deemed credit-unworthy to access opportunity.

One cannot fault the political motivation of raising awareness of and supporting the disadvantaged in society. But where the discussion missed the mark, I'm afraid, was in tying these concerns about inequality back to questions of algorithmic transparency. I'm generally of the opinion that the disparities in society are the result of social forces and patterns much more forceful and comprehensive than the nuances of algorithmic credit scoring. It's not clear how any interventions on these mechanisms can lead to better political outcomes. As Andrew Selbst pointed out in an insightful comment, the very idea of 'credit worthiness' stacks the deck against those who do not have the reliable wealth to pay their debts. And as Raviv's presentation revealed (before being eclipsed by other political concerns), for some, the problem is too little algorithmic analysis of their financial situation, not too much.

There's a broad and old literature in economics about moral hazards in insurance markets, markets for lemons, and other game theoretic understandings of the winners and losers in these kinds of two-sided markets which is generally understated in 'critical' discussions of credit scoring algorithms. That's too bad in my opinion, as it provides the best explanation of the political outcomes that are most concerning about credit markets. (These discussions of mechanism design use formal modeling but generally do not in and of themselves carry a neoclassical ideology.)

The last talk I attended was about algorithms in the media. Nick Diakopoulos gave a comprehensive review of the many issues at stake. The most famous speaker on this panel was Gilad Lotan, who presented a number of interesting (though to me, familiar) data science results about media fragmentation and the Outside Your Bubble Buzzfeed feature, aimed to counter it.

I wish Lotan had presented on something else: how BuzzFeed uses the engagement data it collects across its platforms and content to make editorial and strategic decisions. This is the kind of algorithmic decision-making that affects people’s lives, and it is precisely the kind of decision-making that is not generally transparent to the consumers of media. It would have been nice (and, I feel, appropriate for the conference) if Lotan had taken the opportunity to explain BuzzFeed’s algorithms, especially in the sociotechnical context of the organization’s broader decision-making and strategy. But he didn’t.

The discussion then devolved into one about fake news. One good point in this discussion came from Julia Powles: she has learned in her work that one of the important and troubling consequences of technology’s role in media is that while Google, Facebook, and the like cater to both journalists and media consumers, their market role is the disintermediation of publishers. But historically, journalists have derived their editorial power from their relationships with publishers, who used to be the ones controlling distribution.

I came away from this conference feeling well informed about innovations in machine learning and statistics for model interpretation and communication. But I’ve also left confirmed in my view that much of the discussion of algorithms and their political effects per se is a red herring. Broader economic questions about the industrial organization of the information economy dominate the algorithmic particulars where political effects are concerned.


by Sebastian Benthall at April 28, 2017 08:14 PM

April 23, 2017

Ph.D. student

Process theory; generative epistemology; configurative ontology: notes on Cederman, part 1

I’ve recently had the work of L.E. Cederman recommended to me; he is, I’ve come to understand, a well-respected and significant figure in computational social science, especially agent-based modeling. In particular, I’ve been referred to this paper on the theoretical foundations of computational sociology:

Cederman, L.E., 2005. Computational models of social forms: Advancing generative process theory. American Journal of Sociology, 110(4), pp. 864–893. (link)

This is a paper I wish I had encountered years ago. I’ve written much here about my struggles with “interdisciplinary” research. In short: I’ve been trying to study social phenomena with scientific rigor, which is a very old problem fraught with division. On top of that, there has been, it seems, an epistemological upset: advances in data collection and processing pose a practical challenge to many established disciplines. On top of this, the social phenomena I’m most interested in tend to involve the interaction between people and technology, which brings with it an association with disciplines specialized to that domain (HCI, STS) that, for me, have not made the research any more straightforward. After trying for some time to do the work I wanted to do under the new heading of data science, I did not find what I was looking for intellectually in that emerging field, however important its practical skill set has been to me.

Computational social science, I’ve convinced myself if not others, is where the answers lie. My hope is that, as a new discipline, it can break away from the dogmas that limited other disciplines and trapped their ambitions in endless methodological debates. What computational social science offers, I’ve imagined, is the possibility of a new paradigm, or at least a viable alternative one. Cederman’s 2005 paper holds out the promise of just that.

Let me address for now some highlights of his vision of social science and how they relate to one another. I hope to come to the rest in a later post.

Sociological process theory. This is a position in sociological theory that Cederman attributes to the 19th-century sociologist Georg Simmel. The core of this position is that social reality is not fixed, but rather the result of an ongoing process of social interactions that give rise to social forms.

“The large systems and the super-individual organizations that customarily come to mind when we think of society, are nothing but immediate interactions that occur among men constantly every minute, but that have become crystallized as permanent fields, as autonomous phenomena.” (Simmel quoted in Wolf 1950, quoted in Cederman 2005)

There is a lot to this claim. If one is coming from the field of Human-Computer Interaction (HCI), what may seem most striking about it is how well it resonates with a scholarly tradition that is most frequently positioned as a countercurrent to an unthinking positivism in design. Lucy Suchman, Etienne Wenger, and Jean Lave are scholars who come to mind as representative of this way of thinking. Much of Simmel’s intellectual thrust can be found in Paul Dourish’s criticism of positivist understandings of “context” in HCI.

For Dourish, the intellectual ground of this position is phenomenological social science, often associated with ethnomethodology. Simmel predates phenomenology, but he was a neo-Kantian, a contemporary of Weber, and a critic of the positivism of his day (the original positivism). As a social scientific tradition, process theory has had its successors (perhaps most notably George Herbert Mead) but has been submerged under other theoretical traditions. From Cederman’s analysis, one gathers that this is largely due to process theory’s inability to ground itself in rigorous method; its early proponents were fond of metaphorical writing in a way that has not aged well. Cederman pays homage to sociological process theory’s origins, but quickly moves to discuss an epistemological position that complements it. Notably, this position is neither positivist, nor phenomenological, nor critical (in the Frankfurt School sense), but something else: generative epistemology.

Generative epistemology. Cederman positions generative epistemology primarily in opposition to positivism, and particularly to a facet of positivism that he calls “nomothetic explanation”: explanation in terms of laws and regularities. The latter is considered the gold standard of the natural sciences and of the social sciences that attempt to mimic them. This tendency is independent of whether the inquiry is qualitative or quantitative: both comparative analysis and statistical control look for a conjunction of factors that regularly predicts some outcome. (Cederman’s sources on this are (Gary) King, Keohane, and Verba (1994) and Goldthorpe (1997). The Gary King cited is, I assume, the same Gary King who went on to run Harvard’s IQSS; I hope to return to this question of positivism in computational social science in later writing. I tend to disagree with the idea that ‘data science’ or ‘big data’ has a primarily positivist tendency.)

Cederman describes the ‘process theorist’s’ alternative as based on abduction, not induction. Recall that ‘abduction’ was Peirce’s term for inference to the best explanation. The goal is to take an observed sociological phenomenon and explain its generation by accounting for how it is socially produced. The preference for generative explanation in Simmel comes in part from a pessimism about isolating regularities in complex social systems. Knowledge is gained through this theorization: a theoretical advance that makes a social phenomenon less ‘puzzling’.

“The construction of generative explanations based on abductive inference is an inherently theoretical endeavor (McMullin, 1964). Instead of subsuming observations under laws, the main explanatory goal is to make a puzzling phenomenon less puzzling, something that inevitably requires the introduction of new knowledge through theoretical innovation.”

The specifics of the associated method are less clear than the motivation for this epistemology; many early process theorists resorted to metaphors. But where all this is going is the construction of models, and especially computational models, as a way of presenting and testing generative theories. Models generate forms through logical operations based on a number of parameters. A comparison between the logical form and the empirical form is then made; if it is favorable, the empirical form can be characterized as the result of a process described by the model and its variables (Barth, 1981).
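To illustrate that workflow, here is a toy example of my own (not Cederman’s): a one-dimensional Schelling-style model “grows” a pattern of neighborhood clustering from local interaction rules, and the generated form is then compared against an empirically measured level of clustering. The rules, threshold, and “observed” value are all hypothetical.

```python
# A toy generative model: a one-dimensional Schelling-style segregation
# process. The rules, threshold, and the "observed" value below are
# hypothetical illustrations, not from Cederman's paper.
import random

def run_model(n=200, similar_wanted=0.5, steps=5000, seed=0):
    """Agents of two types sit on a ring; an agent unhappy with its two
    neighbors swaps places with a randomly chosen agent. Returns the average
    share of like-type neighbors -- the 'generated' social form."""
    rng = random.Random(seed)
    agents = [rng.choice([0, 1]) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        similar = sum(agents[(i + d) % n] == agents[i] for d in (-1, 1))
        if similar / 2 < similar_wanted:          # unhappy -> relocate
            j = rng.randrange(n)
            agents[i], agents[j] = agents[j], agents[i]
    return sum(agents[(i + d) % n] == agents[i]
               for i in range(n) for d in (-1, 1)) / (2 * n)

observed_clustering = 0.75    # hypothetical empirical measurement
generated_clustering = run_model()
print(f"generated form: {generated_clustering:.2f}, "
      f"observed form: {observed_clustering:.2f}")
```

If the generated and observed forms are close, the claim is not that a law has been confirmed, but that the phenomenon has been shown to be producible by the posited process.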

Cederman draws on Barth (1981) and Thomas Fararo (1989) to ally himself with ‘realist’ social science. The term is clarified later: ‘realism’ is opposed to ‘instrumentalism’, a reference that cuts to one of the core epistemological debates in computational methods. An instrumental method, such as a machine learning ensemble, may provide a model that is very effective for prediction and control but nevertheless does not capture what is really going on in the underlying process. Realist mathematical sociology, on the other hand, attempts to capture the reality of the process generating the social phenomenon in the precise language of mathematics and computation. The underlying metaphysical point is one that many people would rather not attend to. For now, we will follow Cederman’s logic to a different, ontological point.

Configurative ontology. Sociological process theory requires explanations to specify the process that generates the observed social form. The entities, relations, and mechanisms may be unobserved or even unobservable. Positivists, Cederman argues, will often take the social forms to be variables themselves and undertheorize how those variables were generated, since they care only about predicting actual outcomes. Whereas positivists study ‘correlations’ among elements, Simmel studies ‘sociations’, the interactions that result in those elements. The ontology, then, is that social forms are “configurations of social interactions and actors that together constitute the structures in which they are embedded.”

In this view, variables, such as would be used in a more positivist social scientific study, “merely measure dimensions of social forms; they cannot represent the forms themselves except in very simple cases.” While a variable-based analysis detaches a social phenomenon from space and time, “social forms always possess a duration in time and an extension in space.”

Aside from a deep resonance with Dourish’s critique of ‘contextual computing’ (noted above), this argument once again recalls much of what now comes under the expansive notion of ‘criticism’ of the social sciences. Ethnomethodology, and ethnography more generally, are now often raised as alternatives to simplistic positivist methods. In my experience at Berkeley and my exposure so far to the important academic debates, the noisiest contest is between allegedly positivist or instrumentalist (they are different, surely) quantitative methods and phenomenological ethnographic methods. Indeed, it is the latter who now more often claim the mantle of ‘realism’. What is different about Cederman’s case in this paper is that he is setting up a foundation for realist sociology that is nevertheless mathematized and computational.

What I am looking for in this paper, and haven’t found yet, is an account of how these ‘realist’ models of social processes are tested for their correspondence to empirical social forms. Here, I believe, is an opportunity that I have not yet seen fully engaged.


by Sebastian Benthall at April 23, 2017 05:32 PM