School of Information Blogs

July 10, 2018

Ph.D. student

search engines and authoritarian threats

I’ve been intrigued by Daniel Griffin’s tweets lately, which have been about situating some upcoming work of his and Deirdre Mulligan’s regarding the experience of using search engines. There is a lively discussion lately about the experience of those searching for information and the way they respond to misinformation or extremism that they discover through organic use of search engines and media recommendation systems. This is apparently how the concern around “fake news” has developed in the HCI and STS world since it became an issue shortly after the 2016 election.

I do not have much to add to this discussion directly. Consumer misuse of search engines is, to me, analogous to consumer misuse of other forms of print media. I would assume the best solution to it is education in the complete sense, and the problems with the U.S. education system are, despite all good intentions, not HCI problems.

Wearing my privacy researcher hat, however, I have become interested in a different aspect of search engines and the politics around them, one that is less obvious to the consumer and therefore less popularly discussed, but that I fear is more pernicious precisely because it is not part of the general imaginary around search. This is the aspect concerning the tracking of search engine activity, and what it means for this activity to be in the hands of not just benevolent organizations such as Google, but also malevolent organizations such as Bizarro World Google*.

Here is the scenario, so to speak: for whatever reason, we begin to see ourselves in a more adversarial relationship with search engines. I mean “search engine” here in the broad sense, including Siri, Alexa, Google News, YouTube, Bing, Baidu, Yandex, and all the more minor search engines embedded in web services and appliances that do something more focused than crawl the whole web. By “search engine” I mean the entire UX paradigm of the query into the vast unknown of semantic and semiotic space that contemporary information access depends on. In all these cases, the user is at a systematic disadvantage in the sense that their query is a data point among many others. The task of the search engine is to predict the desired response to the query and provide it. In return, the search engine gets the query, tied to the identity of the user. That is one piece of a larger mosaic; to be a search engine is to have a picture of a population and their interests and the mandate to categorize and understand those people.

In Western neoliberal political systems the central function of the search engine is realized as a commercial transaction facilitating other commercial transactions. My “search” is a consumer service; I “pay” for this search by giving my query to the adjoined advertising function, which allows other commercial providers to “search” for me, indirectly, through the ad auction platform. It is a market with more than just two sides. There’s the consumer who wants information and may be tempted by other information. There are the primary content providers, who satisfy consumer content demand directly. And there are secondary content providers who want to intrude on consumer attention in a systematic and successful way. The commercial, ad-enabled search engine reduces transaction costs for the consumer’s search and sells a fraction of that attentional surplus to the advertisers. If the balance is struck right, the consumer is happy enough with the trade.

Part of the success of commercial search engines is the promise of privacy, in the sense that the consumer’s queries are entrusted to the engine in confidence, and this data is not leaked or sold. Wise people know not to write in email things that they would not want, in the worst case, exposed to the public. Unwise people are more common than wise people, and ill-considered emails are written all the time. Most unwise people do not come to harm from this because privacy in email is a de facto standard; it is the very security of email that makes the possibility of its being leaked alarming.

So too with search engine queries. “Ask me anything,” suggests the search engine, “I won’t tell”. “Well, I will reveal your data in an aggregate way; I’ll expose you to selective advertising. But I’m a trusted intermediary. You won’t come to any harms besides exposure to a few ads.”

That is all a safe assumption until it isn’t, at which point we must reconsider the role of the search engine. Suppose that, instead of living in a neoliberal democracy where the free search for information was sanctioned as necessary for the operation of a free market, we lived in an authoritarian country organized around the principle that disloyalty to the state should be crushed.

Under these conditions, the transition of a society into one that depends for its access to information on search engines is quite troubling. The act of looking for information is a political signal. Suppose you are looking for information about an extremist, subversive ideology. To do so is to flag yourself as a potential threat to the state. Suppose that you are looking for information about a morally dubious activity. To do so is to make yourself vulnerable to kompromat.

Under an authoritarian regime, curiosity and free thought are a problem, and a problem that is readily identified by one’s search queries. Further, an authoritarian regime benefits if the risks of searching for the ‘wrong’ thing are widely known, since it suppresses inquiry. Hence, the very vaguely announced and, in fact, implausible to implement Social Credit System in China does not need to exist to be effective; people need only believe it exists for it to have a chilling and organizing effect on behavior. That is the lesson of the Foucauldian panopticon: it doesn’t need a guard sitting in it to function.

Do we have a word for this function of search engines in an authoritarian system? We haven’t needed one in our liberal democracy, which perhaps we take for granted. “Censorship” does not apply, because what’s at stake is not speech but the ability to listen and learn. “Surveillance” is too general. It doesn’t capture the specific constraints on acquiring information, on being curious. What is the right term for this threat? What is the term for the corresponding liberty?

I’ll conclude with a chilling thought: when at war, all states are authoritarian, to somebody. Every state has an extremist, subversive ideology that it watches out for and tries in one way or another to suppress. Our search queries are always of strategic or tactical interest to somebody. Search engine policies are always an issue of national security, in one way or another.

by Sebastian Benthall at July 10, 2018 11:52 PM

Ph.D. student

Exploring Implications of Everyday Brain-Computer Interface Adoption through Design Fiction

This blog post is a version of a talk I gave at the 2018 ACM Designing Interactive Systems (DIS) Conference based on a paper written with Nick Merrill and John Chuang, entitled When BCIs have APIs: Design Fictions of Everyday Brain-Computer Interface Adoption. Find out more on our project page, or download the paper: [PDF link] [ACM link]

In recent years, brain computer interfaces, or BCIs, have shifted from far-off science fiction, to medical research, to the realm of consumer-grade devices that can sense brainwaves and EEG signals. Brain computer interfaces have also featured more prominently in corporate and public imaginations, such as Elon Musk’s project that has been said to create a global shared brain, or fears that BCIs will result in thought control.

Most of these narratives and imaginings about BCIs tend to be utopian, or dystopian, imagining radical technological or social change. However, we instead aim to imagine futures that are not radically different from our own. In our project, we use design fiction to ask: how can we graft brain computer interfaces onto the everyday and mundane worlds we already live in? How can we explore how BCI uses, benefits, and labor practices may not be evenly distributed when they get adopted?

Brain computer interfaces allow the control of a computer from neural output. In recent years, several consumer-grade brain-computer interface devices have come to market. One example is the Neurable – it’s a headset used as an input device for virtual reality systems. It detects when a user recognizes an object that they want to select. It uses a phenomenon called the P300 – when a person either recognizes a stimulus, or receives a stimulus they are not expecting, electrical activity in their brain spikes approximately 300 milliseconds after the stimulus. This electrical spike can be detected by an EEG, and by several consumer BCI devices such as the Neurable. Applications utilizing the P300 phenomenon include hands-free ways to type or click.

Demo video of a text entry system using the P300

Neurable demonstration video
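To make the P300 approach above more concrete, here is a minimal sketch (my own illustration, not code from the paper or any consumer device’s SDK) of how a P300-style response is often detected offline: slice the EEG into epochs around stimulus onsets, baseline-correct each epoch, and look for a positive deflection roughly 250–450 ms after the stimulus. Real systems typically average over many trials and use trained classifiers, so treat the window and threshold here as placeholders.

```python
# Minimal, hypothetical sketch of offline P300 detection on a single EEG channel.
# Not from the paper; the window and threshold are illustrative placeholders.
import numpy as np

def detect_p300(eeg, onsets, fs=250, window=(0.25, 0.45),
                baseline=(-0.2, 0.0), threshold_uv=5.0):
    """Return one boolean per stimulus onset: True if the post-stimulus window
    shows a mean positive deflection larger than `threshold_uv` microvolts.

    `eeg` is a 1-D array for a single channel (e.g., Pz), in microvolts;
    `onsets` are sample indices of stimulus presentations."""
    hits = []
    for onset in onsets:
        base = eeg[onset + int(baseline[0] * fs): onset + int(baseline[1] * fs)]
        post = eeg[onset + int(window[0] * fs): onset + int(window[1] * fs)]
        if len(base) == 0 or len(post) == 0:  # epoch falls outside the recording
            hits.append(False)
            continue
        # Baseline-correct, then look for the characteristic positive peak
        # roughly 300 ms after the stimulus.
        deflection = post.mean() - base.mean()
        hits.append(deflection > threshold_uv)
    return np.array(hits)
```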

We base our analysis on this already-existing capability of brain computer interfaces, rather than the more fantastical narratives (at least for now) of computers being able to clearly read humans’ inner thoughts and emotions. Instead, we create a set of scenarios that makes use of the P300 phenomenon in new applications, combined with the adoption of consumer-grade BCIs by new groups and social systems.

Stories about BCIs’ hypothetical future as a device to make life easier for “everyone” abound, particularly in Silicon Valley, as shown in recent research. These tend to be very totalizing accounts, neglecting the nuance of multiple everyday experiences. However, past research shows that the introduction of new digital technologies ends up unevenly shaping practices and arrangements of power and work – from the introduction of computers in workplaces in the 1980s, to the introduction of email, to forms of labor enabled by algorithms and digital platforms. We use a set of design fictions to interrogate these potential arrangements in BCI systems, situated in different types of workers’ everyday experiences.

Design Fictions

Design fiction is a practice of creating conceptual designs or artifacts that help create a fictional reality. We can use design fiction to ask questions about possible configurations of the world and to think through issues that have relevance and implications for present realities. (I’ve written more about design fiction in prior blog posts).

We build on Lindley et al.’s proposal to use design fiction to study the “implications for adoption” of emerging technologies. They argue that design fiction can “create plausible, mundane, and speculative futures, within which today’s emerging technologies may be prototyped as if they are domesticated and situated,” which we can then analyze with a range of lenses, such as those from science and technology studies. For us, this lets us think about technologies beyond ideal use cases. It lets us be attuned to the experiences of power and inequalities that people experience today, and interrogate how emerging technologies might get uptaken, reused, and reinterpreted in a variety of existing social relations and systems of power.

To explore this, we thus created a set of interconnected design fictions that exist within the same fictional universe, showing different sites of adoptions and interactions. We build on Coulton et al.’s insight that design fiction can be a “world-building” exercise; design fictions can simultaneously exist in the same imagined world and provide multiple “entry points” into that world.

We created 4 design fictions that exist in the same world: (1) a README for a fictional BCI API, (2) a question on Stack Overflow from a programmer working with the API, (3) an internal business memo from an online dating company, (4) a set of forum posts by crowdworkers who use BCIs to do content moderation tasks. These are downloadable at our project page if you want to see them in more detail. (I’ll also note that we conducted our work in the United States, and that our authorship of these fictions, as well as our interpretations and analysis, are informed by this sociocultural context.)

Design Fiction 1: README documentation of an API for identifying P300 spikes in a stream of EEG signals

First, this is README documentation of an API for identifying P300 spikes in a stream of EEG signals. The P300 response, or “oddball” response, is a real phenomenon. It’s a spike in brain activity when a person is either surprised or sees something that they’re looking for. This fictional API helps identify those spikes in EEG data. We made this fiction in the form of a GitHub page to emphasize the everyday nature of this documentation, from the viewpoint of a software developer. In the fiction, the algorithms underlying this API come from a specific set of training data from a controlled environment in a university research lab. The API discloses and openly links to the data that its algorithms were trained on.

In creating and analyzing this fiction, we found that it surfaces ambiguity and a tension about how generalizable the system’s model of the brain is. Packaging the API with a README implies that the system is meant to be generalizable, despite some indications, based on its training dataset, that it might be more limited. This fiction also gestures more broadly toward the involvement of academic research in larger technical infrastructures. The documentation notes that the API started as a research project by a professor at a university before becoming hosted and maintained by a large tech company. For us, this highlights how collaborations between research and industry may produce artifacts that move into broader contexts. Yet researchers may not be thinking about the potential effects or implications of their technical systems in these broader contexts.

Design Fiction 2: A question on StackOverflow

Second, a developer, Jay, is working with the BCI API to develop a tool for content moderation. He asks a question on Stack Overflow, a real website for developers to ask and answer technical questions. He questions the API’s applicability beyond lab-based stimuli, asking “do these ‘lab’ P300 responses really apply to other things? If you are looking over messages to see if any of them are abusive, will we really see the ‘same’ P300 response?” The answers from other developers suggest that they predominantly believe the API is generalizable to a broader class of tasks, with the most agreed-upon answer saying “The P300 is a general response, and should apply perfectly well to your problem.”

This fiction helps us explore how and where contestation may occur in technical communities, and where discussion of social values or social implications could arise. We imagine the first developer, Jay, as someone who is sensitive to the way the API was trained, and questions its applicability to a new domain. However, he encounters commenters who believe that physiological signals are always generalizable, and who don’t engage in questions of broader applicability. The community’s answers reinforce notions not just of what the technical artifacts can do, but of what the human brain can do. The Stack Overflow answers draw on a popular, though critiqued, notion of the “brain-as-computer,” framing the brain as a processing unit with generic processes that take inputs and produce outputs. Here, this notion is reinforced in the social realm on Stack Overflow.

Design Fiction 3: An internal business memo for a fictional online dating company

Meanwhile, SparkTheMatch.com, a fictional online dating service, is struggling to moderate and manage inappropriate user content on their platform. SparkTheMatch wants to utilize the P300 signal to tap into people’s tacit “gut feelings” to recognize inappropriate content. They are planning to implement a content moderation process using crowdsourced workers wearing BCIs.

In creating this fiction, we use the memo to provide insight into some of the practices and labor supporting the BCI-assisted review process from the company’s perspective. The memo suggests that the use of BCIs with Mechanical Turk will “help increase efficiency” for crowdworkers while still giving them a fair wage. The crowdworkers sit and watch a stream of flashing content while wearing a BCI, and the P300 response subconsciously identifies when workers recognize supposedly abnormal content. Yet we find it debatable whether or not this process improves the material conditions of the Turk workers. The amount of content to look at in order to make the supposedly fair wage may not actually be reasonable.

SparkTheMatch employees creating the Mechanical Turk tasks don’t directly interact with the BCI API. Instead they use pre-defined templates created by the company’s IT staff, a much more mediated interaction compared to the programmers and developers reading documentation and posting on Stack Overflow. By this point, the research lab origins of the P300 API underlying the service and questions about its broader applicability are hidden. From the viewpoint of SparkTheMatch staff, the BCI-aspects of their service just “works,” allowing managers to design their workflows around it, obfuscating the inner workings of the P300 API.

Design fiction 4: A crowdworker forum for workers who use BCIs

Fourth, the Mechanical Turk workers who do the SparkTheMatch content moderation work share their experiences on a crowdworker forum. These crowd workers’ experiences and relationships to the P300 API are strikingly different from those of the people and organizations described in the other fictions—notably, the API is something that they do not get to explicitly see. Aspects of the system are blackboxed or hidden away. While one poster discusses some errors that occurred, there’s ambiguity about whether fault lies with the BCI device or the data processing. EEG signals are not easily human-comprehensible, making feedback mechanisms difficult. Other posters blame the user for the errors, which is problematic given the precariousness of these workers’ positions, as crowd workers tend to have few forms of recourse when encountering problems with tasks.

For us, these forum accounts are interesting because they describe a situation in which the BCI user is not the person who obtains the real benefits of its use. It’s the company SparkTheMatch, not the BCI-end users, that is obtaining the most benefit from BCIs.

Some Emergent Themes and Reflections

From these design fictions, several salient themes arose for us. By looking at BCIs from the perspective of several everyday experiences, we can see different types of work done in relation to BCIs – whether that’s doing software development, being a client for a BCI-service, or using the BCI to conduct work. Our fictions are inspired by others’ research on the existing labor relationships and power dynamics in crowdwork and distributed content moderation (in particular work by scholars Lilly Irani and Sarah T. Roberts). Here we also critique utopian narratives of brain-controlled computing that suggest BCIs will create new efficiencies, seamless interactions, and increased productivity. We investigate a set of questions on the role of technology in shaping and reproducing social and economic inequalities.

Second, we use the design fiction to surface questions about the situatedness of brain sensing, questioning how generalizable and universal physiological signals are. Building on prior accounts of situated actions and extended cognition, we note the specific and the particular should be taken into account in the design of supposedly generalizable BCI systems.

These themes arose iteratively, and were somewhat surprising for us, particularly just how different the BCI system looks from each of the different perspectives in the fictions. We initially set out to create a rather mundane fictional platform or infrastructure, an API for BCIs. With this starting point we brainstormed other types of direct and indirect relationships people might have with our BCI API to create multiple “entry points” into our API’s world. We iterated on various types of relationships and artifacts—there are end-users, but also clients, software engineers, app developers, each of whom might interact with an API in different ways, directly or indirectly. Through iterations of different scenarios (a BCI-assisted tax filing service was thought of at one point), and through discussions with our colleagues (some of whom posed questions about what labor in higher education might look like with BCIs), we slowly began to think that looking at the work practices implicated in these different relationships and artifacts would be a fruitful way to focus our designs.

Toward “Platform Fictions”

In part, we think that creating design fictions in mundane technical forms like documentation or Stack Overflow posts might help the artifacts be legible to software engineers and technical researchers. More generally, this leads us to think more about what it might mean to put platforms and infrastructures at the center of design fiction (as well as build on some of the insights from platform studies and infrastructure studies). Adoption and use does not occur in a vacuum. Rather, technologies get adopted into and by existing sociotechnical systems. We can use design fiction to open the “black boxes” of emerging sociotechnical systems. Given that infrastructures are often relegated to the background in everyday use, surfacing and focusing on an infrastructure helps us situate our design fictions in the everyday and mundane, rather than dystopia or utopia.

We find that using a digital infrastructure as a starting point helps surface multiple subject positions in relation to the system at different sites of interaction, beyond those of end-users. From each of these subject positions, we can see where contestation may occur, and how the system looks different. We can also see how assumptions, values, and practices surrounding the system at a particular place and time can be hidden, adapted, or changed by the time the system reaches others. Importantly, we also try to surface ways the system gets used in potentially unintended ways – we don’t think that the academic researchers who developed the API to detect brain signal spikes imagined that it would be used in a system of arguably exploitative crowd labor for content moderation.

Our fictions try to blur clear distinctions that might suggest what happens in “labs” is separate from “the outside world,” instead highlighting their entanglements. Given that much of BCI research currently exists in research labs, we raise this point to argue that BCI researchers and designers should also be concerned about the implications of adoption and application. This helps give us insight into the responsibilities (and complicity) of researchers and builders of technical systems. Some of the recent controversies around Cambridge Analytica’s use of Facebook’s API point to ways in which the building of platforms and infrastructures isn’t neutral, and show that it’s incumbent upon designers, developers, and researchers to raise issues related to social concerns and potential inequalities related to adoption and appropriation by others.

Concluding Thoughts

This work isn’t meant to be predictive. The fictions and analysis present our specific viewpoints by focusing on several types of everyday experiences. One can read many themes into our fictions, and we encourage others to do so. But we find that focusing on potential adoptions of an emerging technology in the everyday and mundane helps surface contours of debates that might occur, which might not be immediately obvious when thinking about BCIs – and might not be immediately obvious if we think about social implications in terms of “worst case scenarios” or dystopias. We hope that this work can raise awareness among BCI researchers and designers about social responsibilities they may have for their technology’s adoption and use. In future work, we plan to use these fictions as research probes to understand how technical researchers envision BCI adoptions and their social responsibilities, building on some of our prior projects. And for design researchers, we show that using a fictional platform in design fiction can help raise important social issues about technology adoption and use from multiple perspectives beyond those of end-users, and help surface issues that might arise from unintended or unexpected adoption and use. Using design fiction to interrogate sociotechnical issues present in the everyday can better help us think about the futures we desire.


Crossposted with the UC Berkeley BioSENSE Blog

by Richmond at July 10, 2018 09:56 PM

Ph.D. student

The California Consumer Privacy Act of 2018: a deep dive

I have given the California Consumer Privacy Act of 2018 a close read.

In summary, the Act grants consumers a right to request that businesses disclose the categories of information about them that they collect and sell, and gives consumers the right to have businesses delete their information and to opt out of its sale.

What follows are points I found particularly interesting. Quotations from the Act (that’s what I’ll call it) will be in bold. Questions (meaning, questions that I don’t have an answer to at the time of writing) will be in italics.

Privacy rights

SEC. 2. The Legislature finds and declares that:
(a) In 1972, California voters amended the California Constitution to include the right of privacy among the “inalienable” rights of all people. …

I did not know that. I was under the impression that in the United States, the ‘right to privacy’ was a matter of legal interpretation, derived from other more explicitly protected rights. A right to privacy is enumerated in Article 12 of the Universal Declaration of Human Rights, adopted in 1948 by the United Nations General Assembly. There’s something like a right to privacy in Article 8 of the 1950 European Convention on Human Rights. California appears to have followed their lead on this.

In several places, the Act specifies that exceptions may be made in order to be compliant with federal law. Is there an ideological or legal disconnect between privacy in California and privacy nationally? Consider the Snowden/Schrems/Privacy Shield issue: transfers of European data to the United States are given protections from federal surveillance practices. This presumably means that the U.S. federal government agrees to respect EU privacy rights. Can California negotiate for such treatment from the U.S. government?

These are the rights specifically granted by the Act:

[SEC. 2.] (i) Therefore, it is the intent of the Legislature to further Californians’ right to privacy by giving consumers an effective way to control their personal information, by ensuring the following rights:

(1) The right of Californians to know what personal information is being collected about them.

(2) The right of Californians to know whether their personal information is sold or disclosed and to whom.

(3) The right of Californians to say no to the sale of personal information.

(4) The right of Californians to access their personal information.

(5) The right of Californians to equal service and price, even if they exercise their privacy rights.

It is only recently that I’ve become attuned to the idea of privacy rights. Perhaps this is because I am from a place that apparently does not have them. A comparison that I believe should be made more often is the comparison of privacy rights to property rights. Clearly privacy rights have become as economically relevant as property rights. But currently, property rights enjoy a widespread acceptance and enforcement that privacy rights do not.

Personal information defined through example categories

“Information” is a notoriously difficult thing to define. The Act gets around the problem of defining “personal information” by repeatedly providing many examples of it. The examples are themselves rather abstract and are implicitly “categories” of personal information. Categorization of personal information is important to the law because under several conditions businesses must disclose the categories of personal information collected, sold, etc. to consumers.

SEC. 2. (e) Many businesses collect personal information from California consumers. They may know where a consumer lives and how many children a consumer has, how fast a consumer drives, a consumer’s personality, sleep habits, biometric and health information, financial information, precise geolocation information, and social networks, to name a few categories.

[1798.140.] (o) (1) “Personal information” means information that identifies, relates to, describes, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household. Personal information includes, but is not limited to, the following:

(A) Identifiers such as a real name, alias, postal address, unique personal identifier, online identifier Internet Protocol address, email address, account name, social security number, driver’s license number, passport number, or other similar identifiers.

(B) Any categories of personal information described in subdivision (e) of Section 1798.80.

(C) Characteristics of protected classifications under California or federal law.

(D) Commercial information, including records of personal property, products or services purchased, obtained, or considered, or other purchasing or consuming histories or tendencies.

Note that protected classifications (1798.140.(o)(1)(C)) include race, which is a socially constructed category (see Omi and Winant on racial formation). The Act appears to be saying that personal information includes the race of the consumer. Contrast this with information as identifiers (see 1798.140.(o)(1)(A)) and information as records (1798.140.(o)(1)(D)). So “personal information” in one case is a property of a person (and a socially constructed one at that); in another case it is a specific syntactic form; in another case it is a document representing some past action. The Act is very ontologically confused.

Other categories of personal information include (continuing this last section):


(E) Biometric information.

(F) Internet or other electronic network activity information, including, but not limited to, browsing history, search history, and information regarding a consumer’s interaction with an Internet Web site, application, or advertisement.

Devices and Internet activity will be discussed in more depth in the next section.


(G) Geolocation data.

(H) Audio, electronic, visual, thermal, olfactory, or similar information.

(I) Professional or employment-related information.

(J) Education information, defined as information that is not publicly available personally identifiable information as defined in the Family Educational Rights and Privacy Act (20 U.S.C. section 1232g, 34 C.F.R. Part 99).

(K) Inferences drawn from any of the information identified in this subdivision to create a profile about a consumer reflecting the consumer’s preferences, characteristics, psychological trends, preferences, predispositions, behavior, attitudes, intelligence, abilities, and aptitudes.

Given that the main use of information is to support inferences, it is notable that inferences are dealt with here as a special category of information, and that sensitive inferences are those that pertain to behavior and psychology. This may be narrowly interpreted to exclude some kinds of inferences that may be relevant and valuable but not so immediately recognizable as ‘personal’. For example, one could infer from personal information the ‘position’ of a person in an arbitrary multi-dimensional space that compresses everything known about a consumer, and use this representation for targeted interventions (such as advertising). Or one could interpret it broadly: since almost all personal information is relevant to ‘behavior’ in a broad sense, any inference from it is also ‘about behavior’ and therefore protected.
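To make the ‘position in a multi-dimensional space’ example concrete, here is a minimal, purely illustrative sketch (my own, not anything prescribed by the Act) of how a business might compress a matrix of consumer attributes into a low-dimensional representation usable for targeting. Whether such a representation counts as an inference “reflecting the consumer’s preferences” under (K) is exactly the interpretive question raised above.

```python
# Hypothetical illustration: compress "everything known about a consumer" into a
# low-dimensional position vector via PCA (truncated SVD). The feature matrix
# and dimensionality are made up for illustration.
import numpy as np

def consumer_positions(features: np.ndarray, n_dims: int = 8) -> np.ndarray:
    """`features` has shape (n_consumers, n_attributes): purchase counts,
    browsing frequencies, device signals, and so on. Returns an
    (n_consumers, n_dims) array of 'positions' in a latent space."""
    centered = features - features.mean(axis=0)
    # Keep only the top n_dims directions of variation.
    U, S, _ = np.linalg.svd(centered, full_matrices=False)
    return U[:, :n_dims] * S[:n_dims]
```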

Device behavior

The Act focuses on the rights of consumers and deals somewhat awkwardly with the fact that most information collected about consumers is done indirectly through machines. The Act acknowledges that sometimes devices are used by more than one person (for example, when they are used by a family), but it does not deal easily with other forms of sharing arrangements (e.g., an open Wifi hotspot) and the problems associated with identifying which person a particular device’s activity is “about”.

[1798.140.] (g) “Consumer” means a natural person who is a California resident, as defined in Section 17014 of Title 18 of the California Code of Regulations, as that section read on September 1, 2017, however identified, including by any unique identifier. [SB: italics mine.]

[1798.140.] (x) “Unique identifier” or “Unique personal identifier” means a persistent identifier that can be used to recognize a consumer, a family, or a device that is linked to a consumer or family, over time and across different services, including, but not limited to, a device identifier; an Internet Protocol address; cookies, beacons, pixel tags, mobile ad identifiers, or similar technology; customer number, unique pseudonym, or user alias; telephone numbers, or other forms of persistent or probabilistic identifiers that can be used to identify a particular consumer or device. For purposes of this subdivision, “family” means a custodial parent or guardian and any minor children over which the parent or guardian has custody.

Suppose you are a business that collects traffic information and website behavior connected to IP addresses, but you don’t go through the effort of identifying the ‘consumer’ who is doing the behavior. In fact, you may collect a lot of traffic behavior that is not connected to any particular ‘consumer’ at all, but is rather the activity of a bot or crawler operated by a business. Are you on the hook to disclose personal information to consumers if they ask for their traffic activity? If they do, or if they do not, provide their IP address?

Incidentally, while the Act seems comfortable defining a Consumer as a natural person identified by a machine address, it also happily defines a Person as “proprietorship, firm, partnership, joint venture, syndicate, business trust, company, corporation, …” etc. in addition to “an individual”. Note that “personal information” is specifically information about a consumer, not a Person (i.e., business).

This may make you wonder what a Business is, since these are the entities that are bound by the Act.

Businesses and California

The Act mainly details the rights that consumers have with respect to businesses that collect, sell, or lose their information. But what is a business?

[1798.140.] (c) “Business” means:
(1) A sole proprietorship, partnership, limited liability company, corporation, association, or other legal entity that is organized or operated for the profit or financial benefit of its shareholders or other owners, that collects consumers’ personal information, or on the behalf of which such information is collected and that alone, or jointly with others, determines the purposes and means of the processing of consumers’ personal information, that does business in the State of California, and that satisfies one or more of the following thresholds:

(A) Has annual gross revenues in excess of twenty-five million dollars ($25,000,000), as adjusted pursuant to paragraph (5) of subdivision (a) of Section 1798.185.

(B) Alone or in combination, annually buys, receives for the business’ commercial purposes, sells, or shares for commercial purposes, alone or in combination, the personal information of 50,000 or more consumers, households, or devices.

(C) Derives 50 percent or more of its annual revenues from selling consumers’ personal information.

This is not a generic definition of a business, just as the earlier definition of ‘consumer’ is not a generic definition of consumer. This definition of ‘business’ is a sui generis definition for the purposes of consumer privacy protection, as it defines businesses in terms of their collection and use of personal information. The definition explicitly thresholds the applicability of the law to businesses over certain limits.
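To make the threshold logic concrete, here is a minimal sketch (my own paraphrase of 1798.140.(c)(1), an illustration rather than legal advice) of the conditions under which an entity appears to qualify as a “business” under the Act:

```python
# Rough paraphrase of the Act's "business" definition (1798.140.(c)(1)(A)-(C)).
# Parameter names are my own; this is an illustration, not legal advice.
def is_covered_business(annual_gross_revenue_usd: float,
                        annual_personal_info_records: int,
                        share_of_revenue_from_selling_pi: float,
                        is_for_profit: bool,
                        does_business_in_california: bool) -> bool:
    """True if a for-profit entity doing business in California appears to meet
    at least one of the Act's three thresholds."""
    if not (is_for_profit and does_business_in_california):
        return False
    meets_revenue = annual_gross_revenue_usd > 25_000_000         # (A)
    meets_volume = annual_personal_info_records >= 50_000         # (B): consumers, households, or devices
    meets_sale_share = share_of_revenue_from_selling_pi >= 0.50   # (C)
    return meets_revenue or meets_volume or meets_sale_share
```

As the next example suggests, the volume threshold in (B) is the one with the strangest edge cases.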

There does appear to be a lot of wiggle room and potential for abuse here. Consider: the Mirai botnet had by one estimate 2.5 million devices compromised. Say you are a small business that collects site traffic. Suppose the Mirai botnet targets your site with a DDOS attack. Suddenly, your business collects information of millions of devices, and the Act comes into effect. Now you are liable for disclosing consumer information. Is that right?

An alternative reading of this section would recall that the definition (!) of consumer, in this law, is a California resident. So maybe the thresholds in 1798.140.(c)(B) and 1798.140.(c)(C) refer specifically to Californian consumers. Of course, for any particular device, information about where that device’s owner lives is personal information.

Having 50,000 California customers or users is a decent threshold for defining whether or not a business “does business in California”. Given the size and demographics of California, you would expect many major Chinese technology companies, Tencent for example, to have 50,000 Californian users. This brings up the question of extraterritorial enforcement, which gave the GDPR so much leverage.

Extraterritoriality and financing

In a nutshell, it looks like the Act is intended to allow Californians to sue foreign companies. How big a deal is this? The penalties for noncompliance are civil penalties assessed per violation (presumably per individual violation), not a share of profits, but you could imagine them adding up:

[1798.155.] (b) Notwithstanding Section 17206 of the Business and Professions Code, any person, business, or service provider that intentionally violates this title may be liable for a civil penalty of up to seven thousand five hundred dollars ($7,500) for each violation.

(c) Notwithstanding Section 17206 of the Business and Professions Code, any civil penalty assessed pursuant to Section 17206 for a violation of this title, and the proceeds of any settlement of an action brought pursuant to subdivision (a), shall be allocated as follows:

(1) Twenty percent to the Consumer Privacy Fund, created within the General Fund pursuant to subdivision (a) of Section 1798.109, with the intent to fully offset any costs incurred by the state courts and the Attorney General in connection with this title.

(2) Eighty percent to the jurisdiction on whose behalf the action leading to the civil penalty was brought.

(d) It is the intent of the Legislature that the percentages specified in subdivision (c) be adjusted as necessary to ensure that any civil penalties assessed for a violation of this title fully offset any costs incurred by the state courts and the Attorney General in connection with this title, including a sufficient amount to cover any deficit from a prior fiscal year.

1798.160. (a) A special fund to be known as the “Consumer Privacy Fund” is hereby created within the General Fund in the State Treasury, and is available upon appropriation by the Legislature to offset any costs incurred by the state courts in connection with actions brought to enforce this title and any costs incurred by the Attorney General in carrying out the Attorney General’s duties under this title.

(b) Funds transferred to the Consumer Privacy Fund shall be used exclusively to offset any costs incurred by the state courts and the Attorney General in connection with this title. These funds shall not be subject to appropriation or transfer by the Legislature for any other purpose, unless the Director of Finance determines that the funds are in excess of the funding needed to fully offset the costs incurred by the state courts and the Attorney General in connection with this title, in which case the Legislature may appropriate excess funds for other purposes.

So, just to be concrete: suppose a business collects personal information on 50,000 Californians and does not disclose that information. California could then sue that business for $7,500 * 50,000 = $375 million in civil penalties, that then goes into the Consumer Privacy Fund, whose purpose is to cover the cost of further lawsuits. The process funds itself. If it makes any extra money, it can be appropriated for other things.

Meaning, I guess, this Act basically sets up a self-sustaining stream of investigations and fines. You could imagine that this starts out with just some lawyers responding to civil complaints. But consider the scope of the Act, and how it means that any business in the world not properly disclosing information about Californians is liable to be fined. Suppose that some kind of blockchain- or botnet-based entity starts committing surveillance in violation of this Act on a large scale. What kinds of technical investigative capacity are necessary to enforce this kind of thing worldwide? Does this become a self-funding cybercrime investigative unit? How are foreign actors who are responsible for such things brought to justice?

This is where it’s totally clear that I am not a lawyer. I am still puzzling over the meaning of 1798.155.(c)(2), for example.

“Publicly available information”

There are more weird quirks to this Act than I can dig into in this post, but one that deserves mention (as homage to Helen Nissenbaum, among other reasons) is the stipulation about publicly available information, which does not mean what you think it means:

(2) “Personal information” does not include publicly available information. For these purposes, “publicly available” means information that is lawfully made available from federal, state, or local government records, if any conditions associated with such information. “Publicly available” does not mean biometric information collected by a business about a consumer without the consumer’s knowledge. Information is not “publicly available” if that data is used for a purpose that is not compatible with the purpose for which the data is maintained and made available in the government records or for which it is publicly maintained. “Publicly available” does not include consumer information that is deidentified or aggregate consumer information.

The grammatical error in the second sentence (the phrase beginning with “if any conditions” trails off into nowhere…) indicates that this paragraph was hastily written and never finished, as if in response to an afterthought. There’s a lot going on here.

First, the sense of ‘public’ used here is the sense of ‘public institutions’ or the res publica. Amazingly and a bit implausibly, government records are considered publicly available only when they are used for purposes compatible with their maintenance. So if a business takes a public record and uses it differently than it was originally intended when it was ‘made available’, it becomes personal information that must be disclosed? As somebody who came out of the Open Data movement, I have to admit I find this baffling. On the other hand, it may be the brilliant solution to privacy in public on the Internet that society has been looking for.

Second, the stipulation that “publicly available” does not mean biometric information collected by a business about a consumer without the consumer’s knowledge is surprising. It appears to be written with particular cases in mind–perhaps IoT sensing. But why specifically biometric information, as opposed to other kinds of information collected without consumer knowledge?

There is a lot going on in this paragraph. Oddly, it is not one of the ones explicitly flagged for review and revision in the section soliciting public participation on changes before the Act goes into effect in 2020.

A work in progress

1798.185. (a) On or before January 1, 2020, the Attorney General shall solicit broad public participation to adopt regulations to further the purposes of this title, including, but not limited to, the following areas:

This is a weird law. I suppose it was written and passed to capitalize on a particular political moment and crisis (Sec. 2 specifically mentions Cambridge Analytica as a motivation), drafted to best express its purpose and intent, and given the horizon of 2020 to allow for revisions.

It must be said that there’s nothing in this Act that threatens the business models of any American Big Tech companies in any way, since storing consumer information in order to provide derivative ad targeting services is totally fine as long as businesses do the right disclosures, which they are now all doing because of GDPR anyway. There is a sense that this is California taking the opportunity to start the conversation about what U.S. data protection law post-GDPR will be like, which is of course commendable. As a statement of intent, it is great. Where it starts to get funky is in the definitions of its key terms and the underlying theory of privacy behind them. We can anticipate some rockiness there and try to unpack these assumptions before adopting similar policies in other states.

by Sebastian Benthall at July 10, 2018 03:16 PM

July 09, 2018

Ph.D. student

some moral dilemmas

Here are some moral dilemmas:

  • A firm basis for morality is the Kantian categorical imperative: treat others as ends and not means, with the corollary that one should be able to take the principles of one’s actions and extend them as laws binding all rational beings. Closely associated and important ideas are those concerned with human dignity and rights. However, the great moral issues of today are about social forms (issues around race, gender, etc.), sociotechnical organizations (issues around the role of technology), or totalizing systemic issues (issues around climate change). Morality based on individualism and individual equivalence seems out of place when the main moral difficulties are about body agonism. What is the basis for morality for these kinds of social moral problems?
  • Theodicy has its answer: it’s bounded rationality. Ultimately what makes us different from other people, that which creates our multiplicity, is our distance from each other, in terms of available information. Our disconnection, based on the different loci and foci within complex reality, is precisely that which gives reality its complexity. Dealing with each other’s ignorance is the problem of being a social being. Ignorance is therefore the condition of society. Society is the condition of moral behavior; if there were only one person, there would be no such thing as right or wrong. Therefore, ignorance is a condition of morality. How, then, can morality be known?

by Sebastian Benthall at July 09, 2018 01:53 PM

July 06, 2018

Ph.D. student

On “Racialization” (Omi and Winant, 2014)

Notes on Omi and Winant, 2014, Chapter 4, Section: “Racialization”.

Summary

Race is often seen as either an objective category, or an illusory one.

Viewed objectively, it is seen as a biological property, tied to phenotypic markers and possibly other genetic traits. It is viewed as an ‘essence’.
Omi and Winant argue that the concept of ‘mixed-race’ depends on this kind of essentialism, as it implies a kind of blending of essences. This is the view associated with “scientific” racism, most prevalent in the prewar era.

Viewed as an illusion, race is seen as an ideological construct. An epiphenomenon of culture, class, or peoplehood. Formed as a kind of “false consciousness”, in the Marxist terminology. This view is associated with certain critics of affirmative action who argue that any racial classification is inherently racist.

Omi and Winant are critical of both perspectives, and argue for an understanding of race as socially real and grounded non-reducibly in phenomic markers but ultimately significant because of the social conflicts and interests constructed around those markers.

They define race as: “a concept that signifies and symbolizes social conflicts and interests by referring to different types of human bodies.”

The visual aspect of race is irreducible, and becomes significant when, for example, it becomes “understood as a manifestation of more profound differences that are situated within racially identified persons: intelligence, athletic ability, temperament, and sexuality, among other traits.” These “understandings”, which it must be said may be fallacious, “become the basis to justify or reinforce social differentiation.”

This process of adding social significance to phenomic markers is, in O&W’s language, racialization, which they define as “the extension of racial meanings to a previously racially unclassified relationship, social practice, or group.” They argue that racialization happens at both macro and micro scales, ranging from the consolidation of the world-system through colonialization to incidents of racial profiling.

Race, then, is a concept that refers to different kinds of bodies by phenotype and the meanings and social practices ascribed to them. When racial concepts are circulated and accepted as ‘social reality’, racial differences are not dependent on visual difference alone, but take on a life of their own.

Omi and Winant therefore take a nuanced view of what it means for a category to be socially constructed, and it is a view that has concrete political implications. They consider the question, raised frequently, as to whether “we” can “get past” race, or go beyond it somehow. (Recall that this edition of the book was written during the Obama administration and is largely a critique of the idea, which seems silly now, that his election made the United States “post-racial”).

Omi and Winant see this framing as unrealistically utopian and based on the extreme view that race is “illusory”. It poses race as a problem, a misconception of the past. A more effective position, they claim, would note that race is an element of social structure, not an irregularity in it. “We” cannot naively “get past it”, but also “we” do not need to accept the erroneous conclusion that race is a fixed biological given.

Comments

Omi and Winant’s argument here is mainly one about the ontology of social forms. In my view, this question of social form ontology is one of the “hard problems” remaining in philosophy, perhaps equivalent to if not more difficult than the hard problem of consciousness. So no wonder it is such a fraught issue.

The two poles of thinking about race that they present initially, the essentialist view and the epiphenomenal view, had their heyday in particular historical intellectual movements. Proponents of these positions are still popularly active today, though perhaps it’s fair to say that both extremes are now marginalized out of the intellectual mainstream. Despite nobody really understanding how social construction works, most educated people are probably willing to accept that race is socially constructed in one way or another.

It is striking, then, that Omi and Winant’s view of the mechanism of racialization, which involves the reading of ‘deeper meanings’ into phenomic traits, is essentially a throwback to the objective, essentializing viewpoint.
Perhaps there is a kind of cognitive bias, maybe representativeness bias or fundamental attribution bias, which is responsible for the cognitive errors that make racialization possible and persistent.

If so, then the social construction of race would be due as much to the limits of human cognition as to the circulation of concepts. That would explain the temptation to believe that we can ‘get past’ race, because we can always believe in the potential for a society in which people are smarter and are trained out of their basic biases. But Omi and Winant would argue that this is utopian. Perhaps the wisdom of sociology and social science in general is the conservative recognition of the widespread implications of human limitation. As the social expert, one can take the privileged position that notes that social structure is the result of pervasive cognitive error. That pervasive cognitive error is perhaps a more powerful force than the forces developing and propagating social expertise. Whether it is or is not may be the existential question for liberal democracy.

An unanswered question at this point is whether, if race were broadly understood as a function of social structure, it remains as forceful a structuring element as if it is understood as biological essentialism. It is certainly possible that, if understood as socially contingent, the structural power of race will steadily erode through such statistical processes as regression to the mean. In terms of physics, we can ask whether the current state of the human race(s) is at equilibrium, or heading towards an equilibrium, or diverging in a chaotic and path-dependent way. In any of these cases, there is possibly a role to be played by technical infrastructure. In other words, there are many very substantive and difficult social scientific questions at the root of the question of whether and how technical infrastructure plays a role in the social reproduction of race.

by Sebastian Benthall at July 06, 2018 05:40 PM

July 02, 2018

Ph.D. student

“The Theory of Racial Formation”: notes, part 1 (Cha. 4, Omi and Winant, 2014)

Chapter 4 of Omi and Winant (2014) is “The Theory of Racial Formation”. It is where they lay out their theory of race and its formation, synthesizing and improving on theories of race as ethnicity, race as class, and race as nation that they consider earlier in the book.

This rhetorical strategy of presenting the historical development of multiple threads of prior theory before synthesizing them into something new is familiar to me from my work with Helen Nissenbaum on Contextual Integrity. CI is a theory of privacy that advances prior legal and social theories by teasing out their tensions. This seems to be a good way to advance theory through scholarship. It is interesting that the same method of theory building can work in multiple fields. My sense is that what’s going on is that there is an underlying logic to this process which in a less Anglophone world we might call “dialectical”. But I digress.

I have not finished Chapter 4 yet but I wanted to sketch out the outline of it before going into detail. That’s because Omi and Winant are presenting a way of understanding the mechanisms behind the reproduction of race that is not simplistically “systemic” but rather breaks them down into discrete operations. This is a helpful contribution; even if the theory is not entirely accurate, its very specificity elevates the discourse.

So, in brief notes:

For Omi and Winant, race is a way of “making up people”; they attribute this phrase to Ian Hacking but do not develop Hacking’s definition. Their reference to a philosopher of science does situate them in a scholarly sense; it is nice that they seem to acknowledge an implicit hierarchy of theory that places philosophy at the foundation. This is correct.

Race-making is a form of othering, of having a group of people identify another group as outsiders. Othering is a basic and perhaps unavoidable human psychological function; their reference for this is powell and Menendian. (Apparently john a. powell, like danah boyd, is one of those people who decapitalize their name.)

Race is of course a social construct that is neither a fixed and coherent category nor something that is “unreal”. That is, presumably, why we need a whole book on the dynamic mechanisms that form it. One reason why race is such a dynamic concept is because (a) it is a way of organizing inequality in society, (b) the people on “top” of the hierarchy implied by racial categories enforce/reproduce that category “downwards”, (c) the people on the “bottom” of the hierarchy implied by racial categories also enforce/reproduce a variation of those categories “upwards” as a form of resistance, and so (d) the state of the racial categories at any particular time is a temporary consequence of conflicting “elite” and “street” variations of it.

This presumes that race is fundamentally about inequality. Omi and Winant believe it is. In fact, they think racial categories are a template for all other social categories that are about inequality. This is what they mean by their claim that race is a master category. It’s “a frame used for organizing all manner of political thought”, particularly political thought about liberation struggles.

I’m not convinced by this point. They develop it with a long discussion of intersectionality that is also unconvincing to me. Historically, they point out that sometimes women’s movements have allied with black power movements, and sometimes they haven’t. They want the reader to think this is interesting; as a data scientist, I see randomness and lack of correlation. They make the poignant and true point that “perhaps at the core of intersectionality practice, as well as theory, is the ‘mixed race’ category. Well, how does it come about that people can be ‘mixed’?” They then drop the point with no further discussion.

Perhaps the book suffers from its being aimed at undergraduates. Omi and Winant are unable to bring up even the most basic explanation for why there are mixed race people: a male person of one race and a female person of a different race have a baby, and that creates a mixed race person (whether that child is male or female). The basic fact that race is hereditary whereas sex is not is probably really important to the intersectionality between race and sex and the different ways those categories are formed; somehow this point is never mentioned in discussions of intersectionality. Perhaps this is because of the ways this salient difference between race and sex undermines the aim of political solidarity that so much intersectional analysis seems to be going for. Relatedly, contemporary sociological theory seems to have some trouble grasping conventional sexual reproduction, perhaps because it is so sensitized to all the exceptions to it. Still, they drop the ball a bit by bringing this up and not going into any analytic depth about it at all.

Omi and Winant make an intriguing comment, “In legal theory, the sexual contract and racial contract have often been compared”. I don’t know what this is about but I want to know more.

This is all a kind of preamble to their presentation of theory. They start to provide some definitions:

racial formation
The sociohistorical process by which racial identities are created, lived out, transformed, and destroyed.
racialization
How phenomic-corporeal dimensions of bodies acquire meaning in social life.
racial projects
The co-constitutive ways that racial meanings are translated into social structures and become racially signified.
racism
Not defined. A property of racial projects that Omi and Winant will discuss later.
racial politics
Ways that the politics (of a state?) can handle race, including racial despotism, racial democracy, and racial hegemony.

This is a useful breakdown. More detail in the next post.

by Sebastian Benthall at July 02, 2018 03:36 PM

June 27, 2018

MIMS 2012

Notes from “Good Strategy / Bad Strategy”

Strategy has always been a fuzzy concept in my mind. What goes into a strategy? What makes a strategy good or bad? How is it different from vision and goals? Good Strategy / Bad Strategy, by UCLA Anderson School of Management professor Richard P. Rumelt, takes a nebulous concept and makes it concrete. He explains what goes into developing a strategy, what makes a strategy good, and what makes a strategy bad – which makes good strategy even clearer.

As I read the book, I kept underlining passages and scribbling notes in the margins because it’s so full of good information and useful techniques that are just as applicable to my everyday work as they are to running a multi-national corporation. To help me use the concepts I learned, I decided to publish my notes and key takeaways so I can refer back to them later.

The Kernel of Strategy

Strategy is designing a way to deal with a challenge. A good strategy, therefore, must identify the challenge to be overcome, and design a way to overcome it. To do that, the kernel of a good strategy contains three elements: a diagnosis, a guiding policy, and coherent action.

  • A diagnosis defines the challenge. What’s holding you back from reaching your goals? A good diagnosis simplifies the often overwhelming complexity of reality down to a simpler story by identifying certain aspects of the situation as critical. A good diagnosis often uses a metaphor, analogy, or an existing accepted framework to make it simple and understandable, which then suggests a domain of action.
  • A guiding policy is an overall approach chosen to cope with or overcome the obstacles identified in the diagnosis. Like the guardrails on a highway, the guiding policy directs and constrains action in certain directions without defining exactly what shall be done.
  • A set of coherent actions dictates how the guiding policy will be carried out. The actions should be coherent, meaning the use of resources, policies, and maneuvers that are undertaken should be coordinated and support each other (not fight each other, or be independent from one another).

Good Strategy vs. Bad Strategy

  • Good strategy is simple and obvious.
  • Good strategy identifies the key challenge to overcome. Bad strategy fails to identify the nature of the challenge. If you don’t know what the problem is, you can’t evaluate alternative guiding policies or actions to take, and you can’t adjust your strategy as you learn more over time.
  • Good strategy includes actions to take to overcome the challenge. Actions are not “implementation” details; they are the punch in the strategy. Strategy is about how an organization will move forward. Bad strategy lacks actions to take. Bad strategy mistakes goals, ambition, vision, values, and effort for strategy (these things are important, but on their own are not strategy).
  • Good strategy is designed to be coherent – all the actions an organization takes should reinforce and support each other. Leaders must do this deliberately and coordinate action across departments. Bad strategy is just a list of “priorities” that don’t support each other, at best, or actively conflict with each other, undermine each other, and fight for resources, at worst. The rich and powerful can get away with this, but it makes for bad strategy.
    • This was the biggest “ah-ha!” moment for me. Every strategy I’ve seen has just been a list of unconnected objectives. Designing a strategy that’s coherent and mutually reinforcing is a huge step forward in crafting good strategies.
  • Good strategy is about focusing and coordinating efforts to achieve an outcome, which necessarily means saying “No” to some goals, initiatives, and people. Bad strategy is the result of a leader who’s unwilling or unable to say “No.” The reason good strategy looks so simple is because it takes a lot of effort to maintain the coherence of its design by saying “No” to people.
  • Good strategy leverages sources of power to overcome an obstacle. It brings relative strength to bear against relative weakness (more on that below).

How to Identify Bad Strategy

Four Major Hallmarks of Bad Strategy

  • Fluff: A strategy written in gibberish masquerading as strategic concepts is classic bad strategy. It uses abstruse and inflated words to create the illusion of high-level thinking.
  • Failure to face the challenge: A strategy that does not define the challenge to overcome makes it impossible to evaluate, and impossible to improve.
  • Mistaking goals for strategy: Many bad strategies are just statements of desire rather than plans for overcoming obstacles.
  • Bad strategic objectives: A strategic objective is a means to overcoming an obstacle. Strategic objectives are “bad” when they fail to address critical issues or when they are impracticable.

Some Forms of Bad Strategy

  • Dog’s Dinner Objectives: A long list of “things to do,” often mislabeled as “strategies” or “objectives.” These lists usually grow out of planning meetings in which stakeholders state what they would like to accomplish, then they throw these initiatives onto a long list called the “strategic plan” so that no one’s feelings get hurt, and they apply the label “long-term” so that none of them need be done today.
    • In tech-land, I see a lot of companies conflate OKRs (Objectives and Key Results) with strategy. OKRs are an exercise in goal setting and measuring progress towards those goals (which is important), but they don’t replace strategy work. The process typically looks like this: once a year, each department head is asked to come up with their own departmental OKRs, which are supposed to be connected to company goals (increase revenue, decrease costs, etc.). Then each department breaks down their OKRs into sub-OKRs for their teams to carry out, which are then broken down into sub-sub-OKRs for sub-teams and/or specific people, and so on down the chain. This process just perpetuates departmental silos, and the resulting OKRs are rarely cohesive or mutually supportive of each other (if this does happen, it’s usually a happy accident). Department and team leaders often throw dependencies onto other departments and teams, creating extra work that those teams haven’t planned for and that isn’t connected to their own OKRs, which drags down the efficiency and effectiveness of the entire organization. It’s easy for leaders to underestimate this drag since it’s hard to measure, and what isn’t measured isn’t managed.
    • As this book makes clear, setting objectives is not the same as creating a strategy to reach those goals. You still need to do the hard strategy work of making a diagnosis of what obstacle is holding you back, creating a guiding policy for overcoming the obstacle, and breaking that down into coherent actions for the company to take. Those actions shouldn’t be based on the departments, people, or expertise you already have; instead, look at what competencies you need to carry out your strategy, apply existing teams and people to them where they fit, hire where you’re missing expertise, and get rid of competencies that are no longer needed in the strategy. OKRs can be applied at the top layer as company goals to reach, then applied again to the coherent actions (i.e. what’s the objective of each action, and how will you know if you reached it?), and further broken down for teams and people as needed. You still need an actual strategy before you can set OKRs, but most companies conflate the two.
  • Blue Sky Objectives: A blue-sky objective is a simple restatement of the desired state of affairs or of the challenge. It skips over the annoying fact that no one has a clue as to how to get there.
    • For example, “underperformance” isn’t a challenge, it’s a result. It’s a restatement of a goal. The true challenges are the reasons behind the underperformance. Unless leadership offers a theory of why things haven’t worked in the past (a.k.a. a diagnosis), or why the challenge is difficult, it is hard to generate good strategy.
  • The Unwillingness or Inability to Choose: Any strategy that has universal buy-in signals the absence of choice. Because strategy focuses resources, energy, and attention on some objectives rather than others, a change in strategy will make some people worse off and there will be powerful forces opposed to almost any change in strategy (e.g. a department head who faces losing people, funding, headcount, support, etc., as a result of a change in strategy will most likely be opposed to the change). Therefore, strategy that has universal buy-in often indicates a leader who was unwilling to make a difficult choice as to the guiding policy and actions to take to overcome the obstacles.
    • This is true, but there are ways of mitigating this that he doesn’t discuss, which I talk about in the “Closing Thoughts” section below.
  • Template-style “strategic planning:” Many strategies are developed by following a template of what a “strategy” should look like. Since strategy is somewhat nebulous, leaders are quick to adopt a template they can fill in since they have no other frame of reference for what goes into a strategy.
    • These templates usually take this form:
      • The Vision: Your unique vision of what the org will be like in the future. Often starts with “the best” or “the leading.”
      • The Mission: High-sounding politically correct statement of the purpose of the org.
      • The Values: The company’s values. Make sure they are non-controversial.
      • The Strategies: Fill in some aspirations/goals but call them strategies.
    • This template-style strategy skips over the hard work of identifying the key challenge to overcome, and setting out a guiding policy and actions to overcome the obstacle. It mistakes pious statements of the obvious as if they were decisive insights. The vision, mission, and goals are usually statements that no one would argue against, but that no one is inspired by, either.
    • I found myself alternating between laughing and shaking my head in disbelief because this section is so on the nose.
  • New Thought: This is the belief that you only need to envision success to achieve it, and that thinking about failure will lead to failure. The problem with this belief is that strategy requires you to analyze the situation to understand the problem to be solved, as well as anticipating the actions/reactions of customers and competitors, which requires considering both positive and negative outcomes. Ignoring negative outcomes does not set you up for success or prepare you for the unthinkable to happen. It crowds out critical thinking.

Sources of Power

Good strategy will leverage one or more sources of power to overcome the key obstacles. Rumelt describes the following sources of power, though the list is not exhaustive:

  • Leverage: Leverage is finding an imbalance in a situation, and exploiting it to produce a disproportionately large payoff. Or, in resource constrained situations (e.g. a startup), it’s using the limited resources at hand to achieve the biggest result (i.e. not trying to do everything at once). Strategic leverage arises from a mixture of anticipating the actions and reactions of competitors and buyers, identifying a pivot point that will magnify the effects of focused effort (e.g. an unmet need of people, an underserved market, your relative strengths/weaknesses, a competence you’ve developed that can be applied to a new context, and so on), and making a concentrated application of effort on only the most critical objectives to get there.
    • This is a lesson in constraints – a company that isn’t rich in resources (i.e. money, people) is forced to find a sustainable business model and strategy, or perish. I see startups avoid making hard choices about which objectives to pursue by taking investor money to hire their way out of deciding what not to do. They can avoid designing a strategy by just throwing spaghetti at the wall and hoping something sticks, and if it doesn’t, going back to the investors for more handouts. “Fail fast,” “Ready, fire, aim,” “Move fast and break things,” etc., are all Silicon Valley versions of this thinking worshiped by the industry. If a company is resource constrained, it’s forced to find a sustainable business model and strategy sooner. VC money has a habit of making companies lazy when it comes to the business fundamentals of strategy and turning a profit.
  • Proximate Objectives: Choose an objective that is close enough at hand to be feasible, i.e. proximate. This doesn’t mean your goal needs to lack ambition, or be easy to reach, or that you’re sandbagging. Rather, you should know enough about the nature of the challenge that the sub-problems to work through are solvable, and it’s a matter of focusing individual minds and energy on the right areas to reach an otherwise unreachable goal. For example, landing a man on the moon by 1969 was a proximate objective because Kennedy knew the technology and science necessary was within reach, and it was a matter of allocating, focusing, and coordinating resources properly.
  • Chain-link Systems: A system has chain-link logic when its performance is limited by its weakest link. In a business context, this typically means each department is dependent on the other such that if one department underperforms, the performance of the entire system will decline. In a strategic setting, this can cause organizations to become stuck, meaning the chain is not made stronger by strengthening one link – you must strengthen the whole chain (and thus becoming un-stuck is its own strategic challenge to overcome). On the flip side, if you design a chain link system, then you can achieve a level of excellence that’s hard for competitors to replicate. For example, IKEA designs its own furniture, builds its own stores, and manages the entire supply chain, which allows it to have lower costs and a superior customer experience. Their system is chain-linked together such that it’s hard for competitors to replicate it without replicating the entire system. IKEA is susceptible to getting stuck, however, if one link of its chain suffers.
  • Design: Good strategy is design – fitting various pieces together so they work as a coherent whole. Creating a guiding policy and actions that are coherent is a source of power since so few companies do this well. As stated above, a lot of strategies aren’t “designed” and instead are just a list of independent or conflicting objectives.
    • The tight integration of a designed strategy comes with a downside, however — it’s narrower in focus, more fragile, and less flexible in responding to change. If you’re a huge company with a lot of resources at your disposal (e.g. Microsoft), a tightly designed strategy could be a hindrance. But in situations where resources are constrained (e.g. a startup grasping for a foothold in the market), or the competitive challenge is high, a well-designed strategy can give you the advantage you need to be successful.
  • Focus: Focus refers to attacking a segment of the market with a product or service that delivers more value to that segment than other players do for the entire market. Doing this requires coordinating policies and objectives across an organization to produce extra power through their interacting and overlapping effects (see design, above), and then applying that power to the right market segment (see leverage, above).
    • This source of power exists in the UX and product world in the form of building for one specific persona who will love your product, capturing a small – but loyal – share of the market, rather than trying to build a product for “everyone” that captures a potentially bigger part of the market but that no one loves or is loyal to (making it susceptible to people switching to competitors). This advice is especially valuable for small companies and startups who are trying to establish themselves.
  • Growth: Growing the size of the business is not a strategy – it is the result of increased demand for your products and services. It is the reward for successful innovation, cleverness, efficiency, and creativity. In business, there is blind faith that growth is good, but that is not the case. Growth itself does not automatically create value.
    • The tech industry has unquestioned faith in growth. VC-backed companies are expected to grow as big as possible, as fast as possible. If you don’t agree, you’re said to lack ambition, and investors won’t fund you. This myth is perpetuated by the tech media. But as Rumelt points out, growth isn’t automatically good. Most companies don’t need to be, and can’t be, as big as companies like Google, Facebook, Apple, and Amazon. Tech companies grow in an artificial way, i.e. spending the money of their investors, not money they’re making from customers. This growth isn’t sustainable, and when they can’t turn a profit they shut down (or get acquired). What could have been a great company, at a smaller size or slower growth rate, now no longer exists. This generally doesn’t harm investors because they only need a handful of big exits out of their entire portfolio, so they pay for the ones that fail off of the profits from the few that actually make it big.
  • Using Advantage: An advantage is the result of differences – an asymmetry between rivals. Knowing your relative strengths and weaknesses, as well as the relative strengths and weaknesses of your competitors, can help you find an advantage. Strengths and weaknesses are “relative” because a strength you have in one context, or against one competitor, may be a weakness in another context, or against a different competitor. You must press where you have advantage and side-step situations in which you do not. You must exploit your rivals’ weaknesses and avoid leading with your own.
    • The most basic advantage is producing at a lower cost than your competitors, or delivering more perceived value than your competitors, or a mix of the two. The difficult part is sustaining an advantage. To do that, you need an “isolating mechanism” that prevents competitors from duplicating it. Isolating mechanisms include patents, reputations, commercial and social relationships, network effects, dramatic economies of scale, and tacit knowledge and skill gained through experience.
    • Once you have an advantage, you should strengthen it by deepening it, broadening it, creating higher demand for your products and services, or strengthening your isolating mechanisms (all explained fully in the book).
  • Dynamics: Dynamics are waves of change that roll through an industry. They are the net result of a myriad of shifts and advances in technology, cost, competition, politics, and buyer perceptions. Such waves of change are largely exogenous – that is, beyond the control of any one organization. If you can see them coming, they are like an earthquake that creates new high ground and levels what had previously been high ground, leaving behind new sources of advantage for you to exploit.
    • There are 5 guideposts to look out for: 1. Rising fixed costs; 2. Deregulation; 3. Predictable Biases; 4. Incumbent Response; and 5. Attractor States (i.e. where an industry “should” go). (All of these are explained fully in the book).
    • Attractor states are especially interesting because he defines them as where an industry “should” end up in the light of technological forces and the structure of demand. By “should,” he means to emphasize an evolution in the direction of efficiency – meeting the needs and demands of buyers as efficiently as possible. They’re different from corporate visions because an attractor state is based on overall efficiency rather than a single company’s desire to capture most of the pie. Attractor states are what pundits and industry analysts write about. There’s no guarantee, however, that the attractor state will ever come to pass. As it relates to strategy, you can anticipate that most players will chase the attractor state. This leads many companies to waste resources chasing the wrong vision, and faltering as a result (e.g. Cisco rode the wave of “dumb pipes” and “IP everywhere” that AT&T and other telecom companies should have exploited). If you “zig” when other companies “zag”, you can build yourself an advantage.
    • As a strategist, you should seek to do your own analysis of where an industry is going, and create a strategy based on that (rather than what pundits “predict” will happen). Combining your own proprietary knowledge of your customers, technology, and capabilities with industry trends can give you deeper insights that analysts on the outside can’t see. Taking that a step further, you should also look for second-order effects as a result of industry dynamics. For example, the rise of the microprocessor was predicted by many, and largely came true. But what most people didn’t predict was the second-order effect that commoditized microprocessors getting embedded in more products led to increased demand for software, making the ability to write good software a competitive advantage.
  • Inertia: Inertia is an organization’s unwillingness or inability to adapt to changing circumstances. As a strategist, you can exploit this by anticipating that it will take many years for large and well-established competitors to alter their basic functioning. For example, Netflix pushed past Blockbuster because the latter could not, or would not, abandon its focus on retail stores.
  • Entropy: Entropy causes organizations to become less organized and less focused over time. As a strategist, you need to watch out for this in your organization to actively maintain your purpose, form, and methods, even if there are no changes in strategy or competition. You can also use it as a weakness to exploit against your competitors by anticipating that entropy will creep into their business lines. For example, less focused product lines are a sign of entropy. GM’s car lines used to have distinct price points, models, and target buyers, but over time entropy caused each line to creep into each other and overlap, causing declining sales from consumer confusion.

Closing Thoughts

One of the things that surprised me as I read the book is how much overlap there is between doing strategy work and design work – diagnosing the problem, creating multiple potential solutions (i.e. the double diamond), looking at situations from multiple perspectives, weighing tradeoffs in potential solutions, and more. The core of strategy, as he defines it, is identifying and solving problems. Sound familiar? That’s the core of design! He even states, “A master strategist is a designer.”

Rumelt goes on to hold up many examples of winning strategies and advantages from understanding customer needs, behaviors, pain points, and building for a specific customer segment. In other words, doing user-centered design. He doesn’t specifically reference any UX methods, but it was clear to me that the tools of UX work apply to strategy work as well.

The overlap with design doesn’t end there. He has a section about how strategy work is rooted in intuition and subjectivity. There’s no way to prove a strategy is the “best” or “right” one. A strategy is a judgement of a situation and the best path forward. You can say the exact same thing about design as well.

Since a strategy can’t be proven to be right, Rumelt recommends considering a strategy a “hypothesis” that can be tested and refined over time. Leaders should listen for signals that their strategy is or is not working, and make adjustments accordingly. In other words, strategists should iterate on their solutions, same as designers.

Furthermore, this subjectivity causes all kinds of challenges for leaders, such as saying “no” to people, selling people on their version of reality, and so on. He doesn’t talk about how to overcome these challenges, but as I read the book I realized these are issues that designers have to learn how to deal with.

Effective designers have to sell their work to people to get it built. Then they have to be prepared for criticism, feedback, questions, and alternate ideas. Since their work can’t be “proven” to be correct, it’s open to attack from anyone and everyone. If their work gets built and shipped to customers, they still need to be open to it being “wrong” (or at least not perfect), listen to feedback from customers, and iterate further as needed. All of these soft skills are ways of dealing with the problems leaders face when implementing a strategy.

In other words, design work is strategy work. As Rumelt says, “Good strategy is design, and design is about fitting various pieces together so they work as a coherent whole.”


If you enjoyed this post (and I’m assuming you did if you made it this far), then I highly recommend reading the book yourself. I only covered the highlights here, and the book goes into a lot more depth on all of these topics. Enjoy!

by Jeff Zych at June 27, 2018 09:16 PM

June 25, 2018

Ph.D. student

Race as Nation (on Omi and Winant, 2014)

Today the people I have personally interacted with are: a Russian immigrant, three black men, a Japanese-American woman, and a Jewish woman. I live in New York City and this is a typical day. But when I sign onto Twitter, I am flooded with messages suggesting that the United States is engaged in a political war over its racial destiny. I would gladly ignore these messages if I could, but there appears to be somebody with a lot of influence setting a media agenda on this.

So at last I got to Omi and Winant’s chapter on “Nation” — on theories of race as nation. The few colleagues who expressed interest in these summaries of Omi and Winant were concerned that they would not tackle the relationship between race and colonialism; indeed they do tackle it in this chapter, though it comes perhaps surprisingly late in their analysis. Coming to this chapter, I had high hopes that these authors, whose scholarship has been very helpfully thorough on other aspects of race, would shed light on the connection between nation and race in a way that would help illuminate the present political situation in the U.S. I have to say that I wound up being disappointed in their analysis, but that those disappointments were enlightening. Since this edition of their book was written in 2014, when their biggest target was “colorblindness”, the gaps in their analysis are telling precisely because they show how educated, informed imagination could not foresee today’s resurgence of white nationalism in the United States.

Having said that, Omi and Winant are not naive about white nationalism. On the contrary, they open their chapter with a long section on The White Nation, which is a phrase I can’t even type without cringing at. They paint a picture in broad strokes: yes, the United States has for most of its history explicitly been a nation of white people. This racial identity underwrote slavery, the conquest of land from Native Americans, and policies of immigration and naturalization and segregation. For much of its history, for most of its people, the national project of the United States was a racial project. So say Omi and Winant.

Then they also say (in 2014) that this sense of the nation as a white nation is breaking down. Much of their chapter is a treatment of “national insurgencies”, which have included such a wide variety of movements as Pan-Africanism, cultural insurgencies that promote ‘ethnic’ culture within the United States, and Communism. (They also make passing reference to feminism as a comparable kind of national insurgency undermining the notion that the United States is a white male nation. While the suggestion is interesting, they do not develop it enough to be convincing, and instead the inclusion of gender into their history of racial nationalism comes off as a perfunctory nod to their progressive allies.)

Indeed, they open this chapter in a way that is quite uncharacteristic for them. They write in a completely different register: not historical and scholarly analysis, but more overt ideology-mythology. They pose the question (originally posed by du Bois) in personal and philosophical terms to the reader: whose nation is it? Is it yours? They do this quite brazenly, in a way that denies one the critical intervention of questioning what a nation really is, of dissecting it as an imaginary social form. It is troubling because it seems to be a subtle abuse of the otherwise meticulously scholarly character of their work. They set up the question of national identity as a pitched battle over a binary, much as is being done today. It is troublingly done.

This Manichean painting of American destiny is perhaps excused because of the detail with which they have already discussed ethnicity and class at this point in the book. And it does set up their rather prodigious account of Pan-Africanism. But it puts them in the position of appearing to accept uncritically an intuitive notion of what a nation is even while pointing out how this intuitive idea gets challenged. Indeed, they only furnish one definition of a nation, and it is Joseph Stalin’s, from a 1908 pamphlet:

A nation is a historically constituted, stable community of people, formed on the basis of a common language, territory, economic life, and psychological make-up, manifested in a common culture. (Stalin, 1908)

So much for that.

Regarding colonialism, Omi and Winant are surprisingly active in their rejection of ‘colonialist’ explanations of race in the U.S. beyond the historical conditions. They write respectfully of Wallerstein’s world-system theory as contributing to a global understanding of race, but do not see it as illuminating the specific dynamics of race in the United States very much. Specifically, they bring up Bob Blauner’s Racial Oppression in America as a paradigmatic application of internal colonialism theory to the United States, then pick it apart and reject it. According to internal colonialism (roughly):

  • There’s a geography of spatial arrangement of population groups along racial lines
  • There is a dynamic of cultural domination and resistance, organized on lines of racial antagonism
  • There are systems of exploitation and control organized along racial lines

Blauner took up internal colonialism theory explicitly in 1972 to contribute to ‘radical nationalist’ practice of the 60’s, admitting that it is more inspired by activists than sociologists. So we might suspect, with Omi and Winant, that his discussion of colonialism is more about crafting an exciting ideology than one that is descriptively accurate. For example, Blauner makes a distinction between “colonized and immigrant minorities”, where the “colonized” minorities are those whose participation in the United States project was forced (Africans and Latin Americans) while those (Europeans) who came voluntarily are “immigrants” and therefore qualitatively different. Omi and Winant take issue with this classification, as many European immigrants were themselves refugees of ethnic cleansing, while it leaves the status of Asian Americans very unclear. At best, ‘internal colonialism’ theory, as far as the U.S. is concerned, places emphasis on known history but does not add to it.

Omi and Winant frequently ascribe agency in racial politics to theorists of race, as if the theories enable self-conceptions that enable movements. This may be professional self-aggrandizement. They also perhaps set up nationalist accounts of race weakly because they want to deliver the goods in their own theory of racial formation that appears in the next chapter. They see nation-based theories as capturing something important:

In our view, the nation-based paradigm of race is an important component of our understanding of race: in highlighting “peoplehood,” collective identity, it “invents tradition” (Hobsbawm and Ranger, eds. 1983) and “imagines community” (Anderson, 1998). Nation-based understandings of race provide affective identification: They promise a sense of ineffable connection within racially identified groups; they engage in “collective representation” (Durkheim 2014). The tropes of “soul,” of “folk,” of hermanos/hermanas unidos/unidas uphold Duboisian themes. They channel Marti’s hemispheric consciousness (Marti 1977 [1899]); and Vasconcelo’s ideas of la raza cosmica (1979, Stavans 2011). In communities and movements, in the arts and popular media, as well as universities and colleges (especially in ethnic studies) these frameworks of peoplehood play a vital part in maintaining a sense of racial solidarity, however uneven or partial.

Now, I don’t know most of the references in the above quotation. But one gets the sense that Omi and Winant believe strongly that race contains an affective identification component. This may be what they were appealing to in a performative or demonstrative way earlier in the chapter. While they must be on to something, it is strange that they have this as the main takeaway of the history of race and nationalism. It is especially unconvincing that their conclusion after studying the history of racial nationalism is that ethnic studies departments in universities are what racial solidarity is really about, because under their own account the creation of ethnic studies departments was an accomplishment of racial political organization, not the precursor to it.

Omi and Winant deal in only the most summary terms with the ways in which nationalism is part of the operation of a nation state. They see racial nationalism as a factor in slavery and colonialism, and also in Jim Crow segregation, but deal only loosely with whether and how the state benefited from this kind of nationalism. In other words, they have a theory of racial nationalism that is weak on political economy. Their only mention of integration in military service, for example, is that service in the American Civil War was how many Irish Americans “became white”. Compare this with Fred Turner’s account of how increased racial liberalization was part of the United States strategy to mobilize its own army against fascism.

In my view, Omi and Winant’s blind spot is their affective investment in their view of the United States as embroiled in perpetual racial conflict. While justified and largely informative, it prevents them from seeing a wide range of different centrist views as anything but an extension of white nationalism. For example, they see white nationalism in nationalist celebrations of ‘the triumph of democracy’ on a Western model. There is of course a lot of truth in this, but also, as is abundantly clear today, when there now appears to be a conflict between those who celebrate a multicultural democracy with civil liberties and those who prefer overt racial authoritarianism, there is something else going on that Omi and Winant miss.

My suspicion is this: in their haste to target “colorblind neoliberalism” as an extension of racism-as-usual, they have missed how in the past forty years or so, and especially in the past eight, such neoliberalism has itself been a national project. Nancy Fraser can argue that progressive neoliberalism has been hegemonic and rejected by right-wing populists. A brief look at the center-left media will show how progressivism is at least as much of an affective identity in the United States as is whiteness, despite the fact that progressivism is not in and of itself a racial construct or “peoplehood”. Omi and Winant believed that colorblind neoliberalism would be supported by white nationalists because it was neoliberal. But now it has been rejected by white nationalists because it is colorblind. This is a difference that makes a difference.

by Sebastian Benthall at June 25, 2018 09:27 PM

June 24, 2018

Ph.D. student

deep thoughts about Melania Trump’s jacket: it’s masterstroke trolling

I got into an actual argument with a real person about Melania Trump’s “I really don’t care. Do U?” jacket. I’m going to double down on it and write about it because I have the hot take nobody has been talking about.

I asked this person what they thought about Melania’s jacket, and the response was, “I don’t care what she wears. She wore a jacket to a plane; so what? Is she even worth paying attention to? She’s not an important person whose opinions matter. The media is too focused on something that doesn’t matter. Just leave her alone.”

To which I responded, “So, you agree with the message on the jacket. If Melania had said that out loud, you’d say, ‘yeah, I don’t care either.’ Isn’t that interesting?”

No, it wasn’t (to the person I spoke with). It was just annoying to be talking about it in the first place. Not interesting, nothing to see here.

Back it up and let’s make some assumptions:

  1. FLOTUS thought at least as hard about what to wear that day as I do in the morning, and is a lot better at it than I am, because she is an experienced professional at appearing.
  2. Getting the mass media to fall over itself on a gossip item about the ideological implications of first lady fashion gets you a lot of clicks, followers, attention, etc. and that is the political currency of the time. It’s the attention economy, stupid.

FLOTUS got a lot of attention for wearing that jacket because of its ambiguity. The first-order ambiguity of whether it was a coded message playing into any preexisting political perspective was going to get attention, obviously. But the second-order ambiguity, the one that makes it actually clever, is its potential reference to the attention to the first-order ambiguity. The jacket, in this second-order frame, literally expresses apathy about any attention given to it and questions whether you care yourself. That’s a clever, cool concept for a jacket worn on, like, the street. As a viral social media play, it is even more clever.

It’s clever because with that second-order self-referentiality, everybody who hears about it (which might be everybody in the world, who knows) has to form an opinion about it, and the most sensible opinion about it, the one you must ultimately conclude in order to preserve your sanity, is the original one expressed: “I don’t really care.” Clever.

What’s the point? First, I’m arguing that this was deliberate self-referential virality of the same kind I used to give Weird Twitter a name. Having researched this subject before, I claim expertise and knowing-what-I’m-talking-about. This is a tactic one can use in social media to do something clever. Annoying, but clever.

Second, and maybe more profound: in the messed up social epistemology of our time, where any image or message fractally reverberates between thousands of echo chambers, there is hardly any ground for “social facts”, or matters of consensus about the social world. Such facts require not just accurate propositional content but also enough broad social awareness of them to be believed by a quorum of the broader population. The disintegration of social facts, which is deeply challenging for American self-conception as a democracy, is probably part of our political crisis right now.

There aren’t a lot of ways to accomplish social facts today. But one way is to send an ambiguous or controversial message that sparks a viral media reaction whose inevitable self-examinations resolve onto the substance of the original message. The social fact becomes established as a fait accompli through everybody’s conversation about it before anybody knows what’s happened.

That’s what’s happened with this jacket: it spoke the truth. We can give FLOTUS credit for that. And truth is: do any of us really care about any of this? That’s maybe not an irrelevant question, however you answer it.

by Sebastian Benthall at June 24, 2018 02:51 PM

June 22, 2018

MIMS 2011

PhD Scholarships on “Data Justice” and “Living with Pervasive Media Technologies from Drones to Smart Homes”

I’m excited to announce that I will be co-supervising up to four very generous and well-supported PhD scholarships at the University of New South Wales (Sydney) on the themes of “Living with Pervasive Media Technologies from Drones to Smart Homes” and “Data Justice: Technology, policy and community impact”. Please contact me directly if you have any questions. Expressions of Interest are due before 20 July, 2018 via the links below. Please note that you have to be eligible for post-graduate study at UNSW in order to apply – those requirements are slightly different for the Scientia programme but require that you have a first class honours degree or a Master’s by research. There may be some flexibility here, but meeting those requirements would be ideal.

Living with Pervasive Media Technologies from Drones to Smart Homes

Digital assistants, smart devices, drones and other autonomous and artificial intelligence technologies are rapidly changing work, culture, cities and even the intimate spaces of the home. They are 21st century media forms: recording, representing and acting, often in real-time. This project investigates the impact of living with autonomous and intelligent media technologies. It explores the changing situation of media and communication studies in this expanded field. How do these media technologies refigure relations between people and the world? What policy challenges do they present? How do they include and exclude marginalized peoples? How are they transforming media and communications themselves? (Supervisory team: Michael Richardson, Andrew Murphie, Heather Ford)

Data Justice: Technology, policy and community impact

With growing concerns that data mining, ubiquitous surveillance and automated decision making can unfairly disadvantage already marginalised groups, this research aims to identify policy areas where injustices are caused by data- or algorithm-driven decisions, examine the assumptions underlying these technologies, document the lived experiences of those who are affected, and explore innovative ways to prevent such injustices. Innovative qualitative and digital methods will be used to identify connections across community, policy and technology perspectives on ‘big data’. The project is expected to deepen social engagement with disadvantaged communities, and strengthen global impact in promoting social justice in a datafied world. (Supervisory team: Tanja Dreher, Heather Ford, Janet Chan)

Further details on the UNSW Scientia Scholarship scheme are available on the titles above and here:
https://www.2025.unsw.edu.au/apply/?interest=scholarships 

by Heather Ford at June 22, 2018 06:06 AM

June 21, 2018

Ph.D. alumna

The Messy Fourth Estate

(This post was originally posted on Medium.)

For the second time in a week, my phone buzzed with a New York Times alert, notifying me that another celebrity had died by suicide. My heart sank. I tuned into the Crisis Text Line Slack channel to see how many people were waiting for a counselor’s help. Volunteer crisis counselors were pouring in, but the queue kept growing.

Celebrity suicides trigger people who are already on edge to wonder whether or not they too should seek death. Since the Werther effect study, in 1974, countless studies have conclusively and repeatedly shown that how the news media reports on suicide matters. The World Health Organization has a detailed set of recommendations for journalists and news media organizations on how to responsibly report on suicide so as to not trigger copycats. Yet in the past few years, few news organizations have bothered to abide by them, even as recent data shows that the reporting on Robin Williams’ death triggered an additional 10 percent increase in suicide and a 32 percent increase in people copying his method of death. The recommendations aren’t hard to follow — they focus on how to convey important information without adding to the problem.

Crisis counselors at the Crisis Text Line are on the front lines. As a board member, I’m in awe of their commitment and their willingness to help those who desperately need support and can’t find it anywhere else. But it pains me to watch as elite media amplifiers make counselors’ lives more difficult under the guise of reporting the news or entertaining the public.

Through data, we can see the pain triggered by 13 Reasons Why and the New York Times. We see how salacious reporting on method prompts people to consider that pathway of self-injury. Our volunteer counselors are desperately trying to keep people alive and get them help, while for-profit companies rake in dollars and clicks. If we’re lucky, the outlets triggering unstable people write off their guilt by providing a link to our services, with no consideration of how much pain they’ve caused or the costs we must endure.

I want to believe in journalism. I want to believe in the idealized mandate of the fourth estate. I want to trust that editors and journalists are doing their best to responsibly inform the public and help create a more perfect union. But my faith is waning.

Many Americans — especially conservative Americans — do not trust contemporary news organizations. This “crisis” is well-trod territory, but the focus on fact-checking, media literacy, and business models tends to obscure three features of the contemporary information landscape that I think are poorly understood:

  1. Differences in worldview are being weaponized to polarize society.
  2. We cannot trust organizations, institutions, or professions when they’re abstracted away from us.
  3. Economic structures built on value extraction cannot enable healthy information ecosystems.

Let me begin by apologizing for the heady article, but the issues that we’re grappling with are too heady for a hot take. Please read this to challenge me, debate me, offer data to show that I’m wrong. I think we’ve got an ugly fight in front of us, and I think we need to get more sophisticated about our thinking, especially in a world where foreign policy is being boiled down to 140 characters.

1. Your Worldview Is Being Weaponized

I was a teenager when I showed up at a church wearing jeans and a T-shirt to see my friend perform in her choir. The pastor told me that I was not welcome because this was a house of God, and we must dress in a manner that honors Him. Not good at following rules, I responded flatly, “God made me naked. Should I strip now?” Needless to say, I did not get to see my friend sing.

Faith is an anchor for many people in the United States, but the norms that surround religious institutions are man-made, designed to help people make sense of the world in which we operate. Many religions encourage interrogation and questioning, but only within a well-established framework. Children learn those boundaries, just as they learn what is acceptable in secular society. They learn that talking about race is taboo and that questioning the existence of God may leave them ostracized.

Like many teenagers before and after me, I was obsessed with taboos and forbidden knowledge. I sought out the music Tipper Gore hated, read the books my school banned, and tried to get answers to any question that made adults gasp. Anonymously, I spent late nights engaged in conversations on Usenet, determined to push boundaries and make sense of adult hypocrisy.

Following a template learned in Model UN, I took on strong positions in order to debate and learn. Having already lost faith in the religious leaders in my community, I saw no reason to respect the dogma of any institution. And because I made a hobby out of proving teachers wrong, I had little patience for the so-called experts in my hometown. I was intellectually ravenous, but utterly impatient with, if not outright cruel to the adults around me. I rebelled against hierarchy and was determined to carve my own path at any cost.

I have an amazing amount of empathy for those who do not trust the institutions that elders have told them they must respect. Rage against the machine. We don’t need no education, no thought control. I’m also fully aware that you don’t garner trust in institutions through coercion or rational discussion. Instead, trust often emerges from extreme situations.

Many people have a moment where they wake up and feel like the world doesn’t really work like they once thought or like they were once told. That moment of cognitive reckoning is overwhelming. It can be triggered by any number of things — a breakup, a death, depression, a humiliating experience. Everything comes undone, and you feel like you’re in the middle of a tornado, unable to find the ground. This is the basis of countless literary classics, the crux of humanity. But it’s also a pivotal feature in how a society comes together to function.

Everyone needs solid ground, so that when your world has just been destabilized, what comes next matters. Who is the friend that picks you up and helps you put together the pieces? What institution — or its representatives — steps in to help you organize your thinking? What information do you grab onto in order to make sense of your experiences?

Countless organizations and movements exist to pick you up during your personal tornado and provide structure and a framework. Take a look at how Alcoholics Anonymous works. Other institutions and social bodies know how to trigger that instability and then help you find ground. Check out the dynamics underpinning military basic training. Organizations, movements, and institutions that can manipulate psychological tendencies toward a sociological end have significant power. Religious organizations, social movements, and educational institutions all play this role, whether or not they want to understand themselves as doing so.

Because there is power in defining a framework for people, there is good reason to be wary of any body that pulls people in when they are most vulnerable. Of course, that power is not inherently malevolent. There is fundamental goodness in providing structures to help those who are hurting make sense of the world around them. Where there be dragons is when these processes are weaponized, when these processes are designed to produce societal hatred alongside personal stability. After all, one of the fastest ways to bond people and help them find purpose is to offer up an enemy.

And here’s where we’re in a sticky spot right now. Many large institutions — government, the church, educational institutions, news organizations — are brazenly asserting their moral authority without grappling with their own shit. They’re ignoring those among them who are using hate as a tool, and they’re ignoring their own best practices and ethics, all to help feed a bottom line. Each of these institutions justifies itself by blaming someone or something to explain why they’re not actually that powerful, why they’re actually the victim. And so they’re all poised to be weaponized in a cultural war rooted in how we stabilize American insecurity. And if we’re completely honest with ourselves, what we’re really up against is how we collectively come to terms with a dying empire. But that’s a longer tangent.

Any teacher knows that it only takes a few students to completely disrupt a classroom. Forest fires spark easily under certain conditions, and the ripple effects are huge. As a child, when I raged against everyone and everything, it was my mother who held me into the night. When I was a teenager chatting my nights away on Usenet, the two people who most memorably picked me up and helped me find stable ground were a deployed soldier and a transgender woman, both of whom held me as I asked insane questions. They absorbed the impact and showed me a different way of thinking. They taught me the power of strangers counseling someone in crisis. As a college freshman, when I was spinning out of control, a computer science professor kept me solid and taught me how profoundly important a true mentor could be. Everyone needs someone to hold them when their world spins, whether that person be a friend, family, mentor, or stranger.

Fifteen years ago, when parents and the news media were panicking about online bullying, I saw a different risk. I saw countless kids crying out online in pain only to be ignored by those who preferred to prevent teachers from engaging with students online or to create laws punishing online bullies. We saw the suicides triggered as youth tried to make “It Gets Better” videos to find community, only to be further harassed at school. We saw teens studying the acts of Columbine shooters, seeking out community among those with hateful agendas and relishing the power of lashing out at those they perceived to be benefiting at their expense. But it all just seemed like a peculiar online phenomenon, proof that the internet was cruel. Too few of us tried to hold those youth who were unquestionably in pain.

Teens who are coming of age today are already ripe for instability. Their parents are stressed; even if they have jobs, nothing feels certain or stable. There doesn’t seem to be a path toward economic stability that doesn’t involve college, but there doesn’t seem to be a path toward college that doesn’t involve mind-bending debt. Opioids seem like a reasonable way to numb the pain in far too many communities. School doesn’t seem like a safe place, so teenagers look around and whisper among friends about who they believe to be the most likely shooter in their community. As Stephanie Georgopulos notes, the idea that any institution can offer security seems like a farce.

When I look around at who’s “holding” these youth, I can’t help but notice the presence of people with a hateful agenda. And they terrify me, in no small part because I remember an earlier incarnation.

In 1995, when I was trying to make sense of my sexuality, I turned to various online forums and asked a lot of idiotic questions. I was adopted by the aforementioned transgender woman and numerous other folks who heard me out, gave me pointers, and helped me think through what I felt. In 2001, when I tried to figure out what the next generation did, I realized that struggling youth were more likely to encounter a Christian gay “conversion therapy” group than a supportive queer peer. Queer folks were sick of being attacked by anti-LGBT groups, and so they had created safe spaces on private mailing lists that were hard for lost queer youth to find. And so it was that in their darkest hours, these youth were getting picked up by those with a hurtful agenda.

Fast-forward 15 years, and teens who are trying to make sense of social issues aren’t finding progressive activists willing to pick them up. They’re finding the so-called alt-right. I can’t tell you how many youth we’ve seen asking questions like I asked being rejected by people identifying with progressive social movements, only to find camaraderie among hate groups. What’s most striking is how many people with extreme ideas are willing to spend time engaging with folks who are in the tornado.

Spend time reading the comments below the YouTube videos of youth struggling to make sense of the world around them. You’ll quickly find comments by people who spend time in the manosphere or subscribe to white supremacist thinking. They are diving in and talking to these youth, offering a framework to make sense of the world, one rooted in deeply hateful ideas. These self-fashioned self-help actors are grooming people to see that their pain and confusion isn’t their fault, but the fault of feminists, immigrants, people of color. They’re helping them believe that the institutions they already distrust — the news media, Hollywood, government, school, even the church — are actually working to oppress them.

Most people who encounter these ideas won’t embrace them, but some will. Still, even those who don’t embrace them will never quite let go of the doubt that has been instilled in them about the institutions around them. It just takes a spark.

So how do we collectively make sense of the world around us? There isn’t one universal way of thinking, but even the act of constructing knowledge is becoming polarized. Responding to the uproar in the news media over “alternative facts,” Cory Doctorow noted:

We’re not living through a crisis about what is true, we’re living through a crisis about how we know whether something is true. We’re not disagreeing about facts, we’re disagreeing about epistemology. The “establishment” version of epistemology is, “We use evidence to arrive at the truth, vetted by independent verification (but trust us when we tell you that it’s all been independently verified by people who were properly skeptical and not the bosom buddies of the people they were supposed to be fact-checking).”

The “alternative facts” epistemological method goes like this: “The ‘independent’ experts who were supposed to be verifying the ‘evidence-based’ truth were actually in bed with the people they were supposed to be fact-checking. In the end, it’s all a matter of faith, then: you either have faith that ‘their’ experts are being truthful, or you have faith that we are. Ask your gut, what version feels more truthful?”

Doctorow creates these oppositional positions to make a point and to highlight that there is a war over epistemology, or the way in which we produce knowledge.

The reality is much messier, because what’s at stake isn’t simply about resolving two competing worldviews. Rather, what’s at stake is that there is no universal way of knowing, and that we have reached a stage in our political climate where there is more power in seeding doubt, destabilizing knowledge, and encouraging others to distrust other systems of knowledge production.

Contemporary propaganda isn’t about convincing someone to believe something, but convincing them to doubt what they think they know. And once people’s assumptions have come undone, who is going to pick them up and help them create a coherent worldview?

2. You Can’t Trust Abstractions

Deeply committed to democratic governance, George Washington believed that a representative government could only work if the public knew their representatives. As a result, our Constitution set the ratio of representation in the House at no more than one member for every 30,000 constituents. When we stopped adding additional representatives to the House in 1913 (frozen at 435), each member represented roughly 225,000 constituents. Today, the ratio of constituents to congresspeople is more than 700,000:1. Most people will never meet their representative, and few feel as though Washington truly represents their interests. The democracy that we have is representational only in the ideal, not in practice.

As our Founding Fathers knew, it’s hard to trust an institution when it feels inaccessible and abstract. All around us, institutions are increasingly divorced from the community in which they operate, with often devastating costs. Thanks to new models of law enforcement, police officers don’t typically come from the community they serve. In many poor communities, teachers also don’t come from the community in which they teach. The volunteer U.S. military hardly draws from all communities, and those who don’t know a soldier are less likely to trust or respect the military.

Journalism can only function as the fourth estate when it serves as a tool to voice the concerns of the people and to inform those people of the issues that matter. Throughout the 20th century, communities of color challenged mainstream media’s limitations and highlighted that few newsrooms represented the diverse backgrounds of their audiences. As such, we saw the rise of ethnic media and a challenge to newsrooms to be smarter about their coverage. But let’s be real — even as news organizations articulate a commitment to the concerns of everyone, newsrooms have done a dreadful job of becoming more representative. Over the past decade, we’ve seen racial justice activists challenge newsrooms for their failure to cover Ferguson, Standing Rock, and other stories that affect communities of color.

Meanwhile, local journalism has nearly died. The success of local journalism didn’t just matter because those media outlets reported the news, but because it meant that many more people were likely to know journalists. It’s easier to trust an institution when it has a human face that you know and respect. And as fewer and fewer people know journalists, they trust the institution less and less. Meanwhile, the rise of social media, blogging, and new forms of talk radio has meant that countless individuals have stepped in to cover issues not being covered by mainstream news, often using a style and voice that is quite unlike that deployed by mainstream news media.

We’ve also seen the rise of celebrity news hosts. These hosts help push the boundaries of parasocial interactions, allowing the audience to feel deep affinity toward these individuals, as though they are true friends. Tabloid papers have long capitalized on people’s desire to feel close to celebrities by helping people feel like they know the royal family or the Kardashians. Talking heads capitalize on this, in no small part by how they communicate with their audiences. So, when people watch Rachel Maddow or listen to Alex Jones, they feel more connected to the message than they would when reading a news article. They begin to trust these people as though they are neighbors. They feel real.

People want to be informed, but who they trust to inform them is rooted in social networks, not institutions. Trust in institutions stems from trust in people. The loss of the local paper means a loss of trusted journalists and a connection to the practices of the newsroom. As always, people turn to their social networks to get information, but what flows through those social networks is less and less likely to be mainstream news. But here’s where you also get an epistemological divide.

As Francesca Tripodi points out, many conservative Christians have developed a media literacy practice that emphasizes the “original” text rather than an intermediary. Tripodi points out that the same type of scriptural inference that Christians apply in Bible study is often also applied to reading the Constitution, tax reform bills, and Google results. This approach is radically different than the approach others take when they rely on intermediaries to interpret news for them.

As the institutional construction of news media grows ever more distant from the vast majority of people in the United States, we can and should expect trust in news to decline. No amount of fact-checking will make up for a widespread feeling that coverage is biased. No amount of articulated ethical commitments will make up for the feeling that you are being fed clickbait headlines.

No amount of drop-in journalism will make up for the loss of journalists within the fabric of local communities. And while the population who believes that CNN and the New York Times are “fake news” is not demographically representative, the questionable tactics that news organizations use are bound to increase distrust among those who still have faith in them.

3. The Fourth Estate and Financialization Are Incompatible

If you’re still with me at this point, you’re probably deeply invested in scholarship or journalism. And, unless you’re one of my friends, you’re probably bursting at the seams to tell me that the reason journalism is all screwed up is because the internet screwed news media’s business model. So I want to ask a favor: Quiet that voice in your head, take a deep breath, and let me offer an alternative perspective.

There are many types of capitalism. After all, the only thing that defines capitalism is the private control of industry (as opposed to government control). Most Americans have been socialized into believing that all forms of capitalism are inherently good (which, by the way, was a propaganda project). But few are encouraged to untangle the different types of capitalism and different dynamics that unfold depending on which structure is operating.

I grew up in mom-and-pop America, where many people dreamed of becoming small business owners. The model was simple: Go to the bank and get a loan to open a store or a company. Pay back that loan at a reasonable interest rate — knowing that the bank was making money — until eventually you owned the company outright. Build up assets, grow your company, and create something of value that you could pass on to your children.

In the 1980s, franchises became all the rage. Wannabe entrepreneurs saw a less risky path to owning their own business. Rather than having to figure it out alone, you could open a franchise with a known brand and a clear process for running the business. In return, you had to pay some overhead to the parent company. Sure, there were rules to follow and you could only buy supplies from known suppliers and you didn’t actually have full control, but it kinda felt like you did. Like being an Uber driver, it was the illusion of entrepreneurship that was so appealing. And most new franchise owners didn’t know any better, nor were they able to read the writing on the wall when the water all around them started boiling their froggy self. I watched my mother nearly drown, and the scars are still visible all over her body.

I will never forget the U.S. Savings & Loan crisis, not because I understood it, but because it was when I first realized that my Richard Scarry impression of how banks worked was way wrong. Only two decades later did I learn to see the FIRE industries (Finance, Insurance, and Real Estate) as extractive ones. They aren’t there to help mom-and-pop companies build responsible businesses, but to extract value from their naiveté. Like today’s post-college youth are learning, loans aren’t there to help you be smart, but to bend your will.

It doesn’t take a quasi-documentary to realize that McDonald’s is not a fast-food franchise; it’s a real estate business that uses a franchise structure to extract capital from naive entrepreneurs. Go talk to a wannabe restaurant owner in New York City and ask them what it takes to start a business these days. You can’t even get a bank loan or lease in 2018 without significant investor backing, which means that the system isn’t set up for you to build a business and pay back the bank, pay a reasonable rent, and develop a valuable asset. You are simply a pawn in a financialized game between your investors, the real estate companies, the insurance companies, and the bank, all of which want to extract as much value from your effort as possible. You’re just another brick in the wall.

Now let’s look at the local news ecosystem. Starting in the 1980s, savvy investors realized that many local newspapers owned prime real estate in the center of key towns. These prized assets would make for great condos and office rentals. Throughout the country, local news shops started getting eaten up by private equity and hedge funds — or consolidated by organizations controlled by the same forces. Media conglomerates sold off their newsrooms as they felt mounting pressure to increase profits quarter over quarter.

Building a sustainable news business was hard enough when the news had a wealthy patron who valued the goals of the enterprise. But the finance industry doesn’t care about sustaining the news business; it wants a return on investment. And the extractive financiers who targeted the news business weren’t looking to keep the news alive. They wanted to extract as much value from those businesses as possible. Taking a page out of McDonald’s playbook, they forced the newsrooms to sell their real estate. News organizations often had to rent from new landlords who demanded obscene sums, which forced many to move out of their buildings. News outlets were forced to reduce staff, produce more junk content, sell more ads, and find countless ways to cut costs. Of course the news suffered — the goal was to push news outlets into bankruptcy or sell, especially if the companies had pensions or other costs that couldn’t be excised.

Yes, the fragmentation of the advertising industry due to the internet hastened this process. And let’s also be clear that business models in the news business have never been clean. But no amount of innovative new business models will make up for the fact that you can’t sustain responsible journalism within a business structure that requires newsrooms to make more money quarter over quarter to appease investors. This does not mean that you can’t build a sustainable news business, but if the news is beholden to investors trying to extract value, it’s going to be impossible. And if news companies have no assets to rely on (such as their now-sold real estate), they are fundamentally unstable and likely to engage in unhealthy business practices out of economic desperation.

Untangling our country from this current version of capitalism is going to be as difficult as curbing our addiction to fossil fuels. I’m not sure it can be done, but as long as we look at companies and blame their business models without looking at the infrastructure in which they are embedded, we won’t even begin taking the first steps. Fundamentally, both the New York Times and Facebook are public companies, beholden to investors and desperate to increase their market cap. Employees in both organizations believe themselves to be doing something important for society.

Of course, journalists don’t get paid well, while Facebook’s employees can easily threaten to walk out if the stock doesn’t keep rising, since they’re also investors. But we also need to recognize that the vast majority of Americans have a stake in the stock market. Pension plans, endowments, and retirement plans all depend on stocks going up — and those public companies depend on big investors investing in them. Financial managers don’t invest in news organizations that are happy to be stable break-even businesses. Heck, even Facebook is in deep trouble if it can’t continue to increase ROI, whether through attracting new customers (advertisers and users), increasing revenue per user, or diversifying its businesses. At some point, it too will get desperate, because no business can increase ROI forever.

ROI capitalism isn’t the only version of capitalism out there. We take it for granted and tacitly accept its weaknesses by creating binaries, as though the only alternative is Cold War Soviet Union–styled communism. We’re all frogs in an ocean that’s quickly getting warmer. Two degrees will affect a lot more than oceanfront properties.

Reclaiming Trust

In my mind, we have a hard road ahead of us if we actually want to rebuild trust in American society and its key institutions (which, TBH, I’m not sure is everyone’s goal). There are three key higher-order next steps, all of which are at the scale of the New Deal.

  1. Create a sustainable business structure for information intermediaries (like news organizations) that allows them to be profitable without the pressure of ROI. In the case of local journalism, this could involve subsidized rent, restrictions on types of investors or takeovers, or a smartly structured double bottom-line model. But the focus should be on strategically building news organizations as a national project to meet the needs of the fourth estate. It means moving away from a journalism model that is built on competition for scarce resources (ads, attention) to one that’s incentivized by societal benefits.
  2. Actively and strategically rebuild the social networks of America. Create programs beyond the military that incentivize people from different walks of life to come together and achieve something great for this country. This could be connected to job training programs or rooted in community service, but it cannot be done through the government alone or, perhaps, at all. We need the private sector, religious organizations, and educational institutions to come together and commit to designing programs that knit together America while also providing the tools of opportunity.
  3. Find new ways of holding those who are struggling. We don’t have a social safety net in America. For many, the church provides the only accessible net when folks are lost and struggling, but we need a lot more. We need to work together to build networks that can catch people when they’re falling. We’ve relied on volunteer labor for a long time in this domain—women, churches, volunteer civic organizations—but our current social configuration makes this extraordinarily difficult. We’re in the middle of an opiate crisis for a reason. We need to think smartly about how these structures or networks can be built and sustained so that we can collectively reach out to those who are falling through the cracks.

Fundamentally, we need to stop triggering one another because we’re facing our own perceived pain. This means we need to build large-scale cultural resilience. While we may be teaching our children “social-emotional learning” in the classroom, we also need to start taking responsibility at scale. Individually, we need to step back and empathize with others’ worldviews and reach out to support those who are struggling. But our institutions also have important work to do.

At the end of the day, if journalistic ethics means anything, newsrooms cannot justify creating spectacle out of their reporting on suicide or other topics just because they feel pressure to create clicks. They have the privilege of choosing what to amplify, and they should focus on what is beneficial. If they can’t operate by those values, they don’t deserve our trust. While I strongly believe that technology companies have a lot of important work to do to be socially beneficial, I hold news organizations to a higher standard because of their own articulated commitments and expectations that they serve as the fourth estate. And if they can’t operationalize ethical practices, I fear the society that must be knitted together to self-govern is bound to fragment even further.

Trust cannot be demanded. It’s only earned by being there at critical junctures when people are in crisis and need help. You don’t earn trust when things are going well; you earn trust by being a rock during a tornado. The winds are blowing really hard right now. Look around. Who is helping us find solid ground?

by zephoria at June 21, 2018 01:26 AM

June 20, 2018

Ph.D. student

Omi and Winant on economic theories of race

Speaking of economics and race, Chapter 2 of Omi and Winant (2014), titled “Class”, is about economic theories of race. These are my notes on it.

Throughout this chapter, Omi and Winant seem preoccupied with whether and to what extent economic theories of race fall on the left, center, or right within the political spectrum. This is despite their admission that there is no absolute connection between the variety of theories and political orientation, only general tendencies. One presumes when reading it that they are allowing the reader to find themselves within that political alignment and filter their analysis accordingly. I will as much as possible leave out these cues, because my intention in writing these blog posts is to encourage the reader to make an independent, informed judgment based on the complexity the theories reveal, as opposed to just finding ideological cannon fodder. I claim this idealistic stance as my privilege as an obscure blogger with no real intention of ever being read.

Omi and Winant devote this chapter to theories of race that attempt to more or less reduce the phenomenon of race to economic phenomena. They outline three varieties of class paradigms for race:

  • Market relations theories. These tend to presuppose some kind of theory of market efficiency as an ideal.
  • Stratification theories. These are vaguely Weberian, based on classes as ‘systems of distribution’.
  • Product/labor based theories. These are Marxist theories about conflicts over social relations of production.

For market relations theories, markets are efficient, racial discrimination and inequality aren’t, and so the theory’s explicandum is what market problems are leading to the continuation of racial inequalities and discrimination. There are a few theories on the table:

  • Irrational prejudice. This theory says that people are racially prejudiced for some stubborn reason and so “limited and judicious state interventionism” is on the table. This was the theory of Chicago economist Gary Becker, who is not to be confused with the Chicago sociologist Howard Becker, whose intellectual contributions were totally different. Racial prejudice unnecessarily drives up labor costs and so eventually the smart money will become unprejudiced.
  • Monopolistic practices. The idea here is that society is structured in the interest of whites, who monopolize certain institutions and can collect rents from their control of resources. Jobs, union membership, favorably located housing, etc. are all tied up in this concept of race. Extra-market activity like violence is used to maintain these monopolies. This theory, Omi and Winant point out, is sympatico with white privilege theories, as well as nation-based analyses of race (cf. colonialism).
  • Disruptive state practices. This view sees class/race inequality as the result of state action of some kind. There’s a laissez-faire critique which argues that minimum wage and other labor laws, as well as affirmative action, entrench race and prevent the market from evening things out. Doing so would benefit both capital owners and people of color according to this theory. There’s a parallel neo-Marxist theory that says something similar, interestingly enough.

It must be noted that in the history of the United States, especially before the Civil Rights era, there absolutely was race-based state intervention on a massive scale and this was absolutely part of the social construction of race. So there hasn’t been a lot of time to test out the theory that market equilibrium without racialized state policies results in racial equality.

Omi and Winant begin to explicate their critique of “colorblind” theories in this chapter. They characterize “colorblind” theories as individualistic in principle, and opposed to the idea of “equality of result.” This is the familiar disparate treatment vs. disparate impact dichotomy from the interpretation of nondiscrimination law. I’m now concerned that this, which appears to be the crux of the problem of addressing contests over racial equality between the center and the left, will not be resolved even after O&W’s explication of it.

Stratification theory is about the distribution of resources, though understood in a broader sense than in a narrow market-based theory. Resources include social network ties, elite recruitment, and social mobility. This is the kind of theory of race a symbolic interactionist sociologist of class can get behind. Or a political scientist: the relationship between the elites and the masses, as well as the dynamics of authority systems, are all part of this theory, according to Omi and Winant. One gets the sense that of the class based theories, this nuanced and nonreductivist one is favored by the authors … except for the fascinating critique that these theories will position race vs. class as two dimensions of inequality, reifying them in their analysis, whereas “In experiential terms, of course, inequality is not differentiated by race or class.”

The phenomenon that there is a measurable difference in “life chances” between races in the United States is explored by two theorists to whom O&W give ample credit: William J Wilson and Douglas Massey.

Wilson’s major work in 1978, The Declining Significance of Race, tells a long story of race after the Civil War and urbanization that sounds basically correct to me. It culminates with the observation that there are now elite and middle-class black people in the United States due to the uneven topology of reforms but that ‘the massive black “underclass” was relegated to permanent marginality’. He argued that race was no longer a significant linkage between these two classes, though Omi and Winant criticize this view, arguing that there is fragility to the middle-class status for blacks because of public sector job losses. His view that class divides have superseded racial divides is his most controversial claim and so perhaps what he is known best for. He advocated for a transracial alliance within the Democratic party to contest the ‘racial reaction’ to Civil Rights, which at this point was well underway with Nixon’s “southern strategy”. The political cleavages along lines of partisan racial alliance are familiar to us in the United States today. Perhaps little has changed.
He called for state policies to counteract class cleavages, such as day care services to low-income single mothers. These calls “went nowhere” because Democrats were unwilling to face Republican arguments against “giveaways” to “welfare queens”. Despite this, Omi and Winant believe that Wilson’s views converge with neoconservative views because he doesn’t favor public sector jobs as a solution to racial inequality; more recently, he’s become a “culture of poverty” theorist (because globalization reduces the need for black labor in the U.S.) and believes in race neutral policies to overcome urban poverty. The relationship between poverty and race is incidental to Wilson, which I suppose makes him “colorblind” in O&W’s analysis.

Massey’s work, which is also significantly reviewed in this chapter, deals with immigration and Latin@s. There’s a lot there, so I’ll cut to the critique of his recent book, Categorically Unequal (2008), in which Massey unites his theories of anti-black and anti-brown racism into a comprehensive theory of racial stratification based on ingrained, intrinsic, biological processes of prejudice. Naturally, to Omi and Winant, the view that there’s something biological going on is “problematic”. They (being quite mainstream, really) see this as tied to the implicit bias literature but think that there’s a big difference between implicit bias due to socialization and a more permanent hindbrain perversity. This is apparently taken up again in their Chapter 4.

Omi and Winant’s final comment is that these stratification theories deny agency and can’t explain how “egalitarian or social justice-oriented transformations could ever occur, in the past, present, or future.” Which is, I suppose, bleak to the anti-racist activists Omi and Winant are implicitly aligned with. Which does raise the possibility that what O&W are really up to in advocating a hard line on the looser social construction of race is to keep alive the hope that egalitarian transformation is possible. It had not occurred to me until just now that their sensitivity to the idea that implicit bias may be socially trained vs. being a more basic and inescapable part of psychology, a sensitivity which is mirrored elsewhere in society, is due to this concern for the possibility and hope for equality.

The last set of economic theories considered in this chapter are class-conflict theories, which are rooted in a Marxist conception of history as reducible to labor-production relations and therefore class conflict. There are two different kinds of Marxist theory of race. There are labor market segmentation theories, led by Michael Reich, a labor economist at Berkeley. According to this research, when the working class unifies across racial lines, it increases its bargaining power and so can get better wages in its negotiations with capital. So the capitalist in this theory may want to encourage racial political divisions even if they harbor no racial prejudices themselves. “Workers of the world unite!” is the message of these theories. An alternative view is split labor market theory, which argues that under economic pressure the white working class would rather throw other races under the bus than compete with them economically. Political mobilization for a racially homogenous, higher paid working class is then contested by both capitalists and lower paid minority workers.

Reflections

Omi and Winant respect the contributions of these theories but think that trying to reduce race to economic relations ultimately fails. This is especially true for the market theorists, who always wind up introducing race as a non-economic, exogenous variable so as to avoid locating inequalities in the market itself.

The stratification theories are perhaps more realistic and complex.

I’m most surprised at how the class-conflict based theories are reflected in what for me are the major lenses into the zeitgeist of contemporary U.S. politics. This may be because I’m very disproportionately surrounded by Marxist-influenced intellectuals. But it is hard to miss the narrative that the white working class has rejected the alliance between neoliberal capital and low-wage immigrant and minority labor. Indeed, it is arguably this latter alliance that Nancy Fraser has called neoliberalism. This conflict accords with the split labor market theory. Fraser and other hopeful socialist types argue that a triumph over identity differences is necessary because racial conflicts in the working class play into the hands of capitalists, not white workers. It is very odd that this ideological question is not more settled empirically. It may be that the whole framing is perniciously oversimplified, and that really you have to talk about things in a more nuanced way to get real headway.

Unless of course there isn’t any such real hope. This was an interesting part of the stratification theory: the explanation that included an absence of agency. I used to study lots and lots of philosophy, and in philosophy it’s a permissible form of argument to say, “This line of reasoning, if followed to its conclusion, leads to an appalling and untenable conclusion, one that could never be philosophically satisfying. For that reason, we reject it and consider a premise to be false.” In other words, in philosophy you are allowed to be motivated by the fact that a philosophical stance is life negating or self-defeating in some way. I wonder if that is true of sociology of race. I also wonder whether bleak conclusions are necessary even if you deny the agency of racial minorities in the United States to liberate themselves on their own steam. Now there’s globalization, and earlier patterns of race may well be altered by forces outside of it. This is another theme in contemporary political discourse.

Once again Omi and Winant have raised the specter of “colorblind” policies without directly critiquing them. The question seems to boil down to whether the mechanisms that reproduce racial inequality are better mitigated by removing the mechanisms that are explicitly racial, or by other means. If part of the mechanism is irrational prejudice due to some hindbrain tic, then there may be grounds for a systematic correction of that tic. But that would require a scientific conclusion about the psychology of race that identifies a systematic error. If the error is rather interpreting an empirical inequality due to racialized policies as an essentialized difference, then that can be partially corrected by reducing the empirical inequality in fact.

It is in fact because I’m interested in what kinds of algorithms would be beneficial interventions in the process of racial formation that I’m reading Omi and Winant so closely in the first place.

by Sebastian Benthall at June 20, 2018 03:57 AM

June 17, 2018

Ph.D. student

a few philosophical conclusions

  1. Science, Technology, Engineering, and Mathematics (STEM) are a converged epistemic paradigm that is universally valid. Education in this field is socially prized because it is education in actual knowledge that is resilient to social and political change. These fields are constantly expanding their reach into domains that have resisted their fundamental principles in the past. That is because these principles really are used to design socially and psychologically active infrastructure that tests these principles. While this is socially uncomfortable and there’s plenty of resistance, that resistance is mostly futile.
  2. Despite or even because of (1), phenomenology and methods based on it remain interesting. There are two reasons for this.
    1. The first is that much of STEM rests on a phenomenological core, and this gives the ethos of objectivity around the field some instability. There are interesting philosophical questions at the boundaries of STEM that have the possibility of flipping it on its head. These questions have to do with theory of probability, logic, causation, and complexity/emergence. There is a lot of work to be done here with increasingly urgent practical applications.
    2. The second reason why phenomenology is important is that there is still a large human audience for knowledge, and for pragmatic application in lived experience, knowledge needs to be linked to phenomenology. The science of personal growth and transformation, as a science ready for consumption by people, is an ongoing field which may never be reconciled perfectly with the austere ontologies of STEM.
  3. Contemporary social organizations depend on the rule of law. That law, as a practice centered around use of natural language, is strained under the new technologies of data collection and control, which are ultimately bound by physical logic, not rhetorical logic. This impedance mismatch is the source of much friction today and will be particularly challenging for legal regimes based on consensus and tradition such as those based on democracy and common law.
  4. In terms of social philosophy, the moral challenge we are facing today is to devise a communicable, accurate account of how a diversity of bodies can and should cooperate despite their inequality. This is a harder problem than coming up with a theory of morality wherein theoretical equals maintain their equality. One good place to start on this would be the theory of economics, and how economics proposes differently endowed actors can and should specialize and trade. Sadly, economics is a complex field that is largely left out of the discourse today. It is, perhaps, considered either too technocratic or too ideologically laden to take seriously. Nevertheless, we have to remember that economics was originally and may be again primarily a theory of the moral order; the fact that it is about the pragmatic matters of business and money, shunned by the cultural elite, does not make it any less significant a field of study in terms of its moral implications.

by Sebastian Benthall at June 17, 2018 05:37 PM

June 16, 2018

Ph.D. student

fancier: scripts to help manage your Twitter account, in Python

My Twitter account has been a source of great entertainment, distraction, and abuse over the years. It is time that I brought it under control. I am too proud and too cheap to buy a professional grade Twitter account manager, and so I’ve begun developing a new suite of tools in Python that will perform the necessary tasks for me.

I’ve decided to name these tools fancier, because the art and science of breeding domestic pigeons is called pigeon fancying. Go figure.

The project is now available on GitHub, and of course I welcome any collaboration or feedback!

At the time of this writing, the project has only one feature: it searches through who you follow on Twitter, finds which accounts are both inactive in 90 days and don’t follow you back, and then unfollows them.

This is a common thing to try to do when grooming and/or professionalizing your Twitter account. I saw a script for this shared in a pastebin years ago, but couldn’t find it again. There are some on-line services that will help you do this, but they charge a fee to do it at scale. Ergo: the open source solution. Voila!
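For the curious, here is a minimal sketch of what that one feature involves, written against the tweepy library. To be clear, this is not the actual fancier code, just an illustration under assumptions of my own: the credential placeholders, variable names, and 90-day threshold are all hypothetical, and a real run would need valid Twitter API keys.

    # Sketch: unfollow accounts that have been inactive for 90 days and don't follow back.
    # Not the fancier implementation; credentials and thresholds are placeholders.
    import datetime
    import tweepy

    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
    api = tweepy.API(auth, wait_on_rate_limit=True)

    me = api.me()
    cutoff = datetime.datetime.utcnow() - datetime.timedelta(days=90)

    for friend_id in api.friends_ids(me.id):
        user = api.get_user(friend_id)

        # show_friendship returns (source, target); source.following tells us
        # whether this friend follows me back.
        source, _ = api.show_friendship(source_id=friend_id, target_id=me.id)
        follows_back = source.following

        # user.status is the account's most recent tweet; it is missing for
        # accounts that have never tweeted (or whose tweets we cannot see).
        last_tweet = getattr(user, "status", None)
        inactive = last_tweet is None or last_tweet.created_at < cutoff

        if inactive and not follows_back:
            print("Unfollowing @{}".format(user.screen_name))
            api.destroy_friendship(friend_id)

The real project presumably handles pagination and rate limiting more carefully than this loop does (friends_ids is paginated, and per-user lookups add up quickly), but the core logic is just this filter plus an unfollow call.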

by Sebastian Benthall at June 16, 2018 08:53 PM

June 12, 2018

Ph.D. student

bodies and liberal publics in the 20th century and today

I finally figured something out, philosophically, that has escaped me for a long time. I feel a little ashamed that it’s taken me so long to get there, since it’s something I’ve been told in one way or another many times before.

Here is the set up: liberalism is justified by universal equivalence between people. This is based in the Enlightenment idea that all people have something in common that makes them part of the same moral order. Recognizing this commonality is an accomplishment of reason and education. Whether this shows up in Habermasian discourse ethics, according to which people may not reason about politics from their personal individual situation, or in the Rawlsian ‘veil of ignorance’, in which moral precepts are intuitively defended under the presumption that one does not know who or where one will be, liberal ideals always require that people leave something out, something that is particular to them. What gets left out is people’s bodies–meaning both their physical characteristics and more broadly their place in lived history. Liberalism was in many ways a challenge to a moral order explicitly based on the body, one that took ancestry and heredity very seriously. So much of the aristocratic regime was about birthright and, literally, “good breeding”. The bourgeois class, relatively self-made, used liberalism to level the moral playing field with the aristocrats.

The Enlightenment was followed by a period of severe theological and scientific racism that was obsessed with establishing differences between people based on their bodies. Institutions that were internally based on liberalism could then subjugate others, by creating an Other that was outside the moral order. Equivalently, sexism too.
Social Darwinism was a threat to liberalism because it threatened to bring back a much older notion of aristocracy. In WWII, the Nazis rallied behind such an ideology and were defeated in the West by a liberal alliance, which then established the liberal international order.

I’ve got to leave out the Cold War and Communism here for a minute, sorry.

Late modern challenges to the liberal ethos gained prominence in activist circles and the American academy during and following the Civil Rights Movement. These were and continue to be challenges because they were trying to bring bodies back into the conversation. The problem is that a rules-based order that is premised on the erasure of differences in bodies is going to be unable to deal with the political tensions that precisely do come from those bodily differences. Because the moral order of the rules was blind to those differences, the rules did not govern them. For many people, that’s an inadequate circumstance.

So here’s where things get murky for me. In recent years, you have had a tension between the liberal center and the progressive left. The progressive left reasserts the political importance of the body (“Black Lives Matter”), and assertions of liberal commonality (“All Lives Matter”) are first “pushed” to the right, but then bump into white supremacy, which is also a reassertion of the political importance of the body, on the far right. It’s worth mentioning Piketty here, I think, because to some extent his work also exposed how under liberal regimes the body has secretly been the organizing principle of wealth through the inheritance of private property.

So what has been undone is the sense, necessary for liberalism, that there is something that everybody has in common which is the basis for moral order. Now everybody is talking about their bodily differences.

That is on the one hand good because people do have bodily differences and those differences are definitely important. But it is bad because if everybody is questioning the moral order it’s hard to say that there really is one. We have today, I submit, a political nihilism crisis due to our inability to philosophically imagine a moral order that accounts for bodily difference.

This is about the Internet too!

Under liberalism, you had an idea that a public was a place people could come to agree on the rules. Some people thought that the Internet would become a gigantic public where everybody could get together and discuss the rules. Instead what happened was that the Internet became a place where everybody could discuss each other’s bodies. People with similar bodies could form counterpublics and realize their shared interests as body-classes. (This piece by David Weinberger critiquing the idea of an ‘echo chamber’ is inspiring.) Within these body-based counterpublics each form their own internal moral order whose purpose is to mobilize their body-interests against other kinds of bodies. I’m talking about both black lives matter and white supremacists here, radical feminists and MRA’s. They are all buffeting liberalism with their body interests.

I can’t say whether this is “good” or “bad” because the moral order is in flux. There is apparently no such thing as neutrality in a world of pervasive body agonism. That may be its finest criticism: body agonism is politically unstable. Body agonism leads to body anarchy.

I’ll conclude with two points. The first is that the Enlightenment view of people having something in common (their personhood, their rationality, etc.) which put them in the same moral order was an intellectual and institutional accomplishment. People do not naturally get outside themselves and put themselves in other people’s shoes; they have to be educated to do it. Perhaps there is a kernel of truth here about what moral education is that transcends liberal education. We have to ask whether today’s body agonism is an enlightened state relative to moral liberalism because it acknowledges a previously hidden descriptive reality of body difference and is no longer so naive, or if body agonism is a kind of ethical regress because it undoes moral education, reducing us to a more selfish state of nature, of body conflict, albeit in a world full of institutions based on something else entirely.

The second point is that there is an alternative to liberal order which appears to be alive and well in many places. This is an order that is not based on individual attitudes for legitimacy, but rather is more about the endurance of institutions for their own sake. I’m referring of course to authoritarianism. Without the pretense of individual equality, authoritarian regimes can focus on maintaining power on their own terms. Authoritarian regimes do not need to govern through moral order. U.S. foreign policy used to be based on the idea that such amoral governance would be shunned. But if body agonism has replaced the U.S. international moral order, we no longer have an ideology to export or enforce abroad.

by Sebastian Benthall at June 12, 2018 02:35 PM

June 06, 2018

Ph.D. student

Re: meritocracy and codes of conduct

In thinking about group governance practices, it seems like setting out explicit norms can be broadly useful, no matter the particular history that's motivated the adoption of those norms. In a way, it's a common lesson of open source collaborative practice: documentation is essential.

Seb wrote:

I have to admit that though I’m quite glad that we have a Code of Conduct now in BigBang, I’m uncomfortable with the ideological presumptions of its rationale and the rejection of ‘meritocracy’.

For what it's worth, I don't think this is an ideological presumption, but an empirical observation. Lots of people have noticed lots of open source communities where the stated goal of decision-making by "meritocracy" has apparently contributed to a culture where homogeneity is preferred (because maybe you measure the vague concept of "merit" in some ways by people who behave most similarly to you) and where harassment is tolerated (because if the harasser has some merit -- again, on that fuzzy scale -- maybe that merit could outweigh the negative consequences of their behavior).

I don't see the critiques of meritocracy as relativistic; that is, it's not an argument that there is no such thing as merit, that nothing can be better than something else. It's just a recognition that many implementations of claimed meritocracy aren't very systematic about evaluation of merit and that common models tend to have side effects that are bad for working communities, especially for communities that want to attract participants from a range of situations and backgrounds, where online collaboration can especially benefit.

To that point, you don't need to mention "merit" or "meritocracy" at all in writing a code of conduct and establishing such a norm doesn't require having had those experiences with "meritocratic" projects in the past. Having an established norm of inclusivity makes it easier for everyone. We don't have to decide on a case-by-case basis whether some harassing behavior needs to be tolerated by, for example, weighing the harm against the contributions of the harasser. When you start contributing to a new project, you don't have to just hope the leadership of that project shares your desire for respectful behavior. Instead, we just agree that we'll follow simple rules and anyone who wants to join in can get a signal of what's expected. Others have tried to describe why the practice can be useful in countering obstacles faced by underrepresented groups, but the tool of a Code of Conduct is in any case useful for all.

Could we use forking as a mechanism for promoting inclusivity rather than documenting a norm? Perhaps; open source projects could just fork whenever it became clear that a contributor was harassing other participants, and that capability is something of a back stop if, for example, harassment occurs and project maintainers do nothing about it. But that only seems effective (and efficient) if the new fork established a code of conduct that set a different expectation of behavior; without the documentary trace (a hallmark of open source software development practice) others can't benefit from that past experience and governance process. While forking is possible in open source development, we don't typically want to encourage it to happen rapidly, because it introduces costs in dividing a community and splitting their efforts. Where inclusivity is a goal of project maintainers, then, it's easier to state that norm up front, just like we state the license up front, and the contribution instructions up front, and the communication tools up front, rather than waiting for a conflict and then forking both the code and collaborators at each decision point. And if a project has a goal of broad use and participation, it wants to demonstrate inclusivity towards casual participants as well as dedicated contributors. A casual user (who provides documentation, files bugs, uses the software and contributes feedback on usability) isn't likely to fork an open source library that they're using if they're treated without respect, they'll just walk away instead.

It could be that some projects (or some developers) don't value inclusivity. That seems unusual for an open source project since such projects typically benefit from increased participation (both at the level of core contributors and at lower-intensity users who provide feedback) and online collaboration typically has the advantage of bringing in participation from outside one's direct neighbors and colleagues. But for the case of the happy lone hacker model, a Code of Conduct might be entirely unnecessary, because the lone contributor isn't interested in developing a community, but instead just wishes to share the fruits of a solitary labor. Permissive licensing allows interested groups with different norms to build on that work without the original author needing to collaborate at all -- and that's great, individuals shouldn't be pressured to collaborate if they don't want to. Indeed, the choice to refuse to set community norms is itself an expression which can be valuable to others; development communities who explicitly refuse to codify norms or developers who refuse to abide by them do others a favor by letting them know what to expect from potential collaboration.

Thanks for the interesting conversation,
Nick

by npdoty@ischool.berkeley.edu at June 06, 2018 09:05 PM

Ph.D. student

Notes on Omi and Winant, 2014, “Ethnicity”

I’m continuing to read Omi and Winant’s Racial Formation in the United States (2014). These are my notes on Chapter 1, “Ethnicity”.

There’s a long period during which the primary theory of race in the United States is a theological and/or “scientific” racism that maintains that different races are biologically different subspecies of humanity because some of them are the cursed descendants of some tribe mentioned in the Old Testament somewhere. In the 1800’s, there was a lot of pseudoscience involving skull measurements trying to back up a biblical literalism that rationalized, e.g., slavery. It was terrible.

Darwinism and improved statistical methods started changing all that, though these theological/”scientific” ideas about race were prominent in the United States until World War II. What took them out of the mainstream was the fact that the Nazis used biological racism to rationalize their evilness, and the U.S. fought them in a war. Jewish intellectuals in the United States in particular (and by now there were a lot of them) forcefully advocated for a different understanding of race based on ethnicity. This theory was dominant as a replacement for theories of scientific racism between WWII and the mid-60’s, when it lost its proponents on the left and morphed into a conservative ideology.

To understand why this happened, it’s important to point out how demographics were changing in the U.S. in the 20th century. The dominant group in the United States in the 1800’s was White Anglo-Saxon Protestants, or WASPs. Around 1870-1920, the U.S. started to get a lot more immigrants from Southern and Eastern Europe, as well as Ireland. These were often economic refugees, though there were also people escaping religious persecution (Jews). Generally speaking these immigrants were not super welcome in the United States, but they came in at what may be thought of as a good time, as there was a lot of economic growth and opportunity for upward mobility in the coming century.

Partly because of this new wave of immigration, there was a lot of interest in different ethnic groups and whether or not they would assimilate into the mainstream Anglo culture. American pragmatism, of the William James and John Dewey type, was an influential philosophical position in this whole scene. The early ethnicity theorists, who were part of the Chicago school of sociology that was pioneering grounded, qualitative sociological methods, were all pragmatists. Robert Park is a big figure here. All these guys apparently ripped off W.E.B. Du Bois, who was trained by William James and didn’t get enough credit because he was black.

Based on the observation of these European immigrants, the ethnicity theorists came to the conclusion that if you lower the structural barriers to participation in the economy, “ethnics” will assimilate to the mainstream culture (melt into the “melting pot”) and everything is fine. You can even tolerate some minor ethnic differences, resulting in the Italian-Americans, the Irish-Americans, and… the African-American. But that was a bigger leap for people.

What happened, as I’ve mentioned, is that scientific racism was discredited in the U.S. partly because it had to fight the Nazis and had so many Jewish intellectuals, who had been on the wrong end of scientific racism in Europe and who in the U.S. were eager to become “ethnics”. These became, in essence, the first “racial liberals”. At the time there was also a lot of displacement of African Americans who were migrating around the U.S. in search of economic opportunities. So in the post-war period ethnicity theorists optimistically proposed that race problems could be solved by treating all minority groups as if they were Southern and Eastern European immigrant groups. Reduce enough barriers and they would assimilate and/or exist in a comfortable equitable pluralism, they thought.

The radicalism of the Civil Rights movement broke the spell here, as racial minorities began to demand not just the kinds of liberties that European ethnics had taken advantage of, but also other changes to institutional racism and corrections to other racial injustices. The injustices persisted in part because racial differences are embodied differently than ethnic differences. This is an academic way of saying that the fact that (for example) black people often look different from white people matters for how society treats them. So treating race as a matter of voluntary cultural affiliation misses the point.

So ethnicity theory, which had been critical for dismantling scientific racism and opening the door for new policies on race, was ultimately rejected by the left. It was picked up by neoconservatives through their policies of “colorblindness”, which Omi and Winant describe in detail in the latter parts of their book.

There is a lot more detail in the chapter, which I found quite enlightening.

My main takeaways:

  • In today’s pitched media battles between “Enlightenment classical liberalism” and “postmodern identity politics”, we totally forget that a lot of American policy is based on American pragmatism, which is definitely neither an Enlightenment position nor postmodern. Everybody should shut up and read The Metaphysical Club.
  • There has been a social center, with views that are seen as center-left or center-right depending on the political winds, since WWII. The adoption of ethnicity theory into the center was a significant cultural accomplishment with a specific history, however ultimately disappointing its legacy has been for anti-racist activists. Any resurgence of scientific racism is a definite backslide.
  • Omi and Winant are convincing about the limits of ethnicity theory in terms of: its dependence on economic “engines of mobility” that allow minorities to take part in economic growth, its failure to recognize the corporeal and ocular aspects of race, and its assumption that assimilation is going to be as appealing to minorities as it is to the white majority.
  • Their arguments about colorblind racism, which are at the end of their book, are going to be doing a lot of work and the value of the new edition of their book, for me at least, really depends on the strength of that theory.

by Sebastian Benthall at June 06, 2018 07:57 PM

June 04, 2018

Ph.D. student

Notes on Racial Formation by Omi and Winant, 2014, Introduction

Beginning to read Omi and Winant, Racial Formation in the United States, Third Edition, 2014. These are notes on the introduction, which outlines the trajectory of their book. This introduction is available on Google Books.

Omi and Winant are sociologists of race and their aim is to provide a coherent theory of race and racism, particularly as a United States phenomenon, and then to tell a history of race in the United States. One of their contentions is that race is a social construct and therefore varies over time. This means, in principle, that racial categories are actionable, and much of their analysis is about how anti-racist and racial reaction movements have transformed the politics and construction of race over the course of U.S. history. On the other hand, much of their work points to the persistence of racial categories despite the categorical changes.

Since the Third Edition, in 2014, comes twenty years after the Second Edition, much of the new material in the book addresses specifically what they call colorblind racial hegemony. This is a response to the commentary and questions around the significance of Barack Obama’s presidency for race in America. It is interesting reading this in 2018, as in just a few brief years it seems like things have changed significantly. It’s a nice test, then, to ask to what extent their theory explains what happened next.

Here is, broadly speaking, what is going on in their book based on the introduction.

First, they discuss prior theories of race found in earlier scholarship. They acknowledge that these are interesting lenses but believe they are ultimately reductionist. They will advance their own theory of racial formation in contrast with these. In the background of this section, but dismissed outright, are the “scientific” racism and religious theories of race that were prevalent before World War II and were used to legitimize what Omi and Winant call racial domination (this has a specific meaning for them). Alternative theories of race that Omi and Winant appear to see as constructive contributions to racial theory include:

  • Race as ethnicity. As an alternative to scientific racism, post WWII thinkers advanced the idea of racial categories as reducing to ethnic categories, which were more granular social units based on shared and to some extent voluntary culture. This conception of race could be used for conflicting political agendas, including both pluralism and assimilation.
  • Race as class. This theory attempted to use economic theories–including both Marxist and market-based analyses–to explain race. Omi and Winant think this–especially the Marxist theory–was a productive lens but ultimately a reductive one. Race cannot be subsumed to class.
  • Race as nationality. Race has been used as the basis for national projects, and is tied up with the idea of “peoplehood”. In colonial projects especially, race and nationality have been used both to motivate the subjugation of foreign peoples and, by resistance movements, to resist conquest.

It is interesting that these theories of race are ambiguous in their political import. Omi and Winant do a good job of showing how multi-dimensional race really is. Ultimately they reject all these theories and propose their own, racial formation theory. I have not read their chapter on it yet, so all I know is that: (a) they don’t shy away from the elephant in the room, which is that there is a distinctively ‘ocular’ component to race–people look different from each other in ways that are hereditary and have been used for political purposes, (b) they maintain that despite this biological aspect of race, the social phenomenon of race is a social construct and primarily one of political projects and interpretations, and (c) race is formed by a combination of action both at the representational level (depicting people in one way or another) and at the institutional level, with the latter determining real resource allocation and the former providing a rationalization for it.

Complete grokking of the racial formation picture is difficult, perhaps. This may be why instead of having a mainstream understanding of racial formation theory, we get reductive and ideological concepts of race active in politics. The latter part of Omi and Winant’s book is their historical account of the “trajectory” of racial politics in the United States, which they see in terms of a pendulum between anti-racist action (with feminist, etc., allies) and “racial reaction”–right-wing movements that subvert the ideas used by the anti-racists and spin them around into a backlash.

Omi and Winant describe three stages of racial politics in United States history:

  • Racial domination. Slavery and Jim Crow before WWII, based on religious and (now discredited, pseudo-)scientific theories of racial difference.
  • Racial hegemony. (Nod to Gramsci) Post-WWII race relations as theories of race-as-ethnicity open up egalitarian ideals. Opens way for Civil Rights movement.
  • Colorblind racism. A phase where the official ideology denies the significance of race in society while institutions continue to reinforce racial differences in a pernicious way. Necessarily tied up with neoliberalism, in Omi and Winant’s view.

The question of why colorblind racism is a form of racism is a subtle one. Omi and Winant do address this question head on, and I am in particular looking forward to their articulation of the point. Their analysis was done during the Obama presidency, which did seem to move the needle on race in a way that we are still seeing the repercussions of today. I’m interested in comparing their analysis with that of Fraser and Gilman. There seem to be some productive alignments and tensions there.

by Sebastian Benthall at June 04, 2018 01:00 PM

June 02, 2018

Ph.D. alumna

The case for quarantining extremist ideas

(Joan Donovan and I wrote the following op-ed for The Guardian.) 

When confronted with white supremacists, newspaper editors should consider ‘strategic silence’

 ‘The KKK of the 1920s considered media coverage their most effective recruitment tactic.’ Photograph: Library of Congress

George Lincoln Rockwell, the head of the American Nazi party, had a simple media strategy in the 1960s. He wrote in his autobiography: “Only by forcing the Jews to spread our message with their facilities could we have any hope of success in counteracting their left-wing, racemixing propaganda!”

Campus by campus, from Harvard to Brown to Columbia, he would use the violence of his ideas and brawn of his followers to become headline news. To compel media coverage, Rockwell needed: “(1) A smashing, dramatic approach which could not be ignored, without exposing the most blatant press censorship, and (2) a super-tough, hard-core of young fighting men to enable such a dramatic presentation to the public.” He understood what other groups competing for media attention knew too well: a movement could only be successful if the media amplified their message.

Contemporary Jewish community groups challenged journalists to consider not covering white supremacists’ ideas. They called this strategy “quarantine”, and it involved working with community organizations to minimize public confrontations and provide local journalists with enough context to understand why the American Nazi party was not newsworthy.

In regions where quarantine was deployed successfully, violence remained minimal and Rockwell was unable to recruit new party members. The press in those areas was aware that amplification served the agenda of the American Nazi party, so informed journalists employed strategic silence to reduce public harm.

The Media Manipulation research initiative at the Data & Society institute is concerned precisely with the legacy of this battle in discourse and the way that modern extremists undermine journalists and set media agendas. Media has always had the ability to publish or amplify particular voices, perspectives and incidents. In choosing stories and voices they will or will not prioritize, editors weigh the benefits and costs of coverage against potential social consequences. In doing so, they help create broader societal values. We call this willingness to avoid amplifying extremist messages “strategic silence”.

Editors used to engage in strategic silence – set agendas, omit extremist ideas and manage voices – without knowing they were doing so. Yet the online context has enhanced extremists’ abilities to create controversies, prompting newsrooms to justify covering their spectacles. Because competition for audience is increasingly fierce and financially consequential, longstanding newsroom norms have come undone. We believe that journalists do not rebuild reputation through a race to the bottom. Rather, we think that it’s imperative that newsrooms actively take the high ground and re-embrace strategic silence in order to defy extremists’ platforms for spreading hate.

Strategic silence is not a new idea. The Ku Klux Klan of the 1920s considered media coverage their most effective recruitment tactic and accordingly cultivated friendly journalists. According to Felix Harcourt, thousands of readers joined the KKK after the New York World ran a three-week chronicle of the group in 1921. Catholic, Jewish and black presses of the 1920s consciously differed from Protestant-owned mainstream papers in their coverage of the Klan, conspicuously avoiding giving the group unnecessary attention. The black press called this use of editorial discretion in the public interest “dignified silence”, and limited their reporting to KKK follies, such as canceled parades, rejected donations and resignations. Some mainstream journalists also grew suspicious of the KKK’s attempts to bait them with camera-ready spectacles. Eventually coverage declined.

The KKK was so intent on getting the coverage they sought that they threatened violence and white boycotts of advertisers. Knowing they could bait coverage with violence, white vigilante groups of the 1960s staged cross burnings and engaged in high-profile murders and church bombings. Civil rights protesters countered white violence with black stillness, especially during lunch counter sit-ins. Journalists and editors had to make moral choices of which voices to privilege, and they chose those of peace and justice, championing stories of black resilience and shutting out white extremism. This was strategic silence in action, and it saved lives.

The emphasis of strategic silence must be placed on the strategic over the silencing. Every story requires a choice and the recent turn toward providing equal coverage to dangerous, antisocial opinions requires acknowledging the suffering that such reporting causes. Even attempts to cover extremism critically can result in the media disseminating the methods that hate groups aim to spread, such as when Virginia’s Westmoreland News reproduced in full a local KKK recruitment flier on its front page. Media outlets who cannot argue that their reporting benefits the goal of a just and ethical society must opt for silence.

Newsrooms must understand that even with the best of intentions, they can find themselves being used by extremists. By contrast, they must also understand they have the power to defy the goals of hate groups by optimizing for core American values of equality, respect and civil discourse. All Americans have the right to speak their minds, but not every person deserves to have their opinions amplified, particularly when their goals are to sow violence, hatred and chaos.

If telling stories didn’t change lives, journalists would never have started in their careers. We know that words matter and that coverage makes a difference. In this era of increasing violence and extremism, we appeal to editors to choose strategic silence over publishing stories that fuel the radicalization of their readers.

(Visit the original version at The Guardian to read the comments and help support their organization, as a sign of appreciation for their willingness to publish our work.)

by zephoria at June 02, 2018 01:39 AM

May 27, 2018

Ph.D. student

Notes on Pasquale, “Tech Platforms and the Knowledge Problem”, 2018

I’ve taken a close look at Frank Pasquale’s recent article, “Tech Platforms and the Knowledge Problem” in American Affairs. This is a topic that Pasquale has had his finger on the pulse of for a long time, and I think with this recent articulation he’s really on to something. It’s an area that’s a bit of an attractor state in tech policy thinking at the moment, and as I appear to be in that mix more than ever before, I wanted to take a minute to parse out Frank’s view of the state of the art.

Here’s the setup: In 1945, Hayek points out that the economy needs to be coordinated somehow, and that this coordination is the main economic use of information/knowledge. Hayek sees this knowledge as distributed, with coordination accomplished through the price mechanism. Today we have giant centralizing organizations like Google and Amazon mediating markets, and it’s possible that these have taken on the kind of ‘central planning’ role that Hayek didn’t want. There is a status quo where these companies run things in an unregulated way. Pasquale, being a bit of a regulatory hawk, not unreasonably thinks this may be disappointing and traces out two different modes of regulatory action that could respond to the alleged tech company dominance.

He does this with a nice binary opposition between Jeffersonians, who want to break up the big companies into smaller ones, and Hamiltonians, who want to keep the companies big but regulate them as utilities. His choice of Proper Nouns is a little odd to me, since many of his Hamiltonians are socialists and that doesn’t sound very Hamiltonian to me, but whatever: what can you do, writing for Americans? This table sums up some of the contrasts. Where I’m introducing new components I’m putting in a question mark (?).

Jeffersonian | Hamiltonian
Classical competition | Schumpeterian competition
Open Markets Institute, Lina Khan | Big is Beautiful, Rob Atkinson; Evgeny Morozov; fully automated luxury communism
Regulatory capture (?) | Natural monopoly
Block mergers: unfair bargaining power | Encourage mergers: better service quality
Allow data flows to third parties to reduce market barriers | Security feudalism to prevent runaway data; regulate to increase market barriers
Absentee ownership reduces corporate responsibility | Many small companies, each unaccountable with little to lose, reduces corporate responsibility
Bargaining power of massive firms a problem | Lobbying power of massive firms a problem (?)
Exit | Voice
Monopoly reduces consumer choice | Centralized paternalistic AI is better than consumer choice
Monopoly abuses fixed by competition | Monopoly abuses fixed by regulation
Distrust complex, obscure corporate accountability | Distrust small companies and entrepreneurs
Platforms lower quality; killing competition | Platforms improve quality via data size, AI advances; economies of scale
Antitrust law | Public utility law
FTC | Federal Search Commission?
Libertarianism | Technocracy
Capitalism | Socialism
Smallholding entrepreneur is hero | Responsible regulator/executive is hero

There is a lot going on here, but I think the article does a good job of developing two sides of a dialectic about tech companies and their regulation that’s been emerging. These framings extend beyond the context of the article. A lot of blockchain proponents are Jeffersonian, and their opponents are Hamiltonian, in this schema.

I don’t have much to add at this point except for the observation that it’s very hard to judge the “natural” amount of industrial concentration in these areas, in part because of the crudeness of the way we measure concentration. We easily pay attention to the top five or ten companies in a sector, but we do so by ignoring the hundred or thousand or more very small companies. It’s simply incorrect to say that there is only one search engine or social network; rather, the size distribution of the many, many search engines and social networks is very skewed, like a heavy-tailed or log-normal distribution. There may be perfectly neutral, “complex systems” oriented explanations for this distribution that make it very robust even against a number of possible interventions.
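
To make the point concrete, here is a rough sketch of my own (not from Pasquale’s article): a simulated “market” of firms with log-normally distributed sizes, where a top-five share and a Herfindahl-Hirschman Index both look highly concentrated even though thousands of small firms exist. All numbers are made up for illustration.

    import numpy as np

    rng = np.random.default_rng(42)
    # Hypothetical firm sizes drawn from a heavy-tailed (log-normal) distribution
    sizes = np.sort(rng.lognormal(mean=0.0, sigma=2.5, size=2000))[::-1]
    shares = sizes / sizes.sum()

    print("Number of firms:", len(sizes))
    print("Top-5 share of the market: %.2f" % shares[:5].sum())
    print("Herfindahl-Hirschman Index: %.3f" % (shares ** 2).sum())
    # A handful of leaders dominate the concentration measures,
    # while thousands of small firms persist in the tail.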

If that’s true, there will always be many small companies and a few market leaders in the tech sector. The small companies will benefit from Jeffersonian policies, and those invested in the market leaders will benefit (in some sense) from Hamiltonian policies. The question of which strategy to take then becomes a political matter: it depends on the self-interest of differently positioned people in the socio-economic matrix. Or, alternatively, there is no tension between pursuing both kinds of policy agenda, because they target different groups that will persist no matter what regime is in place.

by Sebastian Benthall at May 27, 2018 10:29 PM

May 26, 2018

Ph.D. student

population traits, culture traits, and racial projects: a methods challenge #ica18

In a recent paper I’ve been working on with Mark Hannah that he’s presenting this week at the International Communications Association conference, we take on the question of whether and how “big data” can be used to study the culture of a population.

By “big data” we meant, roughly, large social media data sets. The pitfalls of using this sort of data for any general study of a population are perhaps best articulated by Tufekci (2014). In short: studies based on social media data often sample on the dependent variable because they only consider the people representing themselves on social media, who are only a small portion of the population. To put it another way, the sample suffers from the 1% rule of Internet cultures: for any on-line community, only 1% create content, 10% interact with the content somehow, and the rest lurk. The behavior and attitudes of the lurkers, along with any field effects in the “background” of the data (latent variables in the social field of production), are out of band and therefore opaque to the analyst.

By “the culture of a population”, we meant something specific: the distribution of values, beliefs, dispositions, and tastes of a particular group of people. The best source we found on this was Marsden and Swingle (1994), an article from a time before the Internet had started to transform academia. Then, and perhaps now, the best way to study the distribution of culture across a broad population was a survey. The idea is that you sample the population according to some responsible statistics, you ask them some questions about their values, beliefs, dispositions, and tastes, and you report the results. Voilà!

(Given the methodological divergence here, the fact that many people, especially ‘people on the Internet’, now view culture mainly through the lens of other people on the Internet is obviously a huge problem. Most people are not in this sample, and yet we pretend that it is representative because it is easily available for analysis. Hence, our concept of culture (or cultures) is screwy, reflecting far more than is warranted whatever sorts of cultures happen to flourish in a pseudonymous, bot-ridden, commercial attention economy.)

Can we productively combine social media data with surveys methods to get a better method for studying the culture of a population? We think so. We propose the following as a general method framework:

(1) Figure out the population of interest by their stable, independent ‘population traits’ and look for their activity on social media. Sample from this.

(2) Do exploratory data analysis to inductively get content themes and observations about social structure from this data.

(3) Use the inductively generated themes from step (2) to design a survey addressing cultural traits of the population (beliefs, values, dispositions, tastes).

(4) Conduct a stratified sample specifically across social media creators, synthesizers (e.g. people who like, retweet, and respond), and the general population and/or known audience, and distribute the survey.

(5) Extrapolate the results to general conclusions.

(6) Validate the conclusions with other data, or note discrepancies for future iterations.
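
As an illustration of step (4), here is a minimal sketch of drawing a stratified sample across participation types. It assumes a hypothetical pandas DataFrame `users` with a `participation` column labeling each account as “creator”, “synthesizer”, or “lurker”; the column name, labels, and quotas are invented for this example and are not from our study.

    import pandas as pd

    def stratified_invites(users: pd.DataFrame, per_stratum: int = 200) -> pd.DataFrame:
        """Draw an equal-sized random sample of survey invitees from each participation stratum."""
        return (
            users.groupby("participation", group_keys=False)
                 .apply(lambda g: g.sample(min(per_stratum, len(g)), random_state=0))
        )

    # Toy data standing in for the accounts identified in steps (1)-(2):
    users = pd.DataFrame({
        "user_id": range(1000),
        "participation": ["creator"] * 10 + ["synthesizer"] * 100 + ["lurker"] * 890,
    })
    print(stratified_invites(users)["participation"].value_counts())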

I feel pretty good about this framework as a step forward, except that in the interest of time we had to sidestep what is maybe the most interesting question raised by it: what is the difference between a population trait and a cultural trait?

Here’s what we were thinking:

Population trait | Cultural trait
Location | Twitter use (creator, synthesizer, lurker, none)
Age | Political views: left, right, center
Permanent unique identifier | Attitude towards media
— | Preferred news source
— | Pepsi or Coke?

One thing to note: we decided that traits about media production and consumption were a subtype of cultural traits. I.e., if you use Twitter, that’s a particular cultural trait that may be correlated with other cultural traits. That makes the problem of sampling on the dependent variable explicit.

But the other thing to note is that there are certain categories that we did not put on this list. Which ones? Gender, race, etc. Why not? Because choosing whether these are population traits or cultural traits opens a big bag of worms that is the subject of active political contest. That discussion was well beyond the scope of the paper!

The dicey thing about this kind of research is that we explicitly designed it to try to avoid investigator bias. That includes the bias of seeing the world through social categories that we might otherwise naturalize or reify. Naturally, though, if we were to actually conduct this method on a sample, such as, I dunno, a sample of Twitter-using academics, we would very quickly discover that certain social categories (men, women, person of color, etc.) were themes people talked about and so would be included as survey items under cultural traits.

That is not terrible. It’s probably safer to do that than to treat them as immutable, independent properties of a person. It does seem to leave something out, though. For example, say one were to identify race as a cultural trait and then ask people to identify with a race. Then one takes the results, does a factor analysis, and discovers a factor that combines a racial affinity with media preferences and participation rates. One then identifies the prevalence of this factor in a certain region with a certain age demographic. One might object to this result as a representation of a racial category as entailing certain cultural categories, leaving out the cultural minority within a racial demographic that wants more representation.
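
To make the worry concrete, here is a hedged sketch of that kind of factor analysis on entirely simulated survey responses; the item names and the single simulated disposition are invented for illustration and do not come from any real data.

    import numpy as np
    import pandas as pd
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(1)
    n = 500
    disposition = rng.normal(size=n)  # one simulated latent disposition
    survey = pd.DataFrame({
        "racial_affinity":    disposition + rng.normal(scale=0.5, size=n),
        "media_preference":   disposition + rng.normal(scale=0.5, size=n),
        "participation_rate": disposition + rng.normal(scale=0.5, size=n),
    })

    fa = FactorAnalysis(n_components=1).fit(survey)
    print(pd.Series(fa.components_[0], index=survey.columns))
    # The recovered factor mixes the identification item with the media items,
    # which is exactly the kind of construct the objection above is about.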

This is upsetting to some people when, for example, Facebook does this and allows advertisers to target things based on “ethnic affinity”. Presumably, Facebook is doing just this kind of factor analysis when they identify these categories.

Arguably, that’s not what this sort of science is for. But the fact that the objection seems pertinent is an informative intuition in its own right.

Maybe the right framework for understanding why this is problematic is Omi and Winant’s racial formation theory (2014). I’m just getting into this theory recently, at the recommendation of Bruce Haynes, who I look up to as an authority on race in America. According to racial formation theory, racial categories are stable because they include both representations of groups of people as having certain qualities and social structures controlling the distribution of resources. So the white/black divide in the U.S. is both racial stereotypes and segregating urban policy; the divide is stable because the material and cultural factors reinforce each other.

This view is enlightening because it helps explain why hereditary phenotype, representations of people based on hereditary phenotype, requests for people to identify with a race even when this may not make any sense, policies about inheritance and schooling, etc. all are part of the same complex. When we were setting out to develop the method described above, we were trying to correct for a sampling bias in media while testing for the distribution of culture across some objectively determinable population variables. But the objective qualities (such as zip code) are themselves functions of the cultural traits when considered over the course of time. In short, our model, which just tabulates individual differences without looking at temporal mechanisms, is naive.

But it’s a start, if only to an interesting discussion.

References

Marsden, Peter V., and Joseph F. Swingle. “Conceptualizing and measuring culture in surveys: Values, strategies, and symbols.” Poetics 22.4 (1994): 269-289.

Omi, Michael, and Howard Winant. Racial formation in the United States. Routledge, 2014.

Tufekci, Zeynep. “Big Questions for Social Media Big Data: Representativeness, Validity and Other Methodological Pitfalls.” ICWSM 14 (2014): 505-514.

by Sebastian Benthall at May 26, 2018 09:12 PM

May 25, 2018

MIMS 2012

Where do you get ideas for blog posts?

A dam, blocking all of your great ideas – Photo by Anthony Da Cruz on Unsplash

People often ask me, “Where do you get ideas for blog posts?” I have many sources, but my most effective one is simple: pay attention to the questions people ask you.

When a person asks you a question, it means they’re seeking your advice or expertise to fill a gap in their knowledge. Take your answer, and write it down.

This technique works so well because it overcomes the two biggest barriers to blogging: “What should I write about?” and, “Does anyone care what I have to say?”

It overcomes the first barrier by giving you a specific topic to write about. Our minds contain a lifetime of experiences to draw from, but when you try to find something specific to write about, you draw a blank. All that accumulated knowledge is locked up in your head, as if trapped behind a dam. A question cracks the dam and starts the flow of ideas.

It overcomes the second barrier (“will anyone care?”) because you already have your first reader: the question asker. Congratulations! You just infinitely increased your reader base. And chances are they aren’t the only person who’s ever asked this question, or ever will ask it. When this question comes up in the future, you’ll be more articulate when responding, and you can keep building your audience by sharing your post.

Having at least one reader has another benefit: you now have a specific person to write for. A leading cause of poorly written blog posts is that the author doesn’t know who they’re writing for (trust me, I’ve made this mistake plenty). This leads them to try to write for everyone. Which means their writing connects with no one. The resulting article is a Frankenstein’s monster of ideas bolted together that aimlessly stumbles around mumbling and groaning and scaring away the villagers.

Instead, you can avoid this fate by conjuring up the question asker in your mind, and write your response as if you’re talking to them. Instead of creating a monster, your post will sound like a polished, engaging TED speaker.

A final benefit to answering a specific question is that it keeps your post focused. Just answer the question, and call it a day. No more, no less. Another leading cause of Frankenstein’s monster blog posts is that they don’t have a specific point they’re trying to make. So the post tries to say everything there is to say about a subject, or deviates down side roads, or doesn’t say anything remarkable, or is just plain confusing. Answering a specific question keeps these temptations at bay.

So the next time you’re wondering where to get started blogging, start by paying attention to the questions people ask you. Then write down your answers.

p.s. Yes, I applied the advice in this post to the post itself :)

p.p.s. If you’d like more writing advice, I created a page to house all of the tips and tricks I’ve picked up from books and articles over the years. Check it out at jlzych.com/writing.

by Jeff Zych at May 25, 2018 08:27 PM

May 24, 2018

Ph.D. student

thinking about meritocracy in open source communities

There has been a trend in open source development culture over the past ten years or so. It is the rejection of ‘meritocracy’. Just now, I saw this Post-Meritocracy Manifesto, originally created by Coraline Ada Ehmke. It is exactly what it sounds like: an explicit rejection of meritocracy, specifically in open source development. It captures a recent progressive wing of software development culture. It is attracting signatories.

I believe this is a “trend” because I noticed a more subtle expression of similar ideas a few months ago. This came up when we were coming up with a Code of Conduct for BigBang. We wound up picking the Contributor Covenant Code of Conduct, though there are still some open questions about how to integrate it with our Governance policy.

This Contributor Covenant is widely adopted and the language of it seems good to me. I was surprised though when I found the rationale for it specifically mentioned meritocracy as a problem the code of conduct was trying to avoid:

Marginalized people also suffer some of the unintended consequences of dogmatic insistence on meritocratic principles of governance. Studies have shown that organizational cultures that value meritocracy often result in greater inequality. People with “merit” are often excused for their bad behavior in public spaces based on the value of their technical contributions. Meritocracy also naively assumes a level playing field, in which everyone has access to the same resources, free time, and common life experiences to draw upon. These factors and more make contributing to open source a daunting prospect for many people, especially women and other underrepresented people.

If it looks familiar, it may be because it was written by the same author, Coraline Ada Ehmke.

I have to admit that though I’m quite glad that we have a Code of Conduct now in BigBang, I’m uncomfortable with the ideological presumptions of its rationale and the rejection of ‘meritocracy’. There is a lot packed into this paragraph that is open to productive disagreement and which is not necessary for a commitment to the general point that harassment is bad for an open source community.

Perhaps this would be easier for me to ignore if this political framing did not mirror so many other political tensions today, and if open source governance were not something I’ve been so invested in understanding. I’ve taught a course on open source management, and BigBang spun out of that effort as an experiment in scientific analysis of open source communities. I am, I believe, deep in on this topic.

So what’s the problem? The problem is that I think there’s something painfully misaligned between the criticism of meritocracy in culture at large and open source development, which is a very particular kind of organizational form. There is also perhaps a misalignment between the progressive politics of inclusion expressed in these manifestos and what many open source communities are really trying to accomplish. Surely there must be some kind of merit that is not in scare quotes, or else there would not be any good open source software to use or raise a fuss about.

Though it does not directly address the issue, I’m reminded of an old email discussion on the Numpy mailing list that I found when I was trying to do ethnographic work on the Scientific Python community. It was a message by John Hunter, the creator of Matplotlib, responding to concerns about corporate control over NumPy that were raised when Travis Oliphant, the leader of NumPy, started Continuum Analytics. Hunter quite thoughtfully, in my opinion, debunked the idea that open source governance should be a ‘democracy’, as many people assume institutions ought to be by default. After a long discussion of how Travis had great merit as a leader, he argued:

Democracy is something that many of us have grown up by default to consider as the right solution to many, if not most or, problems of governance. I believe it is a solution to a specific problem of governance. I do not believe democracy is a panacea or an ideal solution for most problems: rather it is the right solution for which the consequences of failure are too high. In a state (by which I mean a government with a power to subject its people to its will by force of arms) where the consequences of failure to submit include the death, dismemberment, or imprisonment of dissenters, democracy is a safeguard against the excesses of the powerful. Generally, there is no reason to believe that the simple majority of people polled is the “best” or “right” answer, but there is also no reason to believe that those who hold power will rule beneficiently. The democratic ability of the people to check to the rule of the few and powerful is essential to insure the survival of the minority.

In open source software development, we face none of these problems. Our power to fork is precisely the power the minority in a tyranical democracy lacks: noone will kill us for going off the reservation. We are free to use the product or not, to modify it or not, to enhance it or not.

The power to fork is not abstract: it is essential. matplotlib, and chaco, both rely *heavily* on agg, the Antigrain C++ rendering library. At some point many years ago, Maxim, the author of Agg, decided to change the license of Agg (circa version 2.5) to GPL rather than BSD. Obviously, this was a non-starter for projects like mpl, scipy and chaco which assumed BSD licensing terms. Unfortunately, Maxim had a new employer which appeared to us to be dictating the terms and our best arguments fell on deaf ears. No matter: mpl and Enthought chaco have continued to ship agg 2.4, pre-GPL, and I think that less than 1% of our users have even noticed. Yes, we forked the project, and yes, noone has noticed. To me this is the ultimate reason why governance of open source, free projects does not need to be democratic. As painful as a fork may be, it is the ultimate antidote to a leader who may not have your interests in mind. It is an antidote that we citizens in a state government may not have.

It is true that numpy exists in a privileged position in a way that matplotlib or scipy does not. Numpy is the core. Yes, Continuum is different than STScI because Travis is both the lead of Numpy and the lead of the company sponsoring numpy. These are important differences. In the worst cases, we might imagine that these differences will negatively impact numpy and associated tools. But these worst case scenarios that we imagine will most likely simply distract us from what is going on: Travis, one of the most prolific and valuable contributers to the scientific python community, has decided to refocus his efforts to do more. And that is a very happy moment for all of us.

This is a nice articulation of how forking, not voting, is the most powerful governance mechanism in open source development, and how it changes what our default assumptions about leadership ought to be. A critical but, I think, unacknowledged question is how the possibility of forking interacts with the critique of meritocracy in organizations in general, and specifically what that means for community inclusiveness as a goal in open source communities. I don’t think it’s straightforward.

by Sebastian Benthall at May 24, 2018 08:36 PM

Inequality perceived through implicit factor analysis and its implications for emergent social forms

Vox published an interview with Keith Payne, author of The Broken Ladder.

My understanding is that the thesis of the book is that income inequality has a measurable effect on public health, especially certain kinds of chronic illnesses. The proposed mechanism for this effect is the psychological state of those perceiving themselves to be relatively worse off. This is a hardwired mechanism, it would seem, and one that is being turned on more and more by socioeconomic conditions today.

I’m happy to take this argument for granted until I hear otherwise. I’m interested in (and am jotting notes down here, not having read the book) the physics of this mechanism. It’s part of a larger puzzle about social forms, emergent social properties, and factor analysis that I’ve written about in some other posts.

Here’s the idea: income inequality is a very specific kind of social metric and not one that is easy to perceive directly. Measuring it from tax records, which should be straightforward, is fraught with technicalities. Therefore, it is highly implausible that direct perception of this metric is what causes the psychological impact of inequality.

Therefore, there must be one or more mediating factors between income inequality as an economic fact and psychological inequality as a mental phenomenon. Let’s suppose–because it’s actually what we should see as a ‘null hypothesis’–that there are many, many factors linking these phenomena. Some may be common causes of income inequality and psychological inequality, such as entrenched forms of social inequality that prevent equal access to resources and are internalized somehow. Others may be direct perception of the impact of inequality, such as seeing other people flying in higher class seats, or (ahem) hearing other people talk about flying at all. And yet we seem comfortable deriving from this very complex mess a generalized sense of inequality and its impact, and now that’s one of the most pressing political topics today.

I want to argue that when a person perceives inequality in a general way, they are in effect performing a kind of factor analysis on their perceptions of other people. When we compare ourselves with others, we can do so on a large number of dimensions. Cognitively, we can’t grok all of it–we have to reduce the feature space, and so we come to understand the world through a few blunt indicators that combine many other correlated data points into one.

These blunt categories can suggest that there is structure in the world that isn’t really there, but rather is an artifact of constraints on human perception and cognition. In other words, downward causation would happen in part through a dimensionality reduction of social perception.
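
Here is a speculative sketch of that dimensionality reduction, using PCA on simulated perceptions of peers; every variable and the data itself are invented purely to illustrate the idea, not to model any real survey.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(7)
    n_peers = 300
    signal = rng.normal(size=n_peers)  # a shared socioeconomic signal behind many cues
    # Six noisy, correlated cues about each peer (income, housing, travel, etc.)
    cues = np.column_stack([signal + rng.normal(scale=0.7, size=n_peers) for _ in range(6)])

    pca = PCA(n_components=1).fit(cues)
    standing = pca.transform(cues).ravel()  # one blunt "relative standing" indicator per peer
    print("Share of perceptual variance captured by the single factor: %.2f"
          % pca.explained_variance_ratio_[0])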

On the other hand, if those constraints are regular enough, they may in turn impose a kind of structure on the social world (upward causation). If downward causation and upward causation reinforced each other, then that would create some stable social conditions. But there’s also no guarantee that stable social perceptions en masse track the real conditions. There may be systematic biases.

I’m not sure where this line of inquiry goes, to be honest. It needs more work.

by Sebastian Benthall at May 24, 2018 03:34 PM

May 21, 2018

Ph.D. student

General intelligence, social privilege, and causal inference from factor analysis

I came upon this excellent essay by Cosma Shalizi about how factor analysis has been spuriously used to support the scientific theory of General Intelligence (i.e., IQ). Shalizi, if you don’t know, is one of the best statisticians around. He writes really well and isn’t afraid to point out major blunders in things. He’s one of my favorite academics, and I don’t think I’m alone in this assessment.

First, a motive: Shalizi writes this essay because he thinks the scientific theory of General Intelligence, or a g factor that is some real property of the mind, is wrong. This theory is famous because (a) a lot of people DO believe in IQ as a real feature of the mind, and (b) a significant percentage of these people believe that IQ is hereditary and correlated with race, and (c) the ideas in (b) are used to justify pernicious and unjust social policy. Shalizi, being a principled statistician, appears to take scientific objection to (a) independently of his objection to (c), and argues persuasively that we can reject (a). How?

Shalizi’s point is that the general intelligence factor g is a latent variable that was supposedly discovered using a factor analysis of several different intelligence tests that were supposed to be independent of each other. You can take the data from these data sets and do a dimensionality reduction (that’s what factor analysis is) and get something that looks like a single factor, just as you can take a set of cars and do a dimensionality reduction and get something that looks like a single factor, “size”. The problem is that “intelligence”, just like “size”, can also be a combination of many other factors that are only indirectly associated with each other (height, length, mass, mass of specific components independent of each other, etc.). Once you have many different independent factors combining into one single reduced “dimension” of analysis, you no longer have a coherent causal story of how your general latent variable caused the phenomenon. You have, effectively, correlation without demonstrated causation and, moreover, the correlation is a construct of your data analysis method, and so isn’t really even telling you what correlations normally tell you.

To put it another way: the fact that some people seem to be generally smarter than other people can be due to thousands of independent factors that happen to combine when people apply themselves to different kinds of tasks. If some people were NOT seeming generally smarter than others, that would allow you to reject the hypothesis that there was general intelligence. But the mere presence of the aggregate phenomenon does not prove the existence of a real latent variable. In fact, Shalizi goes on to say, when you do the right kinds of tests to see if there really is a latent factor of ‘general intelligence’, you find that there isn’t any. And so it’s just the persistent and possibly motivated interpretation of the observational data that allows the stubborn myth of general intelligence to continue.
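
Here is a minimal simulation sketch of this point, mine rather than Shalizi’s: test scores generated from many small, fully independent abilities still produce a dominant first component when you run a dimensionality reduction, even though no single latent “g” exists by construction.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    n_people, n_abilities, n_tests = 2000, 500, 8

    # Abilities are independent by construction: there is no latent "g" here.
    abilities = rng.normal(size=(n_people, n_abilities))
    # Each "test" draws on a random half of the abilities, so tests overlap.
    tests = np.column_stack([
        abilities[:, rng.choice(n_abilities, n_abilities // 2, replace=False)].mean(axis=1)
        for _ in range(n_tests)
    ])

    pca = PCA().fit(tests)
    print("Variance explained by the first component: %.2f" % pca.explained_variance_ratio_[0])
    # A dominant first component appears anyway, purely from overlapping aggregation.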

Are you following so far? If you are, it’s likely because you were already skeptical of IQ and its racial correlates to begin with. Now I’m going to switch it up though…

It is fairly common for educated people in the United States (for example) to talk about “privilege” of social groups. White privilege, male privilege–don’t tell me you haven’t at least heard of this stuff before; it is literally everywhere on the center-left news. Privilege here is considered to be a general factor that adheres in certain social groups. It is reinforced by all manner of social conditioning, especially through implicit bias in individual decision-making. This bias is so powerful it extends not just to cases of direct discrimination but also to cases where discrimination happens in a mediated way, for example through technical design. The evidence for these kinds of social privileging effects is obvious: we see inequality everywhere, and we can see who is more powerful and benefited by the status quo and who isn’t.

You see where this is going now. I have the momentum. I can’t stop. Here it goes: Maybe this whole story about social privilege is as spuriously supported as the story about general intelligence? What if both narratives were over-interpretations of data that serve a political purpose, but which are not in fact based on sound causal inference techniques?

How could this be? Well, we might gather a lot of data about people: wealth, status, neighborhood, lifespan, etc. And then we could run a dimensionality reduction/factor analysis and get a significant factor that we could name “privilege” or “power”. Potentially that’s a single, real, latent variable. But also potentially it’s hundreds of independent factors spuriously combined into one. It would probably, if I had to bet on it, wind up looking a lot like the factor for “general intelligence”, which plays into the whole controversy about whether and how privilege and intelligence get confused. You must have heard the debates about, say, representation in the technical (or other high-status, high-paying) work force? One side says the smart people get hired; the other side says it’s the privileged (white male) people that get hired. Some jerk suggests that maybe the white males are smarter, and he gets fired. It’s a mess.

I’m offering you a pill right now. It’s not the red pill. It’s not the blue pill. It’s some other colored pill. Green?

There is no such thing as either general intelligence or group based social privilege. Each of these are the results of sloppy data compression over thousands of factors with a loose and subtle correlational structure. The reason why patterns of social behavior that we see are so robust against interventions is that each intervention can work against only one or two of these thousands of factors at a time. Discovering the real causal structure here is hard partly because the effect sizes are very small. Anybody with a simple explanation, especially a politically convenient explanation, is lying to you but also probably lying to themselves. We live in a complex world that resists our understanding and our actions to change it, though it can be better understood and changed through sound statistics. Most people aren’t bothering to do this, and that’s why the world is so dumb right now.

by Sebastian Benthall at May 21, 2018 12:05 AM

May 20, 2018

Ph.D. student

Goodbye, TheListserve!

Today I got an email I never thought I’d get: a message from the creators of TheListserve saying they were closing down the service after over 6 years.

TheListserve was a fantastic idea: it was a mailing list that allowed one person, randomly selected from the subscribers each day, to email everyone else.

It was an experiment in creating a different kind of conversational space on-line. And it worked great! Tens of thousands of subscribers, really interesting content–a space unlike most others in social media. You really did get a daily email with what some random person thought was the most interesting thing they had to say.

I was inspired enough by TheListserve to write a Twitter bot based on similar principles, TheTweetserve. Maybe the Twitter bot was also inspired by Habermas. It was not nearly as successful or interesting as TheListserve, for reasons that you could deduce if you thought about it.

Six years ago, “The Internet” was a very different imaginary. There was this idea that a lightweight intervention could capture some of the magic of serendipity that scale and connection had to offer, and that this was going to be really, really big.

It was, I guess, but then the charm wore off.

What’s happened now, I think, is that we’ve been so exposed to connection and scale that novelty has worn off. We now find ourselves exposed on-line mainly to the imposing weight of statistical aggregates and regressions to the mean. After years of messages to TheListserve, it started, somehow, to seem formulaic. You would get honest, encouraging advice, or a self-promotion. It became, after thousands of emails, a genre in itself.

I wonder if people who are younger and less jaded than I am are still finding and creating cool corners of the Internet. What I hear about more and more now are the ugly parts; they make the news. The Internet used to be full of creative chaos. Now it is so heavily instrumented and commercialized I get the sense that the next generation will see it much like I saw radio or television when I was growing up: as a medium dominated by companies, large and small. Something you had to work hard to break into as a professional choice or otherwise not at all.

by Sebastian Benthall at May 20, 2018 02:42 AM

May 15, 2018

Ph.D. student

“Context, Causality, and Information Flow: Implications for Privacy Engineering, Security, and Data Economics” <– My dissertation

In the last two weeks, I’ve completed, presented, and filed my dissertation, and commenced as a doctor of philosophy. In a word, I’ve PhinisheD!

The title of my dissertation is attention-grabbing, inviting, provocative, and impressive:

“Context, Causality, and Information Flow: Implications for Privacy Engineering, Security, and Data Economics”

If you’re reading this, you are probably wondering, “How can I drop everything and start reading that hot dissertation right now?”

Look no further: here is a link to the PDF.

You can also check out this slide deck from my “defense”. It covers the highlights.

I’ll be blogging about this material as I break it out into more digestible forms over time. For now, I’m obviously honored by any interest anybody takes in this work and happy to answer questions about it.

by Sebastian Benthall at May 15, 2018 05:24 PM

April 30, 2018

Center for Technology, Society & Policy

Data for Good Competition — Showcase and Judging

The four teams in CTSP’s Facebook-sponsored Data for Good Competition will be presenting today in CITRIS and CTSP’s Tech & Data for Good Showcase Day. The event will be streamed through Facebook Live on the CTSP Facebook page. After deliberations from the judges, the top team will receive $5000 and the runner-up will receive $2000.

Data for Good Judges:

Joy Bonaguro, Chief Data Officer, City and County of San Francisco

Joy Bonaguro is the first Chief Data Officer for the City and County of San Francisco, where she manages the City’s open data program. Joy has spent more than a decade working at the nexus of public policy, data, and technology. Joy earned her Masters from UC Berkeley’s Goldman School of Public Policy, where she focused on IT policy.

Lisa García Bedolla, Professor, UC Berkeley Graduate School of Education and Director of UC Berkeley’s Institute of Governmental Studies

Professor Lisa García Bedolla is a Professor in the Graduate School of Education and Director of the Institute of Governmental Studies. Professor García Bedolla uses the tools of social science to reveal the causes of political and economic inequalities in the United States. Her current projects include the development of a multi-dimensional data system, called Data for Social Good, that can be used to track and improve organizing efforts on the ground to empower low-income communities of color. Professor García Bedolla earned her PhD in political science from Yale University and her BA in Latin American Studies and Comparative Literature from UC Berkeley.

Chaya Nayak, Research Manager, Public Policy, Data for Good at Facebook

Chaya Nayak is a Public Policy Research Manager at Facebook, where she leads Facebook’s Data for Good Initiative around how to use data to generate positive social impact and address policy issues. Chaya received a Masters of Public Policy from the Goldman School of Public Policy at UC Berkeley, where she focused on the intersection between Public Policy, Technology, and Utilizing Data for Social Impact.

Michael Valle, Manager, Technology Policy and Planning for California’s Office of Statewide Health Planning and Development

Michael D. Valle is Manager of Technology Policy and Planning at the California Office of Statewide Health Planning and Development, where he oversees the digital product portfolio. Michael has worked since 2009 in various roles within the California Health and Human Services Agency. In 2014 he helped launch the first statewide health open data portal in California. Michael also serves as Adjunct Professor of Political Science at American River College.

Judging:

As detailed in the call for proposals, the teams will be judged on the quality of their application of data science skills, their demonstration of how the proposal or project addresses a social good problem, and their advancement of the use of public open data, all while showing how the proposal or project mitigates potential pitfalls.

by Daniel Griffin at April 30, 2018 07:06 PM

April 19, 2018

Ph.D. student

So you want to start a data science institute? Achieving sustainability

This is a post that first appeared on the Software Sustainability Institute’s blog and was co-authored by myself, Alejandra Gonzalez-Beltran, Robert Haines, James Hetherington, Chris Holdgraf, Heiko Mueller, Martin O’Reilly, Tomas Petricek, Jake VanderPlas (authors in alphabetical order) during a workshop at the Alan Turing Institute.

Introduction: Sustaining Data Science and Research Software Engineering

Data and software have enmeshed themselves in the academic world, and are a growing force in most academic disciplines (many of which are not traditionally seen as “data-intensive”). Many universities wish to improve their ability to create software tools, enable efficient data-intensive collaborations, and spread the use of “data science” methods in the academic community.

The fundamentally cross-disciplinary nature of such activities has led to a common model: the creation of institutes or organisations not bound to a particular department or discipline, focusing on the skills and tools that are common across the academic world. However, creating institutes with a cross-university mandate and non-standard academic practices is challenging. These organisations often do not fit into the “traditional” academic model of institutes or departments, and involve work that is not incentivised or rewarded under traditional academic metrics. To add to this challenge, the combination of quantitative and qualitative skills needed is also highly in-demand in non-academic sectors. This raises the question: how do you create such institutes so that they attract top-notch candidates, sustain themselves over time, and provide value both to members of the group as well as the broader university community?

In recent years many universities have experimented with organisational structures aimed at achieving this goal. They focus on combining research software, data analytics, and training for the broader academic world, and intentionally cut across scientific disciplines. Two such groups are the Moore-Sloan Data Science Environments based in the USA and the Research Software Engineer groups based in the UK. Representatives from both countries recently met at the Alan Turing Institute in London for the RSE4DataScience18 Workshop to discuss their collective experiences in creating successful data science and research software institutes.

This article synthesises the collective experience of these groups, with a focus on challenges and solutions around the topic of sustainability. To put it bluntly: a sustainable institute depends on sustaining the people within it. This article focuses on three topics that have proven crucial.

  1. Creating consistent and competitive funding models.
  2. Building a positive culture and an environment where all members feel valued.
  3. Defining career trajectories that cater to the diverse goals of members within the organisation.

We’ll discuss each of these points below, and provide some suggestions, tips, and lessons-learned in accomplishing each.

An Aside on Nomenclature

The terms Research Software Engineer (i.e. RSE; most often used by UK partners) and Data Scientist (most often used by USA partners) have slightly different connotations, but we will not dwell on those aspects here (see Research Software Engineers and Data Scientists: More in Common for some more thoughts on this). In the current document, we will mostly use the terms RSE and Data Scientist interchangeably, to denote the broad range of positions that focus on software-intensive and data-intensive research within academia. In practice, we find that most people flexibly operate in both worlds simultaneously.

Challenges & Proposed Solutions

Challenge: Financial sustainability

How can institutions find the financial support to run an RSE program?

The primary challenge for sustainability of this type of program is often financial: how do you raise the funding necessary to hire data scientists and support their research? While this doesn’t require paying industry-leading rates for similar work, it does require resources to compensate people comfortably. In practice, institutions have come at this from a number of angles:

Private Funding: Funding from private philanthropic organisations has been instrumental in getting some of these programs off the ground: for example, the Moore-Sloan Data Science Initiative funded these types of programs for five years at the University of Washington (UW), UC Berkeley, and New York University (NYU). This is probably best viewed as seed funding to help the institutions get on their feet, with the goal of seeking other funding sources for the long term.

Organisational Grants: Many granting organisations (such as the NSF or the UK Research Councils) have seen the importance of software to research, and are beginning to make funding available specifically for cross-disciplinary software-related and data science efforts. Examples are the Alan Turing Institute, mainly funded by the UK Engineering and Physical Sciences Research Council (EPSRC) and the NSF IGERT grant awarded to UW, which funded the interdisciplinary graduate program centered on the data science institute there.

Project-based Grants: There are also opportunities to gain funding for the development of software or to carry out scientific work that requires creating new tools. For example, several members of UC Berkeley were awarded a grant from the Sloan Foundation to hire developers for the NumPy software project. The grant provided enough funding to pay competitive wages with the broader tech community in the Bay Area.

Individual Grants: For organisations that give their RSEs principal investigator status, grants to individuals’ research programs can be a route to sustainable funding, particularly as granting organisations become more aware of and attuned to the importance of software in science. In the UK, the EPSRC has run two rounds of Research Software Engineer Fellowships, supporting leaders in the research software field for a period of five years to establish their RSE groups. Another example of a small grant for individuals promoting and supporting RSE activities is the Software Sustainability Institute fellowship.

Paid Consulting: Some RSE organisations have adopted a paid consulting model, in which they fund their institute by consulting with groups both inside and outside the university. This requires finding common goals with non-academic organisations, and agreeing to create open tools in order to accomplish those goals. An example is the University of Manchester, where, as part of their role in Research IT, RSEs provide paid, on-demand technical research consulting services for members of the University community. Having a group of experts on campus able to do this sort of work is broadly beneficial to the University as a whole.

University Funding: Universities generally spend part of their budget on in-house services for students and researchers; a prime example is IT departments. When RSE institutes establish themselves as providing a benefit to the University community, the University administration may see fit to support those efforts: this has been the case at UW, where the University funds faculty positions within the data science institute. In addition, several RSE groups provide on-demand training sessions for research groups on campus in exchange for proceeds from research grants.

Information Technology (IT) Connections: IT organisations in universities are generally well-funded, and their present-day role is often far removed from their original mission of supporting computational research. One vision for sustainability is to reimagine RSE programs as the “research wing” of university IT, to make use of the relatively large IT funding stream to help enable more efficient computational research. This model has been implemented at the University of Manchester, where Research IT sits directly within the Division of IT Services. Some baseline funding is provided to support things like research application support and training, and RSE projects are funded via cost recovery.

Professors of Practice: Many U.S. universities have the notion of “professors of practice” or “clinical professors,” which often exist in professional schools like medicine, public policy, business, and law. In these positions, experts in specialised fields are recruited as faculty for their experience outside of traditional academic research. Such positions are typically salaried, but not tenure-track, with these faculty evaluated on different qualities than traditional faculty. Professors of practice are typically able to teach specialised courses, advise students, influence the direction of their departments, and get institutional support for various projects. Such a model could be applied to support academic data science efforts, perhaps by adopting the “professor of practice” pattern within computational science departments.

Research Librarians: We also see similarities in how academic libraries have supported stable, long-term career paths for their staff. Many academic librarians are experts in both a particular domain specialty and in library science, and spend much of their time helping members of the community with their research. At some universities, librarians have tenure-track positions equivalent to those in academic departments, while at others, librarians occupy a distinct administrative or staff track that often has substantial long-term job security and career progression. These types of institutions and positions provide a precedent for the kinds of flexible, yet stable academic careers that our data science institutes support.

Challenge: Community cohesion and personal value

How do we create a successful environment where people feel valued?

From our experience, there are four main points that help create an enjoyable and successful working environment and make people feel valued in their role.

Physical Space. The physical space that hosts the group plays an important role in creating an enjoyable working environment. In most cases there will be a lot of collaboration, both between people within the group and with people from other departments across the university. Having facilities (e.g. meeting spaces) that support collaborative work on software projects will be a big facilitator for successful outputs.

Get Started Early. Another important aspect of creating a successful environment is to connect the group to other researchers within the university early on. It is important to inform people about the tasks and services the group provides, and to involve people early on who are well connected and respected within the university so that they can promote and champion the group. This helps get the effort off the ground early, spread the word, and bring in further opportunities.

Celebrate Each Other’s Work. While it may not be possible to convince the broader academic community to treat software as first-class research output, data science organisations should explicitly recognise many forms of scientific output, including tools and software, analytics workflows, or non-standard written communication. This is especially true for projects where there is no “owner”, such as major open-source projects. Just because your name isn’t “first” doesn’t mean you can’t make a valuable contribution to science. Creating a culture that celebrates these efforts makes individuals feel that their work is valued.

Allow Free Headspace. The roles of individuals should (i) enable them to work in collaboration with researchers from other domains (e.g., in a support role on their research projects) and (ii) also allow them to explore their own ‘research’ ideas. Involvement in research projects not only helps these projects develop reliable and reproducible results but can also be an important way to identify areas and tasks that are currently poorly supported by existing research software. Having free headspace allows individuals to further pursue ideas that help solve the identified tasks. There are many examples of successful open-source software projects that started as small side projects.

Challenge: Preparing members for a diversity of careers

How do we establish career trajectories that value people’s skills and experience in this new inter-disciplinary domain?

The final dimension that we consider is that of the career progression of data scientists. Their career path generally differs from the traditional academic progression, and the traditional academic incentives and assessment criteria do not necessarily apply to the work they perform.

Professional Development. A data science institute should prepare its staff in both technical skills (such as software development best practices and data-intensive activities) and soft skills (such as teamwork and communication) that will allow them to be ready for their next career step in multiple interdisciplinary settings. Whether in academia or industry, data science is inherently collaborative, and requires working in teams with diverse skillsets.

Where Next. Most individuals will not spend their entire careers within a data science institute, which means their time there must be seen as adequately preparing them for their next step. We envision that a data scientist could progress in their career either by staying in academia or by moving to industry. For the former, career progression might involve moving into new supervisory roles, attaining PI status, or building research groups. For the latter, the acquired technical and soft skills are valuable in industrial settings and should allow for a smooth transition. Members should be encouraged to collaborate or communicate with industry partners in order to understand the roles that data analytics and software play in those organisations.

The Revolving Door. The career trajectory from academia to industry has traditionally been mostly a one-way street, with academic researchers and industry engineers living in different worlds. However, the value of data analytic methods cuts across both groups, and offers opportunities to learn from one another. We believe a Data Science Institute should encourage strong collaborations and a bi-directional and fluid interchange between academic and industrial endeavours. This will enable a more rapid spread of tools and best-practices, and support the intermixing of career paths between research and industry. We see the institute as ‘the revolving door’ with movement of personnel between different research and commercial roles, rather than a one-time commitment where members must choose one or the other.

Final Thoughts

Though these efforts are still young, we have already seen the dividends of supporting RSEs and Data Scientists within our institutions in the USA and the UK. We hope this document can provide a roadmap for other institutions to develop sustainable programs in support of cross-disciplinary software and research.

by R. Stuart Geiger at April 19, 2018 07:00 AM

Research Software Engineers and Data Scientists: More in Common

This is a post that first appeared on the Software Sustainability Institute’s blog and was co-authored by Matthew Archer, Stephen Dowsland, Rosa Filgueira, R. Stuart Geiger, Alejandra Gonzalez-Beltran, Robert Haines, James Hetherington, Christopher Holdgraf, Sanaz Jabbari Bayandor, David Mawdsley, Heiko Mueller, Tom Redfern, Martin O’Reilly, Valentina Staneva, Mark Turner, Jake VanderPlas, Kirstie Whitaker (authors in alphabetical order) during a workshop at the Alan Turing Institute.

In our institutions, we employ multidisciplinary research staff who work with colleagues across many research fields to use and create software to understand and exploit research data. These researchers collaborate with others across the academy to create software and models to understand, predict and classify data not just as a service to advance the research of others, but also as scholars with opinions about computational research as a field, making supportive interventions to advance the practice of science.

Some of our institutions use the term “data scientist” to refer to our team members, others use “research software engineer” (RSE), and some use both. Where both terms are used, the difference seems to be that data scientists in an academic context focus more on using software to understand data, while research software engineers more often make software libraries for others to use. However, in some places, one or other term is used to cover both, according to local tradition.

What we have in common

Regardless of job title, we hold in common many of the skills involved and the goal of driving the use of open and reproducible research practices.

Shared skill focuses include:

  • Literate programming: writing code to be read by humans.
  • Performant programming: the time or memory used by the code really matters.
  • Algorithmic understanding: you need to know what the maths of the code you’re working with actually does.
  • Coding for a product: software and scripts need to live beyond the author, being used by others.
  • Verification and testing: it’s important that the script does what you think it does.
  • Scaling beyond the laptop: because performance matters, cloud and HPC skills are important.
  • Data wrangling: parsing, managing, linking and cleaning research data in an arcane variety of file formats.
  • Interactivity: the visual display of quantitative information.

Shared attitudes and approaches to work are also important commonalities:

  • Multidisciplinary agility: the ability to learn what you need from a new research domain as you begin a collaboration.
  • Navigating the research landscape: learning the techniques, languages, libraries and algorithms you need as you need them.
  • Managing impostor syndrome: as generalists, we know we don’t know the detail of our methods quite as well as the focused specialists, and we know how to work with experts when we need to.

Our differences emerge from historical context

The very close relationship between the two professional titles is not an accident. In different places, different tactics have been tried to resolve a common set of frustrations that arise as scholars struggle to make effective use of information technology.

In the UK, the RSE Groups have tried to move computational research forward by embracing a service culture while retaining participation in the academic community, sometimes described as being both a “craftsperson and a scholar”, or science-as-a-service. We believe we make a real difference to computational research as a discipline by helping individual research groups use and create software more effectively, and that this creates genuine value for researchers rather than producing published tools that no one uses to do research.

The Moore-Sloan Data Science Environments (MSDSE) in the US are working to establish Data Science as a new academic interdisciplinary field, bringing together researchers from domain and methodology fields to collectively develop best practices and software for academic research. While these institutes also facilitate collaboration across academia, their funding models are less based on a service model than those of UK RSE groups, and more based on bringing graduate students, postdocs, research staff, and faculty from across academia together in a shared environment.

Although these approaches differ strongly, we nevertheless see that the skills, behaviours and attitudes used by the people struggling to make this work are very similar. Both movements are tackling similar issues, but in different institutional contexts. We took diverging paths from a common starting point, but now find ourselves envisaging a shared future.

The Alan Turing Institute in the UK straddles the two models, with both a Research Engineering Group following a science-as-a-service model and comprising both Data Scientists and RSEs, and a wider collaborative academic data science engagement across eleven partner universities.

Recommendations

Observing this convergence, we recommend:

  • Create adverts and job descriptions that are welcoming to people who identify as one or the other title: the important thing is to attract and retain the right people.
  • Standardised nomenclature is important, but over-specification is harmful. Don’t try too hard to delineate the exact differences in the responsibilities of the two roles: people can and will move between projects and focuses, and this is a good thing.
  • These roles, titles, groups, and fields are emerging and defined differently across institutions. It is important to have clear messaging to various stakeholders about the responsibilities and expectations of people in these roles.
  • Be open to evolving roles for team members, and ensure that stable, long-term career paths exist to support those who have taken the risk to work in emerging roles.
  • Don’t restrict your recruitment drive to people who have worked with one or other of these titles: the skills you need could be found in someone whose earlier roles used the other term.
  • Don’t be afraid to embrace service models to allow financial and institutional sustainability, but always maintain the genuine academic collaboration needed for research to flourish.

by R. Stuart Geiger at April 19, 2018 07:00 AM

April 16, 2018

Ph.D. student

Keeping computation open to interpretation: Ethnographers, step right in, please

This is a post that first appeared on the ETHOSLab Blog, written by myself, Bastian Jørgensen (PhD fellow at Technologies in Practice, ITU), Michael Hockenhull (PhD fellow at Technologies in Practice, ITU), and Mace Ojala (Research Assistant at Technologies in Practice, ITU).

Introduction: When is data science?

We recently held a workshop at ETHOS Lab and the Data as Relation project at ITU Copenhagen, as part of Stuart Geiger’s seminar talk on “Computational Ethnography and the Ethnography of Computation: The Case for Context” on 26 March 2018. Tapping into his valuable experience and his position as a staff ethnographer at the Berkeley Institute for Data Science, we wanted to think together about the role that computational methods could play in ethnographic and interpretivist research. Over the past decade, computational methods have exploded in popularity across academia, including in the humanities and interpretive social sciences. Stuart’s talk made an argument for a broad, collaborative, and pluralistic approach to the intersection of computation and ethnography, arguing that ethnography has many roles to play in what is often called “data science.”

Based on Stuart’s talk the previous day, we began the workshop with three different distinctions about how ethnographers can work with computation and computational data: First, the “ethnography of computation” is using traditional qualitative methods to study the social, organizational, and epistemic life of computation in a particular context: how do people build, produce, work with, and relate to systems of computation in their everyday life and work? Ethnographers have been doing such ethnographies of computation for some time, and many frameworks — from actor-network theory (Callon 1986; Law 1992) to “technography” (Jansen and Vellema 2011; Bucher 2012) — have been useful to think about how to put computation at the center of these research projects.

Second, “computational ethnography” involves extending the traditional qualitative toolkit of methods to include the computational analysis of data from a fieldsite, particularly when working with trace or archival data that ethnographers have not generated themselves. Computational ethnography is not replacing methods like interviews and participant-observation with such methods, but supplementing them. Frameworks like “trace ethnography” (Geiger and Ribes 2010) and “computational grounded theory” (Nelson 2017) have been useful ways of thinking about how to integrate these new methods alongside traditional qualitative methods, while upholding the particular epistemological commitments that make ethnography a rich, holistic, situated, iterative, and inductive method. Stuart walked through a few Jupyter notebooks from a recent paper (Geiger and Halfaker, 2017) in which they replicated and extended a previously published study about bots in Wikipedia. In this project, they found computational methods quite useful in identifying cases for qualitative inquiry, and they also used ethnographic methods to inform a set of computational analyses in ways that were more specific to Wikipedians’ local understandings of conflict and cooperation than previous research.

Finally, the “computation of ethnography” (thanks to Mace for this phrasing) involves applying computational methods to the qualitative data that ethnographers generate themselves, like interview transcripts or typed fieldnotes. Qualitative researchers have long used software tools like NVivo, Atlas.TI, or MaxQDA to assist in the storage and analysis of data, but what are the possibilities and pitfalls of storing and analyzing our qualitative data in various computational ways? Even ethnographers who use more standard word processing tools like Google Docs or Scrivener for fieldnotes and interviews can use computational methods to organize, index, tag, annotate, aggregate and analyze their data. From topic modeling of text data to semantic tagging of concepts to network analyses of people and objects mentioned, there are many possibilities. As multi-sited and collaborative ethnography are also growing, what tools let us collect, store, and analyze data from multiple ethnographers around the world? Finally, how should ethnographers deal with the documents and software code that circulate in their fieldsites, which often need to be linked to their interviews, fieldnotes, memos, and manuscripts?
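To make the “computation of ethnography” idea slightly more concrete, here is a minimal, hypothetical sketch (not something we built at the workshop) of topic modeling a folder of typed fieldnotes or interview transcripts with off-the-shelf text analysis tools; the file paths, stop-word choice, and number of topics are placeholders:

    import glob
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    # Placeholder path: a folder of plain-text fieldnotes or transcripts
    docs = [open(path, encoding="utf-8").read() for path in glob.glob("fieldnotes/*.txt")]

    vectorizer = CountVectorizer(stop_words="english", max_df=0.9, min_df=2)
    dtm = vectorizer.fit_transform(docs)              # document-term matrix

    lda = LatentDirichletAllocation(n_components=8, random_state=0)  # 8 topics, an arbitrary choice
    lda.fit(dtm)

    # Print the top ten terms per topic as candidate themes to read back against the fieldnotes
    terms = vectorizer.get_feature_names_out()
    for i, topic in enumerate(lda.components_):
        top = [terms[j] for j in topic.argsort()[-10:][::-1]]
        print(f"Topic {i}: {', '.join(top)}")

Such output is, of course, only a starting point for interpretation, not a substitute for it.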

These are not hard-and-fast distinctions, but instead should be seen as sensitizing concepts that draw our attention to different aspects of the computation / ethnography intersection. In many cases, we spoke about doing all three (or wanting to do all three) in our own projects. Like all definitions, they blur as we look closer at them, but this does not mean we should abandon the distinctions. For example, computation of ethnography can also strongly overlap with computational ethnography, particularly when thinking about how to analyze unstructured qualitative data, as in Nelson’s computational grounded theory. Yet it was productive to have different terms to refer to particular scopings: our discussion of using topic modeling of interview transcripts to help identify common themes was different from our discussion of analyzing activity logs to see how prevalent a particular phenomenon was, which was in turn different from our discussion of a situated investigation of the invisible work of code and data maintenance.

We then worked through these issues in the specific context of two cases from ETHOS Lab and Data as Relation project, where Bastian and Michael are both studying public sector organizations in Denmark that work with vast quantities and qualities of data and are often seeking to become more “data-driven.” In the Danish tax administration (SKAT) and the Municipality of Copenhagen’s Department of Cultural and Recreational Activities, there are many projects that are attempting to leverage data further in various ways. For Michael, the challenge is to be able to trace how method assemblages and sociotechnical imaginaries of data travel between private organisations and sites to public organisations, and influence the way data is worked with and what possibilities data are associated with. Whilst doing participant-observation, Michael suggested that a “computation of ethnography” approach might make it easier to trace connections between disparate sites and actors.

The ethnographer enters the perfect information organization

In one group, we explored the idea of the Perfect Information Organisation, or PIO, in which there are traces available of all workplace activity. This nightmarish panopticon construction would include video and audio surveillance of every meeting and interaction, detailed traces of every activity online, and detailed minutes on meetings and decisions. All of this would be available for the ethnographer, as she went about her work.

The PIO is of course a thought experiment designed to provoke the common desire or fantasy for more data. This is something we all often feel in our fieldwork, but we felt this raised many implicit risks if one combined and extended the three types of ethnography detailed earlier on. By thinking about the PIO, ludicrous though it might be, we would challenge ourselves to look at what sort of questions we could and should ask in such a situation. We came up with the following questions, although there are bound to be many more:

  1. What do members know about the data being collected?
  2. Does it change their behaviour?
  3. What takes place outside of the “surveilled” space? I.e. what happens at the bar after work?
  4. What spills out of the organisation, like when members of the organization visit other sites as part of their work?
  5. How can such a system be slowed down and/or “disconcerted” (a concept from Helen Verran that we have found useful in thinking about data in context)?
  6. How can such a system even exist as an assemblage of many surveillance technologies, and would not the weight of the labour sustaining it outstrip its ability to function?

What the list shows is that although the PIO may come off as a wet dream of the data-obsessed or fetishistic researcher, even it has limits as a hypothetical thought experiment. Information is always situated in a context, often defined in relation to where and what information is not available. Yet as we often see in our own fieldwork (and constantly in the public sphere), the fantasies of total or perfect information persist for powerful reasons. Our suggestion was that such a thought experiment would be a good initial exercise for a researcher about to embark on a mixed-methods/ANT/trace ethnography inspired research approach in a site heavily infused with many data sources. The challenge of what topics and questions to ask in ethnography is always as difficult as asking what kind of data to work with, even if we put computational methods and trace data aside. We brought up many tradeoffs in our own fieldwork, such as when getting access to archival data means that the ethnographer is not spending as much time in interviews or participant observation.

This also touches on some of the central questions which the workshop provoked but didn’t answer: what is the phenomenon we are studying, in any given situation? Is it the social life in an organisation, that social life as distributed across a platform and “real life” interactions, or the platform’s affordances and traces themselves? While there is always a risk of making problematic methodological trade-offs in trying to get both digital and more classic ethnographic traces, there is also, perhaps, a methodological necessity in paying attention to the many different types of traces available when the phenomenon we are interested in takes place both online, at the bar, and elsewhere. We concluded that ethnography’s intentionally iterative, inductive, and flexible approach to research applies to these methodological tradeoffs as well: as you get access to new data (whether through traditional fieldwork or digitized sources), ask what you are no longer focusing on as you attend to something new.

In the end, these reflections bear a distinct risk of indulging in fantasy: the belief that we can ever achieve a full view (the view from nowhere), or a holistic or even total view of social life in all its myriad forms, whether digital or analog. The principles of ethnography are most certainly not about exhausting the phenomenon, so we do well to remain wary of this fantasy. Today, ethnography is often theorized as documentation of an encounter between an ethnographer and people in a particular context, with the partial perspectives to be embraced. However, we do believe that it is productive to think through the PIO and to not write off in advance traces which do not correspond with an orthodox view of what ethnography might consider proper material or data.

The perfect total information ethnographers

In the second group, the conversation originated from the wish of an ethnographer to gain access to a document-sharing platform in the organization where they are doing fieldwork. Of course, it is not just one platform, but a loose collection of platforms in various stages of construction, adoption, and acceptance. As we know, ethnographers are careful not only about the wishes of others but also about their own wishes — how would it change their ethnography if they had access to countless internal documents, records, archives, and logs? So rather than “just doing (something)”, the ethnographer took a step back and became puzzled over wanting such a strange thing in the first place.

The imaginaries of access to data

In the group, we speculated about what would happen if the ethnographer got their wish for access to as much data as possible from the field. Would a “Google Street view” of the site, recorded from head-mounted 360° cameras, be too much? Probably. On highly mediated sites — Wikipedia serving as an example during the workshop — plenty of traces are publicly left by design. Such archival completeness is a property of some media in some organizations, but not others. In ethnographies of computation, the wish for total access brings some particular problems (or opportunities), as a plenitude of traces and documents are being shared on digital platforms. We talked about three potential problems, the first and most obvious being that the ethnographer drowns in the available data. A second problem is for the ethnographer to believe that getting more access will provide them with a more “whole” or full picture of the situation. The final problem we discussed was whether the ethnographer would end up replicating the problem of the people in the organization they are studying, which is working out how to deal with a multitude of heterogeneous data in their work.

Beyond these problems, we also asked why the ethnographer would want access to the many documents and traces in the first place. What ideas of ethnography and epistemology does such a desire imply? Would the ethnographer want to “power up” their analysis by mimicking the rhetoric of “the more data the better”? Would the ethnographer add their own data (in the form of field notes and pictures) and, through visualisations, show a different perspective on the situation? Even though we reject the notion of a panoptic view on various grounds, we are still left with the question of how much data we need or should want as ethnographers. Imagine that we are puzzled by a particular discussion: would we benefit from having access to a large pile of documents or logs that we could computationally search through for further information? Or would more traditional ethnographic methods like interviews actually be better for the goals of ethnography?

Bringing data home

“Bringing data home” is an idea and phrase that originates from the fieldsite and captures something about the intentions that are playing out. One must wonder what is implied by that idea, and what the idea does. A straightforward reading would be that it describes a strategic and managerial struggle to cut off a particular data intermediary — a middleman — and restore a more direct data-relationship between the agency and the actors using the data they provide. A product/design struggle, so to say. Pushing the speculations further, what might that homecoming, that completion of the redesign of data products, be like? As ethnographers, and participants in the events we write about, when do we say “come home, data”, or “go home, data”? What ethnography or computation will be left to do, when data has arrived home? In all, we found a common theme in ethnographic fieldwork — that our own positionalities and situations often reflect those of the people in our fieldsites.

Concluding thoughts – why this was interesting/a good idea

It is interesting that our two groups did not explicitly coordinate our topics – we split up and independently arrived at very similar thought experiments and provocations. We reflected that this is likely because all of us attending the workshop were in similar kinds of situations, as we are all struggling with the dual problem of studying computation as an object and working with computation as a method. We found that these kinds of speculative thought experiments were useful in helping us define what we mean by ethnography. What are the principles, practices, and procedures that we mean when we use this term, as opposed to any number of others that we could also use to describe this kind of work? We did not want to do too much boundary work or policing what is and isn’t “real” ethnography, but we did want to reflect on how our positionality as ethnographers is different than, say, digital humanities or computational social science.

We left with no single, simple answers, but more questions — as is probably appropriate. Where do contributions of ethnography of computation, computational ethnography, or computation of ethnography go in the future? We instead offer a few next steps:

Of all the various fields and disciplines that have taken up ethnography in a computational context, what are their various theories, methods, approaches, commitments, and tools? For example, how is work that has more of a home in STS different from that in CSCW or anthropology? Should ethnographies of computation, computational ethnography, and computation of ethnography look the same across fields and disciplines, or different?

Of all the various ethnographies of computation taking place in different contexts, what are we finding about the ways in which people relate to computation? Ethnography is good at coming up with case studies, but we often struggle (or hesitate) to generalize across cases. Our workshop brought together a diverse group of people who were studying different kinds of topics, cases, sites, peoples, and doing so from different disciplines, methods, and epistemologies. Not everyone at the workshop primarily identified as an ethnographer, which was also productive. We found this mixed group was a great way to force us to make our assumptions explicit, in ways we often get away with when we work closer to home.

Of computational ethnography, did we propose some new, operationalizable mathematical approaches to working with trace data in context? How much should the analysis of trace data depend on the ethnographer’s personal intuition about how to collect and analyze data? How much should computational ethnography involve the integration of interviews and fieldnotes alongside computational analyses?

Of computation of ethnography, what does “tooling up” involve? What do our current tools do well, and what do we struggle to do with them? How do their affordances shape the expectations and epistemologies we have of ethnography? How can we decouple the interfaces from their data, such as exporting the back-end database used by a more standard QDA program and analyzing it programmatically using text analysis packages, and find useful cuts to intervene in, in an ethnographic fashion, without engineering everything from some set of first principles? What skills would be useful in doing so?

by R. Stuart Geiger at April 16, 2018 07:00 AM

April 05, 2018

adjunct professor

Syllabi

I’m getting a lot of requests for my syllabi. Here are links to my most recent courses. Please note that we changed our LMS in 2014 and so some of my older course syllabi are missing. I’m going to round those up.

  • Cybersecurity in Context (Fall 2018)
  • Cybersecurity Reading Group (Spring 2018, Fall 2017, Spring 2017)
  • Privacy and Security Lab (Spring 2018, Spring 2017)
  • Technology Policy Reading Group (AI & ML; Free Speech: Private Regulation of Speech; CRISPR) (Spring 2017)
  • Privacy Law for Technologists (Fall 2017, Fall 2016)
  • Problem-Based Learning: The Future of Digital Consumer Protection (Fall 2017)
  • Problem-Based Learning: Educational Technology: Design Policy and Law (Spring 2016)
  • Computer Crime Law (Fall 2015, Fall 2014, Fall 2013, Fall 2012, Fall 2011)
  • FTC Privacy Seminar (Spring 2015, Spring 2010)
  • Internet Law (Spring 2013)
  • Information Privacy Law (Spring 2012, Spring 2009)
  • Samuelson Law, Technology & Public Policy Clinic (Fall 2014, Spring 2014, Fall 2013, Spring 2011, Fall 2010, Fall 2009)

by web at April 05, 2018 05:34 PM

March 30, 2018

MIMS 2014

I Googled Myself (Part 2)

In my last post, I set up an A/B test through Google Optimize and learned Google Tag Manager (GTM), Google Analytics (GA) and Google Data Studio (GDS) along the way. When I was done, I wanted to learn how to integrate Enhanced E-commerce and Adwords into my mock-site, so I set that as my next little project.

As the name suggests, Enhanced E-commerce works best with an e-commerce site—which I don’t quite have. Fortunately, I was able to find a bunch of different mock e-commerce website source code repositories on Github which I could use to bootstrap my own. After some false starts, I found one that worked well for my purposes, based on this repository that made a mock e-commerce site using the “MEAN” stack (MongoDB, Express.js, AngularJS, and node.js).

Forking this repository gave me an opportunity to learn a bit more about modern front-end / back-end website building technologies, which was probably overdue. It was also a chance to brush up on my JavaScript skills. Tackling this new material would have been much more difficult without the use of WebStorm, the JavaScript IDE from the makers of my favorite Python IDE, PyCharm.

Properly implementing Enhanced E-commerce does require some back end development—specifically to render static values on a page that can then be passed to GTM (and ultimately to GA) via the dataLayer. In the source code I inherited, this was done through the nunjucks templating library, which was well suited to the task.

Once again, I used Selenium to simulate traffic to the site. I wanted to have semi-realistic traffic to test the GA pipes, so I modeled consumer preferences off of the beta distribution with α = 2.03 and β = 4.67. That looks something like this:

[Figure: beta distribution of simulated item preferences]

The x value of the beta distribution is normally constrained to the (0,1) interval, but I multiplied it by the number of items in my store to simulate preferences for my customers. So in the graph, the 6th item (according to an arbitrary indexing of the store items) is the most popular, while the 22nd and 23rd items are the least popular.

For the customer basket size, I drew from a Poisson distribution with λ = 3. That looks like this:

[Figure: Poisson distribution of basket sizes]

Although the two distributions do look quite similar, they are actually somewhat different. For one thing, the Poisson distribution is discrete while the beta distribution is continuous—though I do end up dropping all decimal figures when drawing samples from the beta distribution since the items are also discrete. However, the two distributions do serve different purposes in the simulation. The x axis in the beta distribution represents an arbitrary item index, and in the Poisson distribution, it represents the number of items in a customer’s basket.

So putting everything together, the simulation process goes like this: for every customer, we first draw from the Poisson distribution with λ = 3 to determine q, i.e. how many items that customer will purchase. Then we draw q times from the beta distribution to see which items the customer will buy. Then, using Selenium, these items are added to the customer’s basket and the purchase is executed, while sending the Enhanced Ecommerce data to GA via GTM and the dataLayer.
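In outline, the sampling logic looks something like this (a minimal sketch assuming numpy; the store size here is a stand-in, and the Selenium purchase step is omitted):

    import numpy as np

    N_ITEMS = 25            # stand-in for the number of items in the mock store
    ALPHA, BETA = 2.03, 4.67
    LAM = 3

    rng = np.random.default_rng(0)

    def simulate_customer():
        """One customer's basket: Poisson basket size, beta-distributed item picks."""
        q = rng.poisson(LAM)                              # how many items they buy
        picks = rng.beta(ALPHA, BETA, size=q) * N_ITEMS   # scale (0,1) draws up to item indices
        return np.floor(picks).astype(int)                # drop decimals -> discrete item indices

    for _ in range(3):
        print(simulate_customer())   # Selenium would then add these items and check out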

When it came to implementing Adwords, my plan had been to bid on uber obscure keywords that would be super cheap to bid on (think “idle giraffe” or “bellicose baby”), but unfortunately Google requires that your ad links be live, properly hosted websites. Since my website is running on my localhost, Adwords wouldn’t let me create a campaign with my mock e-commerce website 😦

As a workaround, I created a mock search engine results page that my users would navigate to before going to my mock e-commerce site’s homepage. 20% of users would click on my ‘Adwords ad’ for hoody sweatshirts on that page (that’s one of the things my store sells, BTW). The ad link was encoded with the same UTM parameters that would be used in Google Adwords to make sure the ad click is attributed to the correct source, medium, and campaign in GA. After imposing a 40% bounce probability on these users, the remaining ones buy a hoody.
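Tagging the link is just a matter of appending the standard utm_source, utm_medium, and utm_campaign query parameters; here is a minimal sketch (the host, port, and campaign names are placeholders, not the values actually used):

    from urllib.parse import urlencode

    def tag_ad_link(base_url, source, medium, campaign):
        """Append UTM parameters so GA attributes the click to the right campaign."""
        params = urlencode({
            "utm_source": source,      # where the click came from (the mock SERP)
            "utm_medium": medium,      # e.g. "cpc" for a paid ad
            "utm_campaign": campaign,  # the promotion being run
        })
        return f"{base_url}?{params}"

    print(tag_ad_link("http://localhost:3000/", "mock_serp", "cpc", "hoody_promo"))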

It seemed like I might as well use this project as another opportunity to work with GDS, so I went ahead and made another dashboard for my e-commerce website (live link):

[Figure: Google Data Studio dashboard for the mock e-commerce site]

If you notice that the big bar graph in the dashboard above looks a little like the beta distribution from before, that’s not an accident. Seeing the Hoody Promo Conv. Rate (implemented as a Goal in GA) hover around 60%, consistent with the 40% bounce probability imposed on the ad clickers, was another sign things were working as expected.

In my second go-around with GDS, however, I did come up against a few more frustrating limitations. One thing I really wanted to do was create a scorecard element that would tell you the name of the most popular item in the store, but GDS won’t let you do that.

I also wanted to make a histogram, but that is also not supported in GDS. Using my own log data, I did manage to generate the histogram I wanted—of the average order value.

[Figure: histogram of average order value]

I’m pretty sure we’re seeing evidence of the Central Limit Theorem kicking in here. The CLT says that the distribution of sample means—even when drawn from a distribution that is not normal—will tend towards normality as the sample size gets larger.

A few things have me wondering here, however. In this simulation, the sample size is itself a random variable which is never that big. The rule of thumb says that 30 counts as a large sample size, but if you look at the Poisson graph above you’ll see the sample size rarely goes above 8. I’m wondering whether this is mitigated by a large number of samples (i.e. simulated users); the histogram above is based on 50,000 simulated users. Also, because average order values can never be negative, we can only have at best a truncated normal distribution, so unfortunately we cannot graphically verify the symmetry typical of the normal distribution in this case.
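One rough way to poke at this question is to regenerate the per-order averages under different basket-size settings and compare their shapes. The sketch below does this with a made-up, deliberately right-skewed price list (the real store’s prices would differ); the skewness should shrink as the basket-size parameter grows, regardless of how many users are simulated:

    import numpy as np
    from scipy.stats import skew

    rng = np.random.default_rng(1)
    N_ITEMS = 25
    PRICES = rng.lognormal(mean=3.0, sigma=0.8, size=N_ITEMS)  # made-up, right-skewed price list

    def per_order_averages(lam, n_users=50_000):
        """Average item price per simulated order, for a given Poisson basket-size parameter."""
        out = []
        for _ in range(n_users):
            q = rng.poisson(lam)
            if q == 0:
                continue                         # empty baskets produce no order
            items = np.floor(rng.beta(2.03, 4.67, size=q) * N_ITEMS).astype(int)
            out.append(PRICES[items].mean())
        return np.array(out)

    for lam in (3, 30):
        vals = per_order_averages(lam)
        print(f"lambda={lam}: mean={vals.mean():.2f}, skew={skew(vals):.2f}")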

But anyway, that’s just me trying to inject a bit of probability/stats into an otherwise implementation-heavy analytics project. Next I might try to re-implement the mock e-commerce site through something like Shopify or WordPress. We’ll see.

 

by dgreis at March 30, 2018 12:48 PM

March 23, 2018

MIMS 2012

Discovery Kanban 101: My Newest Skillshare Class

I just published my first Skillshare class — Discovery Kanban 101: How to Integrate User-Centered Design with Agile. From the class description:

Learn how to make space for designers and researchers to do user-centered design in an Agile/scrum engineering environment. By creating an explicit Discovery process to focus on customer needs before committing engineers to shipping code, you will unlock design’s potential to deliver great user experiences to your customers.

By the end of this class, you will have built a Discovery Kanban board and learned how to use it to plan and manage the work of your team.

While I was at Optimizely, I implemented a Discovery kanban process to improve the effectiveness of my design team (which I blogged about previously here and here, and spoke about here). I took the lessons I learned from doing that and turned them into a class on Skillshare to help any design leader implement an explicit Discovery process at their organization.

Whether you’re a design manager, a product designer, a program manager, a product manager, or just someone who’s interested in user-centered design, I hope you find this course valuable. If you have any thoughts or questions, don’t hesitate to reach out: @jlzych

by Jeff Zych at March 23, 2018 09:50 PM

March 14, 2018

Ph.D. student

Artisanal production, productivity and automation, economic engines

I’m continuing to read Moretti’s The new geography of jobs (2012). Except for the occasional gushing over the revolutionary-ness of some new payments startup, a symptom no doubt of being so close to Silicon Valley, it continues to be an enlightening and measured read on economic change.

There are a number of useful arguments and ideas in the book, probably sourced more generally from economics, which I’ll outline here with my comments:

Local, artisanal production can never substitute for large-scale manufacturing. Moretti argues that while local artisanal production has cropped up in many places in the United States, it will never replace the work done by large-scale production. Why? Because by definition, local artisanal production is (a) geographically local, and therefore unable to scale beyond a certain region, and (b) defined in part by its uniqueness, differentiating it from mainstream products. In other words, if your local small-batch shop grows to the point where it competes with large-scale production, it is no longer local and small-batch.

Interestingly, this argument about production scaling echoes work on empirical heavy tail distributions in social and economic phenomena. A world where small-scale production constituted most of production would have an exponentially bounded distribution of firm productivity. The world doesn’t look that way, and so we have very very big companies, and many many small companies, and they coexist.

Higher labor productivity in a sector results in both a richer society and fewer jobs in that sector. Productivity is how much a person’s labor produces. The idea here is that when labor productivity increases, the firm that hires those laborers needs fewer people working to satisfy its demand. But those people will be paid more, because their labor is worth more to the firm.

I think Moretti is hand-waving a bit when he argues that a society only gets richer through increased labor productivity. I don’t follow it exactly.

But I do find it interesting that Moretti calls “increases in productivity” what many others would call “automation”. Several related phenomena are viewed critically in the popular discourse on job automation: more automation causes people to lose jobs; more automation causes some people to get richer (they are higher paid); this means there is a perhaps pernicious link between automation and inequality. One aspect of this is that automation is good for capitalists. But another aspect of this is that automation is good for lucky laborers whose productivity and earnings increase as a result of automation. It’s a more nuanced story than one that is only about job loss.

The economic engine of an economy is what brings in money, it need not be the largest sector of the economy. The idea here is that for a particular (local) economy, the economic engine of that economy will be what pulls in money from outside. Moretti argues that the economic engine must be a “trade sector”, meaning a sector that trades (sells) its goods beyond its borders. It is the workers in this trade-sector economic engine that then spend their income on the “non-trade” sector of local services, which includes schoolteachers, hairdressers, personal trainers, doctors, lawyers, etc. Moretti’s book is largely about how the innovation sector is the new economic engine of many American economies.

One thing that comes to mind reading this point is that not all economic engines are engaged in commercial trade. I’m thinking about Washington, DC, and the surrounding area; the economic engine there is obviously the federal government. Another strange kind of economic engine is the top-tier research university, like Carnegie Mellon or UC Berkeley. Top-tier research universities, unlike many other forms of educational institutions, are constantly selling their degrees to foreign students. This means that they can serve as an economic engine.

Overall, Moretti’s book is a useful guide to economic geography, one that clarifies the economic causes of a number of political tensions that are often discussed in a more heated and, to me, less useful way.

References

Moretti, Enrico. The new geography of jobs. Houghton Mifflin Harcourt, 2012.

by Sebastian Benthall at March 14, 2018 04:04 PM

March 10, 2018

Ph.D. student

the economic construction of knowledge

We’ve all heard about the social construction of knowledge.

Here’s the story: Knowledge isn’t just in the head. Knowledge is a social construct. What we call “knowledge” is what it is because of social institutions and human interactions that sustain, communicate, and define it. Therefore all claims to absolute and unsituated knowledge are suspect.

There are many different social constructivist theories. One of the best, in my opinion, is Bourdieu’s, because he has one of the best social theories. For Bourdieu, social fields get their structure in part through the distribution of various kinds of social capital. Economic capital (money!) is one kind of social capital. Symbolic capital (the fact of having published in a peer-reviewed journal) is a different form of capital. What makes the sciences special, for Bourdieu, is that they are built around a particular mechanism for awarding symbolic capital that makes it (science) get the truth (the real truth). Bourdieu thereby harmonizes social constructivism with scientific realism, which is a huge relief for anybody trying to maintain their sanity in these trying times.

This is all super. What I’m beginning to appreciate more as I age, develop, and in some sense I suppose ‘progress’, is that economic capital is truly the trump card of all the forms of social capital, and that this point is underrated in social constructivist theories in general. What I mean by this is that flows of economic capital are a condition for the existence of the social fields (institutions, professions, etc.) in which knowledge is constructed. This is not to say that everybody engaged in the creation of knowledge is thinking about monetization all the time–to make that leap would be to commit the ecological fallacy. But at the heart of almost every institution where knowledge is created, there is somebody fundraising or selling.

Why, then, don’t we talk more about the economic construction of knowledge? It is a straightforward idea. To understand an institution or social field, you “follow the money”, seeing where it comes from and where it goes, and that allows you to situate the practice in its economic context and thereby determine its economic meaning.

by Sebastian Benthall at March 10, 2018 03:33 PM

Ph.D. alumna

You Think You Want Media Literacy… Do You?

The below original text was the basis for Data & Society Founder and President danah boyd’s March 2018 SXSW Edu keynote,“What Hath We Wrought?” — Ed.

Growing up, I took certain truths to be self evident. Democracy is good. War is bad. And of course, all men are created equal.

My mother was a teacher who encouraged me to question everything. But I quickly learned that some questions were taboo. Is democracy inherently good? Is the military ethical? Does God exist?

I loved pushing people’s buttons with these philosophical questions, but they weren’t nearly as existentially destabilizing as the moments in my life in which my experiences didn’t line up with frames that were sacred cows in my community. Police were revered, so my boss didn’t believe me when I told him that cops were forcing me to give them free food, which is why there was food missing. Pastors were moral authorities and so our pastor’s infidelities were not to be discussed, at least not among us youth. Forgiveness is a beautiful thing, but hypocrisy is destabilizing. Nothing can radicalize someone more than feeling like you’re being lied to. Or when the world order you’ve adopted comes crumbling down.


The funny thing about education is that we ask our students to challenge their assumptions. And that process can be enlightening. I will never forget being a teenager and reading “A People’s History of the United States.” The idea that there could be multiple histories, multiple truths blew my mind. Realizing that history is written by the winners shook me to my core. This is the power of education. But the hole that opens up, the one that invites people to look for new explanations, can be filled in deeply problematic ways. When we ask students to challenge their sacred cows but don’t give them a new framework through which to make sense of the world, others are often there to do it for us.

For the last year, I’ve been struggling with media literacy. I have a deep level of respect for the primary goal. As Renee Hobbs has written, media literacy is the “active inquiry and critical thinking about the messages we receive and create.” The field talks about the development of competencies or skills to help people analyze, evaluate, and even create media. Media literacy is imagined to be empowering, enabling individuals to have agency and giving them the tools to help create a democratic society. But fundamentally, it is a form of critical thinking that asks people to doubt what they see. And that makes me nervous.

Most media literacy proponents tell me that media literacy doesn’t exist in schools. And it’s true that the ideal version that they’re aiming for definitely doesn’t. But I spent a decade in and out of all sorts of schools in the US, where I quickly learned that a perverted version of media literacy does already exist. Students are asked to distinguish between CNN and Fox. Or to identify bias in a news story. When tech is involved, it often comes in the form of “don’t trust Wikipedia; use Google.” We might collectively dismiss these practices as not-media-literacy, but these activities are often couched in those terms.

I’m painfully aware of this, in part because media literacy is regularly proposed as the “solution” to the so-called “fake news” problem. I hear this from funders and journalists, social media companies and elected officials. My colleagues Monica Bulger and Patrick Davison just released a report on media literacy in light of “fake news” given the gaps in current conversations. I don’t know what version of media literacy they’re imagining but I’m pretty certain it’s not the CNN vs Fox News version. Yet, when I drill in, they often argue for the need to combat propaganda, to get students to ask where the money is coming from, to ask who is writing the stories for what purposes, to know how to fact-check, etcetera. And when I push them further, I often hear decidedly liberal narratives. They talk about the Mercers or about InfoWars or about the Russians. They mock “alternative facts.” While I identify as a progressive, I am deeply concerned by how people understand these different conservative phenomena and what they see media literacy as solving.

I get that many progressive communities are panicked about conservative media, but we live in a polarized society and I worry about how people judge those they don’t understand or respect. It also seems to me that the narrow version of media literacy that I hear proposed as the “solution” is supposed to magically solve our political divide. It won’t. More importantly, as I’m watching social media and news media get weaponized, I’m deeply concerned that the well-intended interventions I hear people propose will backfire, because I’m fairly certain that the crass versions of critical thinking already have.

New Data & Society report on media literacy by Monica Bulger and Patrick Davison

My talk today is intended to interrogate some of the foundations upon which educating people about the media landscape depends. Rather than coming at this from the idealized perspective, I am trying to come at this from the perspective of where good intentions might go awry, especially in a moment in which narrow versions of media literacy and critical thinking are being proposed as the solution to major socio-cultural issues. I want to examine the instability of our current media ecosystem and then return to the question: what kind of media literacy should we be working towards? So let’s dig in.

Epistemological Warfare

In 2017, sociologist Francesca Tripodi was trying to understand how conservative communities made sense of the seemingly contradictory words coming out of the mouth of the US President. Along her path, she encountered people talking about making sense of The Word when referencing his speeches. She began accompanying people in her study to their bible study groups. Then it clicked. Trained on critically interrogating biblical texts, evangelical conservative communities were not taking Trump’s messages as literal text. They were interpreting their meanings using the same epistemological framework as they approached the bible. Metaphors and constructs matter more than the precision of words.

Why do we value precision in language? I sat down for breakfast with Gillian Tett, a Financial Times journalist and anthropologist. She told me that when she first moved to the States from the UK, she was confounded by our inability to talk about class. She was trying to make sense of what distinguished class in America. In her mind, it wasn’t race. Or education. It came down to what construction of language was respected and valued by whom. People became elite by mastering the language marked as elite. Academics, journalists, corporate executives, traditional politicians: they all master the art of communication. I did too. I will never forget being accused of speaking like an elite by my high school classmates when I returned home after a semester of college. More importantly, although it’s taboo in America to be explicitly condescending towards people on the basis of race or education, there’s no social cost among elites to mock someone for an inability to master language. For using terms like “shithole.”

Linguistic and communications skills are not universally valued. Those who do not define themselves through this skill loathe hearing the never-ending parade of rich and powerful people suggesting that they’re stupid, backwards, and otherwise lesser. Embracing being anti-PC has become a source of pride, a tactic of resistance. Anger boils over as people who reject “the establishment” are happy to watch the elites quiver over their institutions being dismantled. This is why this is a culture war. Everyone believes they are part of the resistance.

But what’s at the root of this culture war? Cory Doctorow got me thinking when he wrote the following:

We’re not living through a crisis about what is true, we’re living through a crisis about how we know whether something is true. We’re not disagreeing about facts, we’re disagreeing about epistemology. The “establishment” version of epistemology is, “We use evidence to arrive at the truth, vetted by independent verification (but trust us when we tell you that it’s all been independently verified by people who were properly skeptical and not the bosom buddies of the people they were supposed to be fact-checking).”

The “alternative facts” epistemological method goes like this: The ‘independent’ experts who were supposed to be verifying the ‘evidence-based’ truth were actually in bed with the people they were supposed to be fact-checking. In the end, it’s all a matter of faith, then: you either have faith that ‘their’ experts are being truthful, or you have faith that we are. Ask your gut, what version feels more truthful?

Let’s be honest — most of us educators are deeply committed to a way of knowing that is rooted in evidence, reason, and fact. But who gets to decide what constitutes a fact? In philosophy circles, social constructivists challenge basic tenets like fact, truth, reason, and evidence. Yet, it doesn’t take a doctorate of philosophy to challenge the dominant way of constructing knowledge. Heck, 75 years ago, evidence suggesting black people were biologically inferior was regularly used to justify discrimination. And this was called science!

In many Native communities, experience trumps Western science as the key to knowledge. These communities have a different way of understanding topics like weather or climate or medicine. Experience is also used in activist circles as a way of seeking truth and challenging the status quo. Experience-based epistemologies also rely on evidence, but not the kind of evidence that would be recognized or accepted by those in Western scientific communities.

Those whose worldview is rooted in religious faith, particularly Abrahamic religions, draw on different types of information to construct knowledge. Resolving scientific knowledge and faith-based knowledge has never been easy; this tension has countless political and social ramifications. As a result, American society has long danced around this yawning gulf and tried to find solutions that can appease everyone. But you can’t resolve fundamental epistemological differences through compromise.

No matter what worldview or way of knowing someone holds dear, they always believe that they are engaging in critical thinking when developing a sense of what is right and wrong, true and false, honest and deceptive. But much of what they conclude may be more rooted in their way of knowing than any specific source of information.

If we’re not careful, “media literacy” and “critical thinking” will simply be deployed as an assertion of authority over epistemology.

Right now, the conversation around fact-checking has already devolved to suggest that there’s only one truth. And we have to recognize that there are plenty of students who are taught that there’s only one legitimate way of knowing, one accepted worldview. This is particularly dicey at the collegiate level, where we professors have been taught nothing about how to teach across epistemologies.

Personally, it took me a long time to recognize the limits of my teachers. Like many Americans in less-than-ideal classrooms, I was taught that history was a set of facts to be memorized. When I questioned those facts, I was sent to the principal’s office for disruption. Frustrated and confused, I thought that I was being force-fed information for someone else’s agenda. Now I can recognize that that teacher was simply exhausted, underpaid, and waiting for retirement. But it took me a long time to realize that there was value in history and that history is a powerful tool.

Weaponizing Critical Thinking

The political scientist Deen Freelon was trying to make sense of the role of critical thinking in addressing “fake news.” He ended up looking back at a fascinating campaign by Russia Today (known as RT). Their motto for a while was “question more.” They produced a series of advertisements as teasers for their channel. These advertisements were promptly banned in the US and UK, resulting in RT putting up additional ads about how they were banned and getting tremendous mainstream media coverage about being banned. What was so controversial? Here’s an example:

“Just how reliable is the evidence that suggests human activity impacts on climate change? The answer isn’t always clear-cut. And it’s only possible to make a balanced judgement if you are better informed. By challenging the accepted view, we reveal a side of the news that you wouldn’t normally see. Because we believe that the more you question, the more you know.”

If you don’t start from a place where you’re confident that climate change is real, this sounds quite reasonable. Why wouldn’t you want more information? Why shouldn’t you be engaged in critical thinking? Isn’t this what you’re encouraged to do at school? So why is asking this so taboo? And lest you think that this is a moment to be condescending towards climate deniers, let me offer another one of their ads.

“Is terror only committed by terrorists? The answer isn’t always clear-cut. And it’s only possible to make a balanced judgement if you are better informed. By challenging the accepted view, we reveal a side of the news that you wouldn’t normally see. Because we believe that the more you question, the more you know.”

Many progressive activists ask whether or not the US government commits terrorism in other countries. The ads all came down because they were too political, but RT got what they wanted: an effective ad campaign. They didn’t come across as conservative or liberal, but rather as a media entity that was “censored” for asking questions. Furthermore, by covering the fact that they were banned, major news media legitimized their frame under the rubric of “free speech,” under the assumption that everyone should have the right to know and to decide for themselves.

We live in a world now where we equate free speech with the right to be amplified. Does everyone have the right to be amplified? Social media gave us that infrastructure under the false imagination that if we were all gathered in one place, we’d find common ground and eliminate conflict. We’ve seen this logic before. After World War II, the world thought that connecting the globe through financial interdependence would prevent World War III. It’s not clear that this logic will hold.

For better and worse, by connecting the world through social media and allowing anyone to be amplified, information can spread at record speed. There is no true curation or editorial control. The onus is on the public to interpret what they see. To self-investigate. Since we live in a neoliberal society that prioritizes individual agency, we double down on media literacy as the “solution” to misinformation. It’s up to each of us as individuals to decide for ourselves whether or not what we’re getting is true.

Figure 1

Yet, if you talk with someone who has posted clear, unquestionable misinformation, more often than not, they know it’s bullshit. Or they don’t care whether or not it’s true. Why do they post it then? Because they’re making a statement. The people who posted this meme (figure 1) didn’t bother to fact check this claim. They didn’t care. What they wanted to signal loud and clear is that they hated Hillary Clinton. And that message was indeed heard loud and clear. As a result, they are very offended if you tell them that they’ve been duped by Russians into spreading propaganda. They don’t believe you for one second.

Misinformation is contextual. Most people believe that those they know are susceptible to false information, but that they themselves are equipped to separate the wheat from the chaff. There’s widespread sentiment that we can fact-check and moderate our way out of this conundrum. This will fail. Don’t forget that for many people in this country, both education and the media are seen as the enemy — two institutions that are trying to have power over how people think. Two institutions that are trying to assert authority over epistemology.

Finding the Red Pill

Growing up on Usenet, Godwin’s Law was more than an adage to me. I spent countless nights lured into conversation by the idea that someone was wrong on the internet. And I long ago lost count of how many of those conversations ended up with someone invoking Hitler or the Holocaust. I might have even been to blame in some of them.

Fast forward 15 years to the point when Nathan Poe wrote a poignant comment on an online forum dedicated to Christianity: “Without a winking smiley or other blatant display of humor, it is utterly impossible to parody a Creationist in such a way that someone won’t mistake for the genuine article.” Poe’s Law, as it became known, signals that it’s hard to tell the difference between an extreme view and a parody of an extreme view on the internet.

In their book, “The Ambivalent Internet,” media studies scholars Whitney Phillips and Ryan Milner highlight how a segment of society has become so well-versed at digital communications — memes, GIFs, videos, etc. — that they can use these tools of expression to fundamentally destabilize others’ communication structures and worldviews. It’s hard to tell what’s real and what’s fiction, what’s cruel and what’s a joke. But that’s the point. That is how irony and ambiguity can be weaponized. And for some, the goal is simple: dismantle the very foundations of elite epistemological structures that are so deeply rooted in fact and evidence.

Many people, especially young people, turn to online communities to make sense of the world around them. They want to ask uncomfortable questions, interrogate assumptions, and poke holes at things they’ve heard. Welcome to youth. There are some questions that are unacceptable to ask in public and they’ve learned that. But in many online fora, no question or intellectual exploration is seen as unacceptable. To restrict the freedom of thought is to censor. And so all sorts of communities have popped up for people to explore questions of race and gender and other topics in the most extreme ways possible. And these communities have become slippery. Are those taking on such hateful views real? Or are they being ironic?

In the 1999 film The Matrix, Morpheus says to Neo: “You take the blue pill, the story ends. You wake up in your bed and believe whatever you want. You take the red pill, you stay in Wonderland, and I show you how deep the rabbit hole goes.” Most youth aren’t interested in having the wool pulled over their heads, even if blind faith might be a very calming way of living. Restricted in mobility and stressed to holy hell, they want to have access to what’s inaccessible, know what’s taboo, and say what’s politically incorrect. So who wouldn’t want to take the red pill?

Image via Warner Bros.

In some online communities, taking the red pill refers to the idea of waking up to how education and media are designed to deceive you into progressive propaganda. In these environments, visitors are asked to question more. They’re invited to rid themselves of their politically correct shackles. There’s an entire online university designed to undo accepted ideas about diversity, climate, and history. Some communities are even more extreme in their agenda. These are all meant to fill in the gaps for those who are open to questioning what they’ve been taught.

In 2012, it was hard to avoid the names Trayvon Martin and George Zimmerman, but that didn’t mean that most people understood the storyline. In South Carolina, a white teenager who wasn’t interested in the news felt like he needed to know what the fuss was all about. He decided to go to Wikipedia to understand more. He was left with the impression that Zimmerman was clearly in the right and disgusted that everyone was defending Martin. While reading up on this case, he ran across the term “black on white crime” on Wikipedia and decided to throw that term into Google, where he encountered a deeply racist website inviting him to wake up to a reality that he had never considered. He took that red pill and dove deep into a worldview whose theory of power positioned white people as victims. Over a matter of years, he began to embrace those views, to be radicalized towards extreme thinking. On June 17, 2015, he sat down for an hour with a group of African-American church-goers in Charleston, South Carolina before opening fire on them, killing 9 and injuring 1. His goal was simple: he wanted to start a race war.

It’s easy to say that this domestic terrorist was insane or irrational, but he began his exploration trying to critically interrogate the media coverage of a story he didn’t understand. That led him to online fora filled with people who have spent decades working to indoctrinate people into a deeply troubling, racist worldview. They draw on countless amounts of “evidence,” engage in deeply persuasive discursive practices, and have the mechanisms to challenge countless assumptions. The difference between what is deemed missionary work, education, and radicalization depends a lot on your worldview. And your understanding of power.

Who Do You Trust?

The majority of Americans do not trust the news media. There are many explanations for this — the loss of local news, financial incentives, the difficulty of distinguishing opinion from reporting, etc. But what does it mean to encourage people to be critical of the media’s narratives when they are already predisposed against the news media?

Perhaps you want to encourage people to think critically about how information is constructed, who is paying for it, and what is being left out. Yet, among those whose prior is to not trust a news media institution, among those who see CNN and The New York Times as “fake news,” they’re already there. They’re looking for flaws. It’s not hard to find them. After all, the news industry is made of people in institutions in a society. So when youth are encouraged to be critical of the news media, they come away thinking that the media is lying. Depending on someone’s prior, they may even take what they learn to be proof that the media is in on the conspiracy. That’s where things get very dicey.

Many of my digital media and learning colleagues encourage people to make media to help understand how information is produced. Realistically, many young people have learned these skills outside the classroom as they seek to represent themselves on Instagram, get their friends excited about a meme, or gain followers on YouTube. Many are quite skilled at using media, but to what end? Every day, I watch teenagers produce anti-Semitic and misogynistic content using the same tools that activists use to combat prejudice. It’s notable that many of those who are espousing extreme viewpoints are extraordinarily skilled at using media. Today’s neo-Nazis are a digital propaganda machine. Developing media making skills doesn’t guarantee that someone will use them for good. This is the hard part.

Most of my peers think that if more people are skilled and more people are asking hard questions, goodness will see the light. In talking about misunderstandings of the First Amendment, Nabiha Syed of Buzzfeed highlights that the frame of the “marketplace of ideas” sounds great, but is extremely naive. Doubling down on investing in individuals as a solution to a systemic abuse of power is very American. But the best ideas don’t always surface to the top. Nervously, many of us tracking manipulation of media are starting to think that adversarial messages are far more likely to surface than well-intended ones.

This is not to say that we shouldn’t try to educate people. Or that producing critical thinkers is inherently a bad thing. I don’t want a world full of sheeple. But I also don’t want to naively assume what media literacy could do in responding to a culture war that is already underway. I want us to grapple with reality, not just the ideals that we imagine we could maybe one day build.

It’s one thing to talk about interrogating assumptions when a person can keep emotional distance from the object of study. It’s an entirely different thing to talk about these issues when the very act of asking questions is what’s being weaponized. This isn’t historical propaganda distributed through mass media. Or an exercise in understanding state power. This is about making sense of an information landscape where the very tools that people use to make sense of the world around them have been strategically perverted by other people who believe themselves to be resisting the same powerful actors that we normally seek to critique.

Take a look at the graph above. Can you guess what search term this is? This is the search query for “crisis actors.” This concept emerged as a conspiracy theory after Sandy Hook. Online communities worked hard to get this to land with the major news media after each shooting. With Parkland, they finally succeeded. Every major news outlet is now talking about crisis actors, as though it’s a real thing, or something to be debunked. When teenage witnesses of the mass shooting in Parkland speak to journalists these days, they have to now say that they are not crisis actors. They must negate a conspiracy theory that was created to dismiss them. A conspiracy theory that undermines their message from the get-go. And because of this, many people have turned to Google and Bing to ask what a crisis actor is. They quickly get to the Snopes page. Snopes provides a clear explanation of why this is a conspiracy. But you are now asked to not think of an elephant.

You may just dismiss this as craziness, but getting this narrative into the media was designed to help radicalize more people. Some number of people will keep researching, trying to understand what the fuss is all about. They’ll find online fora discussing the images of a brunette woman and ask themselves if it might be the same person. They will try to understand the fight between David Hogg and Infowars or question why Infowars is being restricted by YouTube. They may think this is censorship. Seeds of doubt will start to form. And they’ll ask whether or not any of the articulate people they see on TV might actually be crisis actors. That’s the power of weaponized narratives.

One of the main goals for those who are trying to manipulate media is to pervert the public’s thinking. It’s called gaslighting. Do you trust what is real? One of the best ways to gaslight the public is to troll the media. By forcing the news media into negating frames, manipulators can rely on the fact that people who distrust the media often respond by self-investigating. This is the power of the boomerang effect. And it has a history. After all, the CDC realized that the more the news media negated the connection between autism and vaccination, the more the public believed there was something real there.

In 2016, I watched networks of online participants test this theory through an incident now known as Pizzagate. They worked hard to get the news media to negate the conspiracy theory, believing that this would prompt more people to try to research if there was something real there. They were effective. The news media covered the story to negate it. Lots of people decided to self-investigate. One guy even showed up with a gun.

Still from the trailer for “Gaslight”

The term “gaslighting” originates in the context of domestic violence. The term refers back to a 1944 movie called Gas Light where a woman is manipulated by her husband in a way that leaves her thinking she’s crazy. It’s a very effective technique of control. It makes someone submissive and disoriented, unable to respond to a relationship productively. While many anti-domestic violence activists argue that the first step is to understand that gaslighting exists, the “solution” is not to fight back against the person doing the gaslighting. Instead, it’s to get out. Furthermore, anti-domestic violence experts argue that recovery from gaslighting is a long and arduous process, requiring therapy. They recognize that once instilled, self-doubt is hard to overcome.

While we have many problems in our media landscape, the most dangerous is how it is being weaponized to gaslight people.

And unlike the domestic violence context, there is no “getting out” that is really possible in a media ecosystem. Sure, we can talk about going off the grid and opting out of social media and news media, but c’mon now.

The Cost of Triggering

In 2017, Netflix released a show called 13 Reasons Why. Before parents and educators had even heard of the darn show, millions of teenagers had watched it. For most viewers, it was a fascinating show. The storyline was enticing, the acting was phenomenal. But I’m on the board of Crisis Text Line, an amazing service where people around this country talk with trained counselors via text message when they’re in a crisis. Before the news media even began talking about the show, we started to see the impact. After all, the premise of the show is that a teen girl died by suicide and left behind 13 tapes justifying her decision by explaining how people had bullied her.

At Crisis Text Line, we do active rescues every night. This means that we send emergency personnel to the homes of someone who is in the middle of a suicide attempt in an effort to save their lives. Sometimes, we succeed. Sometimes, we don’t. It’s heartbreaking work. As word of 13 Reasons Why got out and people started watching the show, our numbers went through the roof. We were drowning in young people referencing the show, signaling how it had given them a framework for ending their lives. We panicked. All hands on deck. As we got things under control, I got angry. What the hell was Netflix thinking?

Researchers know the data on suicide and media. The more the media normalizes suicide, the more suicide is put into people’s heads as a possibility, the more people who are on the edge start to take it seriously and consider it for themselves. After early media effects research was published, journalists developed best practices to minimize their coverage of suicide. As Joan Donovan often discusses, this form of “strategic silence” was viable in earlier media landscapes; it’s a lot harder now. Today, journalists and media makers feel as though the fact that anyone could talk about suicide on the internet means that they should have a right to do so too.

We know that you can’t combat depression through rational discourse. Addressing depression is hard work. And I’m deeply concerned that we don’t have the foggiest clue how to approach the media landscape today. I’m confident that giving grounded people tools to think smarter can be effective. But I’m not convinced that we know how to educate people who do not share our epistemological frame. I’m not convinced that we know how to undo gaslighting. I’m not convinced that we understand how engaging people about the media intersects with those struggling with mental health issues. And I’m not convinced that we’ve even begun to think about the unintended consequences of our good — let alone naive — intentions.

In other words, I think that there are a lot of assumptions baked into how we approach educating people about sensitive issues and our current media crisis has made those painfully visible.

Oh, and by the way, the Netflix TV show ends by setting up Season 2 to start with a school shooting. WTF, Netflix?

Pulling Back Out

So what role do educators play in grappling with the contemporary media landscape? What kind of media literacy makes sense? To be honest, I don’t know. But it’s unfair to end a talk like this without offering some path forward so I’m going to make an educated guess.

I believe that we need to develop antibodies to help people not be deceived.

That’s really tricky because most people like to follow their gut more than their mind. No one wants to hear that they’re being tricked. Still, I think there might be some value in helping people understand their own psychology.

Consider the power of nightly news and talk radio personalities. If you bring Sean Hannity, Rachel Maddow, or any other host into your home every night, you start to appreciate how they think. You may not agree with them, but you build a cognitive model of their words such that they have a coherent logic to them. They become real to you, even if they don’t know who you are. This is what scholars call parasocial interaction. And the funny thing about human psychology is that we trust people who we invest our energies into understanding. That’s why bridging difference requires humanizing people across viewpoints.

Empathy is a powerful emotion, one that most educators want to encourage. But when you start to empathize with worldviews that are toxic, it’s very hard to stay grounded. It requires deep cognitive strength. Scholars who spend a lot of time trying to understand dangerous worldviews work hard to keep their emotional distance. One very basic tactic is to separate the different signals. Just read the text rather than consume the full multimedia presentation. Narrow the scope. Actively taking things out of context can be helpful for analysis precisely because it creates a cognitive disconnect. This is the opposite of how most people encourage everyday analysis of media, where the goal is to appreciate the context first. Of course, the trick here is wanting to keep that emotional distance. Most people aren’t looking for that.

I also believe that it’s important to help students truly appreciate epistemological differences. In other words, why do people from different worldviews interpret the same piece of content differently? Rather than thinking about the intention behind the production, let’s analyze the contradictions in the interpretation. This requires developing a strong sense of how others think and where the differences in perspective lie. From an educational point of view, this means building the capacity to truly hear and embrace someone else’s perspective and teaching people to understand another’s view while also holding their own view firm. It’s hard work, an extension of empathy into a practice that is common among ethnographers. It’s also a skill that is honed in many debate clubs. The goal is to understand the multiple ways of making sense of the world and use that to interpret media. Of course, appreciating the view of someone who is deeply toxic isn’t always psychologically stabilizing.

Still from “Selective Attention Test”

Another thing I recommend is to help students see how they fill in gaps when the information presented to them is sparse and how hard it is to overcome priors. Conversations about confirmation bias are important here because it’s important to understand what information we accept and what information we reject. Selective attention is another tool, most famously shown to students through the “gorilla experiment.” If you aren’t familiar with this experiment, it involves showing a basketball video, asking viewers to count the passes made by players in one color shirt, and then asking if they saw the gorilla. Many people do not. Inverting these cognitive science exercises, asking students to consider different fan fiction that fills in the gaps of a story with divergent explanations is another way to train someone to recognize how their brain fills in gaps.

What’s common about the different approaches I’m suggesting is that they are designed to be cognitive strengthening exercises, to help students recognize their own fault lines, not the fault lines of the media landscape around them. I can imagine that this too could be called media literacy, and if you want to bend your definition that way, I’ll accept it. But the key is to realize the humanity in ourselves and in others. We cannot and should not assert authority over epistemology, but we can encourage our students to be more aware of how interpretation is socially constructed. And to understand how that can be manipulated. Of course, just because you know you’re being manipulated doesn’t mean that you can resist it. And that’s where my proposal starts to get shaky.

Let’s be honest — our information landscape is going to get more and more complex. Educators have a critical role to play in helping individuals and societies navigate what we encounter. But the path forward isn’t about doubling down on what constitutes a fact or teaching people to assess sources. Rebuilding trust in institutions and information intermediaries is important, but we can’t assume the answer is teaching students to rely on those signals. The first wave of media literacy was responding to propaganda in a mass media context. We live in a world of networks now. We need to understand how those networks are intertwined and how information that spreads through dyadic — even if asymmetric — encounters is understood and experienced differently than that which is produced and disseminated through mass media.

Above all, we need to recognize that information can, is, and will be weaponized in new ways. Today’s propagandist messages are no longer simply created by Madison Avenue or Edward Bernays-style State campaigns. For the last 15 years, a cohort of young people has learned how to hack the attention economy in an effort to have power and status in this new information ecosystem. These aren’t just any youth. They are young people who are disenfranchised, who feel as though the information they’re getting isn’t fulfilling, who struggle to feel powerful. They are trying to make sense of an unstable world and trying to respond to it in a way that is personally fulfilling. Most youth are engaged in invigorating activities. Others are doing the same things youth have always done. But there are youth out there who feel alienated and disenfranchised, who distrust the system and want to see it all come down. Sometimes, this frustration leads to productive ends. Often it does not. But until we start understanding their response to our media society, we will not be able to produce responsible interventions. So I would argue that we need to start developing a networked response to this networked landscape. And it starts by understanding different ways of constructing knowledge.


Special thanks to Monica Bulger, Mimi Ito, Whitney Phillips, Cathy Davidson, Sam Hinds Garcia, Frank Shaw, and Alondra Nelson for feedback.


Update (March 16, 2018): I crafted some responses to the most common criticisms I’ve received to date about this work here. (Also, the original version of this blog post was published on Medium.)

by zephoria at March 10, 2018 01:30 AM

March 08, 2018

MIMS 2012

Why I Blog

The fable of the millipede and the songbird is a story about the difference between instinct and knowledge. It goes like this:

High above the forest floor, a millipede strolled along the branch of a tree, her thousand pairs of legs swinging in an easy gait. From the tree top, song birds looked down, fascinated by the synchronization of the millipede’s stride. “That’s an amazing talent,” chirped the songbirds. “You have more limbs than we can count. How do you do it?” And for the first time in her life the millipede thought about this. “Yes,” she wondered, “how do I do what I do?” As she turned to look back, her bristling legs suddenly ran into one another and tangled like vines of ivy. The songbirds laughed as the millipede, in a panic of confusion, twisted herself in a knot and fell to earth below.

On the forest floor, the millipede, realizing that only her pride was hurt, slowly, carefully, limb by limb, unraveled herself. With patience and hard work, she studied and flexed and tested her appendages, until she was able to stand and walk. What was once instinct became knowledge. She realized she didn’t have to move at her old, slow, rote pace. She could amble, strut, prance, even run and jump. Then, as never before, she listened to the symphony of the songbirds and let music touch her heart. Now in perfect command of thousands of talented legs, she gathered courage, and, with a style of her own, danced and danced a dazzling dance that astonished all the creatures of her world. [1]

The lesson here is that conscious reflection on an unconscious action will impair your ability to do that action. But after you introspect and really study how you do what you do, it will transform into knowledge and you will have greater command of that skill.

That, in a nutshell, is why I blog. The act of introspection — of turning abstract thoughts into concrete words — strengthens my knowledge of that subject and enables me to dance a dazzling dance.


[1] I got this version of the fable from the book Story: Substance, Structure, Style and the Principles of Screenwriting by Robert McKee, but can’t find the original version of it anywhere (it’s uncredited in his book). The closest I can find is The Centipede’s Dilemma, but that version lacks the second half of the fable.

by Jeff Zych at March 08, 2018 10:18 PM

March 06, 2018

Ph.D. student

Appealing economic determinism (Moretti)

I’ve started reading Enrico Moretti’s The New Geography of Jobs and am finding it very clear and persuasive (though I’m not far in).

Moretti is taking up the major theme of What The Hell Is Happening To The United States, which is being addressed by so many from different angles. But whereas many writers seem to have an agenda–e.g., Noble advocating for political reform regulating algorithms; Deneen arguing for a return to traditional community values in some sense; etc.–or to focus on particularly scandalous or dramatic aspects of changing political winds–such as Gilman’s work on plutocratic insurgency and collapsing racial liberalism–Moretti is doing economic geography, showing how long-term economic trends are shaping the distribution of prosperity within the U.S.

From the introduction, it looks like there are a few notable points.

The first is about what Moretti calls the Great Divergence, which has been going on since the 1980s. This is the decline of U.S. manufacturing as jobs moved from Detroit, Michigan to Shenzhen, Guangdong, paired with the rise of an innovation economy where the U.S. takes the lead in high-tech and creative work. The needs of the high-tech industry–high-skilled workers, who may often be educated immigrants–change the demographics of the innovation hubs and result in the political polarization we’re seeing on the national stage. This is an account of the economic base determining the cultural superstructure which is so fraught right now, and exactly what I was getting at in yesterday’s rant about the politics of business.

The second major point Moretti makes which is probably understated in more polemical accounts of the U.S. political economy is the multiplier effect of high-skilled jobs in innovation hubs. Moretti argues that every high-paid innovation job (like software engineer or scientist) results in four other jobs in the same city. These other jobs are in service sectors that are by their nature local and not able to be exported. The consequence is that the innovation economy does not, contrary to its greatest skeptics, only benefit the wealthy minority of innovators to the ruin of the working class. However, it does move the location of working class prosperity into the same urban centers where the innovating class is.

This gives one explanation for why the backlash against Obama-era economic policies was such a shock to the coastal elites. In the locations where the “winners” of the innovation economy were gathered, there was also growth in the service economy which by objective measures increased the prosperity of the working class in those cities. The problem was the neglected working class in those other locations, who felt left behind and struck back against the changes.

A consequence of this line of reasoning is that arguments about increasing political tribalism are really a red herring. Social tribes on the Internet are a consequence, not a cause, of divisions that come from material conditions of economy and geography.

Moretti even appears to have a constructive solution in mind. He argues that there are “three Americas”: the rich innovation hubs, the poor former manufacturing centers, and mid-sized cities that have not yet gone either way. His recipe for economic success in these middle cities is attracting high-skilled workers who are a kind of keystone species for prosperous economic ecosystems.

References

Deneen, Patrick J. Why Liberalism Failed. Yale University Press, 2018.

Gilman, Nils. “The twin insurgency.” American Interest 15 (2014): 3-11.

Gilman, Nils. “The Collapse of Racial Liberalism.” The American Interest (2018).

Moretti, Enrico. The new geography of jobs. Houghton Mifflin Harcourt, 2012.

Noble, Safiya Umoja. Algorithms of Oppression: How search engines reinforce racism. NYU Press, 2018.

by Sebastian Benthall at March 06, 2018 06:43 PM

MIMS 2014

I Googled Myself

As a huge enthusiast of A/B testing, I have been wanting to learn how to run A/B tests through Google Optimize for some time. However, it’s hard to do this without being familiar with all the different parts of the Google product eco-system. So I decided it was time to take the plunge and finally Google myself. This post will cover my adventures with several products in the Google product suite including: Google Analytics (GA), Google Tag Manager (GTM), Google Optimize (GO), and Google Data Studio (GDS).

Of course, in order to do A/B testing, you have to have A) something to test, and B) sufficient traffic to drive significant results. Early on I counted out trying to A/B test this blog—not because I don’t have sufficient traffic—I got tons of it, believe me . . . (said in my best Trump voice). The main reason I didn’t try to do it with my blog is that I don’t host it, WordPress does, so I can’t easily access or manipulate the source code to implement an A/B test. It’s much easier if I host the website myself (which I can do locally using MAMP).

But how do I send traffic to a website I’m hosting locally? By simulating it, of course. Using a nifty python library called Selenium, I can be as popular as I want! I can also simulate any kind of behavior I want, and that gives me maximum control. Since I can set the expected outcomes ahead of time, I can more easily troubleshoot/debug whenever the results don’t square with expectations.
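Here’s a minimal sketch of what that kind of simulation can look like (the local URL, the dwell times, and the idea of spinning up a fresh Chrome session per visit are illustrative assumptions, and it presumes chromedriver is installed):

```python
import random
import time

from selenium import webdriver

LANDING_URL = "http://localhost:8888/"  # placeholder for the locally hosted site


def simulate_visit():
    """Open a fresh browser, load the landing page, linger briefly, and leave."""
    driver = webdriver.Chrome()  # assumes chromedriver is on the PATH
    try:
        driver.get(LANDING_URL)
        time.sleep(random.uniform(1, 3))  # dwell so the pageview has time to register
    finally:
        driver.quit()


if __name__ == "__main__":
    for _ in range(100):  # each iteration behaves like a separate visitor
        simulate_visit()
```

Because each visit gets its own fresh browser session, the analytics snippet should see each one as a roughly distinct user.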

My Mini “Conversion Funnel”

When it came to designing my first A/B test, I wanted to keep things relatively simple while still mimicking the general flow of an e-commerce conversion funnel. I designed a basic website with two different landing page variants—one with a green button and one with a red button. I arbitrarily decided that users would be 80% likely to click on the button when it’s green and 95% likely to click on the button when it’s red (these conversion rates are unrealistically high, I know). Users who didn’t click on the button would bounce, while those who did would advance to the “Purchase Page”.

website_flow_diagram

To make things a little more complicated, I decided to have 20% of ‘green’ users bounce after reaching the purchase page. The main reason for this was to test out GA’s funnel visualizations to see if they would faithfully reproduce the graphic above (they did). After the purchase page, users would reach a final “Thank You” page with a button to claim their gift. There would be no further attrition at this point; all users who arrived on this page would click the “Claim Your Gift” button. This final action was the conversion (or ‘Goal’ in GA-speak) that I set as the objective for the A/B test.
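A sketch of how those attrition rates can drive a single simulated session is below; the element ids, the way the variant is detected, and the page structure are assumptions for illustration, not the exact markup of my test site:

```python
import random

from selenium.webdriver.common.by import By

# Attrition rates from the design described above.
LANDING_CLICK_PROB = {"green": 0.80, "red": 0.95}
PURCHASE_BOUNCE_PROB = {"green": 0.20, "red": 0.00}


def simulate_session(driver, url):
    """Walk one simulated user through the landing -> purchase -> thank-you funnel."""
    driver.get(url)

    # Assumed convention: the variant can be read off the button's CSS class.
    button = driver.find_element(By.ID, "cta-button")
    variant = "red" if "red" in button.get_attribute("class") else "green"

    if random.random() > LANDING_CLICK_PROB[variant]:
        return "bounced on landing page"
    button.click()  # advance to the purchase page

    if random.random() < PURCHASE_BOUNCE_PROB[variant]:
        return "bounced on purchase page"
    driver.find_element(By.ID, "purchase-button").click()  # advance to the thank-you page

    # No further attrition: everyone who gets this far claims the gift.
    driver.find_element(By.ID, "claim-gift-button").click()
    return "converted"
```

Pass in a driver like the one in the earlier snippet and the overall conversion rates come out to roughly 64% for green and 95% for red, as designed.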

Google Analytics

With GA, I jumped straight into the deep end, adding gtag.js snippets to all the pages of my site. Then I implemented a few custom events and dimensions via javascript. In retrospect, I would have done the courses offered by Google first (Google Analytics for Beginners & Advanced Google Analytics). These courses give you a really good lay of the land of what GA is capable of, and it’s really impressive. If you have a website, I don’t see how you can get away with not having it plugged into GA.

In terms of features, the real time event tracking is a fantastic resource for debugging GA implementations. However, the one feature I wasn’t expecting GA to have was the benchmarking feature. It allows you to compare the traffic on your site with websites in similar verticals. This is really great because even if you’re totally out of ideas on what to analyze (which you shouldn’t be given the rest of the features in GA), you can use the benchmarking feature as a starting point for figuring out the weak points in your site.

The other great thing about the two courses I mentioned is that they’re free, and at the end you can take the GA Individual Qualification exam to certify your knowledge about GA (which I did). If you’re gonna put in the time to learn the platform, it’s nice to have a little endorsement at the end.

Google Tag Manager

After implementing everything in gtag.js, I did it all again using GTM. I can definitely see the appeal of GTM as a way to deploy GA; it abstracts away all of that messy javascript and replaces it with a clean user interface and a handy debug tool. The one drawback seems to be that GTM doesn’t send events to GA quite as reliably as gtag.js. Specifically, in my GA reports for the ‘red button’ variant of my A/B test, I saw more conversions for the “Claim Your Gift” button than conversions for the initial click to get off the landing page. Given the attrition rates I defined, that’s impossible. I tried to configure the tag to wait until the event was sent to GA before the next page was loaded, but there still seemed to be some data meant for GA that got lost in the mix.

Google Optimize

Before trying out GO, I implemented my little A/B test through Google’s legacy system, Content Experiments. I can definitely see why GO is the way of the future. There’s a nifty tool that lets you edit visual DOM elements right in the browser while you’re defining your variants. In Content Experiments, you have to either provide two separate A or B pages or implement the expected changes on your end. It’s a nice thing to not have to worry about, especially if you’re not a pro front-end developer.

Also, it’s clear that GO has more powerful decision features. For one thing, it has Bayesian decision logic, which is more comprehensible for business stakeholders and is gaining steam in online A/B testing. It also has the ability to do multivariate testing, which is a great addition, though I didn’t use that functionality for this test.
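Google doesn’t document the exact model behind GO’s Bayesian reports, but the general flavor can be sketched with a standard Beta-Binomial comparison; the visitor and conversion counts below are made up to roughly match the designed rates of this test:

```python
import numpy as np


def prob_b_beats_a(conv_a, n_a, conv_b, n_b, samples=100_000, seed=0):
    """Monte Carlo estimate of P(rate_B > rate_A) under uniform Beta(1, 1) priors."""
    rng = np.random.default_rng(seed)
    a = rng.beta(1 + conv_a, 1 + n_a - conv_a, samples)  # posterior for variant A
    b = rng.beta(1 + conv_b, 1 + n_b - conv_b, samples)  # posterior for variant B
    return float((b > a).mean())


# Green vs. red with made-up counts near the designed 64% and 95% conversion rates:
print(prob_b_beats_a(conv_a=320, n_a=500, conv_b=475, n_b=500))  # close to 1.0
```

A statement like “there’s a 99% probability that red beats green” is a lot easier for stakeholders to act on than a p-value.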

The one thing that was a bit irritating with GO was setting it up to run on localhost. It took a few hours of yak shaving to get the different variants to actually show up on my computer. It boiled down to 1) editing my etc/hosts file with an extra line in accordance with this post on the Google Advertiser Community forum and 2) making sure the Selenium driver navigated to localhost.domain instead of just localhost.

Google Data Studio

Nothing is worth doing unless you can make a dashboard at the end of it, right? While GA has some amazingly powerful report-generating capabilities, it can feel somewhat rigid in terms of customizability. GDS is a relatively new program that gives you way more options to visualize the data sitting in GA. But while GDS has an advantage over GA here, it does have some frustrating limitations which I hope they resolve soon. In particular, I hope they’ll let you show percent differences between two scorecards. As someone who’s done a lot of A/B test reports, I know that the thing stakeholders are most interested in seeing is the % difference, or lift, caused by one variant versus another.

Here is a screenshot of the ultimate dashboard (or a link if you want to see it live):

dashboard_ss

The dashboard was also a good way to do a quick check to make sure everything in the test was working as expected. For example, the expected conversion rate for the “Claim Your Gift” button was 64% versus 95%, and we see more or less those numbers in the first bar chart on the left. The conditional conversion rate (the conversion rate of users conditioned on clicking off the landing page) is also close to what was expected: 80% vs. 100%.
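For reference, those expected numbers fall straight out of the design probabilities:

```python
# Overall conversion = P(click landing button) * P(not bouncing on the purchase page)
green_overall = 0.80 * (1 - 0.20)  # 0.64
red_overall = 0.95 * (1 - 0.00)    # 0.95

# Conditional conversion = overall rate among users who clicked off the landing page
green_conditional = green_overall / 0.80  # 0.80
red_conditional = red_overall / 0.95      # 1.00
```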

Notes about Selenium

So I really like Selenium, and after this project I have a little personal library to do automated tests in the future that I can apply to any website, not just this little dinky one I ran locally on my machine.

When you’re writing code dealing with Selenium, one thing I’ve realized is that it’s important to write highly fault-tolerant code. Anything that depends on the internet has many ways to go wrong—the wifi in the cafe you’re in might go down, resources might randomly fail to load, and so on. But if you’ve written fault-tolerant code, hitting one of these snags won’t cause your program to stop running.

Along with fault-tolerant code, it’s a good idea to write good logs. When stuff does go wrong, this helps you figure out what it was. In this particular case, logs also served as a good source of ground truth to compare against the numbers I was seeing in GA.
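A sketch of the kind of fault tolerance and logging I have in mind (the retry counts, waits, and log format are arbitrary illustrative choices, not necessarily what my library does):

```python
import logging
import time

from selenium.common.exceptions import WebDriverException

logging.basicConfig(
    filename="simulation.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)


def get_with_retries(driver, url, max_attempts=3, wait_seconds=5):
    """Load a page, retrying on transient WebDriver errors instead of crashing."""
    for attempt in range(1, max_attempts + 1):
        try:
            driver.get(url)
            logging.info("loaded %s on attempt %d", url, attempt)
            return True
        except WebDriverException as exc:
            logging.warning("attempt %d for %s failed: %s", attempt, url, exc)
            time.sleep(wait_seconds)
    logging.error("giving up on %s after %d attempts", url, max_attempts)
    return False
```

The log file then doubles as the ground truth mentioned above: every simulated page load is recorded, so discrepancies with the numbers in GA stand out quickly.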

The End! (for now…)

I think I’ll be back soon with another post about AdWords and Advanced E-Commerce in GA…

 

by dgreis at March 06, 2018 03:37 AM

March 05, 2018

Ph.D. student

politics of business

This post is an attempt to articulate something that’s on the tip of my tongue, so bear with me.

Fraser has made the point that the politics of recognition and the politics of distribution are not the same. In her view, the conflict in the U.S. over recognition (i.e., of women, racial minorities, LGBTQ people, etc. on the progressive side, and of the straight white male ‘majority’ on the reactionary side) has overshadowed the politics of distribution, which has been at a steady neoliberal status quo for some time.

First, it’s worth pointing out that in between these two political contests is a politics of representation, which may be more to the point. The claim here is that if a particular group is represented within a powerful organization–say, the government, or within a company with a lot of power such as a major financial institution or tech company–then that organization will use its power in a way that is responsive to the needs of the represented group.

Politics of representation are the link between recognition and distribution: the idea is that if “we” recognize a certain group, then through democratic or social processes members of that group will be lifted into positions of representative power, which then will lead to (re)distribution towards that group in the longer run.

I believe this is the implicit theory of social change at the heart of a lot of democratish movements today. It’s an interesting theory in part because it doesn’t seem to have any room for “good governance”, or broadly beneficial governance, or technocracy. There’s nothing deliberative about this form of democracy; it’s a tribal war-by-other-means. It is also not clear that this theory of social change based on demographic representation is any more effective at changing distributional outcomes than a pure politics of recognition, which we have reason to believe is ineffectual.

Who do we expect to have power over distributional outcomes in our (and probably other) democracies? Realistically, it’s corporations. Businesses comprise most of the economic activity; businesses have the profits needed to reinvest in lobbying power for the sake of economic capture. So maybe if what we’re interested in is politics of distribution, we should stop trying to parse out the politics of recognition, with its deep dark rabbit hole of identity politics and the historical injustice and Jungian archetypal conflicts over the implications of the long arc of sexual maturity. These conversations do not seem to be getting anyone anywhere! It is, perhaps, fake news: not because the contents are fake, but because the idea that these issues are new is fake. They are perhaps just a lot of old issues stirred to conflagration by the feedback loops between social and traditional media.

If we are interested in the politics of distribution, let’s talk about something else, something that we all know must be more relevant, when it comes down to it, than the politics of recognition. I’m talking about the politics of business.

We have a rather complex economy with many competing business interests. Let’s assume that one of the things these businesses compete over is regulatory capture–their ability to influence economic policy in their favor.

When academics talk about neoliberal economic policy, they are often talking about those policies that benefit the financial sector and big businesses. But these big businesses are not always in agreement.

Take, for example, the steel tariff proposed by the Trump administration. There is no blunter example of a policy that benefits some business interests–U.S. steelmakers–and not others–U.S. manufacturers of steel-based products.

It’s important from the perspective of electoral politics to recognize that the U.S. steelmakers are a particular set of people who live in particular voting districts with certain demographics. That’s because, probably, if I am a U.S. steelworker, I will vote in the interest of my industry. Just as if I am a U.S. based urban information worker at an Internet company, I will vote in the interest of my company, which in my case would mean supporting net neutrality. If I worked for AT&T, I would vote against net neutrality, which today means I would vote Republican.

It’s an interesting fact that AT&T employs a lot more people than Google and (I believe this is the case, though I don’t know where to look up the data) that they are much more geographically distributed than Google because, you know, wires and towers and such. Which means that AT&T employees will be drawn from more rural, less diverse areas, giving them an additional allegiance to Republican identity politics.

You must see what I’m getting at. Assume that the main driver of U.S. politics is not popular will (which nobody really believes, right?) and is in fact corporate interests (which basically everybody admits, right?). In that case the politics of recognition will not be determining anything; rather it will be a symptom, an epiphenomenon, of an underlying politics of business. Immigration of high-talent foreigners then becomes a proxy issue for the economic battle between coastal tech companies and, say, old energy companies which have a much less geographically mobile labor base. Nationalism, or multinationalism, becomes a function of trade relations rather than a driving economic force in its own right. (Hence, Russia remains an enemy of the U.S. largely because Putin paid off all its debt to the U.S. and doesn’t owe it any money, unlike many of its other allies around the world.)

I would very much like to devote myself better to the understanding of politics of business because, as I’ve indicated, I think the politics of recognition have become a huge distraction.

by Sebastian Benthall at March 05, 2018 10:00 PM

March 02, 2018

Ph.D. student

Moral individualism and race (Barabas, Gilman, Deenan)

One of my favorite articles presented at the recent FAT* 2018 conference was Barabas et al. on “Interventions over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment” (link). To me, this was the correct response to recent academic debate about the use of actuarial risk-assessment in determining criminal bail and parole rates. I had a position on this before the conference which I drafted up here; my main frustration with the debate had been that it had gone unquestioned why bail and parole rates are based on actuarial prediction of recidivism in the first place, given that rearrest rates are so contingent on social structural factors such as whether or not police are racist.

Barabas et al. point out that there’s an implicit theory of crime behind the use of actuarial risk assessments. In that theory of crime, there are individual “bad people” and “good people”. “Bad people” are more likely to commit crimes because of their individual nature, and the goal of the criminal policing system is to keep bad people from committing crimes by putting them in prison. This is the sort of theory that, even if it is a little bit true, is also deeply wrong, and so we should probably reassess the whole criminal justice system as a result. Even leaving aside the important issue of whether “recidivism” is interpreted as reoffense or rearrest rate, it is socially quite dangerous to see probability of offense as due to the specific individual moral character of a person. One reason why this is dangerous is that if the conditions for offense are correlated with the conditions for some sort of unjust desperation, then we risk falsely justifying an injustice with the idea that the bad things are only happening to bad people.

I’d like to juxtapose this position with a couple others that may on the surface appear to be in tension with it.

Nils Gilman’s new piece on “The Collapse of Racial Liberalism” is a helpful account of how we got where we are as an American polity. True to the title, Gilman’s point is that there was a centrist consensus on ‘racial liberalism’ that reached its apotheosis in the election of Obama and then collapsed under its own contradictions, getting us where we are today.

By racial liberalism, I mean the basic consensus that existed across the mainstream of both political parties since the 1970s, to the effect that, first, bigotry of any overt sort would not be tolerated, but second, that what was intolerable was only overt bigotry—in other words, white people’s definition of racism. Institutional or “structural” racism—that is, race-based exclusions that result from deep social habits such as where people live, who they know socially, what private organizations they belong to, and so on—were not to be addressed. The core ethic of the racial liberal consensus was colorblind individualism.

Bill Clinton was good at toeing the line of racial liberalism, and Obama, as a black meritocratic elected president, was its culmination. But:

“Obama’s election marked at once the high point and the end of a particular historical cycle: a moment when the realization of a particular ideal reveals the limits of that ideal.”

The limit of the ideal is, of course, that all the things not addressed–“race-based exclusions that result from deep social habits such as where people live, who they know socially, what private organizations they belong to, and so on”–matter, and result in, for example, innocent black guys getting shot disproportionately by police even when there is a black meritocrat sitting as president.

An interesting juxtaposition here is that in both cases discussed so far, we have a case of a system that is reaching its obsolescence due to the contradictions of individualism. In the case of actuarial policing (as it is done today; I think a properly sociological version of actuarial policing could be great), there’s the problem of considering criminals as individuals whose crimes are symptoms of their individual moral character. The solution to crime is to ostracize and contain the criminals by, e.g., putting them in prison. In the case of racial liberalism, there’s the problem of considering bigotry a symptom of individual moral character. The solution to the bigotry is to ostracize and contain the bigots by teaching them that it is socially unacceptable to express bigotry and keeping the worst bigots out of respectable organizations.

Could it be that our broken theories of both crime and bigotry both have the same problem, which is the commitment to moral individualism, by which I mean the theory that it’s individual moral character that is the cause of and solution to these problems? If a case of individual crime and individual bigotry is the result of, instead of an individual moral failing, a collective action problem, what then?

I still haven’t looked carefully into Deneen’s argument (see notes here), but I’m intrigued that his point may be that the crisis of liberalism is, at its root, a crisis of individualism. Indeed, Kantian views of individual autonomy are really nice but they have not stood the test of time; I’d say the combined works of Habermas, Foucault, and Bourdieu have each, from very different directions, developed Kantian ideas into a more sociological frame. And that’s just on the continental grand theory side of the equation. I have not followed up on what Anglophone liberal theory has been doing, but I suspect that it has been going the same way.

I am wary, as I always am, of giving too much credit to theory. I know, as somebody who has read altogether too much of it, what little use it actually is. However, the notion of political and social consensus is one that tangibly affects my life these days. For this reason, it’s a topic of great personal interest.

One last point, that’s intended as constructive. It’s been argued that the appeal of individualism is due in part to the methodological individualism of rational choice theory and neoclassical economic theory. Because we can’t model economic interactions on anything but an individualistic level, we can’t design mechanisms or institutions that treat individual activity as a function of social form. This is another good reason to take seriously computational modeling of social forms.

References

Barabas, Chelsea, et al. “Interventions over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment.” arXiv preprint arXiv:1712.08238 (2017).

Deneen, Patrick J. Why Liberalism Failed. Yale University Press, 2018.

Gilman, Nils. “The Collapse of Racial Liberalism.” The American Interest (2018).

by Sebastian Benthall at March 02, 2018 09:09 PM

February 28, 2018

Ph.D. student

interesting article about business in China

I don’t know much about China, really, so I’m always fascinated to learn more.

This FT article, “Anbang arrests demonstrates hostility to business”, by Jamil Anderlini, provides some wonderful historical context to a story about the arrest of an insurance oligarch.

In ancient times, merchants were at the very bottom of the four official social classes, below warrior-scholars, farmers and artisans. Although some became very rich they were considered parasites in Chinese society.

Ever since the Han emperors established the state salt monopoly in the second century BCE (remnants of which remain to this day), large-scale business enterprises have been controlled by the state or completely reliant on the favour of the emperor and the bureaucrat class.

In the 20th century, the Communist emperor Mao Zedong effectively managed to stamp out all private enterprise for a while.

Until the party finally allowed “capitalists” to join its ranks in 2002, many of the business activities carried out by the resurgent merchant class were technically illegal.

China’s rich lists are populated by entrepreneurs operating in just a handful of industries — particularly real estate and the internet.

Tycoons like Mr Wu who emerge in state-dominated sectors are still exceedingly rare. They are almost always closely linked to one of the old revolutionary families exercising enormous power from the shadows.

Everything about this is interesting.

First, in Western scholarship we rarely give China credit for its history of bureaucracy in the absence of capitalism. In the well-known Weberian account, bureaucracy is an institutional invention that provides regular rule of law so that capitalism can thrive. China’s history, by contrast, is statist “from ancient times”, with effective bureaucracy from the beginning. A managerialist history, perhaps.

Which makes the second point so unusual: why, given this long history of bureaucratic rule, are Internet companies operating in a comparatively unregulated way? This seems like a massive concession of power, not unlike how (arguably) the government of the United States conceded a lot of power to Silicon Valley under the Obama administration.

The article dramatically foreshadows a potential power struggle between Xi Jinping’s consolidated state and the tech giant oligarchs:

Now that Chinese President Xi Jinping has abolished his own term limits, setting the stage for him to rule for life if he wants to, the system of state patronage and the punishment of independent oligarchs is likely to expand. Any company or billionaire who offends the emperor or his minions will be swiftly dealt with in the same way as Mr Wu.

There is one group of Chinese companies with charismatic — some would say arrogant — founders that enjoy immense economic power in China today. They would seem to be prime candidates if the assault on private enterprise is stepped up.

Internet giants Alibaba, Tencent and Baidu are not only hugely profitable, they control the data that is the lifeblood of the modern economy. That is why Alibaba founder Jack Ma has repeatedly said, including to the FT, that he would gladly hand his company over to the state if Beijing ever asked him to. Investors in BABA can only hope it never comes to that.

That is quite the expression of feudal fealty from Jack Ma. Truly, a totally different business culture from that of the United States.

by Sebastian Benthall at February 28, 2018 03:18 PM

February 27, 2018

Ph.D. student

Notes on Deneen, “Why Liberalism Failed”, Foreword

I’ve begun reading the recently published book, Why Liberalism Failed (2018), by Patrick Deneen. It appears to be making some waves in the political theory commentary. The author claims that it was 10 years in the making but was finished three weeks before the 2016 presidential election, which suggests that the argument within it is prescient.

I’m not far in yet.

There is an intriguing foreword from James Davison Hunter and John M. Owen IV, the editors. Their framing of the book is surprisingly continental:

  • They declare that liberalism has arrived at its “legitimacy crisis”, a Habermasian term.
  • They claim that the core contention of the book is a critique of the contradictions within Immanuel Kant’s view of individual autonomy.
  • They compare Deneen with other “radical” critics of liberalism, of which they name: Marx, the Frankfurt School, Foucault, Nietzsche, Schmitt, and the Catholic Church.

In search of a litmus-test-like clue as to where on the political spectrum the book falls, I’ve found this passage in the Foreword:

Deneen’s book is disruptive not only for the way it links social maladies to liberalism’s first principles, but also because it is difficult to categorize along our conventional left-right spectrum. Much of what he writes will cheer social democrats and anger free-market advocates; much else will hearten traditionalists and alienate social progressives.

Well, well, well. If we are to fit Deneen’s book into the conceptual 2-by-2 provided in Fraser’s recent work, it appears that Deneen’s political theory is a form of reactionary populism, rejecting progressive neoliberalism. In other words, the Foreword suggests that Deneen’s book is a high-brow political theory contribution that weighs in favor of the kind of politics that has heretofore been articulated only by intellectual pariahs.

by Sebastian Benthall at February 27, 2018 03:54 PM

February 26, 2018

MIMS 2012

On Mastery

Mastery

I completely agree with this view on mastery from American fashion designer, writer, television personality, entrepreneur, and occasional cabaret star Isaac Mizrahi:

I’m a person who’s interested in doing a bunch of things. It’s just what I like. I like it better than doing one thing over and over. This idea of mastery—of being the very best at just one thing—is not in my future. I don’t really care that much. I care about doing things that are interesting to me and that I don’t lose interest in.

Mastery – “being the very best at just one thing” – doesn’t hold much appeal for me. I’m a very curious person. I like jumping between various creative endeavors that “are interesting to me and that I don’t lose interest in.” Guitar, web design, coding, writing, hand lettering – these are just some of the creative paths I’ve gone down so far, and I know that list will continue to grow.

I’ve found that my understanding of one discipline fosters a deeper understanding of other disciplines. New skills don’t take away from each other – they only add.

So no, mastery isn’t for me. The more creative paths I go down, the better. Keep ‘em coming.


Update 4/2/18

Quartz recently profiled Charlie Munger, Warren Buffett’s billionaire deputy, who credits his investing success not to mastering just one field — investment theory — but instead to “mastering the multiple models which underlie reality.” In other words, Munger is an expert-generalist. The term was coined by Orit Gadiesh, chairman of Bain & Co, who describes an expert-generalist as:

Someone who has the ability and curiosity to master and collect expertise in many different disciplines, industries, skills, capabilities, countries, and topics, etc. He or she can then, without necessarily even realizing it, but often by design:

  1. Draw on that palette of diverse knowledge to recognize patterns and connect the dots across multiple areas.
  2. Drill deep to focus and perfect the thinking.

The article goes on to describe the strength of this strategy:

Being an expert-generalist allows individuals to quickly adapt to change. Research shows that they:

  • See the world more accurately and make better predictions of the future because they are not as susceptible to the biases and assumptions prevailing in any given field or community.
  • Have more breakthrough ideas, because they pull insights that already work in one area into ones where they haven’t been tried yet.
  • Build deeper connections with people who are different than them because of understanding of their perspectives.
  • Build more open networks, which allows them to serve as a connector between people in different groups. According to network science research, having an open network is the #1 predictor of career success.

All of this sounds exactly right. I had never thought about the benefits of being an expert-generalist, nor did I deliberately set out to be one (my natural curiosity got me here), but reading these descriptions gave form to something that previously felt intuitively true.

Read the full article here: https://qz.com/1179027/mental-models-how-warren-buffetts-billionaire-deputy-became-an-expert-generalist/

by Jeff Zych at February 26, 2018 01:29 AM

February 20, 2018

MIMS 2012

Stay Focused on the User by Switching Between Maker Mode and Listener Mode

When writing music, ambient music composer Brian Eno makes music that’s pleasurable to listen to by switching between “maker” mode and “listener” mode. He says:

I just start something simple [in the studio]—like a couple of tones that overlay each other—and then I come back in here and do emails or write or whatever I have to do. So as I’m listening, I’ll think, It would be nice if I had more harmonics in there. So I take a few minutes to go and fix that up, and I leave it playing. Sometimes that’s all that happens, and I do my emails and then go home. But other times, it starts to sound like a piece of music. So then I start working on it.

I always try to keep this balance with ambient pieces between making them and listening to them. If you’re only in maker mode all the time, you put too much in. […] As a maker, you tend to do too much, because you’re there with all the tools and you keep putting things in. As a listener, you’re happy with quite a lot less.

In other words, Eno makes great music by experiencing it the way his listeners do: by listening to it.

This is also a great lesson for product development teams: to make a great product, regularly use your product.

By switching between “maker” and “listener” modes, you put yourself in your user’s shoes and see your work through their eyes, which helps prevent you from “put[ting] too much in.”

This isn’t a replacement for user testing, of course. We are not our users. But in my experience, it’s all too common for product development teams to rarely, if ever, use what they’re building. No shade – I’ve been there. We get caught on the treadmill of building new features, always moving on to the next without stopping to catch our breath and use what we’ve built. This is how products devolve into an incomprehensible pile of features.

Eno’s process is an important reminder to keep your focus on the user by regularly switching between “maker” mode and “listener” mode.

by Jeff Zych at February 20, 2018 08:20 PM

February 13, 2018

Ph.D. student

that time they buried Talcott Parsons

Continuing with what seems like a never-ending side project to get a handle on computational social science methods, I’m doing a literature review on ‘big data’ sociological methods papers. Recent reading has led to two striking revelations.

The first is that Tufekci’s 2014 critique of Big Data methodologies is the best thing on the subject I’ve ever read. What it does is very clearly and precisely lay out the methodological pitfalls of sourcing the data from social media platforms: use of a platform as a model organism; selecting on a dependent variable; not taking into account exogenous, ecological, or field factors; and so on. I suspect this is old news to people who have more rigorously surveyed the literature on this in the past. But I’ve been exposed to and distracted by literature that seems aimed mainly to discredit social scientists who want to work with this data, rather than helpfully engaging them on the promises and limitations of their methods.

The second striking revelation is that for the second time in my literature survey, I’ve found a reference to that time when the field of cultural sociology decided they’d had enough of Talcott Parsons. From (Bail, 2014):

The capacity to capture all – or nearly all – relevant text on a given topic opens exciting new lines of meso- and macro-level inquiry into cultural environments (Bail forthcoming). Ecological or functionalist interpretations of culture have been unpopular with cultural sociologists for some time – most likely because the subfield defined itself as an alternative to the general theory proposed by Talcott Parsons (Alexander 2006). Yet many cultural sociologists also draw inspiration from Mary Douglas (e.g., Alexander 2006; Lamont 1992; Zelizer 1985), who – like Swidler – insists upon the need for our subfield to engage broader levels of analysis. “For sociology to accept that no functionalist arguments work,” writes Douglas (1986, p. 43), “is like cutting off one’s nose to spite one’s face.” To be fair, cultural sociologists have recently made several programmatic statements about the need to engage functional or ecological theories of culture. Abbott (1995), for example, explains the formation of boundaries between professional fields as the result of an evolutionary process. Similarly, Lieberson (2000) presents an ecological model of fashion trends in child-naming practices. In a review essay, Kaufman (2004) describes such ecological approaches to cultural sociology as one of the three most promising directions for the future of the subfield.

I’m not sure what’s going on with all these references to Talcott Parsons. I gather that at one time he was a giant in sociology, but that then a generation of sociologists tried to bury him. Then the next generation of sociologists reinvented structural functionalism with new language–“ecological approaches”, “field theory”?

One wonders what Talcott Parsons did or didn’t do to inspire such a rebellion.

References

Bail, Christopher A. “The cultural environment: measuring culture with big data.” Theory and Society 43.3-4 (2014): 465-482.

Tufekci, Zeynep. “Big Questions for Social Media Big Data: Representativeness, Validity and Other Methodological Pitfalls.” ICWSM 14 (2014): 505-514.

by Sebastian Benthall at February 13, 2018 06:15 PM

Ph.D. alumna

The Reality of Twitter Puffery. Or Why Does Everyone Now Hate Bots?

(This was originally posted on NewCo Shift.)

A friend of mine worked for an online dating company whose audience was predominantly hetero 30-somethings. At some point, they realized that a large number of the “female” accounts were actually bait for porn sites and 1–900 numbers. I don’t remember if users complained or if they found it themselves, but they concluded that they needed to get rid of these fake profiles. So they did.

And then their numbers started dropping. And dropping. And dropping.

Trying to understand why, researchers were sent in. What they learned was that hot men were attracted to the site because there were women that they felt were out of their league. Most of these hot men didn’t really aim for these ultra-hot women, because they felt like they would be inaccessible, but they were happy to talk with women who they saw as being one rung down (as in actual hot women). These hot women, meanwhile, were excited to have these hot men (who they saw as equals) on the site. These women also felt that, since there were women hotter than them, this was a site for them. When they removed the fakes, the hot men felt the site was no longer for them. They disappeared. And then so did the hot women. Etc. The weirdest part? They reintroduced decoy profiles (not as redirects to porn but as fake women who just didn’t respond) and slowly folks came back.

Why am I telling you this story? Fake accounts and bots on social media are not new. Yet, in the last couple of weeks, there’s been newfound hysteria around Twitter bots and fake accounts. I find it deeply problematic that folks are saying that having fake followers is inauthentic. This is like saying that makeup is inauthentic. What is really going on here?

From Fakesters to Influencers

From the earliest days of Friendster and MySpace, people liked to show how cool they were by how many friends they had. As Alice Marwick eloquently documented, self-branding and performing status were the name of the game for many in the early days of social media. This hasn’t changed. People made entire careers out of appearing to be influential, not just actually being influential. Of course a market emerged around this so that people could buy and sell followers, friends, likes, comments, etc. Indeed, it’s standard practice, especially in the wink-nudge world of Instagram, where monetized content is the game and so-called organic “macroinfluencers” — who can easily double their follower size through bots — are more than happily followed by bots, paid or not.

Some sites have tried to get rid of fake accounts. Indeed, Friendster played whack-a-mole with them, killing off “Fakesters” and any account that didn’t follow their strict requirements; this prompted a mass exodus. Facebook’s real-name policy also signaled that such shenanigans would not be allowed on their site, although shhh…. lots of folks figured out how to have multiple accounts and otherwise circumvent the policy.

And let’s be honest — fake accounts are all over most online dating sites. Ashley Madison, anyone?

Bots, Bots, Bots

Bots have been an intrinsic part of Twitter since the early days. Following the Pope’s daily text messaging services, the Vatican set up numerous bots offering Catholics regular reflections. Most major news organizations have bots so that you can keep up with the headlines of their publications. Twitter’s almost-anything-goes policy meant that people have built bots for all sorts of purposes. There are bots that do poetry, ones that argue with anti-vaxxers about their beliefs, and ones that call out sexist comments people post. I’m a big fan of the @censusAmericans bot created by FiveThirtyEight to regularly send out data from the Census about Americans.

Over the last year, sentiment towards Twitter’s bots has become decidedly negative. Perhaps most people didn’t even realize that there were bots on the site. They probably don’t think of @NYTimes as a bot. When news coverage obsesses over bots, it primarily associates the phenomenon with nefarious activities meant to seed discord, create chaos, and do harm. It can all be boiled down to: Russian bots. As a result, Congress sees bots as inherently bad, and journalists keep accusing Twitter of having a “bot problem” without accounting for how their own stories appear on Twitter through bots.

Although we often hear about the millions and millions of bots on Twitter as though they’re all manipulative, the stark reality is that bots can be quite fun. I had my students build Twitter bots to teach them how these things worked — they had a field day, even if they didn’t get many followers.

Of course, there are definitely bots that you can buy to puff up your status. Some of them might even be Russian built. And here’s where we get to the crux of the current conversation.

Buying Status

Typical before/after image on Instagram.

People buy bots to increase their number of followers, retweets, and likes in order to appear cooler than they are. Think of this as mascara for your digital presence. While plenty of users are happy chatting away with their friends without their makeup on, there’s an entire class of professionals who feel the need to be dolled up and give the best impression possible. It’s a competition for popularity and status, marked by numbers.

Number games are not new, especially not in the world of media. Take a well-established firm like Nielsen. Although journalists often uncritically quote Nielsen numbers as though they are “fact,” most people in the ad and media business know that they’re crap. But they’ve long been the best crap out there. And, more importantly, they’re uniform crap so businesses can make predictable decisions off of these numbers, fully aware that they might not be that accurate. The same has long been true of page views and clicks. No major news organization should take its page views literally. And yet, lots of news agencies rank their reporters based on this data.

What makes the purchasing of Twitter bots and status so nefarious? The NYTimes story suggests that doing so is especially deceptive. Their coverage shamed Twitter into deleting a bunch of Twitter accounts, outing all of the public figures who had bought bots. It almost felt like a discussion of who had gotten Botox.

Much of this recent flurry of coverage suggests that the so-called bot problem is a new thing that is “finally” known. It boggles my mind to think that any regular Twitter user hadn’t seen automated accounts in the past. And heck, there have been services like Twitter Audit to see how many fake followers you have since at least 2012. Gilad Lotan even detailed the ecosystem of buying fake followers in 2014. I think that what’s new is that the term “bot” is suddenly toxic. And it gives us an opportunity to engage in another round of social shaming targeted at insecure people’s vanity, all under the false pretense of being about bad foreign actors.

I’ve never been one to feel the need to put on a lot of makeup in order to leave the house and I haven’t been someone who felt the need to buy bots to appear cool online. But I find it deeply hypocritical to listen to journalists and politicians wring their hands about fake followers and bots given that they’ve been playing at that game for a long time. Who among them is really innocent of trying to garner attention through any means possible?

At the end of the day, I don’t really blame Twitter for giving these deeply engaged users what they want and turning a blind eye towards their efforts to puff up their status online. After all, the cosmetics industry is a $55 billion business. Then again, even cosmetics companies sometimes change their formulas when their products receive bad press.

Note: I’m fully aware of hypotheses that bots have destroyed American democracy. That’s a different essay. But I think that the main impact that they have had, like spam, is to destabilize people’s trust in the media ecosystem. Still, we need to contend with the stark reality that they do serve a purpose and some people do want them.

by zephoria at February 13, 2018 01:32 AM

February 12, 2018

Ph.D. student

What happens if we lose the prior for sparse representations?

Noting this nice paper by Giannone et al., “Economic predictions with big data: The illusion of sparsity.” It concludes:

Summing up, strong prior beliefs favouring low-dimensional models appear to be necessary to support sparse representations. In most cases, the idea that the data are informative enough to identify sparse predictive models might be an illusion.

This is refreshing honesty.

In my experience, most disciplinary social sciences have a strong prior bias towards pithy explanatory theses. In a normal social science paper, what you want is a single research question, a single hypothesis. This thesis expresses the narrative of the paper. It’s what makes the paper compelling.

In mathematical model fitting, the term for such a simple hypothesis is a sparse predictive model. These models will have relatively few independent variables predicting the dependent variable. In machine learning, this sparsity is often accomplished by a regularization step. While generally well-motivated, regularization for sparsity can be done for reasons that are more aesthetic or that reflect a stronger prior than is warranted.
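To make the point concrete, here is a minimal sketch (my own, on synthetic data, assuming scikit-learn is available): an L1 penalty (the lasso) forces many coefficients to exactly zero, while an L2 penalty (ridge) shrinks them but keeps the model dense. The choice and strength of the penalty play the role of the prior favoring sparsity.

```python
# Sketch: the same synthetic, genuinely dense data fit with L1 vs. L2 regularization.
# The "pithy" sparse model comes from the penalty, not from the data.
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.RandomState(0)
n, p = 200, 50
X = rng.normal(size=(n, p))
true_coef = rng.normal(scale=0.3, size=p)   # many small true effects (a dense process)
y = X @ true_coef + rng.normal(size=n)

lasso = Lasso(alpha=0.1).fit(X, y)   # L1 penalty: strong prior favoring sparsity
ridge = Ridge(alpha=1.0).fit(X, y)   # L2 penalty: shrinkage without sparsity

print("nonzero coefficients (lasso):", int(np.sum(lasso.coef_ != 0)))
print("nonzero coefficients (ridge):", int(np.sum(ridge.coef_ != 0)))
```

On data like this, the lasso will typically zero out a sizable share of the coefficients even though every variable has a small true effect, while the ridge fit stays dense — the sparsity is imposed by the penalty (the prior), which is the “illusion of sparsity” point.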

A consequence of this preference for sparsity, in my opinion, is the prevalence of literature on power law distributions vs. log normal explanations. (See this note on disorganized heavy tail distributions.) A dense model on a log linear regression will predict a heavy-tailed dependent variable without great error. But it will be unsatisfying from the perspective of scientific explanation.

What seems to be an open question in the social sciences today is whether the culture of social science will change as a result of the robust statistical analysis of new data sets. As I’ve argued elsewhere (Benthall, 2016), if the culture does change, it will mean that narrative explanation will be less highly valued.

References

Benthall, Sebastian. “Philosophy of computational social science.” Cosmos and History: The Journal of Natural and Social Philosophy 12.2 (2016): 13-30.

Giannone, Domenico, Michele Lenza, and Giorgio E. Primiceri. “Economic predictions with big data: The illusion of sparsity.” (2017).

by Sebastian Benthall at February 12, 2018 03:44 AM

February 10, 2018

Ph.D. student

The therapeutic ethos in progressive neoliberalism (Fraser and Furedi)

I’ve read two pieces recently that I found helpful in understanding today’s politics, especially today’s identity politics, in a larger context.

The first is Nancy Fraser’s “From Progressive Neoliberalism to Trump–and Beyond” (link). It portrays the present (American but also global) political moment as a “crisis of hegemony”, using Gramscian terms, for which the presidency of Donald Trump is a poster child. Its main contribution is to point out that the hegemony that’s been in crisis is a hegemony of progressive neoliberalism, which sounds like an oxymoron but, Fraser argues, isn’t.

Rather, Fraser explains a two-dimensional political spectrum: there are politics of distribution, and there are politics of recognition.

To these ideas of Gramsci, we must add one more. Every hegemonic bloc embodies a set of assumptions about what is just and right and what is not. Since at least the mid-twentieth century in the United States and Europe, capitalist hegemony has been forged by combining two different aspects of right and justice—one focused on distribution, the other on recognition. The distributive aspect conveys a view about how society should allocate divisible goods, especially income. This aspect speaks to the economic structure of society and, however obliquely, to its class divisions. The recognition aspect expresses a sense of how society should apportion respect and esteem, the moral marks of membership and belonging. Focused on the status order of society, this aspect refers to its status hierarchies.

Fraser’s argument is that neoliberalism is a politics of distribution–it’s about using the market to distribute goods. I’m just going to assume that anybody reading this has a working knowledge of what neoliberalism means; if you don’t, I recommend reading Fraser’s article about it. Progressivism is a politics of recognition that was advanced by the New Democrats. Part of its political potency has been its consistency with neoliberalism:

At the core of this ethos were ideals of “diversity,” women’s “empowerment,” and LGBTQ rights; post-racialism, multiculturalism, and environmentalism. These ideals were interpreted in a specific, limited way that was fully compatible with the Goldman Sachsification of the U.S. economy…. The progressive-neoliberal program for a just status order did not aim to abolish social hierarchy but to “diversify” it, “empowering” “talented” women, people of color, and sexual minorities to rise to the top. And that ideal was inherently class specific: geared to ensuring that “deserving” individuals from “underrepresented groups” could attain positions and pay on a par with the straight white men of their own class.

A less academic, more Wall Street Journal-reading member of the commentariat might be more comfortable with the terms “fiscal conservatism” and “social liberalism”. And indeed, Fraser’s argument seems mainly to be that the hegemony of the Obama era was fiscally conservative but socially liberal. In a sense, it was the true libertarians who were winning, which is an interesting take I hadn’t heard before.

The problem, from Fraser’s perspective, is that neoliberalism concentrates wealth and carries the seeds of its own revolution, allowing Trump to run on a combination of reactionary politics of recognition (social conservatism) with a populist politics of distribution (economic liberalism: big spending and protectionism). He won, and then sold out to neoliberalism, giving us the currently prevailing combination of neoliberalism and reactionary social policy. Which, by the way, we would be calling neoconservatism if it were 15 years ago. Maybe it’s time to resuscitate this term.

Fraser thinks the world would be a better place if progressive populists could establish themselves as an effective counterhegemonic bloc.

The second piece I’ve read on this recently is Frank Furedi’s “The hidden history of identity politics” (link). Pairing Fraser with Furedi is perhaps unlikely because, to put it bluntly, Fraser is a feminist and Furedi, as far as I can tell from this one piece, isn’t. However, both are serious social historians and there’s a lot of overlap in the stories they tell. That is in itself interesting from a scholarly perspective of one trying to triangulate an accurate account of political history.

Furedi’s piece is about “identity politics” broadly, including both its right-wing and left-wing incarnations. So, we’re talking about what Fraser calls the politics of recognition here. On a first pass, Furedi’s point is that Enlightenment universalist values have been challenged by both right- and left-wing identity politics since the late 18th-century Romantic nationalist movements in Europe, which led to the World Wars and the Holocaust. Maybe, Furedi’s piece suggests, abandoning Enlightenment universalist values was a bad idea.

Although expressed through a radical rhetoric of liberation and empowerment, the shift towards identity politics was conservative in impulse. It was a sensibility that celebrated the particular and which regarded the aspiration for universal values with suspicion. Hence the politics of identity focused on the consciousness of the self and on how the self was perceived. Identity politics was, and continues to be, the politics of ‘it’s all about me’.

Strikingly, Furedi’s argument is that the left took the “cultural turn” into recognition politics essentially because of its inability to maintain a left-wing politics of redistribution, and that this happened in the 70’s. But this in turn undermined the cause of the economic left. Why? Because economic populism requires social solidarity, while identity politics is necessarily a politics of difference. Solidarity within an identity group can cause gains for that identity group, but at the expense of political gains that could be won with an even more unified popular political force.

The emergence of different identity-based groups during the 1970s mirrored the lowering of expectations on the part of the left. This new sensibility was most strikingly expressed by the so-called ‘cultural turn’ of the left. The focus on the politics of culture, on image and representation, distracted the left from its traditional interest in social solidarity. And the most significant feature of the cultural turn was its sacralisation of identity. The ideals of difference and diversity had displaced those of human solidarity.

So far, Furedi is in agreement with Fraser that hegemonic neoliberalism has been the status quo since the 70’s, and that the main political battles have been over identity recognition. Furedi’s point, which I find interesting, is that these battles over identity recognition undermine the cause of economic populism. In short, neoliberals and neocons can use identity to divide and conquer their shared political opponents and keep things as neo- as possible.

This is all rather old news, though a nice schematic representation of it.

Where Furedi’s piece gets interesting is where it draws out the next movements in identity politics, which he describes as the shift from it being about political and economic conditions into a politics of first victimhood and then a specific therapeutic ethos.

The victimhood move grounded the politics of recognition in the authoritative status of the victim. While originally used for progressive purposes, this move was adopted outside of the progressive movement as early as the 1980s.

A pervasive sense of victimisation was probably the most distinct cultural legacy of this era. The authority of the victim was ascendant. Sections of both the left and the right endorsed the legitimacy of the victim’s authoritative status. This meant that victimhood became an important cultural resource for identity construction. At times it seemed that everyone wanted to embrace the victim label. Competitive victimhood quickly led to attempts to create a hierarchy of victims. According to a study by an American sociologist, the different movements joined in an informal way to ‘generate a common mood of victimisation, moral indignation, and a self-righteous hostility against the common enemy – the white male’ (5). Not that the white male was excluded from the ambit of victimhood for long. In the 1980s, a new men’s movement emerged insisting that men, too, were an unrecognised and marginalised group of victims.

This is interesting in part because there’s a tendency today to see the “alt-right” of reactionary recognition politics as a very recent phenomenon. According to Furedi, it isn’t; it’s part of the history of identity politics in general. We just thought it was dead because, as Fraser argues, progressive neoliberalism had attained hegemony.

Buried deep in the piece is arguably Furedi’s most controversial and pointedly written claim, which concerns the “therapeutic ethos” of identity politics since the 1970s, an ethos that resonates quite deeply today. The idea here is that principles from psychotherapy have become part of the repertoire of left-wing activism. A prescription against “blaming the victim” transformed into a prescription towards “believing the victim”, which in turn creates a culture where only those with lived experience of a human condition may speak with authority on it. This authority is ambiguous, because it is at once the moral authority of the victim and the authority one must grant a therapy patient in describing their own experiences for the sake of their mental health.

The obligation to believe and not criticise individuals claiming victim identity is justified on therapeutic grounds. Criticism is said to constitute a form of psychological re-victimisation and therefore causes psychic wounding and mental harm. This therapeutically informed argument against the exercise of critical judgement and free speech regards criticism as an attack not just on views and opinions, but also on the person holding them. The result is censorious and illiberal. That is why in society, and especially on university campuses, it is often impossible to debate certain issues.

Furedi is concerned with how the therapeutic ethos in identity politics shuts down liberal discourse, which further erodes social solidarity which would advance political populism. In therapy, your own individual self-satisfaction and validation is the most important thing. In the politics of solidarity, this is absolutely not the case. This is a subtle critique of Fraser’s argument, which argues that progressive populism is a potentially viable counterhegemonic bloc. We could imagine a synthetic point of view, which is that progressive populism is viable but only if progressives drop the therapeutic ethos. Or, to put it another way, if “[f]rom their standpoint, any criticism of the causes promoted by identitarians is a cultural crime”, then that criminalizes the kind of discourse that’s necessary for political solidarity. That serves to advantage the neoliberal or neoconservative agenda.

This is, Furedi points out, easier to see in light of history:

Outwardly, the latest version of identity politics – which is distinguished by a synthesis of victim consciousness and concern with therapeutic validation – appears to have little in common with its 19th-century predecessor. However, in one important respect it represents a continuation of the particularist outlook and epistemology of 19th-century identitarians. Both versions insist that only those who lived in and experienced the particular culture that underpins their identity can understand their reality. In this sense, identity provides a patent on who can have a say or a voice about matters pertaining to a particular culture.

While I think they do a lot to frame the present political conditions, I don’t agree with everything in either of these articles. There are a few points of tension which I wish I knew more about.

The first is the connection made in some media today between the therapeutic needs of society’s victims and economic distributional justice. Perhaps it’s the nexus of these two political flows that makes the topic of workplace harassment and culture, in its most symbolic forms, such a hot topic today. It is, in a sense, the quintessential progressive neoliberal problem, in that it aligns the politics of distribution with the politics of recognition while employing the therapeutic ethos. The argument goes: since market logic is fair (the neoliberal position), if there is unfair distribution it must be because the politics of recognition are unfair (progressivism). That’s because if there is inadequate recognition, then the societal victims will feel invalidated, preventing them from asserting themselves effectively in the workplace (therapeutic ethos). To put it another way, distributional inequality is being represented as a consequence of a market externality, which is the psychological difficulty imposed by social and economic inequality. A progressive politics of recognition is a therapeutic intervention designed to alleviate this psychological difficulty, thereby correcting the meritocratic market logic.

One valid reaction to this is: so what? Furedi and Fraser are both essentially card-carrying socialists. If you’re a card-carrying socialist (maybe because you have a universalist sense of distributional justice), then you might see the emphasis on workplace harassment as a distraction from a broader socialist agenda. But most people aren’t card-carrying socialist academics; most people go to work and would prefer not to be harassed.

The other thing I would like to know more about is to what extent the demands of the therapeutic ethos are a political rhetorical convenience and to what extent they are a matter of ground truth. The sweeping therapeutic progressive narrative outlined by Furedi, wherein vast swathes of society (i.e., all women, all people of color, maybe all conservatives in liberal-dominant institutions, etc.) are so structurally victimized that therapy-grade levels of validation are necessary for them to function unharmed in universities and workplaces, is truly a tough pill to swallow. On the other hand, a theory of justice that discounts the genuine therapeutic needs of half the population can hardly be described as a “universalist” one.

Is there a resolution to this epistemic and political crisis? If I had to drop everything and look for one, it would be in the clinical psychological literature. What I want to know is how grounded the therapeutic ethos is in (a) scientific clinical psychology, and (b) the epidemiology of mental illness. Is it the case that structural inequality is so traumatizing (either directly or indirectly) that the fragmentation of epistemic culture is necessary as a salve for it? Or is this a political fiction? I don’t know the answer.

by Sebastian Benthall at February 10, 2018 04:48 PM

February 07, 2018

MIMS 2011

Wikipedia’s relationship to academia and academics

I was recently quoted in an article for Science News about the relationship between academia and Wikipedia by Bethany Brookshire. I was asked to comment on a recent paper by MIT Sloan‘s Neil Thompson and Douglas Hanley who investigated the relationship between Wikipedia articles and scientific papers using examples from chemistry and econometrics. There are a bunch of studies on a similar topic (if you’re interested, here is a good place to start) and I’ve been working on this topic – but from a very different angle – for a qualitative study to be published soon. I thought I would share my answers to the interview questions here since many of them are questions that friends and colleagues ask regularly about citing Wikipedia articles and about quality issues on Wikipedia.

Have you ever edited Wikipedia articles?  What do you think of the process?

Some, yes. Being a successful editor on English Wikipedia is a complicated process, particularly if you’re writing about topics that are either controversial or outside the purview of the majority of Western editors. Editing is complicated not only because it is technical (even with the excellent new tools that have been developed to support editing without having to learn wiki markup) – most of the complications come with knowing the norms, the rules and the power dynamics at play.

You’ve worked previously with Wikipedia on things like verification practices. What are the verification practices currently?

That’s a big question 🙂 Verification practices involve a complicated set of norms, rules and technologies. Editors may (or may not) verify their statements by checking sources, but the power of Wikipedia’s claim-making practice lies in the norms of questioning  unsourced claims using the “citation needed” tag and by any other editor being able to remove claims that they believe to be incorrect. This, of course, does not guarantee that every claim on Wikipedia is factually correct, but it does enable the dynamic labelling of unverified claims and the ability to set verification tasks in an iterative fashion.

Many people in academia view Wikipedia as an unreliable source and do not encourage students to use it. What do you think of this?

Academic use of sources is a very contextual practice. We refer to sources in our own papers and publications not only when we are supporting the claims they contain, but also when we dispute them. That’s the first point: even if Wikipedia were generally unreliable, that would not be a good reason for denying its use. The second point is that Wikipedia can be a very reliable source for particular types of information. Affirming the claims made in a particular article, if that was our goal in using it, would require verifying the information that we are reinforcing through citation and citing the particular version (the “oldid” in Wikipedia terms) that we are referring to. Wikipedia can be used very soundly by academics and students – we just need to do so carefully and with an understanding of the context of citation – something we should be doing generally, not only on Wikipedia.
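As a concrete illustration of what version-specific citation looks like in practice (a sketch of my own, not part of the original interview; the article title here is arbitrary), the MediaWiki API exposes the current revision ID of a page, from which a permanent link can be built — and it is that permanent link, rather than the live article URL, that a careful citation would point to.

```python
# Sketch: fetch the current revision ID ("oldid") of an English Wikipedia
# article and build a permanent link suitable for version-specific citation.
import requests

API = "https://en.wikipedia.org/w/api.php"
title = "Privacy"  # example article, chosen arbitrarily

params = {
    "action": "query",
    "prop": "revisions",
    "titles": title,
    "rvprop": "ids|timestamp",
    "format": "json",
}
pages = requests.get(API, params=params).json()["query"]["pages"]
revision = next(iter(pages.values()))["revisions"][0]

# Permanent link to the exact version consulted:
permalink = f"https://en.wikipedia.org/w/index.php?title={title}&oldid={revision['revid']}"
print(permalink, "retrieved", revision["timestamp"])
```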

You work in a highly social media savvy field, what is the general attitude of your colleagues toward Wikipedia as a research resource? Do you think it differs from the attitudes of other academics?

I would say that Wikipedia is widely recognized by academics, including those of my colleagues who don’t specifically conduct Wikipedia research, as a source that is fine to visit but not to cite.

What did you think of this particular paper overall?

I thought that it was a really good paper. Excellent research design and very solid analysis. The only weakness, I would argue, would be that there are quite different results for chemistry and econometrics and that those differences aren’t adequately accounted for. More on that below.

The authors were attempting a causational study by adding Wikipedia articles (while leaving some written but unadded) and looking at how the phrases translated to the scientific literature six months later. Is this a long enough period of time?

This seems to be an appropriate amount of time to study, but there are probably quite important differences between fields of study that might influence results. The volume of publication (social scientists and humanities scholars tend to produce much lower volumes of publications, and publication thus tends to be extended over a longer time than in natural science and engineering subjects, for example), the volume of explanatory or definitional material in publications (requiring greater use of the literature), the extent to which academics in the particular field consult and contribute to Wikipedia – all might affect how different fields of study influence and are influenced by Wikipedia articles.

Do you think the authors achieved evidence of causation here?

Yes. But again, causation in a single field, i.e. chemistry.

Is it important to know whether Wikipedia is influencing the scientific literature? Why or why not?

Yes. It is important to know whether Wikipedia is influencing scientific literature – particularly because we need to know where power to influence knowledge is located (in order to ensure that it is being fairly governed and maintained for the development of accurate and unbiased public knowledge).

Do you think papers like this will impact how scientists view and use Wikipedia?

As far as I know, this is the first paper that attributes a strong link between what is on Wikipedia and the development of science. I am sure that it will influence how scientists and other academics view and use Wikipedia – particularly in driving initiatives where scientists contribute to Wikipedia either directly or via initiatives such as PLoS’s Topic Pages.

Is there anything especially important to emphasize?

The most important thing to emphasize is the differences between fields, which I think need to be better explained. I definitely think that certain types of academic research are more in line with Wikipedia’s way of working, forms and styles of publication, and epistemology, and that Wikipedia will not have the same influence on other fields.

by Heather Ford at February 07, 2018 08:10 AM

February 06, 2018

Ph.D. student

Values, norms, and beliefs: units of analysis in research on culture

Much of the contemporary critical discussion about technology in society and ethical design hinges on the term “values”. Privacy is one such value, according to Mulligan, Koopman, and Doty (2016), drawing on Westin and Post. Contextual Integrity (Nissenbaum, 2009) argues that privacy is a function of norms, and that norms get their legitimacy from, among other sources, societal values. The Data and Society Research Institute lists “values” as one of the cross-cutting themes of its research. Richmond Wong (2017) has been working on eliciting values reflections as a tool in privacy by design. And so on.

As much as ‘values’ get emphasis in this literary corner, I have been unsatisfied with how these literatures represent values as either sociological or philosophical phenomena. How are values distributed in society? Are they stable under different methods of measurement? Do they really have ethical entailments, or are they really just a kind of emotive expression?

For only distantly related reasons, I’ve been looking into the literature on quantitative measurement of culture. I’m doing a bit of a literature review and need your recommendations! But an early hit is Marsden and Swingle’s “Conceptualizing and measuring culture in surveys: Values, strategies, and symbols” (1994), which is a straightforward social science methods piece apparently written before either rejections of positivism or Internet-based research became so destructively fashionable.

A useful passage comes early:

To frame our discussion of the content of the culture module, we have drawn on distinctions made in Peterson’s (1979: 137-138) review of cultural research in sociology. Peterson observes that sociological work published in the late 1940s and 1950s treated values – conceptualizations of desirable end-states – and the behavioral norms they specify as the principal explanatory elements of culture. Talcott Parsons (1951) figured prominently in this school of thought, and more recent survey studies of culture and cultural change in both the United States (Rokeach, 1973) and Europe (Inglehart, 1977) continue the Parsonsian tradition of examining values as a core concept.

This was a surprise! Talcott Parsons is not a name you hear every day in the world of sociology of technology. That’s odd, because as far as I can tell he’s one of these robust and straightforwardly scientific sociologists. The main complaint against him, if I’ve heard any, is that he’s dry. I’ve never heard, despite his being tied to structural functionalism, that his ideas have been substantively empirically refuted (unlike Durkheim, say).

So the mystery is…whatever happened to the legacy of Talcott Parsons? And how is it represented, if at all, in contemporary sociological research today?

One reason why we don’t hear much about Parsons may be because the sociological community moved from measuring “values” to measuring “beliefs”. Marsden and Swingle go on:

Cultural sociologists writing since the late 1970s however, have accented other elements of culture. These include, especially, beliefs and expressive symbols. Peterson’s (1979: 138) usage of “beliefs” refers to “existential statements about how the world operates that often serve to justify value and norms”. As such, they are less to be understood as desirable end-states in and of themselves, but instead as habits or styles of thought that people draw upon, especially in unstructured situations (Swidler, 1986).

Intuitively, this makes sense. When we look at the contemporary seemingly mortal combat of partisan rhetoric and tribalist propaganda, a lot of what we encounter are beliefs and differences in beliefs. As suggested in this text, beliefs justify values and norms, meaning that even values (which you might have thought are the source of all justification) get their meaning from a kind of world-view, rather than being held in a simple way.

That makes a lot of sense. There’s often a lot more commonality in values than in ways those values should be interpreted or applied. Everybody cares about fairness, for example. What people disagree about, often vehemently, is what is fair, and that’s because (I’ll argue here) people have widely varying beliefs about the world and what’s important.

To put it another way, the Humean model where we have beliefs and values separately and then combine the two in an instrumental calculus is wrong, and we’ve known it’s wrong since the 70’s. Instead, we have complexes of normatively thick beliefs that reinforce each other into a worldview. When we’re asked about our values, we are abstracting in a derivative way from this complex of frames, rather than getting at a more core feature of personality or culture.

A great book on this topic is Hilary Putnam’s The collapse of the fact/value dichotomy (2002), just for example. It would be nice if more of this metaethical theory and sociology of values surfaced in the values in design literature, despite its being distinctly off-trend.

References

Marsden, Peter V., and Joseph F. Swingle. “Conceptualizing and measuring culture in surveys: Values, strategies, and symbols.” Poetics 22.4 (1994): 269-289.

Mulligan, Deirdre K., Colin Koopman, and Nick Doty. “Privacy is an essentially contested concept: a multi-dimensional analytic for mapping privacy.” Phil. Trans. R. Soc. A 374.2083 (2016): 20160118.

Nissenbaum, Helen. Privacy in context: Technology, policy, and the integrity of social life. Stanford University Press, 2009.

Putnam, Hilary. The collapse of the fact/value dichotomy and other essays. Harvard University Press, 2002.

Wong, Richmond Y., et al. “Eliciting Values Reflections by Engaging Privacy Futures Using Design Workbooks.” (2017).

by Sebastian Benthall at February 06, 2018 08:40 PM

January 26, 2018

Ph.D. student

Call for abstracts for critical data studies / human contexts and ethics track at the 2018 4S Annual Conference

4S 2018 Open Panel 101: Critical Data Studies: Human Contexts and Ethics

We’re pleased to be organizing one of the open panels at the 2018 Meeting of the Society for the Social Studies of Science (4S). Please submit an abstract!

Deadline: 1 February 2018, submit 250 word abstract here

Conference: 29 August - 1 September 2018, Sydney, Australia

Convenors:

Call for abstracts

In this continuation of the previous Critical Data Studies / Studying Data Critically tracks at 4S (see also Dalton and Thatcher 2014; Iliadis and Russo 2016), we invite papers that address the organizational, social, cultural, ethical, and otherwise human impacts of data science applications in areas like science, education, consumer products, labor and workforce management, bureaucracies and administration, media platforms, or families. Ethnographies, case studies, and theoretical works that take a situated approach to data work, practices, politics, and/or infrastructures in specific contexts are all welcome.

Datafication and autonomous computational systems and practices are producing significant transformations in our analytical and deontological frameworks, sometimes with objectionable consequences (O’Neil 2016; Barocas, Bradley, Honavar, and Provost 2017). Whether we’re looking at the ways in which new artefacts are constructed or at their social consequences, questions of value and valuation or objectivity and operationalization are indissociable from the processes of innovation and the principles of fairness, reliability, usability, privacy, social justice, and harm avoidance (Campolo, Sanfilippo, Whittaker, and Crawford, 2017).

By reflecting on situated unintended and objectionable consequences, we will gather a collection of works that illuminate one or several aspects of the unfolding of controversies and ethical challenges posed by these new systems and practices. We’re specifically interested in pieces that provide innovative theoretical insights about ethics and controversies, fieldwork, and reflexivity about the researcher’s positionality and her own ethical practices. We also encourage submissions from practitioners and educators who have worked to infuse ethical questions and concerns into a workflow, pedagogical strategy, collaboration, or intervention.

Submit a 250 word abstract here.

by R. Stuart Geiger at January 26, 2018 08:00 AM

January 25, 2018

Ph.D. alumna

Panicked about Kids’ Addiction to Tech? Here are two things you could do

Flickr: Jan Hoffman

(This was originally posted on NewCo Shift)

Ever since key Apple investors challenged the company to address kids’ phone addiction, I’ve gotten a stream of calls asking me to comment on the topic. Mostly, I want to scream. I wrote extensively about the unhelpful narrative of “addiction” in my book It’s Complicated: The Social Lives of Networked Teens. At the time, the primary concern was social media. Today, it’s the phone, but the same story still stands: young people are using technology to communicate with their friends non-stop at a point in their life when everything is about sociality and understanding your place in the social world.

As much as I want to yell at all of the parents around me to chill out, I’m painfully and acutely aware of how ineffective this is. Parents don’t like to see that they’re part of the problem or that their efforts to protect and help their children might backfire. (If you want to experience my frustration in full color, watch the Black Mirror episode called “Arkangel” (trailer here).)

Lately, I’ve been trying to find smaller interventions that can make a huge difference, tools that parents can use to address the problems they panic about. So let me offer two approaches to “addiction” that work at different ages.

Parenting the Small People: Verbalizing Tech Use

In the early years, children learn values and norms by watching their parents and other caregivers. They emulate our language and our facial expressions, our quirky habits and our tastes. There’s nothing more satisfying and horrifying than listening to your child repeat something you say all too often. Guess what? They also get their cues about technology from people around them. A child would need to be alone in the woods to miss that people love their phones. From the time that they’re born, people are shoving phones in their faces to take pictures, turning to their phones to escape, and obsessively talking on their phones while ignoring them. Of course they want the attention that they see the phone as taking away. And of course they want the device to be special to them.

So, here’s what I recommend to parents of small people: Verbalize what you’re doing with your phone. Whenever you pick up your phone (or other technologies) in front of your kids, say what you’re doing. And involve them in the process if they’d like.

  • “Mama’s trying to figure out how long it will take to get to Bobby’s house. Want to look at the map with me?”
  • “Daddy’s checking out the weather. Do you want to see what it says?”
  • “Mom wants to take a picture of you. Is that OK?”
  • “Papa needs a break and wants to read the headlines of the New York Times. Do you want me to read them to you?”
  • “Mommy got a text message from Mama and needs to respond. Should I tell her something from you too?”

The funny thing about verbalizing what you’re doing is that you’ll check yourself about your decisions to grab that phone. Somehow, it’s a lot less comfy saying: “Mom’s going to check work email because she can’t stop looking in case something important happens.” Once you begin saying out loud every time you look at technology, you also realize how much you’re looking at technology. And what you’re normalizing for your kids. It’s like looking in a mirror and realizing what they’re learning. So check yourself and check what you’ve normalized. Are you cool with the values and norms you’ve set?

Parenting the Mid-Size People: Household Contracts

I can’t tell you how many parents have told me that they have a rule in their house that their kids can’t use technology until X, where X could be “after dinner” or “after homework is done” or any other markers. And yet, consistently, I ask them if they put away their phones during dinner or until after they’ve bathed and they look at me like I’m an alien. Teenagers loathe hypocrisy. It’s the biggest thing that I’ve seen to undermine trust between a parent and a child. And boy do they have a lot to say about their parents’ addiction to their phones. Oy vay.

So if you want to curb your child’s technology use, here’s what I propose: Create a household contract. This is a contract that sets the boundaries for everyone in the house — parents and kids.

Ask your teenage or tween child to write the first draft of the contract, stipulating what they think is appropriate as the rules for everyone in the house, what they’re willing to trade off to get technology privileges, and what they think parents should trade off. Ask them to list the consequences of not abiding by the household rules for everyone in the house. (As a parent, you can think through or sketch the terms you think are fair, but you should not present them first.) Ask your child to pitch to you what the household rules should be. You will most likely be shocked that they’re stricter and more structured than you expected. And then start the negotiation process. You may want to argue that you should have the right to look at the phone when it’s ringing in case it’s grandma calling, but then your daughter should have the right to look at her phone to see if her best friend is looking for her. That kind of thing. Work through the process, but have your child lead it rather than you dictating it. And then write up those rules and hang them up in the house as a contract that can be renegotiated at different times.

Parenting Past Addiction

Many people have unhealthy habits and dynamics in their life. Some are rooted in physical addiction. Others are habitual or psychological crutches. But across that spectrum, most people are aware of when something that they’re doing isn’t healthy. They may not be able to stop. Or they may not want to stop. Untangling that is part of the challenge. When you feel as though your child has an unhealthy relationship with technology (or anything else in their life), you need to start by asking if they see this the same way you do. When parents feel as though what their child is doing is unhealthy for them, but the child does not, the intervention has to be quite different than when the child is also concerned about the issue. There are plenty of teens out there who know their psychological desire to talk non-stop with their friends for fear of missing out is putting them in a bad place. Help them through that process and work out what strategies they can develop to cope. Helping them build those coping skills long term will help them a lot more than just putting rules into place.

When there is a disconnect between parent and child’s views on a situation, the best thing a parent can do is try to understand why the disconnect exists. Is it about pleasure seeking? Is it about fear of missing out? Is it about the emotional bond of friendship? Is it about a parent’s priorities being at odds with a child’s priorities? What comes next is fundamentally about values in parenting. Some parents believe that they are the masters of the house and their demands rule the day. Others acquiesce to their children’s desires with no pushback. The majority of parents are in-between. But at the end of the day, parenting is about helping children navigate the world and supporting them as they develop agency in a healthy manner. So I would strongly recommend that parents focus their energies on negotiating a path that allows children to be bought in and aware of why boundaries are being set. That requires communication and energy, not a new technology to police boundaries for you. More often than not, the latter sends the wrong message and backfires, not unlike the Black Mirror episode I mentioned earlier.

Good luck parents — parenting is a non-stop adventure filled with both joy and anxiety.

by zephoria at January 25, 2018 01:33 AM

January 24, 2018

MIMS 2014

A Possible Explanation why America does Nothing about Gun Control

Ever since the Las Vegas mass shooting last October, I’ve wanted to blog about gun control. But I also wanted to wait—to see whether that mass shooting, though the deadliest to date in U.S. history, would quickly slip into the dull recesses of the American public subconsciousness just like all the rest. It did, and once again we find ourselves in the same sorry cycle of inaction that by this point is painfully familiar to everyone.

I also recently came across a 2016 study, by Kalesan, Weinberg, and Galea, which found that on average, Americans are 99% likely to know someone either killed or injured by gun violence over the course of their lifetime. That made me wonder: how can it possibly be that Americans remain so paralyzed on this issue if it affects pretty much everyone?

It could be that the ubiquity of gun violence is actually the thing that causes the paralysis. That is, gun violence affects almost everyone just as Kalesan et al argue, but the reactions Americans have to the experience are diametrically opposed to one another. These reactions result in hardened views that inform people’s voting choices, and since these choices more or less divide the country in half across partisan lines, the result is an equilibrium where nothing can ever get done on gun control. So on this reading, it’s not so much a paralysis of inaction as a tense political stalemate.

But it could also be something else. Kalesan et al calculate the likelihood of knowing someone killed or injured by general gun violence over the course of a lifetime, but they don’t focus on mass shootings in particular. Their methodology is based on basic principles of probability and some social network theory that posits people have an effective social network numbering a little fewer than 300 people. If you look at the Kalesan et al paper, it becomes clear that their methodology can also be used to calculate the likelihood of knowing someone killed or injured in a mass shooting. It’s just a matter of substituting the rate of mass shootings for the rate of general gun violence in their probability calculation.

It turns out that the probability of knowing someone killed/injured in a mass shooting is much, much lower than for gun violence more generally. Even with a relatively generous definition of what counts as a mass shooting (four or more people injured/killed not including the shooter, according to the Gun Violence Archive), this probability is about 10%. When you only include incidents that have received major national news media attention—based on a list compiled by Mother Jones—that probability drops to about 0.36%.
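To make the arithmetic concrete, here is a minimal sketch of that calculation, assuming the Kalesan et al approach reduces to treating each of roughly 291 effective network members as independently facing the same lifetime risk of being a victim, so that the chance of knowing at least one victim is 1 − (1 − p)^n. The per-person rates below are illustrative placeholders chosen to roughly reproduce the percentages above; they are not figures taken from the paper.

    # Sketch of the "know a victim" calculation, assuming it reduces to:
    # each of ~291 effective network members independently faces the same
    # lifetime risk of being killed or injured. The rates below are
    # illustrative placeholders, not inputs taken from Kalesan et al.

    def p_know_a_victim(lifetime_victim_rate, network_size=291):
        """Probability that at least one person in your network is a victim."""
        return 1 - (1 - lifetime_victim_rate) ** network_size

    # Placeholder lifetime per-person victimization rates:
    rates = {
        "any gun violence": 1.6e-2,                   # ~99%, matching the post
        "mass shooting (GVA definition)": 3.6e-4,     # ~10%
        "mass shooting (Mother Jones list)": 1.2e-5,  # ~0.35%
    }

    for label, rate in rates.items():
        print(f"{label}: {p_know_a_victim(rate):.2%}")

The point of the sketch is just that the exponent does the work: with a network of a few hundred people, even a small per-person rate compounds into near certainty, while a rate a couple of orders of magnitude smaller stays in the single digits.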

So, it’s possible the reason Americans continue to drag their feet on gun control is that the problem just doesn’t personally affect enough people. Curiously, the even lower likelihood of knowing someone killed or injured in a terrorist attack doesn’t seem to hinder politicians from working aggressively to prevent further terrorist attacks. Still, if more people were personally affected by mass shootings, more might change their minds on gun control like Caleb Keeter, the Josh Abbott Band guitarist who survived the Las Vegas shooting.

by dgreis at January 24, 2018 05:58 PM

January 21, 2018

Ph.D. student

It’s just like what happened when they invented calculus…

I’ve picked up this delightful book again: David Foster Wallace’s Everything and More: A Compact History of Infinity (2003). It is the David Foster Wallace (the brilliant and sadly dead writer and novelist you’ve heard of) writing a history of mathematics, starting with the Ancient Greeks and building up to the discovery of infinity by Georg Cantor.

It’s a brilliantly written book, one that educates its reader without any doctrinal baggage. Wallace doesn’t care if he’s a mathematician or a historian; he’s just a great writer. And what comes through in the book is truly a history of the idea of infinity, with all the ways that it was a reflection of the intellectual climate and preconceptions of the mathematicians working on it. The book is full of mathematical proofs that are blended seamlessly into the casual prose. The whole idea is to build up the excitement and wonder of mathematical discovery, and just how hard it was to come to appreciate infinity in the way we understand it mathematically today. A lot of this development had to do with the way mathematicians and scientists thought about their relationship to abstraction.

It’s a wonderful book that, refreshingly, isn’t obsessed with how everything has been digitized. Rather (just as one gem), it offers a historical perspective on what was perhaps an even more profound change: that time in the 1700s when suddenly everything started to be looked at as an expression of mathematical calculus.

To quote the relevant passage:

As has been at least implied and will now be exposited on, the math-historical consensus is that the late 1600s mark the start of a modern Golden Age in which there are far more significant mathematical advances than anytime else in world history. Now things start moving really fast, and we can do little more than try to build a sort of flagstone path from early work on functions to Cantor’s infinicopia.

Two large-scale changes in the world of math to note very quickly. The first involves abstraction. Pretty much all math from the Greeks to Galileo is empirically based: math concepts are straightforward abstractions from real-world experience. This is one reason why geometry (along with Aristotle) dominated mathematical reasoning for so long. The modern transition from geometric to algebraic reasoning was itself a symptom of a larger shift. By 1600, entities like zero, negative integers, and irrationals are used routinely. Now start adding in the subsequent decades’ introductions of complex numbers, Napierian logarithms, higher-degree polynomials and literal coefficients in algebra–plus of course eventually the 1st and 2nd derivative and the integral–and it’s clear that as of some pre-Enlightenment date math has gotten so remote from any sort of real-world observation that we and Saussure can say verily it is now, as a system of symbols, “independent of the objects designated,” i.e. that math is now concerned much more with the logical relations between abstract concepts than with any particular correspondence between those concepts and physical reality. The point: It’s in the seventeenth century that math becomes primarily a system of abstractions from other abstractions instead of from the world.

Which makes the second big change seem paradoxical: math’s new hyperabstractness turns out to work incredibly well in real-world applications. In science, engineering, physics, etc. Take, for one obvious example, calculus, which is exponentially more abstract than any sort of ‘practical’ math before (like, from what real-world observation does one dream up the idea that an object’s velocity and a curve’s subtending area have anything to do with each other?), and yet it is unprecedentedly good for representing/explaining motion and acceleration, gravity, planetary movements, heat–everything science tells us is real about the real world. Not at all for nothing does D. Berlinski call calculus “the story this world first told itself as it became the modern world.” Because what the modern world’s about, what it is, is science. And it’s in the seventeenth century that the marriage of math and science is consummated, the Scientific Revolution both causing and caused by the Math Explosion because science–increasingly freed of its Aristotelian hangups with substance v. matter and potentiality v. actuality–becomes now essentially a mathematical enterprise in which force, motion, mass, and law-as-formula compose the new template for understanding how reality works. By the late 1600s, serious math is part of astronomy, mechanics, geography, civil engineering, city planning, stonecutting, carpentry, metallurgy, chemistry, hydraulics, optics, lens-grinding, military strategy, gun- and cannon-design, winemaking, architecture, music, shipbuilding, timekeeping, calendar-reckoning; everything.

We take these changes for granted now.

But once, this was a scientific revolution that transformed, as Wallace observed, everything.

Maybe this is the best historical analogy for the digital transformation we’ve been experiencing in the past decade.

by Sebastian Benthall at January 21, 2018 02:07 AM

January 19, 2018

Ph.D. student

May there be shared blocklists

A reminder:

Unconstrained media access to a person is indistinguishable from harassment.

It pains me to watch my grandfather suffer from a surfeit of communication. He can't keep up with the mail he receives each day. Because of his noble impulse to charity and having given money to causes he supports (evangelical churches, military veterans, disadvantaged children), those charities sell his name for use by other charities (I use "charity" very loosely), and he is inundated with requests for money. Very frequently, those requests include a "gift", apparently in order to induce a sense of obligation: a small calendar, a pen and pad of paper, refrigerator magnets, return address labels, a crisp dollar bill. Those monetary ones surprised me at first, but they are common, and if some small percentage of people feel an obligation to write a $50 check, then sending a dollar bill to each person makes it worth their while (though it must not help the purported charitable cause very much, which is apparently not a high priority). Many now include a handful of US coins stuck to the response card -- ostensibly to imply that just a few cents a day can make a difference, but, I suspect, to make it harder to recycle the mail directly because it includes metal as well as paper. (I throw these in the recycling anyway.) Some of these solicitations include a warning on the outside that I hadn't seen before, indicating that it's a federal criminal offense to open postal mail or to keep it from the recipient. Perhaps this is a threat to caregivers to discourage them from throwing away this junk mail for their family members; more likely, I suspect, it encourages the suspicion in the recipient that someone might try to filter their mail, that to do so would be unjust, even criminal, and that anyone trying to help them by sorting their mail should not be trusted. It disgusts me.

But the mails are nothing compared to the active intrusiveness of other media. Take conservative talk radio, which my grandfather listened to for years as a way to keep sound in the house and fend off loneliness. It's often on in the house at a fairly low volume, but it's ever present, and it washes over the brain. I suspect most people could never genuinely understand Rush Limbaugh's rants, but coherent argument is not the point, it's just the repetition of a claim, not even a claim, just a general impression. For years, my grandfather felt conflicted, as many of his beloved family members (liberal and conservative) worked for the federal government, but he knew, in some quite vague but very deep way, that everyone involved with the federal government was a menace to freedom. He tells me explicitly that if you hear something often enough, you start to think it must be true.

And then there's the TV, now on and blaring 24 hours a day, whether he's asleep or awake. He watches old John Wayne movies or NCIS marathons. Or, more accurately, he watches endless loud commercials, with some snippets of quiet movies or television shows interspersed between them. The commercials repeat endlessly throughout the day and I start to feel confused, stressed and tired within a few hours of arriving at his house. I suspect advertisers on those channels are happy with the return they receive; with no knowledge of the source, he'll tell me that he "really ought to" get or try some product or another for around the house. He can't hear me, or other guests, or family he's talking to on the phone when a commercial is on, because they're so loud.

Compared to those media, email is clear and unintrusive, though its utility is still lost in inundation. Email messages that start with "Fw: FWD: FW: FW FW Fw:" cover most of his inbox; if he clicks on one and scrolls down far enough he can get to the message, a joke about Obama and monkeys, or a cute picture of a kitten. He can sometimes get to the link to photos of the great-grand-children, but after clicking the link he's faced with a moving pop-up box asking him to login, covering the faces of the children. To close that box, he must identify and click on a small "x" in very light grey on a white background. He can use the Web for his bible study and knows it can be used for other purposes, but ubiquitous and intrusive prompts (advertising or otherwise) typically distract him from other tasks.

My grandfather grew up with no experience with media of these kinds, and had no time to develop filters or practices to avoid these intrusions. At his age, it is probably too late to learn a new mindset to throw out mail without a second thought or immediately scroll down a webpage. In a lax regulatory environment, and unaccustomed to filtering, he suffers -- financially and emotionally -- from these exploitations on a daily basis. Mail, email, broadcast video, radio and telephone could provide an enormous wealth of benefits for an elderly person living alone: information, entertainment, communication, companionship, edification. But those advantages are made mostly inaccessible.

Younger generations suffer other intrusions of media. Online harassment is widely experienced (its severity varies, by gender among other things); your social media account probably lets you block an account that sends you a threat or other unwelcome message, but it probably doesn't provide mitigations against dogpiling, where a malicious actor encourages their followers to pursue you. Online harassment is important because of the severity and chilling impact on speech, but an analogous problem of over-access exists with other attention-grabbing prompts. What fraction of smartphone users know how to filter the notifications that buzz or ring their phone? Notifications are typically on by default rather than opt-in with permission. Smartphone users can, even without the prompt of the numerous thinkpieces on the topic, describe the negative effects on their attention and well-being.

The capability to filter access to ourselves must be a fundamental principle of online communication: it may be the key privacy concern of our time. Effective tools that allow us to control the information we're exposed to are necessities for freedom from harassment; they are necessities for genuine accessibility of information and free expression. May there be shared blocklists, content warnings, notification silencers, readability modes and so much more.

by nick@npdoty.name at January 19, 2018 10:50 PM

January 15, 2018

Ph.D. student

social structure and the private sector

The Human Cell

Academic social scientists leaning towards the public intellectual end of the spectrum love to talk about social norms.

This is perhaps motivated by the fact that these intellectual figures are prominent in the public sphere. The public sphere is where these norms are supposed to solidify, and these intellectuals would like to emphasize their own importance.

I don’t exclude myself from this category of persons. A lot of my work has been about social norms and technology design (Benthall, 2015; Benthall, Gürses and Nissenbaum, 2017).

But I also work in the private sector, and it’s striking how differently things look from that perspective. It’s natural for academics who participate more in the public sphere than the private sector to be biased in their view of social structure. From the perspective of being able to accurately understand what’s going on, you have to think about both at once.

That’s challenging for a lot of reasons, one of which is that the private sector is a lot less transparent than the public sphere. In general the internals of actors in the private sector are not open to the scrutiny of commentariat onlookers. Information is one of the many resources traded in pairwise interactions; when it is divulged, it is divulged strategically, introducing bias. So it’s hard to get a general picture of the private sector, even though it accounts for a much larger proportion of social structure than the public sphere does. In other words, public spheres are highly over-represented in analyses of social structure because of the availability of public data about them. That is worrisome from an analytic perspective.

It’s well worth making the point that the public/private dichotomy is problematic. Contextual integrity theory (Nissenbaum, 2009) argues that modern society is differentiated among many distinct spheres, each bound by its own social norms. Nissenbaum actually has a quite different notion of norm formation from, say, Habermas. For Nissenbaum, norms evolve over social history, but may be implicit. Contrast this with Habermas’s view that norms are the result of communicative rationality, which is an explicit and linguistically mediated process. The public sphere is a big deal for Habermas. Nissenbaum, a scholar of privacy, rejects the idea of the ‘public sphere’ simpliciter. Rather, social spheres self-regulate, and privacy, which she defines as appropriate information flow, is maintained when information flows according to these multiple self-regulatory regimes.

I believe Nissenbaum is correct on this point of societal differentiation and norm formation. This nuanced understanding of privacy as the differentiated management of information flow challenges any simplistic notion of the public sphere. Does it challenge a simplistic notion of the private sector?

Naturally, the private sector doesn’t exist in a vacuum. In the modern economy, companies are accountable to the law, especially contract law. They have to pay their taxes. They have to deal with public relations and are regulated as to how they manage information flows internally. Employees can sue their employers, etc. So just as the ‘public sphere’ doesn’t permit a total free-for-all of information flow (some kinds of information flow in public are against social norms!), so too does the ‘private sector’ not involve complete secrecy from the public.

As a hypothesis, we can posit that what makes the private sector different is that the relevant social structures are less open in their relations with each other than they are in the public sphere. We can imagine an autonomous social entity like a biological cell. Internally it may have a lot of interesting structure and organelles. Its membrane prevents this complexity leaking out into the aether, or plasma, or whatever it is that human cells float around in. Indeed, this membrane is necessary for the proper functioning of the organelles, which in turn allows the cell to interact properly with other cells to form a larger organism. Echoes of Francisco Varela.

It’s interesting that this may actually be a quantifiable difference. One way of modeling the difference between the internal and the external-facing complexity of an entity is with information theory. The more complex internal state of the entity has higher entropy than the membrane. The fact that the membrane causally mediates interactions between the internals and the environment limits information flow between them; this is captured by the Data Processing Inequality. The restricted information flow between the system internals and externals is quantified as lower mutual information between the two domains. At zero mutual information, the two domains are statistically independent of each other.
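As a quick toy illustration of the quantification being suggested here (a sketch under arbitrary assumed distributions, not anything derived from the references): treat the internal state X, the membrane summary Y, and an external observation Z as a Markov chain X → Y → Z, so the Data Processing Inequality guarantees I(X;Z) ≤ I(X;Y).

    # Toy model: an entity with internal state X, a membrane summary Y that is
    # all the outside ever sees, and an external observation Z that depends
    # only on Y. Because X -> Y -> Z is a Markov chain, the Data Processing
    # Inequality gives I(X;Z) <= I(X;Y). All distributions here are arbitrary
    # illustrative choices.
    import numpy as np

    n_x = 8                                      # 8 internal states: 3 bits of internal complexity
    p_x = np.full(n_x, 1 / n_x)                  # uniform internal state
    y_of_x = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # membrane: a 1-bit coarse-graining of X

    noise = 0.1                                  # environment reads the membrane through a noisy channel
    p_z_given_y = np.array([[1 - noise, noise],
                            [noise, 1 - noise]])

    # Joint distribution p(x, y, z): y is a function of x, z depends only on y.
    p_xyz = np.zeros((n_x, 2, 2))
    for x in range(n_x):
        for z in range(2):
            p_xyz[x, y_of_x[x], z] = p_x[x] * p_z_given_y[y_of_x[x], z]

    def entropy(p):
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    def mutual_information(p_ab):
        # I(A;B) = H(A) + H(B) - H(A,B)
        return entropy(p_ab.sum(axis=1)) + entropy(p_ab.sum(axis=0)) - entropy(p_ab.ravel())

    print("I(X;Y) =", mutual_information(p_xyz.sum(axis=2)))  # 1 bit: the membrane's capacity
    print("I(X;Z) =", mutual_information(p_xyz.sum(axis=1)))  # strictly less, per the DPI

In this toy setup the internal state carries three bits, the membrane passes at most one of them, and the noisy environment recovers even less; that gap is one way to put a number on the “closedness” of a private-sector entity relative to its internal complexity.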

I haven’t worked out all the implications of this.

References

Benthall, Sebastian. (2015). “Designing Networked Publics for Communicative Action.” In Jenny Davis & Nathan Jurgenson (eds.), Theorizing the Web 2014 [Special Issue], Interface 1(1).

Benthall, Sebastian, Seda Gürses, and Helen Nissenbaum. (2017). “Contextual Integrity through the Lens of Computer Science.” Foundations and Trends® in Privacy and Security 2(1), pp. 1–69. http://dx.doi.org/10.1561/3300000016

Nissenbaum, H. (2009). Privacy in context: Technology, policy, and the integrity of social life. Stanford University Press.

by Sebastian Benthall at January 15, 2018 09:01 PM

January 14, 2018

Ph.D. student

on university businesses

Suppose we wanted to know why there’s an “epistemic crisis” today. Suppose we wanted to talk about higher education’s role and responsibility towards that crisis, even though that may be just a small part of it.

That’s a reason why we should care about postmodernism in universities. The alternative, some people have argued, is a ‘modernist’ or even ‘traditional’ university which was based on a perhaps simpler and less flexible theory of knowledge. For the purpose of this post I’m going to assume the reader knows roughly what that’s all about. Since postmodernism rejects meta-narratives and instead admits that all we have to legitimize anything is a contest of narratives, that is really just asking for an epistemic crisis where people just use whatever narratives are most convenient for them and then society collapses.

In my last post I argued that the question of whether universities should be structured around modernist or postmodernist theories of legitimation and knowledge has been made moot by the fact that universities have the option of operating solely on administrative business logic. I wasn’t being entirely serious, but it’s a point that’s worth exploring.

One reason why it’s not so terrible if universities operate according to business logic is that it may still, simply as a function of business logic, be in their strategic interest to hire serious scientists and scholars whose work is not directly driven by business logic. These scholars will be professionally motivated and in part directed by the demands of their scholarly fields. But that kicks the can of the inquiry down the road.

Suppose that there are some fields that are Bourdieusian sciences, which might be summarized as fields structured by the distribution of symbolic capital to those who win points in the game of arbitrating the real. (Writing that all out now, I can see why many people might find Bourdieu a little opaque.)

Then if a university business thinks it should hire from the Bourdieusian sciences, that’s great. But there are many other kinds of social fields it might be useful to hire from for, e.g., faculty positions. This seems to agree with the facts: many university faculty are not from Bourdieusian sciences!

This complicates, a lot actually, the story about the relationship between universities and knowledge. One thing that is striking from the ethnography of education literature (Jean Lave) is how much the social environment of learning is constitutive of what learning is (to put it one way). Society expects and to some extent enforces that when a student is in a classroom, what they are taught is knowledge. We have concluded that not every teacher in a university business is a Bourdieusian scientist, hence some of what students learn in universities is not Bourdieusian science, so it must be that a lot of what students are taught in universities isn’t real. But what is it then? It’s got to be knowledge!

The answer may be: it’s something useful. It may not be real or even approximating what’s real (by scientific standards), but it may still be something that’s useful to believe, express, or perform. If it’s useful to “know” even in this pragmatic and distinctly non-Platonic sense of the term, there’s probably a price at which people are willing to be taught it.

As a higher order effect, universities might engage in advertising in such a way that some prospective students are convinced that what they teach is useful to know even when it’s not really useful at all. This prospect is almost too cynical to even consider. But that’s why it’s important to consider why a university operating solely according to business logic would in fact be terrible! This would not just be the sophists teaching sophistry to students so that they can win in court. It would be sophists teaching bullshit to students because they can get away with being paid for it. In other words, charlatans.

Wow. You know I didn’t know where this was going to go when I started reasoning about this, but it’s starting to sound worse and worse!

It can’t possibly be that bad. University businesses have a reputation to protect, and they are subject to the court of public opinion. Even if not all fields are Bourdieusian science, each scholarly field has its own reputation to protect and so has an incentive to ensure that it, at least, is useful for something. It becomes, in a sense, a web of trust, where each link in the network is tested over time. As an aside, this is an argument for the importance of interdisciplinary work. It’s not just a nice-to-have because wouldn’t-it-be-interesting. It’s necessary as a check on the mutual compatibility of different fields, and it prevents disciplines from becoming exploitative of students and other resources in society.

Indeed, it’s possible that this process of establishing mutual trust among experts even across different fields is what allows a kind of coherentist, pragmatist truth to emerge. But that’s by no means guaranteed. But to be very clear, that process can happen among people whether or not they are involved in universities or higher education. Everybody is responsible for reality, in a sense. To wit, citizen science is still Bourdieusian science.

But see how the stature of the university has fallen. Under a modernist logic, the university was where one went to learn what is real. One would trust that learning it would be useful because universities were dedicated to teaching what was real. Under business logic, the university is a place to learn something that the university finds it useful to teach you. It cannot be trusted without lots of checks from the rest of society. Intellectual authority is now much more distributed.

The problem with the business university is that it finds itself in competition for intellectual authority, and hence for society’s investment in education, with other kinds of institutions. These include employers, who can discount wages for jobs that give their workers valuable human capital (e.g. the free college internship). Moreover, absent its special dedication to science per se, there’s less of a reason to put society’s investment in basic research in its hands. This accords with Clark Kerr’s observation that the postwar era was golden for universities because the federal government kept them flush with funds for basic research, but those funds have since tapered off, and now a lot more important basic research is done in the private sector.

So to the extent that the university is responsible for the ‘epistemic crisis’, it may be because universities began to adopt business logic as their guiding principle. This is not because they then began to teach garbage. It’s because they lost the special authority awarded to modernist universities, which we funded for a special mission in society. This opened the door for more charlatans, most of whom are not at universities. They might be on YouTube.

Note that this gets us back to something similar but not identical to postmodernism.* What’s at stake are not just narratives, but also practices and other forms of symbolic and social capital. But there are certainly many different ones, articulated differently, and in competition with each other. The university business winds up reflecting the many different kinds of useful knowledge across all of society and reproducing it through teaching. Society at large can then keep universities in check.

This “society keeping university businesses in check” point is a case for abolishing tenure in university businesses. Tenure may be a great idea in universities with different purposes and incentive structures. But for university businesses, it’s not good–it makes them less good businesses.

The epistemic crisis is due to a crisis in epistemic authority. To the extent universities are responsible, it’s because universities lost their special authority. This may be because they abandoned the modernist model of the university. But it is not because they abandoned modernism for postmodernism. “Postmodern” and “modern” fields coexist symbiotically with the pragmatist model of the university as business. But losing modernism has been bad for the university business as a brand.

* Though it must be noted that Lyotard’s analysis of the postmodern condition is all about how legitimation by performativity is the cause of this new condition. I’m probably just recapitulating his points in this post.

by Sebastian Benthall at January 14, 2018 06:03 AM

January 12, 2018

Ph.D. student

STEM and (post-)modernism

There is an active debate in the academic social sciences about modernism and postmodernism. I’ll refer to my notes on Clark Kerr’s comments on the postmodern university as an example of where this topic comes up.

If postmodernism is the condition where society is no longer bound by a single unified narrative but rather is constituted by a lot of conflicting narratives, then, yeah, ok, we live in a postmodern society. This isn’t what the debate is really about though.

The debate is about whether we (anybody in intellectual authority) should teach people that we live in a postmodern society and how to act effectively in that world, or if we should teach people to believe in a metanarrative which allows for truth, progress, and so on.

It’s important to notice that this whole question of what narratives we do or do not teach our students is irrelevant to a lot of educational fields. STEM fields aren’t really about narratives. They are about skills or concepts or something.

Let me put it another way. Clark Kerr was concerned about the rise of the postmodern university–was the traditional, modernist university on its way out?

The answer, truthfully, was that neither the traditional modernist university nor the postmodern university became dominant. Probably the most dominant university in the United States today is Stanford; it has accomplished this through a winning combination of STEM education, proximity to venture capital, and private fundraising. You don’t need a metanarrative if you’re rich.

Maybe that indicates where education has to go. The traditional university believed that philosophy was at its center. Philosophy is no longer at the center of the university. Is there a center? If there isn’t, then postmodernism reigns. But something else seems to be happening: STEM is becoming the new center, because it’s the best funded of the disciplines. Maybe that’s fine! Maybe focusing on STEM is how to get modernism back.

by Sebastian Benthall at January 12, 2018 03:58 AM