Maybe 15 months ago, I started training in Brazilian Jiu Jitsu (BJJ), a martial art that focuses on grappling and ground-fighting. Matches are won through points based on position (e.g., “mount”, where you are sitting on somebody else) and through submission, when a player taps out due to hyperextension under a joint lock or asphyxiation by choking. I recommend it heartily to anybody as a fascinating, smart workout that also has a vibrant and supportive community around it.
One of the impressive aspects of BJJ, which differentiates it from many other martial arts, is its emphasis on live drilling and sparring (“rolling”), which can occupy a third or more of a training session. In the context of sparring, there is opportunity for experimentation and rapid feedback about technique. In addition to being good fun and practice, regular sparring continually reaffirms the hierarchical ranking of skill. As in some other martial arts, rank is awarded as different colored “belts”–white, blue, purple, brown, black. Intermediary progress is given as “stripes” on the belt. White belts can spar with higher belts; more often than not, when they do so they get submitted.
BJJ also has tournaments, which allow players from different dojos to compete against each other. I attended my first tournament in August and thought it was a great experience. There is nothing like meeting a stranger for the first time and then engaging them in single combat to kindle a profound respect for the value of sportsmanship. Off the mat, I’ve had some of the most courteous encounters with anybody I have ever met in New York City.
At tournaments, hundreds of contestants are divided into brackets. The brackets are determined by belt (white, blue, etc.), weight (up to 155 lbs, up to 170 lbs, etc.), sex (men and women), and age (kids age groups, adult, 30+ adult). There is an “absolute” bracket for those who would rise above the division of weight classes. There are “gi” and “no gi” variants of BJJ; the former requires wearing a special uniform of jacket and pants, which are used in many techniques.
Overall, it is an efficient system for training a skill.
The few readers of this blog will recall that for some time I studied the sociology of science and engineering, especially through the lens of Bourdieu’s Science of Science and Reflexivity. This was in turn a reaction to a somewhat startling exposure to the sociology of science and education, an intellectual encounter that I never intended to have. I have been interested for a long time in the foundations of science. It was a rude shock, and one that I mostly regret, to have gone to grad school to become a better data scientist and find myself having to engage with the work of Bruno Latour. I did not know how to respond intellectually to the attack on scientific legitimacy on the basis that its self-understanding is insufficiently sociological until encountering Bourdieu, who refuted the Latourian critique and provided a clear-sighted view of how social structure undergirds scientific objectivity, when it works. Better was my encounter with Jean Lave, who introduced me to more phenomenological methods for understanding education through her class and works (Chaiklin and Lave, 1996). This made me more aware of the role of apprenticeship as well as the nuances of culture, framing, context, and purpose in education. Had I not encountered this work, I would likely never have found my way to Contextual Integrity, which draws more abstract themes about privacy from such subtle observations.
Now it’s impossible for me to do something as productive and enjoyable as BJJ without considering it through these kinds of lenses. One day I would like to do more formal work along these lines, but as has been my habit I have a few notes to jot down at the moment.
The first point, which is a minor one, is that there is something objectively known by experienced BJJ players, and that this knowledge is quintessentially grounded in intersubjective experience. The sparring encounter is the site at which technique is tested and knowledge is confirmed. Sparring simulates conditions of a fight for survival; indeed, if a choke is allowed to progress, a combatant can lose consciousness on the mat. This recalls Hegel’s observation that it is in single combat that a human being is forced to see the limits of their own solipsism. When the Other can kill you, that is an Other that you must see as, in some sense, equivalent in metaphysical status to oneself. This is a sadly forgotten truth in almost every formal academic environment I’ve found myself in, and that, I would argue, is why there is so much bullshit in academia. But now I digress.
The second point, which is perhaps more significant, is that BJJ has figured out how to be an inclusive field of knowledge despite the pervasive and ongoing politics of what I have called in another post body agonism. We are at a point where political conflict in the United States and elsewhere seems to be at root about the fact that people have different kinds of bodies, and these differences are upsetting for liberalism. How can we have functioning liberal society when, for example, some people have male bodies and other people have female bodies? It’s an absurd question, perhaps, but nevertheless it seems to be the question of the day. It is certainly a question that plagues academic politics.
BJJ provides a wealth of interesting case studies in how to deal productively with body agonism. BJJ is an unarmed martial art. The fact that there are different body types is an intrinsic aspect of the sport. Interestingly, in the dojo practices I’ve seen, trainings are co-ed and all body types (e.g., weight classes) train together. This leads to a dynamic and irregular practice environment that perhaps is better for teaching BJJ as a practical form of self-defense. Anecdotally, self-defense is an important motivation for interest in BJJ, especially among women, and in the context of a gym, sparring with men is a way to safely gain practical skill in defending against male assailants. On the other hand, as far as ranking progress is concerned, different bodies are considered in relation to other similar bodies through the tournament bracket system. While I know a badass 40-year-old who submitted two college kids in the last tournament, that was extra. For the purposes of measuring my improvement in the discipline, I will be in the 30+ men’s bracket, compared with other guys approximately my weight. The general sense within the community is that progress in BJJ is a function of time spent practicing (something like the mantra that it takes 10,000 hours to master something), not any other intrinsic talent. Some people who are more dedicated to their training advance faster, and others advance slower.
Training in BJJ has been a positive experience for me, and I often wonder whether other social systems could be more like BJJ. There are important lessons to be learned from it, as it is a mental discipline, full of subtlety and intellectual play, in its own right.
Bourdieu, Pierre. Science of science and reflexivity. Polity, 2004.
Chaiklin, Seth, and Jean Lave, eds. Understanding practice: Perspectives on activity and context. Cambridge University Press, 1996.
I’m writing in response to Ted Hill’s recent piece describing the acceptance and subsequent removal of a paper about the ‘Greater Male Variability Hypothesis’, the controversial idea that there is more variability in male intelligence than female intelligence, i.e. “that there are more idiots and more geniuses among men than among women.”
I have no reason to doubt Hill’s account of events–his collaboration, his acceptance to a journal, and the mysterious political barriers to publication–and assume them for the purposes of this post. If these are refuted by future controversy somehow, I’ll stand corrected.
The few of you who have followed this blog for some time will know that I’ve devoted some energy to understanding the controversy around gender and STEM. One post, criticizing how the work of Donna Haraway, widely used in Science and Technology Studies, can be read as implying that women should not become ‘hard scientists’ in the mathematical mode, has gotten a lot of hits (and some pushback). Hill’s piece makes me revisit the issue.
The paper itself is quite dry and the following quote is its main thesis:
SELECTIVITY-VARIABILITY PRINCIPLE. In a species with two sexes A and B, both of which are needed for reproduction, suppose that sex A is relatively selective, i.e., will mate only with a top tier (less than half) of B candidates. Then from one generation to the next, among subpopulations of B with comparable average attributes, those with greater variability will tend to prevail over those with lesser variability. Conversely, if A is relatively non-selective, accepting all but a bottom fraction (less than half) of the opposite sex, then subpopulations of B with lesser variability will tend to prevail over those with comparable means and greater variability.
This mathematical thesis is supported in the paper by computational simulations and mathematical proofs. From this, one can get the GMVH if one assumes that: (a) (human) males are less selective in their choice of (human) females when choosing to mate, and (b) traits that drive variability in intelligence are intergenerationally heritable, whether biologically or culturally. While not uncontroversial, neither of these is a crazy idea. In fact, if they weren’t both widely accepted, then we wouldn’t be having this conversation.
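The qualitative content of the principle is easy to sketch computationally. The following toy simulation is not Hill’s actual model or code, and every parameter in it is invented for illustration: two subpopulations of the non-selective sex share the same mean attribute but differ in variance, and each generation only the top quarter of the combined population reproduces.

```python
import random

random.seed(0)

def simulate(generations=40, pop=10000, selective_fraction=0.25):
    """Toy illustration of the Selectivity-Variability Principle.

    Two subpopulations of sex B share a mean attribute of 0.0 but differ
    in variability (standard deviations 2.0 vs 1.0).  Each generation,
    only B individuals in the top `selective_fraction` of the combined
    population reproduce; offspring inherit subpopulation membership.
    Returns the final population share of the high-variance subpopulation.
    """
    share_high = 0.5  # start with equal subpopulations
    for _ in range(generations):
        n_high = int(pop * share_high)
        high = [random.gauss(0.0, 2.0) for _ in range(n_high)]
        low = [random.gauss(0.0, 1.0) for _ in range(pop - n_high)]
        everyone = sorted(high + low, reverse=True)
        cutoff = everyone[int(pop * selective_fraction)]
        survivors_high = sum(1 for x in high if x >= cutoff)
        survivors_low = sum(1 for x in low if x >= cutoff)
        share_high = survivors_high / (survivors_high + survivors_low)
    return share_high

print(simulate())  # the high-variance subpopulation comes to dominate
```

As long as the selective fraction is below one half, the higher-variance subpopulation has a fatter upper tail above the cutoff and so tends to take over the population, which is the qualitative claim of the principle.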
Is this the kind of result that should be published? This is the controversy. I am less interested in the truth or falsehood of broad implications of the mathematical work than I am in the arguments for why the mathematical work should not be published (in a mathematics journal).
As far as I can tell from Hill’s account and also from conversations and cultural osmosis on the matter, there are a number of reasons why research of this kind should not be published.
The first reason might be that there are errors in the mathematical or simulation work. In other words, the Selectivity-Variability Principle may be false, and falsely supported. If that is the case, then the reviewers should have rejected the paper on those grounds. However, the principle is intuitively plausible and the reviewers accepted it. Few of Hill’s critics (though there were some) attacked the piece on mathematical grounds. Rather, the objections were of a social and political nature. I want to focus on these latter objections, though if there is a mathematical refutation of the Selectivity-Variability Principle I’m not aware of, I’ll stand corrected.
The crux of the problem seems to be this: the two assumptions (a) and (b) are both so plausible that publishing a defense of (c) the Selectivity-Variability Principle would imply (d) the Greater Male Variability Hypothesis (GMVH). And if GMVH is true, then (e) there is a reason why more of the celebrated high-end of the STEM professions are male. It is because at the high-end, we’re looking at the thin tails of the human distribution, and the male tail is longer. (It is also longer at the low end, but nobody cares about the low end.)
The argument goes that if this claim (e) were widely known by aspiring females in STEM fields, then they will be discouraged from pursuing these promising careers, because “women have a lesser chance to succeed in mathematics at the very top end”, which would be a biased, sexist view. (e) could be used to defend the idea that (f) normatively, there’s nothing wrong with men having most success at the top end of mathematics, though there is a big is/ought distinction there.
My concern with this argument is that it assumes, at its heart, the idea that women aspiring to be STEM professionals are emotionally vulnerable to being dissuaded by this kind of mathematical argument, even though it is neither an empirical case (it is a mathematical model, not empirically confirmed within the paper) nor a reflection on the capacity of any particular woman, and especially not after she has been selected for by the myriad social sorting mechanisms available. The argument that GMVH is professionally discouraging assumes many other hypotheses about human professional motivation, for example, the idea that it is only worth taking on a profession if one can expect to have a higher-than-average chance of achieving extremely high relative standing in that field. Given that extremely high relative standing in any field is going to be rare, it’s hard to say this is a good motivation for any profession, for men or for women, in the long run. In general, those that extrapolate from population level gender tendencies to individual cases are committing the ecological fallacy. It is ironic that under the assumption of the critics, potential female entrants into STEM might be screened out precisely because of their inability to understand a mathematical abstraction, along with its limitations and questionable applicability, through a cloud of political tension. Whereas if one were really interested in teaching mathematics in an equitable way, that would require teaching the capacity to see through political tension to the precise form of a mathematical abstraction. That is precisely what top performance in the STEM fields should be about, and it should be unflinchingly encouraged as part of the educational process for both men and women.
My point, really, is this: the argument that publishing and discussing GMVH is detrimental to the career aspirations of women, because of how individual women will internalize the result, depends on a host of sexist assumptions that are as pernicious as, if not more pernicious than, GMVH itself. It is based on the idea that women as a whole need special protection from mathematical ideas in order to pursue careers in mathematics, which is self-defeating crazy talk if I’ve ever heard it. The whole point of academic publication is to enable a debate of defeasible positions on their intellectual merits. In the case of mathematics research, the standards of merit are especially clear. If there’s a problem with Hill’s model, that’s a great opportunity for another, better model, on a topic that is clearly politically and socially relevant. (If the reviewers ignored a lot of prior work that settled the scientific relevance of the question, then that’s a different story. One gathers that is not what happened.)
As a caveat, there are other vectors through which GMVH could lead to bias against women pursuing STEM careers. For example, it could bias their less smart families or colleagues into believing less in their potential on the basis of their sex. But GMVH is about the variance, not the mean, of mathematical ability. So the only population that it’s relevant to is that in the very top tier of performers. That nuance is itself probably beyond the reach of most people who do not have at least some training in STEM, and indeed if somebody is reasoning from GMVH to an assumption about women’s competency in math then they are almost certainly conflating it with a dumber hypothesis about population means which is otherwise irrelevant.
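The point that variance differences matter only in the extreme tail can be made concrete with a back-of-the-envelope calculation. In this sketch (the numbers are invented purely for illustration), two normal distributions share a mean of 100 but have standard deviations 14 and 15; the higher-variance group is not overrepresented at all at the mean, and becomes meaningfully overrepresented only far out in the tail:

```python
from math import erfc, sqrt

def tail(mu, sigma, t):
    """P(X > t) for X ~ Normal(mu, sigma)."""
    return 0.5 * erfc((t - mu) / (sigma * sqrt(2)))

mu = 100.0
for t in (100, 115, 130, 145):
    p_low = tail(mu, 14.0, t)   # lower-variance group
    p_high = tail(mu, 15.0, t)  # higher-variance group
    # the overrepresentation ratio grows as the threshold t rises
    print(t, round(p_high / p_low, 2))
```

At the mean the ratio is exactly 1; three standard deviations out it is roughly 2. A small difference in variance is nearly invisible in the bulk of the distribution and matters only at the extremes, which is why reasoning from GMVH to claims about typical competence is a conflation.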
This is perhaps the most baffling thing about this debate: that it boils down to a very rarefied form of elite conflict. “Should a respected mathematics journal publish a paper that implies that there is greater variance in mathematical ability between sexes based on their selectivity and therefore…” is a sentence that already selects for a very small segment of the population, a population that should know better than to censor a mathematical proof rather than take the opportunity to engage it and to educate people about STEM and why it is an interesting field. Nobody is objecting to the publication of support for GMVH on the grounds that it implies that more men are grossly incompetent and stupid than women, and it’s worth considering why that is. If our first reaction to GMVH is “but can no woman ever be the best?”, we are showing that our concerns lie with who gets to be on top, not the welfare of those on bottom.
I’ve had recommended to me Greg Austin’s “Cyber Policy in China” (2014) as a good, recent work. I am not sure what I was expecting–something about facts and numbers, how companies are being regulated, etc. Just looking at the preface, it looks like this book is about something else.
The preface frames the book in the discourse, beginning in the 20th century, about the “information society”. It explicitly mentions the UN’s World Summit on the Information Society (WSIS) as a touchstone of international consensus about what the information society is: a society “where everyone can create, access, utilise and share information and knowledge” to “achieve their full potential” in “improving their quality of life”. It is “people-centered”.
In Chinese, the word for information society is xinxi shehui. (Please forgive me: I’ve got little to no understanding of the Chinese language, which includes not knowing how to put the appropriate diacritics into transliterations of Chinese terms.) It is related to the term “informatization” (xinxihua), which is compared to industrialization. It means “the historical process by which information technology is fully used, information resources are developed and utilized, the exchange of information and knowledge sharing are promoted, the quality of economic growth is improved, and the transformation of economic and social development is promoted”. Austin’s interesting point is that this is “less people-centered than the UN vision and more in the mould of the materialist and technocratic traditions that Chinese Communists have preferred.”
This is an interesting statement on the difference between policy articulations by the United Nations and the CCP. It does not come as a surprise.
What did come as a surprise is how Austin chooses to orient his book.
On the assumption that outcomes in the information society are ethically determined, the analytical framework used in the book revolves around ideal policy values for achieving an advanced information society. This framework is derived from a study of ethics. Thus, the analysis is not presented as a work of social science (be that political science, industry policy or strategic studies). It is more an effort to situate the values of China’s leaders within an ethical framework implied by their acceptance of the ambition to become an advanced information society.
This comes as a surprise to me because what I expected from a book titled “Cyber Policy in China” is really something more like industry policy or strategic studies. I was not ready for, and am frankly a bit disappointed by, the idea that this is really a work of applied philosophy.
Why? I do love philosophy as a discipline and have studied it carefully for many years. I’ve written and published about ethics and technological design. But my conclusion after so much study is that “the assumption that outcomes in the information society are ethically determined” is totally incorrect. I have been situated for some time in discussions of “technology ethics” and my main conclusion from them is that (a) “ethics” in this space are more often than not an attempt to universalize what are more narrow political and economic interests, and that (b) “ethics” are constantly getting compromised by economic motivations as well as the mundane difficulty of getting information technology to work as it is intended to in a narrow, functionally defined way. The real world is much bigger and more complex than any particular ethical lens can take in. Attempts to define technological change in terms of “ethics” are almost always a political maneuver, for good or for ill, that reduces the real complexity of technological development into a soundbite. A true ethical analysis of cyber policy would need to address industrial policy and strategic aspects, as this is what drives the “cyber” part of it.
The irony is that there is something terribly un-emic about this approach. By Austin’s own admission, the CCP cyber policy is motivated by material concerns about the distribution of technology and economic growth. Austin could have approached China’s cyber policy in the technocratic terms they see themselves in. But instead Austin’s approach is “human-centered”, with a focus on leaders and their values. I already doubt the research on anthropological grounds because of the distance between the researcher and the subjects.
So I’m not sure what to do about this book. The preface makes it sound like it belongs to a genre of scholarship that reads well, and maybe does important ideological translation work, but does not provide something like scientific knowledge of China’s cyber policy, which is what I’m most interested in. Perhaps I should move on, or take other recommendations for reading on this topic.
When we think about who is making decisions that will impact the future health and wellbeing of society, one would hope that these individuals would wield their expertise in a way that addresses the social and economic issues affecting our communities. Scientists often fill this role: for example, an ecologist advising a state environmental committee on river water redistribution, a geologist consulting for an architectural team building a skyscraper, an oncologist discussing the best treatment options based on the patient’s diagnosis and values, or an economist brought in by a city government to help develop a strategy for allocating grants to elementary schools. Part of the general contract between technical experts and their democracies is that they inform relevant actors so that decisions are made with the strongest possible factual basis.
The examples above describe scientists going outside of the boundaries of their disciplines to present for people outside of the scientific community “on stage”. But what about decisions made by scientists behind the scenes about new technologies that could affect more than daily laboratory life? In the 1970s, genetic engineers used their technical expertise to make a call about an exciting new technology, recombinant DNA (rDNA). This technology allowed scientists to mix and add DNA from different organisms, later giving rise to engineered bacteria that could produce insulin and eventually transgenic crops. The expert decision making process and outcome, in this case, had little to do with the possibility of commercializing biotechnology or the economic impacts of GMO seed monopolies. This happened before the patenting of whole biological organisms, and before the use of rDNA in plants in 1982. Instead, the emerging issues surrounding rDNA were dealt with as a technical issue of containment. Researchers wanted to ensure that anything tinkered with genetically stayed not just inside the lab, but inside specially marked and isolated rooms in the lab, eventually giving rise to the well-established institution of biosafety. A technical fix, for a technical issue.
Today, scientists are similarly engaged in a process of expert decision making around another exciting new technology, the CRISPR-Cas9 system. This technology allows scientists to make highly specific changes, “edits”, to the DNA of virtually any organism. Following the original publication that showed that CRISPR-Cas9 could be used to modify DNA in a “programmable” way, scientists have developed the system into a laboratory toolbox and laboratories across the life sciences are using it to tinker away at bacteria, butterflies, corn, frogs, fruit flies, human liver cells, nematodes, and many other organisms. Maybe because most people do not have strong feelings about nematodes, most of the attention in both popular news coverage and in expert circles about this technology has had to do with whether modifications that could affect human offspring (i.e. germline editing) are moral.
We have been interviewing faculty members directly engaged in these critical conversations about the potential benefits and risks of new genome editing technologies. As we continue to analyze these interviews, we want to better understand the nature of these backstage conversations and learn how the experiences and professional development activities of these experts influenced their decision-making. In subsequent posts we’ll be sharing some of our findings from these interviews, which so far have highlighted the role of a wide range of technical experiences and skills for the individuals engaged in these discussions, the strength of personal social connections and reputation in getting a seat at the table, and the dynamic nature of expert decision making.
Scoville, C. (2017). “We Need Social Scientists!” The Allure and Assumptions of Economistic Optimization in Applied Environmental Science. Science as Culture, 26(4), 468-480.
Sprangers, M. A., & Aaronson, N. K. (1992). The role of health care providers and significant others in evaluating the quality of life of patients with chronic disease: a review. Journal of Clinical Epidemiology, 45(7), 743-760.
Hilgartner, S. (2000). Science on Stage: Expert Advice as Public Drama. Stanford University Press.
Diamond v. Chakrabarty (1980) upheld the first whole-organism patent (for a bacterium that could digest crude oil).
I’m continuing a look into trade policy using Corden’s (1997) book on the topic.
Picking up where the last post left off, I’m operating on the assumption that any reader is familiar with the arguments for free trade that are an extension of the arguments for laissez-faire markets. I will assume that these arguments are true as far as they go: that the economy grows with free trade, that tariffs create a dead weight loss, that subsidies are expensive, but that both tariffs and subsidies do shift the market away from imports.
The question raised by Corden is why, despite its deleterious effects on the economy as a whole, protectionism enjoys political support by some sectors of the economy. He hints, earlier in Chapter 5, that this may be due to income distribution effects. He clarifies this with reference to an answer to this question that was given as early as 1941 by Stolper and Samuelson; their result is now celebrated as the Stolper-Samuelson theorem.
The mathematics of the theorem can be read in many places. Like any economic model, it depends on some assumptions that may or may not be the case. Its main advantage is that it articulates how it is possible for protectionism to benefit a class of the population, and not just in relative but in absolute terms. It does this by modeling the returns to different factors of production, which classically have been labor, land, and capital.
Roughly, the argument goes like this. Suppose an economy has two commodities, one for import and one for export. Suppose that the imported good is produced with a higher labor to land ratio than the export good. Suppose a protectionist policy increases the amount of the import good produced relative to the export good. Then the return on labor will increase (because more labor is used in supply), and the return on land will decrease (because less land is used in supply). Wages will increase and rent on land will decrease.
These breakdowns of the economy into “factors of production” feel very old school. You rarely read economists discuss the economy in these terms now, which is itself interesting. One reason why (and I am only speculating here) is that these models clarify how laborers, land-owners, and capital-owners have different political interests in economic intervention, and that can lead to the kind of thinking that was flushed out of the American academy during the McCarthy era. Another reason may be that “capital” has changed meaning from being about ownership of machine goods into being about having liquid funds available for financial investment.
I’m interested in these kinds of models today partly because I’m interested in the political interests in various policies, and also because I’m interested in particular in the economics of supply chain logistics. The “factors of production” approach is a crude way to model the ‘supply chain’ in a broad sense, but one that has proven to be an effective source of insights in the past.
Corden, W. Max. “Trade policy and economic welfare.” OUP Catalogue (1997).
Stolper, Wolfgang F., and Paul A. Samuelson. “Protection and real wages.” The Review of Economic Studies 9.1 (1941): 58-73.
I am going to start researching trade policy, meaning policies around trade between different countries; imports and exports. Why?
It is politically relevant in the U.S. today.
It is a key component to national cybersecurity strategy, both defensive and offensive, which hinges in many cases on supply chain issues.
Formal models from trade policy may be informative in other domains as well.
In general, years of life experience and study have taught me that economics, however much it is maligned, is a wise and fundamental social science without which any other understanding of politics and society is incomplete, especially when considering the role of technology in society.
Plenty of good reasons! Onward!
As a starting point, I’m working through Max Corden’s Trade Policy and Economic Welfare (1997), which appears to be a well-regarded text on the subject. In it, he sets out to describe a normative theory of trade policy. Here are two notable points based on a first perusal.
1. (from Chapter 1, “Introduction”) Corden identifies three “stages of thought” about trade policy. The first is the discovery of the benefits of free trade with the great original economists Adam Smith and David Ricardo. Here, the new appreciation of free trade was simultaneous with the new appreciation of the free market in general. “Indeed, the case for free trade was really a special case of the argument for laissez-faire.”
In the second phase, laissez-faire policies came into question. These policies may not lead to full employment, and the income distribution effects (which Corden takes seriously throughout the book, by the way) may not be desirable. Parallel to this, the argument for free trade was challenged. Some of these challenges were endorsed by John Stuart Mill. One argument is that tariffs might be necessary to protect “infant industries”.
As time went on, the favorability of free trade more or less tracked the favorability of laissez-faire. Both were popular in Western Europe and failed to get traction in most other countries (almost all of which were ‘developing’).
Corden traces the third stage of thought to Meade’s (1955) Trade and Welfare. “In the third stage the link between the case for free trade and the case for laissez-faire was broken.” The normative case for free trade, in this stage, did not depend on a normative case for laissez-faire, but existed despite normative reasons for government intervention in the economy. The point made in this approach, called the theory of domestic distortions, is that it is generally better for the kinds of government intervention made to solve domestic problems to be domestic interventions, not trade interventions.
This third stage came with a much more sophisticated toolkit for comparing the effects of different kinds of policies, which is the subject of exposition for a large part of Corden’s book.
2. (from Chapter 5, “Protection and Income Distribution”) Corden devotes at least one whole chapter to an aspect of the trade policy discussion that is very rarely addressed in, say, the mainstream business press. This is the fact that trade policy can have an effect on internal income distribution, and that this has been throughout history a major source of the political momentum for protectionist policies. This explains why the domestic politics of protectionism and free trade can be so heated and are really often independent from arguments about the effect of trade policy on the economy as a whole, which, it must be said, few people realize they have a real stake in.
Corden’s examples involve the creation of fledgling industries under the conditions of war, which often cut off foreign supplies. When the war ends, those businesses that flourished during war exert political pressure to protect themselves from erosion by market forces. “Thus the Napoleonic Wars cut off supplies of corn (wheat) to Britain from the Continent and led to expansion of acreage and higher prices of corn. When the war was over, the Corn Law of 1815 was designed to maintain prices, with an import prohibition as long as the domestic price was below a certain level.” It goes almost without saying that this served the interests of one section of the community, the domestic corn farmers, and not of others. This is what Corden means by an “income distribution effect”.
“Any history book will show that these income distribution effects are the very stuff of politics. The great free trade versus protection controversies of the nineteenth century in Great Britain and in the United States brought out the conflicting interests of different sections of the community. It was the debate about the effects of the Corn Laws which really stimulated the beginnings of the modern theory of international trade.”
Extending this argument a bit, one might say that a major reason why economics gets such a bad rap as a social science is that nobody really cares about Pareto optimality except for those sections of the economy that are well served by a policy that can be justified as being Pareto optimal (in practice, this would seem to be correlated with how much somebody has invested in mutual funds, as these track economic growth). The “stuff of politics” is people using political institutions to change their income outcomes, and the potential for this makes trade policy a very divisive topic.
Implications for future research:
The two key takeaways for trade policy in cybersecurity are:
1) The trade policy discussion need not remain within the narrow frame of free trade versus protectionism, but rather a more nuanced set of policy analysis tools should be brought to bear on the problem, and
2) An outcome of these policy analyses should be the identification not just of total effects on the economy, or security posture, or what have you, but on the particular effects on different sections of the economy and population.
Corden, W. Max. Trade Policy and Economic Welfare. Oxford University Press, 1997.
Meade, James Edward. Trade and Welfare. Vol. 2. Oxford University Press, 1955.
Join CTSP and IMSA to brainstorm ideas for projects that address the challenges of technology, society, and policy. We welcome students, community organizations, local municipal partners, faculty, and campus initiatives to discuss discrete problems that project teams can take on over the course of this academic year. Teams will be encouraged to apply to CTSP to fund their projects.
Un-Pitches are meant to be informal and brief introductions of yourself, your idea, or your organization’s problem situation. Un-pitches can include designing technology, research, policy recommendations, and more. Students and social impact representatives will be given 3 minutes to present their Un-Pitch. In order to un-pitch, please share 1–3 slides as a PDF and/or a description of fewer than 500 words at this email: email@example.com. You can share slides and/or a description of your ideas even if you aren’t able to attend. Deadline to share materials: midnight October 1st, 2018.
The next application round for fellows will open in November. CTSP’s fellowship program will provide small grants to individuals and small teams of fellows for 2019. CTSP also has a recurring offer of small project support.
Prior Projects & Collaborations
Here are several examples of projects that members of the I School community have pursued as MIMS final projects or CTSP Fellow projects (see more projects from 2016, 2017, and 2018).
A team of MIMS students partnered with a local non-profit working with vulnerable populations to build their information and communication capacity: Yakap
The above projects demonstrate a range of interests and skills of the I School community. Students here and more broadly on the UC Berkeley campus are interested and skilled in all aspects of where information and technology meets people—from design and data science, to user research and information policy.
Please join us for a panel discussion featuring award-winning tech reporter Cyrus Farivar, whose new book, Habeas Data, explores how the explosive growth of surveillance technology has outpaced our understanding of the ethics, mores, and laws of privacy. Habeas Data explores ten historic court decisions that defined our privacy rights and matches them against the capabilities of modern technology. Mitch Kapor, co-founder, Electronic Frontier Foundation, said the book was “Essential reading for anyone concerned with how technology has overrun privacy.”
The panel will be moderated by 2017 and 2018 CTSP Fellow Steve Trush, a MIMS 2018 graduate and now a Research Fellow at the Center for Long-Term Cybersecurity (CLTC). He was on a CTSP project starting in 2017 that provided a report to the Oakland Privacy Advisory Commission—read an East Bay Express write-up on their work here.
The panelists will discuss what public governance models can help local governments protect the privacy of citizens—and what role citizen technologists can play in shaping these models. The discussion will showcase the ongoing collaboration between the UC Berkeley School of Information and the Oakland Privacy Advisory Commission (OPAC). Attendees will learn how they can get involved in addressing issues of governance, privacy, fairness, and justice related to state surveillance.
Cyrus Farivar, Author, Habeas Data: Privacy vs. the Rise of Surveillance Tech
Deirdre Mulligan, Associate Professor in the School of Information at UC Berkeley, Faculty Director, UC Berkeley Center for Law & Technology
Catherine Crump, Assistant Clinical Professor of Law, UC Berkeley; Director, Samuelson Law, Technology & Public Policy Clinic.
Camille Ochoa, Coordinator, Grassroots Advocacy; Electronic Frontier Foundation
Moderated by Steve Trush, Research Fellow, UC Berkeley Center for Long-Term Cybersecurity
The panel will be followed by a reception with light refreshments. Building is wheelchair accessible – wheelchair users can enter through the ground floor level and take the elevator to the second floor.
Cyrus [“suh-ROOS”] Farivar is a Senior Tech Policy Reporter at Ars Technica, and is also an author and radio producer. His second book, Habeas Data, about the legal cases over the last 50 years that have had an outsized impact on surveillance and privacy law in America, is out now from Melville House. His first book, The Internet of Elsewhere—about the history and effects of the Internet on different countries around the world, including Senegal, Iran, Estonia and South Korea—was published in April 2011. He previously was the Sci-Tech Editor, and host of “Spectrum,” at Deutsche Welle English, Germany’s international broadcaster. He has also reported for the Canadian Broadcasting Corporation, National Public Radio, Public Radio International, The Economist, Wired, The New York Times and many others. His PGP key and other secure channels are available here.
Catherine Crump: Catherine Crump is an Assistant Clinical Professor of Law and Director of the Samuelson Law, Technology & Public Policy Clinic. An experienced litigator specializing in constitutional matters, she has represented a broad range of clients seeking to vindicate their First and Fourth Amendment rights. She also has extensive experience litigating to compel the disclosure of government records under the Freedom of Information Act. Professor Crump’s primary interest is the impact of new technologies on civil liberties. Representative matters include serving as counsel in the ACLU’s challenge to the National Security Agency’s mass collection of Americans’ call records; representing artists, media outlets and others challenging a federal internet censorship law, and representing a variety of clients seeking to invalidate the government’s policy of conducting suspicionless searches of laptops and other electronic devices at the international border.
Prior to coming to Berkeley, Professor Crump served as a staff attorney at the ACLU for nearly nine years. Before that, she was a law clerk for Judge M. Margaret McKeown at the United States Court of Appeals for the Ninth Circuit.
Camille Ochoa: Camille promotes the Electronic Frontier Foundation’s grassroots advocacy initiative (the Electronic Frontier Alliance) and coordinates outreach to student groups, community groups, and hacker spaces throughout the country. She has very strong opinions about food deserts, the school-to-prison pipeline, educational apartheid in America, the takeover of our food system by chemical companies, the general takeover of everything in American life by large conglomerates, and the right to not be spied on by governments or corporations.
A confusing debate in my corner of the intellectual Internet is about (a) whether the progressive left has a coherent intellectual stance that can be articulated, (b) what to call this stance, (c) whether the right-wing critics of this stance have the intellectual credentials to refer to it and thereby land any kind of rhetorical punch. What may be true is that both “sides” reflect social movements more than they reflect coherent philosophies as such, and so trying to bridge between them intellectually is fruitless.
Happily, I’ve been reading through Omi and Winant, which among other things outlines a history of what I think of as the progressive left, or the “social justice”, “identity politics” movement in the United States. They address this in their Chapter 6: “The Great Transformation”. They use “the Great Transformation” to refer to “racial upsurges” in the 1950’s and 1960’s.
They are, as far as I can tell, the only people who ever use “The Great Transformation” to refer to this period. I don’t think it is going to stick. They name it this because they see this period as a great victorious period for democracy in the United States. Omi and Winant refer to previous periods in the United States as “racial despotism”, meaning that the state was actively treating nonwhites as second class citizens and preventing them from engaging in democracy in a real way. “Racial democracy”, which would involve true integration across race lines, is an ideal future or political trajectory that was approached during the Great Transformation but not realized fully.
The story of the civil rights movements in the mid-20th century are textbook material and I won’t repeat Omi and Winant’s account, which is interesting for a lot of reasons. One reason why it is interesting is how explicitly influenced by Gramsci their analysis is. As the “despotic” elements of United States power structures fade, the racial order is maintained less by coercion and more by consent. A power disparity in social order maintained by consent is a hegemony, in Gramscian theory.
They explain the Great Transformation as being due to two factors. One was the decline of the ethnicity paradigm of race, which had perhaps naively assumed that racial conflicts could be resolved through assimilation and recognition of ethnic differences without addressing the politically entrenched mechanisms of racial stratification.
The other factor was the rise of new social movements characterized by, in alliance with second-wave feminism, the politicization of the social, whereby social identity and demographic categories were made part of the public political discourse, rather than something private. This is the birth of “politics of identity”, or “identity politics”, for short. These were the original social justice warriors. And they attained some real political victories.
The reason why these social movements are not exactly normalized today is that there was a conservative reaction in the 70’s to resist these changes. The way Omi and Winant tell it, the “colorblind ideology” of the early 00’s was the culmination of a kind of political truce between “racial despotism” and “racial democracy”–a “racial hegemony”. Gilman has called this “racial liberalism”.
So what does this mean for identity politics today? It means it has its roots in political activism which was once very radical. It really is influenced by Marxism, as these movements were. It means that its co-option by the right is not actually new, as “reverse racism” was one of the inventions of the groups that originally resisted the Civil Rights movement in the 70’s. What’s new is the crisis of hegemony, not the constituent political elements that were its polar extremes, which have been around for decades.
What it also means is that identity politics has been, from its start, a tool for political mobilization. It is not a philosophy of knowledge or of how to live the good life or a world view in a richer sense. It serves a particular instrumental purpose. Omi and Winant talk about how the politics of identity is “attractive”, how it is a contagion. These are positive terms for them; they are impressed at how anti-racism spreads. These days I am often referred to Phillips’ report, “The Oxygen of Amplification”, which is about preventing the spread of extremist views by reducing the amount of reporting on them in ‘disgust’. It is fair to point out that identity politics as a left-wing innovation was at one point an “extremist” view, and that proponents of that view do use media effectively to spread it. This is just how media-based organizing tactics work, now.
Following up on earlier posts on Omi and Winant, I’ve gotten to the part where they discuss racial projects and racism.
Because I use Twitter, I have not been able to avoid the discussion of Sarah Jeong’s tweets. I think it provides a useful case study in Omi and Winant’s terminology. I am not a journalist or particularly with-it person, so I have encountered this media event mainly through articles about it. Here are some.
To recap, for Omi and Winant, race is a “master category” of social organization, but nevertheless one that is unstable and politically contested. The continuity of racial classification is due to a historical, mutually reinforcing process that includes both social structures that control the distribution of resources and social meanings and identities that have been acquired by properties of people’s bodies. The fact that race is sustained through this historical and semiotically rich structuration (to adopt a term from Giddens), means that
“To identify an individual or group racially is to locate them within a socially and historically demarcated set of demographic and cultural boundaries, state activities, “life-chances”, and tropes of identity/difference/(in)equality.”
“We cannot understand how racial representations set up patterns of residential segregation, for example, without considering how segregation reciprocally shapes and reinforces the meaning of race itself.”
This is totally plausible. Identifying the way that racial classification depends on a relationship between meaning and social structure opens the possibility of human political agency in the (re)definition of race. Omi and Winant’s term for these racial acts is racial projects.
A racial project is simultaneously an interpretation, representation, or explanation of racial identities and meanings, and an effort to organize and distribute resources (economic, political, cultural) along particular racial lines.
… Racial projects connect the meaning of race in discourse and ideology with the way that social structures are racially organized.
“Racial project” is a broad category that can include both large state and institutional interventions and individual actions–“even the decision to wear dreadlocks”. What makes them racial projects is how they reflect and respond to broader patterns of race, whether to reproduce it or to subvert it. Prevailing stereotypes are one of the main ways we can “read” the racial meanings of society, and so the perpetuation or subversion of stereotypes is a form of “racial project”. Racial projects are often in contest with each other; the racial formation process is the interaction and accumulation of these projects.
Racial project is a useful category partly because it is key to Omi and Winant’s definition of racism. They acknowledge that the term itself is subject to “enormous debate”, at times inflated to be meaningless and at other times deflated to be too narrow. They believe the definition of racism as “racial hate” is too narrow, though it has gained legal traction as a category, as when “hate crimes” are considered an offense with enhanced sentencing, or universities institute codes against “hate speech”. I’ve read “racial animus” as another term that means something similar, though perhaps more subtle, than ‘racial hate’.
The narrow definition of racism as racial hate is rejected due to an argument O&W attribute to David Theo Goldberg (1997), which is that by narrowly focusing on “crimes of passion” (I would gloss this more broadly as ‘psychological states’), the interpretation of racism misses the ideologies, policies, and practices that “normalize and reproduce racial inequality and domination”. In other words, racism, as a term, has to reference the social structure that is race in order to be adequate.
Omi and Winant define racism thus:
A racial project can be defined as racist if it creates or reproduces structures of domination based on racial significance and identities.
A key implication of their argument is that not all racial projects are racist. Recall that Omi and Winant are very critical of colorblindness as (they allege) a political hegemony. They want to make room for racial solidarity and agency despite the hierarchical nature of race as a social fact. This allows them to answer two important questions.
Are there anti-racist projects? Yes. “[w]e define anti-racist projects as those that undo or resist structures of domination based on racial significations and identities.”
Note that the two definitions are not exactly parallel in construction. To “create and reproduce structure” is not entirely the opposite of “undo or resist structure”. Given O&W’s ontology, and the fact that racial structure is always the accumulation of a long history of racial projects–projects that have been performed by (bluntly) both the right and the left–and given that social structure is not homogeneous across location (consider how race is different in the United States and in Brazil, or different in New York City and in Dallas), and given that an act of resistance is also, implicitly, an act of creation, one could easily get confused trying to apply these definitions. The key word, “domination”, is not defined precisely, and everything hinges on this. It’s clear from the writing that Omi and Winant subscribe to the “left” view of how racial domination works; this orients their definition of racism concretely. But they also note that the political agency of people of color in the United States over the past hundred years or so has gained them political power. Isn’t the key to being racist having power? This leads O&W to the second question, which is
Can Groups of Color Advance Racist Projects? O&W’s answer is, yes, they can. There are exceptions to the hierarchy of white supremacy, and in these exceptions there can be racial conflicts where a group of color is racist. Their example is in cases where blacks and Latinos are in contest over resources. O&W do not go so far as to say that it is possible to be racist against white people, because they believe all racial relations are shaped by the overarching power of white supremacy.
Case Study: Jeong’s tweets
That is the setup. So what about Sarah Jeong? Well, she wrote some tweets mocking white people, and specifically white men, in 2014, which was by the way the heyday of obscene group conflict on Twitter. That was the year of Gamergate. A whole year of tweets that are probably best forgotten. She compared white people to goblins; she compared them to dogs. She said she wished ill on white men. As has been pointed out, if any other group besides white men were talked about, her tweets would be seen as undeniably racist, etc. They are, truth be told, similar rhetorically to the kinds of tweets that the left media have been appalled at for some time.
They have surfaced again because Jeong was hired by the New York Times, and right wing activists (or maybe just trolls, I’m a little unclear about which) surfaced the old tweets. In the political climate of 2018, when Internet racism feels like it’s gotten terribly real, these struck a chord and triggered some reflection.
What should we make of these tweets, in light of racial formation theory?
First, we should acknowledge that the New York Times has some really great lawyers working for it. Their statement was the at the time, (a) Jeong was being harassed, (b) that she responded to them in the same rhetorical manner of the harassment, that (c) that’s regrettable, but also, it’s long past and not so bad. Sarah Jeong’s own statement makes this point, acknowledges that the tweets may be hurtful out of context, and that she didn’t mean them the way others could take them. “Harassment” is actually a relatively neutral term; you can harass somebody, legally speaking, on the basis of their race without invoking a reaction from anti-racist sociologists. This is all perfectly sensible, IMO, and the case is pretty much closed.
But that’s not where the discussion on the Internet ended. Why? Because the online media is where the contest of racial formation is happening.
We can ask: Were Sarah Jeong’s tweets a racial project? The answer seems to be, yes, they were. It was a representation of racial identity (whiteness) “to organize and distribute resources (economic, political, cultural) along particular racial lines”. Jeong is a journalist and scholar, and these arguments are happening in social media, which are always-already part of the capitalist attention economy. Jeong’s success is partly due to her confrontation of on-line harassers and responses to right-wing media figures. And her activity is the kind that rallies attention along racial lines–anti-racist, racist, etc.
Confusingly, the language she used in these tweets reads as hateful. “Dumbass fucking white people marking up the internet with their opinions like dogs pissing on fire hydrants” does, reasonably, sound like it expresses some racial animus. If we were to accept the definition of racism as merely the possession of ill will towards a race, which seems to be Andrew Sullivan’s definition, then we would have to say those were racist tweets.
We could invoke a defense here. Were the tweets satire? Did Jeong not actually have any ill will towards white people? One might wonder, similarly, whether 4chan anti-Semites are actually anti-Semitic or just trolling. The whole question of who is just trolling and who should be taken seriously on the Internet is such an interesting one. But it’s one I had to walk away from long ago after the heat got turned up on me one time. So it goes.
What everyone knows is at stake, though, is the contention that the ‘racial animus’ definition is not the real definition of racism, but rather that something like O&W’s definition is. By their account, (a) a racial project is only racist if it aligns with structures of racial domination, and (b) the structure of racial domination is a white supremacist one. Ergo, by this account, Jeong’s tweets are not racist, because insulting white people does not create or reproduce structures of white supremacist domination.
It’s worth pointing out that there are two different definitions of a word here and that neither is inherently more correct. I’m hesitant to label the former definition “right” and the latter definition “left” because there’s nothing about the former definition that would make you, say, not want to abolish the cradle-to-prison system or any number of other real, institutional reforms. But the latter definition is favored by progressives, who have a fairly coherent world view. O&W’s theorizing is consistent with it. The helpful thing about this worldview is that it makes it difficult to complain about progressive rhetorical tactics without getting mired in a theoretical debate about their definitions, which makes it an excellent ideology for getting into fights on the Internet. This is largely what Andrew Sullivan was getting at in his critique.
What Jeong and the NYT seem to get, which some others don’t, is that comments that insult an entire race can be hurtful and bothersome even if they are not racist in the progressive sense of the term. It is not clear what we should call a racial project that is hurtful and bothersome to white people if we do not call it racist. A difficulty with the progressive definition of racism is that agreement on the application of the term is going to depend on agreement about what the dominant racial structures are. What we’ve learned in the past few years is that the left-wing view of what these racial structures are is not as widely shared as it was believed to be. For example, there are far more people who believe in anti-Semitic conspiracies, in which the dominant race is the Jews, active in American political life than was supposed. Given O&W’s definition of racism, if it were, factually, the case that Jews ran the world, then anti-Semitic comments would not be racist in the meaningful sense.
Which means that the progressive definition of racism, to be effective, depends on widespread agreement about white supremacist hegemony, which is a much, much more complicated thing to try to persuade somebody of than a particular person’s racial animus.
A number of people have been dismissing any negative reaction to the resurfacing of Jeong’s tweets, taking the opportunity to disparage that reaction as misguided and backwards. As far as I can tell, there is an argument that Jeong’s tweets are actually anti-racist. This article argues that casually disparaging white men is just something anti-racists do lightly to call attention to the dominant social structures and also the despicable behavior of some white men. Naturally, these comments are meant humorously, and not intended to refer to all white men (to assume they do is to distract from the structural issues at stake). They are jokes that should be celebrated, because the progressives already won this argument over #notallmen, also in 2014. Understood properly as progressive, anti-racist, social justice idiom, there is nothing offensive about Jeong’s tweets.
I am probably in a minority on this one, but I do not agree with this assessment, for a number of reasons.
First, the idea that you can have a private, in-group conversation on Twitter is absurd.
Second, the idea of a whole community of people casually expressing racial animus because of representative examples of wrongdoing by members of a social class can be alarming whether it’s Trump voters talking about Mexicans or anti-racists talking about white people. That alarm, as an emotional reaction, is a reality whether or not the dominant racial structures are being reproduced or challenged.
Third, I’m not convinced that as a racial project, tweets simply insulting white people really counts as “anti-racist” in a substantive sense. Anti-racist projects are “those that undo or resist structures of domination based on racial significations and identities.” Is saying “white men are bullshit” undoing a structure of domination? I’m pretty sure any white supremacist structures of domination have survived that attack. Does it resist white supremacist domination? The thrust of wise sociology of race is that what’s more important than the social meanings are the institutional structures that maintain racial inequality. Even if this statement has a meaning that is degrading to white people, it doesn’t seem to be doing any work of reorganizing resources around (anti-)racial lines. It’s just a crass insult. It may well have actually backfired, or had an effect on the racial organization of attention that neither harmed nor supported white supremacy, but rather just made its manifestation on the Internet more toxic (in response to other, much greater, toxicity, of course).
I suppose what I’m arguing for is greater nuance than either the “left” or “right” position has offered on this case. I’m saying that it is possible to engage in a racial project that is neither racist nor anti-racist. You could have a racial project that is amusingly absurd, or toxic, or cleverly insightful. Moreover, there is a complex of ethical responsibilities and principles that intersects with racial projects but is not contained by the logic of race. There are greater standards of decency that can be invoked. These are not simply constraints on etiquette. They also are relevant to the contest of racial projects and their outcomes.
Matt Levine has a recent piece discussing how discovering the history of sexual harassment complaints about a company’s leadership is becoming part of standard due diligence before an acquisition. Implicitly, the threat of liability, and presumably the costs of a public relations scandal, are material to the value of the company being acquired.
Perhaps relatedly, the National Venture Capital Association has added to its Model Legal Documents a slew of policies related to harassment and discrimination, codes of conduct, attracting and retaining diverse talent, and family friendly policies. Rumor has it that venture capitalists will now encourage companies they invest in to adopt these tested versions of the policies, much as an organization would adopt a tested and well-understood technical standard.
I have in various researcher roles studied social movements and political change, but these studies have left me with the conclusion that changes to culture are rarely self-propelled, but rather are often due to more fundamental changes in demographics or institutions. State legislation is very slow to move and limited in its range, and so often trails behind other amassing of power and will.
Corporate self-regulation, on the other hand, through standards, contracts, due diligence, and the like, seems to be quite adaptive. This is leading me to the conclusion that a best kept secret of cultural change is that some of the main drivers of it are actually deeply embedded in corporate law. Corporate law has the reputation of being a dry subject which sucks in recent law grads into soulless careers. But what if that wasn’t what corporate law was? What if corporate law was really where the action is?
In broader terms, the adaptivity of corporate policy to changing demographics and social needs perhaps explains the paradox of “progressive neoliberalism”, or the idea that the emerging professional business class seems to be socially liberal, whether or not it is fiscally conservative. Professional culture requires, due to antidiscrimination law and other policies, the compliance of its employees with a standard of ‘political correctness’. People can’t be hostile to each other in the workplace or else they will get fired, and they especially can’t be hostile to anybody on the basis of their being part of a protected category. This was enshrined into law long ago. Part of the role of educational institutions is to teach students a coherent story about why these rules are what they are and how they are not just legally mandated, but morally compelling. So the professional class has an ideology of inclusivity because it must.
I have read many a think piece and critical take about AI, the Internet, and so on. I offer a new theory of What Happened, the best I can come up with based on my research and observations to date.
Consider this article, “The death of Don Draper”, as a story that represents the changes that occurred more broadly. In this story, advertising was once a creative field that any company with capital could hire out to increase its chances of getting noticed and purchased, albeit in a noisy way. Because everything was very uncertain, those that could afford it blew a lot of money on it (“Half the money I spend on advertising is wasted; the trouble is I don’t know which half”).
A similar story could be told about access to the news–dominated by big budgets that hid quality–and political candidates–whose activities were largely not exposed to scrutiny and could follow a similarly noisy pattern of hype and success.
Then along came the Internet and targeted advertising, which did a number of things:
It reduced search costs for people looking for particular products, because Google searches the web and Amazon indexes all the products (and because of lots of smaller versions of Google and Amazon).
It reduced the uncertainty of advertising effectiveness because it allowed for fine-grained measurement of conversion metrics. This reduced search costs from producers to advertisers, and from advertisers to audiences.
It reduced the search costs of people finding alternative media and political interest groups, leading to a reorganization of culture. The media and cultural landscape could more precisely reflect the exogenous factors of social difference.
It reduced the cost of finding people based on their wealth, social influence, and so on, implicitly creating a kind of ‘social credit system’ distributed across various web services. (Gandy, 1993; Fourcade and Healy, 2016)
What happens when you reduce search costs in markets? Robert Jensen’s (2007) study of the introduction of mobile phones to fish markets in Kerala is illustrative here. Fish prices were very noisy due to poor communication until mobile phones were introduced. After that, prices stabilized, owing to swifter communication between fishermen and markets. Suddenly able to learn prices in advance rather than being subject to their vagaries, fishermen could choose to go to the market that would give them the best price.
Reducing search costs makes markets more efficient and larger. In doing so, it increases inequality: whereas a lot of lower quality goods and services can survive in a noisy economy, when consumers are more informed and more efficient at searching, they can cut out less useful services. They can then standardize on “the best” option available, which can be produced with economies of scale. So inefficient, noisy parts of the economy were squeezed out and the surplus amassed in the hands of a few big intermediaries, whom we now see as Big Tech leveraging AI.
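The Kerala dynamic can be sketched as a toy simulation (my own illustrative model, not Jensen’s data or method): a fisherman who can see prices at every market both sells at better prices and faces far less price noise than one who sees only his local market.

```python
import random
import statistics

random.seed(0)

N_MARKETS = 5
N_DAYS = 1000

def simulate(visible_markets):
    """Each day, every market draws a price; a fisherman sees a random
    subset of markets and sells at the best price he can see."""
    realized = []
    for _ in range(N_DAYS):
        prices = [random.uniform(4, 12) for _ in range(N_MARKETS)]
        seen = random.sample(prices, visible_markets)
        realized.append(max(seen))
    return realized

no_phone = simulate(1)        # sees only the local market's price
phone = simulate(N_MARKETS)   # mobile phone: sees every market's price

# With full information, realized prices are higher and less dispersed.
print(statistics.mean(no_phone), statistics.pstdev(no_phone))
print(statistics.mean(phone), statistics.pstdev(phone))
```

The price numbers and market count are arbitrary; the point is only the qualitative effect of lowering search costs on the mean and variance of realized prices.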
Is AI an appropriate term? I have always liked this definition of AI: “Anything that humans still do better than computers.” Most recently I’ve seen this restated in an interview with Andrew Moore, quoted by Zachary Lipton:
Artificial intelligence is the science and engineering of making computers behave in ways that, until recently, we thought required human intelligence.
Search-cost reduction fits this definition well. “Searching” for people, products, and information is something that used to require human intelligence. Now it is assisted by computers. And whether or not the average user knows what they are doing when they search (Mulligan and Griffin, 2018), as a commercial function, the panoply of search engines, recommendation systems, and auctions that occupy the central places in the information economy outperform human intelligence largely by virtue of having access to more data–a broader perspective–than any individual human could ever attain.
The comparison between the Google search engine and a human’s intelligence is therefore ill-posed. The kinds of functions tech platforms are performing are things that have only ever been solved by human organizations, especially bureaucratic ones. And while the digital user interfaces of these services hide the people “inside” the machines, we know that of course there’s an enormous amount of ongoing human labor involved in the creation and maintenance of any successful “AI” that’s in production.
In conclusion, the Internet changed everything for a mundane reason that could have been predicted from neoclassical economic theory. It reduced search costs, creating economic efficiency and inequality, by allowing for new kinds of organizations based on broad digital connectivity. “AI” is a distraction from these accomplishments, as is most “critical” reaction to these developments, which does not do justice to the facts of the matter: by taking up a humanistic lens, it tends not to address how decisions by individual humans and changes to their experience are due to large-scale aggregate processes and strategic behaviors by businesses.
Gandy Jr., Oscar H. The Panoptic Sort: A Political Economy of Personal Information. Boulder, CO: Westview Press, 1993.
Fourcade, Marion, and Kieran Healy. “Seeing like a market.” Socio-Economic Review 15.1 (2016): 9-29.
Jensen, Robert. “The digital provide: Information (technology), market performance, and welfare in the South Indian fisheries sector.” The quarterly journal of economics 122.3 (2007): 879-924.
Mulligan, Deirdre K., and Daniel S. Griffin. “Rescripting Search to Respect the Right to Truth.” 2 Geo. L. Tech. Rev. 557 (2018).
I’ve been intrigued by Daniel Griffin’s tweets lately, which have been about situating some upcoming work of his and Deirdre Mulligan’s regarding the experience of using search engines. There is a lively discussion lately about the experience of those searching for information and the way they respond to misinformation or extremism that they discover through organic use of search engines and media recommendation systems. This is apparently how the concern around “fake news” has developed in the HCI and STS world since it became an issue shortly after the 2016 election.
I do not have much to add to this discussion directly. Consumer misuse of search engines is, to me, analogous to consumer misuse of other forms of print media. I would assume the best solution to it is education in the complete sense, and the problems with the U.S. education system are, despite all good intentions, not HCI problems.
Wearing my privacy researcher hat, however, I have become interested in a different aspect of search engines and the politics around them that is less obvious to the consumer and therefore less popularly discussed, but that I fear is more pernicious precisely because it is not part of the general imaginary around search. This is the aspect concerning the tracking of search engine activity, and what it means for this activity to be in the hands of not just benevolent organizations such as Google, but also malevolent organizations such as Bizarro World Google*.
Here is the scenario, so to speak: for whatever reason, we begin to see ourselves in a more adversarial relationship with search engines. I mean “search engine” here in the broad sense, including Siri, Alexa, Google News, YouTube, Bing, Baidu, Yandex, and all the more minor search engines embedded in web services and appliances that do something more focused than crawl the whole web. By ‘search engine’ I mean the entire UX paradigm of the query into the vast unknown of semantic and semiotic space that contemporary information access depends on. In all these cases, the user is at a systematic disadvantage in the sense that their query is a data point among many others. The task of the search engine is to predict the desired response to the query and provide it. In return, the search engine gets the query, tied to the identity of the user. That is one piece of a larger mosaic; to be a search engine is to have a picture of a population and their interests, and the mandate to categorize and understand those people.
In Western neoliberal political systems the central function of the search engine is realized as commercial transaction facilitating other commercial transactions. My “search” is a consumer service; I “pay” for this search by giving my query to the adjoined advertising function, which allows other commercial providers to “search” for me, indirectly, through the ad auction platform. It is a market with more than just two sides. There’s the consumer who wants information and may be tempted by other information. There are the primary content providers, who satisfy consumer content demand directly. And there are secondary content providers who want to intrude on consumer attention in a systematic and successful way. The commercial, ad-enabled search engine reduces transaction costs for the consumer’s search and sells a fraction of that attentional surplus to the advertisers. Striking the right balance, the consumer is happy enough with the trade.
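The ad-side “search” described above is typically mediated by an auction. As a minimal sketch, here is the textbook second-price mechanism (a deliberate simplification of real ad auction platforms, which incorporate quality scores, reserve prices, and many other factors):

```python
def second_price_auction(bids):
    """Sealed-bid auction for one slot of consumer attention:
    the highest bidder wins, but pays the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

# Hypothetical advertisers bidding for the slot next to a search result.
winner, price = second_price_auction(
    {"shoes_inc": 0.40, "sneaker_co": 0.65, "bootmart": 0.10})
print(winner, price)  # sneaker_co wins and pays 0.40
```

The second-price design is meant to make truthful bidding the dominant strategy; the platform’s revenue is exactly the “fraction of attentional surplus” it extracts from the match.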
Part of the success of commercial search engines is the promise of privacy, in the sense that the consumer’s queries are entrusted secretly to the engine, and this data is not leaked or sold. Wise people know not to write into email things that they would not want, in the worst case, exposed to the public. Unwise people are more common than wise people, and ill-considered emails are written all the time. Most unwise people do not come to harm from this, because privacy in email is a de facto standard; it is the very security of email that makes the possibility of its being leaked alarming.
So too with search engine queries. “Ask me anything,” suggests the search engine, “I won’t tell”. “Well, I will reveal your data in an aggregate way; I’ll expose you to selective advertising. But I’m a trusted intermediary. You won’t come to any harms besides exposure to a few ads.”
That is all a safe assumption until it isn’t, at which point we must reconsider the role of the search engine. Suppose that, instead of living in a neoliberal democracy where the free search for information was sanctioned as necessary for the operation of a free market, we lived in an authoritarian country organized around the principle that disloyalty to the state should be crushed.
Under these conditions, the transition of a society into one that depends for its access to information on search engines is quite troubling. The act of looking for information is a political signal. Suppose you are looking for information about an extremist, subversive ideology. To do so is to flag yourself as a potential threat to the state. Suppose that you are looking for information about a morally dubious activity. To do so is to make yourself vulnerable to kompromat.
Under an authoritarian regime, curiosity and free thought are a problem, and one readily identified by one’s search queries. Further, an authoritarian regime benefits if the risks of searching for the ‘wrong’ thing are widely known, since that suppresses inquiry. Hence, the very vaguely announced and, in fact, implausible-to-implement Social Credit System in China does not need to exist to be effective; people need only believe it exists for it to have a chilling and organizing effect on behavior. That is the lesson of the Foucauldian panopticon: it doesn’t need a guard sitting in it to function.
Do we have a word for this function of search engines in an authoritarian system? We haven’t needed one in our liberal democracy, which perhaps we take for granted. “Censorship” does not apply, because what’s at stake is not speech but the ability to listen and learn. “Surveillance” is too general. It doesn’t capture the specific constraints on acquiring information, on being curious. What is the right term for this threat? What is the term for the corresponding liberty?
I’ll conclude with a chilling thought: when at war, all states are authoritarian, to somebody. Every state has an extremist, subversive ideology that it watches out for and tries in one way or another to suppress. Our search queries are always of strategic or tactical interest to somebody. Search engine policies are always an issue of national security, in one way or another.
Most of these narratives and imaginings about BCIs tend to be utopian, or dystopian, imagining radical technological or social change. However, we instead aim to imagine futures that are not radically different from our own. In our project, we use design fiction to ask: how can we graft brain computer interfaces onto the everyday and mundane worlds we already live in? How can we explore how BCI uses, benefits, and labor practices may not be evenly distributed when they get adopted?
Brain computer interfaces allow the control of a computer from neural output. In recent years, several consumer-grade brain-computer interface devices have come to market. One example is the Neurable – it’s a headset used as an input device for virtual reality systems. It detects when a user recognizes an object that they want to select. It uses a phenomenon called the P300 – when a person either recognizes a stimulus, or receives a stimulus they are not expecting, electrical activity in their brain spikes approximately 300 milliseconds after the stimulus. This electrical spike can be detected by an EEG, and by several consumer BCI devices such as the Neurable. Applications utilizing the P300 phenomenon include hands-free ways to type or click.
Demo video of a text entry system using the P300
Neurable demonstration video
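As a rough illustration of the kind of detection the P300 enables, here is a toy model with synthetic data and an exaggerated spike; it is not Neurable’s actual method, and all thresholds and parameters are invented:

```python
import random

random.seed(1)

FS = 250        # sampling rate in Hz, typical for consumer EEG
SPIKE_MS = 300  # the P300: a deflection ~300 ms after the stimulus

def synthetic_trial(has_p300):
    """One second of fake EEG following a stimulus at t=0: Gaussian
    noise, plus a positive deflection near 300 ms if recognized."""
    samples = [random.gauss(0.0, 1.0) for _ in range(FS)]
    if has_p300:
        center = int(FS * SPIKE_MS / 1000)
        for i in range(center - 10, center + 10):
            samples[i] += 8.0  # exaggerated spike for clarity
    return samples

def detect_p300(samples, threshold=2.0):
    """Naive detector: mean amplitude in the 250-400 ms window."""
    lo, hi = int(FS * 0.250), int(FS * 0.400)
    window = samples[lo:hi]
    return sum(window) / len(window) > threshold
```

Real detectors average over many trials and use trained classifiers, since genuine P300 deflections are far smaller than this toy spike; the sketch only shows the shape of the idea.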
We base our analysis on this already-existing capability of brain computer interfaces, rather than the more fantastical narratives (at least for now) of computers being able to clearly read humans’ inner thoughts and emotions. Instead, we create a set of scenarios that makes use of the P300 phenomenon in new applications, combined with the adoption of consumer-grade BCIs by new groups and social systems.
Design fiction is a practice of creating conceptual designs or artifacts that help create a fictional reality. We can use design fiction to ask questions about possible configurations of the world and to think through issues that have relevance and implications for present realities. (I’ve written more about design fiction in prior blog posts).
We build on Lindley et al.’s proposal to use design fiction to study the “implications for adoption” of emerging technologies. They argue that design fiction can “create plausible, mundane, and speculative futures, within which today’s emerging technologies may be prototyped as if they are domesticated and situated,” which we can then analyze with a range of lenses, such as those from science and technology studies. For us, this lets us think about technologies beyond ideal use cases. It lets us be attuned to the experiences of power and inequalities that people experience today, and interrogate how emerging technologies might get taken up, reused, and reinterpreted in a variety of existing social relations and systems of power.
To explore this, we thus created a set of interconnected design fictions that exist within the same fictional universe, showing different sites of adoptions and interactions. We build on Coulton et al.’s insight that design fiction can be a “world-building” exercise; design fictions can simultaneously exist in the same imagined world and provide multiple “entry points” into that world.
We created 4 design fictions that exist in the same world: (1) a README for a fictional BCI API, (2) a Stack Overflow question from a programmer working with the API, (3) an internal business memo from an online dating company, and (4) a set of forum posts by crowdworkers who use BCIs to do content moderation tasks. These are downloadable at our project page if you want to see them in more detail. (I’ll also note that we conducted our work in the United States, and that our authorship of these fictions, as well as our interpretations and analysis, are informed by this sociocultural context.)
Design Fiction 1: README documentation of an API for identifying P300 spikes in a stream of EEG signals
First, this is README documentation of an API for identifying P300 spikes in a stream of EEG signals. The P300, or “oddball,” response is a real phenomenon. It’s a spike in brain activity when a person is either surprised, or when they see something that they’re looking for. This fictional API helps identify those spikes in EEG data. We made this fiction in the form of a GitHub page to emphasize the everyday nature of this documentation, from the viewpoint of a software developer. In the fiction, the algorithms underlying this API come from a specific set of training data from a controlled environment in a university research lab. The API discloses and openly links to the data that its algorithms were trained on.
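To make the fictional API concrete, here is a hypothetical sketch of the kind of usage example such a README might contain. Every name, signature, and threshold here is invented for illustration; the actual fiction is downloadable at our project page.

```python
from dataclasses import dataclass

@dataclass
class SpikeEvent:
    timestamp_ms: int  # offset of the detected P300 within the stream
    confidence: float  # detector confidence in [0, 1]

def find_p300_spikes(eeg_stream, stimulus_times_ms):
    """Return a SpikeEvent for each stimulus that is followed, roughly
    300 ms later, by a large deflection in the EEG stream.
    Assumes one sample per millisecond for simplicity."""
    events = []
    for t in stimulus_times_ms:
        window = eeg_stream[t + 250 : t + 400]
        if window and max(window) > 5.0:  # invented amplitude threshold
            events.append(SpikeEvent(
                timestamp_ms=t + 300,
                confidence=min(max(window) / 10.0, 1.0)))
    return events
```

Framing the detector as a plain function over a signal buffer is part of the fiction’s point: the lab-specific training behind the model disappears behind a generic-looking interface.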
In creating and analyzing this fiction, we found that it surfaces an ambiguity and a tension about how generalizable the system’s model of the brain is. An API with a README implies that the system is meant to be generalizable, despite some indications, based on its training dataset, that it might be more limited. This fiction also gestures more broadly toward the involvement of academic research in larger technical infrastructures. The documentation notes that the API started as a research project by a professor at a university before becoming hosted and maintained by a large tech company. For us, this highlights how collaborations between research and industry may produce artifacts that move into broader contexts. Yet researchers may not be thinking about the potential effects or implications of their technical systems in these broader contexts.
Design Fiction 2: A question on Stack Overflow
Second, a developer, Jay, is working with the BCI API to develop a tool for content moderation. He asks a question on Stack Overflow, a real website for developers to ask and answer technical questions. He questions the API’s applicability beyond lab-based stimuli, asking “do these ‘lab’ P300 responses really apply to other things? If you are looking over messages to see if any of them are abusive, will we really see the ‘same’ P300 response?” The answers from other developers suggest that they predominantly believe the API is generalizable to a broader class of tasks, with the most agreed-upon answer saying “The P300 is a general response, and should apply perfectly well to your problem.”
This fiction helps us explore how and where contestation may occur in technical communities, and where discussion of social values or social implications could arise. We imagine the first developer, Jay, as someone who is sensitive to the way the API was trained, and questions its applicability to a new domain. However, he encounters commenters who believe that physiological signals are always generalizable, and who don’t engage in questions of broader applicability. The community’s answers reinforce notions not just of what the technical artifacts can do, but of what the human brain can do. The Stack Overflow answers draw on a popular, though critiqued, notion of the “brain-as-computer,” framing the brain as a processing unit with generic processes that take inputs and produce outputs. Here, this notion is reinforced in the social realm on Stack Overflow.
Design Fiction 3: An internal business memo for a fictional online dating company
Meanwhile, SparkTheMatch.com, a fictional online dating service, is struggling to moderate and manage inappropriate user content on their platform. SparkTheMatch wants to utilize the P300 signal to tap into people’s tacit “gut feelings” to recognize inappropriate content. They are planning to implement a content moderation process using crowdsourced workers wearing BCIs.
In creating this fiction, we use the memo to provide insight into some of the practices and labor supporting the BCI-assisted review process from the company’s perspective. The memo suggests that the use of BCIs with Mechanical Turk will “help increase efficiency” for crowdworkers while still giving them a fair wage. The crowdworkers sit and watch a stream of flashing content while wearing a BCI, and the P300 response will subconsciously identify when workers recognize supposedly abnormal content. Yet we find it debatable whether this process improves the material conditions of the Turk workers. The amount of content to look at in order to make the supposedly fair wage may not actually be reasonable.
SparkTheMatch employees creating the Mechanical Turk tasks don’t directly interact with the BCI API. Instead they use pre-defined templates created by the company’s IT staff, a much more mediated interaction compared to the programmers and developers reading documentation and posting on Stack Overflow. By this point, the research lab origins of the P300 API underlying the service and questions about its broader applicability are hidden. From the viewpoint of SparkTheMatch staff, the BCI-aspects of their service just “works,” allowing managers to design their workflows around it, obfuscating the inner workings of the P300 API.
Design fiction 4: A crowdworker forum for workers who use BCIs
Fourth, the Mechanical Turk workers who do the SparkTheMatch content moderation work share their experiences on a crowdworker forum. These crowdworkers’ experiences and relationships to the P300 API are strikingly different from those of the people and organizations described in the other fictions—notably, the API is something that they do not get to explicitly see. Aspects of the system are blackboxed or hidden away. While one poster discusses some errors that occurred, there’s ambiguity about whether fault lies with the BCI device or the data processing. EEG signals are not easily human-comprehensible, making feedback mechanisms difficult. Other posters blame the user for the errors. This is problematic, given the precariousness of these workers’ positions, as crowdworkers tend to have few forms of recourse when encountering problems with tasks.
For us, these forum accounts are interesting because they describe a situation in which the BCI user is not the person who obtains the real benefits of its use. It’s the company SparkTheMatch, not the BCI-end users, that is obtaining the most benefit from BCIs.
Some Emergent Themes and Reflections
From these design fictions, several salient themes arose for us. By looking at BCIs from the perspective of several everyday experiences, we can see different types of work done in relation to BCIs – whether that’s doing software development, being a client for a BCI-service, or using the BCI to conduct work. Our fictions are inspired by others’ research on the existing labor relationships and power dynamics in crowdwork and distributed content moderation (in particular work by scholars Lilly Irani and Sarah T. Roberts). Here we also critique utopian narratives of brain-controlled computing that suggest BCIs will create new efficiencies, seamless interactions, and increased productivity. We investigate a set of questions on the role of technology in shaping and reproducing social and economic inequalities.
Second, we use the design fictions to surface questions about the situatedness of brain sensing, questioning how generalizable and universal physiological signals are. Building on prior accounts of situated actions and extended cognition, we note that the specific and the particular should be taken into account in the design of supposedly generalizable BCI systems.
These themes arose iteratively, and were somewhat surprising for us, particularly just how different the BCI system looks from each of the different perspectives in the fictions. We initially set out to create a rather mundane fictional platform or infrastructure, an API for BCIs. With this starting point we brainstormed other types of direct and indirect relationships people might have with our BCI API to create multiple “entry points” into our API’s world. We iterated on various types of relationships and artifacts—there are end-users, but also clients, software engineers, and app developers, each of whom might interact with an API in different ways, directly or indirectly. Through iterations of different scenarios (a BCI-assisted tax filing service was considered at one point), and through discussions with our colleagues (some of whom posed questions about what labor in higher education might look like with BCIs), we slowly began to think that looking at the work practices implicated in these different relationships and artifacts would be a fruitful way to focus our designs.
Toward “Platform Fictions”
In part, we think that creating design fictions in mundane technical forms like documentation or Stack Overflow posts might help the artifacts be legible to software engineers and technical researchers. More generally, this leads us to think more about what it might mean to put platforms and infrastructures at the center of design fiction (as well as build on some of the insights from platform studies and infrastructure studies). Adoption and use do not occur in a vacuum. Rather, technologies get adopted into and by existing sociotechnical systems. We can use design fiction to open the “black boxes” of emerging sociotechnical systems. Given that infrastructures are often relegated to the background in everyday use, surfacing and focusing on an infrastructure helps us situate our design fictions in the everyday and mundane, rather than dystopia or utopia.
We find that using a digital infrastructure as a starting point helps surface multiple subject positions in relation to the system at different sites of interaction, beyond those of end-users. From each of these subject positions, we can see where contestation may occur, and how the system looks different. We can also see how assumptions, values, and practices surrounding the system at a particular place and time can be hidden, adapted, or changed by the time the system reaches others. Importantly, we also try to surface ways the system gets used in potentially unintended ways – we don’t think that the academic researchers who developed the API to detect brain signal spikes imagined that it would be used in a system of arguably exploitative crowd labor for content moderation.
Our fictions try to blur clear distinctions that might suggest what happens in “labs” is separate from “the outside world”, instead highlighting their entanglements. Given that much of BCI research currently exists in research labs, we raise this point to argue that BCI researchers and designers should also be concerned about the implications of adoption and application. This helps give us insight into the responsibilities (and complicity) of researchers and builders of technical systems. Some of the recent controversies around Cambridge Analytica’s use of Facebook’s API point to ways in which the building of platforms and infrastructures isn’t neutral, and that it’s incumbent upon designers, developers, and researchers to raise issues related to social concerns and potential inequalities related to adoption and appropriation by others.
This work isn’t meant to be predictive. The fictions and analysis present our specific viewpoints by focusing on several types of everyday experiences. One can read many themes into our fictions, and we encourage others to do so. But we find that focusing on potential adoptions of an emerging technology in the everyday and mundane helps surface contours of debates that might occur, which might not be immediately obvious when thinking about BCIs – and might not be immediately obvious if we think about social implications in terms of “worst case scenarios” or dystopias. We hope that this work can raise awareness among BCI researchers and designers about social responsibilities they may have for their technology’s adoption and use. In future work, we plan to use these fictions as research probes to understand how technical researchers envision BCI adoptions and their social responsibilities, building on some of our prior projects. And for design researchers, we show that using a fictional platform in design fiction can help raise important social issues about technology adoption and use from multiple perspectives beyond those of end-users, and help surface issues that might arise from unintended or unexpected adoption and use. Using design fiction to interrogate sociotechnical issues present in the everyday can better help us think about the futures we desire.
In summary, the act grants consumers a right to request that businesses disclose the categories of information about them that they collect and sell, and gives consumers the right to request that businesses delete their information and to opt out of its sale.
What follows are points I found particularly interesting. Quotations from the Act (that’s what I’ll call it) will be in bold. Questions (meaning, questions that I don’t have an answer to at the time of writing) will be in italics.
SEC. 2. The Legislature finds and declares that:
(a) In 1972, California voters amended the California Constitution to include the right of privacy among the “inalienable” rights of all people. …
I did not know that. I was under the impression that in the United States, the ‘right to privacy’ was a matter of legal interpretation, derived from other more explicitly protected rights. A right to privacy is enumerated in Article 12 of the Universal Declaration of Human Rights, adopted in 1948 by the United Nations General Assembly. There’s something like a right to privacy in Article 8 of the 1950 European Convention on Human Rights. California appears to have followed their lead on this.
In several places in the Act, it specifies that exceptions may be made in order to be compliant with federal law. Is there an ideological or legal disconnect between privacy in California and privacy nationally? Consider the Snowden/Schrems/Privacy Shield issue: exchanges of European data to the United States are given protections from federal surveillance practices. This presumably means that the U.S. federal government agrees to respect EU privacy rights. Can California negotiate for such treatment from the U.S. government?
These are the rights specifically granted by the Act:
[SEC. 2.] (i) Therefore, it is the intent of the Legislature to further Californians’ right to privacy by giving consumers an effective way to control their personal information, by ensuring the following rights:
(1) The right of Californians to know what personal information is being collected about them.
(2) The right of Californians to know whether their personal information is sold or disclosed and to whom.
(3) The right of Californians to say no to the sale of personal information.
(4) The right of Californians to access their personal information.
(5) The right of Californians to equal service and price, even if they exercise their privacy rights.
It has been only recently that I’ve been attuned to the idea of privacy rights. Perhaps this is because I am from a place that apparently does not have them. A comparison that I believe should be made more often is the comparison of privacy rights to property rights. Clearly privacy rights have become as economically relevant as property rights. But currently, property rights enjoy a widespread acceptance and enforcement that privacy rights do not.
Personal information defined through example categories
“Information” is a notoriously difficult thing to define. The Act gets around the problem of defining “personal information” by repeatedly providing many examples of it. The examples are themselves rather abstract and are implicitly “categories” of personal information. Categorization of personal information is important to the law because under several conditions businesses must disclose the categories of personal information collected, sold, etc. to consumers.
SEC. 2. (e) Many businesses collect personal information from California consumers. They may know where a consumer lives and how many children a consumer has, how fast a consumer drives, a consumer’s personality, sleep habits, biometric and health information, financial information, precise geolocation information, and social networks, to name a few categories.
[1798.140.] (o) (1) “Personal information” means information that identifies, relates to, describes, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household. Personal information includes, but is not limited to, the following:
(A) Identifiers such as a real name, alias, postal address, unique personal identifier, online identifier Internet Protocol address, email address, account name, social security number, driver’s license number, passport number, or other similar identifiers.
(B) Any categories of personal information described in subdivision (e) of Section 1798.80.
(C) Characteristics of protected classifications under California or federal law.
(D) Commercial information, including records of personal property, products or services purchased, obtained, or considered, or other purchasing or consuming histories or tendencies.
Note that protected classifications (1798.140.(o)(1)(C)) include race, which is a socially constructed category (see Omi and Winant on racial formation). The Act appears to be saying that personal information includes the race of the consumer. Contrast this with information as identifiers (see 1798.140.(o)(1)(A)) and information as records (1798.140.(o)(1)(D)). So “personal information” in one case is a property of a person (and a socially constructed one at that); in another case it is a specific syntactic form; in another case it is a document representing some past action. The Act is very ontologically confused.
Other categories of personal information include (continuing this last section):
(E) Biometric information.
(F) Internet or other electronic network activity information, including, but not limited to, browsing history, search history, and information regarding a consumer’s interaction with an Internet Web site, application, or advertisement.
Devices and Internet activity will be discussed in more depth in the next section.
(G) Geolocation data.
(H) Audio, electronic, visual, thermal, olfactory, or similar information.
(I) Professional or employment-related information.
(J) Education information, defined as information that is not publicly available personally identifiable information as defined in the Family Educational Rights and Privacy Act (20 U.S.C. section 1232g, 34 C.F.R. Part 99).
(K) Inferences drawn from any of the information identified in this subdivision to create a profile about a consumer reflecting the consumer’s preferences, characteristics, psychological trends, preferences, predispositions, behavior, attitudes, intelligence, abilities, and aptitudes.
Given that the main use of information is to support inferences, it is notable that inferences are dealt with here as a special category of information, and that sensitive inferences are those that pertain to behavior and psychology. This may be narrowly interpreted to exclude some kinds of inferences that may be relevant and valuable but not so immediately recognizable as ‘personal’. For example, one could infer from personal information the ‘position’ of a person in an arbitrary multi-dimensional space that compresses everything known about a consumer, and use this representation for targeted interventions (such as advertising). Or one could interpret it broadly: since almost all personal information is relevant to ‘behavior’ in a broad sense, any inference from it is also ‘about behavior’ and therefore protected.
The Act focuses on the rights of consumers and deals somewhat awkwardly with the fact that most information about consumers is collected indirectly through machines. The Act acknowledges that sometimes devices are used by more than one person (for example, when they are used by a family), but it does not deal easily with other forms of sharing arrangements (e.g., an open Wi-Fi hotspot) and the problems of identifying which person a particular device’s activity is “about”.
[1798.140.] (g) “Consumer” means a natural person who is a California resident, as defined in Section 17014 of Title 18 of the California Code of Regulations, as that section read on September 1, 2017, however identified, including by any unique identifier. [SB: italics mine.]
[1798.140.] (x) “Unique identifier” or “Unique personal identifier” means a persistent identifier that can be used to recognize a consumer, a family, or a device that is linked to a consumer or family, over time and across different services, including, but not limited to, a device identifier; an Internet Protocol address; cookies, beacons, pixel tags, mobile ad identifiers, or similar technology; customer number, unique pseudonym, or user alias; telephone numbers, or other forms of persistent or probabilistic identifiers that can be used to identify a particular consumer or device. For purposes of this subdivision, “family” means a custodial parent or guardian and any minor children over which the parent or guardian has custody.
Suppose you are a business that collects traffic information and website behavior connected to IP addresses, but you don’t go through the effort of identifying the ‘consumer’ who is doing the behavior. In fact, you may collect a lot of traffic behavior that is not connected to any particular ‘consumer’ at all, but is rather the activity of a bot or crawler operated by a business. Are you on the hook to disclose personal information to consumers if they ask for their traffic activity? Does it matter whether or not they provide their IP address?
Incidentally, while the Act seems comfortable defining a Consumer as a natural person identified by a machine address, it also happily defines a Person as “proprietorship, firm, partnership, joint venture, syndicate, business trust, company, corporation, …” etc. in addition to “an individual”. Note that “personal information” is specifically information about a consumer, not a Person (i.e., business).
This may make you wonder what a Business is, since these are the entities that are bound by the Act.
Businesses and California
The Act mainly details the rights that consumers have with respect to businesses that collect, sell, or lose their information. But what is a business?
[1798.140.] (c) “Business” means:
(1) A sole proprietorship, partnership, limited liability company, corporation, association, or other legal entity that is organized or operated for the profit or financial benefit of its shareholders or other owners, that collects consumers’ personal information, or on the behalf of which such information is collected and that alone, or jointly with others, determines the purposes and means of the processing of consumers’ personal information, that does business in the State of California, and that satisfies one or more of the following thresholds:
(A) Has annual gross revenues in excess of twenty-five million dollars ($25,000,000), as adjusted pursuant to paragraph (5) of subdivision (a) of Section 1798.185.
(B) Alone or in combination, annually buys, receives for the business’ commercial purposes, sells, or shares for commercial purposes, alone or in combination, the personal information of 50,000 or more consumers, households, or devices.
(C) Derives 50 percent or more of its annual revenues from selling consumers’ personal information.
This is not a generic definition of a business, just as the earlier definition of ‘consumer’ is not a generic definition of consumer. This definition of ‘business’ is a sui generis definition for the purposes of consumer privacy protection, as it defines businesses in terms of their collection and use of personal information. The definition explicitly thresholds the applicability of the law to businesses over certain limits.
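The statute’s applicability test can be read as a simple three-prong disjunction. Here is a minimal sketch of that reading; the class, field names, and constants are my own illustrative choices, not anything from the Act, and this is a reading aid, not legal advice:

```python
# Sketch of the 1798.140.(c)(1) coverage test: a for-profit entity
# doing business in California is covered if it satisfies ANY ONE
# of the three thresholds (A), (B), or (C).
from dataclasses import dataclass

REVENUE_THRESHOLD = 25_000_000  # (A) annual gross revenues, in dollars
RECORDS_THRESHOLD = 50_000      # (B) consumers, households, or devices
REVENUE_SHARE_THRESHOLD = 0.5   # (C) share of revenue from selling personal info

@dataclass
class Business:
    annual_gross_revenue: float
    pi_records_handled: int        # consumers + households + devices, combined
    revenue_from_selling_pi: float

def is_covered_business(b: Business) -> bool:
    """True if the business meets at least one threshold (disjunctive test)."""
    meets_a = b.annual_gross_revenue > REVENUE_THRESHOLD
    meets_b = b.pi_records_handled >= RECORDS_THRESHOLD
    meets_c = (b.annual_gross_revenue > 0
               and b.revenue_from_selling_pi / b.annual_gross_revenue
               >= REVENUE_SHARE_THRESHOLD)
    return meets_a or meets_b or meets_c
```

Note that prong (B) counts devices as well as consumers and households, which matters for the botnet scenario below: a small business can cross the 50,000-record line without ever crossing the revenue lines.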
There does appear to be a lot of wiggle room and potential for abuse here. Consider: the Mirai botnet comprised, by one estimate, 2.5 million compromised devices. Say you are a small business that collects site traffic. Suppose the Mirai botnet targets your site with a DDoS attack. Suddenly, your business collects information from millions of devices, and the Act comes into effect. Now you are liable for disclosing consumer information. Is that right?
An alternative reading of this section would recall that the definition (!) of consumer, in this law, is a California resident. So maybe the thresholds in 1798.140.(c)(B) and 1798.140.(c)(C) refer specifically to Californian consumers. Of course, for any particular device, information about where that device’s owner lives is personal information.
Having 50,000 California customers or users is a decent threshold for determining whether a business “does business in California”. Given the size and demographics of California, you would expect many major technology companies (Tencent, just for example) to have 50,000 Californian users. This brings up the question of extraterritorial enforcement, which gave the GDPR so much leverage.
Extraterritoriality and financing
In a nutshell, it looks like the Act is intended to allow Californians to sue foreign companies. How big a deal is this? The penalties for noncompliance are civil penalties: a price per violation (presumably per individual violation), not a share of profits. But you could imagine them adding up:
[1798.155.] (b) Notwithstanding Section 17206 of the Business and Professions Code, any person, business, or service provider that intentionally violates this title may be liable for a civil penalty of up to seven thousand five hundred dollars ($7,500) for each violation.
(c) Notwithstanding Section 17206 of the Business and Professions Code, any civil penalty assessed pursuant to Section 17206 for a violation of this title, and the proceeds of any settlement of an action brought pursuant to subdivision (a), shall be allocated as follows:
(1) Twenty percent to the Consumer Privacy Fund, created within the General Fund pursuant to subdivision (a) of Section 1798.109, with the intent to fully offset any costs incurred by the state courts and the Attorney General in connection with this title.
(2) Eighty percent to the jurisdiction on whose behalf the action leading to the civil penalty was brought.
(d) It is the intent of the Legislature that the percentages specified in subdivision (c) be adjusted as necessary to ensure that any civil penalties assessed for a violation of this title fully offset any costs incurred by the state courts and the Attorney General in connection with this title, including a sufficient amount to cover any deficit from a prior fiscal year.
1798.160. (a) A special fund to be known as the “Consumer Privacy Fund” is hereby created within the General Fund in the State Treasury, and is available upon appropriation by the Legislature to offset any costs incurred by the state courts in connection with actions brought to enforce this title and any costs incurred by the Attorney General in carrying out the Attorney General’s duties under this title.
(b) Funds transferred to the Consumer Privacy Fund shall be used exclusively to offset any costs incurred by the state courts and the Attorney General in connection with this title. These funds shall not be subject to appropriation or transfer by the Legislature for any other purpose, unless the Director of Finance determines that the funds are in excess of the funding needed to fully offset the costs incurred by the state courts and the Attorney General in connection with this title, in which case the Legislature may appropriate excess funds for other purposes.
So, just to be concrete: suppose a business collects personal information on 50,000 Californians and does not disclose that information. California could then sue that business for $7,500 * 50,000 = $375 million in civil penalties, 20 percent of which goes into the Consumer Privacy Fund, whose purpose is to cover the cost of further lawsuits. The process funds itself. If it makes any extra money, it can be appropriated for other things.
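The back-of-the-envelope arithmetic, with the 1798.155.(c) allocation applied, looks like this (the variable names are mine; the $7,500 figure is the statutory maximum for intentional violations, so this is an upper bound):

```python
# Maximum-penalty arithmetic from 1798.155.(b)-(c):
# up to $7,500 per intentional violation, allocated 20% to the
# Consumer Privacy Fund and 80% to the suing jurisdiction.
PENALTY_PER_VIOLATION = 7_500  # dollars, statutory maximum
FUND_SHARE_PCT = 20            # percent allocated to the Consumer Privacy Fund

violations = 50_000            # one per undisclosed Californian consumer
total = PENALTY_PER_VIOLATION * violations
to_fund = total * FUND_SHARE_PCT // 100
to_jurisdiction = total - to_fund

print(f"total: ${total:,}")                  # total: $375,000,000
print(f"fund (20%): ${to_fund:,}")           # fund (20%): $75,000,000
print(f"jurisdiction (80%): ${to_jurisdiction:,}")
```

So even a single maximal action against a business at the 50,000-consumer threshold would put $75 million into the Fund, which is what makes the “self-funding” reading below plausible.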
In effect, the Act funds a self-sustaining program of investigations and fines. You could imagine that this starts out with just some lawyers responding to civil complaints. But consider the scope of the Act: any business in the world not properly disclosing information about Californians is liable to be fined. Suppose that some kind of blockchain- or botnet-based entity starts committing surveillance in violation of this Act on a large scale. What kind of technical investigative capacity is necessary to enforce this kind of thing worldwide? Does this become a self-funding cybercrime investigative unit? How are foreign actors who are responsible for such things brought to justice?
This is where it’s totally clear that I am not a lawyer. I am still puzzling over the meaning of 1798.155.(c)(2), for example.
There are more weird quirks to this Act than I can dig into in this post, but one that deserves mention (as homage to Helen Nissenbaum, among other reasons) is the stipulation about publicly available information, which does not mean what you think it means:
(2) “Personal information” does not include publicly available information. For these purposes, “publicly available” means information that is lawfully made available from federal, state, or local government records, if any conditions associated with such information. “Publicly available” does not mean biometric information collected by a business about a consumer without the consumer’s knowledge. Information is not “publicly available” if that data is used for a purpose that is not compatible with the purpose for which the data is maintained and made available in the government records or for which it is publicly maintained. “Publicly available” does not include consumer information that is deidentified or aggregate consumer information.
The grammatical error in the second sentence (the phrase beginning with “if any conditions” trails off into nowhere…) indicates that this paragraph was hastily written and never finished, as if in response to an afterthought. There’s a lot going on here.
First, the sense of ‘public’ used here is the sense of ‘public institutions’ or the res publica. Amazingly and a bit implausibly, government records are considered publicly available only when they are used for purposes compatible with their maintenance. So if a business takes a public record and uses it differently than was originally intended when it was ‘made available’, it becomes personal information that must be disclosed? As somebody who came out of the Open Data movement, I have to admit I find this baffling. On the other hand, it may be the brilliant solution to privacy in public on the Internet that society has been looking for.
Second, the stipulation that “publicly available” does not mean “biometric information collected by a business about a consumer without the consumer’s knowledge” is surprising. It appears to be written with particular cases in mind, perhaps IoT sensing. But why specifically biometric information, as opposed to other kinds of information collected without consumer knowledge?
Oddly, this paragraph is not one of those explicitly flagged for review and revision in the section soliciting public participation on changes before the Act goes into effect in 2020.
A work in progress
1798.185. (a) On or before January 1, 2020, the Attorney General shall solicit broad public participation to adopt regulations to further the purposes of this title, including, but not limited to, the following areas:
This is a weird law. I suppose it was written and passed to capitalize on a particular political moment and crisis (Sec. 2 specifically mentions Cambridge Analytica as a motivation), drafted to best express its purpose and intent, and given the horizon of 2020 to allow for revisions.
It must be said that there’s nothing in this Act that threatens the business models of American Big Tech companies: storing consumer information in order to provide derivative ad-targeting services is totally fine, as long as businesses make the right disclosures, which they are all now doing because of the GDPR anyway. There is a sense that this is California taking the opportunity to start the conversation about what U.S. data protection law post-GDPR will be like, which is of course commendable. As a statement of intent, it is great. Where it starts to get funky is in the definitions of its key terms and the underlying theory of privacy behind them. We can anticipate some rockiness there and try to unpack these assumptions before adopting similar policies in other states.
A firm basis for morality is the Kantian categorical imperative: treat others as ends and not means, with the corollary that one should be able to take the principles of one’s actions and extend them as laws binding all rational beings. Closely associated and important ideas are those concerning human dignity and rights. However, the great moral issues of today are about social forms (issues around race, gender, etc.), sociotechnical organizations (issues around the role of technology), or totalizing systemic issues (issues around climate change). A morality based on individualism and individual equivalence seems out of place when the main moral difficulties are about body agonism. What is the basis for morality for these kinds of social moral problems?
Theodicy has its answer: it’s bounded rationality. Ultimately what makes us different from other people, that which creates our multiplicity, is our distance from each other, in terms of available information. Our disconnection, based on the different loci and foci within complex reality, is precisely that which gives reality its complexity. Dealing with each other’s ignorance is the problem of being a social being. Ignorance is therefore the condition of society. Society is the condition of moral behavior; if there were only one person, there would be no such thing as right or wrong. Therefore, ignorance is a condition of morality. How, then, can morality be known?
Notes on Omi and Winant, 2014, Chapter 4, Section: “Racialization”.
Race is often seen as either an objective category, or an illusory one.
Viewed objectively, it is seen as a biological property, tied to phenotypic markers and possibly other genetic traits. It is viewed as an ‘essence’.
Omi and Winant argue that the concept of ‘mixed-race’ depends on this kind of essentialism, as it implies a kind of blending of essences. This is the view associated with “scientific” racism, most prevalent in the prewar era.
Viewed as an illusion, race is seen as an ideological construct: an epiphenomenon of culture, class, or peoplehood, formed as a kind of “false consciousness”, in the Marxist terminology. This view is associated with certain critics of affirmative action who argue that any racial classification is inherently racist.
Omi and Winant are critical of both perspectives, and argue for an understanding of race as socially real and grounded non-reducibly in phenomic markers but ultimately significant because of the social conflicts and interests constructed around those markers.
They define race as: “a concept that signifies and symbolizes social conflicts and interests by referring to different types of human bodies.”
The visual aspect of race is irreducible, and becomes significant when, for example, it becomes “understood as a manifestation of more profound differences that are situated within racially identified persons: intelligence, athletic ability, temperament, and sexuality, among other traits.” These “understandings”, which it must be said may be fallacious, “become the basis to justify or reinforce social differentiation.”
This process of adding social significance to phenomic markers is, in O&W’s language, racialization, which they define as “the extension of racial meanings to a previously racially unclassified relationship, social practice, or group.” They argue that racialization happens at both macro and micro scales, ranging from the consolidation of the world-system through colonialization to incidents of racial profiling.
Race, then, is a concept that refers to different kinds of bodies by phenotype and to the meanings and social practices ascribed to them. When racial concepts are circulated and accepted as ‘social reality’, racial differences are not dependent on visual difference alone, but take on a life of their own.
Omi and Winant therefore take a nuanced view of what it means for a category to be socially constructed, and it is a view that has concrete political implications. They consider the question, raised frequently, as to whether “we” can “get past” race, or go beyond it somehow. (Recall that this edition of the book was written during the Obama administration and is largely a critique of the idea, which seems silly now, that his election made the United States “post-racial”).
Omi and Winant see this framing as unrealistically utopian and based on the extreme view that race is “illusory”. It poses race as a problem, a misconception of the past. A more effective position, they claim, would note that race is an element of social structure, not an irregularity in it. “We” cannot naively “get past it”, but also “we” do not need to accept the erroneous conclusion that race is a fixed biological given.
Omi and Winant’s argument here is mainly one about the ontology of social forms.
In my view, this question of social form ontology is one of the “hard problems” remaining in philosophy, perhaps equivalent to if not more difficult than the hard problem of consciousness. So no wonder it is such a fraught issue.
The two poles of thinking about race that they present initially, the essentialist view and the epiphenomenal view, had their heyday in particular historical intellectual movements. Proponents of these positions are still popularly active today, though perhaps it’s fair to say that both extremes are now marginalized out of the intellectual mainstream. Despite nobody really understanding how social construction works, most educated people are probably willing to accept that race is socially constructed in one way or another.
It is striking, then, that Omi and Winant’s view of the mechanism of racialization, which involves the reading of ‘deeper meanings’ into phenomic traits, is essentially a throwback to the objective, essentializing viewpoint.
Perhaps there is a kind of cognitive bias, maybe representativeness bias or fundamental attribution bias, which is responsible for the cognitive errors that make racialization possible and persistent.
If so, then the social construction of race would be due as much to the limits of human cognition as to the circulation of concepts. That would explain the temptation to believe that we can ‘get past’ race, because we can always believe in the potential for a society in which people are smarter and are trained out of their basic biases. But Omi and Winant would argue that this is utopian. Perhaps the wisdom of sociology and social science in general is the conservative recognition of the widespread implications of human limitation. As the social expert, one can take the privileged position that notes that social structure is the result of pervasive cognitive error. That pervasive cognitive error is perhaps a more powerful force than the forces developing and propagating social expertise. Whether it is or is not may be the existential question for liberal democracy.
An unanswered question at this point is whether, if race were broadly understood as a function of social structure, it remains as forceful a structuring element as if it is understood as biological essentialism. It is certainly possible that, if understood as socially contingent, the structural power of race will steadily erode through such statistical processes as regression to the mean. In terms of physics, we can ask whether the current state of the human race(s) is at equilibrium, or heading towards an equilibrium, or diverging in a chaotic and path-dependent way. In any of these cases, there is possibly a role to be played by technical infrastructure. In other words, there are many very substantive and difficult social scientific questions at the root of the question of whether and how technical infrastructure plays a role in the social reproduction of race.
This rhetorical strategy of presenting the historical development of multiple threads of prior theory before synthesizing them into something new is familiar to me from my work with Helen Nissenbaum on Contextual Integrity. CI is a theory of privacy that advances prior legal and social theories by teasing out their tensions. This seems to be a good way to advance theory through scholarship. It is interesting that the same method of theory building can work in multiple fields. My sense is that what’s going on is that there is an underlying logic to this process which in a less Anglophone world we might call “dialectical”. But I digress.
I have not finished Chapter 4 yet, but I wanted to sketch out its outline before going into detail. That’s because Omi and Winant are presenting a way of understanding the mechanisms behind the reproduction of race that is not simplistically “systemic” but rather breaks them down into discrete operations. This is a helpful contribution; even if the theory is not entirely accurate, its very specificity elevates the discourse.
So, in brief notes:
For Omi and Winant, race is a way of “making up people”; they attribute this phrase to Ian Hacking but do not develop Hacking’s definition. Their reference to a philosopher of science does situate them in a scholarly sense; it is nice that they seem to acknowledge an implicit hierarchy of theory that places philosophy at the foundation. This is correct.
Race-making is a form of othering, of having a group of people identify another group as outsiders. Othering is a basic and perhaps unavoidable human psychological function; their reference for this is powell and Menendian. (Apparently, john a. powell is one of those people, like danah boyd, who decapitalize their names.)
Race is of course a social construct that is neither a fixed and coherent category nor something that is “unreal”. That is, presumably, why we need a whole book on the dynamic mechanisms that form it. One reason why race is such a dynamic concept is because (a) it is a way of organizing inequality in society, (b) the people on “top” of the hierarchy implied by racial categories enforce/reproduce that category “downwards”, (c) the people on the “bottom” of the hierarchy implied by racial categories also enforce/reproduce a variation of those categories “upwards” as a form of resistance, and so (d) the state of the racial categories at any particular time is a temporary consequence of conflicting “elite” and “street” variations of it.
This presumes that race is fundamentally about inequality. Omi and Winant believe it is. In fact, they think racial categories are a template for all other social categories that are about inequality. This is what they mean by their claim that race is a master category. It’s “a frame used for organizing all manner of political thought”, particularly political thought about liberation struggles.
I’m not convinced by this point. They develop it with a long discussion of intersectionality that is also unconvincing to me. Historically, they point out that sometimes women’s movements have allied with black power movements, and sometimes they haven’t. They want the reader to think this is interesting; as a data scientist, I see randomness and lack of correlation. They make the poignant and true point that “perhaps at the core of intersectionality practice, as well as theory, is the ‘mixed race’ category. Well, how does it come about that people can be ‘mixed’?” They then drop the point with no further discussion.
Perhaps the book suffers from being aimed at undergraduates. Omi and Winant are unable to bring up even the most basic explanation for why there are mixed race people: a male person of one race and a female person of a different race have a baby, and that creates a mixed race person (whether or not they are male or female). The basic fact that race is hereditary whereas sex is not is probably really important to the intersectionality between race and sex and the different ways those categories are formed; somehow this point is never mentioned in discussions of intersectionality. Perhaps this is because of the ways this salient difference between race and sex undermines the aim of political solidarity that so much intersectional analysis seems to be going for. Relatedly, contemporary sociological theory seems to have some trouble grasping conventional sexual reproduction, perhaps because it is so sensitized to all the exceptions to it. Still, they drop the ball a bit by bringing this up and not going into any analytic depth about it at all.
Omi and Winant make an intriguing comment, “In legal theory, the sexual contract and racial contract have often been compared”. I don’t know what this is about but I want to know more.
This is all a kind of preamble to their presentation of theory. They start to provide some definitions:
Racial formation: the sociohistorical process by which racial identities are created, lived out, transformed, and destroyed.
Racialization: how phenomic-corporeal dimensions of bodies acquire meaning in social life.
Racial projects: the co-constitutive ways that racial meanings are translated into social structures and become racially signified.
Racism: not defined here; a property of racial projects that Omi and Winant will discuss later.
Racial politics: ways that the politics (of a state?) can handle race, including racial despotism, racial democracy, and racial hegemony.
This is a useful breakdown. More detail in the next post.
Strategy has always been a fuzzy concept in my mind. What goes into a strategy? What makes a strategy good or bad? How is it different from vision and goals? Good Strategy / Bad Strategy, by UCLA Anderson School of Management professor Richard P. Rumelt, takes a nebulous concept and makes it concrete. He explains what goes into developing a strategy, what makes a strategy good, and what makes a strategy bad – which makes good strategy even clearer.
As I read the book, I kept underlining passages and scribbling notes in the margins because it’s so full of good information and useful techniques that are just as applicable to my everyday work as they are to running a multi-national corporation. To help me use the concepts I learned, I decided to publish my notes and key takeaways so I can refer back to them later.
The Kernel of Strategy
Strategy is designing a way to deal with a challenge. A good strategy, therefore, must identify the challenge to be overcome, and design a way to overcome it. To do that, the kernel of a good strategy contains three elements: a diagnosis, a guiding policy, and coherent action.
A diagnosis defines the challenge. What’s holding you back from reaching your goals? A good diagnosis simplifies the often overwhelming complexity of reality down to a simpler story by identifying certain aspects of the situation as critical. A good diagnosis often uses a metaphor, analogy, or an existing accepted framework to make it simple and understandable, which then suggests a domain of action.
A guiding policy is an overall approach chosen to cope with or overcome the obstacles identified in the diagnosis. Like the guardrails on a highway, the guiding policy directs and constrains action in certain directions without defining exactly what shall be done.
A set of coherent actions dictate how the guiding policy will be carried out. The actions should be coherent, meaning the use of resources, policies, and maneuvers that are undertaken should be coordinated and support each other (not fight each other, or be independent from one another).
Good Strategy vs. Bad Strategy
Good strategy is simple and obvious.
Good strategy identifies the key challenge to overcome. Bad strategy fails to identify the nature of the challenge. If you don’t know what the problem is, you can’t evaluate alternative guiding policies or actions to take, and you can’t adjust your strategy as you learn more over time.
Good strategy includes actions to take to overcome the challenge. Actions are not “implementation” details; they are the punch in the strategy. Strategy is about how an organization will move forward. Bad strategy lacks actions to take. Bad strategy mistakes goals, ambition, vision, values, and effort for strategy (these things are important, but on their own are not strategy).
Good strategy is designed to be coherent – all the actions an organization takes should reinforce and support each other. Leaders must do this deliberately and coordinate action across departments. Bad strategy is just a list of “priorities” that don’t support each other, at best, or actively conflict with each other, undermine each other, and fight for resources, at worst. The rich and powerful can get away with this, but it makes for bad strategy.
This was the biggest “ah-ha!” moment for me. All strategy I’ve seen has just been a list of unconnected objectives. Designing a strategy that’s coherent and mutually reinforces itself is a huge step forward in crafting good strategies.
Good strategy is about focusing and coordinating efforts to achieve an outcome, which necessarily means saying “No” to some goals, initiatives, and people. Bad strategy is the result of a leader who’s unwilling or unable to say “No.” The reason good strategy looks so simple is because it takes a lot of effort to maintain the coherence of its design by saying “No” to people.
Good strategy leverages sources of power to overcome an obstacle. It brings relative strength to bear against relative weakness (more on that below).
How to Identify Bad Strategy
Four Major Hallmarks of Bad Strategy
Fluff: A strategy written in gibberish masquerading as strategic concepts is classic bad strategy. It uses abstruse and inflated words to create the illusion of high-level thinking.
Failure to face the challenge: A strategy that does not define the challenge to overcome makes it impossible to evaluate, and impossible to improve.
Mistaking goals for strategy: Many bad strategies are just statements of desire rather than plans for overcoming obstacles.
Bad strategic objectives: A strategic objective is a means to overcoming an obstacle. Strategic objectives are “bad” when they fail to address critical issues or when they are impracticable.
Some Forms of Bad Strategy
Dog’s Dinner Objectives: A long list of “things to do,” often mislabeled as “strategies” or “objectives.” These lists usually grow out of planning meetings in which stakeholders state what they would like to accomplish, then they throw these initiatives onto a long list called the “strategic plan” so that no one’s feelings get hurt, and they apply the label “long-term” so that none of them need be done today.
In tech-land, I see a lot of companies conflate OKRs (Objectives and Key Results) with strategy. OKRs are an exercise in goal setting and measuring progress towards those goals (which is important), but they don’t replace strategy work. The process typically looks like this: once a year, each department head is asked to come up with their own departmental OKRs, which are supposed to be connected to company goals (increase revenue, decrease costs, etc.). Then each department breaks down its OKRs into sub-OKRs for its teams to carry out, which are then broken down into sub-sub-OKRs for sub-teams and/or specific people, and so on down the chain. This process perpetuates departmental silos, and the resulting OKRs are rarely cohesive or mutually supportive (when that does happen, it’s usually a happy accident). Department and team leaders often throw dependencies on other departments and teams, creating extra work those teams haven’t planned for and that isn’t connected to their own OKRs, which drags down the efficiency and effectiveness of the entire organization. It’s easy for leaders to underestimate this drag since it’s hard to measure, and what isn’t measured isn’t managed.
As this book makes clear, setting objectives is not the same as creating a strategy to reach those goals. You still need to do the hard strategy work: making a diagnosis of what obstacle is holding you back, creating a guiding policy for overcoming the obstacle, and breaking that down into coherent actions for the company to take. (Those actions shouldn’t be based on what departments, people, or expertise you already have. Instead, look at what competencies you need to carry out your strategy, apply existing teams and people where they fit, hire where you’re missing expertise, and get rid of competencies the strategy no longer needs.) OKRs can then be applied at the top layer as company goals to reach, applied again to the coherent actions (i.e., what’s the objective of each action, and how will you know if you reached it?), and further broken down for teams and people as needed. You still need an actual strategy before you can set OKRs, but most companies conflate the two.
Blue Sky Objectives: A blue-sky objective is a simple restatement of the desired state of affairs or of the challenge. It skips over the annoying fact that no one has a clue as to how to get there.
For example, “underperformance” isn’t a challenge, it’s a result – a restatement of a goal. The true challenges are the reasons for the underperformance. Unless leadership offers a theory of why things haven’t worked in the past (a.k.a. a diagnosis), or why the challenge is difficult, it is hard to generate good strategy.
The Unwillingness or Inability to Choose: Any strategy that has universal buy-in signals the absence of choice. Because strategy focuses resources, energy, and attention on some objectives rather than others, a change in strategy will make some people worse off and there will be powerful forces opposed to almost any change in strategy (e.g. a department head who faces losing people, funding, headcount, support, etc., as a result of a change in strategy will most likely be opposed to the change). Therefore, strategy that has universal buy-in often indicates a leader who was unwilling to make a difficult choice as to the guiding policy and actions to take to overcome the obstacles.
This is true, but there are ways of mitigating this that he doesn’t discuss, which I talk about in the “Closing Thoughts” section below.
Template-style “strategic planning:” Many strategies are developed by following a template of what a “strategy” should look like. Since strategy is somewhat nebulous, leaders are quick to adopt a template they can fill in since they have no other frame of reference for what goes into a strategy.
These templates usually take this form:
The Vision: Your unique vision of what the org will be like in the future. Often starts with “the best” or “the leading.”
The Mission: High-sounding politically correct statement of the purpose of the org.
The Values: The company’s values. Make sure they are non-controversial.
The Strategies: Fill in some aspirations/goals but call them strategies.
This template-style strategy skips over the hard work of identifying the key challenge to overcome and setting out a guiding policy and actions to overcome it. It treats pious statements of the obvious as if they were decisive insights. The vision, mission, and goals are usually statements that no one would argue against, but that no one is inspired by, either.
I found myself alternating between laughing and shaking my head in disbelief because this section is so on the nose.
New Thought: This is the belief that you only need to envision success to achieve it, and that thinking about failure will lead to failure. The problem with this belief is that strategy requires you to analyze the situation to understand the problem to be solved, as well as anticipating the actions/reactions of customers and competitors, which requires considering both positive and negative outcomes. Ignoring negative outcomes does not set you up for success or prepare you for the unthinkable to happen. It crowds out critical thinking.
Sources of Power
Good strategy will leverage one or more sources of power to overcome the key obstacles. Rumelt describes 7 sources of power, but the list is not exhaustive:
Leverage: Leverage is finding an imbalance in a situation, and exploiting it to produce a disproportionately large payoff. Or, in resource constrained situations (e.g. a startup), it’s using the limited resources at hand to achieve the biggest result (i.e. not trying to do everything at once). Strategic leverage arises from a mixture of anticipating the actions and reactions of competitors and buyers, identifying a pivot point that will magnify the effects of focused effort (e.g. an unmet need of people, an underserved market, your relative strengths/weaknesses, a competence you’ve developed that can be applied to a new context, and so on), and making a concentrated application of effort on only the most critical objectives to get there.
This is a lesson in constraints – a company that isn’t rich in resources (i.e. money, people) is forced to find a sustainable business model and strategy, or perish. I see startups avoid making hard choices about which objectives to pursue by taking investor money and hiring their way out of deciding what not to do. They can avoid designing a strategy by just throwing spaghetti at the wall and hoping something sticks, and if it doesn’t, going back to the investors for more handouts. “Fail fast,” “Ready, fire, aim,” “Move fast and break things,” etc., are all Silicon Valley versions of this thinking worshiped by the industry. If a company is resource constrained, it’s forced to find a sustainable business model and strategy sooner. VC money has a habit of making companies lazy about the business fundamentals of strategy and turning a profit.
Proximate Objectives: Choose an objective that is close enough at hand to be feasible, i.e. proximate. This doesn’t mean your goal needs to lack ambition, or be easy to reach, or that you’re sandbagging. Rather, you should know enough about the nature of the challenge that the sub-problems to work through are solvable, and it’s a matter of focusing individual minds and energy on the right areas to reach an otherwise unreachable goal. For example, landing a man on the moon by 1969 was a proximate objective because Kennedy knew the technology and science necessary was within reach, and it was a matter of allocating, focusing, and coordinating resources properly.
Chain-link Systems: A system has chain-link logic when its performance is limited by its weakest link. In a business context, this typically means each department is dependent on the other such that if one department underperforms, the performance of the entire system will decline. In a strategic setting, this can cause organizations to become stuck, meaning the chain is not made stronger by strengthening one link – you must strengthen the whole chain (and thus becoming un-stuck is its own strategic challenge to overcome). On the flip side, if you design a chain link system, then you can achieve a level of excellence that’s hard for competitors to replicate. For example, IKEA designs its own furniture, builds its own stores, and manages the entire supply chain, which allows it to have lower costs and a superior customer experience. Their system is chain-linked together such that it’s hard for competitors to replicate it without replicating the entire system. IKEA is susceptible to getting stuck, however, if one link of its chain suffers.
Design: Good strategy is design – fitting various pieces together so they work as a coherent whole. Creating a guiding policy and actions that are coherent is a source of power since so few companies do this well. As stated above, a lot of strategies aren’t “designed” and instead are just a list of independent or conflicting objectives.
The tight integration of a designed strategy comes with a downside, however: it’s narrower in focus, more fragile, and less flexible in responding to change. If you’re a huge company with a lot of resources at your disposal (e.g. Microsoft), a tightly designed strategy could be a hindrance. But in situations where resources are constrained (e.g. a startup grasping for a foothold in the market), or the competitive challenge is high, a well-designed strategy can give you the advantage you need to be successful.
Focus: Focus refers to attacking a segment of the market with a product or service that delivers more value to that segment than other players do for the entire market. Doing this requires coordinating policies and objectives across an organization to produce extra power through their interacting and overlapping effects (see design, above), and then applying that power to the right market segment (see leverage, above).
This source of power exists in the UX and product world in the form of building for one specific persona who will love your product, capturing a small – but loyal – share of the market, rather than trying to build a product for “everyone” that captures a potentially bigger part of the market but that no one loves or is loyal to (making it susceptible to people switching to competitors). This advice is especially valuable for small companies and startups who are trying to establish themselves.
Growth: Growing the size of the business is not a strategy – it is the result of increased demand for your products and services. It is the reward for successful innovation, cleverness, efficiency, and creativity. In business, there is blind faith that growth is good, but that is not the case. Growth itself does not automatically create value.
The tech industry has unquestioned faith in growth. VC-backed companies are expected to grow as big as possible, as fast as possible. If you don’t agree, you’re said to lack ambition, and investors won’t fund you. This myth is perpetuated by the tech media. But as Rumelt points out, growth isn’t automatically good. Most companies don’t need to be, and can’t be, as big as companies like Google, Facebook, Apple, and Amazon. Tech companies grow in an artificial way, i.e. spending the money of their investors, not money they’re making from customers. This growth isn’t sustainable, and when they can’t turn a profit they shut down (or get acquired). What could have been a great company, at a smaller size or slower growth rate, now no longer exists. This generally doesn’t harm investors because they only need a handful of big exits out of their entire portfolio, so they pay for the ones that fail off of the profits from the few that actually make it big.
Using Advantage: An advantage is the result of differences – an asymmetry between rivals. Knowing your relative strengths and weaknesses, as well as the relative strengths and weaknesses of your competitors, can help you find an advantage. Strengths and weaknesses are “relative” because a strength you have in one context, or against one competitor, may be a weakness in another context, or against a different competitor. You must press where you have advantage and side-step situations in which you do not. You must exploit your rivals’ weaknesses and avoid leading with your own.
The most basic advantage is producing at a lower cost than your competitors, or delivering more perceived value than your competitors, or a mix of the two. The difficult part is sustaining an advantage. To do that, you need an “isolating mechanism” that prevents competitors from duplicating it. Isolating mechanisms include patents, reputations, commercial and social relationships, network effects, dramatic economies of scale, and tacit knowledge and skill gained through experience.
Once you have an advantage, you should strengthen it by deepening it, broadening it, creating higher demand for your products and services, or strengthening your isolating mechanisms (all explained fully in the book).
Dynamics: Dynamics are waves of change that roll through an industry. They are the net result of a myriad of shifts and advances in technology, cost, competition, politics, and buyer perceptions. Such waves of change are largely exogenous – that is, beyond the control of any one organization. If you can see them coming, they are like an earthquake that creates new high ground and levels what had previously been high ground, leaving behind new sources of advantage for you to exploit.
There are 5 guideposts to look out for: 1. Rising fixed costs; 2. Deregulation; 3. Predictable Biases; 4. Incumbent Response; and 5. Attractor States (i.e. where an industry “should” go). (All of these are explained fully in the book).
Attractor states are especially interesting. Rumelt defines an attractor state as where an industry “should” end up in light of technological forces and the structure of demand. By “should,” he means to emphasize an evolution in the direction of efficiency – meeting the needs and demands of buyers as efficiently as possible. Attractor states differ from corporate visions because they are based on overall efficiency rather than a single company’s desire to capture most of the pie. They are what pundits and industry analysts write about. There’s no guarantee, however, that the attractor state will ever come to pass. As it relates to strategy, you can anticipate that most players will chase the attractor state, which leads many companies to waste resources chasing the wrong vision and faltering as a result (e.g. Cisco rode the wave of “dumb pipes” and “IP everywhere” that AT&T and other telecom companies should have exploited). If you “zig” when other companies “zag,” you can build yourself an advantage.
As a strategist, you should seek to do your own analysis of where an industry is going, and create a strategy based on that (rather than what pundits “predict” will happen). Combining your own proprietary knowledge of your customers, technology, and capabilities with industry trends can give you deeper insights that analysts on the outside can’t see. Taking that a step further, you should also look for second-order effects as a result of industry dynamics. For example, the rise of the microprocessor was predicted by many, and largely came true. But what most people didn’t predict was the second-order effect that commoditized microprocessors getting embedded in more products led to increased demand for software, making the ability to write good software a competitive advantage.
Inertia: Inertia is an organization’s unwillingness or inability to adapt to changing circumstances. As a strategist, you can exploit this by anticipating that it will take many years for large and well-established competitors to alter their basic functioning. For example, Netflix pushed past Blockbuster because the latter could or would not abandon its focus on retail stores.
Entropy: Entropy causes organizations to become less organized and less focused over time. As a strategist, you need to watch out for this in your organization to actively maintain your purpose, form, and methods, even if there are no changes in strategy or competition. You can also use it as a weakness to exploit against your competitors by anticipating that entropy will creep into their business lines. For example, less focused product lines are a sign of entropy. GM’s car lines used to have distinct price points, models, and target buyers, but over time entropy caused each line to creep into each other and overlap, causing declining sales from consumer confusion.
One of the things that surprised me as I read the book is how much overlap there is between doing strategy work and design work – diagnosing the problem, creating multiple potential solutions (i.e. the double diamond), looking at situations from multiple perspectives, weighing tradeoffs in potential solutions, and more. The core of strategy, as he defines it, is identifying and solving problems. Sound familiar? That’s the core of design! He even states, “A master strategist is a designer.”
Rumelt goes on to hold up many examples of winning strategies and advantages from understanding customer needs, behaviors, pain points, and building for a specific customer segment. In other words, doing user-centered design. He doesn’t specifically reference any UX methods, but it was clear to me that the tools of UX work apply to strategy work as well.
The overlap with design doesn’t end there. He has a section about how strategy work is rooted in intuition and subjectivity. There’s no way to prove a strategy is the “best” or “right” one. A strategy is a judgement of a situation and the best path forward. You can say the exact same thing about design as well.
Since a strategy can’t be proven to be right, Rumelt recommends considering a strategy a “hypothesis” that can be tested and refined over time. Leaders should listen for signals that their strategy is or is not working, and make adjustments accordingly. In other words, strategists should iterate on their solutions, same as designers.
Furthermore, this subjectivity causes all kinds of challenges for leaders, such as saying “no” to people, selling people on their version of reality, and so on. He doesn’t talk about how to overcome these challenges, but as I read the book I realized these are issues that designers have to learn how to deal with.
Effective designers have to sell their work to people to get it built. Then they have to be prepared for criticism, feedback, questions, and alternate ideas. Since their work can’t be “proven” to be correct, it’s open to attack from anyone and everyone. If their work gets built and shipped to customers, they still need to be open to it being “wrong” (or at least not perfect), listen to feedback from customers, and iterate further as needed. All of these soft skills are ways of dealing with the problems leaders face when implementing a strategy.
In other words, design work is strategy work. As Rumelt says, “Good strategy is design, and design is about fitting various pieces together so they work as a coherent whole.”
If you enjoyed this post (and I’m assuming you did if you made it this far), then I highly recommend reading the book yourself. I only covered the highlights here, and the book goes into a lot more depth on all of these topics. Enjoy!
Today the people I have personally interacted with are: a Russian immigrant, three black men, a Japanese-American woman, and a Jewish woman. I live in New York City and this is a typical day. But when I sign onto Twitter, I am flooded with messages suggesting that the United States is engaged in a political war over its racial destiny. I would gladly ignore these messages if I could, but there appears to be somebody with a lot of influence setting a media agenda on this.
So at last I got to Omi and Winant’s chapter on “Nation” – on theories of race as nation. The few colleagues who expressed interest in these summaries of Omi and Winant were concerned that they would not tackle the relationship between race and colonialism; indeed they do tackle it in this chapter, though it comes perhaps surprisingly late in their analysis. Coming to this chapter, I had high hopes that these authors, whose scholarship has been very helpfully thorough on other aspects of race, would illuminate the connection between nation and race in a way that clarifies the present political situation in the U.S. I have to say that I wound up being disappointed in their analysis, but that those disappointments were enlightening. Since this edition of their book was written in 2014, when their biggest target was “colorblindness,” the gaps in their analysis are telling precisely because they show how an educated, informed imagination could not foresee today’s resurgence of white nationalism in the United States.
Having said that, Omi and Winant are not naive about white nationalism. On the contrary, they open their chapter with a long section on The White Nation, a phrase I can’t even type without cringing. They paint a picture in broad strokes: yes, the United States has for most of its history explicitly been a nation of white people. This racial identity underwrote slavery, the conquest of land from Native Americans, and policies of immigration, naturalization, and segregation. For much of its history, for most of its people, the national project of the United States was a racial project. So say Omi and Winant.
Then they also say (in 2014) that this sense of the nation as a white nation is breaking down. Much of their chapter is a treatment of “national insurgencies”, which have included such a wide variety of movements as Pan-Africanism, cultural insurgencies that promote ‘ethnic’ culture within the United States, and Communism. (They also make passing reference to feminism as a comparable kind of national insurgency undermining the notion that the United States is a white male nation. While the suggestion is interesting, they do not develop it enough to be convincing, and instead the inclusion of gender into their history of racial nationalism comes off as a perfunctory nod to their progressive allies.)
Indeed, they open this chapter in a way that is quite uncharacteristic for them. They write in a completely different register: not historical and scholarly analysis, but overt ideology-mythology. They pose the question (originally posed by Du Bois) in personal and philosophical terms to the reader: whose nation is it? Is it yours? They do this quite brazenly, in a way that denies one the critical intervention of questioning what a nation really is, of dissecting it as an imaginary social form. It is troubling because it seems a subtle abuse of the otherwise meticulously scholarly character of their work. They set up the question of national identity as a pitched battle over a binary, much as is being done today.
This Manichean painting of American destiny is perhaps excused because of the detail with which they have already discussed ethnicity and class at this point in the book. And it does set up their rather prodigious account of Pan-Africanism. But it puts them in the position of appearing to accept uncritically an intuitive notion of what a nation is even while pointing out how this intuitive idea gets challenged. Indeed, they only furnish one definition of a nation, and it is Joseph Stalin’s, from a 1908 pamphlet:
A nation is a historically constituted, stable community of people, formed on the basis of a common language, territory, economic life, and psychological make-up, manifested in a common culture. (Stalin, 1908)
So much for that.
Regarding colonialism, Omi and Winant are surprisingly active in their rejection of ‘colonialist’ explanations of race in the U.S. beyond the historical conditions. They write respectfully of Wallerstein’s world-system theory as contributing to a global understanding of race, but do not see it as illuminating the specific dynamics of race in the United States very much. Specifically, they bring up Bob Blauner’s Racial Oppression in America as paradigmatic of the application of internal colonialism theory to the United States, then pick it apart and reject it. According to internal colonialism (roughly):
There is a geography or spatial arrangement of population groups along racial lines
There is a dynamic of cultural domination and resistance, organized on lines of racial antagonism
There are systems of exploitation and control organized along racial lines
Blauner took up internal colonialism theory explicitly in 1972 to contribute to ‘radical nationalist’ practice of the 60’s, admitting that it is more inspired by activists than sociologists. So we might suspect, with Omi and Winant, that his discussion of colonialism is more about crafting an exciting ideology than one that is descriptively accurate. For example, Blauner makes a distinction between “colonized and immigrant minorities”, where the “colonized” minorities are those whose participation in the United States project was forced (Africans and Latin Americans) while those (Europeans) who came voluntarily are “immigrants” and therefore qualitatively different. Omi and Winant take issue with this classification, as many European immigrants were themselves refugees of ethnic cleansing, while it leaves the status of Asian Americans very unclear. At best, ‘internal colonialism’ theory, as far as the U.S. is concerned, places emphasis on known history but does not add to it.
Omi and Winant frequently ascribe to theorists of race agency in racial politics, as if the theories enable self-conceptions that enable movements. This may be professional self-aggrandizement. They also perhaps set up nationalist accounts of race weakly because they want to deliver the goods in their own theory of racial formation, which appears in the next chapter. They see nation-based theories as capturing something important:
In our view, the nation-based paradigm of race is an important component of our understanding of race: in highlighting “peoplehood,” collective identity, it “invents tradition” (Hobsbawm and Ranger, eds. 1983) and “imagines community” (Anderson, 1998). Nation-based understandings of race provide affective identification: They promise a sense of ineffable connection within racially identified groups; they engage in “collective representation” (Durkheim 2014). The tropes of “soul,” of “folk,” of hermanos/hermanas unidos/unidas uphold Duboisian themes. They channel Marti’s hemispheric consciousness (Marti 1977); and Vasconcelo’s ideas of la raza cosmica (1979, Stavans 2011). In communities and movements, in the arts and popular media, as well as universities and colleges (especially in ethnic studies) these frameworks of peoplehood play a vital part in maintaining a sense of racial solidarity, however uneven or partial.
Now, I don’t know most of the references in the above quotation. But one gets the sense that Omi and Winant believe strongly that race contains an affective identification component. This may be what they were appealing to in a performative or demonstrative way earlier in the chapter. While they must be on to something, it is strange that they have this as the main takeaway of the history of race and nationalism. It is especially unconvincing that their conclusion after studying the history of racial nationalism is that ethnic studies departments in universities are what racial solidarity is really about, because under their own account the creation of ethnic studies departments was an accomplishment of racial political organization, not the precursor to it.
Omi and Winant deal in only the most summary terms with the ways in which nationalism is part of the operation of a nation state. They see racial nationalism as a factor in slavery and colonialism, and also in Jim Crow segregation, but deal only loosely with whether and how the state benefited from this kind of nationalism. In other words, they have a theory of racial nationalism that is weak on political economy. Their only mention of integration in military service, for example, is the mention that service in the American Civil War was how many Irish Americans “became white”. Compare this with Fred Turner‘s account of how increased racial liberalization was part of the United States strategy to mobilize its own army against fascism.
In my view, Omi and Winant’s blind spot is their affective investment in their view of the United States as embroiled in perpetual racial conflict. While justified and largely well-informed, it prevents them from seeing a wide range of different centrist views as anything but an extension of white nationalism. For example, they see white nationalism in nationalist celebrations of ‘the triumph of democracy’ on a Western model. There is of course a lot of truth in this, but also, as is abundantly clear today, when there appears to be a conflict between those who celebrate a multicultural democracy with civil liberties and those who prefer overt racial authoritarianism, there is something else going on that Omi and Winant miss.
My suspicion is this: in their haste to target “colorblind neoliberalism” as an extension of racism-as-usual, they have missed how in the past forty years or so, and especially in the past eight, such neoliberalism has itself been a national project. Nancy Fraser can argue that progressive neoliberalism has been hegemonic and rejected by right-wing populists. A brief look at the center left media will show how progressivism is at least as much of an affective identity in the United States as is whiteness, despite the fact that progressivism is not in and of itself a racial construct or “peoplehood”. Omi and Winant believed that colorblind neoliberalism would be supported by white nationalists because it was neoliberal. But now it has been rejected by white nationalists because it is colorblind. This is a difference that makes a difference.
I got into an actual argument with a real person about Melania Trump’s “I really don’t care. Do U?” jacket. I’m going to double down on it and write about it because I have the hot take nobody has been talking about.
I asked this person what they thought about Melania’s jacket, and the response was, “I don’t care what she wears. She wore a jacket to a plane; so what? Is she even worth paying attention to? She’s not an important person whose opinions matter. The media is too focused on something that doesn’t matter. Just leave her alone.”
To which I responded, “So, you agree with the message on the jacket. If Melania had said that out loud, you’d say, ‘yeah, I don’t care either.’ Isn’t that interesting?”
No, it wasn’t (to the person I spoke with). It was just annoying to be talking about it in the first place. Not interesting, nothing to see here.
Back it up and let’s make some assumptions:
FLOTUS thought at least as hard about what to wear that day as I do in the morning, and is a lot better at it than I am, because she is an experienced professional at appearing.
Getting the mass media to fall over itself on a gossip item about the ideological implications of first lady fashion gets you a lot of clicks, followers, attention, etc. and that is the political currency of the time. It’s the attention economy, stupid.
FLOTUS got a lot of attention for wearing that jacket because of its ambiguity. The first-order ambiguity of whether it was a coded message playing into any preexisting political perspective was going to get attention, obviously. But the second-order ambiguity, the one that makes it actually clever, is its potential reference to the attention to the first-order ambiguity. The jacket, in this second-order frame, literally expresses apathy about any attention given to it and questions whether you care yourself. That’s a clever, cool concept for a jacket worn on, like, the street. As a viral social media play, it is even more clever.
It’s clever because with that second-order self-referentiality, everybody who hears about it (which might be everybody in the world, who knows) has to form an opinion about it, and the most sensible opinion about it, the one you must ultimately conclude in order to preserve your sanity, is the original one expressed: “I don’t really care.” Clever.
What’s the point? First, I’m arguing that this was deliberate self-referential virality of the same kind I used to give Weird Twitter a name. Having researched this subject before, I claim expertise and knowing-what-I’m-talking-about. This is a tactic one can use in social media to do something clever. Annoying, but clever.
Second, and maybe more profound: in the messed up social epistemology of our time, where any image or message fractally reverberates between thousands of echo chambers, there is hardly any ground for “social facts”, or matters of consensus about the social world. Such facts require not just accurate propositional content but also enough broad social awareness of them to be believed by a quorum of the broader population. The disintegration of social facts, which is deeply challenging for the American self-conception as a democracy, is part of our political crisis right now.
There aren’t a lot of ways to accomplish social facts today. But one way is to send an ambiguous or controversial message that sparks a viral media reaction whose inevitable self-examinations resolve onto the substance of the original message. The social fact becomes established as a fait accompli through everybody’s conversation about it before anybody knows what’s happened.
That’s what’s happened with this jacket: it spoke the truth. We can give FLOTUS credit for that. And the truth is: do any of us really care about any of this? That’s maybe not an irrelevant question, however you answer it.
I’m excited to announce that I will be co-supervising up to four very generous and well-supported PhD scholarships at the University of New South Wales (Sydney) on the themes of “Living with Pervasive Media Technologies from Drones to Smart Homes” and “Data Justice: Technology, policy and community impact”. Please contact me directly if you have any questions. Expressions of Interest are due before 20 July, 2017 via the links below. Please note that you have to be eligible for post-graduate study at UNSW in order to apply – those requirements are slightly different for the Scientia programme but require that you have a first class honours degree or a Master’s by research. There may be some flexibility here but that would be ideal.
Digital assistants, smart devices, drones and other autonomous and artificial intelligence technologies are rapidly changing work, culture, cities and even the intimate spaces of the home. They are 21st century media forms: recording, representing and acting, often in real-time. This project investigates the impact of living with autonomous and intelligent media technologies. It explores the changing situation of media and communication studies in this expanded field. How do these media technologies refigure relations between people and the world? What policy challenges do they present? How do they include and exclude marginalized peoples? How are they transforming media and communications themselves? (Supervisory team: Michael Richardson, Andrew Murphie, Heather Ford)
With growing concerns that data mining, ubiquitous surveillance and automated decision making can unfairly disadvantage already marginalised groups, this research aims to identify policy areas where injustices are caused by data- or algorithm-driven decisions, examine the assumptions underlying these technologies, document the lived experiences of those who are affected, and explore innovative ways to prevent such injustices. Innovative qualitative and digital methods will be used to identify connections across community, policy and technology perspectives on ‘big data’. The project is expected to deepen social engagement with disadvantaged communities, and strengthen global impact in promoting social justice in a datafied world. (Supervisory team: Tanja Dreher, Heather Ford, Janet Chan)
For the second time in a week, my phone buzzed with a New York Times alert, notifying me that another celebrity had died by suicide. My heart sank. I tuned into the Crisis Text Line Slack channel to see how many people were waiting for a counselor’s help. Volunteer crisis counselors were pouring in, but the queue kept growing.
Celebrity suicides trigger people who are already on edge to wonder whether or not they too should seek death. Since the Werther effect study in 1974, countless studies have conclusively and repeatedly shown that how the news media reports on suicide matters. The World Health Organization has a detailed set of recommendations for journalists and news media organizations on how to responsibly report on suicide so as to not trigger copycats. Yet in the past few years, few news organizations have bothered to abide by them, even as recent data shows that the reporting on Robin Williams’ death triggered an additional 10 percent increase in suicide and a 32 percent increase in people copying his method of death. The recommendations aren’t hard to follow — they focus on how to convey important information without adding to the problem.
Crisis counselors at the Crisis Text Line are on the front lines. As a board member, I’m in awe of their commitment and their willingness to help those who desperately need support and can’t find it anywhere else. But it pains me to watch as elite media amplifiers make counselors’ lives more difficult under the guise of reporting the news or entertaining the public.
Through data, we can see the pain triggered by 13 Reasons Why and the New York Times. We see how salacious reporting on method prompts people to consider that pathway of self-injury. Our volunteer counselors are desperately trying to keep people alive and get them help, while for-profit companies rake in dollars and clicks. If we’re lucky, the outlets triggering unstable people write off their guilt by providing a link to our services, with no consideration of how much pain they’ve caused or the costs we must endure.
I want to believe in journalism. But my faith is waning.
I want to believe in journalism. I want to believe in the idealized mandate of the fourth estate. I want to trust that editors and journalists are doing their best to responsibly inform the public and help create a more perfect union. But my faith is waning.
Many Americans — especially conservative Americans — do not trust contemporary news organizations. This “crisis” is well-trod territory, but the focus on fact-checking, media literacy, and business models tends to obscure three features of the contemporary information landscape that I think are poorly understood:
Differences in worldview are being weaponized to polarize society.
We cannot trust organizations, institutions, or professions when they’re abstracted away from us.
Economic structures built on value extraction cannot enable healthy information ecosystems.
Let me begin by apologizing for the heady article, but the issues that we’re grappling with are too heady for a hot take. Please read this to challenge me, debate me, offer data to show that I’m wrong. I think we’ve got an ugly fight in front of us, and I think we need to get more sophisticated about our thinking, especially in a world where foreign policy is being boiled down to 140 characters.
1. Your Worldview Is Being Weaponized
I was a teenager when I showed up at a church wearing jeans and a T-shirt to see my friend perform in her choir. The pastor told me that I was not welcome because this was a house of God, and we must dress in a manner that honors Him. Not good at following rules, I responded flatly, “God made me naked. Should I strip now?” Needless to say, I did not get to see my friend sing.
Faith is an anchor for many people in the United States, but the norms that surround religious institutions are man-made, designed to help people make sense of the world in which we operate. Many religions encourage interrogation and questioning, but only within a well-established framework. Children learn those boundaries, just as they learn what is acceptable in secular society. They learn that talking about race is taboo and that questioning the existence of God may leave them ostracized.
Like many teenagers before and after me, I was obsessed with taboos and forbidden knowledge. I sought out the music Tipper Gore hated, read the books my school banned, and tried to get answers to any question that made adults gasp. Anonymously, I spent late nights engaged in conversations on Usenet, determined to push boundaries and make sense of adult hypocrisy.
Following a template learned in Model UN, I took on strong positions in order to debate and learn. Having already lost faith in the religious leaders in my community, I saw no reason to respect the dogma of any institution. And because I made a hobby out of proving teachers wrong, I had little patience for the so-called experts in my hometown. I was intellectually ravenous, but utterly impatient with, if not outright cruel to, the adults around me. I rebelled against hierarchy and was determined to carve my own path at any cost.
I have an amazing amount of empathy for those who do not trust the institutions that elders have told them they must respect. Rage against the machine. We don’t need no education, no thought control. I’m also fully aware that you don’t garner trust in institutions through coercion or rational discussion. Instead, trust often emerges from extreme situations.
Many people have a moment where they wake up and feel like the world doesn’t really work like they once thought or like they were once told. That moment of cognitive reckoning is overwhelming. It can be triggered by any number of things — a breakup, a death, depression, a humiliating experience. Everything comes undone, and you feel like you’re in the middle of a tornado, unable to find the ground. This is the basis of countless literary classics, the crux of humanity. But it’s also a pivotal feature in how a society comes together to function.
Everyone needs solid ground, so that when your world has just been destabilized, what comes next matters. Who is the friend that picks you up and helps you put together the pieces? What institution — or its representatives — steps in to help you organize your thinking? What information do you grab onto in order to make sense of your experiences?
Contemporary propaganda isn’t about convincing someone to believe something, but convincing them to doubt what they think they know.
Countless organizations and movements exist to pick you up during your personal tornado and provide structure and a framework. Take a look at how Alcoholics Anonymous works. Other institutions and social bodies know how to trigger that instability and then help you find ground. Check out the dynamics underpinning military basic training. Organizations, movements, and institutions that can manipulate psychological tendencies toward a sociological end have significant power. Religious organizations, social movements, and educational institutions all play this role, whether or not they want to understand themselves as doing so.
Because there is power in defining a framework for people, there is good reason to be wary of any body that pulls people in when they are most vulnerable. Of course, that power is not inherently malevolent. There is fundamental goodness in providing structures to help those who are hurting make sense of the world around them. Where there be dragons is when these processes are weaponized, when these processes are designed to produce societal hatred alongside personal stability. After all, one of the fastest ways to bond people and help them find purpose is to offer up an enemy.
And here’s where we’re in a sticky spot right now. Many large institutions — government, the church, educational institutions, news organizations — are brazenly asserting their moral authority without grappling with their own shit. They’re ignoring those among them who are using hate as a tool, and they’re ignoring their own best practices and ethics, all to help feed a bottom line. Each of these institutions justifies itself by blaming someone or something to explain why they’re not actually that powerful, why they’re actually the victim. And so they’re all poised to be weaponized in a cultural war rooted in how we stabilize American insecurity. And if we’re completely honest with ourselves, what we’re really up against is how we collectively come to terms with a dying empire. But that’s a longer tangent.
Any teacher knows that it only takes a few students to completely disrupt a classroom. Forest fires spark easily under certain conditions, and the ripple effects are huge. As a child, when I raged against everyone and everything, it was my mother who held me into the night. When I was a teenager chatting my nights away on Usenet, the two people who most memorably picked me up and helped me find stable ground were a deployed soldier and a transgender woman, both of whom held me as I asked insane questions. They absorbed the impact and showed me a different way of thinking. They taught me the power of strangers counseling someone in crisis. As a college freshman, when I was spinning out of control, a computer science professor kept me solid and taught me how profoundly important a true mentor could be. Everyone needs someone to hold them when their world spins, whether that person be a friend, family, mentor, or stranger.
Fifteen years ago, when parents and the news media were panicking about online bullying, I saw a different risk. I saw countless kids crying out online in pain only to be ignored by those who preferred to prevent teachers from engaging with students online or to create laws punishing online bullies. We saw the suicides triggered as youth tried to make “It Gets Better” videos to find community, only to be further harassed at school. We saw teens studying the acts of Columbine shooters, seeking out community among those with hateful agendas and relishing the power of lashing out at those they perceived to be benefiting at their expense. But it all just seemed like a peculiar online phenomenon, proof that the internet was cruel. Too few of us tried to hold those youth who were unquestionably in pain.
Teens who are coming of age today are already ripe for instability. Their parents are stressed; even if they have jobs, nothing feels certain or stable. There doesn’t seem to be a path toward economic stability that doesn’t involve college, but there doesn’t seem to be a path toward college that doesn’t involve mind-bending debt. Opioids seem like a reasonable way to numb the pain in far too many communities. School doesn’t seem like a safe place, so teenagers look around and whisper among friends about who they believe to be the most likely shooter in their community. As Stephanie Georgopulos notes, the idea that any institution can offer security seems like a farce.
When I look around at who’s “holding” these youth, I can’t help but notice the presence of people with a hateful agenda. And they terrify me, in no small part because I remember an earlier incarnation.
In 1995, when I was trying to make sense of my sexuality, I turned to various online forums and asked a lot of idiotic questions. I was adopted by the aforementioned transgender woman and numerous other folks who heard me out, gave me pointers, and helped me think through what I felt. In 2001, when I tried to figure out what the next generation did, I realized that struggling youth were more likely to encounter a Christian gay “conversion therapy” group than a supportive queer peer. Queer folks were sick of being attacked by anti-LGBT groups, and so they had created safe spaces on private mailing lists that were hard for lost queer youth to find. And so it was that in their darkest hours, these youth were getting picked up by those with a hurtful agenda.
Teens who are trying to make sense of social issues aren’t finding progressive activists. They’re finding the so-called alt-right.
Fast-forward 15 years, and teens who are trying to make sense of social issues aren’t finding progressive activists willing to pick them up. They’re finding the so-called alt-right. I can’t tell you how many youth we’ve seen asking questions like I asked being rejected by people identifying with progressive social movements, only to find camaraderie among hate groups. What’s most striking is how many people with extreme ideas are willing to spend time engaging with folks who are in the tornado.
Spend time reading the comments below the YouTube videos of youth struggling to make sense of the world around them. You’ll quickly find comments by people who spend time in the manosphere or subscribe to white supremacist thinking. They are diving in and talking to these youth, offering a framework to make sense of the world, one rooted in deeply hateful ideas. These self-fashioned self-help actors are grooming people to see that their pain and confusion isn’t their fault, but the fault of feminists, immigrants, people of color. They’re helping them believe that the institutions they already distrust — the news media, Hollywood, government, school, even the church — are actually working to oppress them.
Most people who encounter these ideas won’t embrace them, but some will. Still, even those who don’t will never let go of the doubt that has been instilled in the institutions around them. It just takes a spark.
So how do we collectively make sense of the world around us? There isn’t one universal way of thinking, but even the act of constructing knowledge is becoming polarized. Responding to the uproar in the news media over “alternative facts,” Cory Doctorow noted:
We’re not living through a crisis about what is true, we’re living through a crisis about how we know whether something is true. We’re not disagreeing about facts, we’re disagreeing about epistemology. The “establishment” version of epistemology is, “We use evidence to arrive at the truth, vetted by independent verification (but trust us when we tell you that it’s all been independently verified by people who were properly skeptical and not the bosom buddies of the people they were supposed to be fact-checking).”
The “alternative facts” epistemological method goes like this: “The ‘independent’ experts who were supposed to be verifying the ‘evidence-based’ truth were actually in bed with the people they were supposed to be fact-checking. In the end, it’s all a matter of faith, then: you either have faith that ‘their’ experts are being truthful, or you have faith that we are. Ask your gut, what version feels more truthful?”
Doctorow creates these oppositional positions to make a point and to highlight that there is a war over epistemology, or the way in which we produce knowledge.
The reality is much messier, because what’s at stake isn’t simply about resolving two competing worldviews. Rather, what’s at stake is that there is no universal way of knowing, and we have reached a stage in our political climate where there is more power in seeding doubt, destabilizing knowledge, and encouraging others to distrust other systems of knowledge production.
Contemporary propaganda isn’t about convincing someone to believe something, but convincing them to doubt what they think they know. And once people’s assumptions have come undone, who is going to pick them up and help them create a coherent worldview?
2. You Can’t Trust Abstractions
Deeply committed to democratic governance, George Washington believed that a representative government could only work if the public knew their representatives. As a result, our Constitution dictates that there should be no more than one representative for every 30,000 constituents. When we stopped adding representatives to the House in 1913 (frozen at 435), each member represented roughly 225,000 constituents. Today, the ratio of congresspeople to constituents is more than 700,000:1. Most people will never meet their representative, and few feel as though Washington truly represents their interests. The democracy that we have is representational only in ideal, not in practice.
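The arithmetic behind those ratios is easy to check. A minimal sketch, where the population figures are my own rough assumptions based on census estimates rather than numbers from this essay:

```python
# Rough check of the House representation ratios discussed above.
# Population figures are approximate census estimates (assumptions).
HOUSE_SEATS = 435  # size of the House, frozen since 1913

populations = {
    1913: 97_000_000,   # approx. U.S. population when the House froze
    2018: 327_000_000,  # approx. U.S. population today
}

for year, pop in populations.items():
    ratio = pop // HOUSE_SEATS
    print(f"{year}: ~{ratio:,} constituents per representative")
```

With these assumed figures, the sketch yields roughly 223,000 constituents per member in 1913 and over 750,000 today, consistent with the “roughly 225,000” and “more than 700,000:1” figures above.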
As our Founding Fathers knew, it’s hard to trust an institution when it feels inaccessible and abstract. All around us, institutions are increasingly divorced from the communities in which they operate, with often devastating costs. Thanks to new models of law enforcement, police officers don’t typically come from the community they serve. In many poor communities, teachers also don’t come from the community in which they teach. The volunteer U.S. military hardly draws from all communities, and those who don’t know a soldier are less likely to trust or respect the military.
Journalism can only function as the fourth estate when it serves as a tool to voice the concerns of the people and to inform those people of the issues that matter. Throughout the 20th century, communities of color challenged mainstream media’s limitations and highlighted that few newsrooms represented the diverse backgrounds of their audiences. As such, we saw the rise of ethnic media and a challenge to newsrooms to be smarter about their coverage. But let’s be real — even as news organizations articulate a commitment to the concerns of everyone, newsrooms have done a dreadful job of becoming more representative. Over the past decade, we’ve seen racial justice activists challenge newsrooms for their failure to cover Ferguson, Standing Rock, and other stories that affect communities of color.
Meanwhile, local journalism has nearly died. The success of local journalism didn’t just matter because those media outlets reported the news, but because it meant that many more people were likely to know journalists. It’s easier to trust an institution when it has a human face that you know and respect. And as fewer and fewer people know journalists, they trust the institution less and less. Meanwhile, the rise of social media, blogging, and new forms of talk radio has meant that countless individuals have stepped in to cover issues not being covered by mainstream news, often using a style and voice that is quite unlike that deployed by mainstream news media.
We’ve also seen the rise of celebrity news hosts. These hosts help push the boundaries of parasocial interactions, allowing the audience to feel deep affinity toward these individuals, as though they are true friends. Tabloid papers have long capitalized on people’s desire to feel close to celebrities by helping people feel like they know the royal family or the Kardashians. Talking heads capitalize on this, in no small part by how they communicate with their audiences. So, when people watch Rachel Maddow or listen to Alex Jones, they feel more connected to the message than they would when reading a news article. They begin to trust these people as though they are neighbors. They feel real.
No amount of drop-in journalism will make up for the loss of journalists within the fabric of local communities.
People want to be informed, but who they trust to inform them is rooted in social networks, not institutions. The trust of institutions stems from trust in people. The loss of the local paper means a loss of trusted journalists and a connection to the practices of the newsroom. As always, people turn to their social networks to get information, but what flows through those social networks is less and less likely to be mainstream news. But here’s where you also get an epistemological divide.
As Francesca Tripodi points out, many conservative Christians have developed a media literacy practice that emphasizes the “original” text rather than an intermediary. Tripodi points out that the same type of scriptural inference that Christians apply in Bible study is often also applied to reading the Constitution, tax reform bills, and Google results. This approach is radically different than the approach others take when they rely on intermediaries to interpret news for them.
As the institutional construction of news media becomes more and more proximately divorced from the vast majority of people in the United States, we can and should expect trust in news to decline. No amount of fact-checking will make up for a widespread feeling that coverage is biased. No amount of articulated ethical commitments will make up for the feeling that you are being fed clickbait headlines.
No amount of drop-in journalism will make up for the loss of journalists within the fabric of local communities. And while the population that believes CNN and the New York Times are “fake news” is not demographically representative, the questionable tactics that news organizations use are bound to increase distrust among those who still have faith in them.
3. The Fourth Estate and Financialization Are Incompatible
If you’re still with me at this point, you’re probably deeply invested in scholarship or journalism. And, unless you’re one of my friends, you’re probably bursting at the seams to tell me that the reason journalism is all screwed up is because the internet screwed news media’s business model. So I want to ask a favor: Quiet that voice in your head, take a deep breath, and let me offer an alternative perspective.
There are many types of capitalism. After all, the only thing that defines capitalism is the private control of industry (as opposed to government control). Most Americans have been socialized into believing that all forms of capitalism are inherently good (which, by the way, was a propaganda project). But few are encouraged to untangle the different types of capitalism and different dynamics that unfold depending on which structure is operating.
I grew up in mom-and-pop America, where many people dreamed of becoming small business owners. The model was simple: Go to the bank and get a loan to open a store or a company. Pay back that loan at a reasonable interest rate — knowing that the bank was making money — until eventually you owned the company outright. Build up assets, grow your company, and create something of value that you could pass on to your children.
In the 1980s, franchises became all the rage. Wannabe entrepreneurs saw a less risky path to owning their own business. Rather than having to figure it out alone, you could open a franchise with a known brand and a clear process for running the business. In return, you had to pay some overhead to the parent company. Sure, there were rules to follow and you could only buy supplies from known suppliers and you didn’t actually have full control, but it kinda felt like you did. Like being an Uber driver, it was the illusion of entrepreneurship that was so appealing. And most new franchise owners didn’t know any better, nor were they able to read the writing on the wall when the water all around them started boiling their froggy self. I watched my mother nearly drown, and the scars are still visible all over her body.
I will never forget the U.S. Savings & Loan crisis, not because I understood it, but because it was when I first realized that my Richard Scarry impression of how banks worked was way wrong. Only two decades later did I learn to see the FIRE industries (Finance, Insurance, and Real Estate) as extractive ones. They aren’t there to help mom-and-pop companies build responsible businesses, but to extract value from their naiveté. Like today’s post-college youth are learning, loans aren’t there to help you be smart, but to bend your will.
It doesn’t take a quasi-documentary to realize that McDonald’s is not a fast-food franchise; it’s a real estate business that uses a franchise structure to extract capital from naive entrepreneurs. Go talk to a wannabe restaurant owner in New York City and ask them what it takes to start a business these days. You can’t even get a bank loan or lease in 2018 without significant investor backing, which means that the system isn’t set up for you to build a business and pay back the bank, pay a reasonable rent, and develop a valuable asset. You are simply a pawn in a financialized game between your investors, the real estate companies, the insurance companies, and the bank, all of which want to extract as much value from your effort as possible. You’re just another brick in the wall.
Now let’s look at the local news ecosystem. Starting in the 1980s, savvy investors realized that many local newspapers owned prime real estate in the center of key towns. These prized assets would make for great condos and office rentals. Throughout the country, local news shops started getting eaten up by private equity and hedge funds — or consolidated by organizations controlled by the same forces. Media conglomerates sold off their newsrooms as they felt increased pressure to increase profits quarter over quarter.
Building a sustainable news business was hard enough when the news had a wealthy patron who valued the goals of the enterprise. But the finance industry doesn’t care about sustaining the news business; it wants a return on investment. And the extractive financiers who targeted the news business weren’t looking to keep the news alive. They wanted to extract as much value from those businesses as possible. Taking a page out of McDonald’s playbook, they forced the newsrooms to sell their real estate. News organizations then had to rent from new landlords who demanded obscene sums, often forcing them to move out of their buildings. News outlets were forced to reduce staff, produce more junk content, sell more ads, and find countless ways to cut costs. Of course the news suffered — the goal was to push news outlets into bankruptcy or a sale, especially if the companies had pensions or other costs that couldn’t be excised.
Yes, the fragmentation of the advertising industry due to the internet hastened this process. And let’s also be clear that business models in the news business have never been clean. But no amount of innovative new business models will make up for the fact that you can’t sustain responsible journalism within a business structure that requires newsrooms to make more money quarter over quarter to appease investors. This does not mean that you can’t build a sustainable news business, but if the news is beholden to investors trying to extract value, it’s going to be impossible. And if news companies have no assets to rely on (such as their now-sold real estate), they are fundamentally unstable and likely to engage in unhealthy business practices out of economic desperation.
Untangling our country from this current version of capitalism is going to be as difficult as curbing our addiction to fossil fuels. I’m not sure it can be done, but as long as we look at companies and blame their business models without looking at the infrastructure in which they are embedded, we won’t even begin taking the first steps. Fundamentally, both the New York Times and Facebook are public companies, beholden to investors and desperate to increase their market cap. Employees in both organizations believe themselves to be doing something important for society.
Of course, journalists don’t get paid well, while Facebook’s employees can easily threaten to walk out if the stock doesn’t keep rising, since they’re also investors. But we also need to recognize that the vast majority of Americans have a stake in the stock market. Pension plans, endowments, and retirement plans all depend on stocks going up — and those public companies depend on big investors investing in them. Financial managers don’t invest in news organizations that are happy to be stable break-even businesses. Heck, even Facebook is in deep trouble if it can’t continue to increase ROI, whether through attracting new customers (advertisers and users), increasing revenue per user, or diversifying its businesses. At some point, it too will get desperate, because no business can increase ROI forever.
ROI capitalism isn’t the only version of capitalism out there. We take it for granted and tacitly accept its weaknesses by creating binaries, as though the only alternative is Cold War Soviet Union–styled communism. We’re all frogs in an ocean that’s quickly getting warmer. Two degrees will affect a lot more than oceanfront properties.
In my mind, we have a hard road ahead of us if we actually want to rebuild trust in American society and its key institutions (which, TBH, I’m not sure is everyone’s goal). There are three key higher-order next steps, all of which are at the scale of the New Deal.
Create a sustainable business structure for information intermediaries (like news organizations) that allows them to be profitable without the pressure of ROI. In the case of local journalism, this could involve subsidized rent, restrictions on types of investors or takeovers, or a smartly structured double bottom-line model. But the focus should be on strategically building news organizations as a national project to meet the needs of the fourth estate. It means moving away from a journalism model that is built on competition for scarce resources (ads, attention) to one that’s incentivized by societal benefits.
Actively and strategically rebuild the social networks of America. Create programs beyond the military that incentivize people from different walks of life to come together and achieve something great for this country. This could be connected to job training programs or rooted in community service, but it cannot be done through the government alone or, perhaps, at all. We need the private sector, religious organizations, and educational institutions to come together and commit to designing programs that knit together America while also providing the tools of opportunity.
Find new ways of holding those who are struggling. We don’t have a social safety net in America. For many, the church provides the only accessible net when folks are lost and struggling, but we need a lot more. We need to work together to build networks that can catch people when they’re falling. We’ve relied on volunteer labor for a long time in this domain—women, churches, volunteer civic organizations—but our current social configuration makes this extraordinarily difficult. We’re in the middle of an opiate crisis for a reason. We need to think smartly about how these structures or networks can be built and sustained so that we can collectively reach out to those who are falling through the cracks.
Fundamentally, we need to stop triggering one another because we’re facing our own perceived pain. This means we need to build large-scale cultural resilience. While we may be teaching our children “social-emotional learning” in the classroom, we also need to start taking responsibility at scale. Individually, we need to step back and empathize with others’ worldviews and reach out to support those who are struggling. But our institutions also have important work to do.
At the end of the day, if journalistic ethics means anything, newsrooms cannot justify creating spectacle out of their reporting on suicide or other topics just because they feel pressure to create clicks. They have the privilege of choosing what to amplify, and they should focus on what is beneficial. If they can’t operate by those values, they don’t deserve our trust. While I strongly believe that technology companies have a lot of important work to do to be socially beneficial, I hold news organizations to a higher standard because of their own articulated commitments and expectations that they serve as the fourth estate. And if they can’t operationalize ethical practices, I fear the society that must be knitted together to self-govern is bound to fragment even further.
Trust cannot be demanded. It’s only earned by being there at critical junctures when people are in crisis and need help. You don’t earn trust when things are going well; you earn trust by being a rock during a tornado. The winds are blowing really hard right now. Look around. Who is helping us find solid ground?
Speaking of economics and race, Chapter 2 of Omi and Winant (2014), titled “Class”, is about economic theories of race. These are my notes on it.
Throughout this chapter, Omi and Winant seem preoccupied with whether and to what extent economic theories of race fall on the left, center, or right within the political spectrum. This is despite their admission that there is no absolute connection between the variety of theories and political orientation, only general tendencies. One presumes when reading it that they are allowing the reader to find themselves within that political alignment and filter their analysis accordingly. I will as much as possible leave out these cues, because my intention in writing these blog posts is to encourage the reader to make an independent, informed judgment based on the complexity the theories reveal, as opposed to just finding ideological cannon fodder. I claim this idealistic stance as my privilege as an obscure blogger with no real intention of ever being read.
Omi and Winant devote this chapter to theories of race that attempt to more or less reduce the phenomenon of race to economic phenomena. They outline three varieties of class paradigms for race:
Market relations theories. These tend to presuppose some kind of theory of market efficiency as an ideal.
Stratification theories. These are vaguely Weberian, based on classes as ‘systems of distribution’.
Product/labor based theories. These are Marxist theories about conflicts over social relations of production.
For market relations theories, markets are efficient while racial discrimination and inequality aren’t, and so the theory’s explanandum is which market problems are leading to the continuation of racial inequalities and discrimination. There are a few theories on the table:
Irrational prejudice. This theory says that people are racially prejudiced for some stubborn reason and so “limited and judicious state interventionism” is on the table. This was the theory of Chicago economist Gary Becker, who is not to be confused with the Chicago sociologist Howard Becker, whose intellectual contributions were totally different. Racial prejudice unnecessarily drives up labor costs and so eventually the smart money will become unprejudiced.
Monopolistic practices. The idea here is that society is structured in the interest of whites, who monopolize certain institutions and can collect rents from their control of resources. Jobs, union membership, favorably located housing, etc. are all tied up in this concept of race. Extra-market activity like violence is used to maintain these monopolies. This theory, Omi and Winant point out, is simpatico with white privilege theories, as well as nation-based analyses of race (cf. colonialism).
Disruptive state practices. This view sees class/race inequality as the result of state action of some kind. There’s a laissez-faire critique which argues that minimum wage and other labor laws, as well as affirmative action, entrench race and prevent the market from evening things out. Removing these interventions would, according to this theory, benefit both capital owners and people of color. There’s a parallel neo-Marxist theory that says something similar, interestingly enough.
It must be noted that in the history of the United States, especially before the Civil Rights era, there absolutely was race-based state intervention on a massive scale and this was absolutely part of the social construction of race. So there hasn’t been a lot of time to test out the theory that market equilibrium without racialized state policies results in racial equality.
Omi and Winant begin to explicate their critique of “colorblind” theories in this chapter. They characterize “colorblind” theories as individualistic in principle, and opposed to the idea of “equality of result.” This is the familiar disparate treatment vs. disparate impact dichotomy from the interpretation of nondiscrimination law. I’m now concerned that this, which appears to be the crux of the problem of addressing contests over racial equality between the center and the left, will not be resolved even after O&W’s explication of it.
Stratification theory is about the distribution of resources, though understood in a broader sense than in a narrow market-based theory. Resources include social network ties, elite recruitment, and social mobility. This is the kind of theory of race a symbolic interactionist sociologist of class can get behind. Or a political scientist’s: the relationship between the elites and the masses, as well as the dynamics of authority systems, are all part of this theory, according to Omi and Winant. One gets the sense that of the class based theories, this nuanced and nonreductivist one is favored by the authors … except for the fascinating critique that these theories will position race vs. class as two dimensions of inequality, reifying them in their analysis, whereas “In experiential terms, of course, inequality is not differentiated by race or class.”
The phenomenon that there is a measurable difference in “life chances” between races in the United States is explored by two theorists to whom O&W give ample credit: William J. Wilson and Douglas Massey.
Wilson’s major work in 1978, The Declining Significance of Race, tells a long story of race after the Civil War and urbanization that sounds basically correct to me. It culminates with the observation that there are now elite and middle-class black people in the United States due to the uneven topology of reforms but that ‘the massive black “underclass” was relegated to permanent marginality’. He argued that race was no longer a significant linkage between these two classes, though Omi and Winant criticize this view, arguing that there is fragility to the middle-class status for blacks because of public sector job losses. His view that class divides have superseded racial divides is his most controversial claim and so perhaps what he is known best for. He advocated for a transracial alliance within the Democratic party to contest the ‘racial reaction’ to Civil Rights, which at this point was well underway with Nixon’s “southern strategy”. The political cleavages along lines of partisan racial alliance are familiar to us in the United States today. Perhaps little has changed.
He called for state policies to counteract class cleavages, such as day care services to low-income single mothers. These calls “went nowhere” because Democrats were unwilling to face Republican arguments against “giveaways” to “welfare queens”. Despite this, Omi and Winant believe that Wilson’s views converge with neoconservative views because he doesn’t favor public sector jobs as a solution to racial inequality; more recently, he’s become a “culture of poverty” theorist (because globalization reduces the need for black labor in the U.S.) and believes in race neutral policies to overcome urban poverty. The relationship between poverty and race is incidental to Wilson, which I suppose makes him ‘colorblind” in O&W’s analysis.
Massey’s work, which is also reviewed at length in this chapter, deals with immigration and Latin@s. There’s a lot there, so I’ll cut to the critique of his recent book, Categorically Unequal (2008), in which Massey unites his theories of anti-black and anti-brown racism into a comprehensive theory of racial stratification based on ingrained, intrinsic, biological processes of prejudice. Naturally, to Omi and Winant, the view that there’s something biological going on is “problematic”. They (being quite mainstream, really) see this as tied to the implicit bias literature, but think that there’s a big difference between implicit bias due to socialization and permanent hindbrain perversity. This is apparently taken up again in their Chapter 4.
Omi and Winant’s final comment is that these stratification theories deny agency and can’t explain how “egalitarian or social justice-oriented transformations could ever occur, in the past, present, or future.” Which is, I suppose, bleak to the anti-racist activists Omi and Winant are implicitly aligned with. Which does raise the possibility that what O&W are really up to in advocating a hard line on the looser social construction of race is to keep the hope of possibility of egalitarian transformation alive. It had not occurred to me until just now that their sensitivity to the idea that implicit bias may be socially trained vs. being a more basic and inescapable part of psychology, a sensitivity which is mirrored elsewhere in society, is due to this concern for the possibility and hope for equality.
The last set of economic theories considered in this chapter are class-conflict theories, which are rooted in a Marxist conception of history as reducible to labor-production relations and therefore class conflict. There are two different kinds of Marxist theory of race. There are labor market segmentation theories, led by Michael Reich, a labor economist at Berkeley. According to this research, when the working class unifies across racial lines, it increases its bargaining power and so can get better wages in its negotiations with capital. So the capitalist in this theory may want to encourage racial political divisions even if they harbor no racial prejudices themselves. “Workers of the world unite!” is the message of these theories. An alternative view is split labor market theory, which argues that under economic pressure the white working class would rather throw other races under the bus than compete with them economically. Political mobilization for a racially homogenous, higher paid working class is then contested by both capitalists and lower paid minority workers.
Omi and Winant respect the contributions of these theories but think that trying to reduce race to economic relations ultimately fails. This is especially true for the market theorists, who always wind up introducing race as a non-economic, exogenous variable to account for inequalities in the market.
The stratification theories are perhaps more realistic and complex.
I’m most surprised at how the class-conflict based theories are reflected in what for me are the major lenses into the zeitgeist of contemporary U.S. politics. This may be because I’m very disproportionately surrounded by Marxist-influenced intellectuals. But it is hard to miss the narrative that the white working class has rejected the alliance between neoliberal capital and low-wage immigrant and minority labor. Indeed, it is arguably this latter alliance that Nancy Fraser has called neoliberalism. This conflict accords with the split labor market theory. Fraser and other hopeful socialist types argue that a triumph over identity differences is necessary because racial conflicts within the working class play into the hands of capitalists, not white workers. It is very odd that this ideological question is not more settled empirically. It may be that the whole framing is perniciously oversimplified, and that really you have to talk about things in a more nuanced way to make real headway.
Unless of course there isn’t any such real hope. This was an interesting part of the stratification theory: the explanation that included an absence of agency. I used to study lots and lots of philosophy, and in philosophy it’s a permissible form of argument to say, “This line of reasoning, if followed to its conclusion, leads to an appalling and untenable conclusion, one that could never be philosophically satisfying. For that reason, we reject it and consider a premise to be false.” In other words, in philosophy you are allowed to be motivated by the fact that a philosophical stance is life negating or self-defeating in some way. I wonder if that is true of the sociology of race. I also wonder whether bleak conclusions are necessary even if you deny the agency of racial minorities in the United States to liberate themselves under their own steam. Now there’s globalization, and earlier patterns of race may well be altered by forces outside of it. This is another theme in contemporary political discourse.
Once again Omi and Winant have raised the specter of “colorblind” policies without directly critiquing them. The question seems to boil down to whether or not the mechanisms that reproduce racial inequality can be mitigated better by removing those mechanisms that are explicitly racial or not. If part of the mechanism is irrational prejudice due to some hindbrain tick, then there may be grounds for a systematic correction of that tick. But that would require a scientific conclusion about the psychology of race that identifies a systematic error. If the error is rather interpreting an empirical inequality due to racialized policies as an essentialized difference, then that can be partially corrected by reducing the empirical inequality in fact.
It is in fact because I’m interested in what kinds of algorithms would be beneficial interventions in the process of racial formation that I’m reading Omi and Winant so closely in the first place.
Science, Technology, Engineering, and Mathematics (STEM) are a converged epistemic paradigm that is universally valid. Education in this field is socially prized because it is education in actual knowledge that is resilient to social and political change. These fields are constantly expanding their reach into domains that have resisted their fundamental principles in the past. That is because these principles really are used to design socially and psychologically active infrastructure that tests these principles. While this is socially uncomfortable and there’s plenty of resistance, that resistance is mostly futile.
Despite or even because of (1), phenomenology and methods based on it remain interesting. There are two reasons for this.
The first is that much of STEM rests on a phenomenological core, and this lends some instability to the field’s ethos of objectivity. There are interesting philosophical questions at the boundaries of STEM that have the possibility of flipping it on its head. These questions have to do with the theory of probability, logic, causation, and complexity/emergence. There is a lot of work to be done here with increasingly urgent practical applications.
The second reason why phenomenology is important is that there is still a large human audience for knowledge, and for pragmatic application in lived experience, knowledge needs to be linked to phenomenology. The science of personal growth and transformation, as a science ready for consumption by people, is an ongoing field which may never be reconciled perfectly with the austere ontologies of STEM.
Contemporary social organizations depend on the rule of law. That law, as a practice centered around use of natural language, is strained under the new technologies of data collection and control, which are ultimately bound by physical logic, not rhetorical logic. This impedance mismatch is the source of much friction today and will be particularly challenging for legal regimes based on consensus and tradition such as those based on democracy and common law.
In terms of social philosophy, the moral challenge we are facing today is to devise a communicable, accurate account of how a diversity of bodies can and should cooperate despite their inequality. This is a harder problem than coming up with a theory of morality wherein theoretical equals maintain their equality. One good place to start on this would be the theory of economics, and how economics proposes differently endowed actors can and should specialize and trade. Sadly, economics is a complex field that is largely left out of the discourse today. It is, perhaps, considered either too technocratic or too ideologically laden to take seriously. Nevertheless, we have to remember that economics was originally and may be again primarily a theory of the moral order; the fact that it is about the pragmatic matters of business and money, shunned by the cultural elite, does not make it any less significant a field of study in terms of its moral implications.
My Twitter account has been a source of great entertainment, distraction, and abuse over the years. It is time that I brought it under control. I am too proud and too cheap to buy a professional grade Twitter account manager, and so I’ve begun developing a new suite of tools in Python that will perform the necessary tasks for me.
I’ve decided to name these tools fancier, because the art and science of breeding domestic pigeons is called pigeon fancying. Go figure.
The project is now available on GitHub, and of course I welcome any collaboration or feedback!
At the time of this writing, the project has only one feature: it searches through the accounts you follow on Twitter, finds those that have been inactive for 90 days and don’t follow you back, and then unfollows them.
This is a common thing to try to do when grooming and/or professionalizing your Twitter account. I saw a script for this shared in a pastebin years ago, but couldn’t find it again. There are some on-line services that will help you do this, but they charge a fee to do it at scale. Ergo: the open source solution. Voila!
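The core logic described above is simple set arithmetic plus a timestamp check. Here is a minimal sketch of that filtering step in Python; the Twitter client itself is left out, and the names `following`, `followers`, and `last_tweet_at` are hypothetical stand-ins for data you would fetch from the API, not functions from the fancier codebase.

```python
# Sketch of the unfollow-selection logic: keep anyone who follows back,
# drop anyone who neither follows back nor has tweeted within the window.
from datetime import datetime, timedelta


def accounts_to_unfollow(following, followers, last_tweet_at,
                         now=None, inactive_days=90):
    """Return accounts we follow that don't follow back and have
    been inactive for at least `inactive_days` days."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=inactive_days)
    stale = set()
    for user in following:
        if user in followers:
            continue  # they follow back; keep them regardless of activity
        last = last_tweet_at.get(user)  # None means no tweets on record
        if last is None or last < cutoff:
            stale.add(user)
    return stale


# Example: "bob" follows back, "carol" is recently active,
# "dave" is both inactive and doesn't follow back.
now = datetime(2018, 6, 1)
result = accounts_to_unfollow(
    following={"bob", "carol", "dave"},
    followers={"bob"},
    last_tweet_at={"carol": datetime(2018, 5, 20),
                   "dave": datetime(2017, 1, 1)},
    now=now,
)
print(sorted(result))  # -> ['dave']
```

In practice each `stale` account would then be passed to the API’s unfollow call, with rate limiting, which is where most of the real work in a tool like this lives.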
I finally figured something out, philosophically, that has escaped me for a long time. I feel a little ashamed that it’s taken me so long to get there, since it’s something I’ve been told in one way or another many times before.
Here is the set up: liberalism is justified by universal equivalence between people. This is based in the Enlightenment idea that all people have something in common that makes them part of the same moral order. Recognizing this commonality is an accomplishment of reason and education. Whether this shows up in Habermasian discourse ethics, according to which people may not reason about politics from their personal individual situation, or in the Rawlsian ‘veil of ignorance’, in which moral precepts are intuitively defended under the presumption that one does not know who or where one will be, liberal ideals always require that people leave something out, something that is particular to them. What gets left out is people’s bodies–meaning both their physical characteristics and more broadly their place in lived history. Liberalism was in many ways a challenge to a moral order explicitly based on the body, one that took ancestry and heredity very seriously. So much of the aristocratic regime was about birthright and, literally, “good breeding”. The bourgeois class, relatively self-made, used liberalism to level the moral playing field with the aristocrats.
The Enlightenment was followed by a period of severe theological and scientific racism that was obsessed with establishing differences between people based on their bodies. Institutions that were internally based on liberalism could then subjugate others, by creating an Other that was outside the moral order. Equivalently, sexism too.
Social Darwinism was a threat to liberalism because it threatened to bring back a much older notion of aristocracy. In WWII, the Nazis rallied behind such an ideology and were defeated in the West by a liberal alliance, which then established the liberal international order.
I’ve got to leave out the Cold War and Communism here for a minute, sorry.
Late modern challenges to the liberal ethos gained prominence in activist circles and the American academy during and following the Civil Rights Movement. These were and continue to be challenges because they were trying to bring bodies back into the conversation. The problem is that a rules-based order that is premised on the erasure of differences in bodies is going to be unable to deal with the political tensions that precisely do come from those bodily differences. Because the moral order of the rules was blind to those differences, the rules did not govern them. For many people, that’s an inadequate circumstance.
So here’s where things get murky for me. In recent years, you have had a tension between the liberal center and the progressive left. The progressive left reasserts the political importance of the body (“Black Lives Matter”), and assertions of liberal commonality (“All Lives Matter”) are first “pushed” to the right, but then bump into white supremacy, which is also a reassertion of the political importance of the body, on the far right. It’s worth mentioning Piketty here, I think, because to some extent his work also exposed how under liberal regimes the body has secretly been the organizing principle of wealth through the inheritance of private property.
So what has been undone is the sense, necessary for liberalism, that there is something that everybody has in common which is the basis for moral order. Now everybody is talking about their bodily differences.
That is on the one hand good because people do have bodily differences and those differences are definitely important. But it is bad because if everybody is questioning the moral order it’s hard to say that there really is one. We have today, I submit, a political nihilism crisis due to our inability to philosophically imagine a moral order that accounts for bodily difference.
This is about the Internet too!
Under liberalism, you had an idea that a public was a place people could come to agree on the rules. Some people thought that the Internet would become a gigantic public where everybody could get together and discuss the rules. Instead what happened was that the Internet became a place where everybody could discuss each other’s bodies. People with similar bodies could form counterpublics and realize their shared interests as body-classes. (This piece by David Weinberger critiquing the idea of an ‘echo chamber’ is inspiring.) Each of these body-based counterpublics forms its own internal moral order whose purpose is to mobilize its body-interests against other kinds of bodies. I’m talking about both black lives matter and white supremacists here, radical feminists and MRA’s. They are all buffeting liberalism with their body interests.
I can’t say whether this is “good” or “bad” because the moral order is in flux. There is apparently no such thing as neutrality in a world of pervasive body agonism. That may be its finest criticism: body agonism is politically unstable. Body agonism leads to body anarchy.
I’ll conclude with two points. The first is that the Enlightenment view of people having something in common (their personhood, their rationality, etc.) which put them in the same moral order was an intellectual and institutional accomplishment. People do not naturally get outside themselves and put themselves in other people’s shoes; they have to be educated to do it. Perhaps there is a kernel of truth here about what moral education is that transcends liberal education. We have to ask whether today’s body agonism is an enlightened state relative to moral liberalism because it acknowledges a previously hidden descriptive reality of body difference and is no longer so naive, or if body agonism is a kind of ethical regress because it undoes moral education, reducing us to a more selfish state of nature, of body conflict, albeit in a world full of institutions based on something else entirely.
The second point is that there is an alternative to liberal order which appears to be alive and well in many places. This is an order that is not based on individual attitudes for legitimacy, but rather is more about the endurance of institutions for their own sake. I’m referring of course to authoritarianism. Without the pretense of individual equality, authoritarian regimes can focus on maintaining power on their own terms. Authoritarian regimes do not need to govern through moral order. U.S. foreign policy used to be based on the idea that such amoral governance would be shunned. But if body agonism has replaced the U.S. international moral order, we no longer have an ideology to export or enforce abroad.
In thinking about group governance practices, it seems like setting out explicit norms can be broadly useful, no matter the particular history that's motivated the adoption of those norms. In a way, it's a common lesson of open source collaborative practice: documentation is essential.
I have to admit that though I’m quite glad that we have a Code of Conduct now in BigBang, I’m uncomfortable with the ideological presumptions of its rationale and the rejection of ‘meritocracy’.
For what it's worth, I don't think this is an ideological presumption, but an empirical observation. Lots of people have noticed lots of open source communities where the stated goal of decision-making by "meritocracy" has apparently contributed to a culture where homogeneity is preferred (because maybe you measure the vague concept of "merit" in some ways by people who behave most similarly to you) and where harassment is tolerated (because if the harasser has some merit -- again, on that fuzzy scale -- maybe that merit could outweigh the negative consequences of their behavior).
I don't see the critiques of meritocracy as relativistic; that is, it's not an argument that there is no such thing as merit, that nothing can be better than something else. It's just a recognition that many implementations of claimed meritocracy aren't very systematic about evaluation of merit and that common models tend to have side effects that are bad for working communities, especially for communities that want to attract participants from a range of situations and backgrounds, where online collaboration can especially benefit.
To that point, you don't need to mention "merit" or "meritocracy" at all in writing a code of conduct and establishing such a norm doesn't require having had those experiences with "meritocratic" projects in the past. Having an established norm of inclusivity makes it easier for everyone. We don't have to decide on a case-by-case basis whether some harassing behavior needs to be tolerated by, for example, weighing the harm against the contributions of the harasser. When you start contributing to a new project, you don't have to just hope the leadership of that project shares your desire for respectful behavior. Instead, we just agree that we'll follow simple rules and anyone who wants to join in can get a signal of what's expected. Others have tried to describe why the practice can be useful in countering obstacles faced by underrepresented groups, but the tool of a Code of Conduct is in any case useful for all.
Could we use forking as a mechanism for promoting inclusivity rather than documenting a norm? Perhaps; open source projects could just fork whenever it became clear that a contributor was harassing other participants, and that capability is something of a backstop if, for example, harassment occurs and project maintainers do nothing about it. But that only seems effective (and efficient) if the new fork establishes a code of conduct that sets a different expectation of behavior; without the documentary trace (a hallmark of open source software development practice) others can't benefit from that past experience and governance process. While forking is possible in open source development, we don't typically want to encourage it to happen rapidly, because it introduces costs in dividing a community and splitting their efforts. Where inclusivity is a goal of project maintainers, then, it's easier to state that norm up front, just like we state the license up front, and the contribution instructions up front, and the communication tools up front, rather than waiting for a conflict and then forking both the code and collaborators at each decision point. And if a project has a goal of broad use and participation, it wants to demonstrate inclusivity towards casual participants as well as dedicated contributors. A casual user (who provides documentation, files bugs, uses the software and contributes feedback on usability) isn't likely to fork an open source library that they're using if they're treated without respect; they'll just walk away instead.
It could be that some projects (or some developers) don't value inclusivity. That seems unusual for an open source project, since such projects typically benefit from increased participation (both at the level of core contributors and of lower-intensity users who provide feedback) and online collaboration typically has the advantage of bringing in participation from outside one's direct neighbors and colleagues. But for the case of the happy lone hacker model, a Code of Conduct might be entirely unnecessary, because the lone contributor isn't interested in developing a community, but instead just wishes to share the fruits of a solitary labor. Permissive licensing allows interested groups with different norms to build on that work without the original author needing to collaborate at all -- and that's great, individuals shouldn't be pressured to collaborate if they don't want to. Indeed, the choice to refuse to set community norms is itself an expression which can be valuable to others; development communities who explicitly refuse to codify norms or developers who refuse to abide by them do others a favor by letting them know what to expect from potential collaboration.
I’m continuing to read Omi and Winant’s Racial Formation in the United States (2014). These are my notes on Chapter 1, “Ethnicity”.
There’s a long period during which the primary theory of race in the United States is a theological and/or “scientific” racism that maintains that different races are biologically different subspecies of humanity because some of them are the cursed descendants of some tribe mentioned in the Old Testament somewhere. In the 1800’s, there was a lot of pseudoscience involving skull measurements trying to back up a biblical literalism that rationalized, e.g., slavery. It was terrible.
Darwinism and improved statistical methods started changing all that, though these theological/”scientific” ideas about race were prominent in the United States until World War II. What took them out of the mainstream was the fact that the Nazis used biological racism to rationalize their evilness, and the U.S. fought them in a war. Jewish intellectuals in the United States in particular (and by now there were a lot of them) forcefully advocated for a different understanding of race based on ethnicity. This theory was dominant as a replacement for theories of scientific racism between WWII and the mid-60’s, when it lost its proponents on the left and morphed into a conservative ideology.
To understand why this happened, it’s important to point out how demographics were changing in the U.S. in the 20th century. The dominant group in the United States in the 1800’s were White Anglo-Saxon Protestants, or WASPs. Around 1870-1920, the U.S. started to get a lot more immigrants from Southern and Eastern Europe, as well as Ireland. These were often economic refugees, though there were also people escaping religious persecution (Jews). Generally speaking these immigrants were not super welcome in the United States, but they came in at what may be thought of as a good time, as there was a lot of economic growth and opportunity for upward mobility in the coming century.
Partly because of this new wave of immigration, there was a lot of interest in different ethnic groups and whether or not they would assimilate in with the mainstream Anglo culture. American pragmatism, of the William James and John Dewey type, was an influential philosophical position in this whole scene. The early ethnicity theorists, who were part of the Chicago school of sociology that was pioneering grounded, qualitative sociological methods, were all pragmatists. Robert Park is a big figure here. All these guys apparently ripped off W.E.B. Du Bois, who was trained by William James and didn’t get enough credit because he was black.
Based on the observation of these European immigrants, the ethnicity theorists came to the conclusion that if you lower the structural barriers to participation in the economy, “ethnics” will assimilate to the mainstream culture (melt into the “melting pot”) and everything is fine. You can even tolerate some minor ethnic differences, resulting in the Italian-Americans, the Irish-Americans, and… the African-American. But that was a bigger leap for people.
What happened, as I’ve mentioned, is that scientific racism was discredited in the U.S. partly because the U.S. had to fight the Nazis, and partly because the country now had so many Jewish intellectuals, who had been on the wrong end of scientific racism in Europe and who in the U.S. were eager to become “ethnics”. These became, in essence, the first “racial liberals”. At the time there was also a lot of displacement of African Americans who were migrating around the U.S. in search of economic opportunities. So in the post-war period ethnicity theorists optimistically proposed that race problems could be solved by treating all minority groups as if they were Southern and Eastern European immigrant groups. Reduce enough barriers and they would assimilate and/or exist in a comfortable equitable pluralism, they thought.
The radicalism of the Civil Rights movement broke the spell here, as racial minorities began to demand not just the kinds of liberties that European ethnics had taken advantage of, but also other changes to institutional racism and corrections to other racial injustices. The injustices persisted in part because racial differences are embodied differently than ethnic differences. This is an academic way of saying that the fact that (for example) black people often look different from white people matters for how society treats them. So treating race as a matter of voluntary cultural affiliation misses the point.
So ethnicity theory, which had been critical for dismantling scientific racism and opening the door for new policies on race, was ultimately rejected by the left. It was picked up by neoconservatives through their policies of “colorblindness”, which Omi and Winant describe in detail in the latter parts of their book.
There is a lot more detail in the chapter, which I found quite enlightening.
My main takeaways:
In today’s pitched media battles between “Enlightenment classical liberalism” and “postmodern identity politics”, we totally forget that a lot of American policy is based on American pragmatism, which is definitely neither an Enlightenment position nor postmodern. Everybody should shut up and read The Metaphysical Club.
There has been a social center, with views that are seen as center-left or center-right depending on the political winds, since WWII. The adoption of ethnicity theory into the center was a significant cultural accomplishment with a specific history, however ultimately disappointing its legacy has been for anti-racist activists. Any resurgence of scientific racism is a definite backslide.
Omi and Winant are convincing about the limits of ethnicity theory in terms of: its dependence on economic “engines of mobility” that allow minorities to take part in economic growth, its failure to recognize the corporeal and ocular aspects of race, and its assumption that assimilation is going to be as appealing to minorities as it is to the white majority.
Their arguments about colorblind racism, which are at the end of their book, are going to be doing a lot of work and the value of the new edition of their book, for me at least, really depends on the strength of that theory.
Beginning to read Omi and Winant, Racial Formation in the United States, Third Edition, 2014. These are notes on the introduction, which outlines the trajectory of their book. This introduction is available on Google Books.
Omi and Winant are sociologists of race and their aim is to provide a coherent theory of race and racism, particularly as a United States phenomenon, and then to tell a history of race in the United States. One of their contentions is that race is a social construct and therefore varies over time. This means, in principle, that racial categories are actionable, and much of their analysis is about how anti-racist and racial reaction movements have transformed the politics and construction of race over the course of U.S. history. On the other hand, much of their work points to the persistence of racial categories despite the categorical changes.
Since the Third Edition, in 2014, comes twenty years after the Second Edition, much of the new material in the book addresses specifically what they call colorblind racial hegemony. This is a response to the commentary and questions around the significance of Barack Obama’s presidency for race in America. It is interesting reading this in 2018, as in just a few brief years it seems like things have changed significantly. It’s a nice test, then, to ask to what extent their theory explains what happened next.
Here is, broadly speaking, what is going on in their book based on the introduction.
First, they discuss prior theories of race they can find in earlier scholarship. They acknowledge that these are interesting lenses but believe they are ultimately reductionist. They will advance their own theory of racial formation in contrast with these. In the background of this section but dismissed outright is the “scientific” racism and religious theories of race that were prevalent before World War II and were used to legitimize what Omi and Winant call racial domination (this has specific meaning for them). Alternative theories of race that Omi and Winant appear to see as constructive contributions to racial theory include:
Race as ethnicity. As an alternative to scientific racism, post WWII thinkers advanced the idea of racial categories as reducing to ethnic categories, which were more granular social units based on shared and to some extent voluntary culture. This conception of race could be used for conflicting political agendas, including both pluralism and assimilation.
Race as class. This theory attempted to use economic theories–including both Marxist and market-based analysis–to explain race. Omi and Winant think this–especially the Marxist theory–was a productive lens but ultimately a reductive one. Race cannot be subsumed to class.
Race as nationality. Race has been used as the basis for national projects, and is tied up with the idea of “peoplehood”. In colonial projects especially, race and nationality have been used both to motivate the subjugation of foreign peoples and, by resistance movements, to resist conquest.
It is interesting that these theories of race are ambiguous in their political import. Omi and Winant do a good job of showing how multi-dimensional race really is. Ultimately they reject all these theories and propose their own, racial formation theory. I have not read their chapter on it yet, so all I know is that: (a) they don’t shy away from the elephant in the room, which is that there is a distinctively ‘ocular’ component to race–people look different from each other in ways that are hereditary and have been used for political purposes, (b) they maintain that despite this biological aspect of race, the social phenomenon of race is a social construct and primarily one of political projects and interpretations, and (c) race is formed by a combination of action both at the representational level (depicting people in one way or another) and at the institutional level, with the latter determining real resource allocation and the former providing a rationalization for it.
Complete grokking of the racial formation picture is difficult, perhaps. This may be why instead of having a mainstream understanding of racial formation theory, we get reductive and ideological concepts of race active in politics. The latter part of Omi and Winant’s book is their historical account of the “trajectory” of racial politics in the United States, which they see in terms of a pendulum between anti-racist action (with feminist, etc., allies) and “racial reaction”–right-wing movements that subvert the ideas used by the anti-racists and spin them around into a backlash.
Omi and Winant describe three stages of racial politics in United States history:
Racial domination. Slavery and Jim Crow before WWII, based on religious and (now discredited, pseudo-)scientific theories of racial difference.
Racial hegemony. (Nod to Gramsci) Post-WWII race relations as theories of race-as-ethnicity open up egalitarian ideals. Opens way for Civil Rights movement.
Colorblind racism. A phase where the official ideology denies the significance of race in society while institutions continue to reinforce racial differences in a pernicious way. Necessarily tied up with neoliberalism, in Omi and Winant’s view.
The question of why colorblind racism is a form of racism is a subtle one. Omi and Winant do address this question head on, and I am in particular looking forward to their articulation of the point. Their analysis was done during the Obama presidency, which did seem to move the needle on race in a way that we are still seeing the repercussions of today. I’m interested in comparing their analysis with that of Fraser and Gilman. There seem to be some productive alignments and tensions there.
When confronted with white supremacists, newspaper editors should consider ‘strategic silence’
George Lincoln Rockwell, the head of the American Nazi party, had a simple media strategy in the 1960s. He wrote in his autobiography: “Only by forcing the Jews to spread our message with their facilities could we have any hope of success in counteracting their left-wing, racemixing propaganda!”
Campus by campus, from Harvard to Brown to Columbia, he would use the violence of his ideas and brawn of his followers to become headline news. To compel media coverage, Rockwell needed: “(1) A smashing, dramatic approach which could not be ignored, without exposing the most blatant press censorship, and (2) a super-tough, hard-core of young fighting men to enable such a dramatic presentation to the public.” He understood what other groups competing for media attention knew too well: a movement could only be successful if the media amplified their message.
Contemporary Jewish community groups challenged journalists to consider not covering white supremacists’ ideas. They called this strategy “quarantine”, and it involved working with community organizations to minimize public confrontations and provide local journalists with enough context to understand why the American Nazi party was not newsworthy.
In regions where quarantine was deployed successfully, violence remained minimal and Rockwell was unable to recruit new party members. The press in those areas was aware that amplification served the agenda of the American Nazi party, so informed journalists employed strategic silence to reduce public harm.
The Media Manipulation research initiative at the Data & Society institute is concerned precisely with the legacy of this battle in discourse and the way that modern extremists undermine journalists and set media agendas. Media has always had the ability to publish or amplify particular voices, perspectives and incidents. In choosing stories and voices they will or will not prioritize, editors weigh the benefits and costs of coverage against potential social consequences. In doing so, they help create broader societal values. We call this willingness to avoid amplifying extremist messages “strategic silence”.
Editors used to engage in strategic silence – set agendas, omit extremist ideas and manage voices – without knowing they were doing so. Yet the online context has enhanced extremists’ abilities to create controversies, prompting newsrooms to justify covering their spectacles. Because competition for audience is increasingly fierce and financially consequential, longstanding newsroom norms have come undone. We believe that journalists do not rebuild reputation through a race to the bottom. Rather, we think that it’s imperative that newsrooms actively take the high ground and re-embrace strategic silence in order to defy extremists’ platforms for spreading hate.
Strategic silence is not a new idea. The Ku Klux Klan of the 1920s considered media coverage their most effective recruitment tactic and accordingly cultivated friendly journalists. According to Felix Harcourt, thousands of readers joined the KKK after the New York World ran a three-week chronicle of the group in 1921. Catholic, Jewish and black presses of the 1920s consciously differed from Protestant-owned mainstream papers in their coverage of the Klan, conspicuously avoiding giving the group unnecessary attention. The black press called this use of editorial discretion in the public interest “dignified silence”, and limited their reporting to KKK follies, such as canceled parades, rejected donations and resignations. Some mainstream journalists also grew suspicious of the KKK’s attempts to bait them with camera-ready spectacles. Eventually coverage declined.
The KKK was so intent on getting the coverage they sought that they threatened violence and white boycotts of advertisers. Knowing they could bait coverage with violence, white vigilante groups of the 1960s staged cross burnings and engaged in high-profile murders and church bombings. Civil rights protesters countered white violence with black stillness, especially during lunch counter sit-ins. Journalists and editors had to make moral choices of which voices to privilege, and they chose those of peace and justice, championing stories of black resilience and shutting out white extremism. This was strategic silence in action, and it saved lives.
The emphasis of strategic silence must be placed on the strategic over the silencing. Every story requires a choice and the recent turn toward providing equal coverage to dangerous, antisocial opinions requires acknowledging the suffering that such reporting causes. Even attempts to cover extremism critically can result in the media disseminating the methods that hate groups aim to spread, such as when Virginia’s Westmoreland News reproduced in full a local KKK recruitment flier on its front page. Media outlets who cannot argue that their reporting benefits the goal of a just and ethical society must opt for silence.
Newsrooms must understand that even with the best of intentions, they can find themselves being used by extremists. By contrast, they must also understand they have the power to defy the goals of hate groups by optimizing for core American values of equality, respect and civil discourse. All Americans have the right to speak their minds, but not every person deserves to have their opinions amplified, particularly when their goals are to sow violence, hatred and chaos.
If telling stories didn’t change lives, journalists would never have started in their careers. We know that words matter and that coverage makes a difference. In this era of increasing violence and extremism, we appeal to editors to choose strategic silence over publishing stories that fuel the radicalization of their readers.
(Visit the original version at The Guardian to read the comments and help support their organization, as a sign of appreciation for their willingness to publish our work.)
I’ve taken a close look at Frank Pasquale’s recent article, “Tech Platforms and the Knowledge Problem” in American Affairs. This is a topic that Pasquale has had his finger on the pulse of for a long time, and I think with this recent articulation he’s really on to something. It’s an area that’s a bit of an attractor state in tech policy thinking at the moment, and as I appear to be in that mix more than ever before, I wanted to take a minute to parse out Frank’s view of the state of the art.
Here’s the setup: In 1945, Hayek points out that the economy needs to be managed somehow, and that this is the main economic use of information/knowledge. Hayek sees the knowledge as distributed and coordination accomplished through the price mechanism. Today we have giant centralizing organizations like Google and Amazon mediating markets, and it’s possible that these have the kind of ‘central planning’ role that Hayek didn’t want. There is a status quo where these companies run things in an unregulated way. Pasquale, being a bit of a regulatory hawk, not unreasonably thinks this may be disappointing and traces out two different modes of regulatory action that could respond to the alleged tech company dominance.
He does this with a nice binary opposition between Jeffersonians, who want to break up the big companies into smaller ones, and Hamiltonians, who want to keep the companies big but regulate them as utilities. His choice of Proper Nouns is a little odd to me, since many of his Hamiltonians are socialists and that doesn’t sound very Hamiltonian to me, but whatever: what can you do, writing for Americans? This table sums up some of the contrasts. Where I’m introducing new components I’m putting in a question mark (?).
| Jeffersonian | Hamiltonian |
| --- | --- |
| Open Markets Institute, Lina Khan | Big is Beautiful, Rob Atkinson; Evgeny Morozov; fully automated luxury communism |
| Regulatory capture (?) | |
| Block mergers: unfair bargaining power | Encourage mergers: better service quality |
| Allow data flows to third parties to reduce market barriers | Platforms improve quality via data size, AI advances; economies of scale |
| | Public utility law; Federal Search Commission? |
| Smallholding entrepreneur is hero | Responsible regulator/executive is hero |
There is a lot going on here, but I think the article does a good job of developing two sides of a dialectic about tech companies and their regulation that’s been emerging. These framings extend beyond the context of the article. A lot of blockchain proponents are Jeffersonian, and their opponents are Hamiltonian, in this schema.
I don’t have much to add at this point except for the observation that it’s very hard to judge the “natural” amount of industrial concentration in these areas, in part because of the crudeness of the way we measure concentration. We easily pay attention to the top five or ten companies in a sector, but we do so by ignoring the hundred or thousand or more very small companies. It’s simply incorrect to say that there is only one search engine or social network; rather, the size distribution of the many, many search engines and social networks is very skewed, like a heavy-tailed or log-normal distribution. There may be perfectly neutral, “complex systems” oriented explanations for this distribution that make it very robust even under a number of possible interventions.
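To make the point concrete, here is a minimal, purely illustrative simulation (synthetic numbers, not real market data): if firm sizes are drawn from a log-normal distribution, a "top ten" view of the sector shows extreme concentration even though thousands of small firms exist in the tail.

```python
# Hypothetical sketch: firm sizes drawn from a log-normal distribution.
# The top-10 share looks like dominance, yet thousands of small firms remain.
import random

random.seed(42)  # reproducible toy example

N_FIRMS = 10_000
# A large sigma gives the heavy right tail described above.
sizes = sorted(
    (random.lognormvariate(mu=0, sigma=2.5) for _ in range(N_FIRMS)),
    reverse=True,
)

total = sum(sizes)
top10_share = sum(sizes[:10]) / total        # share held by the ten largest firms
median_share = sizes[N_FIRMS // 2] / total   # share held by the median firm

print(f"{N_FIRMS} firms in the market")
print(f"Top 10 firms' share of total size: {top10_share:.1%}")
print(f"Median firm's share of total size: {median_share:.6%}")
```

The top-10 share is large while the median firm is negligible, which is exactly why "how many competitors are there?" and "how concentrated is the market?" can give opposite impressions of the same sector.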
If that’s true, there will always be many, many small companies and a few market leaders in the tech sector. The small companies will benefit from Jeffersonian policies, and those invested in the market leaders will benefit (in some sense) from Hamiltonian policies. The question of which strategy to take then becomes a political matter: it depends on the self-interest of differently positioned people in the socio-economic matrix. Or, alternatively, there is no tension between pursuing both kinds of policy agenda, because they target different groups that will persist no matter what regime is in place.
In a recent paper I’ve been working on with Mark Hannah that he’s presenting this week at the International Communications Association conference, we take on the question of whether and how “big data” can be used to study the culture of a population.
By “big data” we meant, roughly, large social media data sets. The pitfalls of using this sort of data for any general study of a population are perhaps best articulated by Tufekci (2014). In short: studies based on social media data often sample on the dependent variable because they only consider the people representing themselves on social media, though this is only a small portion of the population. To put it another way, the sample suffers from the 1% rule of Internet cultures: for any online community, only 1% create content, 10% interact with the content somehow, and the rest lurk. The behavior and attitudes of the lurkers, in addition to any field effects in the “background” of the data (latent variables in the social field of production), are out of band and so opaque to the analyst.
By “the culture of a population”, we meant something specific: the distribution of values, beliefs, dispositions, and tastes of a particular group of people. The best source we found on this was Marsden and Swingle (1994), an article from a time before the Internet had started to transform academia. Then, and perhaps now, the best way to study the distribution of culture across a broad population was a survey: you sample the population according to sound statistics, ask respondents some questions about their values, beliefs, dispositions, and tastes, and report the results. Voilà!
(Given the methodological divergence here, the fact that many people, especially ‘people on the Internet’, now view culture mainly through the lens of other people on the Internet is obviously a huge problem. Most people are not in this sample, and yet we pretend that it is representative because it’s easily available for analysis. Hence, our concept of culture (or cultures) is screwy, reflecting, far more than is warranted, whatever sorts of cultures are flourishing in a pseudonymous, bot-ridden, commercial attention economy.)
Can we productively combine social media data with surveys methods to get a better method for studying the culture of a population? We think so. We propose the following as a general method framework:
(1) Figure out the population of interest by their stable, independent ‘population traits’ and look for their activity on social media. Sample from this.
(2) Do exploratory data analysis to inductively get content themes and observations about social structure from this data.
(3) Use the inductively generated themes from step (2) to design a survey addressing cultural traits of the population (beliefs, values, dispositions, tastes).
(4) Conduct a stratified sample specifically across social media creators, synthesizers (e.g. people who like, retweet, and respond), and the general population and/or known audience, and distribute the survey.
(5) Extrapolate the results to general conclusions.
(6) Validate the conclusions with other data, or note discrepancies for future iterations.
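As a rough sketch of how steps (1) and (4) might be operationalized (the record fields, thresholds, and the synthetic data here are my own assumptions, not part of the paper), one could classify users into creator/synthesizer/lurker strata and then sample each stratum separately, so the survey frame is not dominated by the roughly 1% who post content:

```python
# Hypothetical sketch of stratified sampling across social media roles.
import random

random.seed(0)

# Synthetic user records: posts authored and interactions (likes/retweets).
# Roughly follows the 1% rule: ~1% create, ~10% interact, the rest lurk.
def make_user(uid):
    r = random.random()
    if r < 0.01:   # creators
        return {"id": uid, "posts": random.randint(1, 50),
                "interactions": random.randint(5, 200)}
    elif r < 0.11: # synthesizers
        return {"id": uid, "posts": 0, "interactions": random.randint(1, 30)}
    else:          # lurkers
        return {"id": uid, "posts": 0, "interactions": 0}

population = [make_user(i) for i in range(5_000)]

def role(user):
    if user["posts"] > 0:
        return "creator"
    if user["interactions"] > 0:
        return "synthesizer"
    return "lurker"

# Partition the population into strata by role.
strata = {"creator": [], "synthesizer": [], "lurker": []}
for u in population:
    strata[role(u)].append(u)

# Draw an equal-sized survey sample from each stratum, rather than sampling
# uniformly (which would mostly return lurkers).
PER_STRATUM = 20
survey_sample = {
    name: random.sample(members, k=min(PER_STRATUM, len(members)))
    for name, members in strata.items()
}

for name, members in strata.items():
    print(name, len(members), "in population;",
          len(survey_sample[name]), "sampled")
```

The design choice here is the standard one for rare subpopulations: oversample the small strata (creators) so that their cultural traits can be estimated at all, then reweight when extrapolating in step (5).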
I feel pretty good about this framework as a step forward, except that in the interest of time we had to sidestep what is maybe the most interesting question raised by it, which is: what’s the difference between a population trait and a cultural trait?
Here’s what we were thinking:
Twitter use (creator, synthesizer, lurker, none)
Political views: left, right, center
Permanent unique identifier
Attitude towards media
Preferred news source
Pepsi or coke?
One thing to note: we decided that traits about media production and consumption were a subtype of cultural traits. I.e., if you use Twitter, that’s a particular cultural trait that may be correlated with other cultural traits. That makes the problem of sampling on the dependent variable explicit.
But the other thing to note is that there are certain categories that we did not put on this list. Which ones? Gender, race, etc. Why not? Because choosing whether these are population traits or cultural traits opens a big bag of worms that is the subject of active political contest. That discussion was well beyond the scope of the paper!
The dicey thing about this kind of research is that we explicitly designed it to try to avoid investigator bias. That includes the bias of seeing the world through social categories that we might otherwise naturalize or reify. Naturally, though, if we were to actually conduct this method on a sample, such as, I dunno, a sample of Twitter-using academics, we would very quickly discover that certain social categories (men, women, person of color, etc.) were themes people talked about and so would be included as survey items under cultural traits.
That is not terrible. It’s probably safer to do that than to treat them like immutable, independent properties of a person. It does seem to leave something out though. For example, say one were to identify race as a cultural trait and then ask people to identify with a race. Then one takes the results, does a factor analysis, and discovers a factor that combines a racial affinity with media preferences and participation rates. It then identifies the prevalence of this factor in a certain region with a certain age demographic. One might object to this result as a representation of a racial category as entailing certain cultural categories, and leaving out the cultural minority within a racial demographic that wants more representation.
This is upsetting to some people when, for example, Facebook does this and allows advertisers to target things based on “ethnic affinity”. Presumably, Facebook is doing just this kind of factor analysis when they identify these categories.
Arguably, that’s not what this sort of science is for. But the fact that the objection seems pertinent is an informative intuition in its own right.
Maybe the right framework for understanding why this is problematic is Omi and Winant’s racial formation theory (2014). I’m just getting into this theory recently, at the recommendation of Bruce Haynes, who I look up to as an authority on race in America. According to the theory, racial categories are stable because they include both representations of groups of people as having certain qualities and social structures controlling the distribution of resources. So the white/black divide in the U.S. consists of both racial stereotypes and segregating urban policy, and the divide is stable because these material and cultural factors reinforce each other.
This view is enlightening because it helps explain why hereditary phenotype, representations of people based on hereditary phenotype, requests for people to identify with a race even when this may not make any sense, policies about inheritance and schooling, etc. all are part of the same complex. When we were setting out to develop the method described above, we were trying to correct for a sampling bias in media while testing for the distribution of culture across some objectively determinable population variables. But the objective qualities (such as zip code) are themselves functions of the cultural traits when considered over the course of time. In short, our model, which just tabulates individual differences without looking at temporal mechanisms, is naive.
But it’s a start, if only to an interesting discussion.
Marsden, Peter V., and Joseph F. Swingle. “Conceptualizing and measuring culture in surveys: Values, strategies, and symbols.” Poetics 22.4 (1994): 269-289.
Omi, Michael, and Howard Winant. Racial formation in the United States. Routledge, 2014.
Tufekci, Zeynep. “Big Questions for Social Media Big Data: Representativeness, Validity and Other Methodological Pitfalls.” ICWSM 14 (2014): 505-514.
People often ask me, “Where do you get ideas for blog posts?” I have many sources, but my most effective one is simple: pay attention to the questions people ask you.
When a person asks you a question, it means they’re seeking your advice or expertise to fill a gap in their knowledge. Take your answer, and write it down.
This technique works so well because it overcomes the two biggest barriers to blogging: “What should I write about?” and, “Does anyone care what I have to say?”
It overcomes the first barrier by giving you a specific topic to write about. Our minds contain a lifetime of experiences to draw from, but when you try to find something specific to write about, you draw a blank. All that accumulated knowledge is locked up in your head, as if trapped behind a dam. A question cracks the dam and starts the flow of ideas.
It overcomes the second barrier (“will anyone care?”) because you already have your first reader: the question asker. Congratulations! You just infinitely increased your reader base. And chances are they aren’t the only person who’s ever asked this question, or ever will ask it. When this question comes up in the future, you’ll be more articulate when responding, and you can keep building your audience by sharing your post.
Having at least one reader has another benefit: you now have a specific person to write for. A leading cause of poorly written blog posts is that the author doesn’t know who they’re writing for (trust me, I’ve made this mistake plenty). This leads them to try to write for everyone. Which means their writing connects with no one. The resulting article is a Frankenstein’s monster of ideas bolted together that aimlessly stumbles around mumbling and groaning and scaring away the villagers.
Instead, you can avoid this fate by conjuring up the question asker in your mind, and write your response as if you’re talking to them. Instead of creating a monster, your post will sound like a polished, engaging TED speaker.
A final benefit to answering a specific question is that it keeps your post focused. Just answer the question, and call it a day. No more, no less. Another leading cause of Frankenstein’s monster blog posts is that they don’t have a specific point they’re trying to make. So the post tries to say everything there is to say about a subject, or deviates down side roads, or doesn’t say anything remarkable, or is just plain confusing. Answering a specific question keeps these temptations at bay.
So the next time you’re wondering where to get started blogging, start by paying attention to the questions people ask you. Then write down your answers.
p.s. Yes, I applied the advice in this post to the post itself :)
p.p.s. If you’d like more writing advice, I created a page to house all of the tips and tricks I’ve picked up from books and articles over the years. Check it out at jlzych.com/writing.
There has been a trend in open source development culture over the past ten years or so. It is the rejection of ‘meritocracy’. Just now, I saw this Post-Meritocracy Manifesto, originally created by Coraline Ada Ehmke. It is exactly what it sounds like: an explicit rejection of meritocracy, specifically in open source development. It captures a recent progressive wing of software development culture. It is attracting signatories.
I believe this is a “trend” because I noticed a more subtle expression of similar ideas a few months ago. This came up when we were coming up with a Code of Conduct for BigBang. We wound up picking the Contributor Covenant Code of Conduct, though there are still some open questions about how to integrate it with our Governance policy.
This Contributor Covenant is widely adopted and the language of it seems good to me. I was surprised though when I found the rationale for it specifically mentioned meritocracy as a problem the code of conduct was trying to avoid:
Marginalized people also suffer some of the unintended consequences of dogmatic insistence on meritocratic principles of governance. Studies have shown that organizational cultures that value meritocracy often result in greater inequality. People with “merit” are often excused for their bad behavior in public spaces based on the value of their technical contributions. Meritocracy also naively assumes a level playing field, in which everyone has access to the same resources, free time, and common life experiences to draw upon. These factors and more make contributing to open source a daunting prospect for many people, especially women and other underrepresented people.
If it looks familiar, it may be because it was written by the same author, Coraline Ada Ehmke.
I have to admit that though I’m quite glad that we have a Code of Conduct now in BigBang, I’m uncomfortable with the ideological presumptions of its rationale and the rejection of ‘meritocracy’. There is a lot packed into this paragraph that is open to productive disagreement and which is not necessary for a commitment to the general point that harassment is bad for an open source community.
Perhaps this would be easier for me to ignore if this political framing did not mirror so many other political tensions today, and if open source governance were not something I’ve been so invested in understanding. I’ve taught a course on open source management, and BigBang spun out of that effort as an experiment in scientific analysis of open source communities. I am, I believe, deep in on this topic.
So what’s the problem? The problem is that I think there’s something painfully misaligned about criticism of meritocracy in culture at large and open source development, which is a very particular kind of organizational form. There is also perhaps a misalignment between the progressive politics of inclusion expressed in these manifestos and what many open source communities are really trying to accomplish. Surely there must be some kind of merit that is not in scare quotes, or else there would not be any good open source software to use and raise a fuss about.
Though it does not directly address the issue, I’m reminded of an old email discussion on the Numpy mailing list that I found when I was trying to do ethnographic work on the Scientific Python community. It was written by John Hunter, the creator of Matplotlib, in response to concerns that arose when Travis Oliphant, the leader of NumPy, started Continuum Analytics, raising the prospect of corporate control over NumPy. Hunter quite thoughtfully, in my opinion, debunked the idea that open source governance should be a ‘democracy’, like many people assume institutions ought to be by default. After a long discussion of how Travis had great merit as a leader, he argued:
Democracy is something that many of us have grown up by default to consider as the right solution to many, if not most, problems of governance. I believe it is a solution to a specific problem of governance. I do not believe democracy is a panacea or an ideal solution for most problems: rather it is the right solution for which the consequences of failure are too high. In a state (by which I mean a government with a power to subject its people to its will by force of arms) where the consequences of failure to submit include the death, dismemberment, or imprisonment of dissenters, democracy is a safeguard against the excesses of the powerful. Generally, there is no reason to believe that the simple majority of people polled is the “best” or “right” answer, but there is also no reason to believe that those who hold power will rule beneficiently. The democratic ability of the people to check the rule of the few and powerful is essential to insure the survival of the minority.
In open source software development, we face none of these problems. Our power to fork is precisely the power the minority in a tyranical democracy lacks: noone will kill us for going off the reservation. We are free to use the product or not, to modify it or not, to enhance it or not.
The power to fork is not abstract: it is essential. matplotlib, and chaco, both rely *heavily* on agg, the Antigrain C++ rendering library. At some point many years ago, Maxim, the author of Agg, decided to change the license of Agg (circa version 2.5) to GPL rather than BSD. Obviously, this was a non-starter for projects like mpl, scipy and chaco which assumed BSD licensing terms. Unfortunately, Maxim had a new employer which appeared to us to be dictating the terms and our best arguments fell on deaf ears. No matter: mpl and Enthought chaco have continued to ship agg 2.4, pre-GPL, and I think that less than 1% of our users have even noticed. Yes, we forked the project, and yes, noone has noticed. To me this is the ultimate reason why governance of open source, free projects does not need to be democratic. As painful as a fork may be, it is the ultimate antidote to a leader who may not have your interests in mind. It is an antidote that we citizens in a state government may not have.
It is true that numpy exists in a privileged position in a way that matplotlib or scipy does not. Numpy is the core. Yes, Continuum is different than STScI because Travis is both the lead of Numpy and the lead of the company sponsoring numpy. These are important differences. In the worst cases, we might imagine that these differences will negatively impact numpy and associated tools. But these worst case scenarios that we imagine will most likely simply distract us from what is going on: Travis, one of the most prolific and valuable contributers to the scientific python community, has decided to refocus his efforts to do more. And that is a very happy moment for all of us.
This is a nice articulation of how forking, not voting, is the most powerful governance mechanism in open source development, and how it changes what our default assumptions about leadership ought to be. A critical but I think unacknowledged question is to how the possibility of forking interacts with the critique of meritocracy in organizations in general, and specifically what that means for community inclusiveness as a goal in open source communities. I don’t think it’s straightforward.
My understanding is that the thesis of the book is that income inequality has a measurable effect on public health, especially certain kinds of chronic illnesses. The proposed mechanism for this effect is the psychological state of those perceiving themselves to be relatively worse off. This is a hardwired mechanism, it would seem, and one that is being turned on more and more by socioeconomic conditions today.
I’m happy to take this argument for granted until I hear otherwise. I’m interested in (and am jotting notes down here, not having read the book) the physics of this mechanism. It’s part of a larger puzzle about social forms, emergent social properties, and factor analysis that I’ve written about in some other posts.
Here’s the idea: income inequality is a very specific kind of social metric and not one that is easy to directly perceive. Measuring it from tax records, which should be straightforward, is fraught with technicalities. Therefore, it is highly implausible that direct perception of this metric is what causes the psychological impact of inequality.
Therefore, there must be one or more mediating factors between income inequality as an economic fact and psychological inequality as a mental phenomenon. Let’s suppose–because it’s actually what we should see as a ‘null hypothesis’–that there are many, many factors linking these phenomena. Some may be common causes of income inequality and psychological inequality, such as entrenched forms of social inequality that prevent equal access to resources and are internalized somehow. Others may be direct perception of the impact of inequality, such as seeing other people flying in higher class seats, or (ahem) hearing other people talk about flying at all. And yet we seem comfortable deriving from this very complex mess a generalized sense of inequality and its impact, and now that’s one of the most pressing political topics today.
I want to argue that when a person perceives inequality in a general way, they are in effect performing a kind of factor analysis on their perceptions of other people. When we compare ourselves with others, we can do so on a large number of dimensions. Cognitively, we can’t grok all of it–we have to reduce the feature space, and so we come to understand the world through a few blunt indicators that combine many other correlated data points into one.
These blunt categories can suggest that there is structure in the world that isn’t really there, but rather is an artifact of constraints on human perception and cognition. In other words, downward causation would happen in part through a dimensionality reduction of social perception.
On the other hand, if those constraints are regular enough, they may in turn impose a kind of structure on the social world (upward causation). If downward causation and upward causation reinforced each other, then that would create some stable social conditions. But there’s also no guarantee that stable social perceptions en masse track the real conditions. There may be systematic biases.
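The “dimensionality reduction of social perception” idea can be made concrete with a toy simulation. This is only a sketch under invented assumptions: suppose each person we encounter emits many noisy cues (clothing, travel habits, speech, and so on), each loosely driven by their underlying resources, and the observer compresses all of those cues into a single blunt indicator, here modeled as the first principal component.

```python
import numpy as np

rng = np.random.default_rng(42)

n_people, n_cues = 2000, 40

# Hypothetical setup: each person has an underlying "resources" level,
# and every perceivable cue is that level (scaled by a random loading)
# plus a lot of idiosyncratic noise.
resources = rng.normal(size=n_people)
cues = (resources[:, None] * rng.uniform(0.2, 1.0, size=n_cues)
        + rng.normal(size=(n_people, n_cues)))

# The observer's blunt indicator: project every person's cues onto the
# first principal component, a one-dimensional summary of the cue space.
centered = cues - cues.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
blunt_indicator = centered @ vt[0]

# The compressed perception tracks the underlying variable, but only
# through the aggregate of many noisy channels.
r = np.corrcoef(blunt_indicator, resources)[0, 1]
print(f"correlation with underlying resources: {abs(r):.2f}")
```

In this toy world the blunt indicator happens to track a real variable; the point in the paragraphs above is that the same compression works just as smoothly when there is no single real variable underneath, which is what makes it hard to tell the two cases apart.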
I’m not sure where this line of inquiry goes, to be honest. It needs more work.
I came upon this excellent essay by Cosma Shalizi about how factor analysis has been spuriously used to support the scientific theory of General Intelligence (i.e., IQ). Shalizi, if you don’t know, is one of the best statisticians around. He writes really well and isn’t afraid to point out major blunders in things. He’s one of my favorite academics, and I don’t think I’m alone in this assessment.
First, a motive: Shalizi writes this essay because he thinks the scientific theory of General Intelligence, or a g factor that is some real property of the mind, is wrong. This theory is famous because (a) a lot of people DO believe in IQ as a real feature of the mind, and (b) a significant percentage of these people believe that IQ is hereditary and correlated with race, and (c) the ideas in (b) are used to justify pernicious and unjust social policy. Shalizi, being a principled statistician, appears to take scientific objection to (a) independently of his objection to (c), and argues persuasively that we can reject (a). How?
Shalizi’s point is that the general intelligence factor g is a latent variable that was supposedly discovered using a factor analysis of several different intelligence tests that were supposed to be independent of each other. You can take the data from these data sets and do a dimensionality reduction (that’s what factor analysis is) and get something that looks like a single factor, just as you can take a set of cars and do a dimensionality reduction and get something that looks like a single factor, “size”. The problem is that “intelligence”, just like “size”, can also be a combination of many other factors that are only indirectly associated with each other (height, length, mass, mass of specific components independent of each other, etc.). Once you have many different independent factors combining into one single reduced “dimension” of analysis, you no longer have a coherent causal story of how your general latent variable caused the phenomenon. You have, effectively, correlation without demonstrated causation and, moreover, the correlation is a construct of your data analysis method, and so isn’t really even telling you what correlations normally tell you.
To put it another way: the fact that some people seem to be generally smarter than other people can be due to thousands of independent factors that happen to combine when people apply themselves to different kinds of tasks. If some people were NOT seeming generally smarter than others, that would allow you to reject the hypothesis that there was general intelligence. But the mere presence of the aggregate phenomenon does not prove the existence of a real latent variable. In fact, Shalizi goes on to say, when you do the right kinds of tests to see if there really is a latent factor of ‘general intelligence’, you find that there isn’t any. And so it’s just the persistent and possibly motivated interpretation of the observational data that allows the stubborn myth of general intelligence to continue.
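Shalizi’s statistical point is easy to simulate. In this sketch (all sizes invented for illustration), every “ability” is fully independent, so there is no latent general factor by construction; each test simply sums a random overlapping subset of abilities. The overlaps alone produce uniformly positive correlations between tests and a dominant first factor:

```python
import numpy as np

rng = np.random.default_rng(0)

n_people, n_abilities, n_tests = 5000, 500, 10

# Many small, fully independent abilities: no latent "g" anywhere.
abilities = rng.normal(size=(n_people, n_abilities))

# Each test draws on a random overlapping subset of abilities.
scores = np.empty((n_people, n_tests))
for j in range(n_tests):
    subset = rng.choice(n_abilities, size=200, replace=False)
    scores[:, j] = abilities[:, subset].sum(axis=1)

# Shared abilities between any two tests induce positive correlations.
corr = np.corrcoef(scores, rowvar=False)

# The "factor analysis" step: the first eigenvalue dominates, so a
# naive analyst "discovers" a single general factor.
eigvals = np.linalg.eigvalsh(corr)[::-1]
share = eigvals[0] / eigvals.sum()
print(f"first factor explains {share:.0%} of total variance")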
Are you following so far? If you are, it’s likely because you were already skeptical of IQ and its racial correlates to begin with. Now I’m going to switch it up though…
It is fairly common for educated people in the United States (for example) to talk about “privilege” of social groups. White privilege, male privilege–don’t tell me you haven’t at least heard of this stuff before; it is literally everywhere on the center-left news. Privilege here is considered to be a general factor that adheres in certain social groups. It is reinforced by all manner of social conditioning, especially through implicit bias in individual decision-making. This bias is so powerful it extends not just to cases of direct discrimination but also to cases where discrimination happens in a mediated way, for example through technical design. The evidence for these kinds of social privileging effects is obvious: we see inequality everywhere, and we can see who is more powerful and benefited by the status quo and who isn’t.
You see where this is going now. I have the momentum. I can’t stop. Here it goes: Maybe this whole story about social privilege is as spuriously supported as the story about general intelligence? What if both narratives were over-interpretations of data that serve a political purpose, but which are not in fact based on sound causal inference techniques?
How could this be? Well, we might gather a lot of data about people: wealth, status, neighborhood, lifespan, etc. And then we could run a dimensionality reduction/factor analysis and get a significant factor that we could name “privilege” or “power”. Potentially that’s a single, real, latent variable. But also potentially it’s hundreds of independent factors spuriously combined into one. It would probably, if I had to bet on it, wind up looking a lot like the factor for “general intelligence”, which plays into the whole controversy about whether and how privilege and intelligence get confused. You must have heard the debates about, say, representation in the technical (or other high-status, high-paying) work force? One side says the smart people get hired; the other side say it’s the privileged (white male) people that get hired. Some jerk suggests that maybe the white males are smarter, and he gets fired. It’s a mess.
I’m offering you a pill right now. It’s not the red pill. It’s not the blue pill. It’s some other colored pill. Green?
There is no such thing as either general intelligence or group based social privilege. Each of these are the results of sloppy data compression over thousands of factors with a loose and subtle correlational structure. The reason why patterns of social behavior that we see are so robust against interventions is that each intervention can work against only one or two of these thousands of factors at a time. Discovering the real causal structure here is hard partly because the effect sizes are very small. Anybody with a simple explanation, especially a politically convenient explanation, is lying to you but also probably lying to themselves. We live in a complex world that resists our understanding and our actions to change it, though it can be better understood and changed through sound statistics. Most people aren’t bothering to do this, and that’s why the world is so dumb right now.
Today I got an email I never thought I’d get: a message from the creators of TheListserve saying they were closing down the service after over 6 years.
TheListserve was a fantastic idea: it was a mailing list that allowed one person, randomly selected from the subscribers each day, to email everyone else.
It was an experiment in creating a different kind of conversational space on-line. And it worked great! Tens of thousands of subscribers, really interesting content–a space unlike most others in social media. You really did get a daily email with what some random person thought was the most interesting thing they had to say.
I was inspired enough by TheListserve to write a Twitter bot based on similar principles, TheTweetserve. Maybe the Twitter bot was also inspired by Habermas. It was not nearly as successful or interesting as TheListserve, for reasons that you could deduce if you thought about it.
Six years ago, “The Internet” was a very different imaginary. There was this idea that a lightweight intervention could capture some of the magic of serendipity that scale and connection had to offer, and that this was going to be really, really big.
It was, I guess, but then the charm wore off.
What’s happened now, I think, is that we’ve been so exposed to connection and scale that novelty has worn off. We now find ourselves exposed on-line mainly to the imposing weight of statistical aggregates and regressions to the mean. After years of messages to TheListserve, it started, somehow, to seem formulaic. You would get honest, encouraging advice, or a self-promotion. It became, after thousands of emails, a genre in itself.
I wonder if people who are younger and less jaded than I am are still finding and creating cool corners of the Internet. What I hear about more and more now are the ugly parts; they make the news. The Internet used to be full of creative chaos. Now it is so heavily instrumented and commercialized I get the sense that the next generation will see it much like I saw radio or television when I was growing up: as a medium dominated by companies, large and small. Something you had to work hard to break into as a professional choice or otherwise not at all.
You can also check out this slide deck from my “defense”. It covers the highlights.
I’ll be blogging about this material as I break it out into more digestible forms over time. For now, I’m obviously honored by any interest anybody takes in this work and happy to answer questions about it.
The four teams in CTSP’s Facebook-sponsored Data for Good Competition will be presenting today in CITRIS and CTSP’s Tech & Data for Good Showcase Day. The event will be streamed through Facebook Live on the CTSP Facebook page. After deliberations from the judges, the top team will receive $5000 and the runner-up will receive $2000.
5:05 Reception in Kvamme Atrium and CITRIS Tech Museum featuring posters and demos by Big Ideas finalists in the Connected Communities category
5:30 Event concludes
Data for Good Judges:
Joy Bonaguro, Chief Data Officer, City and County of San Francisco
Joy Bonaguro is the first Chief Data Officer for the City and County of San Francisco, where she manages the City’s open data program. Joy has spent more than a decade working at the nexus of public policy, data, and technology. Joy earned her Master’s from UC Berkeley’s Goldman School of Public Policy, where she focused on IT policy.
Lisa García Bedolla, Professor, UC Berkeley Graduate School of Education and Director of UC Berkeley’s Institute of Governmental Studies
Professor Lisa García Bedolla is a Professor in the Graduate School of Education and Director of the Institute of Governmental Studies. Professor García Bedolla uses the tools of social science to reveal the causes of political and economic inequalities in the United States. Her current projects include the development of a multi-dimensional data system, called Data for Social Good, that can be used to track and improve organizing efforts on the ground to empower low-income communities of color. Professor García Bedolla earned her PhD in political science from Yale University and her BA in Latin American Studies and Comparative Literature from UC Berkeley.
Chaya Nayak, Research Manager, Public Policy, Data for Good at Facebook
Chaya Nayak is a Public Policy Research Manager at Facebook, where she leads Facebook’s Data for Good Initiative around how to use data to generate positive social impact and address policy issues. Chaya received a Masters of Public Policy from the Goldman School of Public Policy at UC Berkeley, where she focused on the intersection between Public Policy, Technology, and Utilizing Data for Social Impact.
Michael Valle, Manager, Technology Policy and Planning for California’s Office of Statewide Health Planning and Development
Michael D. Valle is Manager of Technology Policy and Planning at the California Office of Statewide Health Planning and Development, where he oversees the digital product portfolio. Michael has worked since 2009 in various roles within the California Health and Human Services Agency. In 2014 he helped launch the first statewide health open data portal in California. Michael also serves as Adjunct Professor of Political Science at American River College.
As detailed in the call for proposals, the teams will be judged on the quality of their application of data science skills, how well the proposal or project addresses a social good problem, and how it advances the use of public open data, all while demonstrating how it mitigates potential pitfalls.
This is a post that first appeared on the Software Sustainability Institute’s blog and was co-authored by myself, Alejandra Gonzalez-Beltran, Robert Haines, James Hetherington, Chris Holdgraf, Heiko Mueller, Martin O’Reilly, Tomas Petricek, Jake VanderPlas (authors in alphabetical order) during a workshop at the Alan Turing Institute.
Introduction: Sustaining Data Science and Research Software Engineering
Data and software have enmeshed themselves in the academic world, and are a growing force in most academic disciplines (many of which are not traditionally seen as “data-intensive”). Many universities wish to improve their ability to create software tools, enable efficient data-intensive collaborations, and spread the use of “data science” methods in the academic community.
The fundamentally cross-disciplinary nature of such activities has led to a common model: the creation of institutes or organisations not bound to a particular department or discipline, focusing on the skills and tools that are common across the academic world. However, creating institutes with a cross-university mandate and non-standard academic practices is challenging. These organisations often do not fit into the “traditional” academic model of institutes or departments, and involve work that is not incentivised or rewarded under traditional academic metrics. To add to this challenge, the combination of quantitative and qualitative skills needed is also in high demand in non-academic sectors. This raises the question: how do you create such institutes so that they attract top-notch candidates, sustain themselves over time, and provide value both to members of the group as well as the broader university community?
In recent years many universities have experimented with organisational structures aimed at achieving this goal. They focus on combining research software, data analytics, and training for the broader academic world, and intentionally cut across scientific disciplines. Two such groups are the Moore-Sloan Data Science Environments based in the USA and the Research Software Engineer groups based in the UK. Representatives from both countries recently met at the Alan Turing Institute in London for the RSE4DataScience18 Workshop to discuss their collective experiences in creating successful data science and research software institutes.
This article synthesises the collective experience of these groups, with a focus on challenges and solutions around the topic of sustainability. To put it bluntly: a sustainable institute depends on sustaining the people within it. This article focuses on three topics that have proven crucial.
Creating consistent and competitive funding models.
Building a positive culture and an environment where all members feel valued.
Defining career trajectories that cater to the diverse goals of members within the organisation.
We’ll discuss each of these points below, and provide some suggestions, tips, and lessons-learned in accomplishing each.
An Aside on Nomenclature
The terms Research Software Engineer (i.e. RSE; most often used by UK partners) and Data Scientist (most often used by USA partners) have slightly different connotations, but we will not dwell on those aspects here (see Research Software Engineers and Data Scientists: More in Common for some more thoughts on this). In the current document, we will mostly use the terms RSE and Data Scientist interchangeably, to denote the broad range of positions that focus on software-intensive and data-intensive research within academia. In practice, we find that most people flexibly operate in both worlds simultaneously.
Challenges & Proposed Solutions
Challenge: Financial sustainability
How can institutions find the financial support to run an RSE program?
The primary challenge for sustainability of this type of program is often financial: how do you raise the funding necessary to hire data scientists and support their research? While this doesn’t require paying industry-leading rates for similar work, it does require resources to compensate people comfortably. In practice, institutions have come at this from a number of angles:
Private Funding: Funding from private philanthropic organisations has been instrumental in getting some of these programs off the ground: for example, the Moore-Sloan Data Science Initiative funded these types of programs for five years at the University of Washington (UW), UC Berkeley, and New York University (NYU). This is probably best viewed as seed funding to help the institutions get on their feet, with the goal of seeking other funding sources for the long term.
Organisational Grants: Many granting organisations (such as the NSF or the UK Research Councils) have seen the importance of software to research, and are beginning to make funding available specifically for cross-disciplinary software-related and data science efforts. Examples are the Alan Turing Institute, mainly funded by the UK Engineering and Physical Sciences Research Council (EPSRC), and the NSF IGERT grant awarded to UW, which funded the interdisciplinary graduate program centered on the data science institute there.
Project-based Grants: There are also opportunities to gain funding for the development of software, or to carry out scientific work that requires creating new tools. For example, several members of UC Berkeley were awarded a grant from the Sloan Foundation to hire developers for the NumPy software project. The grant provided enough funding to pay wages competitive with the broader tech community in the Bay Area.
Individual Grants: For organisations that give their RSEs principal investigator status, grants to individuals’ research programs can be a route to sustainable funding, particularly as granting organisations become more aware of and attuned to the importance of software in science. In the UK, the EPSRC has run two rounds of Research Software Engineer Fellowships, supporting leaders in the research software field for a period of five years to establish their RSE groups. Another example of a small grant for individuals promoting and supporting RSE activities is the Software Sustainability Institute fellowship.
Paid Consulting: Some RSE organisations have adopted a paid consulting model, in which they fund their institute by consulting with groups both inside and outside the university. This requires finding common goals with non-academic organisations, and agreeing to create open tools in order to accomplish those goals. An example is the University of Manchester, where, as part of their role in Research IT, RSEs provide paid, on-demand technical research consulting services to members of the University community. Having a group of experts on campus able to do this sort of work is broadly beneficial to the University as a whole.
University Funding: Universities generally spend part of their budget on in-house services for students and researchers; a prime example is IT departments. When RSE institutes establish themselves as providing a benefit to the University community, the University administration may see fit to support those efforts: this has been the case at UW, where the University funds faculty positions within the data science institute. In addition, several RSE groups perform on-demand training sessions to research groups on campus in exchange for proceeds from research grants.
Information Technology (IT) Connections: IT organisations in universities are generally well-funded, and their present-day role is often far removed from their original mission of supporting computational research. One vision for sustainability is to reimagine RSE programs as the “research wing” of university IT, to make use of the relatively large IT funding stream to help enable more efficient computational research. This model has been implemented at the University of Manchester, where Research IT sits directly within the Division of IT Services. Some baseline funding is provided to support things like research application support and training, and RSE projects are funded via cost recovery.
Professors of Practice: Many U.S. universities have the notion of “professors of practice” or “clinical professors,” which often exist in professional schools like medicine, public policy, business, and law. In these positions, experts in specialised fields are recruited as faculty for their experience outside of traditional academic research. Such positions are typically salaried, but not tenure-track, with these faculty evaluated on different qualities than traditional faculty. Professors of practice are typically able to teach specialised courses, advise students, influence the direction of their departments, and get institutional support for various projects. Such a model could be applied to support academic data science efforts, perhaps by adopting the “professor of practice” pattern within computational science departments.
Research Librarians: We also see similarities in how academic libraries have supported stable, long-term career paths for their staff. Many academic librarians are experts in both a particular domain specialty and in library science, and spend much of their time helping members of the community with their research. At some universities, librarians have tenure-track positions equivalent to those in academic departments, while at others, librarians follow a distinct administrative or staff track that often offers substantial long-term job security and career progression. These types of institutions and positions provide a precedent for the kinds of flexible, yet stable academic careers that our data science institutes support.
Challenge: Community cohesion and personal value
How can we create a successful environment where people feel valued?
In our experience, there are four main points that help create an enjoyable and successful environment, one that facilitates success and makes people feel valued in their roles.
Physical Space. The physical space that hosts the group plays an important role in creating an enjoyable working environment. In most cases there will be a great deal of collaboration, both between people within the group and with people from other departments within the university. Having facilities (e.g. meeting spaces) that support collaborative work on software projects is a big facilitator of successful outputs.
Get Started Early. Another important aspect of creating a successful environment is to connect the group to other researchers within the university early on. It is important to inform people about the tasks and services the group provides, and to involve people early on who are well connected and respected within the university, so that they can promote and champion the group. This helps get the effort off the ground early, spreads the word, and brings in further opportunities.
Celebrate Each Other’s Work. While it may not be possible to convince the broader academic community to treat software as first-class research output, data science organisations should explicitly recognise many forms of scientific output, including tools and software, analytics workflows, or non-standard written communication. This is especially true for projects where there is no “owner”, such as major open-source projects. Just because your name isn’t “first” doesn’t mean you can’t make a valuable contribution to science. Creating a culture that celebrates these efforts makes individuals feel that their work is valued.
Allow Free Headspace. The roles of individuals should (i) enable them to work in collaboration with researchers from other domains (e.g., in a support role on their research projects) and (ii) also allow them to explore their own ‘research’ ideas. Involvement in research projects not only helps these projects develop reliable and reproducible results, but can also be an important way to identify areas and tasks that are currently poorly supported by existing research software. Having free headspace then allows individuals to pursue ideas that help solve the identified tasks. There are many examples of successful open-source software projects that started as small side projects.
Challenge: Preparing members for a diversity of careers
How do we establish career trajectories that value people’s skills and experience in this new inter-disciplinary domain?
The final dimension that we consider is that of the career progression of data scientists. Their career path generally differs from the traditional academic progression, and the traditional academic incentives and assessment criteria do not necessarily apply to the work they perform.
Professional Development. A data science institute should prepare its staff both in technical skills (such as software development best practices and data-intensive activities) and in soft skills (such as teamwork and communication) that allow them to be ready for their next career step in multiple interdisciplinary settings. Whether in academia or industry, data science is inherently collaborative, and requires working with a team composed of people with diverse skillsets.
Where Next. Most individuals will not spend their entire careers within a data science institute, which means their time there must be seen as adequately preparing them for their next step. We envision that a data scientist could progress in their career either by staying in academia or by moving to an industry position. For the former, career progression might involve moving to new supervisory roles, attaining PI status, or building research groups. For the latter, the acquired technical and soft skills are valuable in industrial settings and should allow for a smooth transition. Members should be encouraged to collaborate or communicate with industry partners in order to understand the roles that data analytics and software play in those organisations.
The Revolving Door. The career trajectory from academia to industry has traditionally been mostly a one-way street, with academic researchers and industry engineers living in different worlds. However, the value of data analytic methods cuts across both groups, and offers opportunities to learn from one another. We believe a Data Science Institute should encourage strong collaborations and a bi-directional and fluid interchange between academic and industrial endeavours. This will enable a more rapid spread of tools and best-practices, and support the intermixing of career paths between research and industry. We see the institute as ‘the revolving door’ with movement of personnel between different research and commercial roles, rather than a one-time commitment where members must choose one or the other.
Though these efforts are still young, we have already seen the dividends of supporting RSEs and Data Scientists within our institutions in the USA and the UK. We hope this document can provide a roadmap for other institutions to develop sustainable programs in support of cross-disciplinary software and research.
This is a post that first appeared on the Software Sustainability Institute’s blog and was co-authored by Matthew Archer, Stephen Dowsland, Rosa Filgueira, R. Stuart Geiger, Alejandra Gonzalez-Beltran, Robert Haines, James Hetherington, Christopher Holdgraf, Sanaz Jabbari Bayandor, David Mawdsley, Heiko Mueller, Tom Redfern, Martin O’Reilly, Valentina Staneva, Mark Turner, Jake VanderPlas, Kirstie Whitaker (authors in alphabetical order) during a workshop at the Alan Turing Institute.
In our institutions, we employ multidisciplinary research staff who work with colleagues across many research fields to use and create software to understand and exploit research data. These researchers collaborate with others across the academy to create software and models to understand, predict and classify data not just as a service to advance the research of others, but also as scholars with opinions about computational research as a field, making supportive interventions to advance the practice of science.
Some of our institutions use the term “data scientist” to refer to our team members, others use “research software engineer” (RSE), and some use both. Where both terms are used, the difference seems to be that data scientists in an academic context focus more on using software to understand data, while research software engineers more often make software libraries for others to use. However, in some places, one or the other term is used to cover both, according to local tradition.
What we have in common
Regardless of job title, we share many of the same skills and a common goal: driving the use of open and reproducible research practices.
Shared skill focuses include:
Literate programming: writing code to be read by humans.
Performant programming: the time or memory used by the code really matters.
Algorithmic understanding: you need to know what the maths of the code you’re working with actually does.
Coding for a product: software and scripts need to live beyond the author, being used by others.
Verification and testing: it’s important that the script does what you think it does.
Scaling beyond the laptop: because performance matters, cloud and HPC skills are important.
Data wrangling: parsing, managing, linking and cleaning research data in an arcane variety of file formats.
Interactivity: the visual display of quantitative information.
Shared attitudes and approaches to work are also important commonalities:
Multidisciplinary agility: the ability to learn what you need from a new research domain as you begin a collaboration.
Navigating the research landscape: learning the techniques, languages, libraries and algorithms you need as you need them.
Managing impostor syndrome: as generalists, we know we don’t know the detail of our methods quite as well as the focused specialists, and we know how to work with experts when we need to.
Our differences emerge from historical context
This close relationship between the two professional titles is not an accident. In different places, different tactics have been tried to resolve a common set of frustrations that arise as scholars struggle to make effective use of information technology.
In the UK, the RSE groups have tried to move computational research forward by embracing a service culture while retaining participation in the academic community, sometimes described as being both a “craftsperson and a scholar”, or science-as-a-service. We believe we make a real difference to computational research as a discipline by helping individual research groups use and create software more effectively, and that this helps us create genuine value for researchers, rather than building and publishing tools that are never used to do research.
The Moore-Sloan Data Science Environments (MSDSE) in the US are working to establish data science as a new academic interdisciplinary field, bringing together researchers from domain and methodology fields to collectively develop best practices and software for academic research. While these institutes also facilitate collaboration across academia, their funding models are less service-based than those of UK RSE groups, relying more on bringing graduate students, postdocs, research staff, and faculty together in a shared environment.
Although these approaches differ strongly, we nevertheless see that the skills, behaviours and attitudes used by the people struggling to make this work are very similar. Both movements are tackling similar issues, but in different institutional contexts. We took diverging paths from a common starting point, but now find ourselves envisaging a shared future.
The Alan Turing Institute in the UK straddles the two models, with both a Research Engineering Group following a science-as-a-service model and comprising both Data Scientists and RSEs, and a wider collaborative academic data science engagement across eleven partner universities.
Observing this convergence, we recommend:
Create adverts and job descriptions that are welcoming to people who identify with either title: the important thing is to attract and retain the right people.
Standardised nomenclature is important, but over-specification is harmful. Don’t try too hard to delineate the exact differences in the responsibilities of the two roles: people can and will move between projects and focuses, and this is a good thing.
These roles, titles, groups, and fields are emerging and defined differently across institutions. It is important to have clear messaging to various stakeholders about the responsibilities and expectations of people in these roles.
Be open to evolving roles for team members, and ensure that stable, long-term career paths exist to support those who have taken the risk to work in emerging roles.
Don’t restrict your recruitment drive to people who have worked with one or other of these titles: the skills you need could be found in someone whose earlier roles used the other term.
Don’t be afraid to embrace service models to allow financial and institutional sustainability, but always maintain the genuine academic collaboration needed for research to flourish.
We recently held a workshop at ETHOS Lab and the Data as Relation project at ITU Copenhagen, as part of Stuart Geiger’s seminar talk on “Computational Ethnography and the Ethnography of Computation: The Case for Context” on 26th of March 2018. Tapping into his valuable experience, and position as a staff ethnographer at Berkeley Institute for Data Science, we wanted to think together about the role that computational methods could play in ethnographic and interpretivist research. Over the past decade, computational methods have exploded in popularity across academia, including in the humanities and interpretive social sciences. Stuart’s talk made an argument for a broad, collaborative, and pluralistic approach to the intersection of computation and ethnography, arguing that ethnography has many roles to play in what is often called “data science.”
Based on Stuart’s talk the previous day, we began the workshop with three distinctions about how ethnographers can work with computation and computational data. First, the “ethnography of computation” is using traditional qualitative methods to study the social, organizational, and epistemic life of computation in a particular context: how do people build, produce, work with, and relate to systems of computation in their everyday life and work? Ethnographers have been doing such ethnographies of computation for some time, and many frameworks — from actor-network theory (Callon 1986, Law 1992) to “technography” (Jansen and Vellema 2011, Bucher 2012) — have been useful to think about how to put computation at the center of these research projects.
Second, “computational ethnography” involves extending the traditional qualitative toolkit of methods to include the computational analysis of data from a fieldsite, particularly when working with trace or archival data that ethnographers have not generated themselves. Computational ethnography is not replacing methods like interviews and participant-observation with such methods, but supplementing them. Frameworks like “trace ethnography” (Geiger and Ribes 2010) and “computational grounded theory” (Nelson 2017) have been useful ways of thinking about how to integrate these new methods alongside traditional qualitative methods, while upholding the particular epistemological commitments that make ethnography a rich, holistic, situated, iterative, and inductive method. Stuart walked through a few Jupyter notebooks from a recent paper (Geiger and Halfaker, 2017) in which they replicated and extended a previously published study about bots in Wikipedia. In this project, they found computational methods quite useful in identifying cases for qualitative inquiry, and they also used ethnographic methods to inform a set of computational analyses in ways that were more specific to Wikipedians’ local understandings of conflict and cooperation than previous research.
Finally, the “computation of ethnography” (thanks to Mace for this phrasing) involves applying computational methods to the qualitative data that ethnographers generate themselves, like interview transcripts or typed fieldnotes. Qualitative researchers have long used software tools like NVivo, Atlas.TI, or MaxQDA to assist in the storage and analysis of data, but what are the possibilities and pitfalls of storing and analyzing our qualitative data in various computational ways? Even ethnographers who use more standard word processing tools like Google Docs or Scrivener for fieldnotes and interviews can use computational methods to organize, index, tag, annotate, aggregate and analyze their data. From topic modeling of text data to semantic tagging of concepts to network analyses of people and objects mentioned, there are many possibilities. As multi-sited and collaborative ethnography are also growing, what tools let us collect, store, and analyze data from multiple ethnographers around the world? Finally, how should ethnographers deal with the documents and software code that circulate in their fieldsites, which often need to be linked to their interviews, fieldnotes, memos, and manuscripts?
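To make the “computation of ethnography” idea concrete, here is a minimal, purely illustrative sketch in Python: indexing typed fieldnotes against a small analyst-defined tag vocabulary, so that dates touching a theme can be retrieved programmatically. The notes, dates, and tags are all invented for illustration, and real projects would of course load notes from files and use richer text processing.

```python
# A minimal, illustrative sketch of "computation of ethnography":
# indexing typed fieldnotes by a hand-built tag vocabulary.
from collections import defaultdict

# Hypothetical fieldnotes keyed by date; in practice these would be
# loaded from the ethnographer's own notes files.
FIELDNOTES = {
    "2018-03-01": "Team meeting about the new data platform; much conflict over access.",
    "2018-03-05": "Informal chat at the bar after work; platform rarely mentioned.",
    "2018-03-12": "Demo of the platform; maintenance work largely invisible to managers.",
}

# A small analyst-defined tag vocabulary (assumption: tags are keyword sets).
TAGS = {
    "platform": {"platform"},
    "conflict": {"conflict"},
    "invisible_work": {"maintenance", "invisible"},
}

def tag_notes(notes, tags):
    """Return {tag: [dates]} for every note whose text mentions a tag keyword."""
    index = defaultdict(list)
    for date, text in sorted(notes.items()):
        # Crude tokenization; real projects would use a proper NLP toolkit.
        words = set(text.lower().replace(";", " ").replace(".", " ").split())
        for tag, keywords in tags.items():
            if words & keywords:
                index[tag].append(date)
    return dict(index)
```

Even this toy index hints at the workflow questions raised above: the tags encode the analyst’s situated judgment, while the computation merely aggregates it across a corpus of notes.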
These are not hard-and-fast distinctions, but instead should be seen as sensitizing concepts that draw our attention to different aspects of the computation / ethnography intersection. In many cases, we spoke about doing all three (or wanting to do all three) in our own projects. Like all definitions, they blur as we look closer at them, but this does not mean we should abandon the distinctions. For example, computation of ethnography can strongly overlap with computational ethnography, particularly when thinking about how to analyze unstructured qualitative data, as in Nelson’s computational grounded theory. Yet it was productive to have different terms to refer to particular scopings: our discussion of using topic modeling of interview transcripts to help identify common themes was different from our discussion of analyzing activity logs to see how prevalent a particular phenomenon was, which was in turn different from our discussion of a situated investigation of the invisible work of code and data maintenance.
We then worked through these issues in the specific context of two cases from the ETHOS Lab and the Data as Relation project, where Bastian and Michael are both studying public sector organizations in Denmark that work with vast quantities and qualities of data and are often seeking to become more “data-driven.” In the Danish tax administration (SKAT) and the Municipality of Copenhagen’s Department of Cultural and Recreational Activities, there are many projects attempting to leverage data further in various ways. For Michael, the challenge is to trace how method assemblages and sociotechnical imaginaries of data travel from private organisations and sites to public organisations, and influence the way data is worked with and the possibilities data are associated with. While doing participant-observation, Michael suggested that a “computation of ethnography” approach might make it easier to trace connections between disparate sites and actors.
The ethnographer enters the perfect information organization
In one group, we explored the idea of the Perfect Information Organisation, or PIO, in which there are traces available of all workplace activity. This nightmarish panopticon construction would include video and audio surveillance of every meeting and interaction, detailed traces of every activity online, and detailed minutes on meetings and decisions. All of this would be available for the ethnographer, as she went about her work.
The PIO is of course a thought experiment designed to provoke the common desire or fantasy for more data. This is something we all often feel in our fieldwork, but we felt this raised many implicit risks if one combined and extended the three types of ethnography detailed earlier on. By thinking about the PIO, ludicrous though it might be, we would challenge ourselves to look at what sort of questions we could and should ask in such a situation. We came up with the following questions, although there are bound to be many more:
What do members know about the data being collected?
Does it change their behaviour?
What takes place outside of the “surveilled” space? I.e. what happens at the bar after work?
What spills out of the organisation, like when members of the organization visit other sites as part of their work?
How can such a system be slowed down and/or “disconcerted” (a concept from Helen Verran that we have found useful in thinking about data in context)?
How can such a system even exist as an assemblage of many surveillance technologies, and would not the weight of the labour sustaining it outstrip its ability to function?
What the list shows is that although the PIO may come off as a fantasy of the data-obsessed or fetishistic researcher, even as a hypothetical thought experiment it has limits. Information is always situated in a context, often defined in relation to where and what information is not available. Yet as we often see in our own fieldwork (and constantly in the public sphere), fantasies of total or perfect information persist for powerful reasons. Our suggestion was that such a thought experiment would be a good initial exercise for a researcher about to embark on a mixed-methods/ANT/trace-ethnography-inspired research approach in a site heavily infused with many data sources. The challenge of what topics and questions to ask in ethnography is always as difficult as asking what kind of data to work with, even if we put computational methods and trace data aside. We brought up many tradeoffs in our own fieldwork, such as when getting access to archival data means that the ethnographer is not spending as much time in interviews or participant observation.
This also touches on some of the central questions which the workshop provoked but didn’t answer: what is the phenomenon we are studying, in any given situation? Is it the social life in an organisation, that life as distributed across a platform and “real life” social interactions, or the platform’s affordances and traces themselves? While there is always a risk of making problematic methodological trade-offs in trying to get both digital and more classic ethnographic traces, there is also, perhaps, a methodological necessity in paying attention to the many different types of traces available when the phenomenon we are interested in takes place both online, at the bar, and elsewhere. We concluded that ethnography’s intentionally iterative, inductive, and flexible approach to research applies to these methodological tradeoffs as well: as you get access to new data (either through traditional fieldwork or digitized data), ask what you are not focusing on as you see something new.
In the end, these reflections bear a distinct risk of indulging in fantasy: the belief that we can ever achieve a full view (the view from nowhere), or a holistic or even total view of social life in all its myriad forms, whether digital or analog. The principles of ethnography are most certainly not about exhausting the phenomenon, so we do well to remain wary of this fantasy. Today, ethnography is often theorized as documentation of an encounter between an ethnographer and people in a particular context, with partial perspectives to be embraced. However, we do believe that it is productive to think through the PIO, and to not write off in advance traces which do not correspond with an orthodox view of what ethnography might consider proper material or data.
The perfect total information ethnographers
In the second group, the conversation originated from the wish of an ethnographer to gain access to a document-sharing platform in the organization where they are doing fieldwork. Of course, it is not just one platform, but a loose collection of platforms in various stages of construction, adoption, and acceptance. As we know, ethnographers are careful not only about the wishes of others but also about their own wishes — how would it change their ethnography if they had access to countless internal documents, records, archives, and logs? So rather than “just doing (something)”, the ethnographer took a step back and became puzzled over wanting such a strange thing in the first place.
The imaginaries of access to data
In the group, we speculated about what would happen if the ethnographer got their wish and gained access to as much data as possible from the field. Would a “Google Street View” of the site, recorded from head-mounted 360° cameras, be too much? Probably. On highly mediated sites — Wikipedia serving as an example during the workshop — plenty of traces are publicly left by design. Such archival completeness is a property of some media in some organizations, but not others. In ethnographies of computation, the wish for total access brings some particular problems (or opportunities), as a plenitude of traces and documents are being shared on digital platforms. We talked about three potential problems. The first and most obvious is that the ethnographer drowns in the available data. The second is that the ethnographer comes to believe that more access will provide a more “whole” or full picture of the situation. The final problem we discussed was whether the ethnographer would end up replicating the problems of the people in the organization they are studying, who were themselves working out how to deal with a multitude of heterogeneous data in their work.
Beyond these problems, we asked why the ethnographer would want access to the many documents and traces in the first place. What ideas of ethnography and epistemology does such a desire imply? Would the ethnographer want to “power up” their analysis by mimicking the rhetoric of “the more data the better”? Would the ethnographer add their own data (in the form of field notes and pictures) and, through visualisations, show a different perspective on the situation? Even though we reject the notion of a panoptic view on various grounds, we are still left with the question of how much data we need or should want as ethnographers. Imagine that we are puzzled by a particular discussion: would we benefit from having access to a large pile of documents or logs that we could computationally search through for further information? Or would more traditional ethnographic methods like interviews actually be better for the goals of ethnography?
Bringing data home
“Bringing data home” is an idea and phrase that originates from the fieldsite and captures something about the intentions playing out there. One must wonder what is implied by that idea, and what the idea does. A straightforward reading would be that it describes a strategic and managerial struggle to cut off a particular data intermediary — a middleman — and restore a more direct data-relationship between the agency and the actors using the data they provide. A product/design struggle, so to say. Pushing the speculations further, what might that homecoming, that completion of the redesign of data products, be like? As ethnographers, and participants in the events we write about, when do we say “come home, data”, or “go home, data”? What ethnography or computation will be left to do, when data has arrived home? In all, we found a common theme in ethnographic fieldwork — that our own positionalities and situations often reflect those of the people in our fieldsites.
Concluding thoughts – why this was interesting and a good idea
It is interesting that our two groups did not explicitly coordinate our topics – we split up and independently arrived at very similar thought experiments and provocations. We reflected that this is likely because all of us attending the workshop were in similar kinds of situations, as we are all struggling with the dual problem of studying computation as an object and working with computation as a method. We found that these kinds of speculative thought experiments were useful in helping us define what we mean by ethnography. What are the principles, practices, and procedures that we mean when we use this term, as opposed to any number of others that we could also use to describe this kind of work? We did not want to do too much boundary work or policing what is and isn’t “real” ethnography, but we did want to reflect on how our positionality as ethnographers is different than, say, digital humanities or computational social science.
We left with no single, simple answers, but more questions — as is probably appropriate. Where do contributions of ethnography of computation, computational ethnography, or computation of ethnography go in the future? We instead offer a few next steps:
Of all the various fields and disciplines that have taken up ethnography in a computational context, what are their various theories, methods, approaches, commitments, and tools? For example, how is work that has more of a home in STS different from that in CSCW or anthropology? Should ethnographies of computation, computational ethnography, and computation of ethnography look the same across fields and disciplines, or different?
Of all the various ethnographies of computation taking place in different contexts, what are we finding about the ways in which people relate to computation? Ethnography is good at coming up with case studies, but we often struggle (or hesitate) to generalize across cases. Our workshop brought together a diverse group of people who were studying different kinds of topics, cases, sites, peoples, and doing so from different disciplines, methods, and epistemologies. Not everyone at the workshop primarily identified as an ethnographer, which was also productive. We found this mixed group was a great way to force us to make our assumptions explicit, in ways we often get away with when we work closer to home.
Of computational ethnography, can we propose new, operationalizable mathematical approaches to working with trace data in context? How much should the analysis of trace data depend on the ethnographer’s personal intuition about how to collect and analyze data? How much should computational ethnography involve the integration of interviews and fieldnotes alongside computational analyses?
Of computation of ethnography, what does “tooling up” involve? What do our current tools do well, and what do we struggle to do with them? How do their affordances shape the expectations and epistemologies we have of ethnography? How can we decouple the interfaces from their data (for example, by exporting the back-end database used by a standard QDA program and analyzing it programmatically with text-analysis packages) and find useful points at which to intervene in an ethnographic fashion, without engineering everything from first principles? What skills would be useful in doing so?
I’m getting a lot of requests for my syllabi. Here are links to my most recent courses. Please note that we changed our LMS in 2014 and so some of my older course syllabi are missing. I’m going to round those up.
In my last post, I set up an A/B test through Google Optimize and learned Google Tag Manager (GTM), Google Analytics (GA) and Google Data Studio (GDS) along the way. When I was done, I wanted to learn how to integrate Enhanced E-commerce and Adwords into my mock-site, so I set that as my next little project.
As the name suggests, Enhanced E-commerce works best with an e-commerce site—which I don’t quite have. Fortunately, I was able to find a bunch of different mock e-commerce website source code repositories on Github which I could use to bootstrap my own. After some false starts, I found one that worked well for my purposes, based on this repository that made a mock e-commerce site using the “MEAN” stack (MongoDB, Express.js, AngularJS, and node.js).
Properly implementing Enhanced E-commerce does require some back end development—specifically to render static values on a page that can then be passed to GTM (and ultimately to GA) via the dataLayer. In the source code I inherited, this was done through the nunjucks templating library, which was well suited to the task.
Once again, I used Selenium to simulate traffic to the site. I wanted to have semi-realistic traffic to test the GA pipes, so I modeled consumer preferences off of a beta distribution with shape parameters α and β. That looks something like this:
The value of the beta distribution is normally constrained to the (0,1) interval, but I multiplied it by the number of items in my store to simulate preferences for my customers. So in the graph, the 6th item (according to an arbitrary indexing of the store items) is the most popular, while the 22nd and 23rd items are the least popular.
For the customer basket size, I drew from a Poisson distribution with rate parameter λ. That looks like this:
Although the two distributions do look quite similar, they are actually somewhat different. For one thing, the Poisson distribution is discrete while the beta distribution is continuous—though I do end up dropping all decimal figures when drawing samples from the beta distribution, since the items are also discrete. More importantly, the two distributions serve different purposes in the simulation. The x-axis in the beta distribution represents an arbitrary item index, while in the Poisson distribution it represents the number of items in a customer’s basket.
So putting everything together, the simulation process goes like this: for every customer, we first draw from the Poisson distribution with rate λ to determine n, i.e. how many items that customer will purchase. Then we draw n times from the beta distribution to see which items the customer will buy. Then, using Selenium, these items are added to the customer’s basket and the purchase is executed, while sending the Enhanced Ecommerce data to GA via GTM and the dataLayer.
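As a rough sketch of that sampling step (the post doesn’t give the actual parameter values, so the store size, λ, α, and β below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed values -- the post doesn't state the actual parameters.
N_ITEMS = 25         # number of items in the mock store
LAM = 3              # Poisson rate for basket size
ALPHA, BETA = 2, 5   # beta shape parameters for item preference

def simulate_customer():
    """Return the item indices one simulated customer purchases."""
    n = max(1, rng.poisson(LAM))  # basket size; force at least one item
    # Scale beta draws up to item indices, dropping decimals as in the post.
    items = (rng.beta(ALPHA, BETA, size=n) * N_ITEMS).astype(int)
    return items.tolist()

baskets = [simulate_customer() for _ in range(1000)]
```

In the real simulation, each basket would then be handed off to Selenium to add the items to the cart and execute the purchase.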
When it came to implementing Adwords, my plan had been to bid on uber-obscure keywords that would be super cheap (think “idle giraffe” or “bellicose baby”), but unfortunately Google requires that your ad links be live, properly hosted websites. Since my website runs on localhost, Adwords wouldn’t let me create a campaign for my mock e-commerce site.
As a workaround, I created a mock search engine results page that my users would navigate to before going to my mock e-commerce site’s homepage. 20% of users would click on my ‘Adwords ad’ for hoody sweatshirts on that page (that’s one of the things my store sells, BTW). The ad link was encoded with the same UTM parameters that would be used in Google Adwords to make sure the ad click is attributed to the correct source, medium, and campaign in GA. After imposing a 40% bounce probability on these users, the remaining ones buy a hoody.
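The click-and-bounce funnel can be sketched like this (the UTM parameter values in the URL are illustrative, not the ones actually used):

```python
import random

random.seed(0)

P_CLICK = 0.20   # probability a SERP visitor clicks the mock 'Adwords ad'
P_BOUNCE = 0.40  # probability a clicking visitor bounces before buying

# Hypothetical UTM-tagged landing URL; the real campaign values may differ.
AD_URL = ("http://localhost:3000/?utm_source=google"
          "&utm_medium=cpc&utm_campaign=hoody_promo")

def simulate_serp_visitor():
    """Return the outcome for one simulated visitor to the mock SERP."""
    if random.random() >= P_CLICK:
        return "no_click"
    if random.random() < P_BOUNCE:
        return "bounce"
    return "buy_hoody"  # visitor follows AD_URL and converts

outcomes = [simulate_serp_visitor() for _ in range(10_000)]
```

Overall, roughly 0.2 × 0.6 = 12% of SERP visitors should end up buying, which is consistent with a ~60% conversion rate among the visitors who actually click the ad.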
It seemed like I might as well use this project as another opportunity to work with GDS, so I went ahead and made another dashboard for my e-commerce website (live link):
If you notice that the big bar graph in the dashboard above looks a little like the beta distribution from before, that’s not an accident. Seeing the Hoody Promo Conv. Rate (implemented as a Goal in GA) hover around 60% was another sign things were working as expected.
In my second go-around with GDS, however, I did come up against a few more frustrating limitations. One thing I really wanted to do was create a scorecard element that would tell you the name of the most popular item in the store, but GDS won’t let you do that.
I also wanted to make a histogram, but that is also not supported in GDS. Using my own log data, I did manage to generate the histogram I wanted—of the average order value.
I’m pretty sure we’re seeing evidence of the Central Limit Theorem kicking in here. The CLT says that the distribution of sample means—even when drawn from a distribution that is not normal—will tend towards normality as the sample size gets larger.
A few things have me wondering here, however. In this simulation, the sample size is itself a random variable which is never that big. The rule of thumb says that 30 counts as a large sample size, but if you look at the Poisson graph above you’ll see the sample size rarely goes above 8. I’m wondering whether this is mitigated by a large number of samples (i.e. simulated users); the histogram above is based on 50,000 simulated users. Also, because average order values can never be negative, we can only have at best a truncated normal distribution, so unfortunately we cannot graphically verify the symmetry typical of the normal distribution in this case.
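To see that bunching-up effect concretely, here is a minimal sketch of the average-order-value simulation; the item prices and the exact distribution parameters are illustrative assumptions, since the post doesn’t list them:

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed store: 25 items with prices drawn once, uniformly from $5-$50.
prices = rng.uniform(5, 50, size=25)

def average_order_value():
    """Average price of the items in one simulated order."""
    n = max(1, rng.poisson(3))                         # basket size
    items = (rng.beta(2, 5, size=n) * 25).astype(int)  # item indices
    return prices[items].mean()

aovs = np.array([average_order_value() for _ in range(50_000)])
# Even though each order averages only a handful of non-normal draws,
# a histogram of `aovs` piles up around a central value -- the CLT-style
# effect discussed above, bounded below by the cheapest item's price.
```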
But anyway, that’s just me trying to inject a bit of probability/stats into an otherwise implementation-heavy analytics project. Next I might try to re-implement the mock e-commerce site through something like Shopify or WordPress. We’ll see.
Learn how to make space for designers and researchers to do user-centered design in an Agile/scrum engineering environment. By creating an explicit Discovery process to focus on customer needs before committing engineers to shipping code, you will unlock design’s potential to deliver great user experiences to your customers.
By the end of this class, you will have built a Discovery Kanban board and learned how to use it to plan and manage the work of your team.
While I was at Optimizely, I implemented a Discovery kanban process to improve the effectiveness of my design team (which I blogged about previously here and here, and spoke about here). I took the lessons I learned from doing that and turned them into a class on Skillshare to help any design leader implement an explicit Discovery process at their organization.
Whether you’re a design manager, a product designer, a program manager, a product manager, or just someone who’s interested in user-centered design, I hope you find this course valuable. If you have any thoughts or questions, don’t hesitate to reach out: @jlzych
I’m continuing to read Moretti’s The new geography of jobs (2012). Except for the occasional gushing over the revolutionary-ness of some new payments startup, a symptom no doubt of being so close to Silicon Valley, it continues to be an enlightening and measured read on economic change.
There are a number of useful arguments and ideas from the book, which are probably sourced more generally from economics, which I’ll outline here, with my comments:
Local, artisanal production can never substitute for large-scale manufacturing. Moretti argues that while local artisanal production has cropped up in many places in the United States, it will never replace the work done by large-scale production. Why? Because by definition, local artisanal production is (a) geographically local, and therefore unable to scale beyond a certain region, and (b) defined in part by its uniqueness, differentiating it from mainstream products. In other words, if your local small-batch shop grows to the point where it competes with large-scale production, it is no longer local and small-batch.
Interestingly, this argument about production scaling echoes work on empirical heavy tail distributions in social and economic phenomena. A world where small-scale production constituted most of production would have an exponentially bounded distribution of firm productivity. The world doesn’t look that way, and so we have very very big companies, and many many small companies, and they coexist.
Higher labor productivity in a sector results in both a richer society and fewer jobs in that sector. Productivity is how much a person’s labor produces. The idea here is that when labor productivity increases, the firm that hires those laborers needs fewer people working to satisfy its demand. But those people will be paid more, because their labor is worth more to the firm.
I think Moretti is hand-waving a bit when he argues that a society only gets richer through increased labor productivity. I don’t follow it exactly.
But I do find it interesting that Moretti calls “increases in productivity” what many others would call “automation”. Several related phenomena are viewed critically in the popular discourse on job automation: more automation causes people to lose jobs; more automation causes some people to get richer (they are higher paid); this means there is a perhaps pernicious link between automation and inequality. One aspect of this is that automation is good for capitalists. But another aspect of this is that automation is good for lucky laborers whose productivity and earnings increase as a result of automation. It’s a more nuanced story than one that is only about job loss.
The economic engine of an economy is what brings in money, it need not be the largest sector of the economy. The idea here is that for a particular (local) economy, the economic engine of that economy will be what pulls in money from outside. Moretti argues that the economic engine must be a “trade sector”, meaning a sector that trades (sells) its goods beyond its borders. It is the workers in this trade-sector economic engine that then spend their income on the “non-trade” sector of local services, which includes schoolteachers, hairdressers, personal trainers, doctors, lawyers, etc. Moretti’s book is largely about how the innovation sector is the new economic engine of many American economies.
One thing that comes to mind reading this point is that not all economic engines are engaged in commercial trade. I’m thinking about Washington, DC, and the surrounding area; the economic engine there is obviously the federal government. Another strange kind of economic engine is the top-tier research university, like Carnegie Mellon or UC Berkeley. Top-tier research universities, unlike many other forms of educational institutions, are constantly selling their degrees to foreign students. This means that they can serve as an economic engine.
Overall, Moretti’s book is a useful guide to economic geography, one that clarifies the economic causes of a number of political tensions that are often discussed in a more heated and, to me, less useful way.
Moretti, Enrico. The new geography of jobs. Houghton Mifflin Harcourt, 2012.
We’ve all heard about the social construction of knowledge.
Here’s the story: Knowledge isn’t just in the head. Knowledge is a social construct. What we call “knowledge” is what it is because of social institutions and human interactions that sustain, communicate, and define it. Therefore all claims to absolute and unsituated knowledge are suspect.
There are many different social constructivist theories. One of the best, in my opinion, is Bourdieu’s, because he has one of the best social theories. For Bourdieu, social fields get their structure in part through the distribution of various kinds of social capital. Economic capital (money!) is one kind of social capital. Symbolic capital (the fact of having published in a peer-reviewed journal) is a different form of capital. What makes the sciences special, for Bourdieu, is that they are built around a particular mechanism for awarding symbolic capital that makes it (science) get the truth (the real truth). Bourdieu thereby harmonizes social constructivism with scientific realism, which is a huge relief for anybody trying to maintain their sanity in these trying times.
This is all super. What I’m beginning to appreciate more as I age, develop, and in some sense I suppose ‘progress’, is that economic capital is truly the trump card of all the forms of social capital, and that this point is underrated in social constructivist theories in general. What I mean by this is that flows of economic capital are a condition for the existence of the social fields (institutions, professions, etc.) in which knowledge is constructed. This is not to say that everybody engaged in the creation of knowledge is thinking about monetization all the time–to make that leap would be to commit the ecological fallacy. But at the heart of almost every institution where knowledge is created, there is somebody fundraising or selling.
Why, then, don’t we talk more about the economic construction of knowledge? It is a straightforward idea. To understand an institution or social field, you “follow the money”, seeing where it comes from and where it goes, and that allows you to situate the practice in its economic context and thereby determine its economic meaning.
The below original text was the basis for Data & Society Founder and President danah boyd’s March 2018 SXSW Edu keynote,“What Hath We Wrought?” — Ed.
Growing up, I took certain truths to be self evident. Democracy is good. War is bad. And of course, all men are created equal.
My mother was a teacher who encouraged me to question everything. But I quickly learned that some questions were taboo. Is democracy inherently good? Is the military ethical? Does God exist?
I loved pushing people’s buttons with these philosophical questions, but they weren’t nearly as existentially destabilizing as the moments in my life in which my experiences didn’t line up with frames that were sacred cows in my community. Police were revered, so my boss didn’t believe me when I told him that cops were forcing me to give them free food, which is why there was food missing. Pastors were moral authorities and so our pastor’s infidelities were not to be discussed, at least not among us youth. Forgiveness is a beautiful thing, but hypocrisy is destabilizing. Nothing can radicalize someone more than feeling like you’re being lied to. Or when the world order you’ve adopted comes crumbling down.
The funny thing about education is that we ask our students to challenge their assumptions. And that process can be enlightening.
I will never forget being a teenager and reading “A People’s History of the United States.” The idea that there could be multiple histories, multiple truths blew my mind. Realizing that history is written by the winners shook me to my core. This is the power of education. But the hole that opens up, that invites people to look for new explanations…that hole can be filled in deeply problematic ways. When we ask students to challenge their sacred cows but don’t give them a new framework through which to make sense of the world, others are often there to do it for us.
For the last year, I’ve been struggling with media literacy. I have a deep level of respect for the primary goal. As Renee Hobbs has written, media literacy is the “active inquiry and critical thinking about the messages we receive and create.” The field talks about the development of competencies or skills to help people analyze, evaluate, and even create media. Media literacy is imagined to be empowering, enabling individuals to have agency and giving them the tools to help create a democratic society. But fundamentally, it is a form of critical thinking that asks people to doubt what they see. And that makes me nervous.
Most media literacy proponents tell me that media literacy doesn’t exist in schools. And it’s true that the ideal version that they’re aiming for definitely doesn’t. But I spent a decade in and out of all sorts of schools in the US, where I quickly learned that a perverted version of media literacy does already exist. Students are asked to distinguish between CNN and Fox. Or to identify bias in a news story. When tech is involved, it often comes in the form of “don’t trust Wikipedia; use Google.” We might collectively dismiss these practices as not-media-literacy, but these activities are often couched in those terms.
I’m painfully aware of this, in part because media literacy is regularly proposed as the “solution” to the so-called “fake news” problem. I hear this from funders and journalists, social media companies and elected officials. My colleagues Monica Bulger and Patrick Davison just released a report on media literacy in light of “fake news” given the gaps in current conversations. I don’t know what version of media literacy they’re imagining but I’m pretty certain it’s not the CNN vs Fox News version. Yet, when I drill in, they often argue for the need to combat propaganda, to get students to ask where the money is coming from, to ask who is writing the stories for what purposes, to know how to fact-check, etcetera. And when I push them further, I often hear decidedly liberal narratives. They talk about the Mercers or about InfoWars or about the Russians. They mock “alternative facts.” While I identify as a progressive, I am deeply concerned by how people understand these different conservative phenomena and what they see media literacy as solving.
I get that many progressive communities are panicked about conservative media, but we live in a polarized society and I worry about how people judge those they don’t understand or respect. It also seems to me that the narrow version of media literacy that I hear proposed as the “solution” is supposed to magically solve our political divide. It won’t. More importantly, as I’m watching social media and news media get weaponized, I’m deeply concerned that the well-intended interventions I hear people propose will backfire, because I’m fairly certain that the crass versions of critical thinking already have.
My talk today is intended to interrogate some of the foundations upon which educating people about the media landscape depends. Rather than coming at this from the idealized perspective, I am trying to come at this from the perspective of where good intentions might go awry, especially in a moment in which narrow versions of media literacy and critical thinking are being proposed as the solution to major socio-cultural issues. I want to examine the instability of our current media ecosystem and then return to the question: what kind of media literacy should we be working towards? So let’s dig in.
Why do we value precision in language? I sat down for breakfast with Gillian Tett, a Financial Times journalist and anthropologist. She told me that when she first moved to the States from the UK, she was confounded by our inability to talk about class. She was trying to make sense of what distinguished class in America. In her mind, it wasn’t race. Or education. It came down to what construction of language was respected and valued by whom. People became elite by mastering the language marked as elite. Academics, journalists, corporate executives, traditional politicians: they all master the art of communication. I did too. I will never forget being accused of speaking like an elite by my high school classmates when I returned home after a semester of college. More importantly, although it’s taboo in America to be explicitly condescending towards people on the basis of race or education, there’s no social cost among elites to mock someone for an inability to master language. For using terms like “shithole.”
Linguistic and communications skills are not universally valued. Those who do not define themselves through this skill loathe hearing the never-ending parade of rich and powerful people suggesting that they’re stupid, backwards, and otherwise lesser. Embracing being anti-PC has become a source of pride, a tactic of resistance. Anger boils over as people who reject “the establishment” are happy to watch the elites quiver over their institutions being dismantled. This is why this is a culture war. Everyone believes they are part of the resistance.
We’re not living through a crisis about what is true; we’re living through a crisis about how we know whether something is true. We’re not disagreeing about facts, we’re disagreeing about epistemology. The “establishment” version of epistemology is, “We use evidence to arrive at the truth, vetted by independent verification (but trust us when we tell you that it’s all been independently verified by people who were properly skeptical and not the bosom buddies of the people they were supposed to be fact-checking).”
The “alternative facts” epistemological method goes like this: “The ‘independent’ experts who were supposed to be verifying the ‘evidence-based’ truth were actually in bed with the people they were supposed to be fact-checking. In the end, it’s all a matter of faith, then: you either have faith that ‘their’ experts are being truthful, or you have faith that we are. Ask your gut, what version feels more truthful?”
Let’s be honest — most of us educators are deeply committed to a way of knowing that is rooted in evidence, reason, and fact. But who gets to decide what constitutes a fact? In philosophy circles, social constructivists challenge basic tenets like fact, truth, reason, and evidence. Yet, it doesn’t take a doctorate of philosophy to challenge the dominant way of constructing knowledge. Heck, 75 years ago, evidence suggesting black people were biologically inferior was regularly used to justify discrimination. And this was called science!
In many Native communities, experience trumps Western science as the key to knowledge. These communities have a different way of understanding topics like weather or climate or medicine. Experience is also used in activist circles as a way of seeking truth and challenging the status quo. Experience-based epistemologies also rely on evidence, but not the kind of evidence that would be recognized or accepted by those in Western scientific communities.
Those whose worldview is rooted in religious faith, particularly Abrahamic religions, draw on different types of information to construct knowledge. Resolving scientific knowledge and faith-based knowledge has never been easy; this tension has countless political and social ramifications. As a result, American society has long danced around this yawning gulf and tried to find solutions that can appease everyone. But you can’t resolve fundamental epistemological differences through compromise.
No matter what worldview or way of knowing someone holds dear, they always believe that they are engaging in critical thinking when developing a sense of what is right and wrong, true and false, honest and deceptive. But much of what they conclude may be more rooted in their way of knowing than any specific source of information.
If we’re not careful, “media literacy” and “critical thinking” will simply be deployed as an assertion of authority over epistemology.
Right now, the conversation around fact-checking has already devolved to suggest that there’s only one truth. And we have to recognize that there are plenty of students who are taught that there’s only one legitimate way of knowing, one accepted worldview. This is particularly dicey at the collegiate level, where we professors have been taught nothing about how to teach across epistemologies.
Personally, it took me a long time to recognize the limits of my teachers. Like many Americans in less-than-ideal classrooms, I was taught that history was a set of facts to be memorized. When I questioned those facts, I was sent to the principal’s office for disruption. Frustrated and confused, I thought that I was being force-fed information for someone else’s agenda. Now I can recognize that that teacher was simply exhausted, underpaid, and waiting for retirement. But it took me a long time to realize that there was value in history and that history is a powerful tool.
Weaponizing Critical Thinking
The political scientist Deen Freelon was trying to make sense of the role of critical thinking to address “fake news.” He ended up looking back at a fascinating campaign by Russia Today (known as RT). Their motto for a while was “question more.” They produced a series of advertisements as teasers for their channel. These advertisements were promptly banned in the US and UK, resulting in RT putting up additional ads about how they were banned and getting tremendous mainstream media coverage about being banned. What was so controversial? Here’s an example:
“Just how reliable is the evidence that suggests human activity impacts on climate change? The answer isn’t always clear-cut. And it’s only possible to make a balanced judgement if you are better informed. By challenging the accepted view, we reveal a side of the news that you wouldn’t normally see. Because we believe that the more you question, the more you know.”
If you don’t start from a place where you’re confident that climate change is real, this sounds quite reasonable. Why wouldn’t you want more information? Why shouldn’t you be engaged in critical thinking? Isn’t this what you’re encouraged to do at school? So why is asking this so taboo? And lest you think that this is a moment to be condescending towards climate deniers, let me offer another one of their ads.
“Is terror only committed by terrorists? The answer isn’t always clear-cut. And it’s only possible to make a balanced judgement if you are better informed. By challenging the accepted view, we reveal a side of the news that you wouldn’t normally see. Because we believe that the more you question, the more you know.”
Many progressive activists ask whether or not the US government commits terrorism in other countries. The ads all came down because they were too political, but RT got what they wanted: an effective ad campaign. They didn’t come across as conservative or liberal, but rather a media entity that was “censored” for asking questions. Furthermore, by covering the fact that they were banned, major news media legitimized their frame under the rubric of “free speech.” Under the assumption that everyone should have the right to know and to decide for themselves.
We live in a world now where we equate free speech with the right to be amplified. Does everyone have the right to be amplified? Social media gave us that infrastructure under the false imagination that if we were all gathered in one place, we’d find common ground and eliminate conflict. We’ve seen this logic before. After World War II, the world thought that connecting the globe through financial interdependence would prevent World War III. It’s not clear that this logic will hold.
For better and worse, by connecting the world through social media and allowing anyone to be amplified, information can spread at record speed. There is no true curation or editorial control. The onus is on the public to interpret what they see. To self-investigate. Since we live in a neoliberal society that prioritizes individual agency, we double down on media literacy as the “solution” to misinformation. It’s up to each of us as individuals to decide for ourselves whether or not what we’re getting is true.
Yet, if you talk with someone who has posted clear, unquestionable misinformation, more often than not, they know it’s bullshit. Or they don’t care whether or not it’s true. Why do they post it then? Because they’re making a statement. The people who posted this meme (figure 1) didn’t bother to fact-check this claim. They didn’t care. What they wanted to signal loud and clear is that they hated Hillary Clinton. And that message was indeed heard loud and clear. As a result, they are very offended if you tell them that they’ve been duped by Russians into spreading propaganda. They don’t believe you for one second.
Misinformation is contextual. Most people believe that people they know are gullible to false information, but that they themselves are equipped to separate the wheat from the chaff. There’s widespread sentiment that we can fact-check and moderate our way out of this conundrum. This will fail. Don’t forget that for many people in this country, both education and the media are seen as the enemy — two institutions that are trying to have power over how people think. Two institutions that are trying to assert authority over epistemology.
Finding the Red Pill
Growing up on Usenet, Godwin’s Law was more than an adage to me. I spent countless nights lured into conversation by the idea that someone was wrong on the internet. And I long ago lost count of how many of those conversations ended up with someone invoking Hitler or the Holocaust. I might have even been to blame in some of them.
Fast forward 15 years to the point when Nathan Poe wrote a poignant comment on an online forum dedicated to Christianity: “Without a winking smiley or other blatant display of humor, it is utterly impossible to parody a Creationist in such a way that someone won’t mistake for the genuine article.” Poe’s Law, as it became known, signals that it’s hard to tell the difference between an extreme view and a parody of an extreme view on the internet.
In their book, “The Ambivalent Internet,” media studies scholars Whitney Phillips and Ryan Milner highlight how a segment of society has become so well-versed at digital communications — memes, GIFs, videos, etc. — that they can use these tools of expression to fundamentally destabilize others’ communication structures and worldviews. It’s hard to tell what’s real and what’s fiction, what’s cruel and what’s a joke. But that’s the point. That is how irony and ambiguity can be weaponized. And for some, the goal is simple: dismantle the very foundations of elite epistemological structures that are so deeply rooted in fact and evidence.
Many people, especially young people, turn to online communities to make sense of the world around them. They want to ask uncomfortable questions, interrogate assumptions, and poke holes in things they’ve heard. Welcome to youth. There are some questions that are unacceptable to ask in public, and they’ve learned that. But in many online fora, no question or intellectual exploration is seen as unacceptable. To restrict the freedom of thought is to censor. And so all sorts of communities have popped up for people to explore questions of race and gender and other topics in the most extreme ways possible. And these communities have become slippery. Are those taking on such hateful views real? Or are they being ironic?
In the 1999 film The Matrix, Morpheus says to Neo: “You take the blue pill, the story ends. You wake up in your bed and believe whatever you want. You take the red pill, you stay in Wonderland, and I show you how deep the rabbit hole goes.” Most youth aren’t interested in having the wool pulled over their eyes, even if blind faith might be a very calming way of living. Restricted in mobility and stressed to holy hell, they want to have access to what’s inaccessible, know what’s taboo, and say what’s politically incorrect. So who wouldn’t want to take the red pill?
In some online communities, taking the red pill refers to the idea of waking up to how education and media are designed to deceive you into progressive propaganda. In these environments, visitors are asked to question more. They’re invited to rid themselves of their politically correct shackles. There’s an entire online university designed to undo accepted ideas about diversity, climate, and history. Some communities are even more extreme in their agenda. These are all meant to fill in the gaps for those who are open to questioning what they’ve been taught.
In 2012, it was hard to avoid the names Trayvon Martin and George Zimmerman, but that didn’t mean that most people understood the storyline. In South Carolina, a white teenager who wasn’t interested in the news felt like he needed to know what the fuss was all about. He decided to go to Wikipedia to understand more. He was left with the impression that Zimmerman was clearly in the right and disgusted that everyone was defending Martin. While reading up on this case, he ran across the term “black on white crime” on Wikipedia and decided to throw that term into Google, where he encountered a deeply racist website inviting him to wake up to a reality that he had never considered. He took that red pill and dove deep into a worldview whose theory of power positioned white people as victims. Over a matter of years, he began to embrace those views, to be radicalized towards extreme thinking. On June 17, 2015, he sat down for an hour with a group of African-American church-goers in Charleston, South Carolina, before opening fire on them, killing 9 and injuring 1. His goal was simple: he wanted to start a race war.
It’s easy to say that this domestic terrorist was insane or irrational, but he began his exploration trying to critically interrogate the media coverage of a story he didn’t understand. That led him to online fora filled with people who have spent decades working to indoctrinate people into a deeply troubling, racist worldview. They draw on vast amounts of “evidence,” engage in deeply persuasive discursive practices, and have the mechanisms to challenge countless assumptions. The difference between what is deemed missionary work, education, and radicalization depends a lot on your worldview. And your understanding of power.
Who Do You Trust?
The majority of Americans do not trust the news media. There are many explanations for this — loss of local news, financial incentives, difficulty distinguishing between opinion and reporting, etc. But what does it mean to encourage people to be critical of the media’s narratives when they are already predisposed against the news media?
Perhaps you want to encourage people to think critically about how information is constructed, who is paying for it, and what is being left out. Yet, among those whose prior is to not trust a news media institution, among those who see CNN and The New York Times as “fake news,” they’re already there. They’re looking for flaws. It’s not hard to find them. After all, the news industry is made of people in institutions in a society. So when youth are encouraged to be critical of the news media, they come away thinking that the media is lying. Depending on someone’s prior, they may even take what they learn to be proof that the media is in on the conspiracy. That’s where things get very dicey.
Many of my digital media and learning colleagues encourage people to make media to help understand how information is produced. Realistically, many young people have learned these skills outside the classroom as they seek to represent themselves on Instagram, get their friends excited about a meme, or gain followers on YouTube. Many are quite skilled at using media, but to what end? Every day, I watch teenagers produce anti-Semitic and misogynistic content using the same tools that activists use to combat prejudice. It’s notable that many of those who are espousing extreme viewpoints are extraordinarily skilled at using media. Today’s neo-Nazis are a digital propaganda machine. Developing media making skills doesn’t guarantee that someone will use them for good. This is the hard part.
Most of my peers think that if more people are skilled and more people are asking hard questions, goodness will see the light. In talking about misunderstandings of the First Amendment, Nabiha Syed of BuzzFeed highlights that the frame of the “marketplace of ideas” sounds great, but is extremely naive. Doubling down on investing in individuals as a solution to a systemic abuse of power is very American. But the best ideas don’t always rise to the top. Nervously, many of us tracking manipulation of media are starting to think that adversarial messages are far more likely to surface than well-intended ones.
This is not to say that we shouldn’t try to educate people. Or that producing critical thinkers is inherently a bad thing. I don’t want a world full of sheeple. But I also don’t want to make naive assumptions about what media literacy can do in responding to a culture war that is already underway. I want us to grapple with reality, not just the ideals that we imagine we could maybe one day build.
It’s one thing to talk about interrogating assumptions when a person can keep emotional distance from the object of study. It’s an entirely different thing to talk about these issues when the very act of asking questions is what’s being weaponized. This isn’t historical propaganda distributed through mass media. Or an exercise in understanding state power. This is about making sense of an information landscape where the very tools that people use to make sense of the world around them have been strategically perverted by other people who believe themselves to be resisting the same powerful actors that we normally seek to critique.
Take a look at the graph above. Can you guess what search term this is? This is the search query for “crisis actors.” This concept emerged as a conspiracy theory after Sandy Hook. Online communities worked hard to get this to land with the major news media after each shooting. With Parkland, they finally succeeded. Every major news outlet is now talking about crisis actors, as though it’s a real thing, or something to be debunked. When teenage witnesses of the mass shooting in Parkland speak to journalists these days, they now have to say that they are not crisis actors. They must negate a conspiracy theory that was created to dismiss them. A conspiracy theory that undermines their message from the get-go. And because of this, many people have turned to Google and Bing to ask what a crisis actor is. They quickly get to the Snopes page. Snopes provides a clear explanation of why this is a conspiracy theory. But you are now asked to not think of an elephant.
You may just dismiss this as craziness, but getting this narrative into the media was designed to help radicalize more people. Some number of people will keep researching, trying to understand what the fuss is all about. They’ll find online fora discussing the images of a brunette woman and ask themselves if it might be the same person. They will try to understand the fight between David Hogg and Infowars or question why Infowars is being restricted by YouTube. They may think this is censorship. Seeds of doubt will start to form. And they’ll ask whether or not any of the articulate people they see on TV might actually be crisis actors. That’s the power of weaponized narratives.
One of the main goals for those who are trying to manipulate media is to pervert the public’s thinking. It’s called gaslighting. Do you trust what is real? One of the best ways to gaslight the public is to troll the media. By forcing the news media into negating frames, manipulators can rely on the fact that people who distrust the media often respond by self-investigating. This is the power of the boomerang effect. And it has a history. After all, the CDC realized that the more news media negated the connection between autism and vaccination, the more the public believed there was something real there.
In 2016, I watched networks of online participants test this theory through an incident now known as Pizzagate. They worked hard to get the news media to negate the conspiracy theory, believing that this would prompt more people to try to research if there was something real there. They were effective. The news media covered the story to negate it. Lots of people decided to self-investigate. One guy even showed up with a gun.
The term “gaslighting” originates in the context of domestic violence. It refers back to a 1944 movie called Gas Light, in which a woman is manipulated by her husband in a way that leaves her thinking she’s crazy. It’s a very effective technique of control. It makes someone submissive and disoriented, unable to respond to a relationship productively. While many anti-domestic violence activists argue that the first step is to understand that gaslighting exists, the “solution” is not to fight back against the person doing the gaslighting. Instead, it’s to get out. Furthermore, anti-domestic violence experts argue that recovery from gaslighting is a long and arduous process, requiring therapy. They recognize that once instilled, self-doubt is hard to overcome.
While we have many problems in our media landscape, the most dangerous is how it is being weaponized to gaslight people.
And unlike the domestic violence context, there is no “getting out” that is really possible in a media ecosystem. Sure, we can talk about going off the grid and opting out of social media and news media, but c’mon now.
The Cost of Triggering
In 2017, Netflix released a show called 13 Reasons Why. Before parents and educators had even heard of the darn show, millions of teenagers had watched it. For most viewers, it was a fascinating show. The storyline was enticing, the acting was phenomenal. But I’m on the board of Crisis Text Line, an amazing service where people around this country talk with trained counselors via text message when they’re in a crisis. Before the news media even began talking about the show, we started to see the impact. After all, the premise of the show is that a teen girl died by suicide and left behind 13 tapes explaining how people had bullied her to justify her decision.
At Crisis Text Line, we do active rescues every night. This means that we send emergency personnel to the homes of someone who is in the middle of a suicide attempt in an effort to save their lives. Sometimes, we succeed. Sometimes, we don’t. It’s heartbreaking work. As word of 13 Reasons Why got out and people started watching the show, our numbers went through the roof. We were drowning in young people referencing the show, signaling how it had given them a framework for ending their lives. We panicked. All hands on deck. As we got things under control, I got angry. What the hell was Netflix thinking?
Researchers know the data on suicide and media. The more the media normalizes suicide, the more suicide is put into people’s heads as a possibility, the more people who are on the edge start to take it seriously and consider it for themselves. After early media effects research was published, journalists developed best practices to minimize their coverage of suicide. As Joan Donovan often discusses, this form of “strategic silence” was viable in earlier media landscapes; it’s a lot harder now. Today, journalists and media makers feel as though the fact that anyone could talk about suicide on the internet means that they should have a right to do so too.
We know that you can’t combat depression through rational discourse. Addressing depression is hard work. And I’m deeply concerned that we don’t have the foggiest clue how to approach the media landscape today. I’m confident that giving grounded people tools to think smarter can be effective. But I’m not convinced that we know how to educate people who do not share our epistemological frame. I’m not convinced that we know how to undo gaslighting. I’m not convinced that we understand how engaging people about the media intersects with those struggling with mental health issues. And I’m not convinced that we’ve even begun to think about the unintended consequences of our good — let alone naive — intentions.
In other words, I think that there are a lot of assumptions baked into how we approach educating people about sensitive issues and our current media crisis has made those painfully visible.
Oh, and by the way, the Netflix TV show ends by setting up Season 2 to start with a school shooting. WTF, Netflix?
Pulling Back Out
So what role do educators play in grappling with the contemporary media landscape? What kind of media literacy makes sense? To be honest, I don’t know. But it’s unfair to end a talk like this without offering some path forward so I’m going to make an educated guess.
I believe that we need to develop antibodies to help people not be deceived.
That’s really tricky because most people like to follow their gut more than their mind. No one wants to hear that they’re being tricked. Still, I think there might be some value in helping people understand their own psychology.
Consider the power of nightly news and talk radio personalities. If you bring Sean Hannity, Rachel Maddow, or any other host into your home every night, you start to appreciate how they think. You may not agree with them, but you build a cognitive model of their words such that they have a coherent logic to them. They become real to you, even if they don’t know who you are. This is what scholars call “parasocial interaction.” And the funny thing about human psychology is that we trust people who we invest our energies into understanding. That’s why bridging difference requires humanizing people across viewpoints.
Empathy is a powerful emotion, one that most educators want to encourage. But when you start to empathize with worldviews that are toxic, it’s very hard to stay grounded. It requires deep cognitive strength. Scholars who spend a lot of time trying to understand dangerous worldviews work hard to keep their emotional distance. One very basic tactic is to separate the different signals. Just read the text rather than consume the multimedia presentation of it. Narrow the scope. Actively taking things out of context can be helpful for analysis precisely because it creates a cognitive disconnect. This is the opposite of how most people encourage everyday analysis of media, where the goal is to appreciate the context first. Of course, the trick here is wanting to keep that emotional distance. Most people aren’t looking for that.
I also believe that it’s important to help students truly appreciate epistemological differences. In other words, why do people from different worldviews interpret the same piece of content differently? Rather than thinking about the intention behind the production, let’s analyze the contradictions in the interpretation. This requires developing a strong sense of how others think and where the differences in perspective lie. From an educational point of view, this means building the capacity to truly hear and embrace someone else’s perspective and teaching people to understand another’s view while also holding their own view firm. It’s hard work, an extension of empathy into a practice that is common among ethnographers. It’s also a skill that is honed in many debate clubs. The goal is to understand the multiple ways of making sense of the world and use that to interpret media. Of course, appreciating the view of someone who is deeply toxic isn’t always psychologically stabilizing.
Another thing I recommend is to help students see how they fill in gaps when the information presented to them is sparse, and how hard it is to overcome priors. Conversations about confirmation bias are important here because it’s important to understand what information we accept and what information we reject. Selective attention is another tool, most famously shown to students through the “gorilla experiment.” If you aren’t familiar with this experiment, it involves showing viewers a basketball video, asking them to count the passes made by players in one color of shirt, and then asking whether they saw the gorilla. Many people do not. Inverting these cognitive science exercises, asking students to consider different fan fiction that fills in the gaps of a story with divergent explanations is another way to train someone to recognize how their brain fills in gaps.
What’s common about the different approaches I’m suggesting is that they are designed to be cognitive strengthening exercises, to help students recognize their own fault lines, not the fault lines of the media landscape around them. I can imagine that this too could be called media literacy, and if you want to bend your definition that way, I’ll accept it. But the key is to realize the humanity in ourselves and in others. We cannot and should not assert authority over epistemology, but we can encourage our students to be more aware of how interpretation is socially constructed. And to understand how that can be manipulated. Of course, just because you know you’re being manipulated doesn’t mean that you can resist it. And that’s where my proposal starts to get shaky.
Let’s be honest — our information landscape is going to get more and more complex. Educators have a critical role to play in helping individuals and societies navigate what we encounter. But the path forward isn’t about doubling down on what constitutes a fact or teaching people to assess sources. Rebuilding trust in institutions and information intermediaries is important, but we can’t assume the answer is teaching students to rely on those signals. The first wave of media literacy was responding to propaganda in a mass media context. We live in a world of networks now. We need to understand how those networks are intertwined and how information that spreads through dyadic — even if asymmetric — encounters is understood and experienced differently than that which is produced and disseminated through mass media.
Above all, we need to recognize that information can be, is, and will be weaponized in new ways. Today’s propagandist messages are no longer simply created by Madison Avenue or Edward Bernays-style State campaigns. For the last 15 years, a cohort of young people has learned how to hack the attention economy in an effort to have power and status in this new information ecosystem. These aren’t just any youth. They are young people who are disenfranchised, who feel as though the information they’re getting isn’t fulfilling, who struggle to feel powerful. They are trying to make sense of an unstable world and trying to respond to it in a way that is personally fulfilling. Most youth are engaged in invigorating activities. Others are doing the same things youth have always done. But there are youth out there who feel alienated and disenfranchised, who distrust the system and want to see it all come down. Sometimes, this frustration leads to productive ends. Often it does not. But until we start understanding their response to our media society, we will not be able to produce responsible interventions. So I would argue that we need to start developing a networked response to this networked landscape. And it starts by understanding different ways of constructing knowledge.
Special thanks to Monica Bulger, Mimi Ito, Whitney Phillips, Cathy Davidson, Sam Hinds Garcia, Frank Shaw, and Alondra Nelson for feedback.
The fable of the millipede and the songbird is a story about the difference between instinct and knowledge. It goes like this:
High above the forest floor, a millipede strolled along the branch of a tree, her thousand pairs of legs swinging in an easy gait. From the tree top, songbirds looked down, fascinated by the synchronization of the millipede’s stride. “That’s an amazing talent,” chirped the songbirds. “You have more limbs than we can count. How do you do it?” And for the first time in her life the millipede thought about this. “Yes,” she wondered, “how do I do what I do?” As she turned to look back, her bristling legs suddenly ran into one another and tangled like vines of ivy. The songbirds laughed as the millipede, in a panic of confusion, twisted herself in a knot and fell to earth below.
On the forest floor, the millipede, realizing that only her pride was hurt, slowly, carefully, limb by limb, unraveled herself. With patience and hard work, she studied and flexed and tested her appendages, until she was able to stand and walk. What was once instinct became knowledge. She realized she didn’t have to move at her old, slow, rote pace. She could amble, strut, prance, even run and jump. Then, as never before, she listened to the symphony of the songbirds and let music touch her heart. Now in perfect command of thousands of talented legs, she gathered courage, and, with a style of her own, danced and danced a dazzling dance that astonished all the creatures of her world. 
The lesson here is that conscious reflection on an unconscious action will impair your ability to do that action. But after you introspect and really study how you do what you do, it will transform into knowledge and you will have greater command of that skill.
That, in a nutshell, is why I blog. The act of introspection — of turning abstract thoughts into concrete words — strengthens my knowledge of that subject and enables me to dance a dazzling dance.
I’ve started reading Enrico Moretti’s The New Geography of Jobs and am finding it very clear and persuasive (though I’m not far in).
Moretti is taking up the major theme of What The Hell Is Happening To The United States, which is being addressed by so many from different angles. But whereas many writers seem to have an agenda–e.g., Noble advocating for political reform regulating algorithms; Deneen arguing for a return to traditional community values in some sense; etc.–or to focus on particularly scandalous or dramatic aspects of changing political winds–such as Gilman’s work on plutocratic insurgency and collapsing racial liberalism–Moretti is doing economic geography, showing how long-term economic trends are shaping the distribution of prosperity within the U.S.
From the introduction, it looks like there are a few notable points.
The first is about what Moretti calls the Great Divergence, which has been going on since the 1980s. This is the decline of U.S. manufacturing as jobs moved from Detroit, Michigan to Shenzhen, Guangdong, paired with the rise of an innovation economy where the U.S. takes the lead in high-tech and creative work. The needs of the high-tech industry–high-skilled workers, who may often be educated immigrants–change the demographics of the innovation hubs and result in the political polarization we’re seeing on the national stage. This is an account of the economic base determining the cultural superstructure, which is so fraught right now, and exactly what I was getting at with yesterday’s rant about the politics of business.
The second major point Moretti makes which is probably understated in more polemical accounts of the U.S. political economy is the multiplier effect of high-skilled jobs in innovation hubs. Moretti argues that every high-paid innovation job (like software engineer or scientist) results in four other jobs in the same city. These other jobs are in service sectors that are by their nature local and not able to be exported. The consequence is that the innovation economy does not, contrary to its greatest skeptics, only benefit the wealthy minority of innovators to the ruin of the working class. However, it does move the location of working class prosperity into the same urban centers where the innovating class is.
This gives one explanation for why the backlash against Obama-era economic policies was such a shock to the coastal elites. In the locations where the “winners” of the innovation economy were gathered, there was also growth in the service economy which by objective measures increased the prosperity of the working class in those cities. The problem was the neglected working class in those other locations, who felt left behind and struck back against the changes.
A consequence of this line of reasoning is that arguments about increasing political tribalism are really a red herring. Social tribes on the Internet are a consequence, not a cause, of divisions that come from material conditions of economy and geography.
Moretti even appears to have a constructive solution in mind. He argues that there are “three Americas”: the rich innovation hubs, the poor former manufacturing centers, and mid-sized cities that have not yet gone either way. His recipe for economic success in these middle cities is attracting high-skilled workers who are a kind of keystone species for prosperous economic ecosystems.
Deneen, Patrick J. Why Liberalism Failed. Yale University Press, 2018.
Gilman, Nils. “The twin insurgency.” American Interest 15 (2014): 3-11.
Gilman, Nils. “The Collapse of Racial Liberalism.” The American Interest (2018).
Moretti, Enrico. The New Geography of Jobs. Houghton Mifflin Harcourt, 2012.
Noble, Safiya Umoja. Algorithms of Oppression: How search engines reinforce racism. NYU Press, 2018.
As a huge enthusiast of A/B testing, I have been wanting to learn how to run A/B tests through Google Optimize for some time. However, it’s hard to do this without being familiar with all the different parts of the Google product eco-system. So I decided it was time to take the plunge and finally Google myself. This post will cover my adventures with several products in the Google product suite including: Google Analytics (GA), Google Tag Manager (GTM), Google Optimize (GO), and Google Data Studio (GDS).
Of course, in order to do A/B testing, you have to have A) something to test, and B) sufficient traffic to drive significant results. Early on I counted out trying to A/B test this blog—not because I don’t have sufficient traffic—I got tons of it, believe me . . . (said in my best Trump voice). The main reason I didn’t try to do it with my blog is that I don’t host it, WordPress does, so I can’t easily access or manipulate the source code to implement an A/B test. It’s much easier if I host the website myself (which I can do locally using MAMP).
But how do I send traffic to a website I’m hosting locally? By simulating it, of course. Using a nifty python library called Selenium, I can be as popular as I want! I can also simulate any kind of behavior I want, and that gives me maximum control. Since I can set the expected outcomes ahead of time, I can more easily troubleshoot/debug whenever the results don’t square with expectations.
My Mini “Conversion Funnel”
When it came to designing my first A/B test, I wanted to keep things relatively simple while still mimicking the general flow of an e-commerce conversion funnel. I designed a basic website with two different landing page variants—one with a green button and one with a red button. I arbitrarily decided that users would be 80% likely to click on the button when it’s green and 95% likely to click on the button when it’s red (these conversion rates are unrealistically high, I know). Users who didn’t click on the button would bounce, while those who did would advance to the “Purchase Page”.
To make things a little more complicated, I decided to have 20% of ‘green’ users bounce after reaching the purchase page. The main reason for this was to test out GA’s funnel visualizations to see if they would faithfully reproduce the graphic above (they did). After the purchase page, users would reach a final “Thank You” page with a button to claim their gift. There would be no further attrition at this point; all users who arrived on this page would click the “Claim Your Gift” button. This final action was the conversion (or ‘Goal’ in GA-speak) that I set as the objective for the A/B test.
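The scripted behavior boils down to a couple of weighted coin flips per simulated user. As a sanity check, here is a pure-Python sketch of the funnel logic (the names and structure are my own, not the actual test code); it recovers the expected end-to-end conversion rates of 0.8 × 0.8 = 64% for green and 95% for red.

```python
import random

# Probabilities from the test design above.
P_CLICK = {"green": 0.80, "red": 0.95}            # landing-page button click rate
P_PURCHASE_BOUNCE = {"green": 0.20, "red": 0.0}   # bounce after the purchase page

def simulate_user(variant, rng):
    """Return the terminal state of one simulated user in the funnel."""
    if rng.random() > P_CLICK[variant]:
        return "bounced_landing"
    if rng.random() < P_PURCHASE_BOUNCE[variant]:
        return "bounced_purchase"
    return "claimed_gift"  # everyone who reaches the thank-you page converts

def conversion_rate(variant, n=100_000, seed=42):
    rng = random.Random(seed)
    hits = sum(simulate_user(variant, rng) == "claimed_gift" for _ in range(n))
    return hits / n
```

Running `conversion_rate("green")` and `conversion_rate("red")` should land close to 0.64 and 0.95 respectively, which is exactly what the dashboard later confirmed.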
In terms of features, the real-time event tracking is a fantastic resource for debugging GA implementations. However, the one feature I wasn’t expecting GA to have was the benchmarking feature. It allows you to compare the traffic on your site with websites in similar verticals. This is really great because even if you’re totally out of ideas on what to analyze (which you shouldn’t be given the rest of the features in GA), you can use the benchmarking feature as a starting point for figuring out the weak points in your site.
The other great thing about the two courses I mentioned is that they’re free, and at the end you can take the GA Individual Qualification exam to certify your knowledge about GA (which I did). If you’re gonna put in the time to learn the platform, it’s nice to have a little endorsement at the end.
Google Tag Manager
Before trying out GO, I implemented my little A/B test through Google’s legacy system, Content Experiments. I can definitely see why GO is the way of the future. There’s a nifty tool that lets you edit visual DOM elements right in the browser while you’re defining your variants. In Content Experiments, you have to either provide two separate A and B pages or implement the expected changes on your end. It’s a nice thing to not have to worry about, especially if you’re not a pro front-end developer.
Also, it’s clear that GO has more powerful decision features. For one thing, it has Bayesian decision logic, which is more comprehensible for business stakeholders and is gaining steam in online A/B testing. It also supports multivariate testing, which is a great addition, though I didn’t use that functionality for this test.
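For intuition on why the Bayesian framing is stakeholder-friendly, here's a toy sketch (my own illustration of the general idea, not GO's actual algorithm): model each variant's conversion rate with a Beta posterior and report the probability that one variant beats the other, instead of a p-value.

```python
import random

def prob_b_beats_a(conv_a, n_a, conv_b, n_b, draws=20_000, seed=0):
    """Estimate P(rate_B > rate_A) by sampling from Beta posteriors.

    Uses a uniform Beta(1, 1) prior, so the posterior for each variant is
    Beta(1 + conversions, 1 + non-conversions).
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        a = rng.betavariate(1 + conv_a, 1 + n_a - conv_a)
        b = rng.betavariate(1 + conv_b, 1 + n_b - conv_b)
        wins += b > a
    return wins / draws
```

With numbers like the ones in this test (say, 640 of 1,000 green conversions vs. 950 of 1,000 red), `prob_b_beats_a` is essentially 1.0, and "red has a >99% chance of being better" is a sentence a stakeholder can actually act on.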
The one thing that was a bit irritating with GO was setting it up to run on localhost. It took a few hours of yak shaving to get the different variants to actually show up on my computer. It boiled down to 1) editing my etc/hosts file with an extra line in accordance with this post on the Google Advertiser Community forum and 2) making sure the Selenium driver navigated to localhost.domain instead of just localhost.
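For anyone hitting the same wall, the /etc/hosts addition looks something like the following. The domain here is a made-up placeholder; the point is just to give the local server a hostname other than plain localhost for the Selenium driver to navigate to.

```
# map a fake fully qualified domain to the local web server
127.0.0.1    localhost.mytestdomain.com
```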
Google Data Studio
Nothing is worth doing unless you can make a dashboard at the end of it, right? While GA has some amazing report generating capabilities, it can feel somewhat rigid in terms of customizability. GDS is a relatively new program that gives you way more options to visualize the data sitting in GA. But while GDS has an advantage over GA, it does have some frustrating limitations which I hope they resolve soon. In particular, I hope they’ll let you show percent differences between two scorecards. As someone who’s done a lot of A/B test reports, I know that the thing stakeholders are most interested in seeing is the % difference, or lift, caused by one variant versus another.
Here is a screenshot of the ultimate dashboard (or a link if you want to see it live):
The dashboard was also a good way to do a quick check to make sure everything in the test was working as expected. For example, the expected conversion rate for the “Claim Your Gift” button was 64% versus 95%, and we see more or less those numbers in the first bar chart on the left. The conditional conversion rate (the conversion rate of users conditioned on clicking off the landing page) is also close to what was expected: 80% vs. 100%.
Notes about Selenium
So I really like Selenium, and after this project I have a little personal library to do automated tests in the future that I can apply to any website, not just this little dinky one I ran locally on my machine.
When you’re writing code dealing with Selenium, one thing I’ve realized is that it’s important to write highly fault-tolerant code. Anything that depends on the internet has many ways to go wrong: the wifi in the cafe you’re in might go down, resources might randomly fail to load, and so on. But if you’ve written fault-tolerant code, hitting one of these snags won’t cause your program to stop running.
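One simple pattern for this kind of fault tolerance is a retry wrapper around each flaky step. This is a sketch of my own, not anything from my actual library — the `with_retries` name and the Selenium call in the usage comment are illustrative:

```python
import logging
import time

def with_retries(action, attempts=3, delay=1.0, swallow=(Exception,)):
    """Run `action`, retrying on transient failures (dropped wifi,
    resources failing to load, etc.) instead of crashing the whole run."""
    for attempt in range(1, attempts + 1):
        try:
            return action()
        except swallow as exc:
            logging.warning("attempt %d/%d failed: %s", attempt, attempts, exc)
            if attempt == attempts:
                raise  # out of retries: surface the error to the caller
            time.sleep(delay)

# Typical use around a flaky Selenium step (driver and By are assumed):
#   with_retries(lambda: driver.find_element(By.ID, "claim-gift").click())
```

In practice you’d narrow `swallow` to the exception types you actually expect (e.g. Selenium’s `WebDriverException`) so that real bugs still fail fast.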
Along with fault-tolerant code, it’s a good idea to write good logs. When stuff does go wrong, this helps you figure out what it was. In this particular case, logs also served as a good source of ground truth to compare against the numbers I was seeing in GA.
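To make the logs usable as ground truth, it helps to emit one structured record per simulated event, so you can recompute metrics like conversion rate straight from the log and compare them against GA. A minimal sketch — the session/variant/event field names and the `claim_gift_click` goal name are my own illustrative choices, not GA’s schema:

```python
import json
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("abtest")

def log_event(session_id, variant, event):
    """Write one JSON line per simulated user event."""
    record = {"session": session_id, "variant": variant, "event": event}
    log.info(json.dumps(record))
    return record

def conversion_rate(records, variant, goal="claim_gift_click"):
    """Recompute a conversion rate directly from the logged records."""
    sessions = {r["session"] for r in records if r["variant"] == variant}
    converted = {r["session"] for r in records
                 if r["variant"] == variant and r["event"] == goal}
    return len(converted) / len(sessions) if sessions else 0.0
```

If the rate computed from the logs disagrees with what GA shows, that’s a signal that either the tracking setup or the simulation has a bug.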
The End! (for now…)
I think I’ll be back soon with another post about AdWords and Advanced E-Commerce in GA…
This post is an attempt to articulate something that’s on the tip of my tongue, so bear with me.
Fraser has made the point that the politics of recognition and the politics of distribution are not the same. In her view, the conflict in the U.S. over recognition (i.e., of women, racial minorities, LGBTQ people, etc. on the progressive side, and of the straight white male ‘majority’ on the reactionary side) has overshadowed the politics of distribution, which has been at a steady neoliberal status quo for some time.
First, it’s worth pointing out that in between these two political contests is a politics of representation, which may be more to the point. The claim here is that if a particular group is represented within a powerful organization–say, the government, or within a company with a lot of power such as a major financial institution or tech company–then that organization will use its power in a way that is responsive to the needs of the represented group.
Politics of representation are the link between recognition and distribution: the idea is that if “we” recognize a certain group, then through democratic or social processes members of that group will be lifted into positions of representative power, which then will lead to (re)distribution towards that group in the longer run.
I believe this is the implicit theory of social change at the heart of a lot of democratish movements today. It’s an interesting theory in part because it doesn’t seem to have any room for “good governance”, or broadly beneficial governance, or technocracy. There’s nothing deliberative about this form of democracy; it’s a tribal war-by-other-means. It is also not clear that this theory of social change based on demographic representation is any more effective at changing distributional outcomes than a pure politics of recognition, which we have reason to believe is ineffectual.
Who do we expect to have power over distributional outcomes in our (and probably other) democracies? Realistically, it’s corporations. Businesses comprise most of the economic activity; businesses have the profits needed to reinvest in lobbying power for the sake of economic capture. So maybe if what we’re interested in is politics of distribution, we should stop trying to parse out the politics of recognition, with its deep dark rabbit hole of identity politics and the historical injustice and Jungian archetypal conflicts over the implications of the long arc of sexual maturity. These conversations do not seem to be getting anyone anywhere! It is, perhaps, fake news: not because the contents are fake, but because the idea that these issues are new is fake. They are perhaps just a lot of old issues stirred to conflagration by the feedback loops between social and traditional media.
If we are interested in the politics of distribution, let’s talk about something else, something that we all know must be more relevant, when it comes down to it, than the politics of recognition. I’m talking about the politics of business.
We have a rather complex economy with many competing business interests. Let’s assume that one of the things these businesses compete over is regulatory capture–their ability to influence economic policy in their favor.
When academics talk about neoliberal economic policy, they are often talking about those policies that benefit the financial sector and big businesses. But these big businesses are not always in agreement.
Take, for example, the steel tariff proposed by the Trump administration. There is no blunter example of a policy that benefits some business interests–U.S. steelmakers–and not others–U.S. manufacturers of steel-based products.
It’s important from the perspective of electoral politics to recognize that the U.S. steelmakers are a particular set of people who live in particular voting districts with certain demographics. That’s because, probably, if I am a U.S. steelworker, I will vote in the interest of my industry. Just as if I am a U.S. based urban information worker at an Internet company, I will vote in the interest of my company, which in my case would mean supporting net neutrality. If I worked for AT&T, I would vote against net neutrality, which today means I would vote Republican.
It’s an interesting fact that AT&T employs a lot more people than Google and (I believe this is the case, though I don’t know where to look up the data) that they are much more geographically distributed than Google because, you know, wires and towers and such. Which means that AT&T employees will be drawn from more rural, less diverse areas, giving them an additional allegiance to Republican identity politics.
You must see what I’m getting at. Assume that the main driver of U.S. politics is not popular will (which nobody really believes, right?) and is in fact corporate interests (which basically everybody admits, right?). In that case the politics of recognition will not be determining anything; rather it will be a symptom, an epiphenomenon, of an underlying politics of business. Immigration of high-talent foreigners then becomes a proxy issue for the economic battle between coastal tech companies and, say, old energy companies which have a much less geographically mobile labor base. Nationalism, or multinationalism, becomes a function of trade relations rather than a driving economic force in its own right. (Hence, Russia remains an enemy of the U.S. largely because Putin paid off all its debt to the U.S. and doesn’t owe it any money, unlike many of its other allies around the world.)
I would very much like to devote myself better to the understanding of politics of business because, as I’ve indicated, I think the politics of recognition have become a huge distraction.
One of my favorite articles presented at the recent FAT* 2018 conference was Barabas et al. on “Interventions over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment” (link). To me, this was the correct response to recent academic debate about the use of actuarial risk-assessment in determining criminal bail and parole rates. I had a position on this before the conference which I drafted up here; my main frustration with the debate had been that it had gone unquestioned why bail and parole rates are based on actuarial prediction of recidivism in the first place, given that rearrest rates are so contingent on social structural factors such as whether or not police are racist.
Barabas et al. point out that there’s an implicit theory of crime behind the use of actuarial risk assessments. In that theory of crime, there are individual “bad people” and “good people”. “Bad people” are more likely to commit crimes because of their individual nature, and the goal of the criminal policing system is to keep bad people from committing crimes by putting them in prison. This is the sort of theory that, even if it is a little bit true, is also deeply wrong, and so we should probably reassess the whole criminal justice system as a result. Even leaving aside the important issue of whether “recidivism” is interpreted as reoffense or rearrest rate, it is socially quite dangerous to see probability of offense as due to the specific individual moral character of a person. One reason why this is dangerous is that if the conditions for offense are correlated with the conditions for some sort of unjust desperation, then we risk falsely justifying an injustice with the idea that the bad things are only happening to bad people.
I’d like to juxtapose this position with a couple others that may on the surface appear to be in tension with it.
Nils Gilman’s new piece on “The Collapse of Racial Liberalism” is a helpful account of how we got where we are as an American polity. True to the title, Gilman’s point is that there was a centrist consensus on ‘racial liberalism’ that reached its apotheosis in the election of Obama and then collapsed under its own contradictions, getting us where we are today.
By racial liberalism, I mean the basic consensus that existed across the mainstream of both political parties since the 1970s, to the effect that, first, bigotry of any overt sort would not be tolerated, but second, that what was intolerable was only overt bigotry—in other words, white people’s definition of racism. Institutional or “structural” racism—that is, race-based exclusions that result from deep social habits such as where people live, who they know socially, what private organizations they belong to, and so on—were not to be addressed. The core ethic of the racial liberal consensus was colorblind individualism.
Bill Clinton was good at toeing the line of racial liberalism, and Obama, as a black meritocratic elected president, was its culmination. But:
“Obama’s election marked at once the high point and the end of a particular historical cycle: a moment when the realization of a particular ideal reveals the limits of that ideal.”
The limit of the ideal is, of course, that all the things not addressed–“race-based exclusions that result from deep social habits such as where people live, who they know socially, what private organizations they belong to, and so on”–matter, and result in, for example, innocent black guys getting shot disproportionately by police even when there is a black meritocratic sitting as president.
An interesting juxtaposition here is that in both cases discussed so far, we have a case of a system that is reaching its obsolescence due to the contradictions of individualism. In the case of actuarial policing (as it is done today; I think a properly sociological version of actuarial policing could be great), there’s the problem of considering criminals as individuals whose crimes are symptoms of their individual moral character. The solution to crime is to ostracize and contain the criminals by, e.g., putting them in prison. In the case of racial liberalism, there’s the problem of considering bigotry a symptom of individual moral character. The solution to the bigotry is to ostracize and contain the bigots by teaching them that it is socially unacceptable to express bigotry and keeping the worst bigots out of respectable organizations.
Could it be that our broken theories of both crime and bigotry both have the same problem, which is the commitment to moral individualism, by which I mean the theory that it’s individual moral character that is the cause of and solution to these problems? If a case of individual crime and individual bigotry is the result of, instead of an individual moral failing, a collective action problem, what then?
I still haven’t looked carefully into Deneen’s argument (see notes here), but I’m intrigued that his point may be that the crisis of liberalism may be, at its root, a crisis of individualism. Indeed, Kantian views of individual autonomy are really nice but they have not stood the test of time; I’d say the combined works of Habermas, Foucault, and Bourdieu have each from very different directions developed Kantian ideas into a more sociological frame. And that’s just on the continental grand theory side of the equation. I have not followed up on what Anglophone liberal theory has been doing, but I suspect that it has been going the same way.
I am wary, as I always am, of giving too much credit to theory. I know, as somebody who has read altogether too much of it, what little use it actually is. However, the notion of political and social consensus is one that tangibly affects my life these days. For this reason, it’s a topic of great personal interest.
One last point, that’s intended as constructive. It’s been argued that the appeal of individualism is due in part to the methodological individualism of rational choice theory and neoclassical economic theory. Because we can’t model economic interactions on anything but an individualistic level, we can’t design mechanisms or institutions that treat individual activity as a function of social form. This is another good reason to take seriously computational modeling of social forms.
Barabas, Chelsea, et al. “Interventions over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment.” arXiv preprint arXiv:1712.08238 (2017).
Deneen, Patrick J. Why Liberalism Failed. Yale University Press, 2018.
Gilman, Nils. “The Collapse of Racial Liberalism.” The American Interest (2018).
I don’t know much about China, really, so I’m always fascinated to learn more.
This FT article, “Anbang arrests demonstrates hostility to business”, by Jamil Anderlini, provides some wonderful historical context to a story about the arrest of an insurance oligarch.
In ancient times, merchants were at the very bottom of the four official social classes, below warrior-scholars, farmers and artisans. Although some became very rich they were considered parasites in Chinese society.
Ever since the Han emperors established the state salt monopoly in the second century BCE (remnants of which remain to this day), large-scale business enterprises have been controlled by the state or completely reliant on the favour of the emperor and the bureaucrat class.
In the 20th century, the Communist emperor Mao Zedong effectively managed to stamp out all private enterprise for a while.
Until the party finally allowed “capitalists” to join its ranks in 2002, many of the business activities carried out by the resurgent merchant class were technically illegal.
China’s rich lists are populated by entrepreneurs operating in just a handful of industries — particularly real estate and the internet.
Tycoons like Mr Wu who emerge in state-dominated sectors are still exceedingly rare. They are almost always closely linked to one of the old revolutionary families exercising enormous power from the shadows.
Everything about this is interesting.
First, in Western scholarship we rarely give China credit for its history of bureaucracy in the absence of capitalism. In the well-known Weberian account, bureaucracy is an institutional invention that provides regular rule of law so that capitalism can thrive. But China’s history has been statist “from ancient times”, with effective bureaucracy from the beginning. A managerialist history, perhaps.
Which makes the second point so unusual: why, given this long history of bureaucratic rule, are Internet companies operating in a comparatively unregulated way? This seems like a massive concession of power, not unlike how (arguably) the government of the United States conceded a lot of power to Silicon Valley under the Obama administration.
The article dramatically foreshadows a potential power struggle between Xi Jinping’s consolidated state and the tech giant oligarchs:
Now that Chinese President Xi Jinping has abolished his own term limits, setting the stage for him to rule for life if he wants to, the system of state patronage and the punishment of independent oligarchs is likely to expand. Any company or billionaire who offends the emperor or his minions will be swiftly dealt with in the same way as Mr Wu.
There is one group of Chinese companies with charismatic — some would say arrogant — founders that enjoy immense economic power in China today. They would seem to be prime candidates if the assault on private enterprise is stepped up.
Internet giants Alibaba, Tencent and Baidu are not only hugely profitable, they control the data that is the lifeblood of the modern economy. That is why Alibaba founder Jack Ma has repeatedly said, including to the FT, that he would gladly hand his company over to the state if Beijing ever asked him to. Investors in BABA can only hope it never comes to that.
That is quite the expression of feudal fealty from Jack Ma. Truly, a totally different business culture from that of the United States.
I’ve begun reading the recently published book, Why Liberalism Failed (2018), by Patrick Deneen. It appears to be making some waves in the political theory commentary. The author claims that it was 10 years in the making but was finished three weeks before the 2016 presidential election, which suggests that the argument within it is prescient.
I’m not far in yet.
There is an intriguing forward from James Davison Hunter and John M. Owen IV, the editors. Their framing of the book is surprisingly continental:
They declare that liberalism has arrived at its “legitimacy crisis”, a Habermasian term.
They claim that the core contention of the book is a critique of the contradictions within Immanuel Kant’s view of individual autonomy.
They compare Deneen with other “radical” critics of liberalism, of which they name: Marx, the Frankfurt School, Foucault, Nietzsche, Schmitt, and the Catholic Church.
In search of a litmus-test like clue as to where in the political spectrum the book falls, I’ve found this passage in the Foreword:
Deneen’s book is disruptive not only for the way it links social maladies to liberalism’s first principles, but also because it is difficult to categorize along our conventional left-right spectrum. Much of what he writes will cheer social democrats and anger free-market advocates; much else will hearten traditionalists and alienate social progressives.
Well, well, well. If we are to fit Deneen’s book into the conceptual 2-by-2 provided in Fraser’s recent work, it appears that Deneen’s political theory is a form of reactionary populism, rejecting progressive neoliberalism. In other words, the Foreword evinces that Deneen’s book is a high-brow political theory contribution that weighs in favor of the kind of politics that has heretofore been articulated only by intellectual pariahs.
I completely agree with this view on mastery from American fashion designer, writer, television personality, entrepreneur, and occasional cabaret star Isaac Mizrahi:
I’m a person who’s interested in doing a bunch of things. It’s just what I like. I like it better than doing one thing over and over. This idea of mastery—of being the very best at just one thing—is not in my future. I don’t really care that much. I care about doing things that are interesting to me and that I don’t lose interest in.
Mastery – “being the very best at just one thing” – doesn’t hold much appeal for me. I’m a very curious person. I like jumping between various creative endeavors that “are interesting to me and that I don’t lose interest in.” Guitar, web design, coding, writing, hand lettering – these are just some of the creative paths I’ve gone down so far, and I know that list will continue to grow.
I’ve found that my understanding of one discipline fosters a deeper understanding of other disciplines. New skills don’t take away from each other – they only add.
So no, mastery isn’t for me. The more creative paths I go down, the better. Keep ‘em coming.
Quartz recently profiled Charlie Munger, Warren Buffett’s billionaire deputy, who credits his investing success to not mastering just one field — investment theory — but instead “mastering the multiple models which underlie reality.” In other words, Munger is an expert-generalist. The term was coined by Orit Gadiesh, chairman of Bain & Co, who describes an expert-generalist as:
Someone who has the ability and curiosity to master and collect expertise in many different disciplines, industries, skills, capabilities, countries, and topics, etc. He or she can then, without necessarily even realizing it, but often by design:
Draw on that palette of diverse knowledge to recognize patterns and connect the dots across multiple areas.
Drill deep to focus and perfect the thinking.
The article goes on to describe the strength of this strategy:
Being an expert-generalist allows individuals to quickly adapt to change. Research shows that they:
Have more breakthrough ideas, because they pull insights that already work in one area into ones where they haven’t been tried yet.
Build deeper connections with people who are different from them, because they understand their perspectives.
Build more open networks, which allows them to serve as a connector between people in different groups. According to network science research, having an open network is the #1 predictor of career success.
All of this sounds exactly right. I had never thought about the benefits of being an expert-generalist, nor did I deliberately set out to be one (my natural curiosity got me here), but reading these descriptions gave form to something that previously felt intuitively true.
When writing music, ambient music composer Brian Eno makes music that’s pleasurable to listen to by switching between “maker” mode and “listener” mode. He says:
I just start something simple [in the studio]—like a couple of tones that overlay each other—and then I come back in here and do emails or write or whatever I have to do. So as I’m listening, I’ll think, It would be nice if I had more harmonics in there. So I take a few minutes to go and fix that up, and I leave it playing. Sometimes that’s all that happens, and I do my emails and then go home. But other times, it starts to sound like a piece of music. So then I start working on it.
I always try to keep this balance with ambient pieces between making them and listening to them. If you’re only in maker mode all the time, you put too much in. […] As a maker, you tend to do too much, because you’re there with all the tools and you keep putting things in. As a listener, you’re happy with quite a lot less.
In other words, Eno makes great music by experiencing it the way his listeners do: by listening to it.
This is also a great lesson for product development teams: to make a great product, regularly use your product.
By switching between “maker” and “listener” modes, you put yourself in your user’s shoes and see your work through their eyes, which helps prevent you from “put[ting] too much in.”
This isn’t a replacement for user testing, of course. We are not our users. But in my experience, it’s all too common for product development teams to rarely, if ever, use what they’re building. No shade – I’ve been there. We get caught on the treadmill of building new features, always moving on to the next without stopping to catch our breath and use what we’ve built. This is how products devolve into an incomprehensible pile of features.
Eno’s process is an important reminder to keep your focus on the user by regularly switching between “maker” mode and “listener” mode.
Continuing with what seems like a never-ending side project to get a handle on computational social science methods, I’m doing a literature review on ‘big data’ sociological methods papers. Recent reading has led to two striking revelations.
The first is that Tufekci’s 2014 critique of Big Data methodologies is the best thing on the subject I’ve ever read. What it does is very clearly and precisely lay out the methodological pitfalls of sourcing the data from social media platforms: use of a platform as a model organism; selecting on a dependent variable; not taking into account exogenous, ecological, or field factors; and so on. I suspect this is old news to people who have more rigorously surveyed the literature on this in the past. But I’ve been exposed to and distracted by literature that seems aimed mainly to discredit social scientists who want to work with this data, rather than helpfully engaging them on the promises and limitations of their methods.
The second striking revelation is that for the second time in my literature survey, I’ve found a reference to that time when the field of cultural sociology decided they’d had enough of Talcott Parsons. From (Bail, 2014):
The capacity to capture all – or nearly all – relevant text on a given topic opens exciting new lines of meso- and macro-level inquiry into cultural environments (Bail forthcoming). Ecological or functionalist interpretations of culture have been unpopular with cultural sociologists for some time – most likely because the subfield defined itself as an alternative to the general theory proposed by Talcott Parsons (Alexander 2006). Yet many cultural sociologists also draw inspiration from Mary Douglas (e.g., Alexander 2006; Lamont 1992; Zelizer 1985), who – like Swidler – insists upon the need for our subfield to engage broader levels of analysis. “For sociology to accept that no functionalist arguments work,” writes Douglas (1986, p. 43), “is like cutting off one’s nose to spite one’s face.” To be fair, cultural sociologists have recently made several programmatic statements about the need to engage functional or ecological theories of culture. Abbott (1995), for example, explains the formation of boundaries between professional fields as the result of an evolutionary process. Similarly, Lieberson (2000), presents an ecological model of fashion trends in child-naming practices. In a review essay, Kaufman (2004) describes such ecological approaches to cultural sociology as one of the three most promising directions for the future of the subfield.
I’m not sure what’s going on with all these references to Talcott Parsons. I gather that at one time he was a giant in sociology, but that then a generation of sociologists tried to bury him. Then the next generation of sociologists reinvented structural functionalism with new language–“ecological approaches”, “field theory”?
One wonders what Talcott Parsons did or didn’t do to inspire such a rebellion.
Bail, Christopher A. “The cultural environment: measuring culture with big data.” Theory and Society 43.3-4 (2014): 465-482.
Tufekci, Zeynep. “Big Questions for Social Media Big Data: Representativeness, Validity and Other Methodological Pitfalls.” ICWSM 14 (2014): 505-514.