School of Information Blogs

June 25, 2016

Ph.D. student

an experiment with ephemeral URLs

Friends,

I welcome feedback on an experimental feature exploring ephemerality and URLs, or “ephemerurls”. Here’s the idea: sometimes I’ve posted something on my website that I want to share with some colleagues, but the thing isn’t quite finished yet. I might want to post the URL in some forum (an IRC or Slack channel, an archived mailing list, or on Twitter), but I don’t want the generally accessible URL to be permanently, publicly archived in one of those settings. That is, I want to give out a URL, but the URL should only work temporarily.

Ephemerurl is a service I’ve built and deployed on my own site. Here’s how it works. Let’s say I’ve been working on a piece of writing, a static HTML page, that I want to share just for a little while for some feedback. Maybe I’m presenting the in-progress work to a group of people at an in-person or virtual meeting and want to share a link in the group’s chatroom. Here’s a screenshot of that page, at its permanent URL:

Screen shot of the in-progress page I want to share

I decide I want to share a link that will only work until 6pm this afternoon. So I change the URL, and add “/until6pm/” between “npdoty.name” and the rest of the URL. My site responds:

Screen shot of the ephemeral URL creation page

“Okay, Nick, here’s an ephemeral URL you can use.” Great, I copy and paste this opaque, short URL into the chatroom: https://npdoty.name/u/vepu

Right now, that URL will redirect to the original page. (But if you don’t see this email until after 6pm my time, you’ll instead get a 410 Gone error message.) And if the chatroom logs are archived after our meeting (which they often are in groups where I work), the logged URL won’t work as a permanent link.
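If you’re curious how little machinery this needs: the deployed service is a small PHP app (code linked below), but here’s a rough Python sketch of the core logic. The names, the in-memory storage, and the exact URL pattern here are invented for illustration, not taken from the actual implementation.

    import re
    import secrets
    from datetime import datetime, timedelta

    # token -> (destination URL, expiry); a real service would persist this mapping
    urls = {}

    def create_ephemeral(url, now=None):
        """Turn e.g. https://example.com/until6pm/drafts/essay.html into a short token."""
        now = now or datetime.now()
        match = re.match(r"(https?://[^/]+)/until(\d{1,2})(am|pm)/(.*)", url)
        if not match:
            raise ValueError("no /untilNpm/ segment in URL")
        host, hour, meridiem, rest = match.groups()
        hour = int(hour) % 12 + (12 if meridiem == "pm" else 0)
        expiry = now.replace(hour=hour, minute=0, second=0, microsecond=0)
        if expiry <= now:  # asking for "until 6pm" after 6pm means tomorrow
            expiry += timedelta(days=1)
        token = secrets.token_urlsafe(3)  # short and opaque, hard to guess
        urls[token] = (host + "/" + rest, expiry)
        return token

    def resolve(token, now=None):
        """Return an HTTP status and optional redirect target for /u/<token>."""
        now = now or datetime.now()
        if token not in urls:
            return 404, None
        destination, expiry = urls[token]
        if now > expiry:
            return 410, None  # 410 Gone: the ephemeral URL has expired
        return 302, destination

A handler for /u/&lt;token&gt; then just issues the 302 redirect while the link is live, and the 410 Gone afterward.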

Of course, if you follow a URL like that, you might not realize that it’s intended to be a time-boxed URL. So the static page provides a little disclosure to you, letting you know this might not be public, and suggesting that if you share the URL, you use the same ephemeral URL that you received.

Screen shot of the landing page with nudge

This builds on a well-known pattern. Private, “unguessable” links are a common way of building flexible privacy and access control into our use of the Web; they’re examples of Capability URLs. When you access a private or capability URL, sites will often provide a warning letting you know about the sharing norms that might apply:

YouTube screenshot with warning about private URL

But ephemerurls also provide a specific, informal ephemerality, another increasingly popular privacy feature. It’s not effective against a malicious attacker — if I don’t want you to see my content or I don’t trust you to follow some basic norms of sharing, then this feature won’t stop you, and I’m not sure anything on the Web really could — but it uses norms and the way we often share URLs to introduce another layer of control over sharing information. Snapchat is great not because it could somehow prevent a malicious recipient from taking a screenshot, but because it introduces a norm of disappearance, which makes a certain kind of informal sharing easier.

I’d like to see the same kinds of sharing available on the Web. Disappearing URLs might be one piece, but folks are also talking about easy ways to give social media posts a pre-determined lifetime, after which they’ll automatically disappear.

What do you think? Code, documentation, issues, etc. on GitHub.

Update: it’s been pointed out (thanks Seb, Andrew) that while I’ve built and deployed this for my own domain, it would also make sense to have a standalone service (you know, like bit.ly) that created ephemeral URLs that could work for any page on the Web without having to install some PHP. It’s like perma.cc, but the opposite. See issue #1.

Cheers,
Nick

P.S. Thanks to the Homebrew Website Club for their useful feedback when I presented some of this last month.

by npdoty@ischool.berkeley.edu at June 25, 2016 09:16 PM

June 10, 2016

Ph.D. student

Speculative and Anticipatory Orientations Towards the Future

This is part 3 in a 3 part series of posts based on work I presented at Designing Interactive Systems (DIS) this year on analyzing concept videos. Read part 1, part 2, find out more about the project on the project page, or download the full paper.

After doing a close reading and analyzing the concept videos for Google Glass (a pair of glasses with a heads up display) and Microsoft HoloLens (a pair of augmented reality goggles), we also looked at media reaction to these videos and these products’ announcements.

After both concept videos were released, media authors used the videos as a starting point to further imagine the future world with Glass and HoloLens, and the implications of living in those worlds. Yet they portrayed the future in two different ways: some discussed the future by critiquing the world depicted in the companies’ concept videos, while others accepted the depicted worlds. We distinguish between these two orientations, terming them speculative and anticipatory.

Some of the authors challenged the future narratives presented by the companies, an orientation we term speculative, inspired by speculative design (as discussed by Dunne & Raby and Gaver & Martin). Speculative orientations challenge corporate narratives and often present or explore alternative future visions; they may suggest an opportunity to change, refine, or consider other designs.

One example of this orientation is Kashmir Hill’s discussion of a possible future much different than Google’s concept video:

It’s easy to imagine lots of other situations in which it’d be attractive to be able to snap photos all of the time, whether with friends, on the subway, on a road trip, walking down the street, at the beach, at clubs, at bars, on an airplane […] We could all become surveillance cameras, but with legs and Instagram filters

Another example is Lore Sjoberg’s imaginings of different types of “smart” devices that might supplant Glass, such as smart hats or smart walking sticks, in some ways critiquing the perceived silliness of having computerized glasses.

A third example is Tom Scott’s Google Glass parody concept video, which shows other ways Glass might be used in the world – recording at inopportune times, showing users literal popup ads, and causing people to bump into unseen physical objects.

Written critiques, alternate subversive scenarios, and parodies provided reflections on what social experience and intimacy might mean with Glass, questioned Google’s motives, and explored how new social norms enabled by Glass may raise privacy concerns.

Alternatively, other authors largely accepted the corporate narratives of the future as true and imagined the world within those parameters, which we term anticipatory orientations (influenced by work on anticipation in Science & Technology Studies and Steinhardt & Jackson’s concept of “anticipation work”). These orientations foresee a singular future, and work to build, maintain, and move toward that particular vision. Anticipatory orientations become more likely as a technology moves closer to its public release, and they may suggest greater acceptance of a new product but less space for changing its design. For instance, with less space for critique or reconsideration of the design, some people began taking concrete steps to prepare for the arrival of the anticipated future, such as bars and other private establishments pre-emptively banning Glass due to privacy concerns, or the Stop the Cyborgs campaign.

Another example of the anticipatory orientation is Jessi Hempel’s description of the future with HoloLens, which followed the parameters of the future world presented in the HoloLens concept video.

you used to compute on a screen, entering commands on a keyboard. Cyberspace was somewhere else. […] In the very near future, you’ll compute in the physical world. […] What will this look like? Well, holograms.

Similarly, Patrick Moorhead imagines the use of HoloLens in line with the way it is portrayed in the video.

I like that the headset is only designed to be worn for a few hours a day for a very specific purpose […] That is one reason why I am considering the HoloLens as a productivity device first and entertainment second

These anticipatory orientations imagine the future, but take the videos’ depicted future at face value.

These two orientations are not mutually exclusive, but rather lie on a spectrum. However, distinguishing between them allows us to be more precise about the ways people discuss and imagine the future.

These orientations raise interesting questions for future work. What might it mean to design or present an artifact to evoke an anticipatory or speculative orientation? Is that even worth doing? What might it mean for a Kickstarter video to encourage one orientation over another? What might it mean to hold one orientation over another with regard to a technology that violates rights like privacy or free speech? As new technologies are created and distributed, it is worth asking how people imagine those technologies will be used in daily life.


by Richmond at June 10, 2016 07:40 AM

June 08, 2016

BioSENSE research project

does it capture emotions as they are, or as they are performed?




June 08, 2016 03:42 AM

June 07, 2016

Ph.D. student

A Closer Look at Google Glass’ & Microsoft HoloLens’ Concept Videos, and Surveillance Concerns

This is part 2 in a 3 part series of posts based on work I presented at Designing Interactive Systems (DIS) this year on analyzing concept videos. Read part 1, part 3, find out more about the project on the project page, or download the full paper.

In this post, I walk through our close reading of the Glass and HoloLens concept videos and how they imagine potential futures. I then discuss how this analysis can be used to think about surveillance issues and other values associated with these representations of the future.

Google Glass Concept Video

Google’s concept video “Project Glass: One Day…” was released on April 4, 2012. It portrays a day in the life of a single male Glass user as he makes his way around New York City, from when he wakes up until sunset. The video is shot entirely from a first person point of view, putting the viewer in the place of a person wearing Glass.

Take a look at the video below. While you’re watching, pay attention to:

  • What the device looks like
  • Where the contexts and settings of use are
  • Who the user is

You can see right away that the video is shot from a first person point of view. As for what the device looks like – that was a bit of a trick question! The video never shows what the device actually looks like, which suggests that it is worn all the time, not taken on and off between interactions.

As for contexts of use, the wearer walks in the street, visits a bookstore, meets a friend at a food truck, and makes a video call outdoors. Notably, except for the beginning at home, all of the settings of use are public or semi-public spaces, mostly outdoors.

Looking at the single user, the video also starts to paint a picture of who Glass users might be: a young, rather affluent white male, one who can have a nice apartment and spend a leisurely day in New York City rather than working (presumably on a weekday, judging by the businessmen walking in the street near the beginning of the video).

The video begins to portray Glass as a device that seems to fade into the background, is always worn and always turned on throughout the day, and is used across many contexts by one person. Glass is framed as a device that invisibly fits into the daily life patterns of an individual. It is highly mobile, and augments a user’s ability to communicate, navigate, and gather contextual information from his or her surroundings.

Microsoft HoloLens Concept Video

Microsoft’s concept video “Microsoft HoloLens – Transform your world with holograms” was released on January 21, 2015. It shows HoloLens as a set of head-worn goggles that projects holograms around the user. The video imagines various settings in which different users might use HoloLens’ augmented reality holograms, depicting a number of different people doing different tasks.

Take a look at the video below. Again, let’s try to pay attention to:

  • What the device looks like
  • Where the contexts and settings of use are
  • Who the users are

The third person point of view is immediately apparent (though some scenes switch briefly into a first person point of view). This third person view allows us to see what the actual HoloLens device might look like: something similar to a large pair of black ski goggles.

The contexts and settings of use of HoloLens are all indoors, taking place in either an office or a home. Furthermore, the video shows multiple users each doing a single task in a different location: a woman uses HoloLens to design a motorcycle at work, while separately a man uses it to play Minecraft at home.

The video shows multiple users of HoloLens, both male and female, although some stereotypical gender roles are reinforced, such as a man assisting a woman in fixing a sink. Users of HoloLens are portrayed as adults in the professional working class.

The video begins to portray HoloLens as a device that’s used for particular tasks and activities, and isn’t worn all the time as it’s large and bulky. Most of its uses are indoors, centered around the office or home. HoloLens is not used by one person for doing everything in multiple places, but rather it is used by many people for doing one thing in specific places. Potential HoloLens users are those in the professional working class.

Understanding Surveillance Concerns through the Videos

These videos are one way in which people start to form ideas about what these new technologies will do, for whom, and by what means. (Remember that these concept videos were released over a year before the public had any actual access to these devices.) People began to associate values and narratives with these technologies.

We can see how a close reading of the concept videos’ imagining of the future can surface a discussion of values in imagined futures; here we focus on surveillance concerns.

There are a couple of things in the video portrayals that make it easier to see Glass, more so than HoloLens, as a privacy-infringing surveillance device. Glass is portrayed as invisible and always on, with the potential to always record, in any location or context, presenting possible surveillance concerns. The video shows only one person wearing Glass the entire time, which suggests a power imbalance between the augmented capabilities of the wearer and everyone else who doesn’t get to wear Glass.

HoloLens, by contrast, is depicted as a very visible, very bulky device that can be easily seen. Furthermore, many different people wear it in the video, and they’re seen putting it on and taking it off, so it doesn’t set up the same type of power imbalance between wearer and non-wearer that Glass does; here, everyone is at times a wearer and a non-wearer of HoloLens. Its use is also seemingly limited to a few specific places – at work or at home – so it’s not portrayed as a device that could record anything at any time.

The close reading of the videos and the analysis of the media articles give us some insight into the ways values like privacy were portrayed and debated. This is important because the videos don’t necessarily set out to say “we’re going to show something that violates privacy or upholds it”; rather, the values are latent, implicit, and embedded within the narratives of the video. This type of critical analysis or close reading of concept videos allows us to explore these implicit values.

Next, read Part 3, Speculative and Anticipatory Orientations Towards the Future


by Richmond at June 07, 2016 12:12 AM

June 05, 2016

Ph.D. student

Analyzing Concept Videos

This is part 1 in a 3 part series of posts based on work I presented at Designing Interactive Systems (DIS) this year on analyzing concept videos. Read part 2, part 3, find out more about the project on the project page, or download the full paper.

So What is a Concept Video?

I define a concept video as a video created by a company, showing a novel device or product that is not yet available for public purchase, though it might be in a few years. Concept videos depict what the world might be like if that device or product existed, and how people might interact with it or use it. An early example is Apple’s Knowledge Navigator video, while more contemporary examples include Amazon’s Prime Air video, Google’s Glass video, and Microsoft’s HoloLens video. (I’ll take a closer look at the latter two in a following blog post.) Concept videos embed a vision about the social and technical future of computing: how computing will be done, for whom, by what means, and what the norms of that world will be.

Concept videos are related to other design practices as well. One such practice is design fiction, described by Julian Bleecker as a practice that exists in the space between science fiction and science fact. Bruce Sterling describes design fictions as diegetic prototypes. Importantly, the artifacts created through design fiction exist within a narrative world, story, or fictional reality. This can take many forms: text, such as fictional research abstracts or fictional research papers; short films; or the creation of artifacts in a fictional world, as in steampunk. Generally, these are used to explore alternative possibilities for technology design and social life. Design fiction is related to speculative design – by creating fictional worlds and yet-to-be-realized design concepts, it tries to understand possible alternative futures. Placing corporate concept videos in the realm of design fiction frames the video’s narrative not as a prediction of the future, but as a representation of one possible future out of many. It also allows us to interpret the video and investigate the ideas and values it promotes. Design fictions are also discursive in the sense that their stories have a discourse, and they interact with other social discourses that are occurring.

Yet we also have to recognize the corporate source of the videos. Unlike other design fictions, which invite users into a narrative world to imagine technologies as if they were real, many corporate concept videos portray technologies that will be real in some form. These videos more directly serve corporate purposes as well: while they do not explicitly direct users to purchase a particular product, they do reflect advertising imperatives. Concept videos also share qualities with “vision videos,” or corporate research videos; however, these tend to depict more distant futures than concept videos, and aren’t about specific products so much as broader technological worlds. Concept videos also contain elements of video prototyping, which has a long history in HCI. Video prototypes often show short use scenarios of a technology, or simulate its use and interaction, but they’re typically used internally within a company or team, or as part of a user testing process. Concept videos show similar things, but are also public-facing artifacts, in dialog with other types of public conversation.

Why Should We Analyze Concept Videos?

Prior work shows that representations of technology affect broader perceptions, reactions, and debate. For instance, commercials and news articles create sociotechnical narratives about smartphones and smartphone users. The circulation of stories and the discourses that arise frame a debate about what it means to be a smartphone user, and associate moral values with using a smartphone. Representations of technologies influence the way people imagine future technologies, build broader collective narratives about what technologies mean, and influence technological development and use.

Concept videos similarly create a narrative world that takes place in the future, depicting technical artifacts and how humans interact with them. Furthermore, the public release of the videos provides a starting point for public discussion and discourse about the technologies’ social implications – these videos are usually released in advance of the actual products, allowing a broader public audience to engage with and contest the politics and values of the presented futures.

The lens of design fiction lets us analyze the videos’ future-oriented narratives. Analyzing corporate uses of design fictions helps surface aspects of the companies’ narratives that may not be at their central focus, but could have significant implications for people if those narratives come to fruition. Analyzing the creation – and contestation – of narratives by companies and the media response also provides insight into the processes that embed or associate social and political values with new technologies.

Concept videos depict narratives which imply what technical affordances technologies have, but there may be gaps between portrayal and actual capabilities that we do not know about. The videos also do not show the technical mechanisms that enable the design and function of the technology. However, these ambiguities should be viewed as features, not bugs, of concept videos. Concept videos’ usefulness, like design fictions, comes from their narrative features and their ability to elicit multiple interpretations, reflections, and questions. Concept videos’ representations of technology should not be seen as final design solutions, but a work in progress still amenable to change.

How Might We Analyze Concept Videos?

In our approach, we adapted Gillian Dyer’s method for investigating visual signs in advertisements by focusing on five main signals:

  • physical appearance of people,
  • people’s emotions,
  • people’s behavior and activities,
  • props and physical objects, and
  • settings

We identified these elements in each video, watching each video several times. We also looked at additional features, including:

  • visual camera techniques such as camera angle and focus, and
  • narration or dialogue that takes place in the video

After identifying these elements, we found that asking several questions helped us surface further insights, and helped us interpret what types of values the various video elements may signify. This allows us to do a close reading and critical analysis of videos. We note that the following is not an exhaustive list, and it is likely to expand as more analyses are done on different types of concept videos.

  • How are technologies portrayed? This includes looking at the design and form of artifacts, their technical affordances, and possible values they embody.
  • How are humans portrayed? Who are users and non-users of the technology? This draws on factors like behaviors, appearance, emotion, and setting to see what types of people are imagined to be interacting with the technology.
  • How is the sociotechnical system portrayed? This focuses on how humans and the technology interact together, the settings and contexts in which they interact, and raises questions about who or what has agency over different types of interactions.
  • What is not in the video? What populations or needs are unaddressed? What would it look like if certain technical capabilities are taken to the extreme? Can we imagine alternate futures from what the video depicts?

Our goal in analyzing concept videos is not to argue that our interpretation is the only “correct” reading. Rather our goal is to present a method that allows viewers to surface ideas, questions, and reflections while watching concept videos, and to consider how the presentation of technologies relates and responds to public discourse around their introduction into society.

Because we analyze and interpret the corporate concept videos as viewers, we do not know about the process behind their creation, making the creators’ intent difficult to discern. Unlike design fictions published in other venues or formats, there is no accompanying paratext, essay, or academic paper describing the authors’ intent or process. Regardless of intent, we find that the futures portrayed by the videos are ideological and express cultural values, whether consciously or unconsciously, explicitly or implicitly.

Takeaway

Commercial concept videos help us acknowledge and explore the ways artifacts represent values at a time before the design of a company’s product is finalized. Analysis and critique of these videos can surface potential problems at a time when designs can still be changed. Given that, we call on the HCI and design communities to leverage their expertise and engage in this type of critique. These communities can open a space for the discussion of cultural values embedded in the concept videos, and promote or explore alternative values.

Read Part 2, A Closer Look at Google Glass’ & Microsoft HoloLens’ Concept Videos, and Surveillance Concerns


by Richmond at June 05, 2016 08:37 PM

Ph.D. student

The FTC and pragmatism; Hoofnagle and Holmes

I’ve started working my way through Chris Hoofnagle’s Federal Trade Commission Privacy Law and Policy. Where I’m situated at the I School, there’s a lot of representation and discussion of the FTC in part because of Hoofnagle’s presence there. I find all this tremendously interesting but a bit difficult to get a grip on, as I have only peripheral experiences of actually existing governance. Instead I’m looking at things with a technical background and what can probably be described as overdeveloped political theory baggage.

So a clearly written and knowledgeable account of the history and contemporary practice of the FTC is exactly what I need to read, I figure.

At the risk of commenting on a book I’ve only just cracked open, I can say that it reads so far as, not surprisingly, a favorable account of the FTC and its role in privacy law. In broad strokes, I’d say Hoofnagle’s narrative is that while the FTC started out as a compromise between politicians with many different positions on trade regulation, and while it has at times had “mediocre” leadership, the FTC is now run by selfless, competent experts with the appropriate balance of economic savvy and empathy for consumers.

I can’t say I have any reason to disagree. I’m not reading for either a critique or an endorsement of the agency. I’m reading with my own idiosyncratic interests in mind: algorithmic law and pragmatist legal theory, and the relationship between intellectual property and antitrust. I’m also learning (through reading) how involved the FTC has been in regulating advertising, which endears me to the agency because I find most advertising annoying.

Missing as I am any substantial knowledge of 20th century legal history, I’m intrigued by resonances between Hoofnagle’s account of the FTC and Oliver Wendell Holmes Jr.’s “The Path of the Law”, which I mentioned earlier. Apparently there’s some tension around the FTC, as some critics would like to limit its powers by holding it more narrowly accountable to common law, as opposed to (if I’m getting this right) a more broadly scoped administrative law that, among other things, allows it to employ skilled economists and technologists. As somebody who has been intellectually very informed by American pragmatism, I’m pleased to notice that Holmes himself would probably have approved of the current state of the FTC:

At present, in very many cases, if we want to know why a rule of law has taken its particular shape, and more or less if we want to know why it exists at all, we go to tradition. We follow it into the Year Books, and perhaps beyond them to the customs of the Salian Franks, and somewhere in the past, in the German forests, in the needs of Norman kings, in the assumptions of a dominant class, in the absence of generalized ideas, we find out the practical motive for what now best is justified by the mere fact of its acceptance and that men are accustomed to it. The rational study of law is still to a large extent the study of history. History must be a part of the study, because without it we cannot know the precise scope of rules which it is our business to know. It is a part of the rational study, because it is the first step toward an enlightened scepticism, that is, towards a deliberate reconsideration of the worth of those rules. When you get the dragon out of his cave on to the plain and in the daylight, you can count his teeth and claws, and see just what is his strength. But to get him out is only the first step. The next is either to kill him, or to tame him and make him a useful animal. For the rational study of the law the blackletter man may be the man of the present, but the man of the future is the man of statistics and the master of economics. It is revolting to have no better reason for a rule of law than that so it was laid down in the time of Henry IV. It is still more revolting if the grounds upon which it was laid down have vanished long since, and the rule simply persists from blind imitation of the past. (Holmes, 1897)

These are strong words from a Supreme Court justice about the limitations of common law! It’s also a wholehearted endorsement of quantified science as the basis for legal rules. Perhaps what Holmes would have preferred is a world in which statistics and economics themselves became part of the logic of law. However, he takes pains to point out how often legal judgment itself does not depend on logic so much as on the unconscious biases of judges and juries, especially with respect to questions of “social advantage”:

I think that the judges themselves have failed adequately to recognize their duty of weighing considerations of social advantage. The duty is inevitable, and the result of the often proclaimed judicial aversion to deal with such considerations is simply to leave the very ground and foundation of judgments inarticulate, and often unconscious, as I have said. When socialism first began to be talked about, the comfortable classes of the community were a good deal frightened. I suspect that this fear has influenced judicial action both here and in England, yet it is certain that it is not a conscious factor in the decisions to which I refer. I think that something similar has led people who no longer hope to control the legislatures to look to the courts as expounders of the constitutions, and that in some courts new principles have been discovered outside the bodies of those instruments, which may be generalized into acceptance of the economic doctrines which prevailed about fifty years ago, and a wholesale prohibition of what a tribunal of lawyers does not think about right. I cannot but believe that if the training of lawyers led them habitually to consider more definitely and explicitly the social advantage on which the rule they lay down must be justified, they sometimes would hesitate where now they are confident, and see that really they were taking sides upon debatable and often burning questions.

What I find interesting about this essay is that it somehow endorses both the use of economics and statistics in advancing legal thinking and what has become critical legal theory, with its specific consciousness of the role of social power relations in law. So often in contemporary academic discourse, especially in discussions of regulating technology businesses, these approaches to law are considered opposed. Perhaps a more politically centered position, if there were one today, could appropriately be called pragmatist.

Perhaps quixotically, I’m very interested in the limits of these arguments and their foundation in legal scholarship because I’m wondering to what extent computational logic can become a first class legal logic. Holmes’s essay is very concerned with the limitations of legal logic:

The fallacy to which I refer is the notion that the only force at work in the development of the law is logic. In the broadest sense, indeed, that notion would be true. The postulate on which we think about the universe is that there is a fixed quantitative relation between every phenomenon and its antecedents and consequents. If there is such a thing as a phenomenon without these fixed quantitative relations, it is a miracle. It is outside the law of cause and effect, and as such transcends our power of thought, or at least is something to or from which we cannot reason. The condition of our thinking about the universe is that it is capable of being thought about rationally, or, in other words, that every part of it is effect and cause in the same sense in which those parts are with which we are most familiar. So in the broadest sense it is true that the law is a logical development, like everything else. The danger of which I speak is not the admission that the principles governing other phenomena also govern the law, but the notion that a given system, ours, for instance, can be worked out like mathematics from some general axioms of conduct. This is the natural error of the schools, but it is not confined to them. I once heard a very eminent judge say that he never let a decision go until he was absolutely sure that it was right. So judicial dissent often is blamed, as if it meant simply that one side or the other were not doing their sums right, and if they would take more trouble, agreement inevitably would come.

This mode of thinking is entirely natural. The training of lawyers is a training in logic. The processes of analogy, discrimination, and deduction are those in which they are most at home. The language of judicial decision is mainly the language of logic. And the logical method and form flatter that longing for certainty and for repose which is in every human mind. But certainty generally is illusion, and repose is not the destiny of man. Behind the logical form lies a judgment as to the relative worth and importance of competing legislative grounds, often an inarticulate and unconscious judgment, it is true, and yet the very root and nerve of the whole proceeding. You can give any conclusion a logical form. You always can imply a condition in a contract. But why do you imply it? It is because of some belief as to the practice of the community or of a class, or because of some opinion as to policy, or, in short, because of some attitude of yours upon a matter not capable of exact quantitative measurement, and therefore not capable of founding exact logical conclusions. Such matters really are battle grounds where the means do not exist for the determinations that shall be good for all time, and where the decision can do no more than embody the preference of a given body in a given time and place. We do not realize how large a part of our law is open to reconsideration upon a slight change in the habit of the public mind. No concrete proposition is self evident, no matter how ready we may be to accept it, not even Mr. Herbert Spencer’s “Every man has a right to do what he wills, provided he interferes not with a like right on the part of his neighbors.”

For Holmes, nature can be understood through a mathematized physics and is in this sense logical. But the law itself is not logical in the narrow sense of providing certainty about concrete propositions and the legal interpretation of events.

I wonder whether the development of more flexible probabilistic logics, such as those that inform contemporary machine learning techniques, would have for Holmes adequately bridged the gap between the logic of nature and the ambiguity of law. These probabilistic logics are designed to allow for precise quantification of uncertainty and ambiguity.
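To make “precise quantification of uncertainty” concrete, here is a toy Bayesian update in Python, the sort of elementary calculation these probabilistic logics build on (the numbers are invented for illustration):

    # How strongly should we believe a hypothesis after seeing a piece of evidence?
    prior = 0.3            # initial degree of belief in the hypothesis
    likelihood = 0.9       # P(evidence | hypothesis is true)
    false_positive = 0.2   # P(evidence | hypothesis is false)

    # Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
    evidence_prob = likelihood * prior + false_positive * (1 - prior)
    posterior = likelihood * prior / evidence_prob
    print(round(posterior, 2))  # 0.66: belief strengthened, doubt still quantified

The point is not the arithmetic but that belief, evidence, and doubt each get a number, which is exactly the kind of exact quantitative measurement Holmes found missing from legal logic.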

This is not a purely academic question. I’m thinking concretely about applications to regulation. Some of this has already been implemented. I’m thinking about Datta, Tschantz, and Datta’s “Automated Experiments on Ad Privacy Settings: A Tale of Opacity, Choice, and Discrimination” (pdf). I know several other discrimination auditing tools have been developed by computer science researchers. What is the legal status of these tools? Could they or should they be implemented as a scalable or real-time autonomous system?

I was talking to an engineer friend the other day and he was telling me that internally, Google has a team responsible for building the automated system that tests all of its other automated systems to make sure they adhere to its own internal privacy standards. This was comforting to hear and not a surprise, as I get the sense from conversations I’ve had with Googlers that they are in general a very ethically conscientious company. What’s distressing to me is that Google may have more powerful techniques available for self-monitoring than the government has for regulation. This is because (I think…again my knowledge of these matters is actually quite limited) at Google they know when a well-engineered computing system is going to perform better than a team of clerks, and so developing this sort of system is considered worthy of investment. It will be internally trusted as much as any other internal expertise. Whereas in the court system, institutional inertia and dependency on discursive law mean that at best this sort of system can be brought in as an expensive and not entirely trusted external source.

What I’d like to figure out is to what extent agency law in particular is flexible enough to be extended to algorithmic law.


by Sebastian Benthall at June 05, 2016 02:20 AM

June 03, 2016

BioSENSE research project

This startup "makes your app into a habit"

This startup "makes your app into a habit":

This rhetoric about “hacking users”:

HACKING YOUR USERS ISN’T LUCK: IT’S (MAD) SCIENCE.

Reinforce a user at the perfect moment, and they’ll stay longer and do more. That little burst of dopamine they feel? That’s the brain’s engagement fuel. But when is that perfect, unexpected moment? And how do you know?

We use neuroscience to tell your app when to reinforce a user at that perfect moment. Optimized for each user. Adapting over time.

It’s literally an API for Dopamine. And cheaper than hiring a PhD.

June 03, 2016 09:46 PM

Ph.D. student

algorithmic law and pragmatist legal theory: Oliver Wendell Holmes Jr. “The Path of the Law”

Several months ago I was taken by the idea that in the future (and maybe, depending on how you think about it, already in the present) laws should be written as computer algorithms. While the idea that “code is law” and that technology regulates is by no means original, what I thought perhaps provocative was the positive case for the (re-)implementation of the fundamental laws of the city or state in software code.

The argument went roughly like this:

  • Effective law must control a complex society
  • Effective control requires social and political prediction.
  • Unassisted humans are not good at social and political prediction. For this conclusion I drew heavily on Philip Tetlock’s work in Expert Political Judgment.
  • Therefore laws, in order to keep pace with the complexity of society, should be implemented as technical systems capable of bringing data and machine learning to bear on social control.

Science fiction is full of both dystopias and utopias in which society is literally controlled by a giant, intelligent machine. Avoiding either extreme, I just want to make the modest point that there may be scalability problems with law and regulation based on discourse in natural language. To some extent the failure of the state to provide sophisticated, personalized regulation in society has created myriad opportunities for businesses to fill these roles. Now there’s anxiety about the relationship between these businesses and the state as they compete for social regulation. To the extent that businesses are less legitimate rulers of society than the state, it seems a practical, technical necessity that the state adopt the same efficient technologies for regulation that businesses have. To do otherwise is to become obsolete.

There are lots of reasons to object to this position. I’m interested in hearing yours and hope you will comment on this and future blog posts or otherwise contact me with your considered thoughts on the matter. To me the strongest objection is that the whole point of the law is that it is based on precedent, and so any claim about the future trajectory of the law has to be based on past thinking about the law. Since I am not a lawyer and I know precious little about the law, you shouldn’t listen to my argument because I don’t know what I’m talking about. Q.E.D.

My counterargument to this is that lots of academics opine about things they don’t have particular expertise in. One way to get away with this is by deferring to somebody else who has credibility in the field of interest. This is just one of several reasons why I’ve been reading “The Path of the Law”, a classic essay about pragmatist legal theory written by Supreme Court Justice Oliver Wendell Holmes Jr. in 1897.

One of the key points of this essay is that it is a mistake to consider the study of law the study of morality per se. Rather, the study of law is the attempt to predict the decisions that courts will make in the future, based on the decisions courts have made in the past. What courts actually decide is based in part on legal precedent but also on the unconscious inclinations of judges and juries. In ambiguous cases, different legal framings of the same facts will be in competition, and the judgment will give weight to one interpretation or another. Perhaps the judge will attempt to reconcile these differences into a single, logically consistent code.

I’d like to take up the arguments of this essay again in later blog posts, but for now I want to focus on the concept of legal study as prediction. I think this demands focus because while Holmes, like most American pragmatists, had a thorough and nuanced understanding of what prediction is, our mathematical understanding of prediction has come a long way since 1897. Indeed, it is a direct consequence of these formalizations and implementations of predictive systems that we today see so much tacit social regulation performed by algorithms. We know now that effective prediction depends on access to data and the computational power to process it according to well-known algorithms. These algorithms can optimize themselves to such a degree that their specific operations are seemingly beyond the comprehension of the people affected by them. Some lawyers have argued that this complexity should not be allowed to exist.

What I am pointing to is a fundamental tension between the requirement that practitioners of the law be able to predict legal outcomes, and the fact that the logic of the most powerful predictive engines today is written in software code, not words. This is because of physical properties of computation and prediction that are not likely to ever change. And since a powerful predictive engine can just as easily use its power to be strategically unpredictable, this presents an existential challenge to the law. It may simply be impossible for lawyers, acting as human lawyers have for hundreds of years, to effectively predict and therefore regulate powerful computational systems.

One could argue that this means that such powerful computational systems should simply be outlawed. Indeed, this is the thrust of certain lawyers’ arguments. But if we believe that these systems are not going to go away, perhaps because they won’t allow us to regulate them out of existence, then our only viable alternative to suffering under their lawless control is to develop a competing system of computational legalism with the legitimacy of the state.


by Sebastian Benthall at June 03, 2016 04:48 AM

June 02, 2016

Ph.D. student

second-order cybernetics

The mathematical foundations of modern information technology are:

  • The logic of computation and complexity, developed by Turing, Church, and others. These mathematics specify the nature and limits of the algorithm.
  • The mathematics of probability and, by extension, information theory. These specify the conditions and limitations of inference from evidence, and the conditions and limits of communication.

Since the discovery of these mathematical truths and their myriad applications, there have been those who have recognized that these truths apply both to physical objects, such as natural life and artificial technology, and to lived experience, mental concepts, and social life. Humanity and nature obey the same discoverable, mathematical logic. This allowed for a vision of a unified science of communication and control: cybernetics.

There has been much intellectual resistance to these facts. One of the most cogent examples is Understanding Computers and Cognition, by Terry Winograd and Fernando Flores. Terry Winograd is the AI professor who advised the founders of Google. His credentials are beyond question. And so the fact that he coauthored a critique of “rationalist” artificial intelligence with Fernando Flores, a Chilean entrepreneur, politician, and philosophy PhD, is significant. In this book, the two authors base their critique of AI on the work of Humberto Maturana, a second-order cyberneticist who believed that life’s organization and phenomenology could be explained by a resonance between organism and environment: structural coupling. Theories of artificial intelligence are incomplete when not embedded in a more comprehensive theory of the logic of life.

I’ve begun studying this logic, which was laid out by Francisco Varela in 1979. Notably, like the other cybernetic logics, it is an account of both physical and phenomenological aspects of life. Significantly, Varela claims that his work is a foundation for an observer-inclusive science, which addresses some of the paradoxes of the physicist’s conception of the universe and humanity’s place in it.

My hunch is that these principles can be applied to social scientific phenomena as well, as organizations are just organisms bigger than us. This is a rather strong claim and difficult to test. However, after years of study it seems to me to be the necessary conclusion of available theory. It also seems consistent with recent trends in economics towards complexity and institutional economics, and with the now rather widespread intuition that the economy functions as a complex ecosystem.

This would be a victory for science, if only we could formalize these intuitions well enough either to make these theories testable, or to make them so communicable as to be recognized as ‘proved’ by anyone with the wherewithal to study them.


by Sebastian Benthall at June 02, 2016 03:57 AM

May 30, 2016

Ph.D. student

the inertia of secular materialism

Throughout graduate school, I’ve been enthralled with ideas that I’ve learned from sources that on the surface have very little to do with my department. The School of Information chases the most important trends in politics and technology, teaches the most relevant skills. It has a wonderful grasp of today’s problem space.

Where I’ve had to look outside of the department, and often outside of the university entirely, is for the solution space. If I think very seriously about any of the problems raised in the course of my academic work, I wind up learning about topics that nobody around me claims expertise in. Complex systems theory, especially ecological modeling, has seemed extraordinarily pertinent. Jungian psychology, which I’ve learned about from several independent advisers, colleagues, and sources, but have never once heard a professor mention. The work of Maturana and Varela, which is cited and highly regarded by Winograd and Flores and Luhmann, among others, but not picked up as scientific material.

What these three schools of thought have in common is that they began from rather conventional scientific roots and, at the height of their theory, began to see things in a way that became, for lack of a better way to put it, spiritually touched. Ulanowicz’s ecological theory is inspired by Bateson, and leads to conclusions about life and its organization that challenge a linear physical conception of causation. Jung began as Freud’s student studying repressed desires and wound up with a theory of the collective unconscious, which without much interpretation becomes a kind of synonym for God. Francisco Varela began as a biologist and wound up being instrumental in bringing the teachings of the Dalai Lama to the West.

These are not ideas that are likely to be taken seriously by any mainstream academic. I’ve heard that one of the most devastating critiques you can make of a scientific theory is that it is “crypto-theological”. If you want to throw shade on some scientists, suggest that they are attempting to be a priesthood (e.g. “Computational research techniques are not barometers of the social. They produce hieroglyphs: shaped by the tool by which they are carved, requiring of priestly interpretation, they tell powerful but often mythological stories — usually in the service of the gods.”). Cosma Shalizi points out that this is fatuous rhetoric: “When one mob of secular materialists accuses another of being quasi-theologians, it’s an almost sure sign that they’ve run out of good arguments against them.”

But what it really indicates is how averse mainstream academics are to considering theology at all. Scientists–even and perhaps especially the professional pseudoscientists in the “social sciences”–are forever trying to distinguish themselves intellectually from religion. But it doesn’t take a degree in the social sciences to see what’s obvious, which is that every social scientific “discipline” has its own ideology, its own unquestioned dogmas, and a selective priesthood. The academic establishment is a secular materialist clerical order expanded to address the ideological needs of modern society in its vastness and complexity.

The entire process of getting a doctorate as a credential is a way of limiting access to the priesthood. This is similar to other kinds of credentialing and licensing programs. The strategy is the same: espouse an ethical doctrine in order to justify the market restriction as good for public welfare so that you can restrict the distribution of licenses, increasing the market value of your services. If this is too cynical a view, forgive me: I’ve been advised by close friends that I should simply accept the hypocrisy of the university as a given, as this will make my life easier professionally. Sadly I have to conclude that if hypocrisy in education is tolerated, then indeed the public education system is vulnerable to libertarian arguments for its defunding and dissolution. This is not an outcome I want, because I think education is sacred.

So instead I have to maintain that the purpose of education should be to train people with real skills and teach them true knowledge. Herein lies the rub: bring up “truth” to an academic and you will get laughed at. No, no: there are only models, frameworks, disciplines, etc. Everything is constructed, etc. This is again partly a matter of professional necessity–if anybody really knows anything, then logically that entails that people who don’t know it are ignorant of something, and that leaves them vulnerable to the intellectual authority of another. As the contemporary academic environment is set up non-hierarchically in order to preserve professional peace, it has become politically correct to deny the truth of any ideas at all.

For these systemic reasons, let alone the political climate in the United States and the ugly reputations of religious institutions, there is tremendous resistance to anything that looks like theological affirmation in academic thought. The world must be full of problems so that academics can endlessly complain about them and search for solutions. If others start to find solutions, they are critiqued for being “solutionist”. The cycle continues. The possibility that everything might be just fine, that people should live and let live, that it’s the anxious political interventions that cause the problems solved later by more anxious political interventions–this is a threatening possibility because it gives people less to do. Boredom, as much as necessity, drives politics. Political conflict is created so that the otherwise satisfied can compete for status, prestige. Priests battling priests, all godless.

It does make me wonder what would happen if scientific discoveries were able to put these debates to rest.


by Sebastian Benthall at May 30, 2016 04:11 PM

May 29, 2016

BioSENSE research project

May 23, 2016

Ph.D. student

discovering agency in symbolic politics as psychic expression of Blau space

If the Blau space is exogenous to manifest society, then politics is an epiphenomenon. There will be hustlers; there will be the oscillations of who is in control. But there is no agency. Particularities are illusory, much as how in quantum field theory the whole notion of the ‘particle’ is due to our perceptual limitations.

An alternative hypothesis is that the Blau space shifts over time as a result of societal change.

Demographics surely do change over time. But this does not in itself show that Blau space shifts are endogenous to the political system. We could possibly attribute all Blau space shifts to apolitical factors, for example population growth and natural resource availability. This is the geographic determinism stance. (I’ve never read Guns, Germs, and Steel… I’ve heard mixed reviews.)

Detecting political agency within a complex system is bound to be difficult because it’s a lot like trying to detect free will, only with a more hierarchical ontology. Social structure may or may not be intelligent. Our individual ability to determine whether it is or not will be very limited. Any individual will have a limited set of cognitive frames with which to understand the world, most of them acquired in childhood. While it’s a controversial theory, the Lakoff thesis that whether one is politically liberal or conservative depends on one’s relationship with one’s parents is certainly very plausible. How does one relate to authority? Parental authority is replaced by state and institutional authority. The rest follows.

None of these projects are scientific. This is why politics is so messed up. Whereas the Blau space is an objective multidimensional space of demographic variability, the political imaginary is the battleground of conscious nightmares in the symbolic sphere. Pathetic humanity, pained by cruel life, fated to be too tall, or too short, born too rich or too poor, disabled, misunderstood, or damned to mediocrity, unfurls its anguish in so many flags in parades, semaphore, and war. But what is it good for?

“Absolutely nothin’!”

I’ve written before about how I think Jung and Bourdieu are an improvement on Freud and Habermas as the basis of a unifying political ideal. Whereas for Freud psychological health is the rational repression of the id so that the moralism of the superego can hold sway over society, Jung sees the spiritual value of the unconscious. All literature and mythology is an expression of emotional data. Awakening to the impersonal nature of one’s emotions–as they are rooted in a collective unconscious constituted by history and culture as well as biology and individual circumstance–is necessary for healthy individuation.

So whereas Habermasian direct democracy, being Freudian through the Frankfurt School tradition, is a matter of rational consensus around norms, presumably coupled with the repression of that which does not accord with those norms, we can wonder what a democracy based on Jungian psychology would look like. It would need to acknowledge social difference within society, as Bourdieu does, and that this social difference puts constraints on democratic participation.

There’s nothing so remarkable about what I’m saying. I’m a little embarrassed to be drawing from European Grand Theorists and psychoanalysts when it would be much more appropriate for me to be looking at, say, the tradition of American political science with its thorough analysis of the role of elites and partisan democracy. But what I’m really looking for is a theory of justice, and the main way injustice seems to manifest itself now is in the resentment of different kinds of people toward each other. Some of this resentment is “populist” resentment, but I suspect that this is not really the source of strife. Rather, it’s the conflict of different kinds of elites, with their bases of power in different kinds of capital (economic, institutional, symbolic, etc.) that has macro-level impact, if politics is real at all. Political forces, which will have leaders (“elites”) simply as a matter of the statistical expression of variable available energy in the society to fill political roles, will recruit members by drawing from the psychic Blau space. As part of recruitment, the political force will activate the habitus shadow of its members, using the dark aspects of the psyche to mobilize action.

It is at this point, when power stokes the shadow through symbols, that injustice becomes psychologically real. Therefore (speaking for now only of symbolic politics, as opposed to justice in material economic actuality, which is something else entirely) a just political system is one that nurtures individuation to such an extent that its population is no longer susceptible to political mobilization.

To make this vision of democracy a bit more concrete, I think where this argument goes is that the public health system should provide art therapy services to every citizen. We won’t have a society that people feel is “fair” unless we address the psychological roots of feelings of disempowerment and injustice. And while there are certainly some causes of these feelings that are real and can be improved through better policy-making, it is the rare policy that actually improves things for everybody rather than just shifting resources around according to a new alignment of political power, thereby creating a new elite and new grudges. Instead I’m proposing that justice will require peace, and that peace is more a matter of the personal victory of the psyche than it is a matter of political victory of one’s party.


by Sebastian Benthall at May 23, 2016 04:17 PM

Ph.D. student

directions to migrate your WebFaction site to HTTPS

Hiya friends using WebFaction,

Securing the Web, even our little websites, is important — to set a good example, to maintain the confidentiality and integrity of our visitors’ communications, and to get the best Google search ranking. While secure Web connections were difficult and/or costly in the past, migrating a site to HTTPS has recently become fairly straightforward and costs $0 a year. It may get even easier in the future, but for now, the following steps should do the trick.

Hope this helps, and please let me know if you have any issues,
Nick

P.S. Yes, other friends, I recommend WebFaction as a host; I’ve been very happy with them. Services are reasonably priced and easy to use and I can SSH into a server and install stuff. Sign up via this affiliate link and maybe I get a discount on my service or something.

P.S. And really, let me know if and when you have issues. Encrypting access to your website has gotten easier, but it needs to become much easier still, and one part of that is knowing which parts of the process prove to be the most cumbersome. I’ll make sure your feedback gets to the appropriate people who can, for realsies, make changes as necessary to standards and implementations.


One day soon I hope WebFaction will make many of these steps unnecessary, but the configuring and testing will be something you have to do manually in pretty much any case. You should be able to complete all of this in an hour some evening. You might have to wait a bit for WebFaction to install your certificate, and the last two parts can be done on the following day if you like.

Create a secure version of your website in the WebFaction Control Panel

Log in to the WebFaction Control Panel, choose the “DOMAINS/WEBSITES” tab and then click “Websites”.

Click “Add new website” and create one that will correspond to one of your existing websites. I suggest choosing a name like existingname-secure. Choose “Encrypted website (https)”. For Domains, testing will be easiest if you choose both your custom domain and a subdomain of yourusername.webfactional.com. (If you don’t have one of those subdomains set up, switch to the Domains tab and add it real quick.) So, for my site, I chose npdoty.name and npdoty.npd.webfactional.com.

Finally, for “Contents”, click “Re-use an existing application” and select whatever application (or multiple applications) you’re currently using for your http:// site.

Click “Save” and this step is done. This shouldn’t affect your existing site one whit.

Test to make sure your site works over HTTPS

Now you can test how your site works over HTTPS, even before you’ve created any certificates, by going to https://subdomain.yourusername.webfactional.com in your browser. Hopefully everything will load smoothly, but it’s reasonably likely that you’ll have some mixed content issues. The debug console of your browser should show them to you: that’s Apple-Option-K in Firefox or Apple-Option-J in Chrome. You may see some warnings like this, telling you that an image, a stylesheet or a script is being requested over HTTP instead of HTTPS:

Mixed Content: The page at ‘https://npdoty.name/’ was loaded over HTTPS, but requested an insecure image ‘http://example.com/blah.jpg’. This content should also be served over HTTPS.

Change these URLs so that they point to the https:// version of the resource (you could also use a scheme-relative URL, like //example.com/blah.jpg), update the files on the webserver, and re-test.
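If your site has more than a handful of pages, searching the files on the server can help track down stragglers. A minimal sketch, assuming your site’s files live at the placeholder path used elsewhere in these directions:

# List every file that still references a plain-http URL (path is a placeholder).
grep -rn "http://" /home/yourusername/webapps/sitename/

Not every match needs fixing; ordinary hyperlinks to other sites are fine. Only embedded resources like images, stylesheets and scripts trigger mixed content warnings.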

Good job! Now, https://subdomain.yourusername.webfactional.com should work just fine, but https://yourcustomdomain.com shows a really scary message. You need a proper certificate.

Get a free certificate for your domain

Let’s Encrypt is a new, free, automated certificate authority from a bunch of wonderful people. But getting it to set up certificates on WebFaction is a little tricky, so we’ll use the letsencrypt-webfaction utility — thanks will-in-wi!

SSH into the server with ssh yourusername@yourusername.webfactional.com.

To install, run this command:

GEM_HOME=$HOME/.letsencrypt_webfaction/gems RUBYLIB=$GEM_HOME/lib gem2.2 install letsencrypt_webfaction

For convenience, you can add this as a function to make it easier to call. Edit ~/.bash_profile to include:

function letsencrypt_webfaction {
    # GEM_HOME is assigned first so that the PATH and RUBYLIB references to it
    # resolve; "$@" passes arguments through safely (unlike the unquoted $*).
    GEM_HOME=$HOME/.letsencrypt_webfaction/gems PATH=$PATH:$GEM_HOME/bin RUBYLIB=$GEM_HOME/lib ruby2.2 $HOME/.letsencrypt_webfaction/gems/bin/letsencrypt_webfaction "$@"
}

Now, let’s test the certificate creation process. You’ll need your email address (preferably not GMail, which has longer instructions), e.g. nick@npdoty.name and the path to the files for the root of your website on the server, e.g. /home/yourusername/webapps/sitename/. Filling those in as appropriate, run this command:

letsencrypt_webfaction --account_email you@example.com --support_email you@example.com --domains yourcustomdomain.com --public /home/yourusername/webapps/sitename/

It’s important to use your email address for both --account_email and --support_email so that for this test, you’ll get the emails rather than sending them to the WebFaction support staff.

If all went well, you’ll see a new directory in your home directory called le_certs, and inside that a directory with the name of your custom domain (and inside that, a directory named with a timestamp, which has a bunch of cryptographic keys in it that we don’t care much about). You should also have received a couple of emails with appropriate instructions, e.g.:

LetsEncrypt Webfaction has generated a new certificate for yourcustomdomain.com. The certificates have been placed in /home/yourusername/le_certs/yourcustomdomain.com/20160522004546. WebFaction support has been contacted with the following message:

Please apply the new certificate in /home/yourusername/le_certs/yourcustomdomain.com/20160522004546 to yourcustomdomain.com. Thanks!
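If you’d like to double-check from the shell before involving support, you can list the generated directory (the timestamped path below is copied from the example email above; yours will differ):

ls -l /home/yourusername/le_certs/yourcustomdomain.com/20160522004546

You should see a handful of certificate and private key files waiting to be installed.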

Now, run the same command again but without the --support_email parameter and this time the email will get sent directly to the WebFaction staff. One of the friendly staff will need to manually copy your certificates to the right spot, so you may need to wait a while. You’ll get a support notification once it’s done.

Test your website over HTTPS

This time you get to test it for real. Load https://yourcustomdomain.com in your browser. (You may need to force refresh to get the new certificate.) Hopefully it loads smoothly and without any mixed content warnings. Congrats, your site is available over HTTPS!

You are not done. You might think you are done, but if you think so, you are wrong.

Set up automatic renewal of your certificates

Certificates from Let’s Encrypt expire in no more than 90 days. (Why? There are two good reasons.) Your certificates aren’t truly set up until you’ve set them up to renew automatically. You do not want to do this manually every few months; you would forget, I promise.

Cron lets us run code on WebFaction’s server automatically on a regular schedule. If you haven’t set up a cron job before, it’s just a fancy way of editing a special text file. Run this command:

EDITOR=nano crontab -e

If you haven’t done this before, this file will be empty, and you’ll want to test it to see how it works. Paste the following line of code exactly, and then hit Ctrl-O and Ctrl-X to save and exit.

* * * * * echo "cron is running" >> $HOME/logs/user/cron.log 2>&1

This will output to that log every single minute; not a good cron job to have in general, but a handy test. Wait a few minutes and check ~/logs/user/cron.log to make sure it’s working. Now, let’s remove that test and add the renewal line, being sure to fill in your email address, domain name and the path to your website’s directory, as you did above:

0 4 15 */2 * GEM_HOME=$HOME/.letsencrypt_webfaction/gems PATH=$PATH:$GEM_HOME/bin RUBYLIB=$GEM_HOME/lib ruby2.2 $HOME/.letsencrypt_webfaction/gems/bin/letsencrypt_webfaction --account_email you@example.com --domains example.com --public /home/yourusername/webapps/sitename/

You’ll probably want to create the line in a text editor on your computer and then copy and paste it to make sure you get all the substitutions right. Ctrl-O and Ctrl-X to save and close it. Check with crontab -l that it looks correct.

That will create a new certificate at 4am on the 15th of alternating months (January, March, May, July, September, November) and ask WebFaction to install it. New certificates every two months is fine, though one day in the future we might change this to get a new certificate every few days; before then WebFaction will have taken over the renewal process anyway.

Redirect your HTTP site (optional, but recommended)

Now you’re serving your website in parallel via http:// and https://. You can keep doing that for a while, but everyone who follows old links to the HTTP site won’t get the added security, so it’s best to start permanently re-directing the HTTP version to HTTPS.

WebFaction has very good documentation on how to do this, and I won’t duplicate it all here. In short, you’ll create a new static application named “redirect”, which just has a .htaccess file with, for example, the following:

RewriteEngine On
# Redirect any www host to the "naked" domain, over HTTPS.
RewriteCond %{HTTP_HOST} ^www\.(.*)$ [NC]
RewriteRule ^(.*)$ https://%1/$1 [R=301,L]
# Redirect any remaining plain-HTTP requests to the HTTPS version of the same host.
RewriteCond %{HTTP:X-Forwarded-SSL} !on
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

This particular variation will both redirect any URLs that have www to the “naked” domain and make all requests HTTPS. And in the Control Panel, make the redirect application the only one on the HTTP version of your site. You can re-use the “redirect” application for different domains.

Test to make sure it’s working! http://yourcustomdomain.com, http://www.yourcustomdomain.com, https://www.yourcustomdomain.com and https://yourcustomdomain.com should all end up at https://yourcustomdomain.com. (You may need to force refresh a couple of times.)
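You can also check the redirects from the command line with curl, substituting your own domain:

# Each request should return a 301 response with a Location header
# pointing at https://yourcustomdomain.com.
curl -sI http://yourcustomdomain.com | grep -iE "^(HTTP|Location)"
curl -sI http://www.yourcustomdomain.com | grep -iE "^(HTTP|Location)"
curl -sI https://www.yourcustomdomain.com | grep -iE "^(HTTP|Location)"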

by nick@npdoty.name at May 23, 2016 01:22 AM

May 21, 2016

Ph.D. student

on intellectual sincerity

I have recently received some extraordinary encouragement regarding this blog. There are a handful of people who have told me how much they get out of my writing here.

This is very meaningful to me, since I often feel like blogging is the only intellectually sincere outlet I have. I have had a lot of difficulty this past year with academic collaboration. My many flaws have come to the fore, I’m afraid. One of these flaws is an inability to make certain intellectual compromises that would probably be good for my career if I were able to make them.

A consolation in what has otherwise been a painful process is that a blog provides an outlet that cannot be censored even when it departs from the style and mores of academic writing, which I have come up against in contexts such as internal university emails and memos. I’ve been told that writing research memos in an assertive way that reflects my conviction as I write them is counterproductive, for example. It was cited as an auxiliary reason for a major bureaucratic obstacle. One is expected, it seems, to play a kind of linguistic game as a graduate student working on a dissertation: one must not write with more courage than one’s advisors have to offer as readers. To do so upsets the social authority on which the social system depends.

These sociolinguistic norms hold internal to the organization, despite the fact that every researcher may, and is even expected or encouraged to, publish their research outwardly with a professional confidence. In an academic paper, I can write assertively because I will not be published without peer review verifying that my work warrants the confidence with which it is written. In a blog, I can write even more assertively, because I can expect to be ignored. More importantly, others can expect each other to ignore writing in blogs. Recognition of blog writing as academically relevant happens very rarely, because to do so would be to acknowledge the legitimacy of a system of thought outside the system of academically legitimized thought. Since the whole game of the academy depends on maintaining its monopoly on expertise or at least the value of its intellectual currency relative to others, it’s very dangerous to acknowledge a blog.

I am unwise, terrifically unwise, and in my youthful folly I continue to blog with the candor that I once used as a pseudonymous teenager. Surely this will ruin me in the end, as now this writing has a permanence and makes a professional impression whose impact is real. Stakes are high; I’m an adult. I have responsibilities, or should; as I am still a graduate student I sometimes feel I have nothing to lose. Will I be forgiven for speaking my mind? I suppose that depends on whether there is freedom and justice in society or not. I would like to think that if the demands of professional success are such that to publish reflective writing is a career killer for an academic, that means The Terrorists Have Won in a way much more profound than any neoconservative has ever fantasized about.

There are lots of good reasons to dislike intellectuals. But some of us just are intellectuals by nature. I apologize on behalf of all of us. Please allow us to continue our obscure practices in the margins. We are harmless when ignored.


by Sebastian Benthall at May 21, 2016 07:02 PM

May 18, 2016

Center for Technology, Society & Policy

A User-Centered Perspective on Algorithmic Personalization

By Rena Coen, Emily Paul, Pavel Vanegas, and G.S. Hans, CTSP Fellows | Permalink

We conducted a survey using experimentally controlled vignettes to measure user attitudes about online personalization and develop an understanding of the factors that contribute to personalization being seen as unfair or discriminatory. Come learn more about these findings and hear from the Center for Democracy & Technology on the policy implications of this work at our event tonight!

What is online personalization?

Some of you may be familiar with a recent story, in which Universal Pictures presented Facebook users with different movie trailers for the film Straight Outta Compton based on their race, or “ethnic affinity group,” which was determined based on users’ activity on the site.

This is just one example of online personalization, where content is tailored to users based on some user attribute. Such personalization can be beneficial to consumers but it can also have negative and discriminatory effects, as in the targeted trailers for Straight Outta Compton or Staples’ differential retail pricing based on zip code. Of course, not all personalization is discriminatory; there are examples of online personalization that many of us see as useful and have even come to expect. One example of this is providing location-based results for generic search terms like “coffee” or “movie showtimes.”

The role of algorithms

A big part of this story is the role of algorithms in personalization. This could mean that the data that is used to drive the personalization has been inferred, as in the Straight Outta Compton example where Facebook algorithmically inferred people’s ethnic affinity group based on the things they liked and clicked on. In this case the decision about who to target was made deductively. Facebook offers companies the opportunity to target their ads to ethnic affinity groups and Universal Pictures thought it made sense to show different movie trailers to people based on their race. In other cases, there may not be a clear logic used in deciding what kind of targeting to do. Companies can use algorithms to identify patterns in customer data and target content, based on the assumption that people who like one thing will like another.

When does personalization discriminate?

We have a range of responses to personalization practices; we may see some as useful while others may violate our expectations. But how can we think about the range of responses to these examples more systematically – in a way that helps us articulate what these expectations are?

This is something that policy makers and privacy scholars have been examining and debating. From the policy side there is a need for practices and procedures that reflect and protect users’ expectations. These personalization practices, especially the use of inference, create challenges for existing policy frameworks. Several reports from the Federal Trade Commission (e.g. here and here) and the White House (e.g. here and here) look at how existing policy frameworks like the Fair Information Practice Principles (FIPPs) can address the use of algorithms to infer user data and target content. Some of the proposals from authors including Kate Crawford, Jason Schultz, Danielle Citron, and Frank Pasquale look to expand due process to allow users to correct data that has been inaccurately inferred about them.

Theoretical work from privacy scholars attempts to understand what users’ expectations are around inference and personalization, attempting to understand how these might be protected in the face of new technology. Many of these scholars have talked about the importance of context. Helen Nissenbaum and Solon Barocas discuss Nissenbaum’s conception of privacy as contextual integrity based on whether the inference conflicts with information flow norms and expectations. So, in the Straight Outta Compton example, does Facebook inferring people’s ethnic affinity based on their activity on the site violate norms and expectations of what users think Facebook is doing with their data?

This policy and privacy work highlights some of the important factors that seem to affect user attitudes about personalization: there is the use of inferred data and all of the privacy concerns it raises, there are questions around accuracy when inference is used, and there is the notion of contextual integrity.

One way to find more clarity around these factors and how they affect user attitudes is to ask the users directly. There is empirical work looking at how users feel about targeted content. In particular, there are several studies on user attitudes about targeted advertising, including by Chris Hoofnagle, Joseph Turow, Jen King, and others, which found that most users (66%) did not want targeted advertising at all, and that once users were informed of the tracking mechanisms that support targeted ads, even more (over 70%) did not want targeted ads. There has also been empirical work from researchers at Northeastern University who have examined where and how often personalization is taking place online in search results and pricing. In addition, a recent Pew study looked at when people are willing to share personal information in return for something of value.

Experimental approach to understanding user attitudes

Given the current prevalence of personalization online and the fact that some of it does seem to be useful to people, we chose to take personalization as a given and dig into the particular factors that push it from something that is beneficial or acceptable to something that is unfair.

Using an experimental vignette design, we measure users’ perceptions of fairness in response to content that is personalized to them. We situate these vignettes in three domains: targeted advertising, filtered search results, and differential retail pricing using a range of data types including race, gender, city or town of residence, and household income level.

We find that users’ perceptions of fairness are highly context-dependent. By looking at the fairness ratings based on the contextual factors of domain and data type, we observe the importance of both the sensitivity of the data used to personalize and its relevance to the domain of the personalization in determining what forms of personalization might violate user norms and expectations.

Join us tonight from 6-9 pm with the Startup Policy Lab to hear Rena Coen, Emily Paul, and Pavel Vanegas present the research findings, followed by a conversation about the policy implications of the findings with Alethea Lange, policy analyst at the Center for Democracy & Technology, and Jen King, privacy expert and Ph.D. candidate at the UC Berkeley School of Information, moderated by Gautam Hans.

Event details and RSVP

This project is funded by the Center for Technology, Society & Policy and the Center for Long-Term Cybersecurity.

by Nick Doty at May 18, 2016 08:02 PM

May 13, 2016

Ph.D. student

A Bourdieusian anticipation of the SciPy proceedings review process

My currently defunct dissertation was about applying Bourdieu’s sociology of science to data scientific software production, with a specific focus on Scientific Python. I’ve done a lot of work so far looking at the statistical distribution of mailing list discussions.

But perhaps the problem with my dissertation is that mailing lists miss the point. I believe I was able to discover the statistical properties of email discussion and how these can be modeled using the Central Limit Theorem and an underlying Blau space. Through this I could make a case for the autonomous participation of participants on the mailing list.

But autonomy alone is not sufficient for science. The Enron email corpus has the same statistical properties. So, the punchline (which seems to have been rejected by my committee) was that an autonomous organization that was not committed, as per Bourdieu’s recommendation, to logical necessity as a social norm can potentially commit massive fraud. Whether this allusion to the corruption of the social sciences was caught by my committee, I cannot say. The point was not substantively addressed.

Viewing the situation more positively, autonomy affords the possibility of autonomous recognition of logical progress, and this, for Bourdieu, is science. The question, then, is what constitutes this recognition in open source software development?

I have the opportunity to learn a lot about this in the coming months through my participation with the SciPy conference this year.

  • I’m on the proceedings committee, which means I’ll be managing the paper submission and review process with co-chair Scott Rostrup. Interestingly, this is all done openly on GitHub, with public comments and real identities used. It’s also much more incrementalist, at least potentially, than the typical conference revise-and-resubmit approach. That the SciPy conference is secretly an experiment in open scholarly publishing is one of its coolest quirks, in my opinion.
  • I’ll be working on a paper about software dependency. The more I think about it, the more I realize that it’s this dependency structure that most closely mirrors the ‘citation’ function of paper scholarship. So getting more familiar with these networks (which I expect to look completely different, statistically, from email discussions!) will be very interesting.

by Sebastian Benthall at May 13, 2016 10:57 PM

May 10, 2016

Ph.D. student

dissertation update

I gave up blogging to work on my dissertation.

Over the course of the past several weeks, I’ve been bureaucratically compelled to stop working on my dissertation. Were I to tell you the series of events of this semester in great detail, you would find it utterly amazing. You would, as I do, have any number of diverging theories about individual agency and motivation of the characters involved. Nobody involved is naive, so all possibilities are open. But then the system is so vast, and I such an insignificant part of it, that inertia is the most likely explanatory factor. No particular intellectual logic predetermines the outcome. I put energy in, put a lot of energy in, and the system’s response is: stop.

I guess I can start blogging again.

In my last post, I wrote about some of the frustrations I’ve had with the priority of narrative in the social sciences, and the politics of technology and narration. Looking back on past blog posts, I see now that this has been a steady theme since I began graduate school. In fact the majority of my posts to this blog have probably in one way or another been about the challenges of navigating the politics of interdisciplinary space.

I have to reflect on why. This is not a topic I find intrinsically valuable or interesting. I would much rather be doing something productive, or finding the truth about something. By virtue of professional circumstance and participant observation, I do think I now “get” the contours of academic politics with ethnographic sensitivity. This experience and note-taking process does not afford me material for academic publication because I did not begin the “study” in an official way. It has not given me any insights that anybody else who has grown cynical about academia would not also attest to. When I have allowed my experiences to inform my dissertation in a broad thematic way, I have been told I have not provided enough empirical evidence. The standard of empiricism seems fluid enough to accommodate any bureaucratic move, while the standard of logic is routinely denied as a matter of disciplinary specialization.

Mainly what I’ve discovered–and it seems obvious in retrospect–is that as a social system the university’s primary purpose is to maintain its own equilibrium. Perhaps this is to be expected in an organization characterized by extreme autonomy. The status quo cannot change internally through disruption. As an institution, it is what the entrepreneurs call installed base. Disruptive innovation will come in the form of external pressure, which will result in a loss of market share and, consequently, funding. The budget crisis of UC Berkeley confirms this. Internal organizational shifts will be incremental, difficult, petty, concessional.

I think I made a mistake, which was to try to write a dissertation that confronted these politics and provided an alternative model for intellectual organization. I think the Scientific Python communities are a significant and successful alternative model. “Write what you know.”

The problem is that I want to do two different things. The first is explain to the established intellectual authorities why this alternative model has legitimate intellectual grounds in earlier theory and therefore represents a progression in, not a rupture to, mainstream intellectual thought.

The second is to translate those grounds into a logic that the new field can accept on its own terms. My main problem here is that the primary logic of the new field is computational statistics, and for good reason it is hard to get access to computational statisticians for research mentorship at Berkeley.

Are these two ideas “two different dissertations”? Is either one of them “too broad for a dissertation”? The institutional answer is “Yes”. I am not supposed to be working on these problems as a graduate student. A dissertation should be narrow, about something in particular, and follow the conventions of a discipline so that it can be judged and completed.

So, as I’ve said, I’ve made a mistake, or many mistakes. I should not be making the most challenging social and political problems that I encounter during the course of my academic career the subject of my dissertation. I am doing this because of a number of inappropriate impulses, perhaps the narcissistic impulse of blogging being one of them. Doing this only complicates my life, and the lives of others, by surfacing the self-referential complexity of our institutional and social context. Are there new possibilities in that complexity, somewhere? Who cares. It is a headache. Reflexivity is not appropriate for academic work because it upsets personal equilibrium. Its mirage of emancipation is dangerous!

So the system is right. I need to take some time to disentangle myself.


by Sebastian Benthall at May 10, 2016 04:11 PM

May 07, 2016

Ph.D. student

the end of narrative in social science

‘Narrative’ is a term you hear a lot in the humanities, the humanities-oriented social sciences, and in journalism. There’s loads of scholarship dedicated to narrative. There are many academic “disciplines” whose bread and butter is the telling of a good story, backed up by something like a scientific method.

Contrast this with engineering schools and professions, where the narrative is icing on the cake if anything at all. The proof of some knowledge claim is in its formal logic or operational efficacy.

In the interdisciplinary world of research around science, technology, and society, the priority of narrative is one of the major points of contention. This is similar to the tension I encountered in earlier work on data journalism. There are narrative and mechanistic modes of explanation. The mechanists are currently gaining in wealth and power. Narrativists struggle to maintain their social position in such a context.

A struggle I’ve had while working on my dissertation is trying to figure out how to narrate to narrativists a research process that is fundamentally formal and mechanistic. My work is “computational social science” in that it is computer science applied to the social. But in order to graduate from my department I have to write lots of words about how this ties in to a universe of academic literature that is largely by narrativists. I’ve been grounding my work in Pierre Bourdieu because I think he (correctly) identifies mathematics as the logical heart of science. He goes so far as to argue that mathematics should be at the heart of an ideal social science or sociology. My gloss on this after struggling with this material both theoretically and in practice is that narratively driven social sciences will always be politically or at least perspectivally inflected in ways that threaten the objectivity of the results. Narrativists will try to deny the objectivity of mathematical explanation, but for the most part that’s because they don’t understand the mathematical ambition. Most mathematicians will not go out of their way to correct the narrativists, so this perception of the field persists.

So I was interested to discover in the work of Miller McPherson, the sociologist who I’ve identified as the bridge between traditional sociology and computational sociology (his work gets picked up, for example, in the generative modeling of Kim and Leskovec, which is about as representative of the new industrial social science paradigm as you can get), an admonition about the consequences of his formally modeled social network formation process (the Blau space, which is very interesting). His warning is that the sociology his work encourages loses narrative and with it individual agency.

Photo of an excerpt from McPherson (2004)

(McPherson, 2004, “A Blau space primer: prolegomenon to an ecology of affiliation”)

It’s ironic that the whole idea of a Blau space, which is that the social network of society is sampled from an underlying multidimensional space of demographic dimensions, predicts the quantitative/qualitative divide in academic methods as not just a methodological difference but a difference in social groups. The formation of ‘disciplines’ is endogenous to the greater social process and there isn’t much individual agency in this choice. This lack of agency is apparent, perhaps, to the mathematicians and a constant source of bewilderment and annoyance, perhaps, to the narrativists who will insist on the efficacy of a narratively driven ‘politics’–however much this may run counter to the brute fact of the industrial machine–because it is the position that rationalizes and is accessible from their subject position in Blau space.

“Subject position in Blau space” is basically the same idea, in more words, as the Bourdieusian habitus. So, nicely, we have a convergence between French sociological grand theory and American computational social science. As the Bourdieusian theory provides us with a serviceable philosophy of science grounded in sociological reality of science, we can breathe easily and accept the correctness of technocratic hegemony.

By “we” here I mean…ah, here’s the rub. There’s certainly a class of people who will resist this hegemony. They can be located easily in Blau space. I’ve spent years of my life now trying to engage with them, persuading them of the ideas that rule the world. But this turns out to be largely impossible. It’s demanding they cross too much distance, removes them from their local bases of institutional support and recognition, etc. The “disciplines” are what’s left in the receding tide before the next oceanic wave of the unified scientific field. Unified by a shared computational logic, that is.

What is at stake, really, is logic.


by Sebastian Benthall at May 07, 2016 12:00 AM

May 05, 2016

Center for Technology, Society & Policy

FutureGov: Drones and Open Data

By Kristine Gloria, CTSP Fellow | Permalink

As we’ve explored in previous blog posts, civil drone applications are growing, and concerns regarding violations of privacy follow closely. We’ve thrown in our own two cents offering a privacy policy-by-design framework. But, this post isn’t (necessarily) about privacy. Instead, we pivot our focus towards the benefits and challenges of producing Open Government Drone Data. As proponents of open data initiatives, we advocate its potential for increased collaboration, accessibility and transparency of government programs. The question, therefore, is: How can we make government drone data more open?

A drone’s capability to capture large amounts of data – audio, sensory, geospatial and visual – serves as a promising pathway for future smart city proposals. It also raises many data collection, use and retention policy questions that require thinking carefully about data formats and structures.

Why is this worth exploring? We suggest that it opens up additional (complementary) questions about access, information sharing, security and accountability. The challenge with the personal UAS ecosystem is its black-box nature, made up of proprietary software/hardware developers and third-party vendors. This leads to technical hurdles such as the development of adaptable middleware, specified application development, centralized access control, etc.

How do governments make data public?

Reviewing this through an open data lens — as our work focuses on municipal use cases — offers a more technical discussion and highlights available open source developer tools and databases. In this thought experiment, we assume a government agency subscribes to and is in the process of developing an open data practice. At this stage, the agency faces the question: How do we make the data public? For additional general guidance on how to approach Open Data in government, please refer to our work: Open Data Privacy Report 2015.

Drawing from the Sunlight Foundation’s Open Data Guidelines, information should be released in “open formats” or “open standards”, and be machine-readable and machine-processable (or structured appropriately). Translation: data designated by a municipality as “shareable” should follow a data publishing standard in order to facilitate sharing and reuse by both human and machine. These formats may include XML, CSV, JSON, etc. Doing so enables access (where designated) and opportunities for more sophisticated analysis. Note that the PDF format is generally discouraged as it prevents data from being shared and reused.
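To make “machine-readable” concrete, here’s a minimal sketch with a hypothetical one-row drone flight log, first written as CSV and then converted to JSON using only the Python standard library (the file name and fields are invented for illustration):

# A tiny, hypothetical flight-log dataset in CSV form.
cat > flights.csv <<'EOF'
flight_id,timestamp,latitude,longitude,agency
42,2016-05-05T14:30:00Z,37.8715,-122.2730,public-works
EOF

# Convert the CSV rows into a JSON array of records.
python -c 'import csv, json; print(json.dumps(list(csv.DictReader(open("flights.csv"))), indent=2))'

Either format can be parsed by a program in a line or two of code; the same table embedded in a PDF would require scraping before any reuse was possible.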

Practical Guidelines for Open Data Initiatives

It seems simple enough, right? Yes and no. Learning from challenges of early open data initiatives, database managers should also consider the following: completeness, timeliness, and reliability & trustworthiness.

  • Completeness refers to the entirety of a record. Again, the Sunlight Foundation suggests: “All raw information from a dataset should be released to the public, except to the extent necessary to comply with federal law regarding the release of personally identifiable information.” We add that completeness must also align with internal privacy policies. For example, one should consider whether the open data could lead to risks of re-identification.  
  • Timeliness is particularly important given the potential applications of UAS real-time data gathering. Take for example emergency or disaster recovery use cases. Knowing what types of data can be shared, by whom, to whom and how quickly can lead to innovative application development for utility services or aide distribution. Published data should therefore be released as quickly as possible with priority given to time-sensitive data.
  • Reliability and Trustworthiness are key data qualities that highlight authority and primacy, such as the source name of specific data agencies. Through metadata provenance, we can capture and define resources, access points, derivatives, formulas, applications, etc. Examples of this include W3C’s PROV-XML schema. Identifying the source of the data, any derivatives, additions, etc., helps increase the reliability and trustworthiness of the data.

What of Linked Open Government Data?

For those closely following the open government data space, much debate has focused on the need for a standardized data format in order to link data across formats, organizations, governments, etc. Advocates suggest that linking open data may increase its utility through interoperability. This may be achieved using structured machine-processable formats, such as the Resource Description Framework (RDF). This format uses Uniform Resource Identifiers (URIs), which can be identified by reference and linked with other relevant data by subject, predicate, or object. For a deep dive on this specific format, check out the “Cookbook for Open Government Linked Data”. One strength of this approach is its capability to generate a large searchable knowledge graph. Check out the Linked Open Data Cloud for an example of all linked databases currently available. Paired with Semantic Web standards and a robust ontology, the potential for its use with drone data could be quite impactful.
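To get a feel for how URIs make data linkable, you can dereference one with content negotiation. A hedged sketch, using DBpedia as a stand-in for a hypothetical municipal drone dataset:

# Request the RDF (Turtle) representation of a resource identified by its URI;
# -L follows the content-negotiation redirect.
curl -sL -H "Accept: text/turtle" http://dbpedia.org/resource/Unmanned_aerial_vehicle | head -n 20

Because the subject is identified by a globally unique URI, any other dataset can make statements about the same resource, which is what makes the data “linked”.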

No matter the data standard chosen, linked or not, incorporating a reflexive review process should also be considered. This may include some form of a dataset scoring methodology, such as the 5-Star Linked Data System or Montgomery County’s scoring system (Appendix H), in order to ensure that designated datasets comply with both internal and external standards.

Image from: MapBox Project

Hacking Drone Data

Now to the fun stuff. If you’re interested in drone data, there are a few open drone databases and toolkits available for people to use. The data ranges from GIS imaging to airport/airspace information. See MapBox as an example of work (note: this is now part of the B4UFLY smartphone app available from the FAA). Tools and datasets include:

And, finally, for those interested in more operational control of their drone experience, check out these Linux based drones highlighted in 2015 by Network World.

So, will the future of drones include open data? Hopefully. Drones have already proven to be incredibly useful as a means of surveying the environment and for search and rescue efforts. Unfortunately, drones also raise considerable concerns regarding surveillance, security and privacy. The combination of an open data practice with drones therefore requires a proactive, deliberate balancing act. Fortunately, we can and should learn from our past open data faux pas. Projects such as our own CityDrone initiative and our fellow CTSP colleagues’ “Operationalizing Privacy for Open Data Initiatives: A Guide for Cities” project serve as excellent reference points for those interested in opening up their drone data.

by charles at May 05, 2016 12:22 PM

May 04, 2016

Center for Technology, Society & Policy

Exciting Upcoming Events from CTSP Fellows

By Galen Panger, CTSP Director | Permalink

Five of CTSP’s eleven collaborative teams will present progress and reflections from their work in two exciting Bay Area events happening this month, and there’s a common theme: How can we better involve the communities and stakeholders impacted by technology’s advance? On May 17, three teams sketch preliminary answers to questions about racial and socioeconomic inclusion in technology using findings from their research right here in local Bay Area communities. Then, on May 18, join two teams as they discuss the importance of including critical stakeholders in the development of policies on algorithms and drones.

Please join us on May 17 and 18, and help us spread the word by sharing this post with friends or retweeting the tweet below. All the details below the jump.


Inclusive Technologies: Designing Systems to Support Diverse Bay Area Communities & Perspectives

Three collaborative teams from the Center for Technology, Society & Policy share insights from their research investigating racial and socioeconomic inclusion in technology development from the perspective of local Bay Area communities. How do Oakland neighborhoods with diverse demographics use technology to enact neighborhood watch, and are all perspectives being represented? How can low-income families of color in Richmond overcome barriers to participating in our technology futures? Can we help immigrant women access social support and social services through technology-based interventions? Presenters will discuss their thoughts on these questions, and raise important new ones, as they share the preliminary findings of their work.

Tuesday, May 17, 2016
4 p.m. – 5:30 p.m.
202 South Hall, Second Floor
School of Information
Berkeley, CA

Refreshments will be served.

About the Speakers:

  • Fan Mai is a sociologist studying the intersection of technology, culture, identities and mobility. She holds a Ph.D. from the University of Virginia.
  • Rebecca Jablonsky is a professional UX designer and researcher. She holds a Master’s of Human-Computer Interaction from Carnegie Mellon and will be starting a Ph.D. at Rensselaer Polytechnic Institute this fall in Science & Technology Studies.
  • Kristen Barta is a Ph.D. candidate in the Department of Communication at the University of Washington whose research investigates online support spaces and the recovery narratives of survivors of sexual assault. She earned her Master’s from Stanford.
  • Robyn Perry researches language shift in Bay Area immigrant communities and has a background in community organizing and technology. She holds a Master’s from the Berkeley School of Information.
  • Morgan G. Ames is a postdoctoral researcher at the Center for Science, Technology, Medicine, and Society at UC Berkeley investigating the role, and limitations, of technological utopianism in computing cultures.
  • Anne Jonas is a Ph.D. student at the Berkeley School of Information researching education, social justice, and social movements.

 


Involving Critical Stakeholders in the Governance of Algorithms & Drones

Advances in the use of algorithms and drones are challenging privacy norms and raising important policy questions about the impact of technology. Two collaborative teams from the Center for Technology, Society & Policy share insights from their research investigating stakeholder perceptions of algorithms and drones. First, you’ll hear from fellows about their research on user attitudes toward algorithmic personalization, followed by a panel of experts who will discuss the implications. Then, hear from the fellows at CityDrones, who are working with the City of San Francisco to regulate the use of drones by municipal agencies. Can the city adopt drone policies that balance privacy and support innovation?

Wednesday, May 18, 2016
6 p.m. – 7:30 p.m.
1355 Market St, Suite 488
Runway Headquarters
San Francisco, CA

Refreshments will be served. Registration requested.

About the Speakers:

  • Charles Belle is the CEO and Founder of Startup Policy Lab and is an appointed member of the City & County of San Francisco’s Committee on Information Technology.
  • CTSP Fellows Rena Coen, Emily Paul, and Pavel Vanegas are 2016 graduates of the Master’s program at the Berkeley School of Information. They are working collaboratively with the Center for Democracy & Technology to carry out their study of user perspectives toward algorithmic personalization. The study is jointly funded by CTSP and the Center for Long-Term Cybersecurity.

Panelists:

  • Gautam Hans is Policy Counsel and Director of CDT-SF at the Center for Democracy & Technology. His work encompasses a range of technology policy issues, focused on privacy, security and speech.
  • Alethea Lange is a Policy Analyst on the Center for Democracy & Technology’s Privacy and Data Project. Her work focuses on empowering users to control their digital presence and developing standards for fairness and transparency in algorithms.
  • Jen King is a Ph.D. candidate at the Berkeley School of Information, where she studies the social aspects of how people make decisions about their information privacy, and how privacy by design — or the lack of it — influences privacy choices.

 
Stay tuned for more events from the CTSP fellows!

by Galen Panger at May 04, 2016 09:49 PM

April 26, 2016

Ph.D. student

Reflections on CSCW 2016

CSCW 2016 (ACM’s conference on Computer Supported Cooperative Work and Social Computing) took place in San Francisco last month. I attended (my second time at this conference!), and it was wonderful meeting new and old colleagues alike. I thought I would share some reflections and highlights that I’ve had from this year’s proceedings.

Privacy

Many papers addressed issues of privacy from a number of perspectives. Bo Zhang and Heng Xu study how behavioral nudges can shift behavior toward more privacy-conscious actions, rather than merely providing greater information transparency and hoping users will make better decisions. A nudge showing users how often an app accesses phone permissions made users feel creeped out, while a nudge showing other users’ behaviors reduced users’ privacy concerns and elevated their comfort. I think there may be value in studying the emotional experience of privacy (such as creepiness), in addition to traditional measurements of disclosure and comfort. To me, the paper suggests a further ethical question about the use of paternalistic measures in privacy. Given that nudges could affect users’ behaviors both positively and negatively toward an app, how should we make ethical decisions when designing nudges into systems?

Looking at the role of anonymity, Ruogu Kang, Dabbish, and Sutton conducted interviews with users of anonymous smartphone apps, focusing on Whisper and YikYak. They found that users mostly use these apps to post personal disclosures and do so for social reasons: social validation from other users, making short-term connections (on or off-line), sharing information, or avoiding social risk and context collapse. Anonymity and a lack of social boundaries allowed participants to feel alright venting certain complaints or opinions that they wouldn’t feel comfortable disclosing on non-anonymous social media.

Investigating privacy and data, Peter Tolmie, Crabtree, Rodden, Colley, and Luger discuss the need for articulation work in order to make fine-grained sensing data legible, challenging the notion that the more data home sensing systems collect, the more that can be learned about the individuals in the home. Instead, they find that in isolation, these personal data are quite opaque. Someone is needed to explain the data and provide contextual insights, and social and moral understandings of data – for subjects, the data does not just show events but also insight into what should (or should not) be happening in the home. One potential side effect of home data collection might be that the data surfaces practices that otherwise might not be seen (such as activities in a child’s room), which might create concerns about accountability and surveillance within a home.

Related to privacy and data, Janet Vertesi, Kaye, Jarosewski, Khovanskaya, and Song frame data management practices in the context of moral economy. I found this a welcome perspective to online privacy issues, adding to well-researched perspectives of information disclosure, context, and behavioral economics. Using mapping techniques with interviewees, the authors focused on participants’ narratives over their practices, finding a strong moral undertone to the way people managed their personal data – managing what they considered “their data” in a “good” or “appropriate” way. Participants spoke about managing overlapping systems, devices, and networks, but also managing multiple human relationships – both with other individuals and with the companies making the products and services. I found two points particularly compelling: First, the description that interviewees did not describe sharing data as a performance to an audience, but rather a moral action (e.g. being a “good daughter” means sharing and protecting data in particular ways). Second, that given the importance participants placed on the moral aspects of data management, many feared that changes in companies’ products or interfaces would make it harder to manage data in “the right way,” rather than a fear of inadvertent data disclosure.

Policy

I was happy to see continued attention at CSCW to issues of policy. Casey Fiesler, Lampe, and Bruckman investigate the terms of service for copyright of online content. Drawing some parallels to research on privacy policies, they find that users are often unaware of sites’ copyright policies (which are often not very readable), but do care about content ownership. They note that different websites use very different licensing terms, and that some users may have a decent intuition about some rights. However, there are some licensing terms across sites that users often do not expect or know about – such as the right for sites to modify users’ content. These misalignments could be potentially problematic. Their work suggests that clearer copyright terms of service could be beneficial. While this approach has been heavily researched in the privacy space to varying degrees of success, there is a clear set of rights associated with copyright which (at the outset at least) would seem to indicate plain language descriptions may be useful and helpful to users.

In another discussion of policy, Alissa Centivany uses the case of HathiTrust (a repository of digital content from research libraries in partnership with Google Books) to frame policy not just as a regulating force, but as a source of embedded generativity – that policy can also open up new spaces and possibilities (and foreclose others). Policy can open and close technical and social possibilities similar to the way design choices can. Specifically, she cites the importance of a specific clause in the 2004 agreement between the University of Michigan and Google that allowed the University the right to use its digital copies “in cooperation with partner research libraries,” which eventually led to the creation of HathiTrust. HathiTrust represents an emergent system out of the conditions of possibility enabled by the policy. It’s important to also recognize that policies can also foreclose other possibilities – for example, the restriction to other library consortia excludes the Internet Archive from HathiTrust. In the end, Centivany posits that policy is a potential non-technical solution to help bridge the sociotechnical gap.

Infrastructure

I was similarly pleased to see several papers using the lens of infrastructure. Ingrid Erickson and Jarrahi investigate knowledge workers’ experience of seams and creating workarounds. They find both technological and contextual constraints that create seams in infrastructure: technological constraints such as public Wi-Fi access that doesn’t accommodate higher-bandwidth applications like Skype, or incompatibility between platforms; and contextual constraints such as the locations of available 4G and Wi-Fi access, or cafes that set time limits on how long patrons can use free Wi-Fi. Workers respond with a number of “workarounds” when encountering work systems and infrastructures that do not fully meet their needs: bridging these gaps, assembling new infrastructural solutions, or circumventing regulations.

Susann Wagenknecht and Matthias Korn look at hacking as a way to critically engage and (re)make infrastructures, noting that hacking is one way to make tacit conventions visible. They follow a group of German phone hackers who “open” the GSM mobile phone system (a system much more closed and controlled by proprietary interests than the internet) by hacking phones and creating alternative GSM networks. Through reverse engineering, re-implementing parts of the system, and running their own versions of the system, the hackers appropriate knowledge about the GSM system: how it functions; how to repair, build, and maintain it; and how to control where and by whom it is used. These hacking actions can be considered “infrastructuring” as they render network components visible and open to experimentation, as well as contributing toward a sociotechnical imaginary foreseeing GSM as a more transparent and open system.

Time

Adding to a growing body of CSCW work on time, Nan-Chen Chen, Poon, Ramakrishnan, and Aragon investigate the role of time and temporal rhythms in a high performance computing center at a National Lab, following in the vein of other authors’ work on temporal rhythms that I thoroughly enjoy (Mazmanian, Erickson & Harmon, Lindley, Sharma, and Steinhardt & Jackson). They draw on collective notions of time over individual ones, finding frictions between human and computer patterns of time and human and human patterns of time. For instance, scientists doing research and writing code have to weigh the (computer) time patterns related to code efficiency and (human) time patterns related to project schedules or learning additional programming skills. Or they may have to weigh the (human) time it takes to debug their own code versus the (human) time it takes to get another person to help debug their code and make it more efficient. In this timesharing environment, scientists have to juggle multiple temporal rhythms and temporal uncertainties caused by system updates, queue waiting time, human prep work, or other unexpected delays.

Cooperation and Work

Contributing to the heart of CSCW, several research papers studied problems at the forefront of cooperation and work. Carman Neustaedter, Venolia, Procyk, and Hawkins reported on a study of remote telepresence robots (“Beams”) at the ACM Ubicomp and ISWC conferences. Notably, remote tele-presence bot use at academic conferences differed greatly from office contexts. Issues of autonomy are particularly interesting: is it alright for someone to physically move the robot? How could Beam users benefit from feedback of their microphone volume, peripheral cameras, or other ways to show social cues? Some remote users used strategies to manage privacy in their home environment by blocking the camera or turning off the microphone, but had a harder time managing privacy in the public conference environment, such as speaking more loudly than intended. Some participants also created strong ties with their remote Beams, feeling that they were in the “wrong body” when their feed transferred between robots. It was fascinating to read this paper and compare it to my personal experience after seeing Beams used and interacting with some of them at CSCW 2016.

Tawanna Dillahunt, Ng, Fiesta, and Wang research how MOOCs support (or don’t support) employability, with a focus on low socioeconomic status learners, a population that is not well understood in this environment. They note that while MOOCs (Massive Open Online Courses) help provide human capital (learners attain new skills like programming), they lack support for increasing social capital, helping form career identity, and personal adaptability. Many low socioeconomic status learners said that they were not able to afford formal higher education (due to financial cost, time, family obligations, and other reasons). Most felt that MOOCs would be beneficial to employment, and unlike a broader population of respondents, largely were not concerned about the lack of accreditation for MOOCs. They did, however, discuss other barriers, such as lack of overall technical literacy, or how MOOCs can’t substitute for actual experience.

Noopur Raval and Dourish bring concepts from feminist political economy to the experience of ridesharing crowd labor: notions of immaterial labor, affective labor, and temporal cultural politics, none of which is traditionally considered “work.” They find that Uber and Lyft drivers must engage in affective and immaterial labor, needing to perform the identity of a 5-star driver while frustrated that many passengers don’t understand how the rating systems are weighed. Drivers’ status as contractors provides individual opportunities for microbranding, but also creates individual risks that may pit customers’ desires and drivers’ safety against each other. The authors suggest that we may need to reconceptualize ideas of labor if previously informal activities are now considered work, and that we may be able to draw more strongly on labor relations studies and labor theory.

Research Methods and Ethics

Several papers provided reflections on doing research in CSCW. Jessica Vitak, Shilton, and Ashktorab present survey results on researchers’ ethical beliefs and practices when using online data sets. Online research poses challenges to the ways Institutional Review Boards have traditionally interpreted the Belmont Report’s principles of respect, beneficence, and justice. One finding was that researchers believe ethical standards, norms, and practices differ across disciplines and across sectors such as academia and industry. (I’ve often heard this discussed anecdotally by people working in the privacy space.) However, the authors find that ethical attitudes do not in fact significantly vary across disciplinary boundaries: there is general agreement on five practices that may serve as a set of foundational ethical research practices. This opens the possibility for researchers across disciplines, and across academia and industry, to unite around and learn from a common set of data research practices.

Daniela Rosner, Kawas, Li, Tilly, and Sung provide great insight into design workshops as a research method, particularly when things don’t go quite as the researcher intends. They look at design workshops as sites of study, as research instruments, and as a process that invites researchers to reflexively examine their research practices and methods. They suggest that workshops engage with different types of temporal relations: both the long-lasting and meaningful relationships participants might have with objects before and after the workshops, and the temporal rhythms of the workshops themselves. What counts as participation? The timing of a workshop may not match the time that participants want to, or can, spend, and the alternate (sometimes unintended or challenging) practices participants bring to workshops can be useful too. The authors’ insights might make us rethink how we define participation in CSCW (and perhaps in interventionist and participatory work more broadly), and how we can gain insights from interventional and exploratory research approaches.

In all, I’m excited by the directions CSCW research is heading in, and I’m very much looking forward to CSCW 2017!


by Richmond at April 26, 2016 06:33 PM

April 25, 2016

Center for Technology, Society & Policy

Developing Strategies to Counter Online Abuse

By Nick Doty, CTSP | Permalink

We are excited to host a panel of experts this Wednesday, talking about strategies for making the Internet more gender-inclusive and countering online harassment and abuse.

Toward a Gender-Inclusive Internet: Strategies to Counter Harassment, Revenge Porn, Threats, and Online Abuse
Wednesday, April 27; 4:10-5:30 pm
202 South Hall, Berkeley, CA
Open to the public; Livestream available

These are experts and practitioners in law, journalism and technology with an interest in the problem of online harassment. And more importantly, they’re all involved with ongoing concrete approaches to push back against this problem (see, for example, Activate Your Squad and Block Together). While raising awareness about online harassment and understanding the causes and implications remains important, we have reached the point where we can work on direct countermeasures.

The Center for Technology, Society & Policy intends to fund work in this area, which we believe is essential for the future of the Internet and its use for digital citizenship. We encourage students, civil society and industry to identify productive collaborations. These might include:

  • hosting a hackathon for developing moderation or anti-harassment tools
  • drafting model legislation to address revenge porn while maintaining support for free expression
  • standardizing block lists or other tools for collaboratively filtering out abuse
  • meeting in reading groups, support groups or discussion groups (sometimes beer and pizza can go a long way)
  • conducting user research and needs assessments for different groups that encounter online harassment in different ways

And we’re excited to hear other ideas: leave your comments here, join us on Wednesday, or get in touch via email or Twitter.



by Nick Doty at April 25, 2016 09:50 PM

April 15, 2016

Center for Technology, Society & Policy

Please Can We Not Try to Rationalize Emoji

By Galen Panger, CTSP Director | Permalink

Emoji are open to interpretation, and that’s a good thing. Credit: Samuel Barnes

This week a study appeared on the scene suggesting an earth-shattering, truly groundbreaking notion: Emoji “may be open to interpretation.”

And then the headlines. “We Really Don’t Know What We’re Saying When We Use Emoji,” a normally level-headed Quartz proclaimed. “That Emoji Does Not Mean What You Think It Means,” Gizmodo declared. “If Emoji Are the Future of Communication Then We’re Screwed,” New York Magazine cried, obviously not trying to get anyone to click on its headline.

Normally I might be tempted to blame journalists for sensationalizing academic research, but in this instance, I think the fault actually lies with the research. In their study, Hannah Miller, Jacob Thebault-Spieker and colleagues from the University of Minnesota took a bunch of smiley face emoji out of context, asked a bunch of people what they meant, and were apparently dismayed to find that, 25% of the time, people didn’t even agree on whether a particular emoji was positive or negative. “Overall,” the authors write, “we find significant potential for miscommunication.”

It’s odd that an academic paper apparently informed by such highfalutin things as psycholinguistic theory would be concerned that words and symbols can have a range of meanings, even going so far as to be sometimes positive and sometimes negative. But of course they do. The word “crude” can refer to “crude” oil, or it can refer to the double meanings people are assigning to emoji of fruits and vegetables. “Crude” gains meaning in context. That people might not agree on what a word or symbol means outside of the context in which it is used is most uninteresting.

The authors mention this at the end of their paper. “One limitation of this work is that it considered emoji out of context (i.e., not in the presence of a larger conversation).” Actually, once the authors realized this, they should have started over and come up with a research design that included context.

The fact that emoji are ambiguous, can stand for many things, and might even evolve to stand for new things, is part of what makes them expressive. It’s part of what makes them dynamic and fun, and trying to force a one-to-one relationship between emoji and interpretation would make them less, not more, communicative. So please, if we’re going to try to measure the potential for miscommunication wrought by our new emoji society, let’s measure real miscommunication. Not normal variations in meaning that might be clear (even clever!) in context or that might be clarified during the normal course of conversation. Or that might remain ambiguous but at least not harm our understanding (while still making our message just that much cuter 💁). Once we’ve measured actual miscommunication, then we can decide whether we want to generate a bunch of alarmist headlines or not.

That said, all of the headlines the authors generated with their study did help to raise awareness of a legitimate problem for people texting between platforms like iOS and Android. Differences in how a few emoji are rendered by different platforms can mean we think we’re sending a grinning face, when in fact we’re sending a grimacing one. Or perhaps we’re sending aliens. “I downloaded the new iOS platform and I sent some nice faces,” one participant in the study said, “and they came to my wife’s phone as aliens.”
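
The mechanics here are worth spelling out: both phones exchange the identical Unicode codepoint, and only the vendor-drawn glyph differs. A minimal Python illustration, using U+1F601 (“grinning face with smiling eyes,” which, as I understand it, was the poster child of the study’s coverage):

    import unicodedata

    # Both sender and recipient exchange this exact codepoint; the character
    # and its name are fixed by the Unicode standard...
    emoji = "\U0001F601"
    print(hex(ord(emoji)))          # 0x1f601
    print(unicodedata.name(emoji))  # GRINNING FACE WITH SMILING EYES

    # ...but the glyph drawn for U+1F601 is chosen by the OS vendor, which is
    # why a face that looks grinning on one platform can look grimacing on another.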

That’s no good. Although, at least they were probably cute aliens. 🙌

Cross-posted to Medium.

by Galen Panger at April 15, 2016 11:58 PM

April 12, 2016

Center for Technology, Society & Policy

Start Research Project. Fix. Then Actually Start.

By Robyn Perry, CTSP Fellow | Permalink

If you were living in a new country, would you know how to enroll your child in school, get access to health insurance, or find affordable legal assistance? And if you didn’t, how would you deal?

As Maggie, Kristen, and I are starting to interview immigrant women living in the US and the organizations that provide support to them, we are trying to understand how they deal – particularly, how they seek social support when they face stress.

This post gives a bit of an orientation to our project and helps us document our research process.

We’ve developed two semi-structured questionnaires to guide our interviews: one for immigrants and one for staff at service providers (organizations that provide immigrants with legal aid, job training, and access to resources for navigating life in the US, and otherwise support their entry and integration). We are seeking to learn about women immigrants who have been in the US between one and seven years. All interviews are conducted in English by one of the team members. Because we will each be conducting one-on-one interviews separately from each other, we are striving for a high degree of consistency in our interview process.

As we have begun interviewing, we’ve realized a couple of things:

  1. Balancing specificity and ambiguity is difficult, but necessary. We need questions that are agile enough to be adapted to each respondent, but answers that are not too divergent to compare once we do analysis.
  2. The first version of our interview questions needed sharpening so we could learn more about the particular technologies respondents use for specific types of relationships and for different modes of social support. Without this refinement, it would have been difficult to complete the analysis as we had intended.

We have sought advice from several mentors, and that has helped us arrive at our current balance of trade-offs. We have found support in Gina Neff and Nancy Rivenburgh, Maggie and Kristen’s PhD advisors, as well as in Betsy Cooper, the new director of the Center for Long-Term Cybersecurity.

When we met with Betsy, she took about 30 seconds to assess the work we had done so far and begin to offer suggestions for dramatic improvements. She challenged us to reexamine our instruments, asking, “Do the answers you will get by asking these questions actually get at your big research questions?” The other advisors have challenged us similarly.

This helpful scrutiny has pushed us to complete a full revision of our migrant interview questions. Betsy also recommended we narrow our pool of immigrants further, to one to three geographic regions of origin, so that we might either compare between groups or make qualified claims about at least one subset of interviewees. As a result, we would like to interview at least 5 people within particular subsets based on geography or other demographics, and we are making an effort to interview in geographic clusters. For example, we are angling to interview individuals originally from East Africa, Central America, and/or Mexico, given each group’s large presence at both sites.

However, we anticipate that there may be greater similarities among immigrants who are similarly positioned in terms of socioeconomic status (and potentially reason for immigrating) than among those who come from similar geographies. We’ll be considering as many points of similarity between interviewees as possible.

We’re finding that these three scholars, Nancy, Gina, and Betsy, fall at different points on a spectrum that runs from preferring structured interviews to eschewing rigidity and letting the interview take whatever direction the researcher finds most fruitful during conversation. Our compromise has been to sharpen the questions we step through with interviewees (removing or reducing ambiguity as much as possible), leaning on clarifying notes and subquestions when the themes we’re looking for don’t emerge organically from the answer to the broader question.

Homing in on what’s really important is both imperative and iterative. In addition to our questionnaire, this includes definitions, objectives, and projected outcomes. This may sound like a banality, but doing so has been quite challenging. For example, what specifically do we mean by ‘migrant’, ‘social support’, ‘problem’, or ‘stress’? Reaching group clarity about this is essential. We also must remain as flexible as possible: in the spirit of our work, we recognize we have a lot to learn, and artificially rigid definitions may not position us well to learn from those we are interviewing (or even to find the right people to interview).

As we seek clarity in our definitions, we’ve looked to existing models in the great scientific tradition of standing on the shoulders of others. Social support defies easy definition, but one helpful distinction in an article by Cutrona & Suhr splits social support into two types: nurturant (attempts to care for the person rather than the problem) and action-facilitating (attempts to mitigate the problem causing the stress). We’ve found this distinction helpful as both a guide and a foil for revising our questionnaire instrument to clarify what we’re looking for. We may find another classification from the literature that better matches what we are finding in our interviews, so we’ll stay open to this possibility.

Stay tuned. We’re excited to share what we learn as we get deeper into the interviews and begin analysis in a couple of weeks.

Cross-posted over at the Migrant Women & Technology Project blog.

by Robyn Perry at April 12, 2016 05:50 PM

April 07, 2016

Center for Technology, Society & Policy

Moderating Harassment in Twitter with Blockbots

By Stuart Geiger, ethnographer and post-doctoral scholar at the Berkeley Institute for Data Science | Permalink

I’ve been working on a research project about counter-harassment projects in Twitter, where I’ve been focusing on blockbots (or bot-based collective blocklists) in Twitter. Blockbots are a different way of responding to online harassment, representing a more decentralized alternative to the standard practice of moderation — typically, a site’s staff has to go through their own process to definitively decide what accounts should be suspended from the entire site. I’m excited to announce that my first paper on this topic will soon be published in Information, Communication, and Society (the PDF on my website and the publisher’s version).

This post is a summary of that article and some thoughts about future work in this area. The paper is based on my empirical research on this topic, but it takes a more theoretical and conceptual approach given how novel these projects are. I give an overview of what blockbots are, the context in which they have emerged, and the issues that they raise about how social networking sites are to be governed and moderated with computational tools. I think there is room for much future research on this topic, and I hope to see more work on this topic from a variety of disciplines and methods.

What are blockbots?

Blockbots are automated software agents developed and used by independent, volunteer users of Twitter, who have built their own social-computational tools to help moderate their own experiences on Twitter.

The blocktogether.org interface, which lets people subscribe to other people’s blocklists, publish their own blocklists, and automatically block certain kinds of accounts.

Functionally, blockbots work similarly to ad blockers: people can curate lists of accounts they do not wish to encounter, and others can subscribe to these lists. To subscribe, you have to give the blockbot limited access to your account, so that it can update your blocks based on the blocklists you subscribe to. One of the most popular platforms for supporting blockbots in Twitter is blocktogether.org, which hosts the popular ggautoblocker project and many smaller, private blocklists. Blockbots were developed to help combat harassment on the site, particularly coordinated harassment campaigns, although they are a general purpose approach that can be used to filter across any group or dimension. (More on this later in this post.)

A subscription-based model

Blockbots extend the functionality of the social networking site to make the work of responding to harassment more efficient and more communal. Blockbots are based on the standard feature of individual blocking, in which users can hide specific accounts from their experience of the site. Blocking has long been directly integrated into Twitter’s user interfaces, which is necessary because by default, any user on Twitter can send tweets and notifications to any other user. Users can make their accounts private, but this limits their ability to interact with a broad public — one of the big draws of Twitter versus more tightly bound social networking sites like Facebook.

For those who wish to use Twitter to interact with a broad public and find themselves facing rising harassment, abuse, trolling, and general incivility, the typical solution is to individually block accounts. Users can also report harassing accounts to Twitter for suspension, but this process has long been accused of being slow and opaque. People also have quite different ideas about what constitutes harassment, and the threshold to get suspended from Twitter is relatively high. As a result, those who receive unsolicited remarks face the Sisyphean task of individually blocking every account that sends them inappropriate mentions. This is a large reason why a subscription model has emerged for bot-based collective blocklists.

Some blockbots use lists curated by a single person, others use community-curated blocklists, and a final set use algorithmically-generated blocklists. The benefit of blockbots is that they let ad-hoc groups form around common understandings of what they want their experiences on the site to be. Blockbots are opt-in, and they only apply to the people who subscribe to them. There are groups who coordinate campaigns against specific individuals (I need not name specific movements, but you can observe this by reading the abusive tweets received in just one week by Anita Sarkeesian of Feminist Frequency, many of which are incredibly emotionally disturbing). With blockbots, the work of responding to harassers can be efficiently distributed to a group of likeminded individuals or delegated to an algorithmic process.

Blockbots first emerged in Twitter in 2012, and their development followed a common trend in technological automation. People who were part of the same online community found that they were being harassed by a common set of accounts. They first started sharing these accounts manually whenever they encountered them, but they found that this process of sharing blockworthy accounts could be automated and made more collective and efficient. Since then, the computational infrastructure has grown in leaps and bounds. It has been standardized with the development of the blocktogether.org service, which makes it easy and efficient for blocklist curators and subscribers to connect. People do not need to develop their own bots anymore; they only need to develop their own process for generating a list of accounts.
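
To make the mechanics concrete, here is a minimal sketch of the core synchronization loop (mine, not blocktogether.org’s actual code). It fetches a shared blocklist and applies it to a subscriber’s account through Twitter’s REST API (the blocks/create endpoint in API v1.1); the blocklist URL and credentials are hypothetical placeholders.

    import requests
    from requests_oauthlib import OAuth1

    # Hypothetical shared blocklist: a plain-text file with one Twitter user ID per line
    BLOCKLIST_URL = "https://example.org/shared-blocklist.txt"

    def sync_blocks(auth):
        """Block every account on the shared list on behalf of the subscriber."""
        user_ids = requests.get(BLOCKLIST_URL).text.split()
        for uid in user_ids:
            # Twitter REST API v1.1: POST blocks/create blocks the given user
            requests.post(
                "https://api.twitter.com/1.1/blocks/create.json",
                params={"user_id": uid, "skip_status": "true"},
                auth=auth,
            )

    # The subscriber grants the bot limited access via OAuth (placeholder keys)
    auth = OAuth1("app-key", "app-secret", "subscriber-token", "subscriber-token-secret")
    sync_blocks(auth)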

How are public platforms governed with data and algorithms?

Beyond the specific issue of online harassment, blockbots are an interesting development in how public platforms are governed with data and algorithmic systems. Typically, the responsibility for moderating behavior on social networking sites and discussion forums falls to the organizations that own and operate these sites. They have to both make the rules and enforce them, which is increasingly difficult to do at the scale that many of these platforms have now achieved. At this scale, not only is there a substantial amount of labor involved in this moderation work, but it is also increasingly unlikely that there will be a common understanding about what is acceptable and unacceptable. As a result, there has been a proliferation of flagging and reporting features that are designed to collect information from users about what material they find inappropriate, offensive, or harassing. These user reports are then fed into a complex system of humans and algorithmic agents, who evaluate the reports and sometimes take an action in response.

I can’t write too much more about the flagging/reporting process on the platform operator’s side in Twitter, because it largely takes place behind closed doors. I haven’t found too many people who are satisfied with how moderation takes place on Twitter. There are people who claim that Twitter is not doing nearly enough to suspend accounts that are sending harassing tweets to others, and there are people who claim that Twitter has gone too far when they do suspend accounts for harassment. This is a problem faced by any centralized system of authority that has to make binary decisions on behalf of a large group of people; the typical solution is what we usually call “politics.” People petition and organize around particular ideas about how this centralized system ought to operate, seeking to influence the rules, procedures, and people who make up the system.

Blockbots are a quite different mode of using data and algorithms to moderate large-scale platforms. They are still political, but they operate according to a different model of politics than the top-down approach that places substantial responsibility for governing a site on platform operators. In my studies of blockbot projects, I’ve found that members of these groups have serious discussions and debates about what kind of activity they are trying to identify and block. I’ve even seen groups fracture and fork over different standards of blockworthiness — which I think can sometimes be productive. A major benefit of blockbots is that they do not operate according to a centralized system of authority with only one standard of blockworthiness, under which someone is either allowed to contact anyone or no one.

Blockbots as counterpublic technologies

In the paper, I analyze blockbot projects as counterpublics, borrowing the term Nancy Fraser coined in her excellent critique of Jürgen Habermas’s account of the public sphere. Fraser argues that there are many publics where people assemble to discuss issues relevant to them, but only a few of these publics get elevated to the status of “the public.” She argues that we need to pay attention to the “counterpublics” that are created when non-dominant groups find themselves excluded from more mainstream public spheres. Typically, counterpublics have been analyzed as “separate discursive spaces”: safe spaces where members of these groups can assemble without facing the chilling effects that are common in mainstream public spheres. However, blockbots parallelize the public sphere in a different way than the spaces scholars of the public sphere have historically analyzed.

One aspect of counterpublics is that they serve as sites of collective sensemaking: they are a space where members of non-dominant groups can work out their own understandings about issues that they face. I found a substantial amount of collective sensemaking in these groups, which can be seen in the intense debates that sometimes take place over defining standards of blockworthiness. As a blockbot can be easily forked (particularly with the blocktogether.org service), people are free to imagine and implement all kinds of possibilities about how to define harassment or any other criterion. People can also introduce new processes for curating a blocklist, such as adding a human appeals board for a blocklist that was generated by an algorithmic process. I’ve also seen a human-curated blocklist move from a “two eyes” to a “four eyes” principle, requiring that every addition to the blocklist be approved by a second authorized curator before it would be synchronized with all the subscribers.
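
As a toy sketch of what that “four eyes” rule might look like in code (my own illustration; none of these names come from an actual blockbot project): proposed additions sit in a pending queue and are only released for synchronization once a different authorized curator signs off.

    class FourEyesBlocklist:
        """Toy model: every addition needs sign-off from a second curator."""

        def __init__(self, curators):
            self.curators = set(curators)
            self.pending = {}      # user_id -> curator who proposed the block
            self.approved = set()  # user_ids ready to be synchronized to subscribers

        def propose(self, curator, user_id):
            assert curator in self.curators
            self.pending[user_id] = curator

        def approve(self, curator, user_id):
            assert curator in self.curators
            # The approver must be a different curator than the proposer
            if user_id in self.pending and self.pending[user_id] != curator:
                del self.pending[user_id]
                self.approved.add(user_id)

    blocklist = FourEyesBlocklist(["alice", "bob"])
    blocklist.propose("alice", "12345")
    blocklist.approve("alice", "12345")  # ignored: same curator proposed it
    blocklist.approve("bob", "12345")    # approved: now synchronized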

Going beyond “What is really harassment?”

Blockbots were originally created as counter-harassment technologies, but harassment is a very complicated term — one that even has different kinds of legal significance in different jurisdictions. One of the things I have found in conducting this research is that if you ask a dozen people to define harassment, you’ll get two dozen different answers. Some people who have found themselves on blocklists have claimed that they do not deserve to be there. And as in any debate on the Internet, there have even been legal threats, including allegations of infringement of freedom of speech. I do think that the handful of major social networking sites are close to having a monopoly on mediated social interaction, and so the decisions they make about whom to suspend or ban are ones we should look at incredibly closely. However, I think it is important to acknowledge these multiple definitions of harassment and other related terms, rather than try to close them down and find one that will work for everyone.

I think it is important and useful to move away from having just one single authoritative system that returns a binary decision about whether an activity is or is not allowed for all users of a site. I’ve seen controversies over this not just with harassment/abuse/trolling on Twitter, but also with things like photos of breastfeeding on Facebook. I think we should be exploring tools that give people more agency to better moderate their own experiences on social networking sites, where ‘better’ means both more efficiently and more collectively. Facebook already uses sophisticated machine learning models to try to intuit what it thinks you want to see (i.e., what will keep you on the site looking at ads the longest), but I’d rather see this take place in a more deliberate and transparent manner, where people take an active role in defining their own expectations.

I also think it is important to distinguish between the right to speak and the right to be heard, particularly on privately-owned social networking sites. Being placed on a blocklist means that someone’s potential audience is cut, which can be a troubling prospect for people who are used to their speech being heard by default. In the paper, I do discuss how modes of filtering and promotion are the mechanisms through which cultural hegemony operates. Scholars who focus on marginalized and non-dominant groups have long noted the need to investigate such mechanisms. However, I also review the literature on how harassment, trolling, incivility, and related phenomena are also ways in which people are pushed out of public participation. The public sphere has never been neutral, although the fiction that it is a neutral space where all stand on equal ground is one that has long been advanced by people who have a strong voice in such spaces.

How do you best build a decentralized classification system?

One issue that is relevant in these projects is the kind of false positive versus false negative rates we are comfortable having. No classification system is perfect (Bowker and Star’s Sorting Things Out is a great book on this), and it isn’t hard to see why someone facing a barrage of unwanted messages might be more willing to accept a false positive than a false negative. On this issue, I see an interesting parallel with Wikipedia’s quality control systems, which my collaborators and I have written extensively about. There was a point in time when Wikipedians were facing a substantial amount of vandalism and hate speech in the “anyone can edit” encyclopedia, far too much for them to tackle on their own. They developed a lot of sophisticated tools (see The Banning of a Vandal and Wikipedia’s Immune System). However, my collaborators and I found that these tools produce a lot of false positives, which can inhibit participation among the good-faith newcomers who get hit as collateral damage. And so there have been some really interesting projects to try to correct that, using new kinds of feedback mechanisms, user interfaces, and Application Programming Interfaces (like Snuggle and ORES, led by Aaron Halfaker).

I suspect that if this decentralized approach to moderation in social networking sites gets more popular, then we might see a whole sub-field emerge around this issue, extending work done in spam filtering and recommender systems. Blockbots are still at the initial stages of their development, and I think there is a lot of work still to be done. How do we best design and operate a social and technological system so that people with different ideas about what constitutes harassment can thoughtfully and reflectively work out these ideas? How do we give people the support that they need, so that responding to harassment isn’t something people have to do on their own? And how can we do this at scale, leveraging computational techniques without losing the nuance and context that is crucial for this kind of work? Thankfully, there are lots of dedicated, energetic, and bright people who are working on these kinds of issues and thinking about these questions.

Personal issues around researching online harassment

I want to conclude by sharing some anxieties that I face in publishing this work. In my studies of these counter-harassment projects, I’ve seen the people who have taken a lead on these projects become targets themselves. Often this stays at the level of trolling and incivility, but it has extended to more traditional definitions of harassment, such as people who contact someone’s employer and try to get them fired, or people who send photos of themselves outside of that person’s place of work. In some cases, it becomes something closer to domestic terrorism, with documented cases of people who have had the police come to their house because someone reported a hostage situation at their address, as well as people who have had to cancel presentations because someone threatened to bring a gun and open fire on their talk.

Given these situations, I’d be lying if I said I wasn’t concerned that this kind of activity might come my way. However, this is part of what the current landscape around online harassment is like. It shows how significant this problem is and how important it is that people work on this issue using many methods and strategies. In the paper, I spend some time arguing why I don’t think that blockbots are part of the dominant trend of “technological solutionism,” where a new technology is celebrated as the definitive way to fix what is ultimately a social problem. The people who work on these projects don’t talk about them in this solutionist way either. However, blockbots are tackling the symptoms of a larger issue, which is why I am glad that people are working on multifaceted projects and initiatives that investigate and tackle the root causes of harassment, like HeartMob, Hollaback, Women, Action, and the Media, Crash Override, the Online Abuse Prevention Initiative, the many researchers working on harassment (see this resource guide), the members of Twitter’s recently announced Trust and Safety Council, and many more people and groups I’m inevitably leaving out.

Cross-posted to the Berkeley Institute for Data Science.

by Stuart Geiger at April 07, 2016 07:21 PM

April 06, 2016

Ph.D. alumna

Where Do We Find Ethics?

I was in elementary school, watching the TV live, when the Challenger exploded. My classmates and I were stunned and confused by what we saw. With the logic of a 9-year-old, I wrote a report on O-rings, trying desperately to make sense of a science I did not know and a public outcry that I couldn’t truly understand. I wanted to be an astronaut (and I wouldn’t give up that dream until high school!).

Years later, with a lot more training under my belt, I became fascinated not simply by the scientific aspects of the failure, but by the organizational aspects of it. Last week, Bob Ebeling died. He was an engineer at a contracting firm, and he understood just how badly the O-rings handled cold weather. He tried desperately to convince NASA that the launch was going to end in disaster. Unlike many people inside organizations, he was willing to challenge his superiors, to tell them what they didn’t want to hear. Yet, he didn’t have organizational power to stop the disaster. And at the end of the day, NASA and his superiors decided that the political risk of not launching was much greater than the engineering risk.

Organizations are messy, and the process of developing and launching a space shuttle or any scientific product is complex and filled with trade-offs. This creates an interesting question about the site of ethics in decision-making. Over the last two years, Data & Society has been convening a Council on Big Data, Ethics, and Society where we’ve had intense discussions about how to situate ethics in the practice of data science. We talked about the importance of education and the need for ethical thinking as a cornerstone of computational thinking. We talked about the practices of ethical oversight in research, deeply examining the role of IRBs and the different oversight mechanisms that can and do operate in industrial research. Our mandate was to think about research, but, as I listened to our debates and discussions, I couldn’t help but think about the messiness of ethical thinking in complex organizations and technical systems more generally.

I’m still in love with NASA. One of my dear friends — Janet Vertesi — has been embedded inside different spacecraft teams, understanding how rovers get built. On one hand, I’m extraordinarily jealous of her field site (NASA!!!), but I’m also intrigued by how challenging it is to get a group of engineers and scientists to work together for what sounds like an ultimate shared goal. I will never forget her description of what can go wrong: Imagine if a group of people were given a school bus to drive, only they were each given a steering wheel of their own and had to coordinate among themselves which way to go. Introduce power dynamics, and it’s amazing what all can go wrong.

Like many college students, encountering Stanley Milgram’s famous electric shock experiment floored me. Although I understood why ethics reviews came out of the work that Milgram did, I’ve never forgotten the moment when I fully understood that humans could do inhuman things because they’ve been asked to do so. Hannah Arendt’s work on the banality of evil taught me to appreciate, if not fear, how messy organizations can get when bureaucracies set in motion dynamics in which decision-making is distributed. While we think we understand the ethics of warfare and psychology experiments, I don’t think we have the foggiest clue how to truly manage ethics in organizations. As I continue to reflect on these issues, I keep returning to a college debate that has constantly weighed on me. Audre Lorde said, “the master’s tools will never dismantle the master’s house.” And, in some senses, I agree. But I also can’t see a way of throwing rocks at a complex system that would enable ethics.

My team at Data & Society has been grappling with different aspects of ethics since we began the Institute, often in unexpected ways. When the Intelligence and Autonomy group started looking at autonomous vehicles, they quickly realized that humans were often left in the loop to serve as “liability sponges,” producing “moral crumple zones.” We’ve seen this in organizations for a long time. When a complex system breaks down, who is to be blamed? As the Intelligence & Autonomy team has shown, this only gets more messy when one of the key actors is a computational system.

And that leaves me with a question that plagues me as we work on our Council on Big Data, Ethics, and Society whitepaper: How do we enable ethics in the complex big data systems that are situated within organizations, influenced by diverse intentions and motivations, shaped by politics and organizational logics, complicated by issues of power and control?

No matter how thoughtful individuals are, no matter how much foresight people have, launches can end explosively.

(This was originally posted on Points.)

by zephoria at April 06, 2016 12:55 AM

April 01, 2016

Ph.D. student

Ebb

Ebb: Dynamic Textile Displays from Laura Devendorf on Vimeo.

Ebb is an exploration of dynamic textiles created in partnership with Project Jacquard. My collaborators at UC Berkeley and I coated conductive threads with thermochromic pigments and explored how we could leverage the geometries of weaving and crochet to create unique aesthetic effects and power efficiencies. The thermochromic pigments change color in slow, subtle, and even ghostly ways, and when we weave them into fabrics, they create calming “animations” that move across the threads. The name “Ebb” reflects this slowness, conjuring images of the ebb and flow of the tides rather than the rapid-fire changes we typically associate with light-emitting information displays. For this reason, Ebb offers a nuanced and subtle approach to displaying information on fabrics. A study we conducted with fashion designers and non-designers (i.e., people who wear clothes) explored potentials for dynamic fabrics in everyday life and revealed an important role for subtle, abstract displays of information in these contexts.


Publications

Noura Howell, Laura Devendorf, Rundong (Kevin) Tian, Tomas Vega, Nan-Wei Gong, Ivan Poupyrev, Eric Paulos, Kimiko Ryokai.
“Biosignals as Social Cues: Ambiguity and Emotional Interpretation in Social Displays of Skin Conductance”
In Proceedings of the SIGCHI Conference on Designing Interactive Systems
(DIS ’16)

Laura Devendorf, Joanne Lo, Noura Howell, Jung Lin Lee, Nan-Wei Gong, Emre Karagozler, Shiho Fukuhara, Ivan Poupyrev, Eric Paulos, Kimiko Ryokai.
“I don’t want to wear a screen”: Probing Perceptions of and Possibilities for Dynamic Displays on Clothing.
In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
(CHI ’16 – Best Paper Award)

Press
Gizmodo: http://gizmodo.com/color-changing-threads-might-one-day-turn-your-t-shirt-1774966030

by admin at April 01, 2016 09:47 PM

March 31, 2016

Ph.D. student

Hands in the Land of Machines


Drawing Hands from Laura Devendorf on Vimeo.

I created a CAD model of 12 hands arranged along different axes, converted that CAD file to G-Code, and used my “Being the Machine” laser guide to draw the G-Code by hand, following the laser as it traced each path. After each mark, I used my palm to smear the charcoal, to account for the darkness that would build as successive layers were added. I performed the drawing live to emphasize the attention to being human (with all of the imprecision, labor, and messiness that comes along with it) in response to an overwhelming attention to technologies that are clean, virtual, simulated, and magical. This drawing took about four hours, and the charcoal took about two days to be completely cleaned off my hand.


by admin at March 31, 2016 09:51 PM

March 29, 2016

Center for Technology, Society & Policy

Privacy for Citizen Drones: Use Cases for Municipal Drone Applications

By Timothy Yim, CTSP Fellow and Director of Data & Privacy at Startup Policy Lab | Permalink

Previous Citizen Drone Articles:

  1. Citizen Drones: delivering burritos and changing public policy
  2. Privacy for Citizen Drones: Privacy Policy-By-Design
  3. Privacy for Citizen Drones: Use Cases for Municipal Drone Applications

Startup Policy Lab is leading a multi-disciplinary initiative to create a model policy and framework for municipal drone use.

A Day in the Park

We previously conceptualized a privacy policy-by-design framework for municipal drone applications—one that begins with gathering broad stakeholder input from academia, industry, civil society organizations, and municipal departments themselves. To demonstrate the benefits of such an approach, we play out a basic scenario.

A city’s Recreation and Parks Department (“Parks Dept.”) wants to use a drone to monitor the state of its public parks for maintenance purposes, such as proactive tree trimming prior to heavy seasonal winds, vegetation pruning around walking paths, and any directional or turbidity changes in water flows. For most parks, this would amount to twice-daily flights of approximately 15–30 minutes each. The flight video would then be reviewed, processed, and stored by the Parks Dept.

Even with this “basic” scenario, a number of questions immediately jump to mind. Here are a few:

Intentional & Unintentional Collection

  • Will the drone be recording audio as well as video? And will the drone begin recording within the boundaries of the park? Or over surrounding public streets? What data is actually needed for the stated flight purpose?
  • Will the drone potentially be recording city employees or park-goers? Does the city need to do so for the stated purpose of monitoring for park maintenance? Is such collection avoidable? If not, how can the city build privacy safeguards for unintentional collection of city employees or park-goers into the process?
  • How can notice, consent, and choice principles be brought to bear for municipal employees whose data is collected? How can they be applied to park-goers? To residents of surrounding homes? To citizens merely walking along the edge of the park?


Administrative & Technical Safeguards

  • What sort of access to the collected data will the employees of the recreation and parks department have? Will access be tiered? Who needs access to the raw video? Who needs access only to the post-processed data reports?
  • What sort of processing on the video will occur? Can the processing be algorithmically defined or adapted for machine learning? Can safeguards be placed into the technical processing itself? For example, by algorithmically blurring any persons in the video before long-term storage? (A sketch of one such safeguard follows this list.)
  • What sort of data retention limits will apply to the video data? The post-processed data reports? The flight plans? Should there be a shorter retention period, e.g., 30 days, for the raw video footage?
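
As a concrete illustration of the blurring safeguard mentioned above, here is a minimal sketch using OpenCV’s stock pedestrian detector. It is only a sketch: a real vendor pipeline would need a far more robust detector, and the file names here are hypothetical.

    import cv2

    # OpenCV's bundled HOG-based pedestrian detector (a stand-in for whatever
    # detector a production pipeline would actually use)
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    video_in = cv2.VideoCapture("park_flight_raw.mp4")  # hypothetical raw footage
    fps = video_in.get(cv2.CAP_PROP_FPS)
    w = int(video_in.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(video_in.get(cv2.CAP_PROP_FRAME_HEIGHT))
    video_out = cv2.VideoWriter("park_flight_redacted.mp4",
                                cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

    while True:
        ok, frame = video_in.read()
        if not ok:
            break
        # Detect people in the frame and blur each detected region before storage
        rects, _ = hog.detectMultiScale(frame)
        for (x, y, rw, rh) in rects:
            frame[y:y+rh, x:x+rw] = cv2.GaussianBlur(frame[y:y+rh, x:x+rw], (51, 51), 0)
        video_out.write(frame)

    video_in.release()
    video_out.release()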


Sharing: Vendors, Open Data, & Onward Transfer

  • Who outside the recreation and parks department will have access to any of the data? Are there outside vendors who will manage the video processing? Are there other agencies that would want access to that data? Should the raw video data even be shared with other agencies? Which ones? Under what conditions?
  • What happens if the drone video data is requested by members of the public via municipal FOIA-analogue requests? What sorts of data will be released via the city’s open data portal? In each case, how can the privacy of city employees and park-goers be protected?


Assessing Stakeholder Interests

We’ve got a good list of potential issues to start considering, but in the interest of demonstrating the process as a whole and not getting lost in the details, we’re going to narrow the discussion to just one facet: the unintentional collection of municipal employee data.

The Parks Dept. begins by assembling both internal municipal stakeholders and external stakeholders—such as industry stakeholders, interdisciplinary academics, and public policy experts—and then proceeds to iterate through a simple privacy impact assessment.

Data Minimization for Specified Purposes

Stakeholder: Parks Dept. Drone Project Lead

After assembling the stakeholder group, the Parks Dept. drone project manager outlines the use case above, adding the following relevant details:

During the twice-daily drone flights at a specific park, two municipal employees are working in the park. One employee is clearing brush and debris from heavy seasonal winds. Another is pruning the vegetation around walking paths. The drone collects video focused on the health and structural integrity of trees as well as the proximity of any overhanging branches to walking paths.

The Parks Dept. then defers to the privacy and data subject matter experts to highlight the potential legal and policy issues at stake.

Stakeholder: Privacy & Data Expert, Legal Academic or Civil Society

Privacy best practices usually dictate that data collected, processed, or stored be limited to that which is necessary for the specified purpose. Here, the Parks Dept.’s purpose is to detect changes in park features and vegetation that will allow the Parks Dept. to better maintain the park. The drone flight video and associated data will focus on the trees, foliage, and plant debris. Unfortunately, this video data will also unintentionally capture, on occasion, the two Parks Dept. workers. Perhaps there’s a way to limit the collection of video data or secondary data on the Parks Dept. employees?

Stakeholder: Outsourced Video Processing Vendor

At this point, the external vendor that handles the processing of the video data helpfully chimes in. The vendor can create a machine learning method that will recognize human faces and bodies and effectively blur them out of both the subsequently stored video and the data analytics report produced. Problem solved, the vendor says.

Stakeholder: Privacy & Data Expert, Engineering & Public Policy Academic

The privacy academic pipes up. That might not solve the problem, the academic says. Even if the images are blurred, because there are likely only a limited number of employees who would be performing a given task at a given date, time, and location, it might be easy to cross-reference the blurred images with other data and identify the Parks Dept. gardener. Even going beyond blurring and producing full redactions within the video data might be insufficient. It would be safer to simply discard those portions of video data entirely and rely on the data reports.

Stakeholder: Parks Dept. Management

One manager within the Parks Dept. speaks up. Why do we even care? If we have Parks Dept. employees in the video data, that’s not so bad. We can monitor them while they work, to see how hard they’re really working.

Another manager responds. That wasn’t an approved purpose for the drone flights. Plus we already have performance metrics that help assess employee productivity.

Stakeholder: Union of Laborers Local 711

The representative from the Union of Laborers Local 711, to which the two municipal workers belong, adds that there are pre-existing, agreed-upon policies around the privacy of their union members, especially since the group hasn’t yet determined how this data might be made available via the city’s open data portal or via municipal FOIA-analogue requests. While the union understands that drone video might unintentionally capture union members, it appreciates best efforts to cleanse and disregard that information.


Notice, Consent, & Choice

The team comes to a consensus that Parks Dept. employees may be unintentionally captured on drone video footage, but will not be factored into the post-processed data summary reports. Additionally, the raw footage will include video redactions and will be retained for a shorter period of time than the data summary reports.

The team meeting goes on to determine how to provide and present notice and choice options to the Parks Dept. workers.

Stakeholder: City Attorney

The city attorney happily reports that he can easily write notification language into the Parks Dept. employee contracts. Will that be enough for meaningful notice? And will there be any choice for Parks Dept. workers?

Stakeholder: Privacy & Data Expert, Academic or Civil Society

The privacy expert addresses the group. That may depend on the varying privacy laws in a particular state or country, but it’d be much better if additional notice were given. For example, the flights could be limited in number and scheduled, with updates accessible via the city’s mobile application for employees.

Stakeholder: Union of Laborers Local 711

The representative from the Union of Laborers Local 711 adds that a simplified, graphic drone flight notice should also be posted as a supplement to the physical Board of State and Federal Employee Notices in the Parks Dept. staff lounge.


Data-Driven “Pan Out”

As the camera pans out from our imagined privacy policy-by-design meeting, the privacy and policy expert from civil society suggests that the general policy framework around municipal drone use should start with broad privacy safeguards, evolving from that starting point only once additional data is gathered, both from actual municipal drone use and from stabilizing societal norms.

Takeaways

The creation of a robust privacy policy-by-design framework for municipal drone use is indeed a challenging endeavor. Understanding the privacy interests of the many impacted stakeholders is a critical starting point. Policymakers should also encourage meta-policies that allow the collection of data about the implemented policy itself. Our goal is to develop frameworks that enable law and policy to evolve in lockstep with emerging technologies, so that society can innovate and thrive without compromising its normative values. Here, that means the creation of innovative, positive-sum solutions that safeguard privacy while enabling modern drone use in and by cities.

If you are one of the interested stakeholder groups above or are otherwise interested in participating in our roundtables or research, please let us know at drones@startuppolicylab.org.

by charles at March 29, 2016 08:00 AM

March 28, 2016

Ph.D. student

Trace Ethnography: A Retrospective

This is a cross-post of a post I wrote for Ethnography Matters, in their “The Person in the (Big) Data” series

When I was an M.A. student back in 2009, I was trying to explain various things about how Wikipedia worked to my then-advisor David Ribes. I had been ethnographically studying the cultures of collaboration in the encyclopedia project, and I had gotten to the point where I could look through the metadata documenting changes to Wikipedia and know quite a bit about the context of whatever activity was taking place. I was able to do this because Wikipedians do this: they leave publicly accessible trace data in particular ways, in order to make their actions and intentions visible to other Wikipedians. However, this was practically illegible to David, who had not done this kind of participant-observation in Wikipedia and had therefore not gained this kind of socio-technical competency.

For example, if I added “{{db-a7}}” to the top of an article, a big red notice would be automatically added to the page, saying that the page has been nominated for “speedy deletion.” Tagging the article in this way would also put it into various information flows where Wikipedia administrators would review it. If any of Wikipedia’s administrators agreed that the article met speedy deletion criteria A7, then they would be empowered to unilaterally delete it without further discussion. If I was not the article’s creator, I could remove the {{db-a7}} trace from the article to take it out of the speedy deletion process, which means the person who nominated it for deletion would have to go through the standard deletion process. However, if I was the article’s creator, it would not be proper for me to remove that tag — and if I did, others would find out and put it back. If someone added the “{{db-a7}}” trace to an article I created, I could add “{{hangon}}” below it in order to inhibit this process a bit — although a hangon is just a request, it does not prevent an administrator from deleting the article.
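
Because these traces are plain text in the wikitext, they are also machine-readable, which is part of what lets this coordination work at scale. As a toy illustration (my own sketch, not a tool Wikipedians actually use), decoding the two traces described above might look like this in Python:

    import re

    def deletion_traces(wikitext):
        """Report the speedy-deletion state legible from an article's wikitext."""
        return {
            # {{db-a7}}: nominated for speedy deletion under criterion A7
            "nominated_a7": bool(re.search(r"\{\{db-a7\}\}", wikitext, re.I)),
            # {{hangon}}: a request to pause the speedy deletion process
            "hangon_requested": bool(re.search(r"\{\{hangon\}\}", wikitext, re.I)),
        }

    print(deletion_traces("{{db-a7}}\n{{hangon}}\nArticle text..."))
    # {'nominated_a7': True, 'hangon_requested': True}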


Wikipedians at an in-person edit-a-thon (the Women’s History Month edit-a-thon in 2012). However, most of the time, Wikipedians don’t get to do their work sitting right next to each other, which is why they rely extensively on trace data to coordinate and render their activities accountable to each other. Photo by Matthew Roth, CC-BY-SA 3.0

I knew all of this both because Wikipedians told me and because this was something I experienced again and again as a participant observer. Wikipedians had documented this documentary practice in many different places on Wikipedia’s meta pages. I had first-hand experience with these trace data, first on the receiving end with one of my own articles. Then later, I became someone who nominated others’ articles for deletion. When I was learning how to participate in the project as a Wikipedian (which I now consider myself to be), I started to use these kinds of trace data practices and conventions to signify my own actions and intentions to others. This made things far easier for me as a Wikipedian, in the same way that learning my university’s arcane budgeting and human resource codes helps me navigate that bureaucracy far more easily.

This “trace ethnography” emerged out of a realization that people in mediated communities and organizations increasingly rely on these kinds of techniques to render their own activities and intentions legible to each other. I should note that this was not my and David’s original insight — it is one that can be found across the fields of history, communication studies, micro-sociology, ethnomethodology, organizational studies, science and technology studies, computer-supported cooperative work, and more. As we say in the paper, we merely “assemble their various solutions” to the problem of how to qualitatively study interaction at scale and at a distance. There are jargons, conventions, and grammars learned as a condition of membership in any group, and people learn how to interact with others by learning these techniques.

The affordances of mediated platforms are increasingly being used by participants themselves to manage collaboration and context at massive scales and asynchronous latencies. Part of the trace ethnography approach involves coming to understand why these kinds of systems were developed in the way that they were. For me and Wikipedia’s deletion process, it went from being strange and obtuse to something that I expected and anticipated. I got frustrated when newcomers didn’t have the proper literacy to communicate their intentions in a way that I and other Wikipedians would understand. I am now at the point where I can even morally defend this trace-based process as Wikipedians do. I can list reason after reason why this particular process ought to unfold in the way that it does, independent of my own views on this process. I understand the values that are embedded in and assumed by this process, and they cohere with other values I have found among Wikipedians. And I’ve also met Wikipedians who are massive critics of this process and think that we should be using a far different way to deal with inappropriate articles. I’ve even helped redesign it a bit.

Trace ethnography is based in the realization that these practices around metadata are learned literacies and constitute a crucial part of what it means to participate in many communities and organizations. It turns our attention to an ethnographic understanding of these practices as they make sense for the people who rely on them. In this approach, reading through log data can be seen as a form of participation, not just observation — if and only if this is how members themselves spend their time. However, it is crucial that this approach is distinguished from more passive forms of ethnography (such as “lurker ethnography”), as trace ethnography involves an ethnographer’s socialization into a group prior to the ability to decode and interpret trace data. If trace data is simply being automatically generated without it being integrated into people’s practices of participation, if people in a community don’t regularly rely on following traces in their everyday practices, then the “ethnography” label is likely not appropriate.

Among the many kinds of online communities and mediated organizations, Wikipedia’s deletion process might appear unusually arcane and out of the ordinary. However, modes of participation are increasingly linked to the encoding and decoding of trace data, whether in a global scientific collaboration, an open source software project, a guild of role-playing gamers, an activist network, a news organization, or a governmental agency. Computer programmers frequently rely on GitHub to collaborate, and they have their own ways of using things like issues, commit comments, and pull requests to interact with each other. Without being on GitHub, it’s hard for an ethnographer who studies software development to be a fully-immersed participant-observer, because they would be missing a substantial amount of activity — even if they are constantly in the same room as the programmers.

More about trace ethnography

If you want to read more about “trace ethnography,” we first used this term in “The Work of Sustaining Order in Wikipedia: The Banning of a Vandal,” which I co-authored with my then-advisor David Ribes in the proceedings of the CSCW 2010 conference. We then wrote a follow-up paper in the proceedings of HICSS 2011 to give a more general introduction to this method, in which we ‘inverted’ the CSCW 2010 paper, explaining more of the methods we used. We also held a workshop at the 2015 iConference with Amelia Acker and Matt Burton — the details of that workshop (and the collaborative notes) can be found at http://trace-ethnography.github.io.

Some examples of projects employing this method:

Ford, H. and Geiger, R.S. “Writing up rather than writing down: Becoming Wikipedia literate.” Proceedings of the Eighth Annual International Symposium on Wikis and Open Collaboration. ACM, 2012. http://www.stuartgeiger.com/writing-up-wikisym.pdf

Ribes, D., Jackson, S., Geiger, R.S., Burton, M., & Finholt, T. (2013). Artifacts that organize: Delegation in the distributed organization. Information and Organization, 23(1), 1-14. http://www.stuartgeiger.com/artifacts-that-organize.pdf

Mugar, G., Østerlund, C., Hassman, K. D., Crowston, K., & Jackson, C. B. (2014). Planet hunters and seafloor explorers: legitimate peripheral participation through practice proxies in online citizen science. In Proceedings of the 17th ACM conference on Computer supported cooperative work & social computing (pp. 109-119). ACM. http://dl.acm.org/citation.cfm?id=2531721

Howison, J., & Crowston, K. (2014). Collaboration Through Open Superposition: A Theory of the Open Source Way. MIS Quarterly, 38(1), 29-50. http://aisel.aisnet.org/cgi/viewcontent.cgi?article=3156&context=misq

Burton, M. (2015). Blogs as Infrastructure for Scholarly Communication. Doctoral Dissertation, University of Michigan. http://deepblue.lib.umich.edu/bitstream/handle/2027.42/111592/mcburton_1.pdf

by stuart at March 28, 2016 06:55 PM

March 27, 2016

MIMS 2012

Icons are the Acronyms of Design

In The Elements of Style, the seminal writing and grammar book by Strunk and White, the authors have a style rule that states, “Do not take shortcuts at the cost of clarity.” This rule advises writers to spell out acronyms in full unless they’re readily understood. For example, not everyone knows that MADD is Mothers Against Drunk Driving.

Acronyms come at the cost of clarity. “Many shortcuts are self-defeating,” the authors say, “they waste the reader’s time instead of conserving it.”

Icons are the acronyms of design. Designers often rely on them to communicate what an action or object does, instead of simply stating what the action or object is. Unless you’re using universally recognized icons (which are rare), you’re more likely to harm the usability of an interface than to help it.

Do you know what the icons on the left mean?

So as Strunk and White advise, don’t take shortcuts at the cost of clarity. “The longest way round is usually the shortest way home.”

by Jeff Zych at March 27, 2016 11:21 PM

March 22, 2016

Center for Technology, Society & Policy

Privacy for Citizen Drones: Privacy Policy-By-Design

By Timothy Yim, CTSP Fellow and Director of Data & Privacy at Startup Policy Lab | Permalink

Startup Policy Lab is leading a multi-disciplinary initiative to create a model policy and framework for municipal drone use.

Towards A More Reasoned Approach

Significant policy questions have arisen from the nascent but rapidly increasing adoption of drones in society today. The developing drone ecosystem is a prime example of how law and policy must evolve with and respond to emerging technology, in order for society to thrive while still preserving its normative values.

Privacy has quickly become a vital issue in the debate over acceptable drone use by government municipalities. In some instances, privacy concerns over the increased potential for government surveillance have even led to wholesale bans on the use of drones by municipalities.

Let me be clear. This is a misguided approach.

Without a doubt, emerging drone technology is rapidly increasing the potential ability of government to engage in surveillance, both intentionally and unintentionally, and therefore to intrude on the privacy of its citizenry. And likewise, it’s also absolutely true that applying traditional privacy principles—such as notice, consent, and choice—has proven incredibly challenging in the drone space. For the record, these are legitimate and serious concerns.

Yet even under exceptionally strong constructions of modern privacy rights, including those enhanced protections afforded under state constitutions such as California’s, an indiscriminate municipal drone ban makes little long-term sense. A wholesale ban cuts off municipal modernization and the many potential benefits of municipal drone use—for instance, decreased costs and increased frequency of monitoring for the maintenance of public parks, docks, and bridges.

What a wholesale ban, or for that matter a blanket whitelisting, does accomplish is avoiding the admittedly difficult task of creating a policy framework that enables appropriate municipal drone use while preserving privacy. But these are questions that need to be considered in order to move beyond the false dichotomy between privacy and municipal drone usage. In short, safeguarding privacy and enabling municipal innovation via new drone applications need not be mutually exclusive.

Privacy Policy-By-Design

Our privacy policy-by-design approach considers and integrates privacy principles—such as data minimization, retention, and onward transfer limits—early in the development of drone law and policy. Doing so will enable, much like privacy-by-design theory in engineering contexts, the creation of positive-sum policy solutions.

Critical to a privacy policy-by-design approach are (1) identifying potential stakeholders, both core and ancillary, and (2) understanding how their particular interests play out.

By identifying a broad array of stakeholders—including invested municipal agencies, interdisciplinary academia, industry, and civil society organizations—we hope to better understand how municipal drone use will impact the privacy interests of each stakeholder group. Here, privacy subject matter experts from interdisciplinary academia—law, public policy, and information studies—are critical to facilitate identification of potential issues, both to represent the public at large and to assist other stakeholder groups, which might not otherwise have the necessary expertise to fully assess their interests.

Oftentimes, this approach will benefit from convening key stakeholders in a face-to-face roundtable setting, especially those in other municipal departments and in groups outside municipal government altogether. A series of such tabletop roundtables, organized around likely use cases, provides an opportunity for stakeholder groups to identify general privacy concerns as well as facilitate early development of creative and nuanced solutions between parties.

Once municipal departments gain a comprehensive understanding of general stakeholder concerns, they can extrapolate those concerns for application in additional use cases and situations. City governments do not have the time or resources to convene roundtables for the entire range of potential drone applications. Nonetheless, takeaways from the initial set of use cases can provide invaluable insight into the potential privacy concerns of external stakeholders—helping avoid otherwise likely conflict in the future.

Understanding the multitude of privacy interests by different stakeholders is key to the creation of innovative, positive-sum solutions that safeguard privacy while enabling modern drone use in and by cities. The following table represents a theoretical, high-level mapping of stakeholder concerns in the municipal drone space.

drone privacy stakeholder concerns

Evolving Data-Driven Policy

Finally, it’s important to realize that a privacy policy-by-design approach should not be pursued in isolation. A growing fraction of recently proposed or enacted legislation has authorized the ancillary collection of relevant data around the new legislation itself—creating opportunities in the future to further evolve policy via real-world usage. So too, we propose that appropriate data collection modules be added to municipal drone use processes to confirm that established policies are creating the proper incentives and disincentives.

Our overarching goal is to develop a framework that enables law and policy to evolve in lockstep with emerging technologies, so that society can innovate and thrive without compromising on its normative values.

If you are one of the interested stakeholder groups above or are otherwise interested in participating in our roundtables or research, please let us know at drones@startuppolicylab.org.

by charles at March 22, 2016 08:00 AM

March 21, 2016

MIMS 2011

What I’m talking about in 2016

Authority and authoritative sources, critical data studies, digital methods, the travel of facts online, bot politics, and social media and politics. These are some of the things I’m talking about in the first six months of 2016. (Just in case you thought the #sunselfies only indicated fun and aimless loafing.)

15 January. Fact factories: How Wikipedia’s logics determine what facts are represented online. Wikipedia 15th birthday event, Oxford Internet Institute. [Webcast, OII event page, OII’s Medium post]

29 January. Wikipedia and me: A story in four acts. TEDx Leeds University. [Video, TEDx Leeds University site]

Abstract: This is a story about how I came to be involved in Wikipedia and how I became a critic. It’s a story about hope and friendship and failure, and what to do afterwards. In many ways this story represents the relationship that many others like me have had with the Internet: a story about enormous hope and enthusiasm followed by disappointment and despair. Although similar, the uniqueness of these stories is in the final act – the act where I tell you what I now think about the future of the Internet after my initial despair. This is my Internet love story in four acts: 1) Seeing the light 2) California rulz 3) Doubting Thomas 4) Critics unite. 

17 February. Add data to methods and stir. Digital Methods Summer School. CCI, Queensland University of Technology, Brisbane [QUT Digital Methods Summer School website]

Abstract: Are engagements with real humans necessary to ethnographic research? In this presentation, I argue for methods that connect data traces to the individuals who produce them by exploring examples of experimental methods featured on the site ‘EthnographyMatters.net’, such as live fieldnoting, collaborative mapmaking and ‘sensory postcards’.  This presentation will serve as an inspiration for new work that expands beyond disciplinary and methodological boundaries and connects the stories we tell about our things with the humans who create them.  

10 March. Situating Innovations in Digital Measures. University of Leeds, Leeds Critical Data Studies Inaugural Event.  

Abstract: Drawn from case studies that were presented at the recent Digital Methods Summer School (Digital Media Research Centre, Queensland University of Technology) in Brisbane, Australia last month, as well as from experimental methods contributed to by authors of the Ethnography Matters community, this seminar will present a host of inspiring methodological tools that researchers of digital culture and politics are using to explore questions about the role of digital technologies in modern life. Instead of data-centric models and methodologies, the seminar focuses on human-centric models that also engage with the opportunities afforded by digital technologies. 

21-22 April. Ode to the infobox. Streams of Consciousness: Data, Cognition and Intelligent Devices Conference. University of Warwick.

Abstract: Also called a ‘fact box’, the infobox is a graphic design element that highlights summarised statements or facts about the world contained within it. Infoboxes are important structural elements in the design of digital information. They usually hang in the right-hand corner of a webpage, calling out to us that the information contained within them is special and somehow apart from the rest. The infobox satisfies our rapid information-seeking needs. We’ve been trained to look to the box to discover, not just another set of informational options, but an authoritative statement of seemingly condensed consensus emerging out of the miasma of data about the world around us.

When you start to look for them, you’ll see infoboxes wherever you look. On Google, these boxes contain results from Google’s Knowledge Graph; on Wikipedia they are contained within articles and host summary statistics and categories; and on the BBC, infoboxes highlight particular facts and figures about the stories that flow around them.

The facts represented in the infoboxes are no longer as static as the infoboxes of old. Now they are the result of algorithmic processes that churn through thousands, sometimes millions, of data points according to rulesets that produce relatively unique encounters for each new user.

In this paper, I trace the multitude of instructions and sources, institutions and people that constitute the assemblage that results in different facts for different groups at different times. Investigating infoboxes on Wikipedia and Google through intermediaries such as Wikidata, I build a portrait of the pipes, processes and people that feed these living, dynamic frames. The infobox, humble as it seems, turns out to be a powerful force in today’s deeply connected information ecosystem. By celebrating the infobox, I hope to reveal its hidden power – a power with consequences far beyond the efficiency that it promises.

29 April. How facts travel in the digital age. Social Media Lab Guest Speaker Series, Ryerson University, Social Media Lab, Toronto, Canada. [Speaker series website]

Abstract: How do facts travel through online systems? How is it that some facts gather steam and gain new adherents while others languish in isolated sites? This research investigates the travel of two sets of facts through Wikipedia’s networks and onto search engines like Google. The first: facts relating to the 2011 Egyptian Revolution; the second: facts relating to “surr”, a sport played by men in the villages of Northern India. While the Egyptian Revolution became known to millions across the world as events were reported on multiple Wikipedia language versions in early 2011, the facts relating to surr faced enormous challenges as its companions attempted to propel it through Wikipedia’s infrastructure. Following the facts as they travelled through Wikipedia gives us an insight into the source of systemic biases of Internet infrastructures and the ways in which political actors are changing their strategies in order to control narratives around political events. 

8 June. Politicians, Journalists, Wikipedians and their Twitter bots. Algorithms, Automation and Politics. (Heather Ford, Elizabeth Dubois, Cornelius Puschmann) ICA Pre-Conference, Fukuoka, Japan. [Event website]

Abstract selection: Recent research suggests that automated agents deployed on social media platforms, particularly Twitter, have become a feature of the modern political communication environment (Samuel, 2015, Forelle et al, 2015, Milan, 2015). Haustein et al (2016) cite a range of studies that put the percentage of bots among all Twitter accounts at 10-16% (p. 233). Governments have been shown to employ social media experts to spread pro-governmental messages (Baker, 2015, Chen 2015), political parties pay marketing companies to create or manipulate trending topics (Forelle et al, 2015), and politicians and their staff use bots to augment the number of account followers in order to provide an illusion of popularity to their accounts (Forelle et al, 2015). The assumption in these analyses is that bots have a direct influence on public opinion and that they can act as credible and competent sources of information (Edwards et al, 2014). There is still, however, little empirical evidence of the link between bots and political discourse, the material consequences of such changes or how social groups are reacting. [continued] 

11 June. Wikipedia: Moving Between the Whole and its Traces. In ‘Drowning in Data: Industry and Academic Approaches to Mixed Methods in “Holistic” Big Data Studies’ panel. International Communication Association Conference. Fukuoka, Japan. [ICA website]

Abstract: In this paper, I outline my experiences as an ethnographer working with data scientists to explore various questions surrounding the dynamics of Wikipedia sources and citations. In particular, I focus on the moments at which we were able to bring the small and the large into conversation with one another, and moments when we looked, wide-eyed at one another, unable to articulate what had gone wrong. Inspired by Latour’s (2010) reading of Gabriel Tarde, I argue that a useful analogy for conducting mixed methods for studies about which large datasets and holistic tools are available is the process of life drawing – a process of moving up close to the easel and standing back (or to the side) as the artist looks at both their subject and the canvas in a continual motion.

Wikipedia’s citation traces can be analysed in their aggregate – piled up, one on top of the other to indicate the emergence of new patterns, new vocabulary, new authorities of knowledge in the digital information environment. But citation traces take a particular shape and form, and without an understanding of the behaviour that lies behind such traces, the tendency is to count what is available to us, rather than to think more critically about the larger questions that Wikipedia citations help to answer.

I outline a successful conversation which happened when we took a large snapshot of 67 million source postings from about 3.5 million Wikipedia articles and attempted to begin classifying the citations according to existing frameworks (Ford 2014). In response, I conducted a series of interviews with editors by visualising their citation traces and asking them questions about the decision-making and social interaction that lay behind such performances (Dubois and Ford 2015). I also reflect on a less successful moment when we attempted to discover patterns in the dataset on the basis of findings from my ethnographic research into the political behaviour of editors. Like the artist who had gotten their proportions wrong when scaling up the image on the canvas, we needed to re-orient ourselves and remember what we were trying to ultimately discover.

13 June. The rise of expert amateurs in the realm of knowledge production: The case of Wikipedia’s newsworkers. In ‘Dialogues in Journalism Studies: The New Gatekeepers’ panel. International Communication Association Conference. Fukuoka, Japan. [ICA website]

Abstract: Wikipedia has become an authoritative source about breaking news stories as they happen in many parts of the world. Although anyone can technically edit a Wikipedia article, recent evidence suggests that some have significantly more power than others when it comes to being able to have edits sustained over time. In this paper, I suggest that the theory of co-production, elaborated upon by Sheila Jasanoff, is a useful way of framing how, rather than a removal of the gatekeepers of the past, Wikipedia demonstrates two key trends. The first is the rise of a new set of gatekeepers in the form of experienced Wikipedians who are able to deploy coded objects effectively in order to stabilize or destabilize an article, and the second is a reconfiguration in the power of traditional sources of news and information in the choices that Wikipedia editors make when writing about breaking news events.

by Heather Ford at March 21, 2016 10:24 PM

March 15, 2016

Center for Technology, Society & Policy

The Neighbors are Watching: From Offline to Online Community Policing in Oakland, California

By Fan Mai & Rebecca Jablonsky, CTSP Fellows | Permalink

As one of the oldest and most popular community crime prevention programs in the United States, Neighborhood Watch is supposed to promote and facilitate community involvement by bringing citizens together with law enforcement in resolving local crime and policing issues. However, a review of Neighborhood Watch programs finds that nearly half of all properly evaluated programs have been unsuccessful. The fatal shooting of Trayvon Martin by George Zimmerman, an appointed neighborhood watch coordinator at that time, has brought the conduct of Neighborhood Watch under further scrutiny.

Founded in 2010, Nextdoor is an online social networking site that connects residents of a specific neighborhood together. Unlike other social media, Nextdoor maintains a one-to-one mapping of real-world community to virtual community, nationwide. Positioning itself as the platform for “virtual neighborhood watch,” Nextdoor not only encourages users to post and share “suspicious activities,” but also invites local police departments to post and monitor the “share with police” posts. Since its establishment, more than 1000 law enforcement agencies have partnered with the app, including the Oakland Police Department. Although Nextdoor has helped the local police to solve crimes, it has also been criticized for giving voice to racial biases, especially in Oakland, California.

Activists have been particularly vocal in Oakland, California—a location that is historically known for diversity and black culture, but is currently a site where racial issues and gentrification are contested public topics. The Neighbors for Racial Justice, a local activist group started by residents of Oakland, has been particularly active in educating people about unconscious racial bias and working with the Oakland City Council to request specific changes to the crime and safety form that Nextdoor users fill out when posting to the site.

Despite the public attention and efforts made by activist groups to address the issue of racial biases, controversies remain in terms of who should be held responsible and how to avoid racial profiling without stifling civic engagement in crime prevention. With its rapid expansion across the United States, Nextdoor is facing many challenges, especially on the issues of moderation and regulation of user-generated content.

Racial profiling might just be the tip of the iceberg. Using a hyper-local social network like Nextdoor can bring up pressing issues related to community, identity, and surveillance. Neighborhoods have their own history and dynamics, but Nextdoor provides identical features to every neighborhood across the entirety of the U.S. Will this “one size fits all” approach work as Nextdoor expands its user base? As a private company that is involved in public issues like crime and policing, what kind of social responsibility should Nextdoor have to its users? How does the composition of neighborhoods affect online interactions within Nextdoor communities? Is the Nextdoor neighborhood user base an accurate representation of the actual community?

Researching Nextdoor

As researchers, we seek to contribute to the conversation by conducting empirical research with Nextdoor users in three Oakland neighborhoods: one that is predominantly white, one that is predominantly non-white, and one that is ethnically diverse. We hope to elucidate the ways that racial composition of a neighborhood influences the experience of using a community-based social network such as Nextdoor.

Neighborhood 1

For example, here is the demographic breakdown of one Oakland neighborhood, which we will call Neighborhood 1. As you can see, this area might be considered fairly diverse: many different races are represented, and there isn’t one race that is dominant in the population. It has a median household income of $52,639 and is predominantly non-white, with over half of residents identifying as Black or Asian.

Graphs included in this post were accessed from City-Data.com. Zip codes have been removed to protect neighborhood privacy. These are example neighborhoods, and are not neighborhoods that we are researching.

Now, take a look at the neighborhood that directly borders the previous one, which we will call Neighborhood 2. It has a median household income of $94,276 and is nearly 75% white.

Neighborhood 2 statistics

Although these micro-neighborhoods directly border each other, they might normally function as separate entities entirely. Residents might walk down different streets, shop in different stores, and remain generally unaware of each other’s existence. Racial segregation is fairly typical of urban environments in the United States, where people of different racial backgrounds are often segregated into pockets of a city that is otherwise considered to be “diverse”—meaning fewer families actually live in mixed-income neighborhoods, and are therefore less likely to be exposed to people who are different from themselves.

This segregation can be disrupted when a person joins a social networking website like Nextdoor.com. Since early 2013, Nextdoor users not only receive information generated by all people in their neighborhood, but can also see and respond to posts in the Crime and Safety section of several “nearby neighborhoods.” Pushing out the neighborhood boundaries this way amplifies the potential for users to participate in more heterogeneous communities, but at the same time it may heighten anxieties around trust and increase the chance of conflict within the larger virtual communities.

Contrary to popular belief, researchers have found that the use of digital technologies is associated with a higher level of engagement in public and semi-public spaces, such as parks and community centers. Social network sites can be considered “networked publics” that may help people connect for social, cultural, and civic purposes. But on the other hand, they can also be used as tools for gentrification that divide communities through surveillance and profiling.

What can we do, as researchers and citizens, to address the complexities of online policing in the use of social networking sites?

by Nick Doty at March 15, 2016 07:04 PM

March 14, 2016

MIMS 2012

Designing Backwards

Action is the bedrock of drama. Action drives the play forward and makes for a compelling story that propels the audience to the end. And an engaging play is just a series of connected actions, according to David Ball in Backwards & Forwards.

Like a play, the user’s journey through your product is also a series of connected actions. Every click, tap, and swipe is an action users take. But unlike the audience of a play, which is just along for the ride, your users are in the driver’s seat trying to reach a specific goal. If you, as the designer, don’t make the series of actions to reach that goal clear, your users will get lost and your product will fail.

To help authors write engaging plays, David Ball recommends starting at the end of the play and identifying each preceding action, all the way back to the beginning. By looking backwards, you can see the specific steps that led to a particular outcome. “The present demands and reveals a specific past. One particular, identifiable event lies immediately before any other,” he says.

Looking forward, on the other hand, presents unlimited possibilities. The outcome of an action can trigger any number of other actions. You can only know which specific action comes next by looking backwards from the end.

This technique applies just as well to designing user experiences as it does to writing plays. Start by identifying the user’s goal, then walk backwards through each action they must take to get there.

An example makes this clearer. Before we launched native mobile A/B testing at Optimizely, my colleague Silvia and I re-designed the onboarding flow using this technique. (Silvia wrote about the onboarding flow on Medium.)

We identified the user’s goal as creating their first A/B test. We arrived there by understanding the problem that A/B testing solves for our customers, which is to improve their app and ultimately make their business more successful.

If we had started at the beginning and worked our way forward, it would have been easy to stop once they installed our mobile SDK. But installing an SDK isn’t the customer’s goal. There’s no inherent value in that – it’s just a stepping stone to getting value out of our product.

Then we walked backwards through each step a user must take to reach that goal:

  • Goal: create your first A/B test.
  • To create an A/B test, you must install the SDK.
  • To install the SDK, you need to download it and add an account-specific identifier to your app.
  • To download and set up the SDK, you need an account and a project ID.
  • To create an account and a project, you must sign up by entering your info (name, email, billing info, etc.) in a form on our website.

Just by writing out each step like this, we eliminated excess steps and didn’t get distracted by edge cases or side flows. We had a focused list of tasks to design for. And at the conclusion of each task, we knew the next task to lead the user to.

Using this series of steps as a skeleton, we were able to design an onboarding flow that seamlessly led users to their goal. The experience has been praised by customers, and none of them have needed any help from our support team to create their first test.

So next time you’re designing a complex flow, start with the user’s goal and work your way backwards through each action they must take to get there. This technique will put you in an empathetic mindset that will result in user experiences that are clear and focused.

“Of such adjacent links is life — and drama — made up,” says David Ball. And so is product design.

by Jeff Zych at March 14, 2016 02:19 AM

March 07, 2016

MIMS 2016

TweetDay: a better visualization for your Twitter timeline

On Twitter, individuals and outlets frequently use the acronym ICYMI (In Case You Missed It) to bring links to the attention of others who…

by Andrew Huang at March 07, 2016 05:45 AM

March 03, 2016

Center for Technology, Society & Policy

Design Wars: The FBI, Apple and hundreds of millions of phones

By Deirdre K. Mulligan and Nick Doty, UC Berkeley, School of Information | Permalink | Also posted to the Berkeley Blog

After forum- and fact-shopping and charting a course via the closed processes of district courts, the FBI has homed in on the case of the San Bernardino terrorist who killed 14 people, injured 22, and left an encrypted iPhone behind. The agency hopes the highly emotional and political nature of the case will provide a winning formula for establishing a legal precedent to compel electronic device manufacturers to help police by breaking into devices they’ve sold to the public.

The phone’s owner (the San Bernardino County Health Department) has given the government permission to break into the phone; the communications and information at issue belong to a deceased mass murderer; the assistance required, while substantial by Apple’s estimate, is not oppressive; the hack being requested is a software downgrade that enables a brute force attack on the crypto — an attack on the implementation rather than directly disabling encryption altogether; and the act under investigation is heinous.
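
The technical crux is worth spelling out: the requested downgrade would remove the retry limits and escalating delays that normally protect a short numeric passcode, at which point the passcode falls to exhaustive search. Here is a back-of-the-envelope sketch in Python; the 80 milliseconds per guess is an often-cited estimate of the iPhone’s hardware-bound key-derivation time, used here only as an assumption:

    # Rough arithmetic on why disabling retry limits matters. The 0.08s
    # per-guess figure is an assumed estimate, not a measured value.
    GUESS_TIME_S = 0.08

    for digits in (4, 6):
        keyspace = 10 ** digits  # all possible numeric passcodes
        hours = keyspace * GUESS_TIME_S / 3600
        print(f"{digits}-digit PIN: {keyspace:,} guesses, worst case ~{hours:.1f} hours")

    # 4 digits: 10,000 guesses, well under an hour of machine time;
    # 6 digits: 1,000,000 guesses, roughly a day.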

But let’s not lose sight of the extraordinary nature of the power the government is asking the court to confer.

Over the last 25 years, Congress developed a detailed statutory framework to address law enforcement access to electronic communications, and the technical design and assistance obligations of service providers who carry and store them for the public. That framework has sought to maintain law enforcement’s ability to access evidence, detailed a limited set of responsibilities for various service providers, and filled gaps in privacy protection left by the U.S. Supreme Court’s interpretation of the Fourth Amendment.

This structure, comprised of the 1986 Electronic Communications Privacy Act and the 1994 Communications Assistance for Law Enforcement Act, should limit the FBI’s use of the All Writs Act to force Apple to write a special software downgrade to facilitate a brute-force attack on the phone’s encryption and access the phone’s contents.

As we argue in a brief filed with the court today, the FBI’s effort to require Apple to develop a breach of iPhone security in the San Bernardino case is an end run around the legislative branch. While the FBI attempts to ensure that law enforcement needs for data trump other compelling social values including cybersecurity, privacy, and innovation, legislators and engineers pursue alternative outcomes.

A legal primer

The Communications Assistance for Law Enforcement Act, passed in 1994, essentially requires telecommunications carriers to make their networks wire-tappable, ensuring law enforcement can intercept communications when authorized by law. Importantly, CALEA’s design and related assistance requirements apply only to telecommunications common carriers and prohibit the government from dictating design; alternative versions of the law that would have extended these requirements to service providers such as Apple were debated and rejected by Congress.

The second statute of interest is the 1986 Electronic Communications Privacy Act. ECPA governs the conditions and process for law enforcement access to stored records such as subscriber information, transactional data, and communications from electronic communication service providers and remote communication service providers.

Apple has assisted the government in obtaining records related to the San Bernardino iPhone stored on Apple’s servers. That is the extent of Apple’s obligation. ECPA does not require service providers like Apple to help government get access to information on devices or equipment owned by an individual, regardless of whether they sold the device to that individual.

A ruling that the All Writs Act can be used to force Apple to retroactively redesign an iPhone it sold to ensure FBI access to data an individual chose to encrypt would inappropriately upend a carefully constructed policy designed to address privacy, law enforcement, and other values.

If the AWA is read to give a court authority to order this relief because the statute does not expressly prohibit it, it would allow law enforcement to bypass the public policy process on an issue of immense importance to citizens, technologists, human rights activists, regulators and industry.

Make no mistake, we are in the midst of what we call the Design Wars, and those wars are about policy priorities which ought to be established through full and open legislative debate.

Design Wars: The FBI Strikes Back

Design by Molly Mahar (UC Berkeley); background image from NASA.

Unlike an exception in a law that requires a standard to be met by someone in the right role (for example, law enforcement), and ideally a court process to invoke (a warrant or other order approved by a court), a vulnerability in a system lets in anyone who can find it – no standard, no process, no paper required: come one, come all. For these reasons, former government officials differ about whether the trade-off is worth it.

Former National Security Agency and CIA Director Michael Hayden has recognized that, on balance, America is “more secure with end-to-end unbreakable encryption.” This view is shared by former NSA Director Mike McConnell, former Department of Homeland Security Secretary Michael Chertoff and former U.S. Deputy Defense Secretary William Lynn who recently wrote, “the greater public good is a secure communications infrastructure protected by ubiquitous encryption at the device, server and enterprise level without building in means for government monitoring.”

This is a big public policy question with compelling benefits and risks on both sides. It’s a conversation that should occur in Congress. If the FBI can require product redesigns of their choosing through the All Writs Act, it risks subverting this process and sidestepping a public conversation about how to prioritize values – defensive security, access to evidence, privacy, etc. – in tech policy.

Technical complications

Much of the public debate has focused on how many phones will be affected by the order to design and deploy a modified and less secure version of the iOS operating system. The FBI claims interest in a single phone. Apple claims that the backdoor would endanger hundreds of millions of iPhone users.

“In the physical world, it would be the equivalent of a master key, capable of opening hundreds of millions of locks — from restaurants and banks to stores and homes. No reasonable person would find that acceptable.” — Apple

Legal precedent is certainly an important question; if the All Writs Act can compel Apple to design and deploy software in this case, then would Apple also have to do so for the other 13 devices covered by other federal court orders? Or the 175 devices of interest to the Manhattan District Attorney? Will it only require assistance where the government possesses the phone? Or can the All Writs Act be used to push malicious software updates to a device to proactively collect data? What should Apple’s response be when this case is cited by governments of other countries (including China) to compel disabling the PIN entry limits or other security features of an activist’s iPhone?

But the danger of a backdoor exists separately from legal precedents. What if the custom, insecure operating system were to fall into the wrong hands? Apple notes in their motion that it would be a highly-prized asset, sought by hackers, organized crime syndicates and repressive regimes around the world. Developing such software would endanger the security and privacy of all iPhone users in a way that couldn’t be fixed by any software update.

To the FBI’s credit, the conditions of the court order try to limit the risk of this dangerous software falling into the wrong hands: customizing the software to run only on the San Bernardino phone, and unlocking the phone on the Apple campus without releasing the custom insecure software to the FBI.

However, security practitioners more than anyone recognize the limits of well-intentioned methods such as these. They believe in defense in depth, as advocated by the National Security Agency. Rather than relying on a single protective wall or security measure, good security anticipates bugs and mitigates attacks by building security throughout all parts of a system.

Could the design of the custom-insecure operating system limit its applicability and danger if inadvertently released? Apple engineers would certainly try, and much of the expense of developing the software would be the extensive testing necessary to reduce those dangers. But no large scale piece of software is ever written bug-free.

And what is the likely response of rational companies faced with hundreds or thousands of requests to unlock secure devices they’ve sold to the public? Sure, Apple may be financially capable of creating boutique code to unlock every individual phone law enforcement wants access to, or at least many of them. But other companies may build in backdoors to accommodate law enforcement access with minimal impact on the business bottom line.

The result? An end run around a legislative process that has to date been unconvinced that backdoors are good national policy, and decreased security for all users.

What next?

Beyond the courtroom, Congress has jumped back into the fray with a hearing in the House Judiciary Committee: The Encryption Tightrope: Balancing Americans’ Security and Privacy.

But the discussion will also include software and hardware engineers. As technical designers see the discretion of law used (or abused) to access communications or undermine security, they will seek technical methods to enforce security in ways increasingly difficult to reverse or undermine.

To take a piece of recent history, revelations during 2013 of the NSA’s mass surveillance of online communications and sabotage of security standards led to organizational and architectural responses from the technical community. The Internet Engineering Task Force concluded that pervasive monitoring is an attack on privacy and one that must be mitigated in the development of the basic standards that define the Internet. A flurry of activity has led to increased encryption of online communications.

As we discussed with the LA Times last week, we expect to see more encryption in cloud services; using a design pattern of exclusively user-managed keys, service providers may build storage and processing services where they are unable to decrypt content for law enforcement and where hackers will be unable to review the data even after breaching a company’s security.
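
As a purely illustrative sketch of that pattern, here is what exclusively user-managed keys can look like in Python with the cryptography package; upload() is a hypothetical placeholder for a provider API, and the parameter choices are assumptions rather than any particular vendor’s design:

    # Client-side encryption under a user-managed key: the provider
    # stores only the salt and ciphertext, so it cannot decrypt content
    # for law enforcement, and a breach exposes nothing readable.
    import base64, os
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def key_from_passphrase(passphrase: bytes, salt: bytes) -> bytes:
        # Derive a 32-byte symmetric key from a passphrase only the user holds.
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                         salt=salt, iterations=400_000)
        return base64.urlsafe_b64encode(kdf.derive(passphrase))

    salt = os.urandom(16)
    f = Fernet(key_from_passphrase(b"user passphrase", salt))
    ciphertext = f.encrypt(b"document contents")
    # upload(salt, ciphertext)  # hypothetical: the server never sees the key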

Likewise, look for more work in academia and in industry on: reproducible builds, certificate transparency, homomorphic encryption, trusted platform modules, end-to-end encryption and other technical capabilities that allow for providing services with guarantees of privacy and security from tampering, whether by a hacker, a national intelligence agency or, via court order, the service provider itself.

The next battle in these Design Wars, even after the outcome of the Apple v. FBI case, will be whether the legal process tries to frustrate these technical efforts to provide enhanced security and privacy to people around the globe.


by Nick Doty at March 03, 2016 04:00 PM

February 28, 2016

MIMS 2016 Final Project

User Onboarding

User onboarding is a volatile stage in the journey of a user. Lots of strong opinions get formed during these first steps. A user is trusting you with their time, and the very first thing they’ll see and interact with is a series of screens, actions, and instructions that will set the tone for the rest of their experience. As we all know, first impressions are a crucial part of a user’s assessment of a product: that first meal at a certain restaurant, the first car you owned (Hyundai Elantra former owner right here!), your first camping trip, etc.

Information products have their own quirks and affordances. You might want to create an account for your user when they first open your app, and gather information to be able to tailor your service to them. In our case we need a bunch of info, some of it sensitive, such as religious dietary restrictions. We don’t know how users might react to some of these questions, and at the same time we need to maintain near-zero friction for the user at all times. Lots of variables.

Lucky for us we’ve done our homework (literally) and know a thing or two about the importance of these first steps. We also have some talented folks amongst our ranks who are pretty passionate about understanding user needs during this process and how we can build an appealing, respectful and useful experience. Work is happening!

We had a successful surveying session with schoolmates and strangers and have amassed tons of eye-opening insights and critiques. For instance, most people have very personal “mental flows” in which they dissect a menu and go about making choices on what to order. These flows are indicative of a person’s beliefs and priorities, and they shed light on how koAlacart should be structured. Stay tuned, more updates coming soon…


by nsoldiac at February 28, 2016 11:32 PM

February 25, 2016

Ph.D. alumna

What is the Value of a Bot?

Bots are tools, designed by people and organizations to automate processes and enable them to do something technically, socially, politically, or economically.

Most of the bots that I have built have been in the pursuit of laziness. I have built bots to sit on my server, check whether processes have died, and relaunch them, mostly to avoid trying to figure out why the process would die in the first place. I have also built bots under the guise of “art.” For example, I built a bot to crawl online communities and quantitatively assess the interactions.

I’ve also written some shoddy code, and my bots haven’t always worked as intended. While I never designed them to be malicious, a few poorly thought through keystrokes had unintended consequences. One rev of my process-checker bot missed the mark and kept launching new processes every 30 seconds until it brought the server down. And in some cases, it wasn’t the bot that was the problem, but my own stupid interpretation of the information I got back from the bot. For example, I got the great idea to link my social bot designed to assess the “temperature” of online communities up to a piece of hardware designed to produce heat. I didn’t think to cap my assessment of the communities and so when my bot stumbled upon a super vibrant space and offered back a quantitative measure intended to signal that the community was “hot,” another piece of my code interpreted this to mean: jack the temperature up the whole way. I was holding that hardware and burnt myself. Dumb. And totally, 100% my fault.
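
For concreteness, here is roughly the shape of such a process-checker bot; this is a minimal sketch with a placeholder process name, not the author’s actual code. Note how a liveness check that silently fails to match would reproduce exactly the runaway-relaunch failure described above:

    # A minimal process-checker bot: poll every 30 seconds, relaunch if
    # dead. "my_service" is a placeholder. If is_running() ever fails to
    # match the real process, this loop launches a new copy every cycle.
    import subprocess, time

    def is_running(name: str) -> bool:
        # pgrep exits non-zero when no process matches the pattern.
        return subprocess.run(["pgrep", "-f", name],
                              capture_output=True).returncode == 0

    while True:
        if not is_running("my_service"):
            subprocess.Popen(["./my_service"])  # relaunch the dead process
        time.sleep(30)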

Most of the bots that I’ve written were slipshod, irrelevant, and little more than a nuisance. But, increasingly, huge systems rely on bots. Bots make search engines possible and, when connected to sensors, are often key to smart cities and other IoT instantiations. Bots shape the financial markets and play a role in helping people get information. Of course, not all bots are designed to be helpful to large institutions. Bots that spread worms, viruses, and spam are often capitalizing on the naivety of users. There are large networks of bots (“botnets”) that can be used to bring down systems (e.g., DDoS attacks). There are also pesky bots that mess with the ecosystem by increasing people’s Twitter follower counts, automating “likes” on Instagram, and creating the appearance of natural interest even when there is none.

Identifying the value of these different kinds of bots requires a theory of power. We may want to think that search engines are good, while fake-like bots are bad, but both enable the designer of the bots to profit economically and socially.

Who gets to decide the value of a bot? The technically savvy builder of the bot? The people and organizations that encounter or are affected by the bot? Bots are being designed for all sorts of purposes, and most of them are mundane. But even mundane bots can have consequences.

In the early days of search engines, many website owners were outraged by search engine bots, or web crawlers. They had to pay for traffic, and web crawlers were not seen as legitimate or desired traffic. Plus, they visited every page and could easily bring down a web server through their intensive crawling. As a result, early developers came together and developed a proposal for web crawler politeness, including a mechanism known as the “robots exclusion standard” (or robots.txt), which allowed a website owner to dictate which web crawler could look at which page.
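
A robots.txt file remains about this simple today. A minimal example follows (BadBot and the paths are placeholders): each record names a crawler and the pages it is asked, politely and unenforceably, not to fetch:

    # robots.txt, served from the root of a website
    User-agent: BadBot
    Disallow: /            # this crawler is asked to stay out entirely

    User-agent: *
    Disallow: /private/    # all other crawlers, asked to skip one directory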

As systems get more complex, it’s hard for developers to come together and develop politeness policies for all the bots out there. And it’s often hard for a system to distinguish between bots that are being helpful and bots that are a burden. After all, before Google was Google, people didn’t think that search engines could have much value.

Standards bodies are no longer groups of geeky friends hashing out protocols over pizza. They’re now structured processes involving all sorts of highly charged interests — they often feel more formal than a meeting of the United Nations. Given high-profile disagreements, it’s hard to imagine such bodies convening to regulate the mundane bots that are creating fake Twitter profiles and liking Instagram photos. As a result, most bots are simply seen as a nuisance. But how many gnats come together to make a wasp?

Bots are first and foremost technical systems, but they are derived from social values and exert power in social systems. How can we create the right social norms to regulate them? What do the norms look like in a highly networked ecosystem where many pieces of the pie are often glued together by digital duct tape?

(This was originally written for Points as part of a series on how to think about bots.)

by zephoria at February 25, 2016 12:50 AM

February 24, 2016

Center for Technology, Society & Policy

Rough cuts on the incredibly interesting implications of Facebook’s Reactions

By Galen Panger, CTSP | Permalink

How do we express ourselves in social media, and how does that make other people feel? These are two questions at the very heart of social media research, including, of course, the ill-fated Facebook experiment. Facebook Reactions are fascinating because they are, even more explicitly than the Facebook experiment, an intervention into our emotional lives.

Let me be clear that I support Facebook’s desire to overcome the emotional stuntedness of the Like button (don’t even get me started on the emotional stuntedness of the Poke button). I support the steps the company has taken to expand the Like button’s emotional repertoire, particularly in light of the company’s obvious desire to maintain its original simplicity. But as a choice about which emotional expressions and reactions to officially reward and sanction on Facebook, they are consequential. They explicitly present the company with the knotty challenge of determining the shape of Facebook’s emotional environment, and they have wide implications for the 1.04 billion of us who visit Facebook each day. Here are a few rough reactions to Facebook Reactions.

  • To some extent, Reactions track existing research about emotions that motivate sharing in social media, and the new buttons now allow us to reflect those emotions back to friends in their posts. We have buttons for enthusiasm, amusement and humor (Haha, Love), awe and inspiration (Wow), and anger (Angry). Interestingly, other high arousal emotions thought to motivate sharing—anxiety is a key one—are absent from Reactions, and one low arousal emotion theoretically less likely to motivate sharing, sadness, is present. This suggests either the research is incomplete, and people very often do express and react with sadness (and more so than anxiety), or the company has culled the set of supported emotions by other criteria than popularity. I’m guessing it’s a combination of these: that people do frequently express sadness on Facebook and receive high engagement when they do, such as after a death. But that sadness also has perhaps the most glaring cognitive dissonance with the Like button, urging the company to split it off regardless of popularity. We simply cannot “like” our friends’ grief.
  • Facebook’s choices about which emotions to include and exclude will shape the platform’s emotional environment and, thus, users’ emotional experiences on it—just like the Facebook experiment did and just as Facebook’s design choices have always done. Sanctioning four positive emotions with Like, Love, Haha and Wow buttons as well as two negative emotions with Angry and Sad buttons means posts that stimulate those emotional reactions are likely to be better rewarded, while posts that do not are less rewarded. Now that we can explicitly reward angry posts from friends with the Angry button, will there be more anger shared? Almost certainly. Now that users no longer have to fight against the grain of the Like button to express grief and sadness, will we see more posts about grief and sadness? Likely. Facebook has encouraged the mix of emotions to stay positive overall, however, with a total of now four buttons to express positive reactions (even when, arguably, these are the reactions that fit best under the original Like button).
  • Facebook has always clearly wanted to avoid fostering a disparaging emotional environment, which is likely a reason it has fastidiously avoided a Dislike button. It will be interesting, thus, to see if people sometimes use the new buttons sarcastically or disparagingly, and I can imagine people trolling a company or politician with the Sad and Angry buttons (or even a sarcastic Wow). But these are more ambiguously disparaging than Dislike would have been. Responding with Sad or Angry, for example, may unintentionally invite a literal interpretation by the post’s author, who, perceiving that they’ve made a friend feel sad or angry, might respond with empathy and start a dialogue about why there was a negative reaction. This is much less confrontational than a Dislike button—and, arguably, superior design.
  • In Facebook’s announcement, the company hints at a very big question mark: how to rank posts in News Feed with the new information from these buttons. “Initially … if someone uses a Reaction, we will infer they want to see more of that type of post. … Over time we hope to learn how the different Reactions should be weighted differently by News Feed.” This is a thorny issue that puts Facebook’s role in shaping our emotional experiences into sharp relief. If sad posts make us feel sad, but in expressing that reaction, Facebook decides to show us more sad posts which further our sadness, is that a good thing? If we react with anger at others’ angry posts, is it a good thing that Facebook will show us more posts that perpetuate our anger? Beyond the personal implications, what of the political implications? Forget the filter bubble: Facebook’s ranking of News Feed could suck us into an emotional bubble. (A toy sketch of such reaction weighting follows this list.)

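To make the ranking question concrete, here is a toy sketch of reaction-weighted scoring; the weights and the post data are invented for illustration and are in no way Facebook’s:

    # Toy reaction-weighted ranking. The weights are invented; the open
    # question in the post is precisely what values they should take.
    REACTION_WEIGHTS = {"like": 1.0, "love": 1.5, "haha": 1.2,
                        "wow": 1.3, "sad": 1.1, "angry": 1.1}

    def score(reactions: dict) -> float:
        # Sum each reaction count scaled by its weight; higher-scoring
        # posts would surface earlier in the feed.
        return sum(REACTION_WEIGHTS.get(r, 1.0) * n
                   for r, n in reactions.items())

    posts = {"grief post": {"sad": 120}, "rant": {"angry": 80, "like": 5}}
    ranked = sorted(posts, key=lambda p: score(posts[p]), reverse=True)
    # Weight "angry" up and anger-inducing posts rise: an emotional
    # bubble built directly into the ranking function.
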
Luckily, the company has arguably been careful in rolling out these new buttons, in a way it often is not. Facebook drew on its years-long relationship with Dacher Keltner, the Berkeley psychology professor who also consulted on Inside Out, to craft its Reactions, and tested the new buttons for a long time before rolling them out today. In addition, by adding a Wow button, Facebook may further the platform as a vehicle for spreading awe, an emotion Keltner’s own research suggests has particular benefits for health and well-being, beyond those of other positive emotions (though, be careful—a Wow reaction could also be the sign of an envy-inducing post).

But here, now, we are in the tricky space where Facebook is explicitly choosing the emotions we may feel and not feel as we interact with News Feed throughout the day. This is no less “manipulative” than the Facebook experiment, and by rolling out globally, arguably more consequential. Choices the company makes in ranking News Feed will now more explicitly affect the amount of amusement, love and awe we experience in our daily lives—but also the amount of anger and sadness. Facebook’s choices will have consequences because emotions have consequences.

by Galen Panger at February 24, 2016 11:51 PM

February 18, 2016

BioSENSE research project

Software Detects CEO Emotions, Predicts Financial Performance

Software Detects CEO Emotions, Predicts Financial Performance:

Although fear, anger, and disgust are negative emotions, Dr. Cicon found they correlated positively with financial performance. CEOs whose faces during a media interview showed disgust–as evidenced by lowered eyebrows and eyes, and a closed, pursed mouth–were associated with a 9.3% boost in overall profits in the following quarter.

February 18, 2016 06:17 PM

February 16, 2016

BioSENSE research project

prostheticknowledge: Young Women Sitting and Standing and...

prostheticknowledge:

Young Women Sitting and Standing and Talking and Stuff (No, No, No)

Lo-Fi tech performance art by @sondraperry01 uses TFT-fitted goggles to emphasise eyes and their non-verbal communication during a conversation:

Young Women Sitting and Standing and Talking and Stuff (No, No, No)
2 hour durational performance from 6 to 8 PM on April 21, 2015
Performers: Joiri Minaya, Victoria Udondian, and Ilana Harris-Babou
Safety goggles, 3 TFT screens, 3 mica media players, 3 USB sticks, 3 extension cords, 3 Hanes Ultimate Cotton® Crewneck Adult Sweatshirts, zip ties

Link

February 16, 2016 07:58 PM

The Internet Of Things Will Be The World's Biggest Robot

The Internet Of Things Will Be The World's Biggest Robot:

Seems unlikely it will be one robot; more likely multiple robots with different sensors and actuators, sharing data in ways that have more to do with deals signed by companies (or not). The idea of the IoT plus servers as some singular super-robot is about as far-fetched as the Web 2.0 dream of every last service exposing a REST API. It won’t happen, for human and business reasons alike. —nick

February 16, 2016 06:21 AM

Authentication Using Pulse-Response Biometrics

Authentication Using Pulse-Response Biometrics:

a novel biometric based on how people respond to a small electric pulse applied to the palm of the hand.
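
As a rough illustration of how such a measurement could drive authentication, here is a toy matching scheme: enroll a frequency-domain signature of the pulse response, then accept a fresh measurement whose signature is close enough to the template. The feature choice, threshold, and simulated traces are all assumptions, not the paper’s pipeline.

```python
# Toy sketch of pulse-response matching: enroll a frequency-domain
# "signature" of how a palm responds to a test pulse, then authenticate
# by comparing a fresh measurement to the stored template. The features,
# threshold, and simulated traces are assumptions, not the paper's method.
import numpy as np

def signature(response: np.ndarray, n_bins: int = 16) -> np.ndarray:
    """Normalized low-frequency magnitude spectrum of a measured response."""
    spectrum = np.abs(np.fft.rfft(response))[:n_bins]
    return spectrum / (np.linalg.norm(spectrum) + 1e-12)

def authenticate(template: np.ndarray, measurement: np.ndarray,
                 threshold: float = 0.1) -> bool:
    """Accept if the fresh signature is close to the enrolled one."""
    return np.linalg.norm(template - signature(measurement)) < threshold

# Hypothetical enrollment and verification with simulated measurements.
rng = np.random.default_rng(1)
enrolled_raw = rng.normal(0, 1, 256)                  # stand-in for a real trace
template = signature(enrolled_raw)
same_user = enrolled_raw + rng.normal(0, 0.02, 256)   # small re-measurement noise
print(authenticate(template, same_user))              # expected True in this toy setup
```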

February 16, 2016 04:18 AM

Preventing Lunchtime Attacks: Fighting Insider Threats With Eye Movement Biometrics

Preventing Lunchtime Attacks: Fighting Insider Threats With Eye Movement Biometrics:

Using eye movement for authentication and for unique identification among 30 subjects; according to the study, eye movement is hard to spoof.
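
A toy version of the identification task might look like the following: summarize each recording as a few gaze statistics and assign it to the nearest enrolled user’s centroid. The feature set, classifier, and numbers are assumptions for illustration, not the study’s actual method.

```python
# Toy sketch of eye-movement identification: summarize each recording
# as a feature vector (fixation and saccade statistics) and assign it
# to the nearest enrolled user's centroid. Features, classifier, and
# numbers are invented for illustration.
import numpy as np

def features(fix_durations, saccade_velocities):
    """Per-session feature vector from raw gaze-event statistics."""
    return np.array([np.mean(fix_durations), np.std(fix_durations),
                     np.mean(saccade_velocities), np.std(saccade_velocities)])

def identify(session_vec, centroids):
    """Return the enrolled user whose centroid is closest."""
    return min(centroids, key=lambda u: np.linalg.norm(centroids[u] - session_vec))

# Hypothetical enrolled centroids for three users.
centroids = {
    "alice": np.array([0.25, 0.05, 310.0, 40.0]),
    "bob":   np.array([0.32, 0.08, 270.0, 35.0]),
    "carol": np.array([0.21, 0.04, 350.0, 50.0]),
}
session = features([0.24, 0.27, 0.23], [300.0, 325.0, 315.0])
print(identify(session, centroids))  # "alice" in this toy example
```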

February 16, 2016 04:16 AM

Interactionist AI and the promise of ubicomp, or, how to put your box in the world without putting the world in your box

Interactionist AI and the promise of ubicomp, or, how to put your box in the world without putting the world in your box:

In many ways, the central problem of ubiquitous computing – how computational systems can make sense of and respond sensibly to a complex, dynamic environment laden with human meaning – is identical to that of Artificial Intelligence (AI). Indeed, some of the central challenges that ubicomp currently faces in moving from prototypes that work in restricted environments to the complexity of real-world environments – e.g. difficulties in scalability, integration, and fully formalizing context – echo some of the major issues that have challenged AI researchers over the history of their field. In this paper, we explore a key moment in AI’s history where researchers grappled directly with these issues, resulting in a variety of novel technical solutions within AI. We critically reflect on six strategies from this history to suggest technical solutions for how to approach the challenge of building real-world, usable solutions in ubicomp today.

February 16, 2016 12:16 AM

February 12, 2016

BioSENSE research project

"The Link Between Neanderthal DNA and Depression Risk By mining electronic medical records,..."

The Link Between Neanderthal DNA and Depression Risk

By mining electronic medical records, scientists show the lasting legacy of prehistoric sex on modern humans’ health.

- The Link Between Neanderthal DNA and Depression Risk - The Atlantic
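
The “mining” in studies like this boils down to association tests between a genetic variant and diagnosis codes in the records. A toy version of the arithmetic, with counts fabricated purely for illustration:

```python
# Toy association test: cross-tabulate a Neanderthal-derived variant
# against a depression diagnosis drawn from medical records and compute
# an odds ratio. All counts are fabricated for illustration.

#              depressed  not depressed
carrier     = (120,       880)
noncarrier  = (300,       2700)

odds_ratio = (carrier[0] / carrier[1]) / (noncarrier[0] / noncarrier[1])
print(f"odds ratio: {odds_ratio:.2f}")  # >1 suggests elevated risk among carriers
```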

February 12, 2016 10:51 PM

February 11, 2016

BioSENSE research project

prostheticknowledge: First Person Slitscanning Series of visual...

prostheticknowledge:

First Person Slitscanning

Series of visual experiments by Terence Broad explores variations on the photo-delay method using geometric parameters:

Experiments with different geometric variations on the tried-and-tested technique of slitscanning. All the videos were made using custom C++ software and led to my work on a commission for Converse.

Link
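
For anyone curious about the base technique, a minimal slit-scan is only a few lines (sketched here in Python with OpenCV rather than Broad’s custom C++; the file name and slit position are placeholders): hold a vertical slit fixed and stack that column from each successive frame, so the horizontal axis of the output is time.

```python
# Minimal slit-scan sketch: hold a vertical slit fixed and stack that
# pixel column from each successive video frame, so the output's
# horizontal axis is time. Uses OpenCV; "clip.mp4" and slit_x are
# placeholders, not files from the artwork.
import cv2
import numpy as np

def slitscan(path: str, slit_x: int) -> np.ndarray:
    """Stack the pixel column at slit_x from every frame of the video."""
    cap = cv2.VideoCapture(path)
    columns = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        columns.append(frame[:, slit_x])   # one (height, 3) slice per frame
    cap.release()
    return np.stack(columns, axis=1)       # shape: (height, n_frames, 3)

# Hypothetical usage:
# image = slitscan("clip.mp4", slit_x=320)
# cv2.imwrite("slitscan.png", image)
```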

February 11, 2016 08:25 AM

prostheticknowledge: iDummy A robotic mannequin designed to...

prostheticknowledge:

iDummy

A robotic mannequin designed to create and model clothing, altering its form for different body shapes:

i.Dummy, a revolutionary physical fitting avatar, enables users to adjust and achieve hundreds of human body measurements and shapes with just a few clicks on a computer.

Complicated and meticulous mechanical structures comprising over 1,000 parts are built inside i.Dummy to deliver immediate, accurate and reliable measurements.

Body panels are designed by professionals based on years of human-body research across worldwide populations. Driven by the internal mechanism, a range of realistic body proportions from extra small to extra large can be attained.

More Here

February 11, 2016 08:25 AM

"BROOKE: No conversations…it’s mostly selfies. Depending on the person, the selfie changes. Like, if..."

“BROOKE: No conversations…it’s mostly selfies. Depending on the person, the selfie changes. Like, if it’s your best friend, you make a gross face, but if it’s someone you like or don’t know very well, it’s more regular.”

- Teenagers Are Much Better At Snapchat Than You

February 11, 2016 06:56 AM

"A DARPA-funded research team has created a novel neural-recording device that can be implanted into..."

“A DARPA-funded research team has created a novel neural-recording device that can be implanted into the brain through blood vessels, reducing the need for invasive surgery and the risks associated with breaching the blood-brain barrier.”

- Minimally Invasive “Stentrode” Shows Potential as Neural Interface for Brain

February 11, 2016 06:49 AM

February 10, 2016

Center for Technology, Society & Policy

The need for interdisciplinary tech policy training

By Nick Doty, CTSP, with Richmond Wong, Anna Lauren Hoffman and Deirdre K. Mulligan | Permalink

Conversations about substantive tech policy issues — privacy-by-design, net neutrality, encryption policy, online consumer protection — frequently evoke questions of education and people. “How can we encourage privacy earlier in the design process?” becomes “How can we train and hire engineers and lawyers who understand both technical and legal aspects of privacy?” Or: “What can the Federal Trade Commission do to protect consumers from online fraud scams?” becomes “Who could we hire into an FTC bureau of technologists?” Over the past month, members of the I School community have participated in several events where these tech policy conversations have occurred:

  • Catalyzing Privacy by Design: fourth in a series of NSF-sponsored workshops, organized with the Computing Community Consortium, to develop a privacy by design research agenda
  • Workshop on Problems in the Public Interest: hosted by the Technology Science Research Collaboration Network at Harvard to generate new research questions
  • PrivacyCon: an event to bridge academic research and policymaking at the Federal Trade Commission

In talking with people from government, academia, industry, and civil society, we identified several common messages:

  • a value of getting academics talking to industry, non-profits and government is that we can hear concrete requests
  • there is a shared recognition that, for many values that apply throughout the lifecycle of an organization or a project, we require trained people as well as processes and tools
  • because these problems are interdisciplinary, there is a new and specific need for interdisciplinary people to bridge gaps; we hear comments like “we need more Latanya Sweeneys” or “we need more Ashkan Soltanis” and “we need people to translate”

We are also urged by these events to define “interdisciplinary” broadly. Tech policy problems are not only problems of law and software engineering; they also demand social scientific, economic, and humanistic investigation, as well as organizational and philosophical/ethical analyses. Such issues also require the methodological diversity that accompanies interdisciplinary collaboration; in particular, we have been pleasantly surprised by how novel and well-received lessons from human-centered design and design practice have been among lawyers and engineers working in privacy.

Workshops like these are a good place to identify needs, problems and open research questions. But they’re also opportunities to start sketching out responses and proposing solutions. In view of these recent events, we stress the following takeaways:

  1. different institutions working on training for tech policy can build on previous conversations and events to collaboratively develop curricula and practical knowledge
  2. funding is available, including potential sources in NSF and in private foundations
  3. career paths are increasingly available if not easily defined, so we should connect students with emerging opportunities

We hope to have more to share on these three points soon. For now, a call: Are you working on case studies, a syllabus, tools or training for teaching issues at the intersection of technology and policy? We’d like to hear from you. We will develop a repository of these teaching and training resources.


by Nick Doty at February 10, 2016 07:52 PM