School of Information Blogs

May 23, 2016

Ph.D. student

discovering agency in symbolic politics as psychic expression of Blau space

If the Blau space is exogenous to manifest society, then politics is an epiphenomenon. There will be hustlers; there will be the oscillations of who is in control. But there is no agency. Particularities are illusory, much as, in quantum field theory, the whole notion of the ‘particle’ is an artifact of our perceptual limitations.

An alternative hypothesis is that the Blau space shifts over time as a result of societal change.

Demographics surely do change over time. But this does not in itself show that Blau space shifts are endogenous to the political system. We could possibly attribute all Blau space shifts to apolitical factors such as population growth and natural resource availability. This is the geographic determinism stance. (I’ve never read Guns, Germs, and Steel… I’ve heard mixed reviews.)

Detecting political agency within a complex system is bound to be difficult because it’s a lot like trying to detect free will, only with a more hierarchical ontology. Social structure may or may not be intelligent. Our individual ability to determine whether it is or not will be very limited. Any individual will have a limited set of cognitive frames with which to understand the world. Most of them will be acquired in childhood. While it’s a controversial theory, the Lakoff thesis that whether one is politically liberal or conservative depends on one’s relationship with one’s parents is certainly very plausible. How does one relate to authority? Parental authority is replaced by state and institutional authority. The rest follows.

None of these projects are scientific. This is why politics is so messed up. Whereas the Blau space is an objective multidimensional space of demographic variability, the political imaginary is the battleground of conscious nightmares in the symbolic sphere. Pathetic humanity, pained by cruel life, fated to be too tall, or too short, born too rich or too poor, disabled, misunderstood, or damned to mediocrity, unfurls its anguish in so many flags in parades, semaphore, and war. But what is it good for?

“Absolutely nothin’!”

I’ve written before about how I think Jung and Bourdieu are an improvement on Freud and Habermas as the basis of a unifying political ideal. Whereas for Freud psychological health is the rational repression of the id so that the moralism of the superego can hold sway over society, Jung sees the spiritual value of the unconscious. All literature and mythology is an expression of emotional data. Awakening to the impersonal nature of one’s emotions–as they are rooted in a collective unconscious constituted by history and culture as well as biology and individual circumstance–is necessary for healthy individuation.

So whereas Habermasian direct democracy, being Freudian through the Frankfurt School tradition, is a matter of rational consensus around norms, presumably coupled with the repression of that which does not accord with those norms, we can wonder what a democracy based on Jungian psychology would look like. It would need to acknowledge social difference within society, as Bourdieu does, and that this social difference puts constraints on democratic participation.

There’s nothing so remarkable about what I’m saying. I’m a little embarrassed to be drawing from European Grand Theorists and psychoanalysts when it would be much more appropriate for me to be looking at, say, the tradition of American political science with its thorough analysis of the role of elites and partisan democracy. But what I’m really looking for is a theory of justice, and the main way injustice seems to manifest itself now is in the resentment of different kinds of people toward each other. Some of this resentment is “populist” resentment, but I suspect that this is not really the source of strife. Rather, it’s the conflict of different kinds of elites, with their bases of power in different kinds of capital (economic, institutional, symbolic, etc.) that has macro-level impact, if politics is real at all. Political forces, which will have leaders (“elites”) simply as a matter of the statistical expression of variable available energy in the society to fill political roles, will recruit members by drawing from the psychic Blau space. As part of recruitment, the political force will activate the habitus shadow of its members, using the dark aspects of the psyche to mobilize action.

It is at this point, when power stokes the shadow through symbols, that injustice becomes psychologically real. Therefore (speaking for now only of symbolic politics, as opposed to justice in material economic actuality, which is something else entirely) a just political system is one that nurtures individuation to such an extent that its population is no longer susceptible to political mobilization.

To make this vision of democracy a bit more concrete, I think where this argument goes is that the public health system should provide art therapy services to every citizen. We won’t have a society that people feel is “fair” unless we address the psychological roots of feelings of disempowerment and injustice. And while there are certainly some causes of these feelings that are real and can be improved through better policy-making, it is the rare policy that actually improves things for everybody rather than just shifting resources around according to a new alignment of political power, thereby creating a new elite and new grudges. Instead I’m proposing that justice will require peace, and that peace is more a matter of the personal victory of the psyche than it is a matter of the political victory of one’s party.


by Sebastian Benthall at May 23, 2016 04:17 PM

Ph.D. student

directions to migrate your WebFaction site to HTTPS

Hiya friends using WebFaction,

Securing the Web, even our little websites, is important — to set a good example, to maintain the confidentiality and integrity of our visitors’ communications, and to get the best Google search ranking. While secure Web connections were difficult and/or costly in the past, migrating a site to HTTPS has recently become fairly straightforward and costs $0 a year. It may get even easier in the future, but for now, the following steps should do the trick.

Hope this helps, and please let me know if you have any issues,
Nick

P.S. Yes, other friends, I recommend WebFaction as a host; I’ve been very happy with them. Services are reasonably priced and easy to use and I can SSH into a server and install stuff. Sign up via this affiliate link and maybe I get a discount on my service or something.

P.S. And really, let me know if and when you have issues. Encrypting access to your website has gotten easier, but it needs to become much easier still, and one part of that is knowing which parts of the process prove to be the most cumbersome. I’ll make sure your feedback gets to the appropriate people who can, for realsies, make changes as necessary to standards and implementations.


One day soon I hope WebFaction will make many of these steps unnecessary, but the configuring and testing will be something you have to do manually in pretty much any case. You should be able to complete all of this in an hour some evening. You might have to wait a bit for WebFaction to install your certificate, and the last two parts can be done on the following day if you like.

Create a secure version of your website in the WebFaction Control Panel

Log in to the WebFaction Control Panel, choose the “DOMAINS/WEBSITES” tab and then click “Websites”.

Click “Add new website” and create one that will correspond to one of your existing websites. I suggest choosing a name like existingname-secure. Choose “Encrypted website (https)”. For Domains, testing will be easiest if you choose both your custom domain and a subdomain of yourusername.webfactional.com. (If you don’t have one of those subdomains set up, switch to the Domains tab and add it real quick.) So, for my site, I chose npdoty.name and npdoty.npd.webfactional.com.

Finally, for “Contents”, click “Re-use an existing application” and select whatever application (or multiple applications) you’re currently using for your http:// site.

Click “Save” and this step is done. This shouldn’t affect your existing site one whit.

Test to make sure your site works over HTTPS

Now you can test how your site works over HTTPS, even before you’ve created any certificates, by going to https://subdomain.yourusername.webfactional.com in your browser. Hopefully everything will load smoothly, but it’s reasonably likely that you’ll have some mixed content issues. The debug console of your browser should show them to you: that’s Apple-Option-K in Firefox or Apple-Option-J in Chrome. You may see some warnings like this, telling you that an image, a stylesheet or a script is being requested over HTTP instead of HTTPS:

Mixed Content: The page at ‘https://npdoty.name/’ was loaded over HTTPS, but requested an insecure image ‘http://example.com/blah.jpg’. This content should also be served over HTTPS.

Change these URLs so that they point to https://example.com/script.js (you could also use a scheme-relative URL, like //example.com/script.js), then update the files on the webserver and re-test.
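If some of these references are hard to track down, a quick search over your site’s files for hard-coded http:// URLs can help; something like the following, with the path and file extensions adjusted for your site:

grep -rn "http://" /home/yourusername/webapps/sitename/ --include="*.html" --include="*.css" --include="*.js"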

Good job! Now, https://subdomain.yourusername.webfactional.com should work just fine, but https://yourcustomdomain.com shows a really scary message. You need a proper certificate.

Get a free certificate for your domain

Let’s Encrypt is a new, free, automated certificate authority from a bunch of wonderful people. But getting it to set up certificates on WebFaction is a little tricky, so we’ll use the letsencrypt-webfaction utility — thanks will-in-wi!

SSH into the server with ssh yourusername@yourusername.webfactional.com.

To install, run this command:

GEM_HOME=$HOME/.letsencrypt_webfaction/gems RUBYLIB=$GEM_HOME/lib gem2.2 install letsencrypt_webfaction

For convenience, you can add this as a function to make it easier to call. Edit ~/.bash_profile to include:

function letsencrypt_webfaction {
    # GEM_HOME is assigned first so the PATH and RUBYLIB references to it expand correctly
    GEM_HOME=$HOME/.letsencrypt_webfaction/gems PATH=$PATH:$GEM_HOME/bin RUBYLIB=$GEM_HOME/lib ruby2.2 $HOME/.letsencrypt_webfaction/gems/bin/letsencrypt_webfaction "$@"
}
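Then reload your profile so the function is available in your current shell (new shells will pick it up automatically):

source ~/.bash_profile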

Now, let’s test the certificate creation process. You’ll need your email address (preferably not GMail, which has longer instructions), e.g. nick@npdoty.name and the path to the files for the root of your website on the server, e.g. /home/yourusername/webapps/sitename/. Filling those in as appropriate, run this command:

letsencrypt_webfaction --account_email you@example.com --support_email you@example.com --domains yourcustomdomain.com --public /home/yourusername/webapps/sitename/

It’s important to use your email address for both --account_email and --support_email so that for this test, you’ll get the emails rather than sending them to the WebFaction support staff.

If all went well, you’ll see a new directory in your home directory called le_certs, and inside that a directory with the name of your custom domain (and inside that, a directory named with a timestamp, which has a bunch of cryptographic keys in it that we don’t care much about). You should also have received a couple of emails with appropriate instructions, e.g.:

LetsEncrypt Webfaction has generated a new certificate for yourcustomdomain.com. The certificates have been placed in /home/yourusername/le_certs/yourcustomdomain.com/20160522004546. WebFaction support has been contacted with the following message:

Please apply the new certificate in /home/yourusername/le_certs/yourcustomdomain.com/20160522004546 to yourcustomdomain.com. Thanks!
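If you want to double-check from the shell, run ls ~/le_certs/yourcustomdomain.com/ and you should see a single timestamped directory like 20160522004546.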

Now, run the same command again but without the --support_email parameter and this time the email will get sent directly to the WebFaction staff. One of the friendly staff will need to manually copy your certificates to the right spot, so you may need to wait a while. You’ll get a support notification once it’s done.

Test your website over HTTPS

This time you get to test it for real. Load https://yourcustomdomain.com in your browser. (You may need to force refresh to get the new certificate.) Hopefully it loads smoothly and without any mixed content warnings. Congrats, your site is available over HTTPS!
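For an optional check from the command line, ask curl for just the response headers; it will complain and exit if the certificate doesn’t check out:

curl -I https://yourcustomdomain.com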

You are not done. You might think you are done, but if you think so, you are wrong.

Set up automatic renewal of your certificates

Certificates from Let’s Encrypt expire in no more than 90 days. (Why? There are two good reasons.) Your certificates aren’t truly set up until you’ve set them up to renew automatically. You do not want to do this manually every few months; you would forget, I promise.

Cron lets us run code on WebFaction’s server automatically on a regular schedule. If you haven’t set up a cron job before, it’s just a fancy way of editing a special text file. Run this command:

EDITOR=nano crontab -e

If you haven’t done this before, this file will be empty, and you’ll want to test it to see how it works. Paste the following line of code exactly, and then hit Ctrl-O and Ctrl-X to save and exit.

* * * * * echo "cron is running" >> $HOME/logs/user/cron.log 2>&1

This will output to that log every single minute; not a good cron job to have in general, but a handy test. Wait a few minutes and check ~/logs/user/cron.log to make sure it’s working. Now, let’s remove that test and add the renewal line, being sure to fill in your email address, domain name and the path to your website’s directory, as you did above:

0 4 15 */2 * GEM_HOME=$HOME/.letsencrypt_webfaction/gems PATH=$PATH:$GEM_HOME/bin RUBYLIB=$GEM_HOME/lib ruby2.2 $HOME/.letsencrypt_webfaction/gems/bin/letsencrypt_webfaction --account_email you@example.com --domains yourcustomdomain.com --public /home/yourusername/webapps/sitename/

You’ll probably want to create the line in a text editor on your computer and then copy and paste it to make sure you get all the substitutions right. Ctrl-O and Ctrl-X to save and close it. Check with crontab -l that it looks correct.

That will create a new certificate at 4am on the 15th of alternating months (January, March, May, July, September, November) and ask WebFaction to install it. New certificates every two months is fine, though one day in the future we might change this to get a new certificate every few days; before then WebFaction will have taken over the renewal process anyway.

Redirect your HTTP site (optional, but recommended)

Now you’re serving your website in parallel via http:// and https://. You can keep doing that for a while, but everyone who follows old links to the HTTP site won’t get the added security, so it’s best to start permanently re-directing the HTTP version to HTTPS.

WebFaction has very good documentation on how to do this, and I won’t duplicate it all here. In short, you’ll create a new static application named “redirect”, which just has a .htaccess file with, for example, the following:

RewriteEngine On
RewriteCond %{HTTP_HOST} ^www\.(.*)$ [NC]
RewriteRule ^(.*)$ https://%1/$1 [R=301,L]
RewriteCond %{HTTP:X-Forwarded-SSL} !on
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

This particular variation will both redirect any URLs that have www to the “naked” domain and make all requests HTTPS. And in the Control Panel, make the redirect application the only one on the HTTP version of your site. You can re-use the “redirect” application for different domains.

Test to make sure it’s working! http://yourcustomdomain.com, http://www.yourcustomdomain.com, https://www.yourcustomdomain.com and https://yourcustomdomain.com should all end up at https://yourcustomdomain.com. (You may need to force refresh a couple of times.)
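You can check these redirects from the command line too; each of the http:// and www variants should return a 301 with a Location header pointing at the https:// version:

curl -sI http://yourcustomdomain.com | grep -i "^location:"
curl -sI http://www.yourcustomdomain.com | grep -i "^location:"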

by nick@npdoty.name at May 23, 2016 01:22 AM

May 21, 2016

Ph.D. student

on intellectual sincerity

I have recently received some extraordinary encouragement regarding this blog. There are a handful of people who have told me how much they get out of my writing here.

This is very meaningful to me, since I often feel like blogging is the only intellectually sincere outlet I have. I have had a lot of difficulty this past year with academic collaboration. My many flaws have come to the fore, I’m afraid. One of these flaws is an inability to make certain intellectual compromises that would probably be good for my career if I were able to make them.

A consolation in what has otherwise been a painful process is that a blog provides an outlet that cannot be censored even when it departs from the style and mores of academic writing, which I have come up against in contexts such as internal university emails and memos. I’ve been told that writing research memos in an assertive way that reflects my conviction as I write them is counterproductive, for example. It was cited as an auxiliary reason for a major bureaucratic obstacle. One is expected, it seems, to play a kind of linguistic game as a graduate student working on a dissertation: one must not write with more courage than one’s advisors have to offer as readers. To do so upsets the social authority on which the social system depends.

These sociolinguistic norms hold internal to the organization, despite the fact that every researcher may, and is even expected or encouraged to, publish their research outwardly with professional confidence. In an academic paper, I can write assertively because I will not be published without peer review verifying that my work warrants the confidence with which it is written. In a blog, I can write even more assertively, because I can expect to be ignored. More importantly, others can expect each other to ignore writing in blogs. Recognition of blog writing as academically relevant happens very rarely, because to do so would be to acknowledge the legitimacy of a system of thought outside the system of academically legitimized thought. Since the whole game of the academy depends on maintaining its monopoly on expertise or at least the value of its intellectual currency relative to others, it’s very dangerous to acknowledge a blog.

I am unwise, terrifically unwise, and in my youthful folly I continue to blog with the candor that I once used as a pseudonymous teenager. Surely this will ruin me in the end, as now this writing has a permanence and makes a professional impression whose impact is real. Stakes are high; I’m an adult. I have responsibilities, or should; as I am still a graduate student I sometimes feel I have nothing to lose. Will I be forgiven for speaking my mind? I suppose that depends on whether there is freedom and justice in society or not. I would like to think that if the demands of professional success are such that to publish reflective writing is a career killer for an academic, that means The Terrorists Have Won in a way much more profound than any neoconservative has ever fantasized about.

There are lots of good reasons to dislike intellectuals. But some of us are intellectuals by nature. I apologize on behalf of all of us. Please allow us to continue our obscure practices in the margins. We are harmless when ignored.


by Sebastian Benthall at May 21, 2016 07:02 PM

May 18, 2016

Center for Technology, Society & Policy

A User-Centered Perspective on Algorithmic Personalization

By Rena Coen, Emily Paul, Pavel Vanegas, and G.S. Hans, CTSP Fellows | Permalink

We conducted a survey using experimentally controlled vignettes to measure user attitudes about online personalization and develop an understanding of the factors that contribute to personalization being seen as unfair or discriminatory. Come learn more about these findings and hear from the Center for Democracy & Technology on the policy implications of this work at our event tonight!

What is online personalization?

Some of you may be familiar with a recent story, in which United Artists presented Facebook users with different movie trailers for the film Straight Outta Compton based on their race, or “ethnic affinity group,” which was inferred from users’ activity on the site.

This is just one example of online personalization, where content is tailored to users based on some user attribute. Such personalization can be beneficial to consumers but it can also have negative and discriminatory effects, as in the targeted trailers for Straight Outta Compton or Staples’ differential retail pricing based on zip code. Of course, not all personalization is discriminatory; there are examples of online personalization that many of us see as useful and have even come to expect. One example of this is providing location-based results for generic search terms like “coffee” or “movie showtimes.”

The role of algorithms

A big part of this story is the role of algorithms in personalization. This could mean that the data that is used to drive the personalization has been inferred, as in the Straight Outta Compton example where Facebook algorithmically inferred people’s ethnic affinity group based on the things they liked and clicked on. In this case the decision about who to target was made deductively. Facebook offers companies the opportunity to target their ads to ethnic affinity groups and United Artists thought it made sense to show different movie trailers to people based on their race. In other cases, there may not be a clear logic used in deciding what kind of targeting to do. Companies can use algorithms to identify patterns in customer data and target content, based on the assumption that people who like one thing will like another.

When does personalization discriminate?

We have a range of responses to personalization practices; we may see some as useful while others may violate our expectations. But how can we think about the range of responses to these examples more systematically – in a way that helps us articulate what these expectations are?

This is something that policy makers and privacy scholars have been examining and debating. From the policy side there is a need for practices and procedures that reflect and protect users’ expectations. These personalization practices, especially the use of inference, create challenges for existing policy frameworks. Several reports from the Federal Trade Commission (e.g. here and here) and the White House (e.g. here and here) look at how existing policy frameworks like the Fair Information Practice Principles (FIPPs) can address the use of algorithms to infer user data and target content. Some of the proposals from authors including Kate Crawford, Jason Schultz, Danielle Citron, and Frank Pasquale look to expand due process to allow users to correct data that has been inaccurately inferred about them.

Theoretical work from privacy scholars attempts to understand users’ expectations around inference and personalization, and how these might be protected in the face of new technology. Many of these scholars have talked about the importance of context. Helen Nissenbaum and Solon Barocas discuss Nissenbaum’s conception of privacy as contextual integrity based on whether the inference conflicts with information flow norms and expectations. So, in the Straight Outta Compton example, does Facebook inferring people’s ethnic affinity based on their activity on the site violate norms and expectations of what users think Facebook is doing with their data?

This policy and privacy work highlights some of the important factors that seem to affect user attitudes about personalization: there is the use of inferred data and all of the privacy concerns it raises, there are questions around accuracy when inference is used, and there is the notion of contextual integrity.

One way to find more clarity around these factors and how they affect user attitudes is to ask the users directly. There is empirical work looking at how users feel about targeted content. In particular, there are several studies on user attitudes about targeted advertising, including by Chris Hoofnagle, Joseph Turow, Jen King, and others, which found that most users (66%) did not want targeted advertising at all, and that once users were informed of the tracking mechanisms that support targeted ads, even more (over 70%) did not want targeted ads. There has also been empirical work from researchers at Northeastern University who have examined where and how often personalization is taking place online in search results and pricing. In addition, a recent Pew study looked at when people are willing to share personal information in return for something of value.

Experimental approach to understanding user attitudes

Given the current prevalence of personalization online and the fact that some of it does seem to be useful to people, we chose to take personalization as a given and dig into the particular factors that push it from something that is beneficial or acceptable to something that is unfair.

Using an experimental vignette design, we measure users’ perceptions of fairness in response to content that is personalized to them. We situate these vignettes in three domains: targeted advertising, filtered search results, and differential retail pricing, using a range of data types including race, gender, city or town of residence, and household income level.

We find that users’ perceptions of fairness are highly context-dependent. By looking at the fairness ratings based on the contextual factors of domain and data type, we observe the importance of both the sensitivity of the data used to personalize and its relevance to the domain of the personalization in determining what forms of personalization might violate user norms and expectations.

Join us tonight from 6-9 pm with the Startup Policy Lab to hear Rena Coen, Emily Paul, and Pavel Vanegas present the research findings, followed by a conversation about the policy implications of the findings with Alethea Lange, policy analyst at the Center for Democracy & Technology, and Jen King, privacy expert and Ph.D. candidate at the UC Berkeley School of Information, moderated by Gautam Hans.

Event details and RSVP

This project is funded by the Center for Technology, Society & Policy and the Center for Long-Term Cybersecurity.

by Nick Doty at May 18, 2016 08:02 PM

May 13, 2016

Ph.D. student

A Bourdieusian anticipation of the SciPy proceedings review process

My currently defunct dissertation was about applying Bourdieu’s sociology of science to data scientific software production, with a specific focus on Scientific Python. So far I’ve done a lot of work looking at the statistical distribution of mailing list discussions.

But perhaps the problem with my dissertation is that mailing lists miss the point. I believe I was able to discover the statistical properties of email discussion and how these can be modeled using the Central Limit Theorem and an underlying Blau space. Through this I could make a case for the autonomous participation of participants on the mailing list.

But autonomy alone is not sufficient for science. The Enron email corpus has the same statistical properties. So, the punchline (which seems to have been rejected by my committee) was that an autonomous organization that was not committed, as per Bourdieu’s recommendation, to logical necessity as a social norm can potentially commit massive fraud. Whether this allusion to the corruption of the social sciences was caught by my committee, I cannot say. The point was not substantively addressed.

Viewing the situation more positively, autonomy affords the possibility of autonomous recognition of logical progress, and this, for Bourdieu, is science. The question, then, is what constitutes this recognition in open source software development?

I have the opportunity to learn a lot about this in the coming months through my participation with the SciPy conference this year.

  • I’m on the proceedings committee, which means I’ll be managing the paper submission and review process with co-chair Scott Rostrup. Interestingly, this is all done openly on GitHub, with public comments and real identities used. It’s also much more incrementalist, at least potentially, than the typical conference revise-and-resubmit approach. That the SciPy conference is secretly an experiment in open scholarly publishing is one of its coolest quirks, in my opinion.
  • I’ll be working on a paper about software dependency. The more I think about it, the more I realize that it’s this dependency structure that most closely mirrors the ‘citation’ function of paper scholarship. So getting more familiar with these networks (which I expect to look completely different, statistically, from email discussions!) will be very interesting.

by Sebastian Benthall at May 13, 2016 10:57 PM

May 10, 2016

Ph.D. student

dissertation update

I gave up blogging to work on my dissertation.

Over the course of the past several weeks, I’ve been bureaucratically compelled to stop working on my dissertation. Were I to tell you the series of events of this semester in great detail, you would find it utterly amazing. You would, as I do, have any number of diverging theories about individual agency and motivation of the characters involved. Nobody involved is naive, so all possibilities are open. But then the system is so vast, and I such an insignificant part of it, that inertia is the most likely explanatory factor. No particular intellectual logic predetermines the outcome. I put energy in, put a lot of energy in, and the system’s response is: stop.

I guess I can start blogging again.

In my last post, I wrote about some of the frustrations I’ve had with the priority of narrative in the social sciences, and the politics of technology and narration. Looking back on past blog posts, I see now that this has been a steady theme since I began graduate school. In fact the majority of my posts to this blog have probably in one way or another been about the challenges of navigating the politics of interdisciplinary space.

I have to reflect on why. This is not a topic I find intrinsically valuable or interesting. I would much rather be doing something productive, or finding the truth about something. By virtue of professional circumstance and participant observation, I do think I now “get” the contours of academic politics with ethnographic sensitivity. This experience and note-taking process does not afford me material for academic publication because I did not begin the “study” in an official way. It has not given me any insights that anybody else who has grown cynical about academia would not also attest to. When I have allowed my experiences to inform my dissertation in a broad thematic way, I have been told I have not provided enough empirical evidence. The standard of empiricism seems fluid enough to accommodate any bureaucratic move, while the standard of logic is routinely denied as a matter of disciplinary specialization.

Mainly what I’ve discovered–and it seems obvious in retrospect–is that as a social system the university’s primary purpose is to maintain its own equilibrium. Perhaps this is to be expected in an organization characterized by extreme autonomy. The status quo cannot change internally through disruption. As an institution, it is what the entrepreneurs call installed base. Disruptive innovation will come in the form of external pressure, which will result in a loss of market share and, consequently, funding. The budget crisis of UC Berkeley confirms this. Internal organizational shifts will be incremental, difficult, petty, concessional.

I think I made a mistake, which was to try to write a dissertation that confronted these politics and provide an alternative model for intellectual organization. I think the Scientific Python communities are a significant and successful alternative model. “Write what you know.”

The problem is that I want to do two different things. The first is to explain to the established intellectual authorities why this alternative model has legitimate intellectual grounds in earlier theory and therefore represents a progression in, not a rupture of, mainstream intellectual thought.

The second is to translate those grounds into a logic that the new field can accept on its own terms. My main problem here is that the primary logic of the new field is computational statistics, and for good reason it is hard to get access to computational statisticians for research mentorship at Berkeley.

Are these two ideas “two different dissertations”? Is either one of them “too broad for a dissertation”? The institutional answer is “Yes”. I am not supposed to be working on these problems as a graduate student. A dissertation should be narrow, about something in particular, and follow the conventions of a discipline so that it can be judged and completed.

So, as I’ve said, I’ve made a mistake, or many mistakes. I should not be making the most challenging social and political problems that I encounter during the course of my academic career the subject of my dissertation. I am doing this because of a number of inappropriate impulses, perhaps the narcissistic impulse of blogging being one of them. Doing this only complicates my life, and the lives of others, by surfacing the self-referential complexity of our institutional and social context. Are there new possibilities in that complexity, somewhere? Who cares. It is a headache. Reflexivity is not appropriate for academic work because it upsets personal equilibrium. Its mirage of emancipation is dangerous!

So the system is right. I need to take some time to disentangle myself.


by Sebastian Benthall at May 10, 2016 04:11 PM

May 07, 2016

Ph.D. student

the end of narrative in social science

‘Narrative’ is a term you hear a lot in the humanities, the humanities-oriented social sciences, and in journalism. There’s loads of scholarship dedicated to narrative. There are many academic “disciplines” whose bread and butter is the telling of a good story, backed up by something like a scientific method.

Contrast this with engineering schools and professions, where the narrative is icing on the cake if anything at all. The proof of some knowledge claim is in its formal logic or operational efficacy.

In the interdisciplinary world of research around science, technology, and society, the priority of narrative is one of the major points of contention. This is similar to the tension I encountered in earlier work on data journalism. There are narrative and mechanistic modes of explanation. The mechanists are currently gaining in wealth and power. Narrativists struggle to maintain their social position in such a context.

A struggle I’ve had while working on my dissertation is trying to figure out how to narrate to narrativists a research process that is fundamentally formal and mechanistic. My work is “computational social science” in that it is computer science applied to the social. But in order to graduate from my department I have to write lots of words about how this ties in to a universe of academic literature that is largely by narrativists. I’ve been grounding my work in Pierre Bourdieu because I think he (correctly) identifies mathematics as the logical heart of science. He goes so far as to argue that mathematics should be at the heart of an ideal social science or sociology. My gloss on this after struggling with this material both theoretically and in practice is that narratively driven social sciences will always be politically or at least perspectivally inflected in ways that threaten the objectivity of the results. Narrativists will try to deny the objectivity of mathematical explanation, but for the most part that’s because they don’t understand the mathematical ambition. Most mathematicians will not go out of their way to correct the narrativists, so this perception of the field persists.

So I was interested to discover in the work of Miller McPherson, the sociologist who I’ve identified as the bridge between traditional sociology and computational sociology (his work gets picked up, for example, in the generative modeling of Kim and Leskovec, which is about as representative of the new industrial social science paradigm as you can get), an admonition about the consequences of his formally modeled social network formation process (the Blau space, which is very interesting). His warning is that the sociology his work encourages loses narrative and with it individual agency.

[Image: photographed excerpt from McPherson, 2004, “A Blau space primer: prolegomenon to an ecology of affiliation”]

It’s ironic that the whole idea of a Blau space, which is that the social network of society is sampled from an underlying multidimensional space of demographic dimensions, predicts the quantitative/qualitative divide in academic methods as not just a methodological difference but a difference in social groups. The formation of ‘disciplines’ is endogenous to the greater social process and there isn’t much individual agency in this choice. This lack of agency is apparent, perhaps, to the mathematicians and a constant source of bewilderment and annoyance, perhaps, to the narrativists who will insist on the efficacy of a narratively driven ‘politics’–however much this may run counter to the brute fact of the industrial machine–because it is the position that rationalizes and is accessible from their subject position in Blau space.

“Subject position in Blau space” is basically the same idea, in more words, as the Bourdieusian habitus. So, nicely, we have a convergence between French sociological grand theory and American computational social science. As the Bourdieusian theory provides us with a serviceable philosophy of science grounded in sociological reality of science, we can breathe easily and accept the correctness of technocratic hegemony.

By “we” here I mean…ah, here’s the rub. There’s certainly a class of people who will resist this hegemony. They can be located easily in Blau space. I’ve spent years of my life now trying to engage with them, persuading them of the ideas that rule the world. But this turns out to be largely impossible. It’s demanding they cross too much distance, removes them from their local bases of institutional support and recognition, etc. The “disciplines” are what’s left in the receding tide before the next oceanic wave of the unified scientific field. Unified by a shared computational logic, that is.

What is at stake, really, is logic.


by Sebastian Benthall at May 07, 2016 12:00 AM

May 05, 2016

Center for Technology, Society & Policy

FutureGov: Drones and Open Data

By Kristine Gloria, CTSP Fellow | Permalink

As we’ve explored in previous blog posts, civil drone applications are growing, and concerns regarding violations of privacy follow closely. We’ve thrown in our own two cents offering a privacy policy-by-design framework. But, this post isn’t (necessarily) about privacy. Instead, we pivot our focus towards the benefits and challenges of producing Open Government Drone Data. As proponents of open data initiatives, we advocate its potential for increased collaboration, accessibility and transparency of government programs. The question, therefore, is: How can we make government drone data more open?

A drone’s capability to capture large amounts of data – audio, sensory, geospatial and visual – serves as a promising pathway for future smart city proposals. It also implicates many data collection, use, and retention policies that require considering data formats and structures.

Why is this worth exploring? We suggest that it opens up additional (complementary) questions about access, information sharing, security and accountability. The challenge with the personal UAS ecosystem is its black-box nature, composed of proprietary software/hardware developers and third-party vendors. This leads to technical hurdles such as the development of adaptable middleware, specified application development, centralized access control, etc.

How do governments make data public?

Reviewing this through an open data lens — as our work focuses on municipal use cases — offers a more technical discussion and highlights available open source developer tools and databases. In this thought experiment, we assume a government agency subscribes to and is in the process of developing an open data practice. At this stage, the agency faces the question: How do we make the data public? For additional general guidance on how to approach Open Data in government, please refer to our work: Open Data Privacy Report 2015.

Drawing from the Sunlight Foundation’s Open Data Guidelines, information should be released in “open formats” or “open standards”, and be machine-readable and machine-processable (or structured appropriately). Translation: data designated by a municipality as “shareable” should follow a data publishing standard in order to facilitate sharing and reuse by both human and machine. These formats may include XML, CSV, JSON, etc. Doing so enables access (where designated) and opportunities for more sophisticated analysis. Note that the PDF format is generally discouraged as it prevents data from being shared and reused.
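To make that concrete, here is a small, purely hypothetical example: a municipal drone-flight log published as CSV, which any spreadsheet or script can parse directly:

flight_id,agency,date,duration_minutes,purpose
2016-0042,public-works,2016-05-05,18,bridge inspection

The same records published as a table inside a PDF would have to be scraped back out before anyone could reuse them.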

Practical Guidelines for Open Data Initiatives

It seems simple enough, right? Yes and no. Learning from challenges of early open data initiatives, database managers should also consider the following: completeness, timeliness, and reliability & trustworthiness.

  • Completeness refers to the entirety of a record. Again, the Sunlight Foundation suggests: “All raw information from a dataset should be released to the public, except to the extent necessary to comply with federal law regarding the release of personally identifiable information.” We add that completeness must also align with internal privacy policies. For example, one should consider whether the open data could lead to risks of re-identification.  
  • Timeliness is particularly important given the potential applications of UAS real-time data gathering. Take for example emergency or disaster recovery use cases. Knowing what types of data can be shared, by whom, to whom and how quickly can lead to innovative application development for utility services or aid distribution. Published data should therefore be released as quickly as possible with priority given to time-sensitive data.
  • Reliability and Trustworthiness are key data qualities that highlight authority and primacy, such as the source name of specific data agencies. Through metadata provenance, we can capture and define resources, access points, derivatives, formulas, applications, etc. Examples of this include W3C’s PROV-XML schema. Identifying the source of the data, any derivatives, additions, etc., helps increase the reliability and trustworthiness of the data.

What of Linked Open Government Data?

For those closely following the open government data space, much debate has focused on the need for a standardized data format in order to link data across formats, organizations, governments etc. Advocates suggest that linking open data may increase its utility through interoperability. This may be achieved using structured machine-processable formats, such as the Resource Description Framework (RDF). This format uses Uniform Resource Identifiers (URIs), which can be identified by reference and linked with other relevant data by subject, predicate, or object. For a deep dive on this specific format, check out the “Cookbook for Open Government Linked Data”. One strength of this approach is its capability to generate a large searchable knowledge graph. Check out the Linked Open Data Cloud for an example of all linked databases currently available. Paired with Semantic Web standards and a robust ontology, the potential for its use with drone data could be quite impactful.
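As a sketch of the idea, a single RDF statement in Turtle notation is a subject, predicate, and object, each named by a URI (the URIs below are hypothetical):

<http://data.example.gov/drone-flight/2016-0042> <http://data.example.gov/ns/operatedBy> <http://data.example.gov/agency/public-works> .

Because the agency is named by a stable URI, any other dataset that refers to that same URI is automatically linked to this record.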

No matter the data standard chosen, linked or not, incorporating a reflexive review process should also be considered. This may include some form of a dataset scoring methodology, such as the 5-Star Linked Data System or Montgomery County’s scoring system (Appendix H), in order to ensure that designated datasets comply with both internal and external standards.

Image from: MapBox Project

Hacking Drone Data

Now to the fun stuff. If you’re interested in drone data, there are a few open drone databases and toolkits available for people to use. The data ranges from GIS imaging to airport/airspace information. See MapBox as an example of work (note: this is now part of the B4UFLY smartphone app available from the FAA). Tools and datasets include:

And, finally, for those interested in more operational control of their drone experience, check out these Linux based drones highlighted in 2015 by Network World.

So, will the future of drones include open data? Hopefully. Drones have already proven to be incredibly useful as a means of surveying the environment and for search and rescue efforts. Unfortunately, drones also raise considerable concerns regarding surveillance, security and privacy. The combination of an open data practice with drones therefore requires a proactive, deliberate balancing act. Fortunately, we can and should learn from our past open data faux pas. Projects such as our own CityDrone initiative and our fellow CTSP colleagues’ “Operationalizing Privacy for Open Data Initiatives: A Guide for Cities” project serve as excellent reference points for those interested in opening up their drone data.

by charles at May 05, 2016 12:22 PM

May 04, 2016

Center for Technology, Society & Policy

Exciting Upcoming Events from CTSP Fellows

By Galen Panger, CTSP Director | Permalink

Five of CTSP’s eleven collaborative teams will present progress and reflections from their work in two exciting Bay Area events happening this month, and there’s a common theme: How can we better involve the communities and stakeholders impacted by technology’s advance? On May 17, three teams sketch preliminary answers to questions about racial and socioeconomic inclusion in technology using findings from their research right here in local Bay Area communities. Then, on May 18, join two teams as they discuss the importance of including critical stakeholders in the development of policies on algorithms and drones.

Please join us on May 17 and 18, and help us spread the word by sharing this post with friends or retweeting the tweet below. All the details below the jump.


Inclusive Technologies: Designing Systems to Support Diverse Bay Area Communities & Perspectives

Three collaborative teams from the Center for Technology, Society & Policy share insights from their research investigating racial and socioeconomic inclusion in technology development from the perspective of local Bay Area communities. How do Oakland neighborhoods with diverse demographics use technology to enact neighborhood watch, and are all perspectives being represented? How can low-income families of color in Richmond overcome barriers to participating in our technology futures? Can we help immigrant women access social support and social services through technology-based interventions? Presenters will discuss their thoughts on these questions, and raise important new ones, as they share the preliminary findings of their work.

Tuesday, May 17, 2016
4 p.m. – 5:30 p.m.
202 South Hall, Second Floor
School of Information
Berkeley, CA

Refreshments will be served.

About the Speakers:

  • Fan Mai is a sociologist studying the intersection of technology, culture, identities and mobility. She holds a Ph.D. from the University of Virginia.
  • Rebecca Jablonsky is a professional UX designer and researcher. She holds a Master’s of Human-Computer Interaction from Carnegie Mellon and will be starting a Ph.D. at Rensselaer Polytechnic Institute this fall in Science & Technology Studies.
  • Kristen Barta is a Ph.D. candidate in the Department of Communication at the University of Washington whose research investigates online support spaces and the recovery narratives of survivors of sexual assault. She earned her Master’s from Stanford.
  • Robyn Perry researches language shift in Bay Area immigrant communities and has a background in community organizing and technology. She holds a Master’s from the Berkeley School of Information.
  • Morgan G. Ames is a postdoctoral researcher at the Center for Science, Technology, Medicine, and Society at UC Berkeley investigating the role, and limitations, of technological utopianism in computing cultures.
  • Anne Jonas is a Ph.D. student at the Berkeley School of Information researching education, social justice, and social movements.

 


Involving Critical Stakeholders in the Governance of Algorithms & Drones

Advances in the use of algorithms and drones are challenging privacy norms and raising important policy questions about the impact of technology. Two collaborative teams from the Center for Technology, Society & Policy share insights from their research investigating stakeholder perceptions of algorithms and drones. First, you’ll hear from fellows about their research on user attitudes toward algorithmic personalization, followed by a panel of experts who will discuss the implications. Then, hear from the fellows at CityDrones, who are working with the City of San Francisco to regulate the use of drones by municipal agencies. Can the city adopt drone policies that balance privacy and support innovation?

Wednesday, May 18, 2016
6 p.m. – 7:30 p.m.
1355 Market St, Suite 488
Runway Headquarters
San Francisco, CA

Refreshments will be served. Registration requested.

About the Speakers:

  • Charles Belle is the CEO and Founder of Startup Policy Lab and is an appointed member of the City & County of San Francisco’s Committee on Information Technology.
  • CTSP Fellows Rena Coen, Emily Paul, and Pavel Vanegas are 2016 graduates of the Master’s program at the Berkeley School of Information. They are working collaboratively with the Center for Democracy & Technology to carry out their study of user perspectives toward algorithmic personalization. The study is jointly funded by CTSP and the Center for Long-Term Cybersecurity.

Panelists:

  • Gautam Hans is Policy Counsel and Director of CDT-SF at the Center for Democracy & Technology. His work encompasses a range of technology policy issues, focused on privacy, security and speech.
  • Alethea Lange is a Policy Analyst on the Center for Democracy & Technology’s Privacy and Data Project. Her work focuses on empowering users to control their digital presence and developing standards for fairness and transparency in algorithms.
  • Jen King is a Ph.D. candidate at the Berkeley School of Information, where she studies the social aspects of how people make decisions about their information privacy, and how privacy by design — or the lack of it — influences privacy choices.

 
Stay tuned for more events from the CTSP fellows!

by Galen Panger at May 04, 2016 09:49 PM

April 26, 2016

Ph.D. student

Reflections on CSCW 2016

CSCW 2016 (ACM’s conference on Computer Supported Cooperative Work and Social Computing) took place in San Francisco last month. I attended (my second time at this conference!), and it was wonderful meeting new and old colleagues alike. I thought I would share some reflections and highlights that I’ve had from this year’s proceedings.

Privacy

Many papers addressed issues of privacy from a number of perspectives. Bo Zhang and Heng Xu study how behavioral nudges can shift behavior toward more privacy-conscious actions, rather than merely providing greater information transparency and hoping users will make better decisions. A nudge showing users how often an app accesses phone permissions made users feel creeped out, while a nudge showing other users’ behaviors reduced users’ privacy concerns and elevated their comfort. I think there may be value in studying the emotional experience of privacy (such as creepiness), in addition to traditional measurements of disclosure and comfort. To me, the paper suggests a further ethical question about the use of paternalistic measures in privacy. Given that nudges could affect users’ behaviors both positively and negatively toward an app, how should we make ethical decisions when designing nudges into systems?

Looking at the role of anonymity, Ruogu Kang, Dabbish, and Sutton conducted interviews with users of anonymous smartphone apps, focusing on Whisper and YikYak. They found that users mostly use these apps to post personal disclosures and do so for social reasons: social validation from other users, making short-term connections (on or off-line), sharing information, or avoiding social risk and context collapse. Anonymity and a lack of social boundaries allowed participants to feel alright venting certain complaints or opinions that they wouldn’t feel comfortable disclosing on non-anonymous social media.

Investigating privacy and data, Peter Tolmie, Crabtree, Rodden, Colley, and Luger discuss the need for articulation work in order to make fine-grained sensing data legible, challenging the notion that the more data home sensing systems collect, the more that can be learned about the individuals in the home. Instead, they find that in isolation, these personal data are quite opaque. Someone is needed to explain the data and provide contextual insights, and social and moral understandings of data – for subjects, the data does not just show events but also provides insight into what should (or should not) be happening in the home. One potential side effect of home data collection might be that the data surfaces practices that otherwise might not be seen (such as activities in a child’s room), which might create concerns about accountability and surveillance within a home.

Related to privacy and data, Janet Vertesi, Kaye, Jarosewski, Khovansakaya, and Song frame data management practices in the context of moral economy. I found this a welcome perspective to online privacy issues, adding to well-researched perspectives of information disclosure, context, and behavioral economics. Using mapping techniques with interviewees, the authors focused on participants’ narratives over their practices, finding a strong moral undertone to the way people managed their personal data – managing what they considered “their data” in a “good” or “appropriate” way. Participants spoke about managing overlapping systems, devices, and networks, but also managing multiple human relationships – both with other individuals and with the companies making the products and services. I found two points particularly compelling: First, the description that interviewees did not describe sharing data as a performance to an audience, but rather as a moral action (e.g. being a “good daughter” means sharing and protecting data in particular ways). Second, that given the importance participants placed on the moral aspects of data management, many feared that changes in companies’ products or interfaces would make it harder to manage data in “the right way,” rather than fearing inadvertent data disclosure.

Policy

I was happy to see continued attention at CSCW to issues of policy. Casey Fiesler, Lampe, and Bruckman investigate the terms of service for copyright of online content. Drawing some parallels to research on privacy policies, they find that users are often unaware of sites’ copyright policies (which are often not very readable), but do care about content ownership. They note that different websites use very different licensing terms, and that some users may have a decent intuition about some rights. However, there are some licensing terms across sites that users often do not expect or know about – such as the right for sites to modify users’ content. These misalignments could be potentially problematic. Their work suggests that clearer copyright terms of service could be beneficial. While this approach has been heavily researched in the privacy space to varying degrees of success, there is a clear set of rights associated with copyright which (at the outset at least) would seem to indicate plain language descriptions may be useful and helpful to users.

In another discussion of policy, Alissa Centivany uses the case of HathiTrust (a repository of digital content from research libraries in partnership with Google Books) to frame policy not just as a regulating force, but as a source of embedded generativity – that policy can also open up new spaces and possibilities (and foreclose others). Policy can open and close technical and social possibilities similar to the way design choices can. Specifically, she cites the importance of a specific clause in the 2004 agreement between the University of Michigan and Google that allowed the University the right to use its digital copies “in cooperation with partner research libraries,” which eventually led to the creation of HathiTrust. HathiTrust represents an emergent system out of the conditions of possibility enabled by the policy. It’s also important to recognize that policies can foreclose other possibilities – for example, the restriction to other library consortia excludes the Internet Archive from HathiTrust. In the end, Centivany posits that policy is a potential non-technical solution to help bridge the sociotechnical gap.

Infrastructure

I was similarly pleased to see several papers using the lens of infrastructure. Ingrid Erickson and Jarrahi investigate knowledge workers’ experiences of seams and the workarounds they create. They find both technological and contextual constraints that create seams in infrastructure: technological constraints include public Wi-Fi that doesn’t accommodate higher-bandwidth applications like Skype, or incompatibility between platforms; contextual constraints include the locations of available 4G and Wi-Fi access, or cafes that set time limits on how long patrons can use free Wi-Fi. Workers respond with a number of “workarounds” when work systems and infrastructures do not fully meet their needs: bridging these gaps, assembling new infrastructural solutions, or circumventing regulations.

Susann Wagenknecht and Matthias Korn look at hacking as a way to critically engage with and (re)make infrastructures, noting that hacking is one way to make tacit conventions visible. They follow a group of German phone hackers who “open” the GSM mobile phone system (a system much more closed and controlled by proprietary interests than the internet) by hacking phones and creating alternative GSM networks. Through reverse engineering, re-implementing parts of the system, and running their own versions of it, the hackers appropriate knowledge about the GSM system: how it functions; how to repair, build, and maintain it; and how to control where and by whom it is used. These hacking actions can be considered “infrastructuring,” as they render network components visible and open to experimentation, and contribute toward a sociotechnical imaginary of GSM as a more transparent and open system.

Time

Adding to a growing body of CSCW work on time, Nan-Chen Chen, Poon, Ramakrishnan, and Aragon investigate the role of time and temporal rhythms in a high performance computing center at a national lab, following in the vein of other authors’ work on temporal rhythms that I thoroughly enjoy (Mazmanian, Erickson & Harmon, Lindley, Sharma, and Steinhardt & Jackson). They draw on collective notions of time over individual ones, finding frictions between human and computer patterns of time, and between human and human patterns of time. For instance, scientists doing research and writing code have to weigh the (computer) time patterns related to code efficiency against the (human) time patterns related to project schedules or learning additional programming skills. Or they may have to weigh the (human) time it takes to debug their own code against the (human) time it takes to get another person to help debug the code and make it more efficient. In this timesharing environment, scientists have to juggle multiple temporal rhythms and the temporal uncertainties caused by system updates, queue waiting time, human prep work, and other unexpected delays.

Cooperation and Work

Contributing to the heart of CSCW, several research papers studied problems at the forefront of cooperation and work. Carman Neustaedter, Venolia, Procyk, and Hawkins reported on a study of remote telepresence robots (“Beams”) at the ACM UbiComp and ISWC conferences. Notably, telepresence robot use at academic conferences differed greatly from use in office contexts. Issues of autonomy are particularly interesting: is it all right for someone to physically move the robot? How could Beam users benefit from feedback about their microphone volume, peripheral cameras, or other ways to convey social cues? Some remote users managed privacy in their home environments by blocking the camera or turning off the microphone, but had a harder time managing privacy in the public conference environment – for instance, speaking more loudly than intended. Some participants also formed strong ties with their remote Beams, feeling that they were in the “wrong body” when their feed transferred between robots. It was fascinating to read this paper and compare it to my personal experience seeing and interacting with Beams at CSCW 2016.

Tawanna Dillahunt, Ng, Fiesta, and Wang research how MOOCs support (or don’t support) employability, with a focus on low socioeconomic status learners, a population that is not well understood in this environment. They note that while MOOCs (Massive Open Online Courses) help provide human capital (learners attain new skills like programming), they lack support for increasing social capital, forming career identity, and building personal adaptability. Many low socioeconomic status learners said that they could not afford formal higher education (due to financial cost, time, family obligations, and other reasons). Most felt that MOOCs would be beneficial to employment and, unlike a broader population of respondents, largely were not concerned about the lack of accreditation for MOOCs. They did, however, discuss other barriers, such as a lack of overall technical literacy, or the fact that MOOCs can’t substitute for actual experience.

Noopur Raval and Dourish bring concepts from feminist political economy to the experience of ridesharing crowd labor. They draw on notions of immaterial labor, affective labor, and temporal cultural politics – all of which are traditionally not considered “work.” They find that Uber and Lyft drivers must engage in affective and immaterial labor, needing to perform the identity of a five-star driver while frustrated that many passengers don’t understand how the rating system is weighted. Drivers’ status as contractors provides individual opportunities for microbranding, but also creates individual risks that may pit customers’ desires and drivers’ safety against each other. Through their paper, the authors suggest that we may need to reconceptualize ideas of labor if previously informal activities are now considered work, and that we may be able to draw more strongly from labor relations studies and labor theory.

Research Methods and Ethics

Several papers provided reflections on doing research in CSCW. Jessica Vitak, Shilton, and Ashktorab present survey results on researchers’ ethical beliefs and practices when using online data sets. Online research poses challenges to the ways Institutional Review Boards have traditionally interpreted the Belmont Report’s principles of respect, beneficence, and justice. One finding was that researchers believe ethical standards, norms, and practices differ across disciplines and work settings (such as academia or industry); I’ve often heard this discussed anecdotally by people working in the privacy space. However, the authors find that ethical attitudes do not in fact vary significantly across disciplinary boundaries, and that there is general agreement on five practices that may serve as a set of foundational ethical research practices. This opens the possibility for researchers across disciplines, and across academia and industry, to unite around and learn from a common set of data research practices.

Daniela Rosner, Kawas, Li, Tilly, and Sung provide great insight into design workshops as a research method, particularly when things don’t go quite as the researcher intends. They look at design workshops as sites of study, as research instruments, and as a process that invites researchers to reflexively examine their research practices and methods. They suggest that workshops engage with different types of temporal relations – both the long-lasting and meaningful relationships participants may have with objects before and after the workshops, and the temporal rhythms of the workshops themselves. What counts as participation? The timing of a workshop may not match the time participants want to, or can, spend. Alternate (sometimes unintended or challenging) practices that participants bring to workshops can be useful too. The authors provide important insights that might make us rethink how we define participation in CSCW (and perhaps in interventionist and participatory work more broadly), and how we can gain insights from interventional and exploratory research approaches.

In all, I’m excited by the directions CSCW research is heading in, and I’m very much looking forward to CSCW 2017!


by Richmond at April 26, 2016 06:33 PM

April 25, 2016

Center for Technology, Society & Policy

Developing Strategies to Counter Online Abuse

By Nick Doty, CTSP | Permalink

We are excited to host a panel of experts this Wednesday, talking about strategies for making the Internet more gender-inclusive and countering online harassment and abuse.

Toward a Gender-Inclusive Internet: Strategies to Counter Harassment, Revenge Porn, Threats, and Online Abuse
Wednesday, April 27; 4:10-5:30 pm
202 South Hall, Berkeley, CA
Open to the public; Livestream available

These are experts and practitioners in law, journalism and technology with an interest in the problem of online harassment. And more importantly, they’re all involved with ongoing concrete approaches to push back against this problem (see, for example, Activate Your Squad and Block Together). While raising awareness about online harassment and understanding the causes and implications remains important, we have reached the point where we can work on direct countermeasures.

The Center for Technology, Society & Policy intends to fund work in this area, which we believe is essential for the future of the Internet and its use for digital citizenship. We encourage students, civil society and industry to identify productive collaborations. These might include:

  • hosting a hackathon for developing moderation or anti-harassment tools
  • drafting model legislation to address revenge porn while maintaining support for free expression
  • standardizing block lists or other tools for collaboratively filtering out abuse
  • meeting in reading groups, support groups or discussion groups (sometimes beer and pizza can go a long way)
  • conducting user research and needs assessments for different groups that encounter online harassment in different ways

And we’re excited to hear other ideas: leave your comments here, join us on Wednesday, or get in touch via email or Twitter.


See also:

by Nick Doty at April 25, 2016 09:50 PM

April 15, 2016

Center for Technology, Society & Policy

Please Can We Not Try to Rationalize Emoji

By Galen Panger, CTSP Director | Permalink

Emoji are open to interpretation, and that’s a good thing. Credit: Samuel Barnes

This week a study appeared on the scene suggesting an earth-shattering, truly groundbreaking notion: Emoji “may be open to interpretation.”

And then the headlines. “We Really Don’t Know What We’re Saying When We Use Emoji,” a normally level-headed Quartz proclaimed. “That Emoji Does Not Mean What You Think It Means,” Gizmodo declared. “If Emoji Are the Future of Communication Then We’re Screwed,” New York Magazine cried, obviously not trying to get anyone to click on its headline.

Normally I might be tempted to blame journalists for sensationalizing academic research, but in this instance, I think the fault actually lies with the research. In their study, Hannah Miller, Jacob Thebault-Spieker and colleagues from the University of Minnesota took a bunch of smiley face emoji out of context, asked a bunch of people what they meant, and were apparently dismayed to find that, 25% of the time, people didn’t even agree on whether a particular emoji was positive or negative. “Overall,” the authors write, “we find significant potential for miscommunication.”

It’s odd that an academic paper apparently informed by such highfalutin things as psycholinguistic theory would be concerned that words and symbols can have a range of meanings, even going so far as to be sometimes positive and sometimes negative. But of course they do. The word “crude” can refer to “crude” oil, or it can refer to the double meanings people are assigning to emoji of fruits and vegetables. “Crude” gains meaning in context. That people might not agree on what a word or symbol means outside of the context in which it is used is most uninteresting.

The authors mention this at the end of their paper. “One limitation of this work is that it considered emoji out of context (i.e., not in the presence of a larger conversation).” Actually, once the authors realized this, they should have started over and come up with a research design that included context.

The fact that emoji are ambiguous, can stand for many things, and might even evolve to stand for new things, is part of what makes them expressive. It’s part of what makes them dynamic and fun, and trying to force a one-to-one relationship between emoji and interpretation would make them less, not more, communicative. So please, if we’re going to try to measure the potential for miscommunication wrought by our new emoji society, let’s measure real miscommunication. Not normal variations in meaning that might be clear (even clever!) in context or that might be clarified during the normal course of conversation. Or that might remain ambiguous but at least not harm our understanding (while still making our message just that much cuter 💁). Once we’ve measured actual miscommunication, then we can decide whether we want to generate a bunch of alarmist headlines or not.

That said, all of the headlines the authors generated with their study did help to raise awareness of a legitimate problem for people texting between platforms like iOS and Android. Differences in how a few emoji are rendered by different platforms can mean we think we’re sending a grinning face, when in fact we’re sending a grimacing one. Or perhaps we’re sending aliens. “I downloaded the new iOS platform and I sent some nice faces,” one participant in the study said, “and they came to my wife’s phone as aliens.”

That’s no good. Although, at least they were probably cute aliens. 🙌
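For the technically curious, the underlying mechanics are simple: what travels between phones is just a Unicode codepoint, and each vendor substitutes its own glyph on arrival. A quick Python illustration (which glyph you actually see depends entirely on where you run it):

    # U+1F601 is the codepoint behind the "grinning face" that some platforms
    # have rendered closer to a grimace. The codepoint is identical everywhere;
    # only the vendor's artwork differs.
    ch = "\U0001F601"
    print(ch, hex(ord(ch)))  # the emoji, plus 0x1f601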

Cross-posted to Medium.

by Galen Panger at April 15, 2016 11:58 PM

April 12, 2016

Center for Technology, Society & Policy

Start Research Project. Fix. Then Actually Start.

By Robyn Perry, CTSP Fellow | Permalink

If you were living in a new country, would you know how to enroll your child in school, get access to health insurance, or find affordable legal assistance? And if you didn’t, how would you deal?

As Maggie, Kristen, and I are starting to interview immigrant women living in the US and the organizations that provide support to them, we are trying to understand how they deal – particularly, how they seek social support when they face stress.

This post gives a bit of an orientation to our project and helps us document our research process.

We’ve developed two semi-structured questionnaires to guide our interviews: one for immigrants and one for staff at service providers (organizations that provide immigrants with legal aid, job training, and access to resources for navigating life in the US, and otherwise support their entry and integration). We are seeking to learn about women immigrants who have been in the US between one and seven years. All interviews are conducted in English by one of the team members. Because we will each be conducting one-on-one interviews separately from each other, we are striving for a high degree of consistency in our interview process.

As we have begun interviewing, we’ve realized a couple of things:

  1. Balancing specificity and ambiguity is difficult, but necessary. We need questions that are agile enough to be adapted to each respondent, but that yield answers not so divergent that we cannot compare them once we do our analysis.
  2. The first version of our interview questions needed to be sharpened to better enable us to learn more about the particular technologies respondents use for specific types of relationships and with respect to different modes of social support. Without this refinement, it may have been difficult to complete the analysis as we had intended.

We have sought advice from several mentors, and that has helped us arrive at our current balance of trade-offs. We have found support in Gina Neff and Nancy Rivenburgh, Maggie and Kristen’s PhD advisors, as well as in Betsy Cooper, the new director of the Center for Long-Term Cybersecurity.

When we met with Betsy, she took about 30 seconds to assess the work we had done so far and begin to offer suggestions for dramatic improvements. She challenged us to reexamine our instruments, asking, “Do the answers you will get by asking these questions actually get at your big research questions?” The other advisors have challenged us similarly.

This helpful scrutiny has pushed us to complete a full revision of our migrant interview questions. Betsy also recommended we further narrow our pool of immigrants to one to three geographical regions of origin, so that we might either compare between groups or make qualified claims about at least one subset of interviewees. As a result, we would like to interview at least five people within particular subsets based on geography or other demographics. We are making an effort to interview in geographical clusters. For example, we are angling to interview individuals originally from East Africa, Central America, and/or Mexico, given each group’s large presence at both sites.

However, we anticipate that there may be greater similarities among immigrants who are similarly positioned in terms of socioeconomic status, and potentially reason for immigrating, than among those from similar geographies. We’ll be considering as many points of similarity between interviewees as possible.

We’re finding that these three scholars – Nancy, Gina, and Betsy – fall on different points of the spectrum between preferring structured interviews and eschewing rigidity, letting the interview take whatever direction the researcher finds most fruitful during conversation. Our compromise has been to sharpen the questions we step through with interviewees (removing or reducing ambiguity as much as possible), while leaning on clarifying notes and subquestions when the themes we’re looking for don’t emerge organically from the answer to the broader question.

Homing in on what’s really important is both imperative and iterative. In addition to our questionnaire, this includes our definitions, objectives, and projected outcomes. This may sound like a banality, but doing so has been quite challenging. For example, what specifically do we mean by ‘migrant’, ‘social support’, ‘problem’, or ‘stress’? Reaching group clarity about this is essential. We also must remain as flexible as possible: in the spirit of our work, we recognize we have a lot to learn, and artificially rigid definitions may not position us well to learn from those we are interviewing (or even to find the right people to interview).

As we seek clarity in our definitions, we’ve looked to existing models, in the great scientific tradition of standing on the shoulders of others. Social support defies easy definition, but one helpful distinction, in an article by Cutrona & Suhr, splits social support into two types: nurturant (attempts to care for the person rather than the problem) and action-facilitating (attempts to mitigate the problem causing the stress). We’ve found this distinction helpful as both a guide and a foil in revising our questionnaire to clarify what we’re looking for. We may find another classification from the literature that better matches what we find in our interviews, so we’ll stay open to that possibility.

Stay tuned. We’re excited to share what we learn as we get deeper into the interviews and begin analysis in a couple of weeks.

Cross-posted over at the Migrant Women & Technology Project blog.

by Robyn Perry at April 12, 2016 05:50 PM

April 07, 2016

Center for Technology, Society & Policy

Moderating Harassment in Twitter with Blockbots

By Stuart Geiger, ethnographer and post-doctoral scholar at the Berkeley Institute for Data Science | Permalink

I’ve been working on a research project about counter-harassment projects in Twitter, where I’ve been focusing on blockbots (or bot-based collective blocklists) in Twitter. Blockbots are a different way of responding to online harassment, representing a more decentralized alternative to the standard practice of moderation — typically, a site’s staff has to go through their own process to definitively decide what accounts should be suspended from the entire site. I’m excited to announce that my first paper on this topic will soon be published in Information, Communication, and Society (the PDF on my website and the publisher’s version).

This post is a summary of that article and some thoughts about future work in this area. The paper is based on my empirical research, but it takes a more theoretical and conceptual approach given how novel these projects are. I give an overview of what blockbots are, the context in which they have emerged, and the issues they raise about how social networking sites are to be governed and moderated with computational tools. There is room for much future research here, and I hope to see more work from a variety of disciplines and methods.

What are blockbots?

Blockbots are automated software agents developed and used by independent, volunteer users of Twitter, who have developed their own social-computational tools to help moderate their own experiences on Twitter.

The blocktogether.org interface, which lets people subscribe to other people’s blocklists, publish their own blocklists, and automatically block certain kinds of accounts.

Functionally, blockbots work similarly to ad blockers: people can curate lists of accounts they do not wish to encounter, and others can subscribe to these lists. To subscribe, you give the blockbot limited access to your account so that it can update your blocks based on the blocklists you subscribe to. One of the most popular platforms for supporting blockbots on Twitter is blocktogether.org, which hosts the popular ggautoblocker project and many smaller, private blocklists. Blockbots were developed to help combat harassment on the site, particularly coordinated harassment campaigns, although they are a general-purpose approach that can be used to filter across any group or dimension. (More on this later in this post.)
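To make the subscription mechanics concrete, here is a minimal sketch of the core loop such a service performs on a subscriber’s behalf. It assumes the Tweepy library and the Twitter v1.1 REST API; the function and variable names are mine, not blocktogether.org’s actual code.

    import tweepy

    # Hypothetical credentials: a real service like blocktogether.org holds a
    # limited OAuth token that each subscriber has granted it.
    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("SUBSCRIBER_TOKEN", "SUBSCRIBER_SECRET")
    api = tweepy.API(auth)

    def sync_blocks(api, shared_blocklist):
        """Apply a shared blocklist (a set of numeric user IDs) to the
        subscriber's account, skipping accounts already blocked."""
        already_blocked = set(tweepy.Cursor(api.blocks_ids).items())
        for user_id in shared_blocklist:
            if user_id not in already_blocked:
                api.create_block(user_id=user_id)

The point of the design is that curation (deciding who goes on the list) is decoupled from enforcement (each subscriber’s account applying the list), which is what lets one list serve many people.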

A subscription-based model

Blockbots extend the functionality of the social networking site to make the work of responding to harassment more efficient and more communal. Blockbots are based on the standard feature of individual blocking, in which users can hide specific accounts from their experience of the site. Blocking has long been directly integrated into Twitter’s user interfaces, which is necessary because by default, any user on Twitter can send tweets and notifications to any other user. Users can make their accounts private, but this limits their ability to interact with a broad public — one of the big draws of Twitter versus more tightly bound social networking sites like Facebook.

For those who wish to use Twitter to interact with a broad public and find themselves facing rising harassment, abuse, trolling, and general incivility, the typical solution is to individually block accounts. Users can also report harassing accounts to Twitter for suspension, but this process has long been accused of being slow and opaque. People also have quite different ideas about what constitutes harassment, and the threshold to get suspended from Twitter is relatively high. As a result, those who receive unsolicited remarks face a Sisyphean task of individually blocking every account that sends them inappropriate mentions. This is a large reason why a subscription model has emerged in bot-based collective blocklists.

Some blockbots use lists curated by a single person, others use community-curated blocklists, and a final set use algorithmically generated blocklists. The benefit of blockbots is that they let ad-hoc groups form around common understandings of what they want their experiences on the site to be. Blockbots are opt-in, and they apply only to the people who subscribe to them. There are groups who coordinate campaigns against specific individuals (I need not name specific movements, but you can observe this by reading the abusive tweets received in just one week by Anita Sarkeesian of Feminist Frequency, many of which are incredibly emotionally disturbing). With blockbots, the work of responding to harassers can be efficiently distributed to a group of like-minded individuals or delegated to an algorithmic process.

Blockbots first emerged on Twitter in 2012, and their development followed a common trend in technological automation. People who were part of the same online community found that they were being harassed by a common set of accounts. They first shared these accounts manually whenever they encountered them, but then found that this process of sharing blockworthy accounts could be automated and made more collective and efficient. Since then, the computational infrastructure has grown by leaps and bounds. It has been standardized with the development of the blocktogether.org service, which makes it easy and efficient for blocklist curators and subscribers to connect. People no longer need to develop their own bots; they only need to develop their own process for generating a list of accounts.

How are public platforms governed with data and algorithms?

Beyond the specific issue of online harassment, blockbots are an interesting development in how public platforms are governed with data and algorithmic systems. Typically, the responsibility for moderating behavior on social networking sites and discussion forums falls to the organizations that own and operate these sites. They have to both make the rules and enforce them, which is increasingly difficult to do at the scale many of these platforms have achieved. At this scale, not only is there a substantial amount of labor involved in this moderation work, but it is also increasingly unlikely that a common understanding exists about what is acceptable and unacceptable. As a result, there has been a proliferation of flagging and reporting features designed to collect information from users about what material they find inappropriate, offensive, or harassing. These user reports are then fed into a complex system of humans and algorithmic agents, who evaluate the reports and sometimes take action in response.

I can’t write too much more about the flagging/reporting process on the platform operator’s side in Twitter, because it largely takes place behind closed doors. I haven’t found too many people who are satisfied with how moderation takes place on Twitter. There are people who claim that Twitter is not doing nearly enough to suspend accounts that are sending harassing tweets to others, and there are people who claim that Twitter has gone too far when they do suspend accounts for harassment. This is a problem faced by any centralized system of authority that has to make binary decisions on behalf of a large group of people; the typical solution is what we usually call “politics.” People petition and organize around particular ideas about how this centralized system ought to operate, seeking to influence the rules, procedures, and people who make up the system.

Blockbots are a quite different mode of using data and algorithms to moderate large-scale platforms. They are still political, but they operate according to a different model of politics than the top-down approach that places substantial responsibility for governing a site on platform operators. In my studies of blockbot projects, I’ve found that members of these groups have serious discussions and debates about what kind of activity they are trying to identify and block. I’ve even seen groups fracture and fork over different standards of blockworthiness – which I think can sometimes be productive. A major benefit of blockbots is that they do not operate under a centralized system of authority where there is only one standard of blockworthiness, such that someone is either allowed to contact anyone or no one.

Blockbots as counterpublic technologies

In the paper, I analyze blockbot projects as counterpublics, borrowing the term Nancy Fraser coined in her excellent critique of Jürgen Habermas’s account of the public sphere. Fraser argues that there are many publics where people assemble to discuss issues relevant to them, but only a few of these publics get elevated to the status of “the public.” She argues that we need to pay attention to the “counterpublics” that are created when non-dominant groups find themselves excluded from more mainstream public spheres. Typically, counterpublics have been analyzed as “separate discursive spaces”: safe spaces where members of these groups can assemble without facing the chilling effects that are common in public spheres. Blockbots, however, are a different way of parallelizing the public sphere than the ones scholars of the public sphere have historically analyzed.

One aspect of counterpublics is that they serve as sites of collective sensemaking: they are spaces where members of non-dominant groups can work out their own understandings of the issues they face. I found a substantial amount of collective sensemaking in these groups, visible in the intense debates that sometimes take place over defining standards of blockworthiness. Because a blockbot can be easily forked (particularly with the blocktogether.org service), people are free to imagine and implement all kinds of possibilities for how to define harassment or any other criterion. People can also introduce new processes for curating a blocklist, such as adding a human appeals board to a blocklist generated by an algorithmic process. I’ve also seen a human-curated blocklist move from a “two eyes” to a “four eyes” principle, requiring that every addition to the blocklist be approved by a second authorized curator before it is synchronized with all the subscribers.
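As a toy illustration of that last process (my own sketch, not any project’s actual code), a “four eyes” rule can be as simple as holding each proposed addition until a second, different curator approves it:

    # A proposed account only reaches the shared blocklist -- and thus the
    # subscribers -- once a second curator signs off on it.
    pending = {}        # candidate user_id -> curator who proposed it
    blocklist = set()   # what subscribers actually synchronize against

    def propose(user_id, curator):
        pending[user_id] = curator

    def approve(user_id, curator):
        # Require a different curator than the one who proposed the account.
        if user_id in pending and pending[user_id] != curator:
            pending.pop(user_id)
            blocklist.add(user_id)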

Going beyond “What is really harassment?”

Blockbots were originally created as counter-harassment technologies, but harassment is a very complicated term — one that even has different kinds of legal significance in different jurisdictions. One of the things I have found in conducting this research is that if you ask a dozen people to define harassment, you’ll get two dozen different answers. Some people who have found themselves on blocklists have claimed that they do not deserve to be there. And like in any debate on the Internet, there have even been legal threats made, including those alleging infringements of freedom of speech. I do think that the handful of major social networking sites are close to having a monopoly on mediated social interaction, and so the decisions they make about who to suspend or ban are ones we should look at incredibly closely. However, I think it is important to acknowledge these multiple definitions of harassment and other related terms, rather than try and close them down and find one that will work for everyone.

I think it is important and useful to move away from having a single authoritative system that returns a binary decision about whether an activity is or is not allowed for all users of a site. I’ve seen controversies over this not just with harassment/abuse/trolling on Twitter, but also with things like photos of breastfeeding on Facebook. We should be exploring tools that give people more agency to moderate their own experiences on social networking sites – ‘better’ here meaning both more efficiently and more collectively. Facebook already uses sophisticated machine learning models to try to intuit what it thinks you want to see (i.e. what will keep you on the site looking at ads the longest), but I’d rather see this take place in a more deliberate and transparent manner, where people take an active role in defining their own expectations.

I also think it is important to distinguish between the right to speak and the right to be heard, particularly in privately owned social networking sites. Being placed on a blocklist means that someone’s potential audience is cut, which can be a troubling prospect for people who are used to their speech being heard by default. In the paper, I discuss how modes of filtering and promotion are the mechanisms through which cultural hegemony operates; scholars who focus on marginalized and non-dominant groups have long noted the need to investigate such mechanisms. However, I also review the literature on how harassment, trolling, incivility, and related phenomena are themselves ways in which people are pushed out of public participation. The public sphere has never been neutral, although the fiction that it is a neutral space where all are on equal ground is one that has long been advanced by people who have a strong voice in such spaces.

How do you best build a decentralized classification system?

One issue relevant to these projects is the kind of false positive versus false negative rates we are comfortable having. No classification system is perfect (Bowker and Star’s Sorting Things Out is a great book on this), and it isn’t hard to see why someone facing a barrage of unwanted messages might be more willing to accept a false positive than a false negative. On this issue, I see an interesting parallel with Wikipedia’s quality control systems, which my collaborators and I have written about extensively. There was a point when Wikipedians were facing a substantial amount of vandalism and hate speech in the “anyone can edit” encyclopedia, far too much for them to tackle on their own, and they developed a lot of sophisticated tools in response (see The Banning of a Vandal and Wikipedia’s Immune System). However, my collaborators and I found that there are a lot of false positives, and this can inhibit participation among the good-faith newcomers who get hit as collateral damage. And so there have been some really interesting projects to try to correct that, using new kinds of feedback mechanisms, user interfaces, and Application Programming Interfaces (like Snuggle and ORES, led by Aaron Halfaker).
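As a toy illustration of that trade-off (mine, not from any of these papers or tools), consider how a single score threshold in an algorithmically generated blocklist trades one kind of error against the other:

    def error_rates(scored_accounts, threshold):
        """scored_accounts: (score, truly_blockworthy) pairs, where score is
        a model's estimate and truly_blockworthy is ground truth.
        Returns (false_positive_rate, false_negative_rate)."""
        fp = sum(1 for s, bad in scored_accounts if s >= threshold and not bad)
        fn = sum(1 for s, bad in scored_accounts if s < threshold and bad)
        good_faith = sum(1 for _, bad in scored_accounts if not bad) or 1
        blockworthy = sum(1 for _, bad in scored_accounts if bad) or 1
        return fp / good_faith, fn / blockworthy

    # Lowering the threshold catches more harassing accounts (fewer false
    # negatives) but sweeps in more good-faith accounts as collateral damage
    # (more false positives) -- the same tension Wikipedia's vandal-fighting
    # tools ran into.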

I suspect that if this decentralized approach to moderation in social networking sites gets more popular, we might see a whole sub-field emerge around this issue, extending work done in spam filtering and recommender systems. Blockbots are still at the initial stages of their development, and I think there is a lot of work still to be done. How do we best design and operate a social and technological system so that people with different ideas about what constitutes harassment can thoughtfully and reflectively work out those ideas? How do we give people the support they need, so that responding to harassment isn’t something they have to do on their own? And how can we do this at scale, leveraging computational techniques without losing the nuance and context that is crucial for this kind of work? Thankfully, there are lots of dedicated, energetic, and bright people working on these kinds of issues and thinking about these questions.

Personal issues around researching online harassment

I want to conclude by sharing some anxieties I face in publishing this work. In my studies of these counter-harassment projects, I’ve seen the people who take a lead on them become targets themselves. Often this stays at the level of trolling and incivility, but it has extended to more traditional definitions of harassment, such as people contacting someone’s employer to try to get them fired, or people sending photos of themselves taken outside that person’s place of work. In some cases, it becomes something closer to domestic terrorism, with documented cases of people who have had the police come to their house because someone reported a hostage situation at their address, as well as people who have had to cancel presentations because someone threatened to bring a gun and open fire on their talk.

Given these situations, I’d be lying if I said I wasn’t concerned that this kind of activity might come my way. However, this is part of what the current landscape around online harassment is like. It shows how significant this problem is and how important it is that people work on it using many methods and strategies. In the paper, I spend some time arguing why I don’t think blockbots are part of the dominant trend of “technological solutionism,” in which a new technology is celebrated as the definitive fix for what is ultimately a social problem. The people who work on these projects don’t talk about them in this solutionist way either. Blockbots address the symptoms of a larger issue, which is why I am glad that people are working on multifaceted projects and initiatives that investigate and tackle the root causes of harassment, like HeartMob, Hollaback, Women, Action, and the Media, Crash Override, the Online Abuse Prevention Initiative, the many researchers working on harassment (see this resource guide), the members of Twitter’s recently announced Trust and Safety Council, and many more people and groups I’m inevitably leaving out.

Cross-posted to the Berkeley Institute for Data Science.

by Stuart Geiger at April 07, 2016 07:21 PM

April 06, 2016

Ph.D. alumna

Where Do We Find Ethics?

I was in elementary school, watching live on TV, when the Challenger exploded. My classmates and I were stunned and confused by what we saw. With the logic of a 9-year-old, I wrote a report on O-rings, trying desperately to make sense of a science I did not know and a public outcry I couldn’t truly understand. I wanted to be an astronaut (and I wouldn’t give up that dream until high school!).

Years later, with a lot more training under my belt, I became fascinated not simply by the scientific aspects of the failure, but by the organizational aspects of it. Last week, Bob Ebeling died. He was an engineer at a contracting firm, and he understood just how badly the O-rings handled cold weather. He tried desperately to convince NASA that the launch was going to end in disaster. Unlike many people inside organizations, he was willing to challenge his superiors, to tell them what they didn’t want to hear. Yet, he didn’t have organizational power to stop the disaster. And at the end of the day, NASA and his superiors decided that the political risk of not launching was much greater than the engineering risk.

Organizations are messy, and the process of developing and launching a space shuttle or any scientific product is complex and filled with trade-offs. This creates an interesting question about the site of ethics in decision-making. Over the last two years, Data & Society has been convening a Council on Big Data, Ethics, and Society where we’ve had intense discussions about how to situate ethics in the practice of data science. We talked about the importance of education and the need for ethical thinking as a cornerstone of computational thinking. We talked about the practices of ethical oversight in research, deeply examining the role of IRBs and the different oversight mechanisms that can and do operate in industrial research. Our mandate was to think about research, but, as I listened to our debates and discussions, I couldn’t help but think about the messiness of ethical thinking in complex organizations and technical systems more generally.

I’m still in love with NASA. One of my dear friends — Janet Vertesi — has been embedded inside different spacecraft teams, understanding how rovers get built. On one hand, I’m extraordinarily jealous of her field site (NASA!!!), but I’m also intrigued by how challenging it is to get a group of engineers and scientists to work together for what sounds like an ultimate shared goal. I will never forget her description of what can go wrong: Imagine if a group of people were given a school bus to drive, only they were each given a steering wheel of their own and had to coordinate among themselves which way to go. Introduce power dynamics, and it’s amazing what all can go wrong.

Like many college students, encountering Stanley Milgram’s famous electric shock experiment floored me. Although I understood why ethics reviews came out of the work that Milgram did, I’ve never forgotten the moment when I fully understood that humans could do inhuman things because they’ve been asked to do so. Hannah Arendt’s work on the banality of evil taught me to appreciate, if not fear, how messy organizations can get when bureaucracies set in motion dynamics in which decision-making is distributed. While we think we understand the ethics of warfare and psychology experiments, I don’t think we have the foggiest clue how to truly manage ethics in organizations. As I continue to reflect on these issues, I keep returning to a college debate that has constantly weighed on me. Audre Lorde said, “the master’s tools will never dismantle the master’s house.” And, in some senses, I agree. But I also can’t see a way of throwing rocks at a complex system that would enable ethics.

My team at Data & Society has been grappling with different aspects of ethics since we began the Institute, often in unexpected ways. When the Intelligence and Autonomy group started looking at autonomous vehicles, they quickly realized that humans were often left in the loop to serve as “liability sponges,” producing “moral crumple zones.” We’ve seen this in organizations for a long time. When a complex system breaks down, who is to be blamed? As the Intelligence & Autonomy team has shown, this only gets more messy when one of the key actors is a computational system.

And that leaves me with a question that plagues me as we work on our Council on Big Data, Ethics, and Society whitepaper: How do we enable ethics in the complex big data systems that are situated within organizations, influenced by diverse intentions and motivations, shaped by politics and organizational logics, complicated by issues of power and control?

No matter how thoughtful individuals are, no matter how much foresight people have, launches can end explosively.

(This was originally posted on Points.)

by zephoria at April 06, 2016 12:55 AM

April 01, 2016

Ph.D. student

Ebb

Ebb: Dynamic Textile Displays from Laura Devendorf on Vimeo.

Ebb is an exploration of dynamic textiles created in partnership with Project Jacquard. My collaborators at UC Berkeley and I coated conductive threads with thermochromic pigments and explored how we could leverage the geometries of weaving and crochet to create unique aesthetic effects and power efficiencies. The thermochromic pigments change colors in slow, subtle, and even ghostly ways, and when we weave them into fabrics, they create calming “animations” that move across the threads. The name “Ebb” reflects this slowness, conjuring images of the ebb and flow of the tides rather than the rapid-fire changes we typically associate with light-emitting information displays. For this reason, Ebb offers a nuanced and subtle approach to displaying information on fabrics. A study we conducted with fashion designers and non-designers (i.e. people who wear clothes) explored potentials for dynamic fabrics in everyday life and revealed an important role for subtle, abstract displays of information in these contexts.

Publications

Noura Howell, Laura Devendorf, Rundong (Kevin) Tian, Tomas Vega, Nan-Wei Gong, Ivan Poupyrev, Eric Paulos, Kimiko Ryokai.
“Biosignals as Social Cues: Ambiguity and Emotional Interpretation in Social Displays of Skin Conductance”
In Proceedings of the SIGCHI Conference on Designing Interactive Systems
(DIS ’16)

Laura Devendorf, Joanne Lo, Noura Howell, Jung Lin Lee, Nan-Wei Gong, Emre Karagozler, Shiho Fukuhara, Ivan Poupyrev, Eric Paulos, Kimiko Ryokai.
“I don’t want to wear a screen”: Probing Perceptions of and Possibilities for Dynamic Displays on Clothing.
In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
(CHI ’16 – Best Paper Award)

Press
Gizmodo: http://gizmodo.com/color-changing-threads-might-one-day-turn-your-t-shirt-1774966030

by admin at April 01, 2016 09:47 PM

March 31, 2016

Ph.D. student

Hands in the Land of Machines


Drawing Hands from Laura Devendorf on Vimeo.

I created a CAD model of 12 hands arranged along different axes, converted that CAD file to G-Code, and used my “Being the Machine” laser guide to trace out the G-Code, following it by hand. After each mark, I used my palm to smear the charcoal, accounting for the darkness that would build up as successive layers were added. I performed the drawing live to emphasize attention to being human (with all of the imprecision, labor, and messiness that comes along with it) in response to an overwhelming attention to technologies that are clean, virtual, simulated, and magical. The drawing took about four hours, and the charcoal took about two days to be completely cleaned off my hand.

by admin at March 31, 2016 09:51 PM

March 29, 2016

Center for Technology, Society & Policy

Privacy for Citizen Drones: Use Cases for Municipal Drone Applications

By Timothy Yim, CTSP Fellow and Director of Data & Privacy at Startup Policy Lab | Permalink

Previous Citizen Drone Articles:

  1. Citizen Drones: delivering burritos and changing public policy
  2. Privacy for Citizen Drones: Privacy Policy-By-Design
  3. Privacy for Citizen Drones: Use Cases for Municipal Drone Applications

Startup Policy Lab is leading a multi-disciplinary initiative to create a model policy and framework for municipal drone use.

A Day in the Park

We previously conceptualized a privacy policy-by-design framework for municipal drone applications—one that begins with gathering broad stakeholder input from academia, industry, civil society organizations, and municipal departments themselves. To demonstrate the benefits of such an approach, we play out a basic scenario.

A city’s Recreation and Parks Department (“Parks Dept.”) wants to use a drone to monitor the state of its public parks for maintenance purposes – for example, to support proactive tree trimming prior to heavy seasonal winds, vegetation pruning around walking paths, and detection of any directional or turbidity changes in water flows. For most parks, this would amount to twice-daily flights of approximately 15–30 minutes each. The flight video would then be reviewed, processed, and stored by the Parks Dept.

Even with this “basic” scenario, a number of questions immediately jump to mind. Here are a few:

Intentional & Unintentional Collection

  • Will the drone be recording audio as well as video? And will the drone begin recording within the boundaries of the park? Or over surrounding public streets? What data is actually needed for the stated flight purpose?
  • Will the drone potentially be recording city employees or park-goers? Does the city need to do so for the stated purpose of monitoring for park maintenance? Is such collection avoidable? If not, how can the city build privacy safeguards for unintentional collection of city employees or park-goers into the process?
  • How can notice, consent, and choice principles be brought to bear for municipal employees for whom data is collected? How can they be applied to park-goers? To residents of surrounding homes? To citizens merely walking along the edge of the park?


Administrative & Technical Safeguards

  • What sort of access to the collected data will the employees of the recreation and parks department have? Will access be tiered? Who needs access to the raw video? Who needs access only to the post-processed data reports?
  • What sort of processing on the video will occur? Can the processing be algorithmically defined or adapted for machine learning? Can safeguards be placed into the technical processing itself? For example, by algorithmically blurring any persons on the video before long-term storage? (A minimal sketch of such blurring follows this list.)
  • What sort of data retention limits will apply to the video data? The post-processed data reports? The flight plans? Should there be a shorter retention period, e.g., 30 days, for the raw video footage?
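Picking up the blurring question flagged in the list above, here is a minimal sketch assuming OpenCV and its bundled Haar frontal-face detector. The function name and parameter choices are illustrative only; a production pipeline would need to detect whole persons, not just faces, before this kind of safeguard could be relied upon.

    import cv2

    # OpenCV's stock frontal-face detector, used here purely for illustration.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def blur_people(frame):
        """Blur any detected faces in one video frame before storage."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1,
                                                     minNeighbors=5):
            region = frame[y:y + h, x:x + w]
            frame[y:y + h, x:x + w] = cv2.GaussianBlur(region, (51, 51), 0)
        return frame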


Sharing: Vendors, Open Data, & Onward Transfer

  • Who outside the recreation and parks department will have access to any of the data? Are there outside vendors who will manage the video processing? Are there other agencies that would want access to that data? Should the raw video data even be shared with other agencies? Which ones? Under what conditions?
  • What happens if the drone video data is requested by members of the public via municipal FOIA-analogue requests? What sorts of data will be released via the city’s open data portal? In each case, how can the privacy of city employees and park-goers be protected?


Assessing Stakeholder Interests

We’ve got a good list of potential issues to start considering, but in the interest of demonstrating the process as a whole and not getting lost in the details, we’re going to limit the scope of discussion to just one facet – the unintentional collection of municipal employee data.

The Parks Dept. begins by assembling both internal municipal stakeholders and external stakeholders – such as industry stakeholders, interdisciplinary academics, and public policy experts – and then proceeds to iterate through a simple privacy impact assessment.

Data Minimization for Specified Purposes

Stakeholder: Parks Dept. Drone Project Lead

After assembling the stakeholder group, the Parks Dept. drone project manager outlines the use case above, adding the following relevant details:

During the twice-daily drone flights at a specific park, two municipal employees are working in the park. One employee is clearing brush and debris from heavy seasonal winds. Another is pruning the vegetation around walking paths. The drone collects video focused on the health and structural integrity of trees as well as the proximity of any overhanging branches to walking paths.

The Parks Dept. then defers to the privacy and data subject matter experts to highlight the potential legal and policy issues at stake.

Stakeholder: Privacy & Data Expert, Legal Academic or Civil Society

Privacy best practices usually dictate that data collected, processed, or stored be limited to that which is necessary for the specified purpose. Here, the Parks Dept.’s purpose is to detect changes in park features and vegetation that will allow the Parks Dept. to better maintain the park. The drone flight video and associated data will focus on the trees, foliage, and plant debris. Unfortunately, this video data will also unintentionally capture, on occasion, the two Parks Dept. workers. Perhaps there’s a way to limit the collection of video data or secondary data on the Parks Dept. employees?

Stakeholder: Outsourced Video Processing Vendor

At this point, the external vendor that handles the processing of the video data helpfully chimes in. The vendor can create a machine learning method that will recognize human faces and bodies and effectively blur them out of both the subsequently stored video and the data analytics report produced. Problem solved, the vendor says.

Stakeholder: Privacy & Data Expert, Engineering & Public Policy Academic

The privacy academic pipes up. That might not solve the problem, the academic says. Even if the images are blurred, because there are likely only a limited number of employees who would be performing a given task at a given date, time, and location, it might be easy to cross-reference the blurred images with other data and identify the Parks Dept. gardener. Even going beyond blurring to full redactions within the video data might be insufficient. It would be safer to simply discard those portions of the video data entirely and rely on the data reports.

Stakeholder: Parks Dept. Management

One manager within the Parks Dept. speaks up. Why do we even care? If we have Parks Dept. employees in the video data, that’s not so bad. We can monitor them while they work, to see how hard they’re really working.

Another manager responds. That wasn’t an approved purpose for the drone flights. Plus we already have performance metrics that help assess employee productivity.

Stakeholder: Union of Laborers Local 711

The representative from the Union of Laborers Local 711, to which the two municipal workers belong, adds that there are pre-existing agreed-upon policies around the privacy of their union members – especially since it hasn’t yet been determined how this data might be made available via the city’s open data portal or via municipal FOIA-analogue requests. While the union understands that drone video might unintentionally capture union members, it appreciates best efforts to cleanse and disregard that information.


Notice, Consent, & Choice

The team comes to a consensus that Parks Dept. employees may be unintentionally captured on drone video footage, but will not be factored into the post-processed data summary reports. Additionally, the raw footage will include video redactions and will be retained for a shorter period of time than the data summary reports.

The team meeting goes on to determine how to provide and present notice and choice options to the Parks Dept. workers.

Stakeholder: City Attorney

The city attorney happily reports that he can easily write notification language into the Parks Dept. employee contracts. Will that be enough for meaningful notice? And will there be any choice for Parks Dept. workers?

Stakeholder: Privacy & Data Expert, Academic or Civil Society

The privacy expert addresses the group. That may depend on the varying privacy laws in a particular state or country, but it’d be much better if additional notice were given. For example, the flights could be limited in number and scheduled, with updates accessible via the city’s mobile application for employees.

Stakeholder: Union of Laborers Local 711

The representative from the Union of Laborers Local 711 adds that a simplified, graphic drone flight notice should also be posted as a supplement to the physical Board of State and Federal Employee Notices in the Parks Dept. staff lounge.


Data-Driven “Pan Out”

As the camera pans out from our imagined privacy policy-by-design meeting, the privacy and policy expert from civil society suggests that the general policy framework around municipal drone use should start with broad privacy safeguards, evolving only as additional data is gathered from actual municipal drone use and as societal norms stabilize.

Takeaways

The creation of a robust privacy policy-by-design framework for municipal drone use is indeed a challenging endeavor. Understanding the privacy interests of the many impacted stakeholders is a critical starting point. Policymakers should also encourage meta-policies that allow the collection of data about the implemented policy itself. Our goal is to develop frameworks that enable law and policy to evolve in lockstep with emerging technologies, so that society can innovate and thrive without compromising its normative values. Here, that means the creation of innovative, positive-sum solutions that safeguard privacy while enabling modern drone use in and by cities.

If you are one of the interested stakeholder groups above or are otherwise interested in participating in our roundtables or research, please let us know at drones@startuppolicylab.org.

by charles at March 29, 2016 08:00 AM

March 28, 2016

Ph.D. student

Trace Ethnography: A Retrospective

This is a cross-post of a post I wrote for Ethnography Matters, in their “The Person in the (Big) Data” series

When I was an M.A. student back in 2009, I was trying to explain various things about how Wikipedia worked to my then-advisor David Ribes. I had been ethnographically studying the cultures of collaboration in the encyclopedia project, and I had gotten to the point where I could look through the metadata documenting changes to Wikipedia and know quite a bit about the context of whatever activity was taking place. I was able to do this because Wikipedians do this: they leave publicly accessible trace data in particular ways in order to make their actions and intentions visible to other Wikipedians. However, this was practically illegible to David, who had not done this kind of participant-observation in Wikipedia and had therefore not gained this kind of socio-technical competency.

For example, if I added “{{db-a7}}” to the top of an article, a big red notice would be automatically added to the page, saying that the page has been nominated for “speedy deletion.” Tagging the article in this way would also put it into various information flows where Wikipedia administrators would review it. If any of Wikipedia’s administrators agreed that the article met speedy deletion criteria A7, then they would be empowered to unilaterally delete it without further discussion. If I was not the article’s creator, I could remove the {{db-a7}} trace from the article to take it out of the speedy deletion process, which means the person who nominated it for deletion would have to go through the standard deletion process. However, if I was the article’s creator, it would not be proper for me to remove that tag — and if I did, others would find out and put it back. If someone added the “{{db-a7}}” trace to an article I created, I could add “{{hangon}}” below it in order to inhibit this process a bit — although a hangon is just a request, it does not prevent an administrator from deleting the article.
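For readers who work with this kind of trace data computationally, here is a minimal, hypothetical Python sketch of what “decoding” these deletion traces from raw wikitext might look like. The template names are real Wikipedia conventions, but the helper itself is illustrative, not any tool Wikipedians actually use:

```python
import re

# Illustrative helper: detect speedy-deletion traces in raw wikitext.
# {{db-*}} and {{hangon}} are real Wikipedia template conventions; the
# function itself is a sketch, not Wikipedia's own tooling.
SPEEDY_TAG = re.compile(r"\{\{db-(\w+)\}\}", re.IGNORECASE)
HANGON_TAG = re.compile(r"\{\{hangon\}\}", re.IGNORECASE)

def read_deletion_traces(wikitext: str) -> dict:
    """Return the deletion-related traces present in an article's wikitext."""
    criteria = SPEEDY_TAG.findall(wikitext)
    return {
        "nominated": bool(criteria),
        "criteria": criteria,  # e.g. ["a7"]
        "hangon": bool(HANGON_TAG.search(wikitext)),
    }

article = "{{db-a7}}\n{{hangon}}\n'''Some band''' is a band from Ohio..."
print(read_deletion_traces(article))
# {'nominated': True, 'criteria': ['a7'], 'hangon': True}
```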


Wikipedians at an in-person edit-a-thon (the Women’s History Month edit-a-thon in 2012). However, most of the time, Wikipedians don’t get to do their work sitting right next to each other, which is why they rely extensively on trace data to coordinate and render their activities accountable to each other. Photo by Matthew Roth, CC-BY-SA 3.0

I knew all of this both because Wikipedians told me and because this was something I experienced again and again as a participant observer. Wikipedians had documented this documentary practice in many different places on Wikipedia’s meta pages. I had first-hand experience with these trace data, first on the receiving end with one of my own articles. Then later, I became someone who nominated others’ articles for deletion. When I was learning how to participate in the project as a Wikipedian (which I now consider myself to be), I started to use these kinds of trace data practices and conventions to signify my own actions and intentions to others. This made things far easier for me as a Wikipedian, in the same way that learning my university’s arcane budgeting and human resource codes helps me navigate that bureaucracy far more easily.

This “trace ethnography” emerged out of a realization that people in mediated communities and organizations increasingly rely on these kinds of techniques to render their own activities and intentions legible to each other. I should note that this was not my and David’s original insight — it is one that can be found across the fields of history, communication studies, micro-sociology, ethnomethodology, organizational studies, science and technology studies, computer-supported cooperative work, and more. As we say in the paper, we merely “assemble their various solutions” to the problem of how to qualitatively study interaction at scale and at a distance. There are jargons, conventions, and grammars learned as a condition of membership in any group, and people learn how to interact with others by learning these techniques.

The affordances of mediated platforms are increasingly being used by participants themselves to manage collaboration and context at massive scales and asynchronous latencies. Part of the trace ethnography approach involves coming to understand why these kinds of systems were developed in the way that they were. For me and Wikipedia’s deletion process, it went from being strange and obtuse to something that I expected and anticipated. I got frustrated when newcomers didn’t have the proper literacy to communicate their intentions in a way that I and other Wikipedians would understand. I am now at the point where I can even morally defend this trace-based process as Wikipedians do. I can list reason after reason why this particular process ought to unfold in the way that it does, independent of my own views on this process. I understand the values that are embedded in and assumed by this process, and they cohere with other values I have found among Wikipedians. And I’ve also met Wikipedians who are massive critics of this process and think that we should be using a far different way to deal with inappropriate articles. I’ve even helped redesign it a bit.

Trace ethnography is based in the realization that these practices around metadata are learned literacies and constitute a crucial part of what it means to participate in many communities and organizations. It turns our attention to an ethnographic understanding of these practices as they make sense for the people who rely on them. In this approach, reading through log data can be seen as a form of participation, not just observation — if and only if this is how members themselves spend their time. However, it is crucial that this approach is distinguished from more passive forms of ethnography (such as “lurker ethnography”), as trace ethnography involves an ethnographer’s socialization into a group prior to the ability to decode and interpret trace data. If trace data is simply being automatically generated without it being integrated into people’s practices of participation, if people in a community don’t regularly rely on following traces in their everyday practices, then the “ethnography” label is likely not appropriate.

Looking at all kinds of online communities and mediated organizations, Wikipedia’s deletion process might appear to be the most arcane and out-of-the-ordinary. However, modes of participation are increasingly linked to the encoding and decoding of trace data, whether that is a global scientific collaboration, an open source software project, a guild of role playing gamers, an activist network, a news organization, a governmental agency, and so on. Computer programmers frequently rely on GitHub to collaborate, and they have their own ways of using things like issues, commit comments, and pull requests to interact with each other. Without being on GitHub, it’s hard for an ethnographer who studies software development to be a fully-immersed participant-observer, because they would be missing a substantial amount of activity — even if they are constantly in the same room as the programmers.
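As an illustration of how GitHub’s trace data can be read programmatically, here is a rough Python sketch against GitHub’s public REST API; the repository owner, name, and issue number are placeholders. It fetches the event trail (labelings, assignments, closures, and so on) attached to a single issue:

```python
import requests

# Rough sketch of reading GitHub trace data via the public REST API
# (https://api.github.com). The repository and issue number below are
# placeholders; unauthenticated requests are also rate-limited.
def issue_events(owner: str, repo: str, number: int) -> list:
    """Fetch the event trail (labels, assignments, closes, ...) for one issue."""
    url = f"https://api.github.com/repos/{owner}/{repo}/issues/{number}/events"
    resp = requests.get(url, headers={"Accept": "application/vnd.github+json"})
    resp.raise_for_status()
    return [(e["event"], e["actor"]["login"] if e.get("actor") else None)
            for e in resp.json()]

# e.g. issue_events("octocat", "Hello-World", 1)
```

Of course, in the spirit of trace ethnography, such a script only becomes meaningful once the researcher has been socialized into what those events signify for the people who produce them.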

More about trace ethnography

If you want to read more about “trace ethnography,” we first used this term in “The Work of Sustaining Order in Wikipedia: The Banning of a Vandal,” which I co-authored with my then-advisor David Ribes in the proceedings of the CSCW 2010 conference. We then wrote a followup paper in the proceedings of HICSS 2011 to give a more general introduction to this method, in which we ‘inverted’ the CSCW 2010 paper, explaining more of the methods we used. We also held a workshop at the 2015 iConference with Amelia Acker and Matt Burton — the details of that workshop (and the collaborative notes) can be found at http://trace-ethnography.github.io.

Some examples of projects employing this method:

Ford, H. and Geiger, R.S. “Writing up rather than writing down: Becoming Wikipedia literate.” Proceedings of the Eighth Annual International Symposium on Wikis and Open Collaboration. ACM, 2012. http://www.stuartgeiger.com/writing-up-wikisym.pdf

Ribes, D., Jackson, S., Geiger, R.S., Burton, M., & Finholt, T. (2013). Artifacts that organize: Delegation in the distributed organization. Information and Organization, 23(1), 1-14. http://www.stuartgeiger.com/artifacts-that-organize.pdf

Mugar, G., Østerlund, C., Hassman, K. D., Crowston, K., & Jackson, C. B. (2014). Planet hunters and seafloor explorers: legitimate peripheral participation through practice proxies in online citizen science. In Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing (pp. 109-119). ACM. http://dl.acm.org/citation.cfm?id=2531721

Howison, J., & Crowston, K. (2014). Collaboration Through Open Superposition: A Theory of the Open Source Way. MIS Quarterly, 38(1), 29-50. http://aisel.aisnet.org/cgi/viewcontent.cgi?article=3156&context=misq

Burton, M. (2015). Blogs as Infrastructure for Scholarly Communication. Doctoral Dissertation, University of Michigan. http://deepblue.lib.umich.edu/bitstream/handle/2027.42/111592/mcburton_1.pdf

by stuart at March 28, 2016 06:55 PM

March 27, 2016

MIMS 2012

Icons are the Acronyms of Design

In The Elements of Style, the seminal writing and grammar book by Strunk and White, the authors have a style rule that states, “Do not take shortcuts at the cost of clarity.” This rule advises writers to spell out acronyms in full unless they’re readily understood. For example, not everyone knows that MADD is Mothers Against Drunk Driving.

Acronyms come at the cost of clarity. “Many shortcuts are self-defeating,” the authors say; “they waste the reader’s time instead of conserving it.”

Icons are the acronyms of design. Designers often rely on them to communicate what an action or object does, instead of simply stating what the action or object is. Unless you’re using universally-recognized icons (which are rare), you’re more likely to harm the usability of an interface.

Do you know what the icons on the left mean?

So as Strunk and White advise, don’t take shortcuts at the cost of clarity. “The longest way round is usually the shortest way home.”

by Jeff Zych at March 27, 2016 11:21 PM

March 22, 2016

Center for Technology, Society & Policy

Privacy for Citizen Drones: Privacy Policy-By-Design

By Timothy Yim, CTSP Fellow and Director of Data & Privacy at Startup Policy Lab | Permalink

Startup Policy Lab is leading a multi-disciplinary initiative to create a model policy and framework for municipal drone use.

Towards A More Reasoned Approach

Significant policy questions have arisen from the nascent but rapidly increasing adoption of drones in society today. The developing drone ecosystem is a prime example of how law and policy must evolve with and respond to emerging technology, in order for society to thrive while still preserving its normative values.

Privacy has quickly become a vital issue in the debate over acceptable drone use by municipal governments. In some instances, privacy concerns over the increased potential for government surveillance have even led to wholesale bans on municipal drone use.

Let me be clear. This is a misguided approach.

Without a doubt, emerging drone technology is rapidly increasing the potential ability of government to engage in surveillance, both intentionally and unintentionally, and therefore to intrude on the privacy of its citizenry. Likewise, it’s true that applying traditional privacy principles—such as notice, consent, and choice—has proven incredibly challenging in the drone space. For the record, these are legitimate and serious concerns.

Yet even under exceptionally strong constructions of modern privacy rights, including those enhanced protections afforded under state constitutions such as California’s, an indiscriminate municipal drone ban makes little long-term sense. A wholesale ban cuts off municipal modernization and the many potential benefits of municipal drone use—for instance, decreased costs and increased frequency of monitoring for the maintenance of public parks, docks, and bridges.

What a wholesale ban (or, for that matter, a blanket whitelisting) does accomplish is to avoid the admittedly difficult task of creating a policy framework that enables appropriate municipal drone use while preserving privacy. But these are questions that need to be considered in order to move beyond the false dichotomy between privacy and municipal drone usage. In short, safeguarding privacy and enabling municipal innovation via new drone applications need not be mutually exclusive.

Privacy Policy-By-Design

Our privacy policy-by-design approach considers and integrates privacy principles—such as data minimization, retention, and onward transfer limits—early in the development of drone law and policy. Doing so will enable, much like privacy-by-design theory in engineering contexts, the creation of positive-sum policy solutions.

Critical to a privacy policy-by-design approach are (1) identifying potential stakeholders, both core and ancillary, and (2) understanding how their particular interests play out.

By identifying a broad array of stakeholders—including invested municipal agencies, interdisciplinary academia, industry, and civil society organizations—we hope to better understand how municipal drone use will impact the privacy interests of each stakeholder group. Here, privacy subject matter experts from interdisciplinary academia—law, public policy, and information studies—are critical to facilitate identification of potential issues, both to represent the public at large and to assist other stakeholder groups, which might not otherwise have the necessary expertise to fully assess their interests.

Oftentimes, this approach will benefit from convening key stakeholders in a face-to-face roundtable setting, especially those in other municipal departments and in groups outside municipal government altogether. A series of such tabletop roundtables, organized around likely use cases, provides an opportunity for stakeholder groups to identify general privacy concerns as well as facilitate early development of creative and nuanced solutions between parties.

Once municipal departments gain a comprehensive understanding of general stakeholder concerns, they can extrapolate those concerns for application in additional use cases and situations. City governments do not have the time or resources to convene roundtables for the entire range of potential drone applications. Nonetheless, takeaways from the initial set of use cases can provide invaluable insight into the potential privacy concerns of external stakeholders—helping avoid otherwise likely conflict in the future.

Understanding the multitude of privacy interests by different stakeholders is key to the creation of innovative, positive-sum solutions that safeguard privacy while enabling modern drone use in and by cities. The following table represents a theoretical, high-level mapping of stakeholder concerns in the municipal drone space.

[Table: drone privacy stakeholder concerns]

Evolving Data-Driven Policy

Finally, it’s important to realize that a privacy policy-by-design approach should not be pursued in isolation. A growing fraction of recently proposed or enacted legislation has authorized the ancillary collection of relevant data around the new legislation itself—creating opportunities in the future to further evolve policy via real-world usage. So too, we propose that appropriate data collection modules be added to municipal drone use processes to confirm that established policies are creating the proper incentives and disincentives.

Our overarching goal is to develop a framework that enables law and policy to evolve in lockstep with emerging technologies, so that society can innovate and thrive without compromising on its normative values.

If you are one of the interested stakeholder groups above or are otherwise interested in participating in our roundtables or research, please let us know at drones@startuppolicylab.org.

by charles at March 22, 2016 08:00 AM

March 21, 2016

MIMS 2011

What I’m talking about in 2016

Authority and authoritative sources, critical data studies, digital methods, the travel of facts online, bot politics and social media and politics. These are some of the things I’m talking about in the first six months of 2016. (Just in case you thought the #sunselfies only indicated fun and aimless loafing).  

15 January Fact factories: How Wikipedia’s logics determine what facts are represented online. Wikipedia 15th birthday event, Oxford Internet Institute. [Webcast, OII event page, OII’s Medium post]

29 January Wikipedia and me: A story in four acts. TEDx Leeds University. [Video, TEDx Leeds University site]

Abstract: This is a story about how I came to be involved in Wikipedia and how I became a critic. It’s a story about hope and friendship and failure, and what to do afterwards. In many ways this story represents the relationship that many others like me have had with the Internet: a story about enormous hope and enthusiasm followed by disappointment and despair. Although similar, the uniqueness of these stories is in the final act – the act where I tell you what I now think about the future of the Internet after my initial despair. This is my Internet love story in four acts: 1) Seeing the light 2) California rulz 3) Doubting Thomas 4) Critics unite. 

17 February. Add data to methods and stir. Digital Methods Summer School. CCI, Queensland University of Technology, Brisbane [QUT Digital Methods Summer School website]

Abstract: Are engagements with real humans necessary to ethnographic research? In this presentation, I argue for methods that connect data traces to the individuals who produce them by exploring examples of experimental methods featured on the site ‘EthnographyMatters.net’, such as live fieldnoting, collaborative mapmaking and ‘sensory postcards’.  This presentation will serve as an inspiration for new work that expands beyond disciplinary and methodological boundaries and connects the stories we tell about our things with the humans who create them.  

10 March. Situating Innovations in Digital Measures. University of Leeds, Leeds Critical Data Studies Inaugural Event.  

Abstract: Drawn from case studies that were presented at the recent Digital Methods Summer School (Digital Media Research Centre, Queensland University of Technology) in Brisbane, Australia last month, as well as from experimental methods contributed to by authors of the Ethnography Matters community, this seminar will present a host of inspiring methodological tools that researchers of digital culture and politics are using to explore questions about the role of digital technologies in modern life. Instead of data-centric models and methodologies, the seminar focuses on human-centric models that also engage with the opportunities afforded by digital technologies. 

21-22 April. Ode to the infobox. Streams of Consciousness: Data, Cognition and Intelligent Devices Conference. University of Warwick.

Abstract: Also called a ‘fact box’, the infobox is a graphic design element that highlights summarised statements or facts about the world contained within it. Infoboxes are important structural elements in the design of digital information. They usually hang in the right-hand corner of a webpage, calling out to us that the information contained within them is special and somehow apart from the rest. The infobox satisfies our rapid information-seeking needs. We’ve been trained to look to the box to discover, not just another set of informational options, but an authoritative statement of seemingly condensed consensus emerging out of the miasma of data about the world around us.

When you start to look for them, you’ll see infoboxes wherever you look. On Google, these boxes contain results from Google’s Knowledge Graph; on Wikipedia they are contained within articles and host summary statistics and categories; and on the BBC, infoboxes highlight particular facts and figures about the stories that flow around them.

The facts represented in the infoboxes are no longer as static as the infoboxes of old. Now they are the result of algorithmic processes that churn thousands, sometimes millions of data points according to rulesets that produce relatively unique encounters by each new user.

In this paper, I trace the multitude of instructions and sources, institutions and people that constitute the assemblage that results in different facts for different groups at different times. Investigating infoboxes on Wikipedia and Google through intermediaries such as Wikidata, I build a portrait of the pipes, processes and people that feed these living, dynamic frames. The infobox, humble as it seems, turns out to be a powerful force in today’s deeply connected information ecosystem. By celebrating the infobox, I hope to reveal its hidden power – a power with consequences far beyond the efficiency that it promises.

29 April. How facts travel in the digital age. Social Media Lab Guest Speaker Series, Ryerson University, Social Media Lab, Toronto, Canada. [Speaker series website]

Abstract: How do facts travel through online systems? How is it that some facts gather steam and gain new adherents while others languish in isolated sites? This research investigates the travel of two sets of facts through Wikipedia’s networks and onto search engines like Google. The first: facts relating to the 2011 Egyptian Revolution; the second: facts relating to “surr”, a sport played by men in the villages of Northern India. While the Egyptian Revolution became known to millions across the world as events were reported on multiple Wikipedia language versions in early 2011, the facts relating to surr faced enormous challenges as its companions attempted to propel it through Wikipedia’s infrastructure. Following the facts as they travelled through Wikipedia gives us an insight into the source of systemic biases of Internet infrastructures and the ways in which political actors are changing their strategies in order to control narratives around political events. 

8 June. Politicians, Journalists, Wikipedians and their Twitter bots. Algorithms, Automation and Politics. (Heather Ford, Elizabeth Dubois, Cornelius Puschmann) ICA Pre-Conference, Fukuoka, Japan. [Event website]

Abstract selection: Recent research suggests that automated agents deployed on social media platforms, particularly Twitter, have become a feature of the modern political communication environment (Samuel, 2015, Forelle et al, 2015, Milan, 2015). Haustein et al (2016) cite a range of studies that put the percentage of bots among all Twitter accounts at 10-16% (p. 233). Governments have been shown to employ social media experts to spread pro-governmental messages (Baker, 2015, Chen 2015), political parties pay marketing companies to create or manipulate trending topics (Forelle et al, 2015), and politicians and their staff use bots to augment the number of account followers in order to provide an illusion of popularity to their accounts (Forelle et al, 2015). The assumption in these analyses is that bots have a direct influence on public opinion and that they can act as credible and competent sources of information (Edwards et al, 2014). There is still, however, little empirical evidence of the link between bots and political discourse, the material consequences of such changes or how social groups are reacting. [continued] 

11 June. Wikipedia: Moving Between the Whole and its Traces. In ‘Drowning in Data: Industry and Academic Approaches to Mixed Methods in “Holistic” Big Data Studies’ panel. International Communication Association Conference. Fukuoka, Japan. [ICA website]

Abstract: In this paper, I outline my experiences as an ethnographer working with data scientists to explore various questions surrounding the dynamics of Wikipedia sources and citations. In particular, I focus on the moments at which we were able to bring the small and the large into conversation with one another, and moments when we looked, wide-eyed at one another, unable to articulate what had gone wrong. Inspired by Latour’s (2010) reading of Gabriel Tarde, I argue that a useful analogy for conducting mixed methods for studies about which large datasets and holistic tools are available is the process of life drawing – a process of moving up close to the easel and standing back (or to the side) as the artist looks at both their subject and the canvas in a continual motion.

Wikipedia’s citation traces can be analysed in their aggregate – piled up, one on top of the other to indicate the emergence of new patterns, new vocabulary, new authorities of knowledge in the digital information environment. But citation traces take a particular shape and form, and without an understanding of the behaviour that lies behind such traces, the tendency is to count what is available to us, rather than to think more critically about the larger questions that Wikipedia citations help to answer.

I outline a successful conversation which happened when we took a large snapshot of 67 million source postings from about 3.5 million Wikipedia articles and attempted to begin classifying the citations according to existing frameworks (Ford 2014). In response, I conducted a series of interviews with editors by visualising their citation traces and asking them questions about the decision-making and social interaction that lay behind such performances (Dubois and Ford 2015). I also reflect on a less successful moment when we attempted to discover patterns in the dataset on the basis of findings from my ethnographic research into the political behaviour of editors. Like the artist who had gotten their proportions wrong when scaling up the image on the canvas, we needed to re-orient ourselves and remember what we were trying to ultimately discover.

13 June. The rise of expert amateurs in the realm of knowledge production: The case of Wikipedia’s newsworkers. In ‘Dialogues in Journalism Studies: The New Gatekeepers’ panel. International Communication Association Conference. Fukuoka, Japan. [ICA website]

Abstract: Wikipedia has become an authoritative source about breaking news stories as they happen in many parts of the world. Although anyone can technically edit a Wikipedia article, recent evidence suggests that some have significantly more power than others when it comes to being able to have edits sustained over time. In this paper, I suggest that the theory of co-production, elaborated upon by Sheila Jasanoff, is a useful way of framing how, rather than a removal of the gatekeepers of the past, Wikipedia demonstrates two key trends. The first is the rise of a new set of gatekeepers in the form of experienced Wikipedians who are able to deploy coded objects effectively in order to stabilize or destabilize an article, and the second is a reconfiguration in the power of traditional sources of news and information in the choices that Wikipedia editors make when writing about breaking news events.

 

 


by Heather Ford at March 21, 2016 10:24 PM

March 15, 2016

Center for Technology, Society & Policy

The Neighbors are Watching: From Offline to Online Community Policing in Oakland, California

By Fan Mai & Rebecca Jablonsky, CTSP Fellows | Permalink

As one of the oldest and most popular community crime prevention programs in the United States, Neighborhood Watch is supposed to promote and facilitate community involvement by bringing citizens together with law enforcement in resolving local crime and policing issues. However, a review of Neighborhood Watch programs finds that nearly half of all properly evaluated programs have been unsuccessful. The fatal shooting of Trayvon Martin by George Zimmerman, an appointed neighborhood watch coordinator at that time, has brought the conduct of Neighborhood Watch under further scrutiny.

Founded in 2010, Nextdoor is an online social networking site that connects residents of a specific neighborhood together. Unlike other social media, Nextdoor maintains a one-to-one mapping of real-world community to virtual community, nationwide. Positioning itself as the platform for “virtual neighborhood watch,” Nextdoor not only encourages users to post and share “suspicious activities,” but also invites local police departments to post and monitor the “share with police” posts. Since its establishment, more than 1000 law enforcement agencies have partnered with the app, including the Oakland Police Department. Although Nextdoor has helped the local police to solve crimes, it has also been criticized for giving voice to racial biases, especially in Oakland, California.

Activists have been particularly vocal in Oakland, California—a location that is historically known for diversity and black culture, but is currently a site where racial issues and gentrification are contested public topics. The Neighbors for Racial Justice, a local activist group started by residents of Oakland, has been particularly active in educating people about unconscious racial bias and working with the Oakland City Council to request specific changes to the crime and safety form that Nextdoor users fill out when posting to the site.

Despite the public attention and efforts made by activist groups to address the issue of racial biases, controversies remain in terms of who should be held responsible and how to avoid racial profiling without stifling civic engagement in crime prevention. With its rapid expansion across the United States, Nextdoor is facing many challenges, especially on the issues of moderation and regulation of user-generated content.

Racial profiling might just be the tip of the iceberg. Using a hyper-local social network like Nextdoor can bring up pressing issues related to community, identity, and surveillance. Neighborhoods have their own history and dynamics, but Nextdoor provides identical features to every neighborhood across the entirety of the U.S. Will this “one size fits all” approach work as Nextdoor expands its user base? As a private company that is involved in public issues like crime and policing, what kind of social responsibility should Nextdoor have to its users? How does the composition of neighborhoods affect online interactions within Nextdoor communities? Is the Nextdoor neighborhood user base an accurate representation of the actual community?

Researching Nextdoor

As researchers, we seek to contribute to the conversation by conducting empirical research with Nextdoor users in three Oakland neighborhoods: one that is predominantly white, one that is predominantly non-white, and one that is ethnically diverse. We hope to elucidate the ways that racial composition of a neighborhood influences the experience of using a community-based social network such as Nextdoor.

Neighborhood 1

For example, here is the demographic breakdown of one Oakland neighborhood, which we will call Neighborhood 1. As you can see, this area might be considered fairly diverse: many different races are represented, and there isn’t one race that is dominant in the population. It has a median household income of $52,639 and is predominantly non-white, with over half of residents identifying as Black or Asian.

Graphs included in this post were accessed from City-Data.com. Zip codes have been removed to protect neighborhood privacy. These are example neighborhoods, and are not neighborhoods that we are researching.

Now, take a look at the neighborhood that directly borders the previous one, which we will call Neighborhood 2. It has a median household income of $94,276 and is nearly 75% white.

[Graphs: Neighborhood 2 statistics]

Although these micro-neighborhoods directly border each other, they might normally function as separate entities entirely. Residents might walk down different streets, shop in different stores, and remain generally unaware of each other’s existence. Racial segregation is fairly typical of urban environments in the United States, where people of different racial backgrounds are often segregated into pockets of a city that is otherwise considered to be “diverse”—meaning fewer families actually live in mixed-income neighborhoods, and are therefore less likely to be exposed to people who are different from themselves.

This segregation can be disrupted when a person joins a social networking website like Nextdoor.com. Since early 2013, not only can Nextdoor users receive information generated from all people in their neighborhood, but they can see and respond to posts in the Crime and Safety section of several “nearby neighborhoods.” This pushing out of neighborhood boundaries amplifies the potential for users to participate in more heterogeneous communities but, at the same time, may heighten anxieties around trust and increase the chance of conflict within the larger virtual communities.

Contrary to popular belief, researchers have found that the use of digital technologies is associated with a higher level of engagement in public and semi-public spaces, such as parks and community centers. Social network sites can be considered “networked publics” that may help people connect for social, cultural, and civic purposes. But on the other hand, they can also be used as tools for gentrification that divide communities through surveillance and profiling.

What can we do, as researchers and citizens, to address the complexities of online policing in the use of social networking sites?

by Nick Doty at March 15, 2016 07:04 PM

March 14, 2016

MIMS 2012

Designing Backwards

Action is the bedrock of drama. Action drives the play forward and makes for a compelling story that propels the audience to the end. And an engaging play is just a series of connected actions, according to David Ball in Backwards & Forwards.

Like a play, the user’s journey through your product is also a series of connected actions. Every click, tap, and swipe is an action users take. But unlike the audience of a play, which is just along for the ride, your users are in the driver’s seat trying to reach a specific goal. If you, as the designer, don’t make the series of actions to reach that goal clear, your users will get lost and your product will fail.

To help authors write engaging plays, David Ball recommends starting at the end of the play and identifying each preceding action, all the way back to the beginning. By looking backwards, you can see the specific steps that led to a particular outcome. “The present demands and reveals a specific past. One particular, identifiable event lies immediately before any other,” he says.

Looking forward, on the other hand, presents unlimited possibilities. The outcome of an action can trigger any number of other actions. You can only know which specific action comes next by looking backwards from the end.

This technique applies just as well to designing user experiences as it does to writing plays. Start by identifying the user’s goal, then walk backwards through each action they must take to get there.

An example makes this clearer. Before we launched native mobile A/B testing at Optimizely, my colleague Silvia and I re-designed the onboarding flow using this technique. (Silvia wrote about the onboarding flow on Medium.)

We identified the user’s goal as creating their first A/B test. We arrived there by understanding the problem that A/B testing solves for our customers, which is to improve their app and ultimately make their business more successful.

If we had started at the beginning and worked our way forward, it would have been easy to stop once they installed our mobile SDK. But installing an SDK isn’t the customer’s goal. There’s no inherent value in that – it’s just a stepping stone to getting value out of our product.

Then we walked backwards through each step a user must take to reach that goal:

  • Goal: create your first A/B test.
  • To create an A/B test, you must install the SDK.
  • To install the SDK, you need to download it and add an account-specific identifier to your app.
  • To download and set up the SDK, you need an account and a project ID.
  • To create an account and a project, you must sign up by entering your info (name, email, billing info, etc.) in a form on our website.

Just by writing out each step like this, we eliminated excess steps and didn’t get distracted by edge cases or side flows. We had a focused list of tasks to design for. And at the conclusion of each task, we knew the next task to lead the user to.

Using this series of steps as a skeleton, we were able to design an onboarding flow that seamlessly led users to their goal. The experience has been praised by customers, and none of them have needed any help from our support team to create their first test.

So next time you’re designing a complex flow, start with the user’s goal and work your way backwards through each action they must take to get there. This technique will put you in an empathetic mindset that will result in user experiences that are clear and focused.

“Of such adjacent links is life — and drama — made up,” says David Ball. And so is product design.

by Jeff Zych at March 14, 2016 02:19 AM

March 07, 2016

MIMS 2016

TweetDay: a better visualization for your Twitter timeline

On Twitter, individuals and outlets frequently use the acronym ICYMI (In Case You Missed It) to bring links to the attention of others who…

by Andrew Huang at March 07, 2016 05:45 AM

March 03, 2016

Center for Technology, Society & Policy

Design Wars: The FBI, Apple and hundreds of millions of phones

By Deirdre K. Mulligan and Nick Doty, UC Berkeley, School of Information | Permalink | Also posted to the Berkeley Blog

After forum- and fact-shopping and charting a course via the closed processes of district courts, the FBI has homed in on the case of the San Bernardino terrorist who killed 14 people, injured 22 and left an encrypted iPhone behind. The agency hopes the highly emotional and political nature of the case will provide a winning formula for establishing a legal precedent to compel electronic device manufacturers to help police by breaking into devices they’ve sold to the public.

The phone’s owner (the San Bernardino County Health Department) has given the government permission to break into the phone; the communications and information at issue belong to a deceased mass murderer; the assistance required, while substantial by Apple’s estimate, is not oppressive; the hack being requested is a software downgrade that enables a brute-force attack on the crypto — an attack on the implementation rather than directly disabling encryption altogether; and the act under investigation is heinous.
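To see why removing the retry limits is the crux of the request, some back-of-envelope arithmetic helps. The per-attempt cost below is an assumption in the ballpark of figures commonly reported for the iPhone’s hardware key derivation, not an Apple specification:

```python
# Worst-case brute-force time for a numeric passcode, assuming the
# hypothetical downgraded OS has disabled auto-erase and escalating
# retry delays. The ~80 ms per attempt is an illustrative assumption.
ATTEMPT_COST_S = 0.08

def brute_force_hours(digits: int) -> float:
    """Hours to try every possible passcode of the given length."""
    return (10 ** digits) * ATTEMPT_COST_S / 3600

print(f"4-digit passcode: ~{brute_force_hours(4):.2f} hours")  # ~0.22
print(f"6-digit passcode: ~{brute_force_hours(6):.0f} hours")  # ~22
```

With the protections in place, by contrast, ten wrong guesses can render the device permanently inaccessible, which is what makes the requested downgrade so valuable to an attacker.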

But let’s not lose sight of the extraordinary nature of the power the government is asking the court to confer.

Over the last 25 years, Congress developed a detailed statutory framework to address law enforcement access to electronic communications, and the technical design and assistance obligations of service providers who carry and store them for the public. That framework has sought to maintain law enforcement’s ability to access evidence, detailed a limited set of responsibilities for various service providers, and filled gaps in privacy protection left by the U.S. Supreme Court’s interpretation of the Fourth Amendment.

This structure, comprising the 1986 Electronic Communications Privacy Act and the 1994 Communications Assistance for Law Enforcement Act, should limit the FBI’s use of the All Writs Act to force Apple to write a special software downgrade to facilitate a brute-force attack on the phone’s encryption and access the phone’s contents.

As we argue in a brief filed with the court today, the FBI’s effort to require Apple to develop a breach of iPhone security in the San Bernardino case is an end run around the legislative branch. While the FBI attempts to ensure that law enforcement needs for data trump other compelling social values including cybersecurity, privacy, and innovation, legislators and engineers pursue alternative outcomes.

A legal primer

The Communications Assistance for Law Enforcement Act, passed in 1994, essentially requires telecommunications carriers to make their networks wire-tappable, ensuring law enforcement can intercept communications when authorized by law. Importantly, CALEA’s design and related assistance requirements apply only to telecommunications common carriers and prohibit the government from dictating design; alternative versions of the law that would have extended these requirements to service providers such as Apple were debated and rejected by Congress.

The second statute of interest is the 1986 Electronic Communications Privacy Act. ECPA governs the conditions and process for law enforcement access to stored records such as subscriber information, transactional data, and communications from electronic communication service providers and remote communication service providers.

Apple has assisted the government in obtaining records related to the San Bernardino iPhone stored on Apple’s servers. That is the extent of Apple’s obligation. ECPA does not require service providers like Apple to help government get access to information on devices or equipment owned by an individual, regardless of whether they sold the device to that individual.

A ruling that the All Writs Act can be used to force Apple to retroactively redesign an iPhone it sold to ensure FBI access to data an individual chose to encrypt would inappropriately upend a carefully constructed policy designed to address privacy, law enforcement, and other values.

If the AWA is read to give a court authority to order this relief because the statute does not expressly prohibit it, it would allow law enforcement to bypass the public policy process on an issue of immense importance to citizens, technologists, human rights activists, regulators and industry.

Make no mistake, we are in the midst of what we call the Design Wars, and those wars are about policy priorities which ought to be established through full and open legislative debate.

Design Wars: The FBI Strikes Back

Design by Molly Mahar (UC Berkeley); background image from NASA.

Unlike an exception in a law that requires a standard to be met by someone in the right role (for example, law enforcement), and ideally a court process to invoke (a warrant or other order approved by a court), a vulnerability in a system lets in anyone who can find it – no standard, no process, no paper required: come one, come all. For these reasons, former government officials differ about whether the trade-off is worth it.

Former National Security Agency and CIA Director Michael Hayden has recognized that, on balance, America is “more secure with end-to-end unbreakable encryption.” This view is shared by former NSA Director Mike McConnell, former Department of Homeland Security Secretary Michael Chertoff and former U.S. Deputy Defense Secretary William Lynn who recently wrote, “the greater public good is a secure communications infrastructure protected by ubiquitous encryption at the device, server and enterprise level without building in means for government monitoring.”

This is a big public policy question with compelling benefits and risks on both sides. It’s a conversation that should occur in Congress. If the FBI can require product redesigns of their choosing through the All Writs Act, it risks subverting this process and sidestepping a public conversation about how to prioritize values – defensive security, access to evidence, privacy, etc. – in tech policy.

Technical complications

Much of the public debate has focused on how many phones will be affected by the order to design and deploy a modified and less secure version of the iOS operating system. The FBI claims interest in a single phone. Apple claims that the backdoor would endanger hundreds of millions of iPhone users.

“In the physical world, it would be the equivalent of a master key, capable of opening hundreds of millions of locks — from restaurants and banks to stores and homes. No reasonable person would find that acceptable.” — Apple

Legal precedent is certainly an important question; if the All Writs Act can compel Apple to design and deploy software in this case, then would Apple also have to do so for the other 13 devices covered by other federal court orders? Or the 175 devices of interest to the Manhattan District Attorney? Will it only require assistance where the government possesses the phone? Or can the All Writs Act be used to push malicious software updates to a device to proactively collect data? What should Apple’s response be when this case is cited by governments of other countries (including China) to compel disabling the PIN entry limits or other security features of an activist’s iPhone?

But the danger of a backdoor exists separately from legal precedents. What if the custom, insecure operating system were to fall into the wrong hands? Apple notes in their motion that it would be a highly-prized asset, sought by hackers, organized crime syndicates and repressive regimes around the world. Developing such software would endanger the security and privacy of all iPhone users in a way that couldn’t be fixed by any software update.

To the FBI’s credit, the conditions of the court order try to limit the risk of this dangerous software falling into the wrong hands: customizing the software to run only on the San Bernardino phone, and unlocking the phone on the Apple campus without releasing the custom insecure software to the FBI.

However, security practitioners more than anyone recognize the limits of well-intentioned methods such as these. They believe in defense in depth, as advocated by the National Security Agency. Rather than relying on a single protective wall or security measure, good security anticipates bugs and mitigates attacks by building security throughout all parts of a system.

Could the design of the custom-insecure operating system limit its applicability and danger if inadvertently released? Apple engineers would certainly try, and much of the expense of developing the software would be the extensive testing necessary to reduce those dangers. But no large scale piece of software is ever written bug-free.

And what is the likely response of rational companies faced with hundreds or thousands of requests to unlock secure devices they’ve sold to the public? Sure, Apple may be financially capable of creating boutique code to unlock every individual phone law enforcement wants access to, or at least many of them. But other companies may build in backdoors to accommodate law enforcement access with minimal impact to the business bottom line.

The result? An end run around a legislative process that has to date been unconvinced that backdoors are good national policy, and decreased security for all users.

What next?

Beyond the courtroom, Congress has jumped back into the fray with a hearing in the House Judiciary Committee: The Encryption Tightrope: Balancing Americans’ Security and Privacy.

But the discussion will also include software and hardware engineers. As technical designers see the discretion of law used (or abused) to access communications or undermine security, they will seek technical methods to enforce security in ways increasingly difficult to reverse or undermine.

To take a piece of recent history, revelations during 2013 of the NSA’s mass surveillance of online communications and sabotage of security standards led to organizational and architectural responses from the technical community. The Internet Engineering Task Force concluded that pervasive monitoring is an attack on privacy and one that must be mitigated in the development of the basic standards that define the Internet. A flurry of activity has led to increased encryption of online communications.

As we discussed with the LA Times last week, we expect to see more encryption in cloud services; using a design pattern of exclusively user-managed keys, service providers may build storage and processing services where they are unable to decrypt content for law enforcement and where hackers will be unable to review the data even after breaching a company’s security.
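A minimal sketch of that user-managed-keys pattern, using the third-party Python `cryptography` package: the key is generated and kept client-side, so the storage provider only ever sees ciphertext. This is illustrative of the design pattern, not any particular provider’s implementation:

```python
from cryptography.fernet import Fernet

# Sketch of the "exclusively user-managed keys" pattern. The key is
# generated and held on the client; the storage provider stores only
# ciphertext, so it has nothing intelligible to hand over to law
# enforcement, or to lose in a breach.
key = Fernet.generate_key()              # never leaves the user's device
f = Fernet(key)

ciphertext = f.encrypt(b"meet at 6pm")   # this is all the server stores
print(f.decrypt(ciphertext))             # only the key holder can read it
```

The trade-off, of course, is that a provider who never holds the key also cannot recover the data for a user who loses it.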

Likewise, look for more work in academia and in industry on: reproducible builds, certificate transparency, homomorphic encryption, trusted platform modules, end-to-end encryption and other technical capabilities that allow for providing services with guarantees of privacy and security from tampering, whether by a hacker, a national intelligence agency or, via court order, the service provider itself.

The next battle in these Design Wars, even after the outcome of the Apple v. FBI case, will be whether the legal process tries to frustrate these technical efforts to provide enhanced security and privacy to people around the globe.



by Nick Doty at March 03, 2016 04:00 PM

February 28, 2016

MIMS 2016 Final Project

User Onboarding

User onboarding is a volatile stage in the journey of a user. Lots of strong opinions get formed during these first steps. A user is trusting you with their time, and the very first thing they’ll see and interact with is a series of screens, actions and instructions that will set the tone for the rest of their experience. As we all know, first impressions are a crucial part of a user’s assessment of a product: that first meal at a certain restaurant, the first car you owned (Hyundai Elantra former owner right here!), your first camping trip, and so on.


Information products have their own quirks and affordances. You might want to create an account for your user when they first open your app and get information to be able to tailor your service to them. In our case we need a bunch of info, some of it sensitive, like religious dietary restrictions. We don’t know how users might react to some of these questions, and at the same time we need to maintain near-zero friction for the user at all times. Lots of variables.

Lucky for us we’ve done our homework (literally) and know a thing or two about the importance of these first steps. We also have some talented folks amongst our ranks who are pretty passionate about understanding user needs during this process and how we can build an appealing, respectful and useful experience. Work is happening!

We had a successful surveying session with schoolmates and strangers and have amassed tons of eye-opening insights and critiques. For instance, most people have very personal “mental flows” in which they dissect a menu and go about making choices on what to order. These flows are indicative of a person’s beliefs and priorities, and they shed light on how koAlacart should be structured. Stay tuned, more updates coming soon…


by nsoldiac at February 28, 2016 11:32 PM

February 25, 2016

Ph.D. alumna

What is the Value of a Bot?

Bots are tools, designed by people and organizations to automate processes and enable them to do something technically, socially, politically, or economically.

Most of the bots that I have built have been in the pursuit of laziness. I have built bots to sit on my server to check to see if processes have died and to relaunch them, mostly to avoid trying to figure out why the process would die in the first place. I have also built bots under the guise of “art.” For example, I built a bot to crawl online communities to quantitatively assess the interactions.
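For the curious, a minimal sketch of that kind of lazy watchdog bot might look like this on a Unix server. The process name and relaunch command are placeholders, and this is illustrative rather than the author’s actual code:

```python
import subprocess
import time

# A minimal, illustrative watchdog bot for a Unix server: poll for a
# process by name and relaunch it if it has died. "my_worker.py" is a
# placeholder for whatever process is being kept alive.
def is_running(name: str) -> bool:
    """True if any running process matches the given name (via pgrep)."""
    return subprocess.run(["pgrep", "-f", name],
                          capture_output=True).returncode == 0

while True:
    if not is_running("my_worker.py"):
        subprocess.Popen(["python", "my_worker.py"])  # relaunch it
    time.sleep(30)  # poll every 30 seconds
```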

I’ve also written some shoddy code, and my bots haven’t always worked as intended. While I never designed them to be malicious, a few poorly thought through keystrokes had unintended consequences. One rev of my process-checker bot missed the mark and kept launching new processes every 30 seconds until it brought the server down. And in some cases, it wasn’t the bot that was the problem, but my own stupid interpretation of the information I got back from the bot. For example, I got the great idea to link my social bot designed to assess the “temperature” of online communities up to a piece of hardware designed to produce heat. I didn’t think to cap my assessment of the communities and so when my bot stumbled upon a super vibrant space and offered back a quantitative measure intended to signal that the community was “hot,” another piece of my code interpreted this to mean: jack the temperature up the whole way. I was holding that hardware and burnt myself. Dumb. And totally, 100% my fault.

Most of the bots that I’ve written were slipshod, irrelevant, and little more than a nuisance. But, increasingly, huge systems rely on bots. Bots make search engines possible and, when connected to sensors, are often key to smart cities and other IoT instantiations. Bots shape the financial markets and play a role in helping people get information. Of course, not all bots are designed to be helpful to large institutions. Bots that spread worms, viruses, and spam are often capitalizing on the naivety of users. There are large networks of bots (“botnets”) that can be used to bring down systems (e.g., DDoS attacks). There are also pesky bots that mess with the ecosystem by increasing people’s Twitter follower counts, automating “likes” on Instagram, and creating the appearance of natural interest even when there is none.

Identifying the value of these different kinds of bots requires a theory of power. We may want to think that search engines are good, while fake-like bots are bad, but both enable the designer of the bots to profit economically and socially.

Who gets to decide the value of a bot? The technically savvy builder of the bot? The people and organizations that encounter or are affected by the bot? Bots are being designed for all sorts of purposes, and most of them are mundane. But even mundane bots can have consequences.

In the early days of search engines, many website owners were outraged by search engine bots, or web crawlers. They had to pay for traffic, and web crawlers were not seen as legitimate or desired traffic. Plus, they visited every page and could easily bring down a web server through their intensive crawling. As a result, early developers came together and developed a proposal for web crawler politeness, including a mechanism known as the “robots exclusion standard” (or robots.txt), which allowed a website owner to dictate which web crawler could look at which page.
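That standard survives to this day; in Python, a polite crawler can honor it with the standard library’s `robotparser` before fetching any page (example.com and the user-agent string below are placeholders):

```python
from urllib import robotparser

# A polite crawler's first step under the robots exclusion standard:
# fetch robots.txt and honor it before requesting any page.
# (example.com and "MyResearchBot/1.0" are placeholders.)
rp = robotparser.RobotFileParser("https://example.com/robots.txt")
rp.read()

url = "https://example.com/private/page.html"
if rp.can_fetch("MyResearchBot/1.0", url):
    print("allowed to crawl", url)
else:
    print("robots.txt asks us to stay out of", url)
```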

As systems get more complex, it’s hard for developers to come together and develop politeness policies for all bots out there. And it’s often hard for a system to discern between bots that are being helpful and bots that are a burden and not beneficial. After all, before Google was Google, people didn’t think that search engines could have much value.

Standards bodies are no longer groups of geeky friends hashing out protocols over pizza. They’re now structured processes involving all sorts of highly charged interests — they often feel more formal than the meeting of the United Nations. Given high-profile disagreements, it’s hard to imagine such bodies convening to regulate the mundane bots that are creating fake Twitter profiles and liking Instagram photos. As a result, most bots are simply seen as a nuisance. But how many gnats come together to make a wasp?

Bots are first and foremost technical systems, but they are derived from social values and exert power into social systems. How can we create the right social norms to regulate them? What do the norms look like in a highly networked ecosystem where many pieces of the pie are often glued together by digital duct tape?

(This was originally written for Points as part of a series on how to think about bots.)

by zephoria at February 25, 2016 12:50 AM

February 24, 2016

Center for Technology, Society & Policy

Rough cuts on the incredibly interesting implications of Facebook’s Reactions

By Galen Panger, CTSP | Permalink

How do we express ourselves in social media, and how does that make other people feel? These are two questions at the very heart of social media research including, of course, the ill-fated Facebook experiment. Facebook Reactions are fascinating because they are, even more explicitly than the Facebook experiment, an intervention into our emotional lives.

Let me be clear that I support Facebook’s desire to overcome the emotional stuntedness of the Like button (don’t even get me started on the emotional stuntedness of the Poke button). I support the steps the company has taken to expand the Like button’s emotional repertoire, particularly in light of the company’s obvious desire to maintain its original simplicity. But as a choice about which emotional expressions and reactions to officially reward and sanction on Facebook, they are consequential. They explicitly present the company with the knotty challenge of determining the shape of Facebook’s emotional environment, and they have wide implications for the 1.04 billion of us who visit Facebook each day. Here are a few rough reactions to Facebook Reactions.

  • To some extent, Reactions track existing research about emotions that motivate sharing in social media, and the new buttons now allow us to reflect those emotions back to friends in their posts. We have buttons for enthusiasm, amusement and humor (Haha, Love), awe and inspiration (Wow), and anger (Angry). Interestingly, other high arousal emotions thought to motivate sharing—anxiety is a key one—are absent from Reactions, and one low arousal emotion theoretically less likely to motivate sharing, sadness, is present. This suggests either that the research is incomplete and people very often do express and react with sadness (and more so than anxiety), or that the company has culled the set of supported emotions by criteria other than popularity. I’m guessing it’s a combination of these: that people do frequently express sadness on Facebook and receive high engagement when they do, such as after a death. But sadness also has perhaps the most glaring cognitive dissonance with the Like button, urging the company to split it off regardless of popularity. We simply cannot “like” our friends’ grief.
  • Facebook’s choices about which emotions to include and exclude will shape the platform’s emotional environment and, thus, users’ emotional experiences on it—just like the Facebook experiment did and just as Facebook’s design choices have always done. Sanctioning four positive emotions with Like, Love, Haha and Wow buttons as well as two negative emotions with Angry and Sad buttons means posts that stimulate those emotional reactions are likely to be better rewarded, while posts that do not are less rewarded. Now that we can explicitly reward angry posts from friends with the Angry button, will there be more anger shared? Almost certainly. Now that users no longer have to fight against the grain of the Like button to express grief and sadness, will we see more posts about grief and sadness? Likely. Facebook has encouraged the mix of emotions to stay positive overall, however, with now four buttons in total for expressing positive reactions (even when, arguably, these are the reactions that fit best under the original Like button).
  • Facebook has always clearly wanted to avoid fostering a disparaging emotional environment, which is likely a reason it has fastidiously avoided a Dislike button. It will be interesting, thus, to see whether people sometimes use the new buttons sarcastically or disparagingly, and I can imagine people trolling a company or politician with the Sad and Angry buttons (or even a sarcastic Wow). But these are more ambiguously disparaging than Dislike would have been. Responding with Sad or Angry, for example, may unintentionally invite a literal interpretation: perceiving that they’ve made a friend feel sad or angry, the post’s author might respond with empathy and start a dialogue about why the reaction was negative. This is much less confrontational than a Dislike button—and, arguably, superior design.
  • In Facebook’s announcement, the company hints at a very big question mark: how to rank posts in News Feed with the new information from these buttons. “Initially … if someone uses a Reaction, we will infer they want to see more of that type of post. … Over time we hope to learn how the different Reactions should be weighted differently by News Feed.” This is a thorny issue that puts Facebook’s role in shaping our emotional experiences into sharp relief. If sad posts make us feel sad, but in expressing that reaction, Facebook decides to show us more sad posts, which deepen our sadness, is that a good thing? If we react with anger at others’ angry posts, is it a good thing that Facebook will show us more posts that perpetuate our anger? Beyond the personal implications, what of the political implications? Forget the filter bubble: Facebook’s ranking of News Feed could suck us into an emotional bubble. (A toy sketch of reaction weighting follows this list.)
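
To make the weighting question concrete, here is a toy sketch of what “weighting the different Reactions” in a ranking score might look like. The weights, names, and scoring function are entirely hypothetical; nothing here reflects Facebook’s actual system.

```python
# Toy model of weighting Reactions in a ranking signal. All weights and
# names are hypothetical; this is not Facebook's actual News Feed logic.
REACTION_WEIGHTS = {"like": 1.0, "love": 1.5, "haha": 1.2,
                    "wow": 1.3, "sad": 0.8, "angry": 0.5}

def engagement_score(reaction_counts):
    """Collapse per-reaction counts into a single ranking signal."""
    return sum(REACTION_WEIGHTS.get(name, 1.0) * count
               for name, count in reaction_counts.items())

# A post with 10 Likes and 4 Angry reactions: 10*1.0 + 4*0.5 = 12.0
print(engagement_score({"like": 10, "angry": 4}))
```

Turning such weights up or down for Sad and Angry is exactly the kind of lever that would tune how much sadness or anger News Feed circulates.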

Luckily, the company has arguably been careful in rolling out these new buttons, in a way it often is not. Facebook drew on its years-long relationship with Dacher Keltner, the Berkeley psychology professor who also consulted on Inside Out, to craft its Reactions, and tested the new buttons for a long time before rolling them out today. In addition, by adding a Wow button, Facebook may further the platform as a vehicle for spreading awe, an emotion Keltner’s own research suggests has particular benefits for health and well-being, beyond those of other positive emotions (though, be careful—a Wow reaction could also be the sign of an envy-inducing post).

But here, now, we are in the tricky space where Facebook is explicitly choosing the emotions we may feel and not feel as we interact with News Feed throughout the day. This is no less “manipulative” than the Facebook experiment, and by rolling out globally, arguably more consequential. Choices the company makes in ranking News Feed will now more explicitly affect the amount of amusement, love and awe we experience in our daily lives—but also the amount of anger and sadness. Facebook’s choices will have consequences because emotions have consequences.

by Galen Panger at February 24, 2016 11:51 PM

February 23, 2016

BioSENSE research project

February 18, 2016

BioSENSE research project

Software Detects CEO Emotions, Predicts Financial Performance

Software Detects CEO Emotions, Predicts Financial Performance:

Although fear, anger, and disgust are negative emotions, Dr. Cicon found
they correlated positively with financial performance. CEOs whose faces
during a media interview showed disgust–as evidenced by lowered eyebrows
and eyes, and a closed, pursed mouth–were associated with a 9.3% boost in
overall profits in the following quarter.

February 18, 2016 06:17 PM

February 17, 2016

BioSENSE research project

February 16, 2016

BioSENSE research project

prostheticknowledge: Young Women Sitting and Standing and...

prostheticknowledge:

Young Women Sitting and Standing and Talking and Stuff (No, No, No)

Lo-Fi tech performance art by @sondraperry01 uses tft-fitted goggles to emphasise eyes and their non-verbal communication during a conversation:

Young Women Sitting and Standing and Talking and Stuff (No, No, No)
2 hour durational performance from 6 to 8 PM on April 21, 2015
Performers: Joiri Minaya, Victoria Udondian, and Ilana Harris-Babou
Safety goggles, 3 tft screens, 3 mica media players, 3 usb sticks, 3 extension cords, 3 Hanes Ultimate Cotton® Crewneck Adult Sweatshirts, zip ties

Link

February 16, 2016 07:58 PM

The Internet Of Things Will Be The World's Biggest Robot

The Internet Of Things Will Be The World's Biggest Robot:

seems unlikely it will be 1 robot - likely multiple robots with different sensors and actuators sharing data in ways that have more to do with deals signed by companies (or not). the idea of the IoT+servers as some singular super-robot is about as far-fetched as the Web 2.0 dream of every last service exposing a REST API. won’t happen - for human and business reasons alike —nick

February 16, 2016 06:21 AM

Authentication Using Pulse-Response Biometrics

Authentication Using Pulse-Response Biometrics:

a novel biometric based on how people respond to a small electric pulse applied to the palm of the hand.

February 16, 2016 04:18 AM

Preventing Lunchtime Attacks: Fighting Insider Threats With Eye Movement Biometrics

Preventing Lunchtime Attacks: Fighting Insider Threats With Eye Movement Biometrics:

using eye movement for authentication, and unique identification among 30 subjects. according to the study, eye movement is hard to spoof.

February 16, 2016 04:16 AM

Interactionist AI and the promise of ubicomp, or, how to put your box in the world without putting the world in your box

Interactionist AI and the promise of ubicomp, or, how to put your box in the world without putting the world in your box:

In many ways, the central problem of ubiquitous computing – how computational systems can make sense of and respond sensibly to a complex, dynamic environment laden with human meaning – is identical to that of Artificial Intelligence (AI). Indeed, some of the central challenges that ubicomp currently faces in moving from prototypes that work in restricted environments to the complexity of real-world environments – e.g. difficulties in scalability, integration, and fully formalizing context – echo some of the major issues that have challenged AI researchers over the history of their field. In this paper, we explore a key moment in AI’s history where researchers grappled directly with these issues, resulting in a variety of novel technical solutions within AI. We critically reflect on six strategies from this history to suggest technical solutions for how to approach the challenge of building real-world, usable solutions in ubicomp today.

February 16, 2016 12:16 AM

February 15, 2016

BioSENSE research project

February 12, 2016

BioSENSE research project

"The Link Between Neanderthal DNA and Depression Risk By mining electronic medical records,..."

The Link Between Neanderthal DNA and Depression Risk

By mining electronic medical records, scientists show the lasting legacy of prehistoric sex on modern humans’ health.

- The Link Between Neanderthal DNA and Depression Risk - The Atlantic

February 12, 2016 10:51 PM

February 11, 2016

BioSENSE research project

prostheticknowledge: First Person Slitscanning Series of visual...

prostheticknowledge:

First Person Slitscanning

Series of visual experiments by Terence Broad explores variations on the photo-delay method using geometric parameters:

Experiments with different geometric variations on the tried and tested technique of slitscanning. All the videos were made using custom C++ software and led to my work on a commission for Converse.

Link

February 11, 2016 08:25 AM

prostheticknowledge: iDummy A robotic mannequin designed to...

prostheticknowledge:

iDummy

A robotic mannequin designed to create and model clothing, altering its form for different body shapes:

i.Dummy, revolutionary physical fitting avatar enabling users to adjust and achieve over hundreds of human body measurements and shapes with just few clicks on computer.

Complicated and meticulous mechanical structures comprising over 1000 parts are built and constructed inside i.Dummy to achieve immediate, accurate and reliable i.Dummy measurements.

Body panels are designed by professionals based on years of human body researches over worldwide population. With the driving force from internal parts, a variety of reasonable body proportions from extra small to extra large can be attained perfectly.

More Here

February 11, 2016 08:25 AM

"BROOKE: No conversations…it’s mostly selfies. Depending on the person, the selfie changes. Like, if..."

“BROOKE: No conversations…it’s mostly selfies. Depending on the person, the selfie changes. Like, if it’s your best friend, you make a gross face, but if it’s someone you like or don’t know very well, it’s more regular.”

- Teenagers Are Much Better At Snapchat Than You

February 11, 2016 06:56 AM

"A DARPA-funded research team has created a novel neural-recording device that can be implanted into..."

“A DARPA-funded research team has created a novel neural-recording device that can be implanted into the brain through blood vessels, reducing the need for invasive surgery and the risks associated with breaching the blood-brain barrier.”

- Minimally Invasive “Stentrode” Shows Potential as Neural Interface for Brain

February 11, 2016 06:49 AM

February 10, 2016

Center for Technology, Society & Policy

The need for interdisciplinary tech policy training

By Nick Doty, CTSP, with Richmond Wong, Anna Lauren Hoffman and Deirdre K. Mulligan | Permalink

Conversations about substantive tech policy issues — privacy-by-design, net neutrality, encryption policy, online consumer protection — frequently evoke questions of education and people. “How can we encourage privacy earlier in the design process?” becomes “How can we train and hire engineers and lawyers who understand both technical and legal aspects of privacy?” Or: “What can the Federal Trade Commission do to protect consumers from online fraud scams?” becomes “Who could we hire into an FTC bureau of technologists?” Over the past month, members of the I School community have participated in several events where these tech policy conversations have occurred:

  • Catalyzing Privacy by Design: fourth in a series of NSF-sponsored workshops, organized with the Computing Community Consortium, to develop a privacy by design research agenda
  • Workshop on Problems in the Public Interest: hosted by the Technology Science Research Collaboration Network at Harvard to generate new research questions
  • PrivacyCon: an event to bridge academic research and policymaking at the Federal Trade Commission

In talking with people from government, academia, industry, and civil society, we identified several common messages:

  • a value of getting academics talking to industry, non-profits and government is that we can hear concrete requests
  • there is a shared recognition that, for many values that apply throughout the lifecycle of an organization or a project, we require trained people as well as processes and tools
  • because these problems are interdisciplinary, there is a new and specific need for interdisciplinary people to bridge gaps; we hear comments like “we need more Latanya Sweeneys” or “we need more Ashkan Soltanis” and “we need people to translate”

We are also urged by these events to define “interdisciplinary” broadly. Tech policy problems are not only problems of law and software engineering — they also demand social scientific, economic, and humanistic investigation, as well as organizational or philosophical/ethical analyses. Such issues also require the methodological diversity that accompanies interdisciplinary collaboration; in particular, we have been pleasantly surprised by how novel, and how well received, lessons from human-centered design and design practice have been among lawyers and engineers working in privacy.

Workshops like these are a good place to identify needs, problems and open research questions. But they’re also opportunities to start sketching out responses and proposing solutions. In view of these recent events, we stress the following takeaways:

  1. different institutions working on training for tech policy can build on previous conversations and events to collaboratively develop curricula and practical knowledge
  2. funding is available, including potential sources in NSF and in private foundations
  3. career paths are increasingly available if not easily defined, so we should connect students with emerging opportunities

We hope to have more to share on these three points soon. For now, a call: Are you working on case studies, a syllabus, tools or training for teaching issues at the intersection of technology and policy? We’d like to hear from you. We will develop a repository of these teaching and training resources.



by Nick Doty at February 10, 2016 07:52 PM

BioSENSE research project

Samsung: Future smartwatch could tap your veins for biometric authentication

Samsung: Future smartwatch could tap your veins for biometric authentication:

commentary from an anonymous inside source

Heartbeat is a no-go for biometrics with current tech. You’ve seen the number of electrodes that they put on you to do an ECG. You aren’t going to get the proper resolution to properly assess the PQRST wave with a single or even a dual contact reader. Current accuracy for that approach only works for someone that is taking nitroglycerin, or high blood pressure meds. Think drum machine vs. Neil Peart or Max Roach.

February 10, 2016 05:56 PM

Telcare Diabetes Survey Fact Sheet

Telcare Diabetes Survey Fact Sheet:

some stats from the report (no mention of methods)

Give me the Data Download for my Disease:

  • 88 percent of people want access to real-time data when managing their chronic diseases.

Forget Orange, Tech Savvy is the New Black:

  • 77 percent of millennials are interested in using technology to track their family’s health and fitness.

Trust me, I’m an App:

  • 55 percent of millennials living with diabetes connect with their doctors more frequently because of health apps
  • 55 percent of millennials living with diabetes would trust a health app over a health professional alone for advice.

February 10, 2016 12:09 AM

February 06, 2016

Ph.D. alumna

It’s not Cyberspace anymore

It’s been 20 years — 20 years!? — since John Perry Barlow wrote “A Declaration of the Independence of Cyberspace” — a rant in response to the government and corporate leaders who descend on a certain snowy resort town each year as part of the World Economic Forum (WEF). Picture that pamphleteering with me for a moment…

Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone.

I first read Barlow’s declaration when I was 18 years old. I was in high school and in love with the Internet. His manifesto spoke to me. It was a proclamation of freedom, a critique of the status quo, a love letter to the Internet that we all wanted to exist. I didn’t know why he was in Davos, Switzerland, nor did I understand the political conversation he was engaging in. All I knew was that he was on my side.

Twenty years after Barlow declared cyberspace independent, I myself was in Davos for the WEF annual meeting. The Fourth Industrial Revolution was the theme this year, and a big part of me was giddy to go, curious about how such powerful people would grapple with questions introduced by technology.

What I heard left me conflicted and confused. In fact, I have never been made to feel more nervous and uncomfortable by the tech sector than I did at Davos this year.

Walking down the promenade through the center of Davos, it was hard not to notice the role of Silicon Valley in shaping the conversation of the powerful and elite. Not only was everyone attached to their iPhones and Androids, but companies like Salesforce and Palantir and Facebook took over storefronts and invited attendees in for coffee and discussions about Syrian migrants, while camouflaged snipers protected the scene from the roofs of nearby hotels. As new tech held fabulous parties in the newest venues, financial institutions, long the stalwarts of Davos, took over the same staid venues that they always have.

A Big Dose of AI-induced Hype and Fear

Yet, what I struggled with the most wasn’t the sheer excess of Silicon Valley in showcasing its value but the narrative that underpinned it all. I’m quite used to entrepreneurs talking hype in tech venues, but what happened at Davos was beyond the typical hype, in part because most of the non-tech people couldn’t do a reality check. They could only respond with fear. As a result, unrealistic conversations about artificial intelligence led many non-technical attendees to believe that the biggest threat to national security is humanoid killer robots, or that AI that can do everything humans can is just around the corner, threatening all but the most elite technical jobs. In other words, as I talked to attendees, I kept bumping into a 1970s science fiction narrative.

At first I thought I had just encountered the normal hype/fear dichotomy that I’m faced with on a daily basis. But as I listened to attendees talk, a nervous creeping feeling started to churn my stomach. Watching startups raise downrounds and watching valuation conversations moving from bubbalicious to nervousness, I started to sense that what the tech sector was doing at Davos was putting on the happy smiling blinky story that they’ve been telling for so long, exuding a narrative of progress: everything that is happening, everything that is coming, is good for society, at least in the long run.

Shifting from “big data,” because it’s become code for “big brother,” tech deployed the language of “artificial intelligence” to mean all things tech, knowing full well that decades of Hollywood hype would prompt critics to ask about killer robots. So, weirdly enough, it was usually the tech actors who brought up killer robots, if only to encourage attendees not to think about them. Don’t think of an elephant. Even as the demo robots at the venue revealed the limitations of humanoid robots, the conversation became frothy with concern, enabling many in tech to avoid talking about the complex and messy social dynamics that are underway, except to say that “ethics is important.” What about equality and fairness?

We are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth.

Barlow’s dreams echoed in my head as I listened to the tech elite try to convince the other elites that they were their solution. We all imagined that the Internet would be the great equalizer, but it hasn’t panned out that way. Only days before the Annual Meeting began, news media reported that the World Bank found that the Internet has had a role in rising inequality.

Welcome to Babel

Conversations around tech were strangely juxtaposed with the broader social and fiscal concerns that rattled through the halls. Faced with a humanitarian crisis and widespread anxieties about inequality, much of civil society responded to tech enthusiasm by asking if technology will destabilize labor and economic well-being. A fair question. The only problem is that no one knows, and the models of potential impact are so variable as to be useless. Not surprisingly, these conversations then devolved into sharply split battles, as people lost track of whether all jobs would be automated or whether automation would trigger a lot more jobs.

Not only did any nuance get lost in this conversation, but so did the messy reality of doing tech. It’s hard to explain to political actors why, just because tech can (poorly) target advertising, this doesn’t mean that it can find someone who is trying to recruit for ISIS. Just because advances in AI-driven computer vision are enabling new image detection capabilities, this doesn’t mean that precision medicine is around the corner. And no one seemed to realize that artificial intelligence in this context is just another word for “big data.” Ah, the hype cycle.

It’s going to be a complicated year geopolitically and economically. Somewhere deep down, everyone seemed to realize that. But somehow, it was easier to engage around the magnificent dreams of science fiction. And I was disappointed to watch as tech folks fueled that fire with narratives of tech that drive enthusiasm for it but are so disconnected from reality as to be a distraction on a global stage.

The Internet Is Us. Which Us?

When Barlow penned his declaration, he was speaking on behalf of cyberspace, as though we were all part of one homogeneous community. And, in some sense, we were. We were geeks and freaks and queers. But over the last twenty years, tech has become the underpinning of so many sectors, of so much interaction. Those of us who wanted cyberspace to be universal couldn’t imagine a world in which our dreams got devoured by Silicon Valley.

Tech is truly mainstream — and politically powerful — and yet many in tech still want to see themselves as outsiders. Some of Barlow’s proclamations feel a lot weirder in this contemporary light:

You claim there are problems among us that you need to solve. You use this claim as an excuse to invade our precincts. Many of these problems don’t exist. Where there are real conflicts, where there are wrongs, we will identify them and address them by our means. We are forming our own Social Contract.

There is a power shift underway and much of the tech sector is ill-equipped to understand its own actions and practices as part of the elite, the powerful. Worse, the unicorns who see themselves as underdogs in a world where instability and inequality are rampant fail to realize that they have a moral responsibility. They fight as though they are insurgents while they operate as though they are kings.

What makes me the most uncomfortable is the realization that most of tech seems to have forgotten the final statement that Barlow made:

May it be more humane and fair than the world your governments have made before.

We built the Internet hoping that the world would come. The world did, but the dream that drove so many of us in the early days isn’t the dream of those who are shaping the Internet today. Now what?

by zephoria at February 06, 2016 12:42 AM

February 04, 2016

MIMS 2012

Play the Right Note

Like many Americans, I started learning guitar back in high school. I began where everyone did – strumming basic chords and melodies and building up my finger strength. I got a little better every day, and could eventually play simple songs.

I kept practicing and getting better and pushed myself to learn advanced techniques. I wanted to know how to play all the notes — every scale, every chord, alternate picking, sweep picking, tapping, and so on. I didn’t want my technical proficiency to limit what I could play.

Over time I built up a repertoire of techniques to use. Even though I could play a lot of technically advanced parts, I didn’t necessarily know how to play the guitar well.

When he was developing as a writer, David Foster Wallace (author of Infinite Jest) had a similar focus on the advanced stuff. In Although of Course You End Up Becoming Yourself, his multi-day interview with Wallace, David Lipsky asks what Wallace’s younger self would think of his new work, and whether he thought things like character were pointless. Wallace responded:

Not pointless but that they were easy. And that the hard stuff was more, you know, front of the head. It’s never as stark as pointless or not pointless. It’s, you know, what’s interesting, what’s advanced, what’s next? It’s gotta be — right? Not what’s true, but what’s fresh and novel and whatever. It’s very difficult to get out of that.

In his early work, David pushed himself to produce advanced work that would be considered “fresh” and “novel.” Not because that helped him communicate a larger truth to his readers, but because he wanted to push himself as a writer.

I’ve observed this focus on technical mastery in every creative field. Young filmmakers care more about getting the perfect lighting and shooting on film (rather than digital) than they do about story (check out the HBO series Project Greenlight for great examples of this). Digital product designers create slick UIs that look great on Dribbble, but aren’t usable or feasible or valuable. Programmers build technically impressive solutions to problems that don’t exist.

It’s easy to focus on this “hard” stuff because it has a clear path forward. Just practice what you aren’t good at and you’ll improve. And by learning the “hard” stuff, you’ll distinguish yourself from amateurs and beginners. When I was in high school, this is what I thought it meant to “master” the guitar.

As a result, I overlooked the “easy” stuff. The “easy” stuff is knowing when to use advanced techniques, and when to do something simple. It’s using your skills in service of achieving a higher goal – writing a song, communicating a truth to the reader, telling a good story, building something useful for people, etc.

I’ve since learned that to truly master your craft, you need to know the “hard” technical skills, and how to use those skills. So don’t just focus on learning all the notes. Learn when to play the right note, too.

by Jeff Zych at February 04, 2016 04:29 PM

February 02, 2016

Center for Technology, Society & Policy

Citizen Drones: delivering burritos and changing public policy

By Charles Belle, CTSP Fellow and CEO of Startup Policy Lab | Permalink

It’s official: drones are now mainstream. The Federal Aviation Administration (FAA) estimates that consumers purchased 1 million drones — or if you prefer to use the more cumbersome technical term “Unmanned Aerial Systems” (UAS) — during the past holiday season alone. Fears about how government agencies might use data collected by drones, however, have led to bans against public agencies operating drones across the country. These concerns about individual privacy are important, but they are obstructing an important discussion about the benefits drones can bring to government operations. A more constructive approach to policymaking begins by asking: how do we want government agencies to use drones?

Reluctance amongst policymakers to allow public agencies to operate drones is valid. There are legitimate concerns about how government agencies will collect, stockpile, mine, and retain enduring access to the data drones gather. And to make things more complicated, the FAA has clear jurisdictional primacy but has not set out any clear direction on future regulations. Nonetheless, policymakers and citizens should keep in mind that drones are more than just a series of challenges to privacy and “being under the thumb” of Federal agencies. Drones also offer local public agencies exciting opportunities to expand ambulatory care, deliver other government services more effectively, and support local innovation.

So what is the role for local government?
Unfortunately, there is a dearth of knowledge about how public agencies will use drones or the questions they must answer in order to minimize concerns before operating drones. This leaves government, citizens, and private industry relying on heroic assumptions rather than good information about how to make public policy. Therefore, elevating the conversation requires taking a step back. Drones are challenging legal precedent and demanding new regulatory standards. The resulting conversations necessitate including local government as a critical stakeholder.

The courts have relied on altitude to define the scope of individual privacy in airspace.

For example, concerns about individual privacy drive many of the discussions at the local government level right now. One of the biggest challenges public agencies face is the erosion of traditional jurisprudence protecting individual privacy. For decades the courts have relied on the altitude of manned aircraft flying over homes to define reasonable expectations of privacy from law enforcement. Drones shatter these precepts because of their ability to fly (and hover) at low altitudes and carry increasingly sophisticated monitoring equipment. Given that most interactions between citizens and law enforcement are at the local level, it’s no surprise that local governments and police forces are taking the heat for these concerns. But for public agencies that are not law enforcement, there is no information on how data might be collected, retained, or how that data might be shared amongst agencies.

Is it time to rethink warrants in a data-intensive society, where drones are a commonly used government tool?

In response, some advocates have pushed for substantive changes to our legal systems; for instance, by proposing the application of property rights to protect individual privacy. The origins of privacy law in the United States can be found in tort law — thank the paparazzi. A shift to a property law framework is likely to implicate local governments since property rights are heavily construed by local law, e.g. through local zoning laws. If rule-makers apply property rights law as a framework for operating drones, cities would (should?) have a major influence on how those laws shape the operation of drones. This framework naturally puts cities on the front lines to litigate many legal issues that will, without a doubt, emerge from the operation of drones by public agencies.

But even if we bracket privacy concerns, the operation of drones by public agencies may conflict with the FAA. The FAA manages the national air space. And large companies such as Amazon and Google are agitating for a sort of drone commercial corridor subject to FAA rule-making: “a one agency to rule them all approach.” To be sure, not everyone supports this approach. Either way, cities have a reasonable argument that at least some degree of the urban airspace should be managed at the local level.

Consider that San Francisco, while not the largest city in the United States, has 50 buildings at 400 feet or taller and more in the pipeline. As autonomous vehicles — air, ground, or water for that matter — become more prevalent, it’s not unreasonable for major cities to explore the extension of their regulatory “roof” to the tallest buildings in the city or even just government buildings.

Visual images of airspace management often gloss over an uncomfortable reality: many city buildings are taller than 200 feet (20–25 stories). (More uncomfortably, this visual ignores Tufte.)

Fears stem from lack of information
Protecting individual privacy is important, but that conversation should not be driven by FUD (fear, uncertainty, and doubt). FUD is not conducive to developing informed public policy, which requires data. An evidence-based policymaking approach means putting in place mechanisms that collect data with the objective of making better policy, from protecting privacy to improving building inspections.

A more effective approach to policy development would have local policymakers emphasize the generation of expertise, knowledge, and public input. Just as startups often use The Lean Startup methodology for product development, governments can use a similar approach for policy (see image). For example, a city might use one agency to test the operations of drones in real world situations, with metrics to evaluate success/failure, and iterate on those lessons to expand testing slowly until a body of knowledge and internal expertise is developed. As of now, there is too much we don’t know about the operation of drones by public agencies to make informed decisions.

It might be a bit gimmicky, but that doesn’t mean it’s wrong.

Society shapes the technology we want.
As we feel our way through the new challenges and opportunities drones introduce, we must consider how public agencies should use drones for the public good. Assuming that government has a role to play, it makes sense to study how public agencies will use drones in order to flesh out our understanding of the implications of those tasks. Erecting bright-line rules today might make us feel accomplished, but such rules are not very useful for a well-governed society navigating new technologies. Society should be reflective about technology in order to craft the world we want, not react to the world we are afraid of.

And just because it’s cool.

by charles at February 02, 2016 10:59 PM

MIMS 2016 Final Project

Some insights on cake

So, what is cake?

One could say that a cake is this spongy baked good made out of some sweet batter with some delicious toppings and other variations. Wordnet likes cake, I’d like to think, but it likes to be thorough even more. Wait. What’s this wordnet you speak of? You might be better off googling it, but to keep it simple let’s say it’s a semantic hierarchy of words organized by their meaning (e.g.: vegetable -> root veg. -> potato -> russet potato). So, cake can actually be many things for wordnet. More importantly, cake can be many things in the real world and that’s really why I’m writing this post. Brian and I found ourselves in a bit of an ‘oh shit’ moment earlier when we realized that there are a lot of ‘cakes’ out there that you wouldn’t find in the dessert section.

By doing a quick search in our dataset and looking in yummly.com we found several savory options for cake in many different cuisines. There were fish cakes, latka cakes, salmon cakes, crab cakes, just to name a few. We need a way to keep our system from being fooled by the word ‘cake’ into assuming the dish is always a traditional dessert. Part of that solution lies in the categorization and location of the food in the menu (tagged as an entree rather than a dessert). The other part of the solution hopefully can be answered by the supervised machine learning model we’re working on, which should be able to see the ingredients in the dish and tell that it’s not a dessert, you know, because of all that crabmeat.
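
For a sense of just how ambiguous ‘cake’ is, here is a minimal sketch of the kind of wordnet lookup described above, using NLTK’s interface (assuming NLTK and its WordNet corpus are installed; this is an illustration, not our project’s code):

```python
# Minimal sketch: print every noun sense of "cake" in WordNet along with
# its hypernym chain. Requires NLTK and a one-time nltk.download('wordnet').
from nltk.corpus import wordnet as wn

for synset in wn.synsets('cake', pos=wn.NOUN):
    # Each synset is one sense of "cake"; its hypernym chain shows where
    # that sense sits in the hierarchy (baked good, patty of food, etc.).
    chain = [h.name() for h in synset.closure(lambda s: s.hypernyms())]
    print(synset.name(), '-', synset.definition())
    print('    hypernyms:', ' > '.join(chain))
```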


by nsoldiac at February 02, 2016 10:05 PM

February 01, 2016

MIMS 2016 Final Project

The project “buckets”

We’ve seen how this project unravels and expands and shows its true colors from day one. This is good and bad. New possibilities are exciting, but that also means more baggage on your backlog and, certainly in our case, more technical debt. That said, in an effort to capture a high-level overview of what we’re trying to accomplish, we’ve grouped the main drivers into “buckets”; here’s what we’ve come up with so far:

  • What we would need if we publish an academic article about this
  • How people approach menus, what they think as they go through them. How to reimagine them
  • Dietary restrictions and food preferences. What is the gamut of those preferences? Can we work on making them more granular and concrete? (veganism, kosher, etc)
  • Restaurants; how do they handle customers who have concerns in their domain? What is their perspective on menus? Why do they put what they put on them? etc. Basically understanding the other side of the equation
  • Making decisions about the technology stack. What decisions should we make to support the functionality desired in this application?
  • Analytics and the analytics engine.
  • Menu NLP bucket, which includes the processing, cleaning and EDA of data from menus. This includes the ontology that we’d be using and any upgrade and/or modification we need to perform for it to work up to our requirements.
  • Recipe analytics. This includes:
    • Gathering data. Recipes
    • Quick and dirty NLP on that recipe data
    • Machine learning on it. Creating a multi-label classification model (a minimal sketch follows this list). Most importantly, it will also include the ingredients
  • Supplementary nutrition data such as calories, fats, cholesterol, sodium, etc. How do we collect, process, and present it to users, and how much additional value would it bring for the work it would require?
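
For the multi-label classification bucket above, here is a minimal sketch using scikit-learn; the two recipes and their labels are invented for illustration and are not from our dataset:

```python
# Minimal multi-label classifier over ingredient text. The recipes and
# labels below are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

recipes = [
    "crab meat, breadcrumbs, egg, mayonnaise, old bay seasoning",
    "flour, sugar, butter, eggs, vanilla, chocolate frosting",
]
labels = [["entree", "seafood"], ["dessert"]]

mlb = MultiLabelBinarizer()
y = mlb.fit_transform(labels)            # one binary column per label
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(recipes)    # bag of words over ingredients

# One binary classifier per label lets a dish carry several tags at once.
clf = OneVsRestClassifier(LogisticRegression()).fit(X, y)

test = vectorizer.transform(["crab cake with remoulade sauce"])
print(mlb.inverse_transform(clf.predict(test)))
```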

by nsoldiac at February 01, 2016 10:34 PM

MIMS 2016

January 24, 2016

MIMS 2012

Color Saver

This weekend I built a quick Mac screensaver that displays the current time as a color. The hour is mapped onto the red channel, the minute onto the green channel, and the second onto the blue channel.

I was inspired by What Colour Is It, which converts the current time into a hex value (e.g. 11:02:47 is #110247). But What Colour Is It doesn’t map to every hex value. Its range runs only from #000000 (midnight, AKA black) to #235959 (11:59:59 PM, a darkish blue green), which misses brighter colors closer to white (#FFFFFF). Instead, Color Saver maps each time component onto the full range (0–255) of its color channel.
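
The mapping itself is just three divisions. Here is the arithmetic sketched in Python for illustration (the screensaver itself is a native Mac screensaver, not Python):

```python
# Scale each time component onto the full 0-255 range of a color channel.
from datetime import datetime

def time_to_rgb(t):
    r = round(t.hour   / 23 * 255)  # hour   0-23 -> red   0-255
    g = round(t.minute / 59 * 255)  # minute 0-59 -> green 0-255
    b = round(t.second / 59 * 255)  # second 0-59 -> blue  0-255
    return r, g, b

print('#%02X%02X%02X' % time_to_rgb(datetime.now()))
```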

I experimented with mapping the time components onto hue, saturation, and lightness instead, but that resulted in more ugly colors more often. For example, when seconds represent the color’s lightness, the color will go from completely black to white in the course of a minute, every minute of the day. I found this to be jarring and unpleasant. Mapping onto the RGB channels instead is more calming and mesmerizing.

Download Color Saver from Dropbox. Note: I didn’t code sign the screensaver, so when you double-click to install it you’ll get a warning that it’s from an untrusted source. You’ll have to make an exception in the “Security & Privacy” section of System Preferences to install it.

Feel free to check out the source code on Github.

by Jeff Zych at January 24, 2016 11:33 PM

January 12, 2016

Center for Technology, Society & Policy

Reviewing Danielle Citron’s Hate Crimes in Cyberspace

By Richmond Wong, UC Berkeley School of Information | Permalink

Earlier this year, CTSP sponsored a reading group, in conjunction with the School of Information’s Classics Reading Group, on Danielle Citron’s Hate Crimes in Cyberspace. Citron’s book exposes problems caused by and related to cyber harassment. She provides a detailed and emotional description of the harms of cyber harassment followed by a well-structured legal argument that offers several suggestions on how to move forward. It is a timely piece that allows us to reflect on the ways that the Internet can be a force multiplier for both good and bad, and how actions online interact with the law.

Cyber harassment is disproportionately experienced by women and other often disadvantaged groups (see Pew’s research on online harassment). Citron brings the emotional and personal toll of cyber harassment to life through three profiles of harassment victims. These victims experienced harm not only online, but in the physical world as well. Cyber harassment can destroy professional relationships and employment opportunities, particularly when a victim’s name becomes linked to harassing speech via search engines. Victims may be fired if their nude photos are publicly published. The spread of victims’ personal information such as addresses may lead dangerous and unwanted visitors to their homes.

Compounding these issues, victims of cyber harassment are often not taken seriously. Some people blame the victim, or suggest that the victim should simply stop using social media as a remedy. Others call the victims “drama queens” and excuse abuse as harmless “frat boy” behavior. Still others believe that the online world should have rules distinct from the physical world. Current laws are often under-enforced due to these social views, inadequate understanding of the law, or inadequate technical skills in law enforcement. Local and state law enforcement often have little experience with investigating cybercrimes, and encourage victims to ignore the abuse or to not use the Internet. Even when states have cyber harassment laws, local officers may not be trained to handle those cases.

In response, Citron recommends legal reforms in three areas. First, for combating cyber harassers, she recommends: that state stalking and harassment laws include abuse indirectly communicated to victims, to include threats made on forums and blogs; that states pass laws banning the posting of revenge porn; and that states allow pseudonymous litigation when the allegations would prevent victims from asserting their rights using their real name. Because some victims are targeted by anonymous groups, using their real names is likely to raise the visibility of the offending posts and risk retaliation by cyber mobs.

Second, Citron calls for reform to platform operators’ immunity. Section 230 of the Communications Decency Act provides website operators immunity from liability for user-generated content, allowing websites like YouTube and Twitter to exist. However, this also protects revenge porn sites and other sites that post personal photos and information that enable stalking. Thus she recommends that Section 230 be amended to not provide immunity for sites that purposefully encourage cyber stalking or host nonconsensual pornography.

Third, Citron addresses the fact that employers often use web searches and social media in hiring decisions. Employers have no obligation to tell job applicants whether or not search results affected a hiring decision. Citron recommends passing a “Fair Reputation Reporting Act,” similar to the Fair Credit Reporting Act, to provide due process, forcing employers to reveal what online information they used in making a hiring decision.

Citron has a strong argument and her legal reforms and recommendations should be adopted. However, there are limitations to her recommendations. By focusing on U.S. law, she does not address the problems of offshore operators. Websites and harassers that are based or live outside of the U.S. may not be subject to the same laws, and thus are unaddressed by these recommendations. Furthermore, federal legal reforms that require Congressional action may be difficult given today’s political climate. The speed at which these reforms can occur is important given that cyber harassment is an ongoing problem which warrants a speedy response.

Law is not the only type of regulatory force. Regulatory agencies, markets, norms, and technology or code (recall Lessig’s Code) can also play a role. While at the end of the book Citron briefly mentions the role of education in creating new social norms around harassment, other options are largely overlooked.

Citron does not discuss the potential role of regulatory agencies to combat cyber harassment. There are actions that the Federal Trade Commission (FTC) can take now. The FTC’s power to bring enforcement actions against unfair and deceptive practices can and has been used to target some cyber harassment sites. In 2015, the FTC filed a complaint against Craig Brittain, the operator of a revenge porn site which tried to extort victims for payment in return for taking down nude photos. The FTC should increase enforcement actions under unfair or deceptive practices against sites that engage in cyber harassment. This will increase the potential costs for site owners who decide to engage in this type of activity and help set new social norms about what is and is not acceptable online.

Technical solutions may help abate victims’ exposure to cyber harassment. For instance, the Good Game Auto Blocker is a filter developed by Twitter users, many of whom were the targets of GamerGate cyber mob attacks. The Auto Blocker shares a community-decided list of Twitter harassers. Twitter users who use the Auto Blocker are unable to see the harassers’ tweets. While this does not stop the harassers from tweeting obscenities, it does help block potential victims from the emotional tolls of being bombarded with messages from cyber mobs. Others propose that platforms can make deliberate decisions in their technical design: for example, a site may want to technically disable posting animated images in comments when people begin using that feature for harassment purposes.  A potential positive of technological solutions is that they can be used to block harmful content that originates from outside the U.S.
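
To make the Auto Blocker’s mechanism concrete, here is a minimal sketch of shared-blocklist filtering (the user IDs and tweet structure are hypothetical; this is neither the tool’s actual code nor Twitter’s API):

```python
# Hide any tweet whose author appears on a community-maintained blocklist.
blocklist = {"harasser_1", "harasser_2"}  # hypothetical shared list

def visible_tweets(timeline):
    """Yield only tweets whose authors are not on the shared blocklist."""
    for tweet in timeline:
        if tweet["author"] not in blocklist:
            yield tweet

timeline = [
    {"author": "harasser_1", "text": "..."},
    {"author": "a_friend", "text": "hello!"},
]
print(list(visible_tweets(timeline)))  # only a_friend's tweet survives
```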

In conjunction with Citron’s prudent legal recommendations, an additional focus on regulatory actions, technological approaches, and changes to social norms can more comprehensively address the problems of cyber harassment.

by Nick Doty at January 12, 2016 05:42 PM

MIMS 2010

Thoughts on one year as a parent

(Cross-posted from Medium, a site that people actually visit).

Around this time last year I spent a lot of time walking around, thinking about all the things I wanted to teach my daughter: how a toilet works, how to handle a 4-way stop, how to bake cookies. One year later, the only thing I have taught my daughter about toilets is please stop playing with that now, sweetheart, that’s not a toy. Babies shouldn’t eat cookies. And driving is thankfully limited to a push toy which has nonetheless had its share of collisions. On the other hand, I can recite Time For Bed and Wherever You Are My Love Will Find You from memory. Raffi is racing up the charts of our household weekly top 40. I share the shower every morning with a giant inflatable duck. It has been a challenge and yet still joyful. Here’s an assorted collection of observations and advice from someone who just finished his first trip around the sun as a parent.

Advice

Speaking of advice, I try not to give too much to new parents. They have a surfeit of books, family, friends, and occasionally complete strangers telling them what they should and shouldn’t do. I don’t want to be one more voice in that cacophony. The first months with a new child are a struggle, and you have to do whatever it takes to get through them. Sure, there are probably some universals when it comes to babies, but as someone who has done it just once, I’m not a likely candidate to know what they are. I’m happy to tell you what I think, but only if you want to know. Let’s just assume for the rest of this article that you want to know.

That said, please vaccinate your kids.

Firsts

It’s easy to forget how many experiences an adult has accumulated in their decades alive. The first year for a baby is almost nonstop first experiences. Everything that has long since become ordinary in your life is new to a baby: eating solid food, going to a zoo, taking the bus, touching something cold, petting a dog. The beautiful thing is that being a parent makes all these old experiences new firsts for you too. I hope never to forget the first time I watched Samantha use a straw: she sucked on it—like everything else—and then when water magically came out of the straw she looked startled, and then, suddenly, thrilled, as if she were not merely drinking water, but had discovered water on Mars.

Sleep

Nothing can prepare you for this. Maybe it’s smooth sailing for some parents, but we were exhausted, completely drained, dead to the world, and whatever other synonyms there are for being tired. Lots of people told us that we would be tired beyond belief, but I think this may not be something that can be communicated with language; it can only be learned through experience. I thought that being an experienced all-nighter-puller in college would be good training for having a baby. It’s completely different. In college you stay up all night writing a paper, turn in the paper, and then it’s OK if you sleep for 36 hours during the weekend. Having a baby is like there’s a term paper due every day for months.

Breastfeeding

Breastfeeding is really hard. I don’t know why they don’t work this into more breastfeeding curricula. Caitlin and I took a multi-hour class and I don’t remember this coming up. Just lots of stuff about all the benefits of breastfeeding, how wonderful the bonding is, how the mother will be totally in love with breastfeeding. Nobody wants to attend a breastfeeding class taught by a dude, but if I were teaching one it would go something like this:

  • There are a lot of good things about breastfeeding.
  • By the way, it’s really hard and Mom will probably end up in tears several times.
  • Working with a lactation consultant can be a lifesaver.
  • Formula is not the end of the world.
  • Good luck, happy latching.

Pictures

If you want to make a new parent’s day, ask to see pictures of their baby. I tried not to subject people to them, but there’s only so much self-control one can have. I loved it when people asked.

Stuff

You end up with so much stuff for a baby. There’s a lot of stuff you don’t need. If you skip that stuff, you’ll still have a lot. Car seat, stroller, bottles, diapers, a bathtub, continually outgrown clothes, more diapers, a crib, a rocking chair. And that’s before you even think about toys and books.

Here are some of my favorite things that we bought this past year:

  • Halo sleep sacks. They zip from the top to the bottom which means you only have to unzip them partway for late night diaper changes.
  • LectroFan white noise machine. We actually have two—one for baby and one for the lucky napping adult.
  • NoseFrida. I never would have guessed how much fun decongesting your baby would be with this snot sucker.
  • Giant Inflatable Duck. I can’t say I love sharing my shower with this duck, but Samantha loves it, so I kind of love it too.

One recommendation I make to all my expecting friends is to check out The Nightlight, the baby equivalent of The Wirecutter and The Sweet Home. They don’t give you a spreadsheet of data and rankings, they just tell you what to buy, with a detailed explanation if you care to read it. I did a lot of independent research and ultimately came to many of the same conclusions, so I stopped reading.

Trivia: the set of clothes and items you need for a newborn is called a layette.

Dropcam/Nest Cam

I read something somewhere about video monitors being distracting and got it into my head that we would only use an audio monitor. I didn’t want one more app to hijack my phone. Boy was I ever wrong. First, we live in a small apartment, so the idea that we need radio frequencies to transmit baby sounds across it is ludicrous. Second, I got so much peace of mind from actually seeing what my baby is doing that I highly recommend it. When we were doing sleep training it was a huge help to be able to see that things were “OK”. Streaming live video from my house to the cloud is a bit creepy, but it’s so nice to check on her taking a nap when I’m at work, and being able to rewind 30 seconds and see what just happened is handy. I guess this is how privacy dies: with little, convenient features here and there.

Other Parents

Being a parent is like gaining membership to the world’s least exclusive club, but finding out that the club is somehow still great. It gave me a new way to bond with other friends and coworkers who are also parents. I thought (naively in retrospect) that all parents have a sense of this shared camaraderie. As it turns out, though, parents are just a random sample of people which means that some of them are strange or petty or just mean. I was surprised by how many interactions with other parents left me feeling like somehow we were still in high school: cliques at drop-in play areas, passive-aggressive remarks about the strangest things.

Airplanes

You could write a Shakespearean tragedy about the Herculean trials of flying with a baby. We rolled the dice a couple times and got lucky but it was exhausting.

Parental leave

I had access to a generous paternity leave policy—10 weeks paid—due to California’s progressive policies and my employer’s good will. It’s completely crazy that this isn’t the norm across the U.S. The law of the land is that, if you meet the requisite conditions, you are entitled to 12 weeks of unpaid leave. I cannot understand how the wealthiest country in the world can’t afford to prioritize reasonable family leave policies (and neither can John Oliver, who has a much funnier take on the state of parental leave in America). On top of that, it’s not like new parents are actually doing the best work of their career. I was sleepwalking through my job for weeks even after I got back.

Politicians say they love families; how about actually helping them out when they need it?

Joy

Being a new parent is a struggle, even if you are thrilled to have a child. You lose so much of your previous life: free time, hobbies, spontaneous dining-out, sleeping in—it’s a lot of change. You trade these things in for something new. This new thing is hard to describe in a way that doesn’t sound trite or glib. I’d say it feels like trading some happiness for joy.

I love being Samantha’s father. The past year has had its share of challenges, but honestly we’ve been so fortunate, and I hope that confronting our small share of problems has made me a more empathetic person. Samantha arrived on time, easily, and healthy; we didn’t have the burden of illness or an extended stay in the NICU. We never worried about the cost of diapers or formula; I can only imagine how crushing it must feel not to have what you need to take care of your child. We have had help from so many of our family and friends, help you absolutely need to keep your head on straight. I have a wonderful partner, and I don’t know how I would get through parenting without Caitlin; I have a new appreciation for single parents.

Who knows what we’ll teach our daughter this next year, or what she’ll teach us. It has been an incredible journey so far. I can’t believe how many years we get to have. They won’t be non-stop happiness, but I hope they’re as joyful as this first one.


“Let’s say I wanted to read more tweets about babies. Where would I go?” “You would go to this collection on Twitter.” “That was a hypothetical. No one wants to read more tweets about babies.”

by Ryan at January 12, 2016 05:37 PM