2019-02-18

week 3: Daniel Howe’s “Surveillance Countermeasures: Expressive Privacy via Obfuscation”

I’ll begin this post by stating that I do believe computational technologies are inherently philosophical and political. I am beginning to learn about the origins of computation and have concluded that they were always already about power, time, and control. If computation started there, with notions of power and control, then the argument for obfuscation as a countermeasure for better privacy and control stands. If we are to use a computer, we should assume that the computer has as much power as we do. Our power lies in the fact that the computer can’t do much without our inputs. The computer’s power lies in the fact that we can’t do much without knowing how to program it, and even then we don’t really control it, because programming has become so abstracted from what computers are actually doing in a physical sense.

I think projects like TrackMeNot, I Like What I See, and ScareMail are all really interesting, maybe even subversive, but I do wonder what their actual effects are. Take ScareMail: it draws NSA attention to your inbox by adding NSA search terms to the body of your emails.
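To picture the mechanism for myself, here’s a rough Python sketch of the general idea. This is not ScareMail’s actual code (the real project is a browser extension that generates full narrative text), and the word list is my own invention:

```python
import random

# Illustrative stand-ins for watchlist-style terms; these words are my own
# examples, not ScareMail's actual vocabulary.
DECOY_TERMS = ["plague", "facility", "infrastructure", "agent", "outbreak", "leak"]

def add_decoy_text(body: str, n_terms: int = 4) -> str:
    """Append a run of decoy terms to an outgoing email body."""
    decoys = " ".join(random.sample(DECOY_TERMS, k=n_terms))
    return f"{body}\n\n-- \n{decoys}"

print(add_decoy_text("See you at lunch tomorrow."))
```

Even this toy version makes the questions below concrete: the message now matches keyword filters without carrying any actual signal.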

What does it mean to bring attention to your inbox in this way? Can we assume that you have “nothing to hide”? I can agree that it’s a way to wedge extra unusable information into whatever database the NSA is keeping, but don’t projects like ScareMail just make the NSA’s computational machine bigger and more bloated? Is it effective enough to make the NSA question whether or not their algorithms are working? Does it encourage them to make more pointed, and therefore probably more problematic, decisions about what counts as threatening text and what doesn’t?

This paper also had me thinking about the meaning of the word sousveillance. I think of sousveillance as “regular people” surveilling the authorities who are surveilling them. If authorities can use surveillance to impose behavioral modification, and even oppression, in the name of safety, then people can also surveil them in the name of their own definitions of freedom and safety. I’m not convinced that the tool AdNauseam supports sousveillance. Being able to see what you look like to a network of advertising companies is not the same as being able to see, for instance, how much money a company is making off of your page views, or where your job application failed to get through a company’s hiring algorithm.

After reading this paper I am also interested in the amount of resources that go into tracking systems. Howe notes that up to 80% of project time is spent “cleaning” the data. While I agree that even small amounts of noise produced by obfuscation tools are impactful, it’s hard for me to wrap my head around whether this is a positive tactic. In other words, does this kind of noise push data-mining companies to get narrower in their collection? Is narrower better? I do like the idea of a temporary surveillance-free space created by tools that obfuscate by drawing so much attention to the data that collection engines simply learn to ignore it…but how can we know that this is even possible, or ever happening?
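To make the “small amounts of noise” idea concrete for myself, here’s a rough Python sketch of what a TrackMeNot-style tool does. It’s purely illustrative, with an invented query list; as I understand it, the real extension seeds its queries from RSS feeds and evolves them over time rather than cycling a fixed list:

```python
import random
import time

# Hypothetical seed queries; the real TrackMeNot grows its queries from RSS
# feeds and past results rather than a fixed list like this one.
SEED_QUERIES = ["weather radar", "salmon recipes", "bus schedule", "used bikes"]

def decoy_query() -> str:
    """Pick one decoy search to interleave with a user's real queries."""
    return random.choice(SEED_QUERIES)

# A real extension would submit these to the search engine; this just logs them.
for _ in range(3):
    print("decoy search:", decoy_query())
    time.sleep(random.uniform(0.5, 2.0))  # irregular timing is harder to filter out
```

If even this little bit of noise has to be filtered out, it adds to the 80% of project time already spent cleaning data, which is exactly the cost obfuscation is trying to impose.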