
Information and communication technologies increasingly preserve information about the individuals using them,15 and surveillance systems are spreading into the workplace (in the form of email and web monitoring) and into other spheres of daily activity (e.g., broadcasting the interior of night clubs, bars, or beaches [52]). Often, these systems collect information unbeknownst to the user. Furthermore, the development of digital sensors has enabled the collection of novel types of information in everyday situations (e.g., automatic toll payment systems based on RFID and license plate recognition [111], implantable sensors monitoring the health of patients [205], monitoring systems deployed in the homes of elderly people [38]). Technical and economic considerations suggest that sensing technologies will become a ubiquitous infrastructure, open to use by individuals as well as organizations for a wide array of purposes. A distinctive characteristic of these systems is that interaction with them is increasingly implicit, falling outside the control loop described by Norman’s “Seven Stages of Action” [227]. Such implicit interaction requires new mechanisms for managing the resulting risks to personal information and privacy.

One possible solution to the problems above is to develop more effective and less burdensome user interfaces that help people make good disclosure decisions. A key challenge is that there is currently no agreement on which interaction styles are best suited to each type of information disclosure. Rule- or policy-based mechanisms may be suboptimal for many applications, as discussed in Section 3.2.2. Other interaction styles, such as social translucency and plausible deniability, might achieve comparable effects with far less burden and a greater sense of control [28], but there are no clear guidelines on how to build plausible deniability into computing systems. Ambiguity has been discussed as a design resource in other contexts (e.g., games) [117], and we believe it will become an increasingly important design element in the context of privacy. In short, much more work is needed to determine the efficacy of these different ideas across a wider range of contexts.
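To make the notion of plausible deniability concrete, consider a minimal sketch of a presence-sharing service, loosely modeled on the “invisible” mode of instant messengers. The key property is that a requester receives the same response whether the user is genuinely unreachable or has deliberately declined to share, so refusal carries no social cost. The service, its names, and its policy structure are hypothetical, invented here purely for illustration; they do not describe any system discussed above.

    # Sketch (hypothetical): plausible deniability in presence sharing.
    from dataclasses import dataclass
    from enum import Enum, auto

    class Visibility(Enum):
        SHARING = auto()    # user shares presence with this requester
        DECLINED = auto()   # user has chosen not to share
        OFFLINE = auto()    # user's device is unreachable

    @dataclass
    class PresenceService:
        # Per-(user, requester) visibility settings.
        policy: dict

        def query(self, user: str, requester: str) -> str:
            state = self.policy.get((user, requester), Visibility.DECLINED)
            if state is Visibility.SHARING:
                return f"{user} is online"
            # DECLINED and OFFLINE are deliberately indistinguishable:
            # the requester cannot tell refusal from absence.
            return f"{user} is unavailable"

    service = PresenceService(policy={("alice", "bob"): Visibility.SHARING,
                                      ("alice", "carol"): Visibility.DECLINED})
    print(service.query("alice", "bob"))    # "alice is online"
    print(service.query("alice", "carol"))  # "alice is unavailable" -- refusal or absence?

The deliberate ambiguity of the “unavailable” response is the entire design: collapsing distinct internal states into one observable outcome is what lets the user decline without appearing to decline.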

Another possibility is to devise a better division of labor that helps shoulder the burden of managing personal privacy. A consensus is slowly building in the research community that privacy-sensitive applications cannot make all data transfers explicit, nor require users to track them all; the resulting user interfaces and interaction patterns would simply be too complex and unwieldy. From a data protection viewpoint, experience shows that most data subjects are unable or unwilling to control every disclosure of personal information or to keep track of all the parties that process their personal data [64, 95]. Distributing the burden of managing one’s personal privacy across a combination of operating systems, networking infrastructure, software applications, system administrators, organizations, and third parties could help address this problem. Ideally, these entities would advise users or make trusted decisions on their behalf, with the ultimate goal of reducing the overall effort required to make good decisions. Taking email spam as an example, multiple entities, including ISPs, local system administrators, and automatic filters, all contribute to reducing the amount of spam that end users receive. Here it makes sense to share the costs of spam reduction, since the hardship would otherwise be borne by a large number of individuals.
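The spam example can be sketched as a chain of independent filters, each operated by a different party, so that no single party (least of all the end user) bears the whole burden. The filters, thresholds, blocklists, and addresses below are all hypothetical, chosen only to illustrate the layered division of labor.

    # Sketch (hypothetical): division of labor as layered spam filtering.
    def isp_filter(msg: dict) -> bool:
        """ISP-level: drop mail from known bad relays (mock blocklist)."""
        return msg["relay"] not in {"spam-relay.example"}

    def org_filter(msg: dict) -> bool:
        """Organization-level: reject messages with a high (mock) content score."""
        return msg["spam_score"] < 5.0

    def client_filter(msg: dict) -> bool:
        """End-user client: final, personalized pass (e.g., a sender whitelist)."""
        return msg["sender"] in {"friend@example.org"} or msg["spam_score"] < 2.0

    FILTER_CHAIN = [isp_filter, org_filter, client_filter]

    def deliver(msg: dict) -> bool:
        """A message reaches the inbox only if every layer lets it through."""
        return all(f(msg) for f in FILTER_CHAIN)

    candidates = [
        {"sender": "friend@example.org", "relay": "mail.example.net", "spam_score": 3.1},
        {"sender": "unknown@spam.biz", "relay": "spam-relay.example", "spam_score": 9.9},
    ]
    print([deliver(m) for m in candidates])  # [True, False]

Each layer embodies a different party’s knowledge (network-wide relay reputation, organizational policy, personal preferences), which is precisely why distributing the work across them is cheaper than asking any one party, especially the user, to do it all.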

15 For example, personal video recorders capture a person’s television viewing habits, and mobile phones contain photos, call histories, instant messages, and contacts.
