
In many cases, though, privacy-enhancing features cannot be avoided. However, simple privacy precautions are often sufficient. An example is provided by Grasso and Meunier’s evaluation of a ‘smart’ printing system deployed at Xerox R&D France [128]. Their printing system has two main functions: it stores print jobs on the print server for future access, and it has an affinity function that shows, on the header page of each print job, information about similar print jobs submitted by other users. The objective of the latter function is to enable social networking between people interested in the same type of information. Grasso and Meunier claim that the simple privacy-enhancing features built into the system suffice to prevent abuse. First, users must intentionally choose the “smart printer”; regular printers remain available. Second, a “forget” function is available that removes any trace of the print history of a specific user.

In conclusion, the examples above show that the interaction between social norms and technology is often subtle. Privacy by obscurity, as in Palen’s case study, can effectively curtail privacy violations, even if it is not a “strong” mechanism. Erickson et al.’s work suggests that technology should leverage, rather than mechanically reproduce, social norms. Finally, designers should remember that simple UI features are often sufficient to prevent misuse, as Grasso and Meunier’s experience shows.

3.5 Privacy Frameworks

Unlike other areas of HCI, there are few widely accepted frameworks for privacy, due to the elusiveness of privacy preferences and the technical hurdles of applying guidelines to specific cases. In this section, we discuss some of the frameworks that have been proposed to analyze and organize privacy requirements, and note the benefits and drawbacks of each (see Table 2).

Design frameworks relevant to HCI researchers and practitioners can be roughly grouped into three categories. These include guidelines, such as the aforementioned Fair Information Practices [230]; process frameworks, such as Jensen’s STRAP [170] or Bellotti and Sellen’s Questions Options Criteria (QOC) process [43]; and modeling frameworks, such as Jiang et al.’s Approximate Information Flows [172].

These frameworks are meant to provide guidance for analysis and design. However, it should be noted that few of these frameworks have been validated. By validation, we mean a process that provides evidence of the framework’s effectiveness in solving the design issues at hand by some metric, for example design time, quality of the overall design, or comprehensiveness of requirements analysis. In most cases, these frameworks were derived based on application practice in related fields or from the authors’ experiences.

This lack of validation partially explains why many frameworks have not been widely adopted. Indeed, case studies have been better received. Nevertheless, the issue of knowledge reuse in HCI is pressing [278] and accounts of single applications are not an efficient way of communicating knowledge. We believe that research on privacy can greatly benefit from general guidelines and methods, if they are thoroughly tested and validated, and if practitioners and researchers use them with an understanding of their performance and limitations. In fact, we suggest in the conclusion that the development of a privacy toolbox composed of several complementary techniques is one of the main

end-user-privacy-in-human-computer-interaction-v57.doc, Page 52 of 85
