
Offline, or Why Ubicomp Scares Me” [84]. Howard Rheingold observed that ubiquitous computing technologies “might lead directly to a future of safe, efficient, soulless, and merciless universal surveillance” [249].

One reason for these negative reactions was that PARC's ubicomp system was "all or nothing." Users had no control over how their information was shared with others, and there were no provisions for ambiguity. Furthermore, the system provided no feedback about what information was revealed to others. This raised concerns that a co-worker or boss could monitor a user's location by making repeated queries, without that user ever knowing.

A second important reason for these reactions lay in the way the ubiquitous computing project itself was presented. The researchers often talked about the technological underpinnings but had few compelling applications to describe. Thus, discussions often revolved around the technology rather than the value proposition for end-users. Underscoring this point, once researchers at PARC started describing their technology in terms of "invisible computing" and "calm computing," news articles appeared with more positive headlines, such as "Visionaries See Invisible Computing" [253] and "Here, There and Everywhere" [299].

Thinking about privacy from the perspective of the value proposition also helps to explain many of the recent protests against proposed deployments of Radio Frequency Identification (RFID) systems in the United States and in England [37]. From a retailer's perspective, RFID tags reduce the costs of tracking inventory and maintaining steady supply chains. From a customer's perspective, however, RFID tags are potentially harmful because they expose customers to the risk of surreptitious tracking without providing any benefit in return.

4.5.2 Models of Privacy Factors Affecting Acceptance

The lack of a value proposition in the privacy debate can be analyzed using "Grudin's Law." Informally, it states that when those who benefit from a technology are not the same as those who bear the brunt of operating it, the technology is likely to fail or be subverted [133]. The privacy corollary is that when those who share personal information do not benefit in proportion to the perceived risks, the technology is likely to fail.

However, a more nuanced view suggests that even a strong value proposition may not be sufficient to achieve acceptance of novel applications. Eventually, applications reach the hands of users and are accepted or rejected based on their actual or perceived benefits. HCI practitioners would benefit from reliable models of how privacy attitudes affect adoption. We see two aspects of understanding acceptance patterns: 1) a "static" view, in which an acceptance decision is made once, based on the information available at that time, and 2) a "dynamic" view, in which acceptance and adoption evolve over time. Next, we discuss two working hypotheses related to these two aspects of acceptance.

Static Acceptance Models

In a renowned article on technology credibility, Fogg and Tseng outlined three models of credibility evaluation: the binary, threshold, and spectral evaluation models [110]. Fogg and Tseng argued that these models helped explain how different levels of interest and knowledge affect how users perceive the credibility of a product, thus impacting
