information gathered from social networking web sites can be used to trick people into giving up personal information, such as passwords. They showed that individuals are more likely to fall for phishing attacks if the sender appears to be from their existing social network. These attacks are known as spear-phishing, or context-aware phishing. By incorporating sender information mined from a social networking site, they showed that scam emails were far more effective at deceiving their targets. Two other examples of research that would fall into this category include convincing people not to abuse others' trust (for example, by cyber-stalking) and persuading people that simple steps can protect their privacy online.
Here, one challenge is that the very behaviors under scrutiny are not stable, but evolve with the adoption of new technologies. For example, the surge of identity theft and the enactment of legislation countering it suggest that the public is slowly, if painfully, becoming aware of the risks of combining personal information from multiple data sources. On the other hand, the availability of personal information from multiple sources has transformed the previously difficult task of constructing individuals' profiles into a fairly trivial activity. It is not uncommon for people to "google" potential dates and prospective employees and find past postings on message boards, photographs, and, with some effort, information on political affiliations, social networks, criminal records, and financial standing.
Furthermore, people's willingness to ultimately accept these technologies despite their intrinsic risks shows that HCI researchers should not trust stated preferences about unfamiliar technologies, but should instead analyze how the technologies are used in practice. We discuss this point further below in Section 4.5 in relation to acceptance.
To summarize, we see an increasing role for "behavioral" research on privacy in HCI. The cost of this kind of research is higher than that of traditional survey-based or even lab-based experiments. However, we are convinced that the nature of the issues revolving around privacy demands this additional expense if the goal is to obtain credible and generalizable results.
4.3 Developing a “Privacy HCI Toolbox”
A third “grand challenge” is in providing more support to guide the development of privacy-sensitive systems. Design teams often have to grope through a design space, relying primarily on their intuition to guide them. What is needed are better methods, tools, guidelines, and design patterns to help teams iteratively design, implement, and evaluate applications.
With respect to design, we believe there would be great value in developing an organic privacy toolbox. This privacy toolbox would be a catalog of privacy design methods and models, indicating the applications and social settings for which each is most effective. Practitioners could then choose among these tools with a competent understanding of their contributions and limitations. We would like to stress that we are not proposing to develop a Software Engineering 'methodology'; our proposal is simply a coherent collection of resources to assist practitioners.
An initial catalog of design techniques for privacy and HCI would be relatively easy to devise. For example, we mentioned above that the FIPS are particularly fit for large