Hong et al. proposed using risk analysis to tackle privacy issues in ubicomp applications. Their process enhances standard risk analysis with a set of social and technical questions to drive the analysis, as well as a set of heuristics to drive risk management. The analysis questions, shown in Table 7, are designed to elicit potential privacy risks in ubicomp applications. The authors propose a semi-quantitative risk evaluation framework, suggesting that designers act upon each identified risk if the standard “C < LD” inequality is satisfied.13 To evaluate the components of this formula, a set of risk management questions, listed in Table 8, is used.
One important feature of Hong et al.’s framework is that it requires the designer to evaluate the motivation and cost of a potential attacker who would misuse personal information. The economic aspect of such misuse is important because it can help in devising a credible risk evaluation strategy and mirrors the implicit assumptions of analyses performed by regulatory entities. Although risk analysis is a fundamental component of security engineering, many aspects of design in this domain cannot be easily framed in a quantitative manner, and a qualitative approach may be necessary. Moreover, quantitative approaches may prove misleading because they fail to account for user perceptions and opinions.
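Hong et al.’s “C < LD” heuristic can be illustrated with a minimal sketch. The `Risk` class, its field names, and the example values below are illustrative assumptions, not taken from the original framework; only the decision rule itself (act when protection cost is below expected loss) reflects the text.

```python
# Sketch of the semi-quantitative "C < LD" heuristic: act on a risk
# when the cost of adequate protection (C) is lower than the expected
# loss, i.e. likelihood of disclosure (L) times damage (D).
# Class and example values are illustrative, not from the paper.

from dataclasses import dataclass


@dataclass
class Risk:
    name: str
    protection_cost: float  # C: cost of adequate protection
    likelihood: float       # L: probability of unwanted disclosure (0..1)
    damage: float           # D: damage caused by such a disclosure

    def worth_mitigating(self) -> bool:
        """True when C < L * D, i.e. protecting is cheaper than the
        expected loss from the disclosure."""
        return self.protection_cost < self.likelihood * self.damage


# Hypothetical ubicomp risks for illustration only.
risks = [
    Risk("location trace leaked to employer", 500.0, 0.10, 10_000.0),
    Risk("presence status seen by strangers", 800.0, 0.05, 1_000.0),
]

for r in risks:
    print(r.name, "->", "mitigate" if r.worth_mitigating() else "accept")
```

The risk management questions in Table 8 would, in this framing, supply the estimates for `protection_cost`, `likelihood`, and `damage`.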
An interesting qualitative approach to risk analysis for ubicomp is provided by Hilty et al. They suggest a risk analysis process based on risk screening and risk filtering. In the screening phase, an expert panel identifies relevant risks for a given application (thus drawing directly on the experts’ experience, instead of checklists like Hong et al.’s).
In the filtering phase, experts prioritize risks according to several criteria derived from the precautionary principle, according to which risk management should be “driven by making the social system more adaptive to surprises”. They suggest filtering risks through a qualitative prioritization based on the following criteria:
Socioeconomic irreversibility (Is it possible to restore the status before the effect of the technology has occurred?)
Delay effect (Is the time span between the technological cause and the negative effect long?)
Potential conflicts, including voluntariness (Is exposure to the risk voluntary?) and fairness (Are there any externalities?)
Burden on posterity (Does the technology compromise the possibilities of future generations to meet their needs?)
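Hilty et al.’s filtering phase can be sketched as a simple prioritization over the four criteria above. The field names, the boolean modeling of each criterion, and the count-based ordering are all illustrative assumptions; the paper describes a qualitative expert judgment, not an algorithm.

```python
# Sketch of a filtering step in the spirit of Hilty et al.: each
# screened risk is checked against the four precautionary criteria and
# prioritized by how many apply. Modeling choices are illustrative.

from dataclasses import dataclass


@dataclass
class ScreenedRisk:
    name: str
    irreversible: bool       # socioeconomic irreversibility
    delayed_effect: bool     # long delay between cause and effect
    involuntary: bool        # exposure not voluntary, or unfair externalities
    burdens_posterity: bool  # compromises future generations' needs

    def priority(self) -> int:
        """Count how many precautionary criteria apply (0-4)."""
        return sum([self.irreversible, self.delayed_effect,
                    self.involuntary, self.burdens_posterity])


# Hypothetical screened risks for illustration only.
screened = [
    ScreenedRisk("always-on sensing in public space", True, True, True, False),
    ScreenedRisk("opt-in activity sharing", False, False, False, False),
]

for r in sorted(screened, key=ScreenedRisk.priority, reverse=True):
    print(r.name, "-> criteria met:", r.priority())
```

In practice the expert panel would weigh these criteria qualitatively rather than merely counting them; the count stands in for that judgment here.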
The authors used this framework to analyze the social and technical risks of ubicomp technologies, including their social and environmental impact. However, while their heuristics are adequate for analyzing large-scale social risks, they may not be adequate for risks arising at the interpersonal level. Furthermore, even qualitative risk analysis may be insufficient, because security and privacy design decisions interact with issues that cannot be modeled as risks, both internal (e.g., application usefulness) and external (e.g., regulatory requirements), as pointed out in work by Hudson and Smith and
13 C = the cost of adequate protection; L = the likelihood that an unwanted disclosure of personal information occurs; D = the damage caused by such a disclosure.