than choice formats. Ratings indicate the intensity of preference for one profile relative to another, possibly including indifference, and rankings indicate the relative ordering of multiple alternative profiles.
Conjoint tasks can also include an opt-out alternative. In these cases, the opt-out alternative allows subjects to choose standard of care, current treatment, or no treatment rather than the hypothetical alternatives included in a conjoint task. When subjects choose the opt-out alternative in a specific conjoint task, researchers learn nothing about their relative preferences for the hypothetical alternatives presented in that task.
Therefore, while including an opt-out alternative may often provide a more realistic scenario for subjects to evaluate, it also introduces additional challenges in the design and analysis of the study. An alternative to including an opt-out alternative in each conjoint task is to include it in a separate question following each task: subjects who select alternative A in a forced-choice question are then offered a choice between A and opting out in a follow-up question. Including an opt-out question as a follow-up task increases the length and difficulty of the conjoint survey instrument, but it provides the researcher with a more complete set of preference data.
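The dual-response design described above can be made concrete with a small recoding sketch. This is a minimal illustration, not a prescribed analysis method; the function and field names are hypothetical.

```python
def recode_dual_response(forced_choice, keeps_choice):
    """Recode a forced-choice answer plus its opt-out follow-up.

    forced_choice: "A" or "B", the alternative picked in the conjoint task.
    keeps_choice: True if the respondent would still take that alternative
                  when offered an opt-out (e.g. current treatment) in the
                  follow-up question.

    Returns both answers, preserving the relative A-vs-B preference
    even when the respondent ultimately opts out.
    """
    return {
        "forced": forced_choice,
        "final": forced_choice if keeps_choice else "opt-out",
    }

# A respondent prefers A over B but would decline both in practice:
print(recode_dual_response("A", keeps_choice=False))
# {'forced': 'A', 'final': 'opt-out'}
```

The point of the recoding is that the forced-choice answer is never lost: the researcher retains the A-vs-B comparison even for respondents who would take neither alternative.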
vi. Instrument design
Conjoint data collection instruments are surveys, and the development of a conjoint data collection instrument should follow good survey research principles. Therefore, it is important to ask, “Was the data collection instrument designed appropriately?”
It is important to elicit subject-specific health and demographic information to test for systematic differences in preferences based on these characteristics. Patients’ health status may influence their willingness to pay in a systematic way and may therefore reduce the generalizability of the findings (32).
Sample size calculation is a challenge in conjoint analysis. The minimum sample size depends on a number of criteria, including the question format, the complexity of the choice task, the desired precision of the results, and the need to conduct subgroup analyses (28). Researchers commonly apply rules of thumb, such as that proposed in (33), which suggests that 300 observations per attribute level are required. Simulation techniques, which have been used in EQ-5D valuation work, could potentially be applied here (34). Sample size estimation for conjoint analysis requires further work, because it is an important criterion for grant-awarding bodies and ethics committees.
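The 300-observations-per-level rule of thumb mentioned above can be turned into a rough respondent count. The sketch below is a simplified illustration, assuming that the levels of the largest attribute appear with roughly equal frequency across the design; the function name and parameters are our own, not part of any cited rule.

```python
import math

def min_respondents(max_levels, tasks_per_respondent, alts_per_task,
                    obs_per_level=300):
    """Rule-of-thumb minimum sample size for a choice-based conjoint design.

    If each respondent completes tasks_per_respondent tasks with
    alts_per_task alternatives each, and the max_levels levels of the
    largest attribute appear equally often, each respondent contributes
    about tasks * alternatives / max_levels observations per level.
    """
    obs_per_respondent_per_level = (
        tasks_per_respondent * alts_per_task / max_levels
    )
    return math.ceil(obs_per_level / obs_per_respondent_per_level)

# Largest attribute has 4 levels; 12 tasks with 2 alternatives each:
# each respondent yields 12 * 2 / 4 = 6 observations per level,
# so 300 / 6 = 50 respondents under this heuristic.
print(min_respondents(4, 12, 2))  # → 50
```

Such heuristics give only a starting point; as the text notes, the desired precision and any planned subgroup analyses can push the required sample well above this figure.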
Because conjoint tasks are often complex and cognitively burdensome, measurement error is a serious concern in conjoint studies (35). Measurement error may be introduced by the order in which attributes are presented, the question order, or the number of attributes and levels. Work by Kjaer and colleagues (36), for example, suggests that participants show differential sensitivity to price depending on where the cost attribute appears in the profile. Although it is probably best not to randomise the order of attributes across conjoint tasks within a survey, some of these issues can be addressed by randomising the order of the questions themselves.
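The recommendation above, randomising the order of questions while keeping attribute order fixed, can be sketched as a per-respondent shuffle. This is an illustrative implementation only; the function name and seeding scheme are assumptions.

```python
import random

def randomised_task_order(tasks, respondent_seed):
    """Return a per-respondent random ordering of conjoint tasks.

    Attribute order within each task is left untouched (randomising
    attribute order across tasks is discouraged); only the sequence in
    which whole tasks are presented varies between respondents. Seeding
    by respondent makes each ordering reproducible for audit.
    """
    rng = random.Random(respondent_seed)
    order = list(tasks)  # copy so the master task list is not mutated
    rng.shuffle(order)
    return order

tasks = ["task1", "task2", "task3", "task4"]
print(randomised_task_order(tasks, respondent_seed=42))
```

Randomising at the task level spreads any question-order effect evenly across the design, so it averages out in the pooled estimates rather than biasing particular attributes.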
Finally, it is important to pilot the final questionnaire with respondents, using both small cognitive debriefing interviews (n ≈ 5-10) and a larger quantitative pilot (n ≈ 30). The cognitive debriefing will identify areas of misunderstanding or common errors and whether the survey is too lengthy. It will also test whether people
ISPOR Conjoint Analysis in Health Task Force Report