8 overestimation biases (i.e., excluding nonsignificant effects from calculations of effect size estimates, failure to adjust for small sample bias, failure to separate studies using single-group pre-post designs vs. control group designs, using unweighted average effect sizes, analyzing biased partial samples that reflect treatment dropout and research attrition, researcher allegiance bias, publication bias, and wait-list control group bias). Wherever possible, evidence regarding the magnitude of these biases is presented, and methods for addressing these biases separately and collectively are discussed. Implications of the meta-analytic evidence on psychotherapy for the effect sizes of other psychological interventions are also considered.
Hunsley, J. & Lee, C. M. (2007). Professional Psychology: Research and Practice, 38(1), 21-33.
To examine whether the results of effectiveness studies match those obtained for efficacy studies on the same treatments, we conducted a focused review of the published treatment effectiveness literature. A literature search yielded 35 effectiveness studies for adult disorders (N = 21) and child and adolescent disorders (N = 14). A comparison of data from these studies with benchmarks from recent reviews of efficacy trials revealed treatment completion rates comparable with those found in the efficacy benchmarks. Improvement rates in effectiveness studies were comparable with those reported in randomized clinical trials of treatment efficacy. Despite methodological limitations in many effectiveness studies, these initial data provide encouraging support for the transportability to clinical settings of treatments with established efficacy.
Nathan, P. E. (2007). In: The art and science of psychotherapy. S. G. Hofmann & Weinberger (Eds.); New York, NY, US: Routledge/Taylor & Francis Group, pp. 69-8
Many psychologists have observed that the practice of psychotherapy remains surprisingly unaffected by the spate of psychotherapy research conducted by psychologists during the past half century. Of the several explanations that have been offered, the continuing controversy among researchers over the relative worth of the efficacy and effectiveness models of psychotherapy research looms large. If psychotherapy researchers, after more than 50 years of trying, cannot agree on how best to assess the worth of a given therapeutic strategy, the logic of this explanation goes, is it any wonder that clinicians do not put much faith in therapy research outcomes and that many question the concept of evidence-based treatments? This chapter looks carefully at the data on this issue, past and present, in an effort to understand both why practitioners to date have largely ignored therapy research findings and whether and how they might be induced not to do so in the future.
Wampold, B. E., Minami, T., Tierney, S. C., Baskin, T. W. & Bhati, K. S. (2005). Journal of Clinical Psychology, 61(7), 835-854.
The logic of the randomized double-blind placebo control group design is presented, and problems with using the design in psychotherapy are discussed. Placebo effects are estimated by examining clinical trials in medicine and psychotherapy. In medicine, a recent meta-analysis of clinical trials with treatment, placebo, and no treatment arms was conducted (Hróbjartsson & Gøtzsche, 2001), and it was concluded that placebos have small or no effects. A re-analysis of those studies, presented here, shows that when disorders are amenable to placebos and the design is adequate to detect the effects, the placebo effect is robust and approaches the treatment effect. For psychological disorders, particularly depression, it has been shown that pill placebos