and least when 2 therapies are compared (Mdn PS_TT = .56). Results suggested that there is more to therapeutic success than placebo effects (Mdn PS_TP = .66, where T = therapy and P = placebo) and that placebo is typically better than do-nothing control conditions (Mdn PS_PC = .62). The present exceptionally large study, controlling for dependencies and confounding variables, may put to rest the question of the superiority of therapy to placebo. It also appears that the effect of therapy is typically at least average in strength among the effects of independent variables in psychology.
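The probability of superiority (PS) reported above and the standardized mean difference (Cohen's d) discussed in the later abstracts are interchangeable under a normality assumption: PS = Φ(d/√2), where Φ is the standard normal CDF. A minimal sketch of this conversion in Python (stdlib only; the function names are illustrative, not from any of the cited studies):

```python
from math import sqrt
from statistics import NormalDist

_Z = NormalDist()  # standard normal

def ps_from_d(d: float) -> float:
    """Probability of superiority from Cohen's d (assumes equal-variance normals)."""
    return _Z.cdf(d / sqrt(2))

def d_from_ps(ps: float) -> float:
    """Cohen's d implied by a probability of superiority, under the same assumption."""
    return _Z.inv_cdf(ps) * sqrt(2)

# The d = 0.80 absolute-efficacy estimate corresponds to PS ~= .71,
# while the median PS_TP of .66 implies a smaller d of roughly 0.58.
print(round(ps_from_d(0.80), 3))
print(round(d_from_ps(0.66), 3))
```

This makes the two metrics directly comparable: a therapy-versus-placebo PS of .66 is a noticeably smaller effect than the often-cited d of 0.80 for therapy versus no treatment.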
Wilson, D. B. & Lipsey, M. W. (2001). Psychological Methods, 6(4), 413-429.
A synthesis of 319 meta-analyses of psychological, behavioral, and educational treatment research was conducted to assess the influence of study method on observed effect sizes relative to that of substantive features of the interventions. An index was used to estimate the proportion of effect size variance associated with various study features. Study methods accounted for nearly as much variability in study outcomes as characteristics of the interventions. Type of research design and operationalization of the dependent variable were the method features associated with the largest proportion of variance. The variance attributable to sampling error was about as large as that associated with the features of the interventions studied. These results underscore the difficulty of detecting treatment outcomes, the importance of cautiously interpreting findings from a single study, and the importance of meta-analysis in summarizing results across studies.
Staines, G. L. (2008). Review of General Psychology, 12(4), 330-343.
Because the efficacy of behavioral interventions is central to applied psychology, the relative merits of competing approaches to an intervention are important. Many comparative studies examine the differential outcomes of alternative methods of psychotherapy. This paper addresses the issue of impact differences among rival intervention methods by focusing on treatment outcome research that emphasizes the relative (or comparative) efficacy of different psychotherapies. The paper has 4 components. First, it explores the concept of relative efficacy. Second, it reviews the extensive evidence on relative efficacy, which is generally consistent with the null hypothesis. Third, it offers a 3-part explanation of the negative evidence on relative efficacy: (a) a statistical argument about how relative efficacy is bound by a modest upper limit; (b) a research design argument about how relative efficacy studies are confounded by multiple factors, which make it difficult to demonstrate differences in treatment effects; and (c) a theoretical argument about how therapists' contributions to treatment outcomes depend more on their clinical abilities than the therapy methods they implement. The final section of the paper outlines questions for future research.
Staines, G. L. & Cleland, C. M. (2007). Review of General Psychology, 11(4), 329-347.
Meta-analytic estimates of the absolute efficacy of psychotherapy indicate an effect size of approximately 0.80. However, various biases in primary and meta-analytic studies may have influenced this estimate. This study examines 4 nonsystematic biases that increase error variance (i.e., nonrandomized designs, methodological deficiencies, failure to use the study as the unit of analysis, and violations of homogeneity), 4 underestimation biases that primarily concern psychometric issues (i.e., unreliability of outcome measures, failure to report nonsignificant effect sizes, nonoptimal composite outcome measures, and nonstandardized outcome measures), and