et al., 2004; Kuhn et al., 1999; Briss et al., 2000; Tompa et al., 2004). For example, application of the algorithm developed by Briss et al. (2000) would characterize the level of evidence provided by a group of studies as “Insufficient”, “Sufficient”, or “Strong”.
Criteria for these algorithms concern the study design, the quality of the research (as determined using the review’s quality assessment tool), the consistency of the results, and the quantity of research. The GRADE Working Group (2004) also includes consideration of the direct applicability of the studies to a new setting of interest, the strength of association, whether a dose-response gradient was seen, whether all plausible confounders would have reduced the observed effect, and whether there was a reporting bias9.
In contrast, this review did not adopt an explicit algorithm at the outset. The reason was a lack of consensus in the OHS prevention field as to which synthesis algorithm was best. In addition, it was considered premature to base an algorithm on such a newly developed QA tool. Instead, this review synthesized a summary statement in the style of a traditional narrative review, which is customary for systematic reviews in this field (Am J Prev Med, 2000) and permissible in best-evidence syntheses (Slavin, 1995).
9 Reporting bias is also known as publication bias: the bias towards more positive results (i.e., results in the direction intended by the intervention) within a body of literature, resulting from the reluctance of researchers to write and submit manuscripts with null findings and the reluctance of editors to publish such manuscripts.
Effectiveness of Occupational Health & Safety Management Systems: A Systematic Review