Improving Primary Care Pharmaceutical Use
live births in children under 5), or a performance score (e.g., a knowledge index). If the outcome was measured as a percentage, effect size was computed as the relative gain in the intervention group, that is, the net difference between the percent improvement in the intervention group and the percent improvement in the comparison group.4 If the outcome was a mortality rate or a performance score, the pre-to-post changes were converted to percent improvements by dividing the absolute changes by baseline values.5 In methodologically acceptable studies without a control group (i.e., time series/repeated measures), effect size was calculated as the percent improvement from the stable pre-intervention level to the stable post-intervention level; short-term shifts immediately before or after an intervention were discounted as transitory effects.
We were able to identify 59 interventions that met our study inclusion criteria. The characteristics of these interventions and of the key outcomes that they measured are shown in Table 1. These included 36 interventions (61.0%) that were classified as having a methodologically acceptable study design, and 23 (39.0%) whose design did not meet minimum criteria. Of the studies with acceptable quality, 13 were RCTs, 12 had pre‑post designs with control groups, and 11 employed time series or repeated measures designs. Fourteen studies of unacceptable quality were pre‑post designs that failed to include a control group, while 9 were post‑only studies, 5 with and 4 without control groups.
4 effect size = (%POST - %PRE)_intervention - (%POST - %PRE)_control
5 effect size = ((POST - PRE) / PRE)_intervention - ((POST - PRE) / PRE)_control
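The effect-size definitions in footnotes 4 and 5 can be sketched as two small functions. This is a minimal illustration, not code from the study; the function names and the example values in the usage note are hypothetical.

```python
def effect_size_percentage(pre_i, post_i, pre_c, post_c):
    """Footnote 4: for outcomes measured as percentages, the effect size is
    the net difference between the percentage-point improvement in the
    intervention group and that in the comparison (control) group."""
    return (post_i - pre_i) - (post_c - pre_c)


def effect_size_rate(pre_i, post_i, pre_c, post_c):
    """Footnote 5: for mortality rates or performance scores, absolute
    pre-to-post changes are first divided by baseline values to express
    them as percent improvements, then differenced across groups."""
    return (post_i - pre_i) / pre_i - (post_c - pre_c) / pre_c
```

For example, if correct prescribing rose from 40% to 60% in the intervention group while rising from 42% to 47% in the control group, `effect_size_percentage(40, 60, 42, 47)` yields a net gain of 15 percentage points.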
Since the beginning of the decade there has been a sharp rise both in the frequency of interventions to improve the use of medicines and in the percentage of these interventions based on acceptable study designs. Sixteen interventions (27.1%) were reported prior to 1991, while 43 (72.9%) have been reported since that time. The majority of the earlier interventions (56.3%) were based on study designs of unacceptable quality; over two-thirds of the more recent interventions (68.2%) used methodologically sound research designs.
Much of the reported experience in improving use of medicines in the developing world has come from