
provided a clear statement of these specific goals. The general goals are clear (although they seem to be growing in number), but the specific inferential or risk-analytic goals are as yet unclear. Stratification does not automatically improve power; in fact, it can reduce it. The Agency would need to articulate the argument for stratifying by vulnerability, or, for that matter, by any other criterion.
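A minimal sketch, using entirely hypothetical numbers, of one way stratification can cost rather than gain precision: when allocation across strata is disproportionate and the strata are unrelated to the quantity being estimated, the stratified estimate of an overall mean can be noisier than a simple random sample of the same total size.

```python
# Illustrative simulation only; population, strata, and sample sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population of site-level concentrations, unrelated to the stratum labels.
N = 20_000
population = rng.lognormal(mean=0.0, sigma=1.0, size=N)
stratum = rng.integers(0, 2, size=N)          # two arbitrary strata, roughly 50/50

n_total = 200
reps = 2_000
srs_means, strat_means = [], []

for _ in range(reps):
    # Simple random sample of n_total sites.
    srs = rng.choice(population, size=n_total, replace=False)
    srs_means.append(srs.mean())

    # Stratified sample with deliberately disproportionate allocation:
    # 180 sites from stratum 0, 20 from stratum 1, re-weighted by stratum share.
    est = 0.0
    for s, n_s in [(0, 180), (1, 20)]:
        pool = population[stratum == s]
        sample = rng.choice(pool, size=n_s, replace=False)
        est += (pool.size / N) * sample.mean()
    strat_means.append(est)

print("SRS        standard error:", np.std(srs_means))
print("Stratified standard error:", np.std(strat_means))   # typically larger here
```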

The Agency’s background document asserts that we do not know which variables are likely to be the best indicators of vulnerability, and the Agency has suggested that the effectiveness of different criteria for predicting vulnerability will likely vary from chemical to chemical. The background document also suggests that two independent criteria of vulnerability be considered simultaneously, arguing that this would double the chances of obtaining reasonable and interpretable results and give risk managers twice as much latitude in designing mitigation strategies. Such an approach might have those advantages, but the cost would be a squared number of strata and a correspondingly larger sample size to maintain the desired data quality. Some SAP members considered it unlikely that this would be a workable approach in this context.
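A minimal sketch of that arithmetic, again with purely hypothetical numbers: holding the per-stratum sample size fixed, crossing two criteria multiplies the number of strata, and the total number of sampled sites grows in proportion.

```python
# Hypothetical illustration of the cost of crossing two stratification criteria.
# All numbers below are assumptions chosen only to show the arithmetic.
n_per_stratum = 30       # observations assumed necessary per stratum
levels_a = 4             # vulnerability classes under criterion A
levels_b = 4             # vulnerability classes under criterion B

one_criterion = levels_a * n_per_stratum               # 4 strata  -> 120 sites
two_criteria = levels_a * levels_b * n_per_stratum     # 16 strata -> 480 sites

print(one_criterion, two_criteria)
```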

It was suggested that a pilot study (or perhaps another pilot study) might be of value to look for a good vulnerability index, as well as to test other design ideas. If scientific knowledge on the subject is as spotty as the Agency background document suggests, then the choice of the best variables, and of how they should be combined into a vulnerability index, is surely an empirical matter rather than one that can be decided by an expert panel without specific empirical study of the question. However, some SAP members noted that the extensive past process studies and deterministic models developed by USEPA, USDA, USGS, and others can be used and should be consulted before giving up on formulating a vulnerability index. Even so, it would be prudent to test any such formulation against the real world in a pilot study. It was also noted that it was not clear how past studies may have been used, in effect as pilot studies, to guide the current design; ensuring adequate use and consideration of extant data is always of benefit.

A pilot study can be simple to design, based on random sampling with as many variables as possible, including geographic variables, chemical-specific variables, transport/fate variables, and other ancillary variables. Exploratory data analysis, including different kinds of discriminant analysis (traditional linear methods as well as non-parametric methods), should allow a much more refined design for the full monitoring effort. Discriminant analysis can specify a vulnerability index and quantify how useful it would be. Of course, using a formal statistical analysis reflects the principle that if the work is worth doing, it is worth doing right. A pilot study would likely have many other practical benefits as well, such as providing an opportunity to work out technical details and to test reasonable hypotheses that could simplify the entire effort. The result may, of course, be that there is no reasonably general definition of vulnerability on which sampling might usefully be stratified; but it would be very important to know that.
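A minimal sketch of the kind of exploratory analysis described above, using placeholder data and variable names (none of them from the actual study): fit a traditional linear discriminant and a simple non-parametric alternative to pilot data, use cross-validated accuracy to gauge how useful a candidate vulnerability index would be, and read a candidate index off the linear discriminant coefficients.

```python
# Illustrative only; the pilot data here are simulated placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Placeholder pilot data: rows are sampled sites, columns are candidate
# predictors (geographic, chemical-specific, transport/fate variables, ...).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                        # 200 sites, 6 predictors
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200)) > 0
y = y.astype(int)                                    # 1 = "vulnerable" site

lda = LinearDiscriminantAnalysis()
knn = KNeighborsClassifier(n_neighbors=15)           # a simple non-parametric method

for name, model in [("linear discriminant", lda), ("k-nearest neighbors", knn)]:
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: cross-validated accuracy {acc.mean():.2f}")

# The fitted LDA coefficients define one candidate vulnerability index:
# a linear combination of the predictors that best separates the two classes.
lda.fit(X, y)
print("candidate index weights:", lda.coef_.round(2))
```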

Once the nature of the stratification, if any, has been determined and the specific inferential or risk-analytic goals for the study have been established, the calculations can be carried out to

