Since groundwater monitoring has become a fact of business life at landfills around the nation, what can an operator or owner do to help control these new costs? One significant way is to ensure the monitoring's accuracy, particularly through the use of sound statistical analysis.
In addition, the choice of a statistical approach can make more than a 50 percent difference in long-term monitoring costs, according to a recent study.
Statistical analysis services cost between 10 and 15 percent of total groundwater monitoring costs and rarely exceed 20 percent. Field sampling, analytical laboratory work and regulatory reporting comprise the majority of monitoring costs.
For example, analytical laboratory work for detection monitoring, a Subtitle D requirement, usually costs between $350 and $400 per well per sampling event. Sampling and reporting costs raise the total to approximately $700 per well. However, if re-testing is required because an inappropriate statistical test was used, the sampling, lab and reporting costs can exceed $2,000. If the facility is forced into assessment monitoring, the costs can exceed $1,900 per sample.
By contrast, statistical analysis for a detection monitoring program, which may keep a facility out of retesting or assessment monitoring, usually costs between $125 and $175 per monitoring well per reporting period. The cost of initial evaluation and permit preparation, however, may often exceed these amounts.
Specialized groundwater statistical software is now available to help keep your monitoring costs down. This type of software, such as IDT's Groundwater Statistical Analysis System, can automate statistical analyses to help ensure that the appropriate statistical tests are being used.
No single statistical approach will minimize groundwater monitoring costs at all sites. When choosing a statistical approach, look for one that maximizes statistical flexibility when data characteristics change and minimizes retesting, site-wide false positive rates and sample size requirements.
Site Hydrogeology

As the quest begins to control groundwater monitoring costs, make sure that the hydrogeologic assumptions accurately reflect the site's hydrogeology. Misunderstood assumptions are the number one cause of skyrocketing groundwater monitoring costs.
For example, in some cases interwell statistics have indicated a release from a facility even when waste has not yet been placed. This demonstrates why, if there is spatial variation in a site's hydrogeology, an intrawell statistical approach may be more appropriate than an interwell approach. However, it can be difficult to convince regulators of the need for an intrawell approach, especially if the monitoring program began after the waste was placed.
Intrawell statistics compare a compliance well's historical data to recent observations from the well. This eliminates the possibility that spatial variation between the upgradient and downgradient wells would lead to the incorrect conclusion that a release has occurred. It assumes, however, that the historical data at the wells have not been impacted by the facility, a fundamental regulatory concern about this approach.
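To make the intrawell idea concrete, here is a minimal sketch in Python. The data are hypothetical, and the normal-theory upper prediction limit shown is just one common way to compare a well's new result to its own history; it is not necessarily the test any particular permit requires.

```python
import numpy as np
from scipy import stats

def intrawell_upper_prediction_limit(background, alpha=0.05):
    """Normal-theory upper prediction limit for one future sample,
    built from a well's own (assumed unimpacted) historical data."""
    n = len(background)
    mean = np.mean(background)
    sd = np.std(background, ddof=1)
    t = stats.t.ppf(1 - alpha, df=n - 1)
    return mean + t * sd * np.sqrt(1 + 1 / n)

# Hypothetical historical chloride results (mg/L) at one compliance well
background = [12.1, 11.8, 13.0, 12.4, 11.5, 12.9, 12.2, 13.1]
upl = intrawell_upper_prediction_limit(background)

new_result = 12.7  # latest sampling event at the same well
print(f"UPL = {upl:.2f} mg/L; exceedance: {new_result > upl}")
```

Because the limit is built only from this well's own record, spatial differences between upgradient and downgradient wells never enter the comparison.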
If the historical data have already been impacted by the facility, a future impact cannot be detected. At older facilities, where monitoring wells were installed after the waste was placed, it is often difficult to demonstrate that the historical data are clean. For proof, these facilities should use hydrogeologic information supplemented with statistical evidence. This information can illustrate the significant natural spatial variation in the site's hydrogeology.
Spatial variation has probably occurred if the upgradient wells show statistically significant differences from one another. When this happens, an interwell approach can lead to faulty conclusions about water quality at the facility's downgradient wells. Before an intrawell approach can be used, however, the facility should screen the historical data at the compliance wells to ensure that only unimpacted data are used to develop each compliance well's background standard. Statistical approaches such as interwell limit-based analyses and VOC and trend tests can assist the screening process.
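One simple screen of the kind mentioned above is a trend test on the historical record. A sketch, using Kendall's tau against sampling order (the core of a Mann-Kendall-style trend check) on hypothetical data:

```python
import numpy as np
from scipy import stats

# Hypothetical historical results, in sampling order, at a compliance well
history = [8.2, 8.5, 8.1, 8.9, 9.4, 9.1, 9.8, 10.2]
time_index = np.arange(len(history))

# A significant positive tau suggests an upward trend, so the record
# may not be clean enough to serve as intrawell background.
tau, p_value = stats.kendalltau(time_index, history)
print(f"tau = {tau:.2f}, p = {p_value:.3f}")
if tau > 0 and p_value < 0.05:
    print("Possible upward trend; screen these data before use as background.")
```

A record that passes this screen (and any interwell limit-based checks) is a stronger candidate for building the well's background standard.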
This approach, which reduces re-testing and prevents facilities from conducting unnecessary assessment monitoring, has received regulatory approval in several states including California and Colorado.
Minimizing False Positives

Minimizing site-wide false positives is another way to control groundwater monitoring costs. In the original EPA guidance as well as in federal and most state regulations, false positive rates are considered on a test or individual comparison basis. However, facilities need to focus on the site-wide false positive rate, which is the chance of finding a statistical false positive result in a regulatory reporting period.
Site-wide false positive rates, which are much higher than the individual-test false positive rates, increase with each statistical test performed. For example, if an interwell statistical test is run on only 10 constituents at a 5 percent false positive rate per constituent, the site-wide false positive rate will be approximately 40 percent.
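The arithmetic behind that 40 percent figure is straightforward if the individual comparisons are assumed independent: the chance of at least one false positive among k tests is one minus the chance that all k pass.

```python
# Site-wide (family-wise) false positive rate for k independent tests,
# each run at individual false positive rate alpha.
def site_wide_fpr(alpha, k):
    return 1 - (1 - alpha) ** k

print(f"{site_wide_fpr(0.05, 10):.1%}")  # 10 constituents -> 40.1%
print(f"{site_wide_fpr(0.05, 50):.1%}")  # 10 constituents x 5 wells -> 92.3%
```

The second line shows why the problem compounds quickly: the same ten constituents tested across five compliance wells push the site-wide rate above 90 percent.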
Consequently, many facilities have a better than 50 percent chance that at least one false positive will occur in each reporting period. The site-wide false positive rate is critical. If just one statistically significant difference is found, then retesting or assessment monitoring will be required.
To minimize site-wide false positive rates, reduce the number of constituents that are statistically analyzed under detection monitoring. Federal Subtitle D regulations don't require every Appendix I constituent to be statistically analyzed. In fact, for detection monitoring, EPA regulators realize that it may be better to statistically analyze a subset of the inorganic and organic Appendix I constituents.
Subsets should be selected based on prior monitoring results, local hydrogeology and leachate characteristics. Some state regulatory agencies, such as California's regional water quality boards, have shortened lists of inorganic parameters that need to be statistically analyzed.
Composite analyses also can reduce site-wide false positive rates for VOC analyses. Because of the high proportion of non-detects commonly found in VOC data, composite analyses such as Poisson-based limits or California's screening method usually are more appropriate for VOCs. However, Poisson-based limits must be applied carefully: a recent EPA review criticized a commonly used formulation of the Poisson limits because, in certain circumstances, it can produce unrealistic conclusions.
Reducing the false positive rate of the individual tests is another way to reduce site-wide false positive rates. However, this also will increase the false negative rate, which can be offset by increasing the background sample size.
Statistical Power

Maximizing statistical power for a given sample size is the third step to controlling costs. The power of a statistical test is its ability to detect a "true" difference or change.
Several factors can affect the power of a test; not all statistical tests have the same power under the same circumstances. Power is primarily determined by the statistical sample size, i.e., the number of analytical results. For a given sample size, parametric tests usually have more power than nonparametric tests. Consequently, when the data are normally distributed, a parametric test is most commonly used for groundwater monitoring, especially when sample sizes are limited.
However, groundwater quality data usually don't fit a normal or log-transformed-normal distribution when rigorous normality tests such as the Shapiro-Wilk or Shapiro-Francia tests are applied. At this point, there are two common reactions: proceeding with the parametric analysis anyway, which will yield unpredictable results, or switching to a nonparametric analysis, which has much lower power for a given sample size than the parametric test would have if the data were normally (or transformed-normally) distributed.
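A quick sketch of this kind of normality check, using the Shapiro-Wilk test on hypothetical, deliberately skewed concentration data (the seed and distribution parameters are illustrative only):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical concentration data: lognormal, as groundwater data often are
data = rng.lognormal(mean=1.0, sigma=1.0, size=50)

stat_raw, p_raw = stats.shapiro(data)          # test the raw data
stat_log, p_log = stats.shapiro(np.log(data))  # test log-transformed data
print(f"raw: p = {p_raw:.4f}; log-transformed: p = {p_log:.4f}")
# A small p-value rejects normality; here the raw data fail badly,
# while the log transform brings the data much closer to normal.
```

In practice this check is rerun each reporting period, since a distribution that passed last year may fail once more data accumulate.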
Another option is to use a family of transforms which increases the chances of transforming the data into a normal distribution for parametric analysis. This method increases the power of the test for a given false positive rate and sample size so that additional sampling, expensive retests and assessment monitoring can be avoided. However, since using transforms is a complex procedure, specialized statistical software is often necessary for the task.
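The article does not name the family of transforms, but a Box-Cox power family is a common choice for this purpose; a hedged sketch on hypothetical skewed data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
data = rng.lognormal(mean=2.0, sigma=0.6, size=40)  # hypothetical, skewed

# Box-Cox searches a family of power transforms for the exponent (lambda)
# that makes the data most nearly normal; it requires strictly positive data.
transformed, lam = stats.boxcox(data)

_, p_before = stats.shapiro(data)
_, p_after = stats.shapiro(transformed)
print(f"lambda = {lam:.2f}; Shapiro p before = {p_before:.3f}, after = {p_after:.3f}")
```

If a lambda can be found that passes the normality test, the more powerful parametric limits can then be applied to the transformed data.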
Ensuring Flexibility

Another way to control costs is to develop a site-specific statistical selection process, which incorporates the possibility of various data changes over the long term. For example, percentage of non-detects, data distributions or equality of variances can dramatically change as groundwater monitoring programs mature.
Many facilities do not plan for possible changes in data distributions or the percentage of non-detects and base their statistical test on the limited data currently available. If a facility does not adjust its statistical approach when data characteristics change, retesting and assessment monitoring costs will escalate.
Therefore, instead of proposing one statistical test based solely on the limited data available at the time a permit is submitted, propose a decision logic for choosing the most appropriate statistical approach based on the current characteristics of the data. Then, at each reporting period, review the data characteristics and select the most appropriate statistical test based on the permit decision logic. If the decision logic remains consistent, it will not matter if the test changes. Again, software may ease the implementation of this concept.
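A decision logic of this kind can be sketched as a simple function that is rerun each reporting period. The thresholds and branch labels below are hypothetical illustrations, not any regulator's actual criteria:

```python
import numpy as np
from scipy import stats

def choose_test(data, nd_fraction, alpha=0.05):
    """Sketch of a permit-style decision logic (hypothetical thresholds):
    pick a statistical approach from the data's current characteristics."""
    if nd_fraction > 0.5:
        return "nonparametric / composite analysis (too many non-detects)"
    _, p = stats.shapiro(np.asarray(data))
    if p > alpha:
        return "parametric prediction limit (data pass normality)"
    _, p_log = stats.shapiro(np.log(np.asarray(data)))
    if p_log > alpha:
        return "parametric limit on log-transformed data"
    return "nonparametric prediction limit"

# Re-run the same logic each reporting period as the data evolve.
example = [12.1, 11.8, 13.0, 12.4, 11.5, 12.9, 12.2, 13.1]
print(choose_test(example, nd_fraction=0.0))
```

Because the permit approves the logic rather than any one test, the facility can switch tests as the data change without reopening the permit.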
It's important to ease into a groundwater monitoring program. All it takes is one misapplied statistical test for the findings to grossly inflate monitoring costs and yield inaccurate answers. However, if statistics are wisely applied, they can yield dramatic savings over the life of a costly long-term program.