Jennifer McPartland, Ph.D., is a Health Scientist.
Common sense tells us it’s impossible to evaluate the safety of a chemical without any data. We’ve repeatedly highlighted the scarcity of information available on the safety of chemicals found all around us (see, for example, here and here). Much of this problem can be attributed to our broken chemicals law, the Toxic Substances Control Act of 1976 (TSCA).
But even for those chemicals that have been studied, sometimes for decades, like formaldehyde and phthalates, debate persists about what the scientific data tell us about their specific hazards and risks. Obtaining data on a chemical is clearly a necessary step for its evaluation, but interpreting and drawing conclusions from the data are equally critical steps – and arguably even more complicated and controversial.
How should we evaluate the quality of data in a study? How should we compare data from one study with those from other studies? How should we handle discordant results across similar studies? How should we integrate data across different study designs (e.g., a human epidemiological study and a fruit fly study)? These are just a few of the key questions that must be grappled with when determining the toxicity or risks of a chemical. And they lie at the heart of the controversy and criticism surrounding chemical assessment programs such as EPA’s Integrated Risk Information System (IRIS).
Recently, a number of efforts have been made to systematize the process of study evaluation, with the goal of creating a standardized approach for the unbiased and objective identification, evaluation, and integration of available data on a chemical. These approaches go by the name of systematic review.
Groups like the National Toxicology Program’s Office of Health Assessment and Translation (OHAT) and the UCSF-led Navigation Guide collaboration have been working to adapt systematic review methodologies from the medical field for application to environmental chemicals. IRIS has also begun an effort to integrate systematic review into its human health assessments.
Recently, a paper in Environmental Health Perspectives (EHP) by Krauth et al. systematically identified and reviewed tools currently in use to evaluate the quality of toxicology studies conducted in laboratory animals. The authors found substantial variability across the tools; this finding has important consequences when reviewing the evidence for chemical hazard or risk, as we pointed out in our subsequent commentary (“A Valuable Contribution toward Adopting Systematic Review in Environmental Health,” Dec 2013).
EDF applauds these and other efforts to adopt systematic review in the evaluation of chemical safety. Further elaboration of EDF’s perspective on systematic review can be found here.