Richard Denison, Ph.D., is a Senior Scientist.
This week I attended a workshop sponsored by the National Academy of Sciences’ Committee to Review the IRIS Process. This committee was established in response to a rider attached to an “omnibus” spending bill passed by Congress in late 2011. The committee’s charge is to “assess the scientific, technical, and process changes being implemented by the U.S. Environmental Protection Agency (EPA) for its Integrated Risk Information System (IRIS).”
EPA describes IRIS as “a human health assessment program that evaluates information on health effects that may result from exposure to environmental contaminants.” The key outputs of IRIS assessments are one or more so-called “risk values,” quantitative measures of an “acceptable” level of exposure to the chemical for each cancer and non-cancer health effect associated with the chemical. IRIS risk values are in turn used by regulators to set everything from cleanup standards at Superfund sites to limits in industrial facilities’ water discharge permits.
This week’s workshop – a detailed agenda is available here – was intended to provide expert input to the committee to inform its review of IRIS. It focused on the complex and controversial issue known as “weight of evidence” (WOE) evaluation. Here WOE refers to how EPA – in conducting an IRIS assessment of a particular chemical – selects studies, evaluates their quality, and assesses and integrates their findings, as well as how it communicates the results. At issue in a WOE evaluation, in particular, is how the assessor determines the relative importance – or weight – to give each study.
One of the many issues that came up in the discussion of WOE is how to identify and assess the “risk of bias” in individual studies – a concept borrowed from the evaluation of the reliability of clinical trials used in drug evaluations. (See this PowerPoint presentation by one of the committee’s members, Dr. Lisa Bero, which provides a nice overview of risk of bias in that setting.) Evaluating a study’s risk of bias is critical for assessing its quality and, in turn, the weight it should be given, because bias in studies can result in significant under- or overestimates of the effects being observed.
One type of bias is so-called “funder bias.” Dr. Bero and other researchers have documented through extensive empirical research that a study paid for by a drug manufacturer is significantly more likely to overstate the efficacy or understate the side effects of that drug. With respect to studies of environmental chemicals, the chemical industry – at the workshop and more generally – has pointed to adherence to Good Laboratory Practice (GLP) standards as a sufficient antidote to bias, including funder bias, a notion that has been heartily disputed by others.
But enough background. My intent here is not to fully describe the workshop discussions, but rather to provide the comments I presented during the public comment period at the end of the meeting. My comments addressed the issue of funder bias and also urged the committee, in reviewing and proposing enhancements to the IRIS process, not to dive so deeply into the weeds that it loses sight of the need for a workable process – one able to deliver, in a timely manner, information critical to ensuring public health protection.