EDF Health

Selected tag(s): Systematic review

FDA’s approach to systematic review of chemicals got off on the wrong foot


What Happened?

Last month, FDA’s scientists published the toxicological reference value (TRV) for exposure to cadmium in the diet. This value is the amount of a chemical—in this case cadmium—a person can consume in their daily diet that would not be expected to cause adverse health effects and can be used for food safety decision-making. The TRV was based on a systematic review FDA scientists published last year. We will turn to the TRV itself in an upcoming blog but are focusing on the systematic review here.
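For context, reference values like the TRV are typically derived by dividing a "point of departure" identified from the key studies (for example, a benchmark dose lower confidence limit) by uncertainty factors that account for things like human variability and gaps in the underlying data. As a generic sketch of that relationship (not necessarily the exact form FDA used for cadmium):

    TRV = POD / UF

where POD is the point of departure and UF is the composite uncertainty factor. The systematic review determines which studies, and therefore which point of departure, the calculation rests on.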

In a May 2023 publication, experts in systematic reviews from the University of California, San Francisco (UCSF) raised concerns about FDA’s “lack of compliance” with established procedures.

We discussed these concerns with FDA. They said:

  • “The systematic review and the TRV [publication] have both undergone external peer review by a third-party and experts in the field.” The agency expects to publish the reviews on its website, and
  • FDA “is working on developing a protocol for a systematic review of cardiovascular effects of cadmium exposure that will be published.”

Why It Matters

Systematic review is a method designed to collect and synthesize scientific evidence on specific questions to increase transparency and objectivity and provide conclusions that are more reliable and of higher confidence than traditional literature reviews. In particular, the National Academies of Sciences, Engineering, and Medicine have recommended the use of systematic reviews to establish values such as the TRV that may be used to inform regulatory decisions.

The National Toxicology Program (NTP) and others have developed specific methodologies to conduct systematic reviews. FDA’s authors said they followed NTP’s Office of Health Assessment and Translation (OHAT) handbook.

Unfortunately, FDA’s adherence to the methodology fell short on both transparency and objectivity grounds, undermining the credibility of its conclusions. Credibility is crucial because FDA’s authors stated that “this systematic review ultimately supports regulatory decisions and FDA initiatives, such as Closer to Zero, which identifies actions the agency will take to reduce exposures to contaminants like cadmium through foods.”


Posted in FDA, Health science, Public health | Comments are closed

EDF submits extensive comments critical of EPA OPPT’s TSCA systematic review document

Ryan O’Connell is a High Meadows Fellow; Jennifer McPartland, Ph.D., is a Senior Scientist.

Last night, Environmental Defense Fund (EDF) submitted critical comments on EPA’s Office of Pollution Prevention and Toxics’ (OPPT) “systematic review” document that OPPT is using to evaluate chemicals’ risks under the Toxic Substances Control Act (TSCA).

Systematic review, a hallmark of the clinical sciences, employs structured approaches to identifying, evaluating, and integrating evidence in a manner that promotes scientific rigor, consistency, transparency, objectivity, and reduction of bias.

Unfortunately, OPPT’s systematic review document deviates dramatically from the best practices in systematic review—practices developed over decades based on empirical evidence and experience in application. OPPT’s approach also significantly diverges from recent recommendations of the National Academy of Sciences (see here and here).


Posted in Health policy, Public health, Regulation, TSCA reform | Comments are closed

EPA IRIS program receives high marks from the National Academies

Jennifer McPartland, Ph.D., is a Senior Scientist and Ryan O’Connell is a High Meadows Fellow with the Health Program.

Last week the National Academy of Sciences (NAS) published its review of the Environmental Protection Agency’s (EPA) Integrated Risk Information System (IRIS) program, concluding that the program has made strong progress in implementing NAS’ earlier recommendations. As noted by the chair of the NAS committee that led the review, “The changes in the IRIS program over such a short period of time are impressive.”

As I’ve blogged about before, IRIS is a non-regulatory program that provides critical chemical reviews and scientific expertise that help ensure the water we drink, the air we breathe, and the land where we live, work, and play are safe. Offices across EPA and elsewhere in the federal government rely on IRIS, as do states, local governments, and affected communities (see here and here).

The new NAS report comes four years after its 2014 review, which noted the substantial progress made by IRIS in addressing recommendations from a more critical 2011 review of a draft IRIS assessment of formaldehyde. It is worth noting that half of the committee members involved in the new IRIS review served on the committee that authored the 2011 review.

Posted in Health science | Comments are closed

Getting the data on chemicals is just the beginning

Jennifer McPartland, Ph.D., is a Health Scientist.

Common sense tells us it’s impossible to evaluate the safety of a chemical without any data. We’ve repeatedly highlighted the scarcity of information available on the safety of chemicals found all around us (see for example, here and here).  Much of this problem can be attributed to our broken chemicals law, the Toxic Substances Control Act of 1976 (TSCA).

But even for those chemicals that have been studied, sometimes for decades, like formaldehyde and phthalates, debate persists about what the scientific data tell us about their specific hazards and risks.  Obtaining data on a chemical is clearly a necessary step for its evaluation, but interpreting and drawing conclusions from the data are equally critical steps – and arguably even more complicated and controversial. 

How should we evaluate the quality of data in a study? How should we compare data from one study relative to other studies? How should we handle discordant results across similar studies?  How should we integrate data across different study designs (e.g., a human epidemiological study and a fruit fly study)? These are just a few examples of key questions that must be grappled with when determining the toxicity or risks of a chemical.  And they lie at the heart of the controversy and criticism surrounding chemical assessment programs such as EPA’s Integrated Risk Information System (IRIS). 
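One common quantitative answer to the question of discordant results is a random-effects meta-analysis, which pools effect estimates from similar studies while allowing for between-study differences. The sketch below is purely illustrative (hypothetical study values, and not the method of any particular agency or program):

    # A minimal, illustrative sketch of a DerSimonian-Laird random-effects
    # meta-analysis -- one common way systematic reviews pool discordant effect
    # estimates from similar studies. All study values below are hypothetical.
    import math

    def random_effects_pool(effects, std_errors):
        """Pool per-study effect estimates, allowing for between-study variation."""
        w = [1 / se ** 2 for se in std_errors]                      # fixed-effect weights
        fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)   # fixed-effect mean
        q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects)) # Cochran's Q (heterogeneity)
        c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - (len(effects) - 1)) / c)               # between-study variance
        w_re = [1 / (se ** 2 + tau2) for se in std_errors]          # random-effects weights
        pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
        se_pooled = math.sqrt(1 / sum(w_re))
        return pooled, se_pooled, tau2

    # Hypothetical effect estimates (e.g., log relative risks) and standard errors
    pooled, se, tau2 = random_effects_pool([0.10, 0.25, -0.05], [0.08, 0.12, 0.10])
    print(f"pooled effect = {pooled:.3f} +/- {1.96 * se:.3f} (95% CI), tau^2 = {tau2:.4f}")

Pooling like this only addresses part of the problem; judgments about study quality and about integrating evidence across different study designs still have to be made, which is exactly what systematic review methods try to standardize.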

Recently, a number of efforts have been made to systematize the process of study evaluation, with the goal of creating a standardized approach for unbiased and objective identification, evaluation, and integration of available data on a chemical. These approaches go by the name of systematic review.

Groups like the National Toxicology Program’s Office of Health Assessment and Translation (OHAT) and the UCSF-led Navigation Guide collaboration have been working to adapt systematic review methodologies from the medical field for application to environmental chemicals.  IRIS has also begun an effort to integrate systematic review into its human health assessments. 

Recently, a paper in Environmental Health Perspectives (EHP) by Krauth et al. systematically identified and reviewed tools currently in use to evaluate the quality of toxicology studies conducted in laboratory animals. The authors found substantial variability across the tools; as we pointed out in our subsequent commentary (“A Valuable Contribution toward Adopting Systematic Review in Environmental Health,” Dec 2013), that variability has significant consequences for how evidence of a chemical’s hazards or risks is reviewed.

EDF applauds these and other efforts to adopt systematic review in the evaluation of chemical safety. Further elaboration of EDF’s perspective on systematic review can be found here.

 

Posted in Health policy, Health science, Regulation | Comments are closed