CDC finally describes its derivation of “safe” level in WV spill – but erroneously claims it to be “highly conservative”

Richard Denison, Ph.D., is a Senior Scientist.  Jennifer McPartland, Ph.D., is a Health Scientist.

Slowly but surely, like the downstream movement of the spill’s plume, we are learning more about how government officials derived the 1 ppm “safe” level in drinking water for MCHM, the chemical spilled into West Virginia’s Elk River late last week.

A few more slivers of light were cast today on what has been a remarkably opaque process used by CDC and other officials to set the 1 ppm level – a picture that became even more muddled with last night’s issuance of a “Water Advisory for Pregnant Women” by the West Virginia State Department of Health.

The slivers come from a story in today’s Charleston Gazette by Ken Ward, Jr. and David Gutman reporting on their conversation with an official from the Centers for Disease Control and Prevention (CDC), and from a media call today with the same official.

CDC finally gave a fuller description of its methodology, and while it appears to have more closely followed standard practice than the methodology it initially described, many questions remain about the study used as the starting point.  Release of these studies, therefore, is essential.  [UPDATE, EVENING OF 1/16/14:  Late today, Eastman finally made its studies public; they are available here.]

We discuss the details further below.  But first:

CDC’s erroneous claim that its “safe” level is “highly conservative”

CDC’s claim that the 1 ppm level is “highly conservative” is not warranted on scientific grounds.  This claim is based on CDC’s use of three 10-fold adjustments, referred to by CDC as “uncertainty factors,” to extrapolate from a dose identified in an animal study to a level in drinking water consumed by people; the standard way these factors combine is sketched after the list below.

  1. An “interspecies extrapolation” uncertainty factor to account for the fact that humans may be much more sensitive to the effects of a chemical exposure than rats.
  2. An “intraspecies extrapolation” uncertainty factor to account for the fact that humans differ in their sensitivity to a chemical exposure (e.g., infants or the elderly vs. healthy adults).
  3. A third uncertainty factor (often called a database uncertainty factor) to account for how few data are available on the chemical, and hence the possibility that health effects not yet identified may occur at doses much lower than the dose for the effect that has been studied.
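To make the arithmetic behind these adjustments concrete, here is the standard form of the calculation as it is commonly described in regulatory practice (a sketch for illustration, not taken from a CDC document):

```latex
% Reference dose (RfD) derived from an animal NOAEL using three 10-fold uncertainty factors
\mathrm{RfD} \;=\; \frac{\mathrm{NOAEL}}{UF_{\mathrm{interspecies}} \times UF_{\mathrm{intraspecies}} \times UF_{\mathrm{limited\ data}}}
\;=\; \frac{\mathrm{NOAEL}}{10 \times 10 \times 10}
\;=\; \frac{\mathrm{NOAEL}}{1000}
```

Each 10-fold factor simply divides the animal dose to account for one of the real-world gaps listed above; the combined divisor of 1,000 is an adjustment for known variability and missing data, not an added cushion of safety.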

The CDC official referred to these adjustments as “safety factors” – implying they provide for a large margin of safety.  This is FALSE.  These are REALITY FACTORS.

Each of these accounts for known circumstances with regard to the effects of chemical exposures on people in the real world.  There are plenty of examples of chemicals where:

  1. humans are 10x (or more) more sensitive than rats to a chemical effect, and
  2. the most vulnerable/sensitive human is 10x (or more) more sensitive than the least vulnerable/sensitive, and
  3. an effect not considered in a given study occurs at a dose 10x (or more) lower than the dose for the effect that was looked at in the study.

Don’t take our word for it; take that of the National Academy of Sciences, in its seminal 2009 report Science and Decisions:  Advancing Risk Assessment (p. 132, emphases in original):

Another problem … is that the term uncertainty factors is applied to the adjustments made to calculate the RfD [reference dose, derived from, e.g., a no-effect level] to address species differences, human variability, data gaps, study duration, and other issues. The term engenders misunderstanding: groups unfamiliar with the underlying logic and science of RfD derivation can take it to mean that the factors are simply added on for safety or because of a lack of knowledge or confidence in the process. That may lead some to think that the true behavior of the phenomenon being described may be best reflected in the unadjusted value and that these factors create an RfD that is highly conservative. But the factors are used to adjust for differences in individual human sensitivities, for humans’ generally greater sensitivity than test animals’ on a milligrams-per-kilogram basis, for the fact that chemicals typically induce harm at lower doses with longer exposures, and so on. At times, the factors have been termed safety factors, which is especially problematic given that they cover variability and uncertainty and are not meant as a guarantee of safety.

CDC’s Methodology Revealed

Until yesterday, all indications were that the 1 ppm level was derived from a single oral lethality study in rats – a study that is not publicly available but that reported a median lethal dose (LD50) value.  Yesterday, CDC referred to “additional animal studies” that were under review.  In today’s Charleston Gazette story and this afternoon’s call, the CDC official indicated for the first time that CDC used a second study – also not publicly available – as the starting point for its calculations.  This second study was said to identify a “No Observable Adverse Effects Level (NOAEL)” for MCHM of 100 milligrams per kilogram of body weight per day (mg/kg/day).

[UPDATE 1/17/14:  This study, finally made available late yesterday, was performed using “pure MCHM” (97.3%) rather than the “crude MCHM” mixture that was actually spilled.  This adds further uncertainty:  if components of the crude mixture other than MCHM are more or less toxic than MCHM itself, the mixture’s toxicity would differ from that found for the pure material.]

Numerous questions that bear on this study’s relevance for the purpose to which it has been put remain unanswered.  Just a couple of key ones:

  • What health effect(s) were looked for?  And which ones were not considered?  [UPDATE 1/17/14:  It appears that the study looked for changes in standard blood chemistry and biochemistry parameters, and included histopathological examination of all major organs to look for abnormalities.]
  • How long were the animals exposed – a day? a week?  a month?  [UPDATE 1/17/14:  The study report indicates the animals were exposed for 4 weeks.] 

But at least we now know how CDC made the calculation that led to the 1 ppm level (the arithmetic is reproduced in a short sketch after these steps):

  1. CDC started with the reported NOAEL of 100 mg/kg/day and divided it by the three uncertainty factors (10 x 10 x 10 = 1,000) to arrive at a “reference dose” of 0.1 mg/kg of body weight/day.  This is the amount of the chemical, per kilogram of body weight per day, that under the assumptions made could be presumed “safe” to ingest.
  2. It then assumed an “average child” weighing 10 kilograms (about 22 pounds) was drinking water at an average rate of 1 liter per day (about 34 ounces).  These average values are typical assumptions for use in risk assessment.
  3. Then CDC multiplied the 0.1 mg/kg of body weight/day by the 10 kg average body weight, resulting in 1.0 mg/day for a child as the amount that could be ingested without seeing an effect, again under the assumptions used.
  4. That 1.0 mg/day was then divided by the average water consumption of 1 liter/day to yield 1.0 mg/liter as the concentration in the water consumed identified by CDC as the “safe” level.
  5. That 1.0 mg/liter is equivalent to 1 ppm.
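For readers who want to check the numbers, here is a minimal sketch in Python (our own illustration of the arithmetic described above, not code from CDC) that reproduces the calculation:

```python
# Minimal sketch reproducing the reported arithmetic behind the 1 ppm level (illustrative only).
# The values are those reported for MCHM; the variable names are ours.

noael_mg_per_kg_day = 100.0          # reported NOAEL from the rat study (mg/kg/day)
uncertainty_factors = 10 * 10 * 10   # interspecies x intraspecies x limited-data = 1,000

reference_dose = noael_mg_per_kg_day / uncertainty_factors    # 0.1 mg/kg of body weight/day

child_body_weight_kg = 10.0          # "average child" (about 22 pounds)
water_intake_l_per_day = 1.0         # average drinking-water intake (about 34 ounces)

allowable_mg_per_day = reference_dose * child_body_weight_kg              # 1.0 mg/day
water_concentration_mg_per_l = allowable_mg_per_day / water_intake_l_per_day  # 1.0 mg/L

print(water_concentration_mg_per_l)  # 1.0 mg/L, i.e., 1 ppm
```

Changing any single assumption – a larger combined uncertainty factor, a lighter child, or a higher water-intake rate – shifts the resulting concentration accordingly.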

Welcome to the wild and woolly world of risk assessment, folks.  More to come, we’re sure.


2 Comments

  1. Rachel
    Posted January 17, 2014 at 4:57 pm

    The name of Ken Ward’s newspaper is The Charleston Gazette. Please correct.

    • Richard Denison
      Posted January 17, 2014 at 6:36 pm

      Rachel: My bad! I used the URL, but see that is wrong. Will make the correction!