Scott Pruitt seeks to cook the books on EPA risk assessment science

Richard Denison, Ph.D., is a Lead Senior Scientist.

EPA Administrator Scott Pruitt unveiled his “secret science” initiative yesterday at a press conference to which no press were invited.  While EPA has yet to post the proposed rule or otherwise make it available to the public, it was made available by others.  The main thrust of the proposal is actually considerably different from, and at least initially more targeted than, what was advertised by Pruitt in recent weeks and by House Science Committee Chairman Lamar Smith (R-TX), who authored the secret science legislation on which Pruitt’s proposal was to be based and who appeared with Pruitt yesterday.

Yesterday both men stuck to their earlier talking points about the need to make sure all information EPA relies on is reproducible and fully publicly available, and never mentioned the change in the focus of the proposal.  I suspect both of them would have been hard pressed to describe the actual main focus of the proposal, which is now this:

When promulgating significant regulatory actions, the Agency shall ensure that dose response data and models underlying pivotal regulatory science are publicly available in a manner sufficient for independent validation.  (p. 23, emphases in original)

But I am sure Dr. Nancy Beck, chemical industry toxicologist turned top political appointee in EPA’s toxics office, could in a heartbeat.

I would describe the new approach, while no less dangerous, as a laser-guided missile in comparison to the carpet-bombing approach taken by the House legislation and earlier iterations of the EPA proposal.  

Why the change of strategy? 

A major clue surfaced last week in emails released by the Union of Concerned Scientists that it had received in response to a FOIA request to the agency.  That request was for “copies of all communications, from January 1, 2018 to present [March 6, 2018], between Richard Yamada and Nancy Beck.”  Mr. Yamada is Dr. Beck’s counterpart in EPA’s Office of Research and Development (ORD).  He came to ORD directly from working for Chairman Smith at the House Science Committee.

It should be noted that, after the FOIA’d emails drew significant attention in the media, EPA withdrew them, saying they should not have been released; UCS promptly made them available.

One of the Beck-Yamada email exchanges shows Dr. Beck raising a red flag that an earlier version of the proposal that mirrored the House legislation would pose enormous burdens on industry, by either forcing companies to make public all of the data in their unpublished studies at great expense or forcing EPA not to rely on those industry studies.  Said Dr. Beck:  “The directive needs to be revised.  Without change it will jeopardize our entire pesticide registration/ re-registration review process and likely all TSCA risk evaluations.”  (No mention by Dr. Beck that the directive would have posed the same burden and quandary for EPA in seeking to use published as well as unpublished studies to inform its work, to the detriment of public health protection.)

The proposal was indeed revised.  So what is the new one up to?

The new focus is a requirement that EPA make public information “sufficient for independent validation” of “dose response data and models” as a condition for EPA to rely on such data and models.  I describe this as a laser-guided missile because it goes right to the heart of the war industry has been waging against the risk assessment science used by EPA and called for by the nation’s most prestigious scientific body, the National Academy of Sciences (NAS).

Part of what’s at stake here was signaled in the statement made by one of the proposed rule’s endorsers featured in EPA’s press release:  “The proposal represents a major scientific step forward by recognizing the widespread occurrence of non-linear dose responses in toxicology and epidemiology for chemicals and radiation and the need to incorporate such data in the risk assessment process.”  (emphasis added)

To understand what this statement is getting at, there are three scientific issues that I need to briefly introduce here.

Risk assessment science has increasingly moved away from assuming chemicals have safe thresholds, toward considering low-dose effects, and toward relying on defaults to account for uncertainty.  NAS embraced these concepts and called on EPA to adopt them in its seminal 2009 report, Science and Decisions.  Let me briefly discuss each of them.

First, for decades the chemical industry and its army of consultants have argued that virtually every substance, no matter how toxic, has a “safe threshold” – a level of exposure below which there is no risk whatsoever.  The science has steadily challenged this assumption, based on strong evidence that even if such a threshold appears to exist in, say, a test conducted in laboratory animals, the notion that it actually exists rapidly falls apart when the results are extrapolated to a diverse human population.  That is because the human population exhibits enormous variation in genetics, health status, life stage, background and co-exposures, etc. – such that a “safe” level of exposure in an affluent, healthy adult may well not be safe at all for, say, a developing fetus or an adult living in a poor community.

Second, as the sophistication of scientific methods and our understanding of biology have grown, the science is also increasingly pointing to evidence of real effects of many substances at low doses, once thought to be safe.  Lead and small particulates in air pollution are two examples of substances where science has not identified a “safe” level of exposure.

At the risk of resorting to too much jargon, how this plays out in risk assessment is whether one assumes a linear or non-linear relationship between low levels of exposure to a substance (dose) and the effect that exposure has on health (response).  A linear dose-response relationship means that some level of response within the population is expected at every dose above zero, all the way down to zero.  In contrast, an assumption of a non-linear dose response at low levels would mean there is a safe threshold – a finite dose below which the risk is assumed to be zero.
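To make the contrast concrete, here is a minimal sketch of the two assumptions.  The notation is mine, not the proposal’s: R(d) is the expected response at dose d, β is a slope parameter, and T is a hypothetical threshold.

```latex
% Linear (no-threshold) assumption: some non-zero response at every non-zero dose
R_{\mathrm{linear}}(d) = \beta\, d, \qquad d \ge 0

% Non-linear (threshold) assumption: no response at or below a finite threshold T
R_{\mathrm{threshold}}(d) =
  \begin{cases}
    0               & d \le T \\
    \beta\,(d - T)  & d > T
  \end{cases}
```

Under the linear assumption, risk shrinks as exposure falls but reaches zero only when the dose itself is zero; under the threshold assumption, any exposure at or below T is treated as carrying no risk at all.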

A third, related issue arises from the fact that we don’t intentionally test toxic substances on people, but rather extrapolate from studies done on other animals or even on cultured cells or cell components.  To ensure protection of people requires the use of what are called “defaults.”  For example, when extrapolating from a study done in laboratory rodents, risk assessors typically apply a factor of up to 10 to account for the potential that a substance is much more toxic to people than to rodents.  A similar factor is often applied to account for variability in susceptibility within the human population, to account for the potential that, say, an infant is more susceptible to the effects of a chemical than is an adult.  EPA’s proposal makes several references to “default assumptions,” although their meaning is not entirely clear.
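As a rough illustration of how such defaults enter the arithmetic (the numbers below are generic textbook values, not figures drawn from the proposal): if an animal study identifies a no-observed-adverse-effect level (NOAEL), a human reference dose (RfD) is commonly derived by dividing that level by one factor of 10 for animal-to-human extrapolation and another factor of 10 for variability among people.

```latex
% Generic reference-dose calculation using two default uncertainty factors of 10
\mathrm{RfD}
  = \frac{\mathrm{NOAEL}}{UF_{\mathrm{animal \to human}} \times UF_{\mathrm{human\ variability}}}
  = \frac{\mathrm{NOAEL}}{10 \times 10}
  = \frac{\mathrm{NOAEL}}{100}
```

The larger the combined factor, the lower the resulting “safe” level of exposure, which is why these defaults are part of the fight over risk assessment science described above.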

In her time at the American Chemistry Council (ACC), Dr. Beck led the charge for ACC and the chemical industry on these very issues.  Now as the top Trump official in the toxics program at EPA, she appears to be heralding a return to old, industry-friendly “science,” to the detriment of public health.

So it’s no accident that the new proposal seeks to compel EPA to subject its dose response data and models to independent – read “industry” – validation.

Won’t the proposal also subject the industry’s own data and models to independent validation?

In principle, EPA’s proposal would require that the industry’s own data and models relating to dose response also be made public in order for EPA to rely on them in regulatory decisions.

Count me skeptical, for at least three reasons.

First, the proposal gives EPA unfettered discretion to define what constitutes “pivotal regulatory science,” which it could use selectively to require disclosure where it wishes and not do so where it doesn’t.

Second, the proposal gives EPA unfettered discretion to exempt anything it wants from the full-disclosure requirements (p. 14):

The proposed rule includes a provision allowing the Administrator to exempt significant regulatory decisions on a case-by-case basis if he or she determines that compliance is impracticable because it is not feasible to ensure that all dose response data and models underlying pivotal regulatory science are publicly available in a fashion that is consistent with law, protects privacy and confidentiality, and is sensitive to national and homeland security, or in instances where OMB’s Information Quality Bulletin for Peer Review provides for an exemption (Section IX).

Third, in part to blunt earlier criticism that this proposal could force the disclosure of private information, whether medical records or confidential business information, EPA now proposes (pp. 9-10):

Nothing in the proposed rule compels the disclosure of any confidential or private information in a manner that violates applicable legal and ethical protections.

Interestingly – especially coupled with the broad authority EPA grants itself to issue exemptions – this statement does not say whether EPA will or won’t base a significant regulatory action on non-disclosed data and models, only that nothing compels it to make such disclosure.  EPA may use this ambiguity to selectively consider or ignore non-disclosed data and models to reach industry-preferred outcomes.

In theory, there is one piece of good news in EPA’s proposal: it is a proposal and will be subject to public comment, in contrast to Pruitt’s earlier plan to issue this as an immediately effective directive.  On the flip side, though, it will be harder for a new administration to undo a rule.  Read/listen to what Pruitt himself said yesterday at the announcement (minute 53:21):

This regulation that we’re proposing today, and that’s something that I want to emphasize, this is not a policy, this is not a memo, this is a proposed rule. And the reason that’s important is because this is not just something that we’re proposing that may last for two months or two years, it is a codification of an approach that says that as we do our business at the agency, the science that we use is going to be transparent, it’s gonna be reproducible, it’s gonna be able to be analyzed by those in the marketplace and those that watch what we do can make informed decisions about whether we’ve drawn the proper conclusions or not.

It will be critical that advocates for strong science and public and environmental health protections loudly let EPA know that its effort to allow industry interests to manipulate agency risk assessment science is outrageous and unacceptable.
