Chemical safety evaluation: Potential benefits of emerging test methods

Jennifer McPartland, Ph.D., is a Health Scientist.


This is the third in a series of blog posts on new approaches that federal agencies are exploring to improve how chemicals are evaluated for safety.  Previous posts primarily focused on the scientific principles underlying these efforts.  This post will take a pause from scientific fundamentals to discuss some of the opportunities presented by these more novel methods, while subsequent posts will address some of their limitations and remaining challenges.  (Not to worry, though, I’ll soon get back to computer-simulated organs as promised.) 

A cornerstone of the new approaches is high-throughput (HT) chemical testing tools, housed within projects like EPA’s ToxCast (see earlier post).  Below are some of the potential advantages that HT tools offer over the conventional chemical assessment paradigm:

  • Speed.  Conventional toxicology testing methods generally involve dosing an animal with a chemical of interest and after some period of time—days to months to years—looking to see whether an adverse outcome, for example, a tumor, has developed.  Here the scientist is observing the downstream consequence of a chemical’s interference with the proper function of one or more biological pathways.  In contrast, HT methods mostly focus on “catching” an early indicator of hazard:  the perturbation of the pathway(s) itself, rather than the ultimate consequence of that perturbation.  This requires much less time:  Not only does the effect happen sooner, but it can often be observed in something less than the whole animal, e.g., in a culture of cells or even a solution of cell components.
    That also means that many chemicals can be put through a battery of HT assays simultaneously.  Indeed, thousands of chemicals can be analyzed in hundreds of assays in a period of time far shorter than would be required to detect adverse outcomes in laboratory animals.  Given the massive backlog of chemicals with little or no safety data, the speed of HT tools could be very valuable, at least in screening and prioritizing chemicals by level of potential concern.  (A minimal sketch of such a screening-and-ranking step appears after this list.)
  • Human relevance.  Because of the ethical problems associated with human testing, as well as the simple fact that we live so long, traditional toxicological methods use laboratory animals to assess the toxicity a chemical may present to a human.  According to the seminal National Academy of Sciences report “Toxicity Testing in the 21st Century: A Vision and a Strategy,” use of such animal “models” is possible because, in general, human biology is similar to that of test animals.  While animal studies have served as important and useful tools in predicting the hazards a chemical may present to a human, extrapolation from animal data is still needed at some level to estimate risk in humans.  And there are cases where the toxicity a chemical presents is not shared between lab animals and humans.  For example, thalidomide is toxic to human fetuses, but rats are resistant to its effects.
    EPA’s ToxCast HT assays employ human cells, grown in culture, in addition to animal cells.  That means potentially greater confidence that effects observed in the human cells could happen in a whole person.  Additionally, this could lower the likelihood of a cross-species “false negative,” that is, missing an effect because it happens not to occur in the lab animal chosen for a given test but would occur in a human – or, conversely, a “false positive,” that is, seeing an effect in the animal model that for some reason would not occur in people.  (A small worked example after this list shows how such error rates are tallied.  Note that false negatives or false positives may arise for entirely different reasons in HT assays; but that’s a discussion for a future blog post.)
  • Multiple cell types and life stages.  In addition to advantages resulting from testing on human cells in general, HT methods also offer the potential to look for different kinds of toxicity by testing chemicals on different cell types (e.g., liver cells, kidney cells).  This can shed light on a chemical’s ability to disrupt a process that only takes place in certain organs.  Some HT assays even use combinations of cell types taken directly from human tissues, in an effort to mimic responses of, and interactions between, cell types that are involved in the body’s reaction to a particular disease or disorder (e.g., asthma).
    A particularly exciting potential application of HT tests is in evaluating chemical effects on early life stages, including fetal development.  For example, the Texas-Indiana Virtual STAR Center is using mouse embryonic stem cells to determine how chemicals may affect key biological pathways during early fetal development.  While this kind of research is still at an early stage (no pun intended), the potential to effectively screen chemicals for developmental toxicity using HT tests would greatly strengthen chemical safety assessments.
  • Exposure Relevance.  Chemical testing in laboratory animals is typically done at high dosing concentrations to ensure that, if an adverse effect is caused, it can be detected in a relatively small number of animals in a relatively short period of time.  These concentrations are often much higher than what a person would actually experience.  Methods are then used to extrapolate the data from such high-dose exposure to lower concentrations more representative of “real-world” exposure.  The high- to low-dose extrapolation process has always been contentious; for example, are effects seen at high doses merely artifacts of the high doses, or real effects that would still be seen at lower doses?  An advantage of HT methods is that a wide range of doses, including low doses, can be directly tested, in enough samples that statistically meaningful results can be obtained (see the dose-response sketch after this list).  Such capabilities may also assist in resolving disputes around chemicals’ ability to cause different effects at low doses than they cause at high doses.
  • Assessing Mixtures.  We don’t live in bubbles where exposures to chemicals occur one at a time.  Rather, we are exposed to many chemicals at once, for overlapping durations.  Human biomonitoring data reveal the presence of hundreds of chemicals inside our bodies, from fetal development through adulthood.  It is a challenge to put hazard data generated in separate lab tests for individual chemicals into this real-world context of multiple, simultaneous chemical exposures.  In addition, testing all of the various combinations of chemicals in traditional tests would require far more lab animals and be enormously time-consuming and expensive (a quick calculation after this list shows how fast the combinations multiply).  High-throughput assays offer a means to test large numbers of chemical mixtures, at multiple doses, and to look for effects at multiple time points.
  • Green Chemistry.  HT methods are finding their way into green chemistry applications, too.  High-throughput technologies hold promise for informing safer chemical selection, design, and engineering.  These assays could flag potential toxicity concerns for new chemicals during early research, design, and development phases.  Many of the HT technologies used in ToxCast and related programs in fact originate from the pharmaceutical industry, where they have been used for many years in drug discovery to screen out drug candidates that appear ineffective or show indications of hazard, and to push forward into further evaluation and development drugs that show potential for market approval.  Efforts are already underway to integrate HT tools into green chemical design.  A workshop held just this March brought expert scientists together to discuss a new paradigm in safer chemical design that relies in part on HT and other computational technologies.
  • Crisis Situations.  In crisis situations where there is limited time to evaluate a chemical or mixture before its use, we might be able to rely on batteries of quick high-throughput tests to make a more informed decision.  However, even in these situations care should be taken to clearly communicate any limitations and uncertainties associated with these decisions and the data informing them.  For example, EPA used some of its ToxCast assays to examine potential endocrine-disrupting effects of dispersant chemicals used to clean up the BP oil spill.  Given the crisis state of the situation (and putting aside the fact that the testing came after millions of gallons of the dispersants had already been used, raising the question of why more thorough testing hadn’t been conducted well before then), this information was helpful.  However, EDF expressed concern (see here and here) regarding the poor communication of the results of such assays in a manner that effectively exonerated these chemicals from having any endocrine-disrupting activity – let alone other effects – despite the significant limitations of the available assays.  The important lesson here is that the new technologies shouldn’t be given credit beyond their actual capabilities in any situation, crisis or otherwise.
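
To make the screening-and-prioritization idea in the Speed bullet concrete, below is a minimal sketch of how results from a battery of HT assays might be rolled up into a priority ranking.  Everything here is hypothetical: the chemical names, the assay count, and the simple "fraction of assays hit" score.  Real programs like ToxCast use far more elaborate statistical pipelines.

```python
# Hypothetical illustration: rank chemicals by how many HT assays they "hit".
# Chemical names, results, and the scoring rule are all made up for this sketch.

# 1 = assay perturbed (a "hit"), 0 = no activity observed.
# A real battery would span hundreds of assays, not eight.
screening_results = {
    "chemical_A": [1, 1, 0, 1, 1, 0, 1, 1],
    "chemical_B": [0, 0, 0, 1, 0, 0, 0, 0],
    "chemical_C": [1, 0, 1, 1, 0, 1, 0, 0],
}

def hit_fraction(hits):
    """Fraction of assays in which the chemical showed activity."""
    return sum(hits) / len(hits)

# Rank from most to least active; the most active chemicals would be
# prioritized for deeper follow-up testing.
ranked = sorted(screening_results,
                key=lambda chem: hit_fraction(screening_results[chem]),
                reverse=True)

for chem in ranked:
    print(f"{chem}: active in {hit_fraction(screening_results[chem]):.0%} of assays")
```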
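The false-negative and false-positive terms from the Human relevance bullet boil down to simple arithmetic on a two-by-two table.  The counts below are invented purely to show the calculation; they are not real cross-species concordance data.

```python
# Hypothetical cross-species comparison: how often does an animal model's
# verdict agree with what actually happens in humans?  All counts are invented.

true_positives  = 40  # toxic in the animal model AND toxic in humans
false_negatives = 10  # clear in the animal model, but toxic in humans
false_positives = 15  # toxic in the animal model, but not in humans
true_negatives  = 35  # clear in both

# Sensitivity: the chance a genuinely human-toxic chemical is caught.
sensitivity = true_positives / (true_positives + false_negatives)
# Specificity: the chance a genuinely non-toxic chemical is correctly cleared.
specificity = true_negatives / (true_negatives + false_positives)

print(f"sensitivity = {sensitivity:.0%}")  # 80%: 1 in 5 human toxicants missed
print(f"specificity = {specificity:.0%}")  # 70%: 3 in 10 safe chemicals flagged
```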
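For the Exposure Relevance bullet, one common way to use directly tested low doses is to fit a concentration-response curve and read low-dose activity off the fitted curve, rather than extrapolating down from high-dose animal data.  This is a minimal sketch assuming a standard Hill model; the data points and starting parameters are fabricated for illustration.

```python
# Fit a Hill (concentration-response) curve to fabricated HT assay readings.
# A real analysis would fit thousands of such curves, one per chemical-assay pair.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, top, ac50, slope):
    """Response rises from 0 toward `top`, half-maximal at concentration `ac50`."""
    return top * conc**slope / (ac50**slope + conc**slope)

# Fabricated readings spanning four orders of magnitude in dose (arbitrary units).
conc     = np.array([0.01, 0.1, 1.0, 10.0, 100.0])
response = np.array([0.02, 0.10, 0.45, 0.80, 0.95])

params, _ = curve_fit(hill, conc, response, p0=[1.0, 1.0, 1.0])
top, ac50, slope = params
print(f"fitted AC50 ~ {ac50:.2f} (dose giving a half-maximal response)")

# Because low doses were tested directly, low-dose activity is read off the
# fitted curve instead of being extrapolated down from high-dose animal data.
print(f"predicted response at a low dose of 0.05: {hill(0.05, *params):.3f}")
```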
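And for the Assessing Mixtures bullet, a quick back-of-the-envelope calculation shows why testing every combination in animals is a non-starter.  The inventory size of 1,000 chemicals is illustrative; real chemical inventories run into the tens of thousands.

```python
# Why mixtures overwhelm animal testing: combinations grow explosively.
from math import comb

n_chemicals = 1_000  # illustrative; real inventories are far larger
print(comb(n_chemicals, 2))  # 499,500 distinct two-chemical mixtures
print(comb(n_chemicals, 3))  # 166,167,000 distinct three-chemical mixtures
```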

Which brings us to the end of this post with a good segue to the next one in this series.  The many potential benefits of HT methods are enticing and certainly give ample justification for the substantial effort and resources federal agencies are investing to bring them to bear.   But we still have a ways to go from current research and development activities on HT and other cutting-edge tools, to full-throttle use of them in decision-making, regulatory or otherwise.

In the next post, I’ll look at a number of challenges that must be overcome if these methods are to become the basis for the ideal chemical testing future.


3 Comments

  1. josephguth
    Posted May 12, 2011 at 12:02 pm

    This is an excellent compilation and analysis of the benefits of new testing methods. I hope in your next post you will address the link between these methods and the legal test that defines (or, in a new law, will define) EPA’s power to regulate a chemical. In the case of the current “unreasonable risk” test of TSCA Section 6, for example, the question is whether results from these new methods would provide sufficient evidence for EPA to carry that burden of proof in order to regulate. (One issue here, of course, is the scientifically established causal connection between this newly evolving test data and the likelihood of actual human or environmental harm.) I harbor a lurking fear that industry generally seems to support these methods not just because they are inexpensive, but also because they would not support regulatory action, much as the SIDS data set generated under the HPV program is insufficient to support regulation of HPV chemicals under TSCA. Chemicals policy reform advocates must be sure that the legal test of new laws, such as Senator Lautenberg’s Safe Chemicals Act, will enable EPA to actually act on the test data that is accepted as satisfying a no data, no market requirement. If we don’t, society will continue in the same logical paralysis that TSCA imposes on us.

  2. Jennifer McPartland
    Posted May 12, 2011 at 3:38 pm

    Thank you, Joe, for your thoughtful comment. You’ve raised the important issue of whether and when newer methods will inform regulatory decision-making. This is a million-dollar question (quite literally) on the minds of many engaged in evolving chemical testing efforts. I hope to touch on several aspects of the concern you raised in the next post. Stay tuned!

  3. Pasky Pascual
    Posted May 18, 2011 at 7:17 am

    Ditto on the previous comment: you’ve done a great job compiling information on these emerging techniques for risk assessment. I also appreciate and very much agree with the concerns about linking the research to regulatory decisions. It’s an issue that some colleagues and I are now addressing and I very much look forward to your next post.

    cheers!