Jonathan Choi is a chemicals policy fellow.
EDF Senior Scientist Dr. Jennifer McPartland contributed to this post.
The beginning of this century will no doubt be known for a lot of things. In the biological sciences, I predict it’ll be known for big data. It’s hard to wrap your head around just how far we’ve come already. For example, the data chips that sing “happy birthday” to your loved ones in those horrendously overpriced cards have more computing power than the Allies did in 1945. When I first started using computers, the 5.25” floppy disk was being replaced by the new 256 KB 3.5” disk. Now in Korea, you can get 1 GB-per-second internet speeds for $20 a month. That’s around 4000 floppy disks of data per second for about as much as I spend every week at the burrito place down the street.
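(For the curious, here’s a quick sketch of that back-of-the-envelope math. It assumes “1 GB per second” means one gigabyte per second and uses the 256 KB floppy capacity mentioned above; those figures are illustrative, not service specs.)

```python
# Rough check of the "floppy disks per second" comparison above.
# Assumptions: 1 GB/s means one gigabyte per second (decimal GB),
# and one 3.5" floppy holds 256 KB.

BYTES_PER_GB = 10**9           # 1 gigabyte, decimal convention
FLOPPY_BYTES = 256 * 1024      # 256 KB floppy capacity

floppies_per_second = BYTES_PER_GB / FLOPPY_BYTES
print(f"{floppies_per_second:,.0f} floppy disks of data per second")
# -> roughly 3,800, i.e. "around 4000"
```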
In the biological sciences, we’ve seen an explosion of new ways to generate, collect, analyze, and store data. We’re photographing the world’s biodiversity and sharing it with crowdsourced taxonomists. We’re creating a database of the genomes of the world’s organisms. We’re mapping chemical exposures (our exposome), inventorying the microbes that live in our guts (our microbiome), ripping apart cells and sequencing every bit of messenger RNA that floats around inside (our transcriptome), and much more.
So, it’s not too surprising that regulatory agencies like EPA are pushing their own efforts to amass large quantities of data to help meet their missions. EPA has the unenviable task of reviewing tens of thousands of chemicals currently on the market with little health and safety data, on top of hundreds of new chemicals banging at its door each year. As we have written on numerous occasions, the agency clearly needs a better law that gives it greater authority to get the data it needs to effectively evaluate and manage chemical risks. But, given the information abyss in which we operate, we could definitely stand to adopt new testing approaches that generate at least screening-level data on chemicals faster.