Market Forces

How renewables, natural gas and flat demand led to a drop in CO2 emissions from the US power sector

New state-by-state research shows significant reductions across the country from 2005-2015

Decarbonizing the power sector in the United States will be critical to achieving the goal of a 100% clean economy by 2050 – especially since reaching “net-zero” greenhouse gas emissions across the economy means that other energy-using sectors such as buildings and transport will increasingly need to be electrified, switching away from direct fossil fuel use and relying on low-carbon electricity instead. Demand for electricity is therefore very likely to grow in the future – which makes it essential that its CO2 emissions sharply decrease through the accelerated deployment of low-carbon technologies, such as wind and solar power, in the decades ahead.

US power sector CO2 emissions, 1990-2015

For now, US power sector CO2 emissions appear to have turned a corner. While CO2 emissions from the U.S. power sector increased between 1990 and 2005, they peaked shortly thereafter, and then decreased to the point that by 2015, they had fallen by 20% (or 480 million metric tonnes CO2) compared to 2005.

In recently published research, my co-authors and I wanted to understand the drivers behind the drastic fall in the country’s—and individual states’—power sector CO2 emissions, and in particular the role that low-carbon technologies such as wind and solar power have already played in reducing US power sector CO2 emissions. Our analysis, published in Environmental Research Letters, used an approach called index decomposition analysis and found that natural gas substituting for coal and petroleum, coupled with large increases in renewable energy generation—primarily wind—were responsible for 60% and 30%, respectively, of the decline in CO2 emissions from the US power sector between 2005 and 2015.
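The core of index decomposition analysis is straightforward to sketch. Below is a minimal two-factor LMDI (Log Mean Divisia Index) decomposition—one common variant of the method—that splits an emissions change into a generation effect and a carbon-intensity effect. The numbers are invented for illustration; they are not the paper’s data.

```python
from math import log

def logmean(a, b):
    """Logarithmic mean, used as the LMDI weighting function."""
    return a if a == b else (a - b) / (log(a) - log(b))

def lmdi_two_factor(g0, i0, g1, i1):
    """Decompose the change in emissions C = G * I (generation x carbon
    intensity) into an activity effect and an intensity effect."""
    c0, c1 = g0 * i0, g1 * i1
    w = logmean(c1, c0)
    activity = w * log(g1 / g0)   # effect of changing total generation
    intensity = w * log(i1 / i0)  # effect of changing carbon intensity
    return activity, intensity

# Illustrative numbers: generation in TWh, intensity in Mt CO2 per TWh.
act, intens = lmdi_two_factor(g0=4000, i0=0.60, g1=4100, i1=0.46)
print(round(act + intens))  # the two effects sum exactly to C1 - C0
```

The LMDI identity guarantees the effects sum to the total change, which is what makes it possible to attribute shares of a national emissions decline to individual drivers such as gas substitution and renewables growth.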

Renewable growth in red states

Most of the emissions reductions driven by renewable energy growth came from Texas and states in the Midwest — Iowa, Kansas, Illinois and Oklahoma. While many of these states are not necessarily known for supporting aggressive climate policies, the combination of federal tax credits, state energy policies, decreasing costs of renewables and windy conditions appears to have provided powerful support for renewable energy deployment.

Texas, in particular, is an interesting case. In 2005, it was the largest emitter of power sector CO2 emissions in the country. But by 2015, its gross reductions from wind energy totaled 27 million metric tons, or more than 5% of the total net US reduction in power sector CO2 emissions since 2005 (i.e., about a sixth of the total US reduction attributed to renewables). The state achieved its final renewable portfolio standard (RPS) target in 2008—seven years ahead of its 2015 goal. In addition to the falling cost of turbine technologies, federal tax credits and favorable wind conditions also likely played a role in wind’s growth.

Wind generation in Texas, Iowa, Kansas, Illinois and Oklahoma together contributed half of the renewables-related emission reductions (70 Mt, or 3 percentage points of the 20% reduction in US power sector CO2 emissions since 2005).
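These figures are easy to sanity-check: a 20% reduction equal to 480 Mt implies a 2005 baseline of roughly 2,400 Mt, and 70 Mt against that baseline works out to about 3 percentage points.

```python
# Back-of-the-envelope check of the wind figures above.
reduction_mt = 480      # total US power sector CO2 reduction, 2005-2015
reduction_pct = 0.20    # the same reduction as a share of 2005 emissions
baseline_mt = reduction_mt / reduction_pct      # implied 2005 emissions
wind_share_pts = 100 * 70 / baseline_mt         # TX + Midwest wind share

print(baseline_mt)               # 2400.0 Mt CO2 in 2005
print(round(wind_share_pts, 1))  # ~2.9 percentage points, i.e. ~3
```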

Over the same period, many states that had relied heavily on coal, like Pennsylvania, Georgia, Alabama and Florida, reduced emissions by substituting natural gas for coal in electricity generation. While that substitution prompted a decline in CO2 emissions, it’s important to note that natural gas, though it emits less CO2 than coal and petroleum when producing electricity, is still a source of CO2 emissions and can only take us so far in decarbonizing the power sector. In addition, methane leakage across the supply chain remains a significant issue—and is not accounted for in this analysis—meaning the overall net greenhouse gas benefit from this natural gas expansion was potentially significantly lower.

Need for new policy

While there are positive signs in the power sector—the cost of renewables continues to decline and a growing number of states are taking crucial action to cut CO2 emissions—these trends, as well as the specific factors identified in this analysis, cannot be relied upon to achieve the deep emissions reductions needed in the decades ahead.

U.S. power sector CO2 emissions are projected to remain relatively flat over the next decade and rise slowly after that, absent new policies. This is particularly significant given that much of the decarbonization of other sectors such as buildings and transportation will need to rely heavily on electrification.

Ultimately, new policy interventions are necessary, including strong limits on climate pollution – not only in the power sector, but across the entire economy to drive reductions at the pace and scale needed for the US to be 100% clean no later than 2050.


Not all fossil fuel subsidies are created equal, all are bad for the planet

This is part two of a five-part series exploring “Policy Design for the Anthropocene,” based on a recent Nature Sustainability Perspective. The first post explored the intersection of policy and politics in the development of instruments to help humans and systems adapt to the changing planet.

A recent International Monetary Fund (IMF) working paper made headlines by revealing that the world is subsidizing fossil fuels to the tune of $5 trillion a year. Every one of these dollars is a step backward for the climate. That much is clear.

Instead of subsidizing fossil emissions, each ton of carbon dioxide (CO2) emitted should be appropriately priced. That’s also where it’s important to dig into the numbers.

It’s tempting to go with the $5 trillion figure, as it suggests a simple remedy: remove the subsidies. At one level, that is precisely the right message. But the details matter, and they go well beyond the semantics of what it means to subsidize something.

Direct subsidies are large

The actual, direct subsidies—money flowing directly from governments to fossil fuel companies and users—are “only” around $300 billion per year. That is still a huge number, and it may well be an underestimate at that. The International Energy Agency’s World Energy Outlook 2014, which took a closer look at fossil subsidies than subsequent reports have, put the number closer to $500 billion; a 2015 World Bank paper provided more detailed methodologies and a range of between $500 billion and $2 trillion.

What all these estimates have in common is that they stick to a tight definition of a subsidy:

Subsidy (noun, \ ˈsəb-sə-dē \)

“a grant by a government to a private person or company to assist an enterprise deemed advantageous to the public,” per Merriam-Webster.

These taxpayer-funded giveaways are not only not “advantageous to the public,” they also ignore the enormous now-socialized costs each ton of CO2 emitted causes over its lifetime in the atmosphere. (Each ton emitted today stays in the atmosphere for dozens to hundreds of years.)

The direct subsidies also come in various shapes and forms—from some countries keeping the cost of gasoline artificially low, to a $1 billion tax credit for “refined coal” in the United States.

Indirect subsidies are significantly larger

The vast majority of the IMF’s total $5 trillion figure is the unpriced socialized cost of each ton of CO2 emitted into the atmosphere. Each ton, the IMF estimates conservatively, causes about $40 in damages over its lifetime in today’s dollars.

Depending on one’s definition of a subsidy, this may well qualify. It’s a grant from the public to fossil fuel producers and users—something the public pays for in lives, livelihoods and other unpriced consequences of unmitigated climate change.

The remedy here is very different from removing direct government subsidies. It’s to price each ton of CO2 emitted so that less gets emitted. The principle couldn’t be simpler: “When something costs more, people buy less of it,” as Bill Nye memorably put it on John Oliver’s Last Week Tonight recently.
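That principle can be made concrete with a constant-elasticity demand sketch. The elasticity value and prices below are illustrative assumptions, not estimates from the IMF paper or any other source.

```python
# Constant-elasticity sketch of "when something costs more, people buy
# less of it." The elasticity of -0.5 is an illustrative assumption.
def demand_after_price_change(q0, p0, p1, elasticity=-0.5):
    """Quantity demanded after a price change, assuming constant
    price elasticity of demand."""
    return q0 * (p1 / p0) ** elasticity

# A $40-per-ton carbon price layered onto fuel whose embodied CO2
# previously cost $100 per ton: consumption, and hence emissions, fall.
q1 = demand_after_price_change(q0=100.0, p0=100.0, p1=140.0)
print(round(q1, 1))  # consumption drops below the original 100 units
```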

 

All that goes well beyond semantics of what it means to subsidize. One policy is to remove a tax loophole or another kind of subsidy, the other is to introduce a carbon price. The politics here are very different.

Unpriced climate risks might be much larger still

The $5 trillion figure also hides something else. By using a $40-per-ton figure, the IMF focuses on a point estimate for each ton of CO2 emitted, and a conservative one at that. The number comes from an estimate produced by President Obama’s Interagency Working Group on the Social Cost of Carbon. That’s a good starting point, certainly a better one than the current Administration’s estimate.

But even the $40 figure is conservative. It captures what was quantifiable and quantified at the time. It does not account for many known yet still-to-be-quantified damages. It does not account for risks and uncertainties, the vast majority of which would push the number significantly higher still.

In short, the $5 trillion figure may well convey a false sense of certainty.

In some sense, little of that matters. The point is: there is a vast thumb on the scale pushing the world economy toward fossil fuels, the exact opposite of what should be done to ensure a stable climate.

In another very real sense, the difference matters a lot: politics may trump all else, but policy design matters, too.

By now the task is so steep that it’s simply not enough to say we need to price emissions and leave things at that. Yes, we need to price carbon, but we also need to subsidize cleaner alternatives—in the true sense of what it means to subsidize: to do so for the benefit of the public. Whether that comes under the heading of a “Green New Deal” or not, it is a much more comprehensive approach than one single policy instrument.

This is part 2 of a 5-part series exploring policy solutions outlined in broad terms in “Policy Design for the Anthropocene.” Part 3 will focus on “Coasian” rights-based instruments, taking a closer look at tools that limit overall pollution to create markets where there were none before.


How smart congestion pricing will benefit New Yorkers

This post was co-authored with Maureen Lackner

Last week, New York became the first American city to adopt congestion pricing—a move that should benefit both the city’s crumbling transit system and the environment.

In highly dense areas such as lower Manhattan—where valuable road space is limited by urban geography—congestion pricing allows for better management of vehicle traffic flow. Since the 1950s, economists and transportation engineers have advocated for this market-based measure, which encourages drivers to consider the social cost of each trip by imposing an entrance fee to certain parts of cities—in this case, Manhattan below 60th Street. These fees should discourage driving, thus reducing traffic, while—in New York’s case—raising needed funds for the subway and city buses. Such pricing plans have been successful in reducing congestion in places like Singapore and parts of Europe. They have also provided additional social benefits, like reducing asthma attacks in children in Stockholm by almost half and cutting traffic accidents in London by 40 percent.

New York will formally introduce this policy instrument in 2021. And while many of the pricing decisions have been deferred, 80 percent of the revenue collected will go to the subway and bus network; 10 percent will go to New York’s commuter rail systems that serve the city. Those setting rates can look to existing pricing models and research to price for success.

Cristobal Ruiz-Tagle, an EDF High Meadows Fellow, spoke with Juan Pablo Montero – a leading environmental economist, fellow Chilean and member of our Economics Advisory Council – about his research on congestion pricing, and what New Yorkers can and should expect.

CRT: What’s the best case scenario for New Yorkers with this pricing plan?

JPM: The latest report from INRIX ranked New York as the fourth most congested city in the United States. New Yorkers lose an average of 133 hours per year in congestion—just sitting in a car and not moving, or moving very slowly. The cost of congestion per year in New York is $9.5 billion—the largest cost in the country. So that’s the starting point.

To solve the problem, you need to set the congestion fee sufficiently high. In Santiago, we ran a study and found it should be around $14 per day. In New York, as far as I understand, they’re proposing a fee for passenger vehicles of around $12. It’s a little lower than what we see in London (£11.50), but I expect New Yorkers are still going to get most of the benefit from less congestion.

The most important element of the plan is what you do with the resources collected. The proposal here is to improve public transportation. We did a study on this in Santiago and showed that if you don’t put the money back into the transportation system, the poor will be much worse off. We’re proposing something similar in Santiago—that you use the funds to both improve infrastructure and lower fares. That’s the only way to do it without turning it into a regressive policy.

CRT: Are there other benefits to these plans besides reducing traffic and improving public transportation?

JPM: Maybe people will start using bike lanes more frequently—or people are willing to walk more because their public transportation is better. There could be more outdoor activities. Those additional benefits are important, but they’re hard to quantify.

CRT: In addition to a congestion pricing plan, London also has a pollution fee, which started on April 8th. Do you think these kinds of fees will further reduce emissions and improve health? Or is there something else that should be considered?

JPM: They should, but it’s important to understand the local vehicle fleet—especially how old it is. The cars that contribute the most to local air pollution are very old cars. In terms of global pollutants—namely CO2—old and new cars are roughly the same. Local pollutants—nitrogen oxides, fine particulate matter, etc.—are much worse in older fleets. So if the fleet is new—younger than 10 years—you may not see as much of a reduction as if the fleet is very old or if you have a ban on diesel. People with lower incomes are more likely to leave their cars at home when charged for driving or with a congestion fee—and they tend to use older vehicles that emit more pollution.

CRT: The age of local fleets has been important in other parts of the world, right? Especially when cities tried non-market-based measures like vehicle restrictions.

JPM: Yes, Mexico City placed a uniform restriction on vehicles in 1989 without making any distinction between newer, cleaner cars and older, dirtier ones. You could only drive your car into the city for a limited number of days per week, so people went out and bought second cars that they could then use on the off days. And the cars they bought were older—and dirtier. So in that way, the plan backfired.

CRT: It sounds like there are many ways to structure these congestion pricing plans.

JPM: Yes. There are ways to introduce dynamic pricing. You don’t want to keep these prices fixed forever in case the policy response wasn’t as expected. Ideally, you want to change prices based on location and time of day. This may eventually happen, but I don’t think it’s prudent to push for that today. Don’t let the perfect be the enemy of the good.

We are finally seeing this policy instrument taken seriously—and it will be very interesting to see which city is next. LA? Seattle? Washington, DC?

Congestion pricing should provide New Yorkers with a number of benefits, including cleaner air, better public health and a modernized public transit system, all while reducing that maddening gridlock in downtown Manhattan. EDF is part of a coalition of groups supporting New York’s congestion pricing plan.

 


How reverse auctions can help scale energy storage

This post is co-authored with Maureen Lackner

Just as reverse auctions have helped increase new renewable energy capacity, our new policy brief for the Review of Environmental Economics and Policy argues they could also be an effective approach for scaling energy storage.

Why we need energy storage

Voters have spoken, and states are moving toward cleaner electricity. Legislatures in Hawaii and California passed mandates for 100 percent clean energy in the electricity sector, and governors in Colorado, Illinois, Nevada, New Jersey, New York, Maine, and Michigan have all made similar 100 percent clean energy promises in recent months. These ambitious targets will require large-scale integration of wind and solar energy, which can be unpredictable and intermittently available. Cost-effective energy storage solutions can play a leading role to provide clean, reliable electricity—even when the sun isn’t shining and wind isn’t blowing.

Energy storage systems—ranging from lithium-ion (Li-ion) batteries to hydroelectric dams—can provide a wide array of valuable grid services. Their ability to bank excess energy for use at a later date makes them particularly well-suited to address the intermittency challenge of wind and solar. In some cases, energy storage systems are also already cost-competitive with natural gas plants.

However, in order to reach ambitious clean energy targets, we’ll likely need to close a large energy storage gap. One recent estimate suggests approximately 10,000 gigawatt-hours (GWh) of energy storage may be needed to support a two-thirds renewables domestic electricity mix. In our policy brief, we estimate the United States currently has no more than 10 percent of this utility-scale energy storage capacity available; the actual quantity is likely much lower. Developing vast levels of energy storage will likely be an important factor in integrating a greater share of renewables into the energy mix. Smart policy design can help drive energy storage prices even further below current historic lows, while ensuring these technologies are procured cost-effectively.
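The implied gap is simple arithmetic on the two figures above:

```python
# Quick check of the storage-gap arithmetic in the text.
needed_gwh = 10_000                   # rough need for ~2/3 renewables
current_max_gwh = 0.10 * needed_gwh   # upper bound on today's capacity
gap_gwh = needed_gwh - current_max_gwh
print(gap_gwh)  # at least 9000.0 GWh of storage still to be built
```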

A path forward: using reverse auctions to scale energy storage

Reverse auctions have already helped scale renewables and, when designed well, may also be an effective tool when applied to energy storage. In a reverse auction, multiple sellers submit bids to a single buyer for the right to provide a good or service. In the case of renewables, developers bid to provide a portion of capacity desired by the buyer, typically a utility. This policy tool is gaining in popularity, because, if designed well, it can drive down bid prices and ensure reliable procurement. Globally, the share of renewables capacity procured through reverse auctions is expected to grow from 20 percent in 2016 to more than 50 percent in 2022. It seems likely that auction-induced competition has triggered a fall in renewable prices that some are calling the “Auctions Revolution.”
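The clearing mechanics of a reverse auction are easy to sketch: sort bids by price and accept the cheapest until the procurement target is filled. The bids and capacities below are invented for illustration.

```python
# Minimal sketch of reverse-auction clearing: a single buyer wants a
# fixed amount of capacity, and the cheapest bids win until it's filled.
def clear_reverse_auction(bids, target_mw):
    """bids: list of (price_per_mwh, offered_mw) tuples. Returns the
    winning bids, possibly partially accepted, cheapest first."""
    winners, remaining = [], target_mw
    for price, mw in sorted(bids):
        if remaining <= 0:
            break
        take = min(mw, remaining)       # accept only what's still needed
        winners.append((price, take))
        remaining -= take
    return winners

bids = [(45.0, 50), (30.0, 40), (38.0, 60), (55.0, 80)]
winners = clear_reverse_auction(bids, target_mw=100)
print(winners)  # the two cheapest bids fill the 100 MW target
```

Because sellers compete to undercut one another, the buyer procures the target at the lowest offered prices—the dynamic credited with the falling renewable prices described above.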

While examples in Colorado and Hawaii suggest reverse auctions can be effective in procuring energy storage, there’s little guidance on tailoring them for that purpose. We offer five recommendations:

1: Encourage a Large Number of Auction Participants

The more developers bidding into an auction, the fiercer the competition. How policymakers approach this depends on their end goal. In a 2016 Chilean auction, bidding was open to solar and coal developers, and policymakers were pleased when solar developers offered cheaper bids on a dollar per megawatt-hour basis than coal developers. Another approach: signaling consistent demand through auction schedules. Participation in South African renewable auctions increased after auction organizers took steps to give advance notice and instructions for future regular auctions.

2: Limit the Amount of Auctioned Capacity

If competition still seems tepid, auctioneers can always scale down the amount of capacity auctioned. As witnessed in a South African renewable auction, bidders respond to a supply squeeze by decreasing their bid prices.

3: Leverage Policy Frameworks and Market Structures

Auctions don’t exist in a vacuum. Renewable auctions benefit tremendously from existing market structures and companion policies. Where applicable, auction design should consider the multiple grid services energy storage systems can offer. Even if an auction is only focused on energy arbitrage, it should not preclude storage developers from participating in multiple markets (e.g. frequency regulation), as this may help bidders reduce their bid prices.

4: Earmark a Portion of Auctioned Capacity for Less-mature Technologies

A major criticism of early auctions is that they unintentionally favored the same large players and mature technologies. Policymakers shouldn’t forget that energy storage includes several technological options; they can design auctions to address this by separating procurement for more advanced technologies (Li-ion, for example) from newer technologies (zinc air batteries).

5: Penalize delivery failures without damaging competition

Developers should be incentivized to bid their cheapest possible price, but poor auction design can trigger a race to the bottom with ever more unrealistic bid prices. This is especially true if developers don’t believe they will be punished for delivery failures or poor quality projects. Already, some contract terms for energy storage procured by auctions include penalties if the developer cannot deliver its promised grid service.

Decarbonizing our energy supply isn’t an easy task. Reverse auctions stand out as a possible tool to quickly and cost-effectively increase our energy storage capacity, which will help integrate intermittent renewables. If this market-based mechanism can be tailored to suit energy storage systems’ capabilities (e.g. offering multiple grid services), it could help shift us to a future where we have access to clean energy at any time of day and year.


Dysfunctional gas market cost New England electric customers $3.6 billion

This blog post was co-authored with Levi Marks, Charles Mason and Matthew Zaragoza-Watkins

New England natural gas and electricity prices have undergone dramatic spikes in recent years, spurring talk about the need for a costly new pipeline to meet the region’s needs as demand for gas seemed ready to overtake suppliers’ available capacity to deliver it. For example, during the polar vortex of 2013-14, the gas price at New England’s main gas trading hub regularly exceeded $20/MMBtu (million British Thermal Units, the measure commonly used in the gas industry) and reached a record high of $78/MMBtu on January 22, 2014, compared to the annual average of $5.50/MMBtu.

In an efficient market, we would indeed expect prices to be high during events like the polar vortex. We would also expect pipelines delivering gas to regions like the Boston area – in this case the Algonquin Gas Transmission (AGT) pipeline – to be fully utilized. But this is not what we observed when we analyzed the scheduling patterns on the AGT pipeline from 2013 to 2016.

What 8 million data points told us about artificial shortages

Our research group spent 18 months looking at eight million data points covering the three-year period from mid-2013 to mid-2016. We discovered that during this period, a handful of New England gas utilities owned by two large energy companies routinely scheduled large deliveries, then cancelled orders at the last minute. These scheduling practices created an artificial shortage when in fact there was far more pipeline capacity on the system than it appeared.

As a result, we estimate that New England electricity customers paid $3.6 billion more over this period than they would have if the unused pipeline capacity had been available to deliver gas for electricity generation (for more information on how we calculated this number, visit our methodology page). As for the need for a new pipeline, our analysis shows that energy prices over this period were inflated, which means they should not be used to assess how much, if any, additional pipeline capacity is needed. Both conclusions illustrate why it’s so important (and how valuable it could be) to fix the interface between the gas and electric markets.

Why unused pipeline capacity impacts electricity prices

Although it was natural gas that was supposedly in short supply over this period, electricity prices also experienced large price spikes. That’s due to the way electricity prices are set, and the fact that much of the electricity in New England, as in much of the country, is increasingly generated using natural gas.

About half of the electricity traded in New England’s wholesale electricity market, ISO New England (ISO-NE), comes from gas-fired generators. For any given hour, the wholesale electricity price for all generators in this market is determined by the last (highest) bid needed to meet customer demand (or “clear the market”). This market clearing price is typically (75 percent of the time) set by a natural gas plant, which means their cost for gas and pipeline transportation tends to drive the price of electricity. That cost is largely determined by the spot price of natural gas at Algonquin Citygate, New England’s main gas trading hub, served by the Algonquin Pipeline.

Policy paper: Aligning natural gas and electricity markets »

The figure below shows a stylized generation supply curve for ISO-NE. The lower cost resources to the left (typically solar, wind and hydro) are generally used before the higher cost plants to the right (coal, gas, petroleum). The plants situated where demand meets the supply curve set the overall market price in any given hour (bids are submitted a day ahead of time in the day ahead market). This is typically one of the natural gas plants represented by the red dots on the middle part of the curve. A higher spot price for natural gas increases the marginal cost of gas-fired generators, shifting the generation supply curve up as seen in the second panel. This translates into a higher marginal cost of meeting a given level of electricity demand and thus a higher wholesale electricity price P*.

 

Stylized generation supply curve for ISO-NE.
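The price-setting logic the figure illustrates can be sketched in a few lines: generators are dispatched in merit order (cheapest first), and the marginal unit’s cost sets the price every accepted generator receives. The costs and capacities below are illustrative, not actual ISO-NE data.

```python
# Sketch of merit-order price setting in a wholesale market like ISO-NE.
def clearing_price(supply, demand_mw):
    """supply: list of (marginal_cost, capacity_mw). Returns the
    market-clearing price, or None if demand cannot be met."""
    served = 0
    for cost, mw in sorted(supply):   # dispatch cheapest plants first
        served += mw
        if served >= demand_mw:
            return cost               # marginal plant sets the price
    return None

fleet = [(5, 3000),    # wind/solar/hydro: near-zero marginal cost
         (35, 6000),   # gas plants: cost driven by the gas spot price
         (60, 2000)]   # coal/petroleum peakers
p_normal = clearing_price(fleet, demand_mw=8000)

# A spike in the gas spot price shifts the gas bids up, raising P*:
fleet_spiked = [(5, 3000), (90, 6000), (60, 2000)]
p_spiked = clearing_price(fleet_spiked, demand_mw=8000)
print(p_normal, p_spiked)  # 35 90
```

This is why an inflated gas spot price at Algonquin Citygate flows through to every megawatt-hour sold in the wholesale electricity market, not just the gas-fired portion.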

What price do electric generators pay for gas? The secondary market for natural gas

In New England, as in many other markets, gas-fired electricity generators generally procure gas from a secondary market, where sellers are usually natural gas utilities that purchase long-term contracts at regulated prices directly from the pipeline company. The secondary market exists because these long-term contracts allow contract holders to sell any unused capacity at unregulated prices to gas-fired generators or others.

Generators buying in the secondary market for gas do so because they have decided it is more cost-effective to procure natural gas transportation that way than to grapple with rigid, long-term contracts for pipeline capacity that don’t fit their highly variable needs.

While the amount of pipeline capacity available to deliver natural gas to New England is fixed, demand for gas fluctuates significantly with external factors such as temperature, as seen in the price spikes experienced during the polar vortex.

On days like these, holders of long-term contracts can pocket the difference between the price that buyers in the secondary market are willing to pay for gas deliveries, as indicated by the Algonquin Citygate spot price, and the regulated price they themselves pay the pipeline for that same capacity. In the case of utilities, revenues from such sales are typically to a large extent refunded back to the ratepayers that paid for those long-term contracts in the first place.

How could pipeline capacity go unused during the polar vortex?

We see four local gas utilities (two owned by Eversource, two by Avangrid) that scheduled far more pipeline capacity the day before gas delivery than they ended up using the next day. Repeatedly, these companies downscheduled their orders only at the end of the gas delivery day—too late for that unused capacity to be made available to the secondary market.

The threshold at which last-minute down-scheduling of gas orders impacts gas and electric prices varies depending on daily demand. As a proxy, we looked at how far the scheduling patterns at delivery “nodes” on the pipeline operated by Eversource and Avangrid-owned utilities deviated from the overall system average.

  • On 434 days during the study period, at least one Eversource node made downward scheduling changes more than two standard deviations larger than the average scheduling change made by all firms on the pipeline.
  • On 351 days, at least one Eversource location had a schedule change more than three standard deviations larger than the average.

The Eversource utilities primarily made large downscheduling changes on cold days, while Avangrid made large scheduling cuts far more often.

  • On 1043 days, at least one Avangrid location made a downward scheduling change more than two standard deviations larger than the average.
  • On 1031 days, at least one Avangrid location made a downward change more than three standard deviations larger than the average.

Total unused capacity exceeded 100,000 MMBtu on 37 days in the three-year period we looked at, which is roughly 7% of the pipeline’s total daily capacity and 28% of the typical total daily supply to gas-fired generators. That these large amounts of downscheduled pipeline capacity were not made available to New England’s gas-fired generators raised both the gas price for generators as well as the price of electricity for New England’s electricity customers. We estimate that unused pipeline capacity increased average gas and electricity prices by 38% and 20%, respectively, over the three-year period we study.
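The flagging rule behind those counts can be sketched as follows; the node names and volumes are invented for illustration, not drawn from the study’s data.

```python
# Sketch of the deviation rule described above: a node's downward
# scheduling change on a gas day is flagged when it exceeds the
# all-firm average change by more than k standard deviations.
from statistics import mean, stdev

def flag_node(day_changes, node, k=2.0):
    """day_changes: dict node -> downward scheduling change (MMBtu)
    for one gas day. Returns True if `node` cut its schedule more than
    k standard deviations above the all-firm average that day."""
    vals = list(day_changes.values())
    mu, sigma = mean(vals), stdev(vals)
    return day_changes[node] > mu + k * sigma

# Made-up example day: most nodes trim a little, one cuts a lot.
day = {"node_a": 350, "node_b": 380, "node_c": 400, "node_d": 420,
       "node_e": 450, "node_f": 390, "node_g": 410, "node_h": 370,
       "node_i": 430, "node_x": 9000}
print(flag_node(day, "node_x"))  # True: an outsized last-minute cut
```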

While this behavior may have been within the companies’ contractual rights, the significant impacts in both the gas and electricity markets show the need to consider improvements to market design and regulation. The gas transportation market must become more transparent and flexible to better ensure that existing pipeline capacity is optimally utilized and that unbiased price signals in both the gas and electricity markets lead to cost-efficient investment in energy infrastructure.

 


Alternative Facts: 6 Ways President Trump’s Energy Plan Doesn’t Add Up

Photos by lovnpeace and KarinKarin

This blog was co-authored with Jonathan Camuzeaux and is the first in an occasional series on the economics of President Trump’s Energy Plan

Just 60 days into Trump’s presidency, his administration has wasted no time in pursuing efforts to lift oil and gas development restrictions and dismantle a range of environmental protections to push through his “America First Energy Plan”—an agenda that he claims will allow the country to “take advantage of the estimated $50 trillion in untapped shale, oil, and natural gas reserves, especially those on federal lands that the American people own.”

Putting aside the convenient roundness of this number, the sheer size of it makes this policy sound appealing, but buyer beware. Behind the smoke and mirrors of this $50 trillion is a report commissioned by the industry-backed Institute for Energy Research (IER) that lacks serious economic rigor. The positive projections from lifting oil and gas restrictions come straight from the IER’s advocacy arm, the American Energy Alliance. Several economists reviewed the assessment and agreed: “this is not academic research and would never see the light of day in an academic journal.”

Here is why Trump’s plan promises a future it can’t deliver:

1. No analytical backup for almost $20 trillion of the $50 trillion.
Off the bat, it’s clear that President Trump’s plan relies on flawed math. What the report actually estimates is $31.7 trillion, not $50 trillion, based on increased revenue from oil, gas and coal production over 37 years (a total that includes estimated increases in GDP, wages, and tax revenue). The remaining roughly $18 trillion of the “$50 trillion” appears to be conjured out of thin air.

2. Inflated fuel prices
The report assumes an average oil price of $100 per barrel and an average natural gas price of $5.64 per thousand cubic feet (Henry Hub spot price) to calculate overall benefits. Oil prices are volatile: in the last five years they reached a high of $111 per barrel and a low of $29 per barrel, and they were below $50 a barrel just a few days ago. A $5.64 gas price is not outrageous, but gas prices have mostly been below $5 for several years. By applying inflated oil and gas prices across all 37 years, the author dismisses both volatility and the price impacts of changes in supply. There’s no denying oil and gas prices could go up in the future, but they could also go down, and the modeling in the IER report is inadequate at best when it comes to tackling this issue.

3. Technically vs. economically recoverable resources
The IER report is overly optimistic about the amount of oil and gas that can be viably produced on today’s restricted federal lands. The report assumes that recoverable reserves can be exploited to the last drop over the 37-year period, based on estimates from a Congressional Budget Office report. A deeper look reveals that these estimates are actually for “technically recoverable resources”: the amount of oil and gas that can be produced using current technology, industry practice, and geologic knowledge. While these resources are deemed accessible from a technical standpoint, they cannot always be produced profitably. This distinction matters because economically recoverable resources, the subset that can actually be produced at a profit, are always smaller than what is technically extractable, as illustrated by a diagram from the Energy Information Administration. The IER report ignores this basic industry knowledge to present a rosier picture.

4. Lack of discounting causes overestimations
When economists evaluate the economic benefits of a policy whose impacts stretch well into the future, it is common practice to apply a discount rate to get a sense of their value to society in today’s terms. Discounting is important to account for the simple fact that we generally value present benefits more than future benefits. The IER analysis does not include any discounting and therefore overestimates the true dollar benefits of lifting oil and gas restrictions. For example, applying a standard 5% discount rate to the $31.7 trillion benefits would reduce the amount to $12.2 trillion.
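The mechanics of discounting can be shown in a minimal sketch. It assumes, purely for illustration, that the $31.7 trillion accrues as a uniform annual stream over the 37 years; the report’s actual year-by-year profile is not spelled out, and a different profile would shift the exact answer.

```python
# Minimal discounting sketch. Assumption (ours, for illustration): the
# $31.7T of benefits accrues evenly over 37 years. The actual annual
# profile in the IER report would change the exact present value.

total = 31.7   # trillion dollars, undiscounted (IER report figure)
years = 37
rate = 0.05    # standard 5% annual discount rate

annual = total / years
present_value = sum(annual / (1 + rate) ** t for t in range(1, years + 1))

print(f"Undiscounted total:  ${total:.1f} trillion")
print(f"Present value at 5%: ${present_value:.1f} trillion")
```

Under this uniform-stream assumption the present value comes out to roughly $14.3 trillion; a benefit stream weighted toward later years, as long-horizon resource development typically is, discounts even further, consistent with the $12.2 trillion figure above. Either way, discounting cuts the headline number dramatically.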

5. Calculated benefits are not additional to the status quo
The IER report presents the $31.7 trillion as entirely new and additional to the status quo. This is false. The projections must be compared against a baseline scenario in which the restrictions are not lifted. The report never examines that counterfactual: a future in which these oil and gas restrictions remain in place and the economy still produces large economic benefits, while protecting the environment.

6. No consideration of environmental costs
Another significant failure of IER’s report: even if GDP growth was properly estimated, it would not account for the environmental costs associated with this uptick in oil and gas development and use. This is not something that can be ignored, and any serious analysis would address it.

We know drilling activities can lead to disastrous outcomes with real environmental and economic impacts. Oil spills like the Deepwater Horizon and Exxon Valdez have demonstrated that tragic events happen and come with a hefty social, environmental and hard-dollar price tag. The same can be said for natural gas leaks, including the recent one at Aliso Canyon, California. And of course, there are significant, long-term environmental costs to increased emissions of greenhouse gases, including more extreme weather, damage to human health, and food scarcity, to name a few.

The Bottom Line: The $50 Trillion Is an Alternative Fact, but the Safeguards America Will Lose Are Real
These factors fundamentally undercut President Trump’s promise that Americans will reap the benefits of a $50 trillion future energy industry. Most importantly, the real issue is what would be sacrificed by setting down this path: a clean energy future in which our country leads the way in innovation and green growth, creating new long-term industries and high-paying jobs without losing our bedrock environmental safeguards. If the administration plans to upend hard-fought restrictions that provide Americans with clean air and water, we expect it to provide a substantially more defensible analytical foundation.
