Market Forces

Not all fossil fuel subsidies are created equal, but all are bad for the planet

This is part two of a five-part series exploring “Policy Design for the Anthropocene,” based on a recent Nature Sustainability Perspective. The first post explored the intersection of policy and politics in the development of instruments to help humans and systems adapt to the changing planet.

A recent International Monetary Fund (IMF) working paper made headlines by revealing that the world is subsidizing fossil fuels to the tune of $5 trillion a year. Every one of these dollars is a step backward for the climate. That much is clear.

Instead of being subsidized, each ton of carbon dioxide (CO2) emitted should be appropriately priced. That’s also where it’s important to dig into the numbers.

It’s tempting to go with the $5 trillion figure, as it suggests a simple remedy: remove the subsidies. At one level, that is precisely the right message. But the details matter, and they go well beyond the semantics of what it means to subsidize something.

Direct subsidies are large

The actual, direct subsidies—money flowing directly from governments to fossil fuel companies and users—are “only” around $300 billion per year. That is still a huge number, and it may well be an underestimate at that. The International Energy Agency’s World Energy Outlook 2014, which took a closer look at fossil fuel subsidies than subsequent editions have, put the number closer to $500 billion; a 2015 World Bank paper provided more detailed methodologies and a range of $500 billion to $2 trillion.

What all these estimates have in common is that they stick to a tight definition of a subsidy:

Subsidy (noun, \ ˈsəb-sə-dē \)

“a grant by a government to a private person or company to assist an enterprise deemed advantageous to the public,” per Merriam-Webster.

These taxpayer-funded giveaways are not only not “advantageous to the public,” they also ignore the enormous now-socialized costs each ton of CO2 emitted causes over its lifetime in the atmosphere. (Each ton emitted today stays in the atmosphere for dozens to hundreds of years.)

The direct subsidies also come in various shapes and forms—from some countries keeping the cost of gasoline artificially low, to a $1 billion tax credit for “refined coal” in the United States.

Indirect subsidies are significantly larger

The vast majority of the IMF’s total $5 trillion figure is the unpriced socialized cost of each ton of CO2 emitted into the atmosphere. Each ton, the IMF estimates conservatively, causes about $40 in damages over its lifetime in today’s dollars.

Depending on one’s definition of a subsidy, this may well qualify. It’s a grant from the public to fossil fuel producers and users—something the public pays for in lives, livelihoods and other unpriced consequences of unmitigated climate change.
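Using only the round numbers quoted in this post, a quick back-of-envelope split (a sketch for illustration, not the IMF’s own accounting) shows just how lopsided the direct/indirect breakdown is:

```python
# Back-of-envelope split of the IMF headline figure, using the round
# numbers quoted in this post (an illustration, not IMF accounting).
total_bn = 5_000    # headline figure: ~$5 trillion per year
direct_bn = 300     # direct subsidies: ~$300 billion per year

indirect_bn = total_bn - direct_bn          # unpriced social costs
indirect_share = indirect_bn / total_bn     # fraction of the total

print(f"Indirect: ${indirect_bn:,} billion ({indirect_share:.0%} of total)")
```

On these numbers, roughly 94 percent of the headline figure is unpriced social cost rather than money actually changing hands.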

The remedy here is very different from removing direct government subsidies. It’s to price each ton of CO2 emitted so that less is emitted. The principle couldn’t be simpler: “When something costs more, people buy less of it,” as Bill Nye memorably put it on John Oliver’s Last Week Tonight recently.

 

All that goes well beyond the semantics of what it means to subsidize. One policy is to remove a tax loophole or another kind of subsidy; the other is to introduce a carbon price. The politics here are very different.

Unpriced climate risks might be much larger still

The $5 trillion figure also hides something else. By using a $40-per-ton figure, the IMF focuses on a point estimate for each ton of CO2 emitted, and a conservative one at that. The number comes from an estimate produced by President Obama’s Interagency Working Group on the Social Cost of Carbon. That’s a good starting point, certainly a better one than the current Administration’s estimate.

But even the $40 figure is conservative. It captures what was quantifiable and quantified at the time. It does not account for many known yet still-to-be-quantified damages. It does not account for risks and uncertainties, the vast majority of which would push the number significantly higher still.

In short, the $5 trillion figure may well convey a false sense of certainty.

In some sense, little of that matters. The point is: there is a vast thumb on the scale pushing the world economy toward fossil fuels, the exact opposite of what should be done to ensure a stable climate.

In another very real sense, the difference matters a lot: Politics may trump all else, but policy design matters, too.

By now the task is so steep that it’s simply not enough to say we need to price emissions and leave things at that. Yes, we need to price carbon, but we also need to subsidize cleaner alternatives—in the true sense of what it means to subsidize: to do so for the benefit of the public. Whether that comes under the heading of a “Green New Deal” or not, it is a much more comprehensive approach than one single policy instrument.

This is part 2 of a 5-part series exploring policy solutions outlined in broad terms in “Policy Design for the Anthropocene.” Part 3 will focus on “Coasian” rights-based instruments, taking a closer look at tools that limit overall pollution to create markets where there were none before.

Also posted in Economics, Markets 101, Social Cost of Carbon

Policy Design for the Anthropocene

There’s no denying we humans are changing the planet at an unprecedented pace. If carbon dioxide in the atmosphere is any guide, that pace is increasing at an increasing rate. For the mathematically inclined, that’s the third derivative pointing in the wrong direction.

Enter The Sixth Extinction, The Uninhabitable Earth, Falter, or simply the “Anthropocene”—us humans altering the planet to the point where the changes are visible in the geological record, ringing in a new epoch.

A team led by Earth systems scientists Johan Rockström and Will Steffen developed the concept of “planetary boundaries” in 2009. They identified nine domains in which humans are altering fundamental Earth systems—from climate change to land-system change to stratospheric ozone depletion—and gave us the now-famous spider graphs summarizing the all-too-dire warnings (Figure 1).

Planetary boundaries, tipping points, and policies (Figure 1 from “Policy design for the Anthropocene”)

It is all the more significant that both Rockström, now director of the famed Potsdam Institute for Climate Impact Research (PIK), and Steffen joined another large, multidisciplinary team ten years later to focus on “Policy design for the Anthropocene.” This team, led by EDF senior contributing economist Thomas Sterner, focused on the solutions.

The good news: there are many.

Table 2 summarizes the crowded field of approaches at policymakers’ disposal. It also shows the decisions to be made when choosing among them.

Policy instruments (based on Table 2 from “Policy design for the Anthropocene”)

How to choose?

Choosing among the many options available quickly moves from policy design to politics.

Take climate change as an all-too prominent example. For one, the obvious first step is to agree that there is a problem in the first place. Denying the problem is not going to get us anywhere near a constructive debate about policy solutions.

One big political decision then is to identify who benefits from acting—or conversely, who pays the costs. If the rights go to the polluter, it’s the victims who pay—all of us, in the case of climate change. If the rights go to society, it’s broadly speaking the polluters who pay. The difference is as stark as between permits on the one hand, and outright bans on the other.

Price or rights-based policies?

Often the decision of how to act comes down to two broad buckets of policies: price-based or rights-based. The two are, broadly speaking, two sides of the same coin. Rights generate prices, and prices imply rights.

The difference plays out between carbon pricing and tradable permits. One fixes the price level, the other the amount of emissions. Cue endless academic debates about which instrument is better under which circumstances. Details, of course, do matter.

And this also brings us immediately back to politics. A big difference between price- and rights-based policies is that the latter imply that political horse-trading doesn’t affect the overall quantity of pollution, at least to a first approximation. Whether tradable permits are auctioned off—with polluters paying the full price—or given away for free doesn’t, at first, make a difference. Overall emissions reductions stay the same. I say “at first” because any money raised could be spent intelligently on further emissions reductions.

Environmental effectiveness, economic efficiency, political efficacy

The larger point is that (almost) everything is possible. The problems might be dauntingly large. The solution space is similarly large. It’s also clear that no single decision criterion is enough.

Environmental effectiveness is key. Economic efficiency is similarly important.

Achieving the environmental goal is like building a train to the right station. That’s clearly the most important step. Economic efficiency is akin to building the fastest possible train. Just being fast doesn’t help if the journey goes in the wrong direction. But efficiency means that one can achieve the same goal at lower cost, or more at the same cost.

But smart policy design, of course, is not enough. It takes political will to get there. Designing policies that pass political muster is clearly one criterion, especially in a polarized environment.

Getting the policy minutiae right is important, but it’s also clear, of course, that politics trumps all. Policies don’t inspire action. Visions of a better future and a just transition do.

This is part 1 of a 5-part series exploring in more detail the policy solutions outlined in broad terms in “Policy Design for the Anthropocene.” Part 2 will focus on “Pigouvian” price instruments, taking a closer look at fossil fuel subsidies and carbon pricing.

Also posted in International, Politics

How smart congestion pricing will benefit New Yorkers

This post was co-authored with Maureen Lackner

Last week, New York became the first American city to adopt congestion pricing—a move that should benefit both the city’s crumbling transit system and the environment.

In highly dense areas such as lower Manhattan, where valuable road space is limited by urban geography, congestion pricing allows for better management of vehicle traffic flow. Since the 1950s, economists and transportation engineers have advocated for this market-based measure, which encourages drivers to consider the social cost of each trip by imposing an entrance fee to certain parts of cities—in this case, Manhattan below 60th Street. These fees should discourage driving, thus reducing traffic, while—in New York’s case—raising needed funds for the subway and city buses. Such pricing plans have been successful in reducing congestion in places like Singapore and parts of Europe. They have provided additional social benefits, like reducing asthma attacks in children in Stockholm by almost half and cutting traffic accidents in London by 40 percent.

New York will formally introduce this policy instrument in 2021. And while many of the pricing decisions have been deferred, 80 percent of the revenue collected will go to the subway and bus network; 10 percent will go to New York’s commuter rail systems that serve the city. Those setting rates can look to existing pricing models and research to price for success.

Cristobal Ruiz-Tagle, an EDF High Meadows Fellow, spoke with Juan Pablo Montero – a leading environmental economist, fellow Chilean and member of our Economics Advisory Council – about his research on congestion pricing, and what New Yorkers can and should expect.

CRT: What’s the best-case scenario for New Yorkers with this pricing plan?

JPM: The latest report from INRIX ranked New York as the fourth most congested city in the United States. New Yorkers lose an average of 133 hours per year in congestion—just sitting in a car and not moving, or moving very slowly. The cost of congestion per year in New York is $9.5 billion—the largest cost in the country. So that’s the starting point.

To solve the problem, you need to set the congestion fee sufficiently high. In Santiago, we ran a study and found it should be around $14 per day. In New York, as far as I understand, they’re proposing a fee for passenger vehicles of around $12. That’s a little lower than what we see in London (£11.50), but I expect New Yorkers are still going to get most of the benefit from less congestion.

The most important element of the plan is what you do with the resources collected. The proposal here is to improve public transportation. We did a study on this in Santiago and showed that if you don’t put the money back into the transportation system, the poor will be much worse off. We’re proposing something similar in Santiago—that you use the funds to both improve infrastructure and lower fares. That’s the only way to do it without turning it into a regressive policy.

CRT: Are there other benefits to these plans besides reducing traffic and improving public transportation?

JPM: Maybe people will start using bike lanes more frequently—or people are willing to walk more because their public transportation is better. There could be more outdoor activities. Those additional benefits are important, but they’re hard to quantify.

 CRT: In addition to a congestion pricing plan, London also has a pollution fee, which started on April 8th. Do you think these kinds of fees will further reduce emissions and improve health? Or is there something else that should be considered?

JPM: They should, but it’s important to understand the local vehicle fleet—especially how old it is. The cars that contribute the most to local air pollution are very old cars. In terms of global pollutants—namely CO2—old and new cars are roughly the same. Local pollutants—nitrogen oxides, fine particulate matter, etc.—are much worse in older fleets. So if the fleet is new—younger than 10 years—you may not see as much of a reduction as if the fleet is very old, or if you have a ban on diesel pollution. People with lower incomes are more likely to leave their cars at home when charged for driving or with a congestion fee—and they tend to use older vehicles that emit more pollution.

 CRT: The age of local fleets has been important in other parts of the world, right? Especially when cities tried non-market-based measures like vehicle restrictions.

JPM: Yes, Mexico City placed a uniform restriction on vehicles in 1989 without making any distinction between newer, cleaner cars and older, dirtier ones. You could only drive your car into the city a limited number of days per week, so people went out and bought second cars that they could then use on the off days. And the cars they bought were older and dirtier. So in that way, the plan backfired.

CRT: It sounds like there are many ways to structure these congestion pricing plans.

JPM: Yes. There are ways to introduce dynamic pricing. You don’t want to keep these prices fixed forever in case the policy response wasn’t as expected. Ideally, you want to change prices based on location and time of day. This may eventually happen, but I don’t think it’s prudent to push for that today. Don’t let the perfect be the enemy of the good.

We are finally seeing this policy instrument taken seriously—and it will be very interesting to see which city is next. LA? Seattle? Washington, DC?

Congestion pricing should provide New Yorkers with a number of benefits, including cleaner air, better public health and a modernized public transit system, all while reducing that maddening gridlock in downtown Manhattan. EDF is part of a coalition of groups supporting New York’s congestion pricing plan.

 

Also posted in Markets 101

How reverse auctions can help scale energy storage

This post is co-authored with Maureen Lackner

Just as reverse auctions have helped increase new renewable energy capacity, our new policy brief for the Review of Environmental Economics and Policy argues they could also be an effective approach for scaling energy storage.

Why we need energy storage

Voters have spoken, and states are moving toward cleaner electricity. Legislatures in Hawaii and California passed mandates for 100 percent clean energy in the electricity sector, and governors in Colorado, Illinois, Nevada, New Jersey, New York, Maine, and Michigan have all made similar 100 percent clean energy promises in recent months. These ambitious targets will require large-scale integration of wind and solar energy, which can be unpredictable and intermittently available. Cost-effective energy storage solutions can play a leading role to provide clean, reliable electricity—even when the sun isn’t shining and wind isn’t blowing.

Energy storage systems—ranging from lithium-ion (Li-ion) batteries to hydroelectric dams—can provide a wide array of valuable grid services. Their ability to bank excess energy for use at a later date makes them particularly well-suited to address the intermittency challenge of wind and solar. In some cases, energy storage systems are also already cost-competitive with natural gas plants.

However, in order to reach ambitious clean energy targets, we’ll likely need to close a large energy storage gap. One recent estimate suggests approximately 10,000 gigawatt-hours (GWh) of energy storage may be needed to support a two-thirds renewables domestic electricity mix. In our policy brief, we estimate the United States currently has no more than 10 percent of this utility-scale energy storage capacity available; the actual quantity is likely much lower. Developing vast levels of energy storage will likely be an important factor in integrating a greater share of renewables into the energy mix. Smart policy design can help drive energy storage prices even further below current historic lows, while ensuring these technologies are procured cost-effectively.
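Taking the two estimates above at face value (10,000 GWh needed, and at most 10 percent of that existing today), the implied gap is straightforward arithmetic:

```python
# Rough size of the U.S. energy storage gap, using the two estimates
# quoted above (assumptions for illustration, not market data).
needed_gwh = 10_000        # storage to support a two-thirds renewables mix
existing_share_max = 0.10  # "no more than 10 percent" exists today

existing_gwh_max = needed_gwh * existing_share_max
gap_gwh_min = needed_gwh - existing_gwh_max   # at least this much to build

print(f"Storage gap: at least {gap_gwh_min:,.0f} GWh")
```

Since the existing share is an upper bound, the true gap is at least 9,000 GWh and likely larger.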

A path forward: using reverse auctions to scale energy storage

Reverse auctions have already helped scale renewables and, when designed well, may also be an effective tool when applied to energy storage. In a reverse auction, multiple sellers submit bids to a single buyer for the right to provide a good or service. In the case of renewables, developers bid to provide a portion of capacity desired by the buyer, typically a utility. This policy tool is gaining in popularity, because, if designed well, it can drive down bid prices and ensure reliable procurement. Globally, the share of renewables capacity procured through reverse auctions is expected to grow from 20 percent in 2016 to more than 50 percent in 2022. It seems likely that auction-induced competition has triggered a fall in renewable prices that some are calling the “Auctions Revolution.”

While examples in Colorado and Hawaii suggest reverse auctions can be effective in procuring energy storage, there’s little guidance on tailoring them for that purpose. We offer five recommendations:

1: Encourage a Large Number of Auction Participants

The more developers bidding into an auction, the fiercer the competition. How policymakers approach this depends on their end goal. In a 2016 Chilean auction, bidding was open to solar and coal developers, and policymakers were pleased when solar developers offered cheaper bids on a dollar per megawatt-hour basis than coal developers. Another approach: signaling consistent demand through auction schedules. Participation in South African renewable auctions increased after auction organizers took steps to give advance notice and instructions for future regular auctions.

2: Limit the Amount of Auctioned Capacity

If competition still seems tepid, auctioneers can always scale down the amount of capacity auctioned. As witnessed in a South African renewable auction, bidders respond to a supply squeeze by decreasing their bid prices.

3: Leverage Policy Frameworks and Market Structures

Auctions don’t exist in a vacuum. Renewable auctions benefit tremendously from existing market structures and companion policies. Where applicable, auction design should consider the multiple grid services energy storage systems can offer. Even if an auction is only focused on energy arbitrage, it should not preclude storage developers from participating in multiple markets (e.g. frequency regulation), as this may help bidders reduce their bid prices.

4: Earmark a Portion of Auctioned Capacity for Less-mature Technologies

A major criticism of early auctions is that they unintentionally favored the same large players and mature technologies. Policymakers shouldn’t forget that energy storage includes several technological options; they can design auctions to address this by separating procurement for more advanced technologies (Li-ion, for example) from newer technologies (zinc air batteries).

5: Penalize delivery failures without damaging competition

Developers should be incentivized to bid their cheapest possible price, but poor auction design can trigger a race to the bottom with ever more unrealistic bid prices. This is especially true if developers don’t believe they will be punished for delivery failures or poor-quality projects. Already, some contracts for energy storage procured by auction include penalties if the developer cannot deliver its promised grid service.

Decarbonizing our energy supply isn’t an easy task. Reverse auctions stand out as a possible tool to quickly and cost-effectively increase our energy storage capacity, which will help integrate intermittent renewables. If this market-based mechanism can be tailored to suit energy storage systems’ capabilities (e.g. offering multiple grid services), it could help shift us to a future where we have access to clean energy at any time of day and year.

Also posted in Energy efficiency, Markets 101

What California’s history of groundwater depletion can teach us about successful collective action

California’s landscape will transform in a changing climate. While extended drought and recent wildfire seasons have sparked conversations about acute impacts today, the promise of changes to come is no less worrying. Among the challenges for water management:

These changes will make water resources less reliable when they are needed most, rendering water storage an even more important feature of the state’s water system.

One promising option for new storage makes use of groundwater aquifers, which enable water users to smooth water consumption across time – saving in wet times and extracting during drought. However, when extraction exceeds recharge over the long term, “overdraft” occurs. Falling water tables increase pumping costs, reduce stored water available for future use, and entail a host of other collateral impacts. Historically, California’s basins have experienced substantial overdraft.

Falling water tables reflect inadequate institutional rules

One cause of the drawdown is California’s history of open-access management. Any landowner overlying an aquifer can pump water, encouraging a race to extract. Enclosing the groundwater commons and thereby constraining the total amount of pumping from each aquifer is critical for achieving efficient use and providing the volume and reliability of water storage that California will need in the future. However, despite evidence of substantial long-run economic gain from addressing the problem, only a few groups of users in California have successfully adopted pumping regulations that enclose the groundwater commons.

SGMA addresses overdraft—but pumpers must agree to terms

California’s Sustainable Groundwater Management Act (SGMA) of 2014 aims to solve this challenge by requiring stakeholders in overdrafted basins to form Groundwater Sustainability Agencies (GSAs) and create plans for sustainable management. However, past negotiations have been contentious, and old disagreements over how best to allocate the right to pump linger. The map presented below illustrates how fragmentation in (historical) Groundwater Management Plans also tracks with current fragmentation in GSAs under SGMA. Such persistent fragmentation suggests fundamental bargaining difficulties remain.

Spatial boundaries of self-selected management units within basins under SGMA (GSAs) mirror those of previous management plans (GMPs). Persistent fragmentation may signal that adoption of SGMA doesn’t mean the fundamental bargaining difficulties facing the basin users have disappeared.

New research, co-authored with Eric Edwards (NC State) and Gary Libecap (UC, Santa Barbara) and published in the Journal of Environmental Economics and Management, provides broad insights into where breakdowns occur and which factors determine whether collective action to constrain pumping is successful. From it, we’ve gleaned four suggestions for easing SGMA implementation.

Understanding the costs of contracting to restrict access

To understand why resource users often fail in adopting new management institutional rules, it’s important to consider the individual economic incentives of various pumpers. Even when they broadly agree that groundwater extraction is too high, collective action often stalls when users disagree about how to limit it. When some pumpers stand to lose economically from restricting water use, they will fight change, creating obstacles to addressing over-extraction. When arranging side payments or other institutional concessions is difficult, these obstacles increase the economic costs of negotiating agreement, termed “contracting costs.”

To better understand the sources of these costs in the context of groundwater, we compare basins that have adopted effective institutions in the past with otherwise similar basins where institutions are fragmented or missing. Even when controlling for the level of benefits, we found that failures of collective action are linked to the size of the basin and its user group, as well as variability in water use type and the spatial distribution of recharge. When pumpers vary in their water valuation and placement over the aquifer, the high costs of negotiating agreement inhibit successful adoption of management institutions, and overdraft persists. Indeed, in many of California’s successfully managed basins, consensus did not emerge until much farmland was urbanized, resulting in a homogenization of user demand on the resource.

Four key takeaways to ease agreement

In the face of such difficult public choices, how can pumpers and regulators come to agreement? Four main recommendations result from our research:

  • Define and allocate rights in a way that compensates users who face large losses from cutbacks in pumping. Tradable pumping rights can help overcome opposition. Pumpers can sell unused rights and are oftentimes made better off. The option to sell also incentivizes efficient water use.
  • Facilitate communication to reduce costs of monitoring and negotiations. The Department of Water Resources has already initiated a program to provide professional facilitation services to GSAs.
  • Promote and accept tailored management. Stakeholders and regulators should remain open to approaches that reduce contracting costs by addressing issues without defining allocations or attempting to adopt the most restrictive rules uniformly throughout the basin. For example, pumpers have successfully adopted spatially restricted management rules to address overdraft that leads to localized problems; others have adopted well-spacing restrictions that reduce well interference without limiting withdrawals.
  • Encourage exchange of other water sources. Imported, non-native surface water may lower contracting costs because it can save users from large, costly cutbacks. Pumpers have written contracts to share imported water in order to avoid bargaining over a smaller total pie; where such water is available, exchange pools (such as those described here) can help to limit the costs of adjustment.

SGMA is a large-scale public experiment in collective action. To avoid the failures of previous attempts to manage groundwater, stakeholders crafting strategies for compliance and regulators assessing them should keep in mind the difficult economic bargaining problem pumpers face. Hopes for effective, efficient, and sustainable water management in California depend on it.

Also posted in California, Uncategorized

California Bucks Global Trend with another Year of GHG Reductions

This post was co-authored by Maureen Lackner and originally appeared on the EDF Talks Global Climate blog.

The California Air Resources Board’s November 6 release of 2016 greenhouse gas (GHG) emissions data from the state’s largest electricity generators and importers, fuel suppliers, and industrial facilities shows that emissions have decreased even more than anticipated. California’s emissions trends are showing what is possible with strong climate policies in place and provide hope even as new analysis projects that global emissions will increase by 2% in 2017 after a three-year plateau.

California’s emissions kept falling in 2016

The 2016 emissions report, an annual requirement under California’s regulation for the Mandatory Reporting of Greenhouse Gas Emissions (MRR), shows that emissions covered by the state’s cap-and-trade program are shrinking, and doing so at a faster pace than in prior years. Covered emissions have dropped in each year that cap and trade has been in place, amounting to 31 million metric tons of carbon dioxide equivalent (MMt CO2e) over the whole period, or an 8.8% reduction relative to 2012. The drop between 2015 and 2016 accounts for over half of these cumulative reductions (16 MMt CO2e, a 4.8% reduction relative to 2015). The electricity sector is responsible for the bulk of this drop: electricity importers reduced emissions by about 10 MMt CO2e, while in-state electricity generation facilities reduced emissions by about 7 MMt CO2e.
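As a quick sanity check on these figures (a sketch using only the numbers reported above; rounding makes the results approximate):

```python
# Sanity-checking the reported reductions, using only the numbers
# quoted above from the MRR report (rounded, so results are approximate).
cumulative_mmt = 31   # MMt CO2e reduced over 2012-2016
latest_drop_mmt = 16  # MMt CO2e reduced between 2015 and 2016

latest_share = latest_drop_mmt / cumulative_mmt   # should be "over half"
baseline_2012 = cumulative_mmt / 0.088            # implied 2012 covered emissions

print(f"2015-16 drop: {latest_share:.0%} of cumulative reductions")
print(f"Implied 2012 covered emissions: ~{baseline_2012:.0f} MMt CO2e")
```

The 2015–2016 drop works out to about 52 percent of the cumulative reductions, consistent with the “over half” claim, and the 8.8% figure implies a 2012 covered-emissions baseline of roughly 350 MMt CO2e.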

Some sectors’ emissions grew in 2016. Just as with global transportation emissions, California’s transportation emissions have steadily crept up in recent years, and the MRR report suggests this trend is continuing. Transportation fuel suppliers, which account for the largest share of total emissions, reported a 1.8 MMt CO2e increase in emissions covered by cap and trade since 2015. Cement plants and hydrogen plants also experienced small increases in covered emissions. One of the benefits of cap and trade, however, is that if the clean transition is occurring more slowly in one sector, other sectors will be required to reduce further to keep emissions below the cap while the whole economy catches up.

Emissions that are not covered by the cap-and-trade program dropped, from 92 MMt CO2e in 2015 to 87 MMt CO2e in 2016. While small, this represents the largest reduction in non-covered emissions since 2012 and is mostly driven by suppliers of natural gas/NGL/LPG and electricity importers. Net non-covered and covered emissions reductions resulted in a 20.5 MMt CO2e drop in total emissions from these sectors.

These results are a welcome reminder that the cap-and-trade program is working in concert with other policies to accomplish the primary objective of reducing emissions.

California’s climate policies are accomplishing their emissions reduction goals

The 2016 MRR data indicate impactful reductions in GHG emissions and progress toward reaching the state’s target emissions reductions by 2020. The 2016 emissions drop is a consequence of several factors: a CARB analysis of the year’s electricity generation points to increased renewable capacity, decreased imports of electricity from coal-fired power plants, and increased in-state hydroelectric power production. To put it in perspective, the 20.5 MMt CO2e emissions reductions is equivalent to offsetting the energy use of about 2.2 million homes, or 16% of California’s households.

Emissions below the cap are a climate win, not a concern

Total covered emissions in 2016 were about 324 MMt CO2e, well below California’s 2016 cap of roughly 382 MMt. Some observers of the cap-and-trade program worry that an “oversupply” of credits will result in reduced revenue for the state and lesser profits for traders on the secondary market. This concern was especially pronounced when secondary market prices dipped below the price floor in 2016 and 2017.
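The headroom under the cap is easy to quantify from the two figures just cited:

```python
# Headroom under California's 2016 cap, from the figures quoted above.
covered_2016_mmt = 324   # reported covered emissions, MMt CO2e
cap_2016_mmt = 382       # approximate 2016 cap, MMt CO2e

headroom_mmt = cap_2016_mmt - covered_2016_mmt
headroom_share = headroom_mmt / cap_2016_mmt   # fraction below the cap

print(f"Emissions came in {headroom_mmt} MMt ({headroom_share:.0%}) below the cap")
```

Covered emissions came in roughly 15 percent below the cap, which is the “oversupply” the next paragraph argues is a climate win rather than a design flaw.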

Importantly, an oversupply of allowances is not a bad thing for the climate. As Frank Wolak, an energy economist at Stanford, points out, oversupply may be a sign of an innovative economy in which pollution reductions are easier to achieve than anticipated. Furthermore, emissions below the cap represent earlier-than-anticipated reductions, which is a win for the atmosphere. Warming is driven by the cumulative emissions present in the atmosphere, so reducing a ton earlier means it is absent from the atmosphere for at least as long as its emission is delayed.

While market stability is a valid concern, the program’s design has built-in features to prevent market disruptions. Furthermore, the California legislature’s recent two-thirds majority vote to extend the cap-and-trade program through 2030 provides long-term regulatory certainty. Both the May and August auctions sold out completely, suggesting that the extension has succeeded in stabilizing demand.

These results are a welcome reminder that the cap-and-trade program is working in concert with other policies to accomplish the primary objective of reducing emissions; that it is doing so cheaply is an added bonus. Early reductions at low cost can lead to sustained or even improved ambition as California implements its world-leading climate targets.

As California closes its fifth year of cap and trade, it should be with a sense of accomplishment and optimism for the future of the state’s emissions.

Also posted in California, Cap and Trade Watch, Economics