Climate 411

More Efficient Trucks Will Improve the Bottom Line

Here in the United States, the Environmental Protection Agency and the Department of Transportation will unveil new fuel efficiency and greenhouse gas standards for big trucks soon, according to the New York Times. At first glance, many companies might conclude that these new policies do not affect them. They’d be mistaken.

In fact, they would be overlooking an enormous opportunity to cut costs while delivering real-world progress on sustainability.

The fact is that nearly every company in the United States is reliant on heavy trucks, which move 70% of U.S. freight. Brands and manufacturers use trucks to bring in supplies and ship out final products. Retailers and grocers count on trucks to keep the shelves stocked. Technology companies need trucks to deliver the hardware that powers their online services. Even Major League Baseball has turned its dependence on trucking into a quasi-holiday.

More efficient trucks matter to all businesses because they will cut supply chain costs.

Last year, American businesses spent $657 billion on trucking services. Much of that money went to pay for fuel – the largest single expense in trucking, accounting for nearly 40% of operating costs.

EDF and Ceres teamed up with M.J. Bradley & Associates to assess how strong heavy truck fuel efficiency standards would benefit businesses that rely on trucking. In an update of analysis originally produced last year, we found that companies could see freight rates fall nearly 7% as owners of tractor-trailer units see their costs fall by $0.21 per mile. Given that class 8 trucks logged nearly 170 billion miles last year, that per-mile savings equates to roughly $34 billion less in annual freight costs.
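The headline savings figure follows directly from the two numbers above. A minimal back-of-the-envelope check (treating "nearly 170 billion miles" as roughly 165 billion, an assumption for illustration):

```python
# Sanity check of the freight-savings arithmetic cited above.
# The 165e9 mileage figure is an assumption standing in for
# "nearly 170 billion miles"; it is not from the study itself.

savings_per_mile = 0.21      # $/mile cost reduction for tractor-trailers
class8_miles = 165e9         # approximate annual class 8 truck miles

annual_savings = savings_per_mile * class8_miles
print(f"Annual freight-cost savings: ${annual_savings / 1e9:.1f} billion")
# consistent with the ~$34 billion figure cited above
```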

The magnitude of the savings in this update was consistent with our findings from last year; however, there are important changes in the underlying cost structure. In this new analysis we modeled significantly lower future U.S. diesel prices, in light of new fuel cost projections by the Energy Information Administration. We also updated the cost of more efficient equipment based on recent analysis by the International Council on Clean Transportation.

These savings add up for large shippers. A big consumer goods company, for example, could save over $10 million a year in 2030 by using trucking companies with newer trucks. As an added kicker, these trucks also would help meet the supply chain sustainability targets that leading brands are increasingly setting.

So, while your company may not own or make big trucks, cleaner, more efficient trucks hold a big opportunity for its triple bottom line.

This post originally appeared on our EDF + Business Blog.


Déjà vu: Pushback to U.S. Clean Power Plan Reminiscent of 2011 Mercury Rule

By Susan Tierney,  Managing Principal, Analysis Group, Inc.

This post originally appeared on World Resources Institute’s Insights blog.

Did you notice the massive blackout on April 16th, 2015?

Actually, I didn’t either. That’s because the electric system didn’t falter. The fact that April 16th came and went without a reliability glitch was both nothing unusual and a really big deal. Because history has a habit of repeating itself, it’s worth understanding why April 16th was a remarkable (and remarkably dull) milestone in electric-industry history.

The Origins of the Mercury and Air Toxics Standard (MATS)

Back in 2010, just under a third of all U.S. power-plant capacity burned coal to produce electricity. Many of those plants were emitting unhealthy levels of toxic air pollution, which forthcoming regulations from the Environmental Protection Agency (EPA) would limit. Critics of EPA’s rule doubted that manufacturers and installers could get enough pollution-control equipment into the market and on to power plants fast enough to meet the deadline under the new Mercury and Air Toxics Standard (MATS) – and that taking so much of the nation’s generating capacity off line all at once would inevitably lead to an unreliable electric system.

Before the EPA finalized its MATS rule at the end of 2011, countless groups published estimates of how many coal plants would retire due to the EPA regulations. The North American Electric Reliability Corporation (NERC) warned that “with [the mercury rules] as the primary driver, the industry faces considerable operational challenges to complete, coordinate and schedule the necessary environmental retrofits.” Others, including opponents of the rule, argued that, in the name of reliability, the rule would need to be delayed.

In December 2011, EPA issued the final MATS rule, which gave owners of affected power plants until April 16, 2015, to either bring their plants into compliance with the new requirements or cease their operations.

That date passed two weeks ago without incident. The lights didn’t dim.

Why not? First, the EPA stood by its commitment (made in November 2011 by then-Assistant EPA Administrator Gina McCarthy in testimony to the Federal Energy Regulatory Commission, the agency with responsibility for electric system reliability) that “In the 40-year history of the Clean Air Act, EPA rules have never caused the lights to go out, and the lights will not go out in the future as a result of EPA rules.”

Part of the reason for that is that the EPA is nowhere near as rigid or anti-business as many observers like to portray it. The final EPA rule gave power-plant owners the ability to request an additional year of time to comply, and allowed yet another year in unusual cases where continued operation of a plant would be needed for reliability. According to the National Association of Clean Air Agencies, as of March 2015, owners of 38 percent of the 460 coal-fired power plants affected by the MATS rule had requested additional time to comply and, of those, the EPA granted an extension to 95 percent.

Kentucky power plant. Photo by Cindy Cornett Seigle/Flickr

Second, the electric industry is already transitioning to rely less on coal, even without the MATS rule. Between 2011 and the end of 2014, 21.5 gigawatts (GW) of coal-fired power plants retired. The fact that these retirements occurred before the MATS deadline indicates that something other than EPA’s regulations is driving the least-efficient and oldest coal plants into retirement.

Coal’s ardent supporters may prefer to point the finger at EPA, but the truth is that market conditions are responsible: relatively flat electricity demand, increased supply from power plants using other domestic energy sources (natural gas, wind and solar), and price competition between natural gas and coal. Another 14.6 GW of power plants have retired or will retire in 2015. This total amount of coal-plant retirements (36.1 GW) falls at the mid-point of estimates made during the 2010-2011 period.

Third, the electric industry is dynamic. The market has responded to signals that additional electric resources are needed to replace old ones. Many projects have come forward: new power plants, upgraded transmission facilities, rooftop solar panels, energy-efficiency measures and energy-management systems. These varied responses are the norm, collectively maintaining reliability and modernizing the power system along the way.

That’s why there were no blackouts on April 16th, despite all the dire warnings.

History Repeats Itself

The reliability theme is re-emerging once again, as the states and the electric industry face the prospect of EPA finalizing its “Clean Power Plan” to control carbon pollution from the nation’s power plants. In anticipation of the final rules coming out this summer and of power plant owners having to comply with them by 2020, many observers are saying that the electric system’s reliability will be jeopardized if the EPA goes forward as planned. The latest warning came last month with a new assessment published by NERC, calling for more time to allow the industry and the states to respond to the forthcoming carbon-pollution rules.

Such warnings are common whenever there is major change in the industry, and they’re not without value: They play an important role in focusing the attention of the industry on taking the steps necessary to ensure reliable electric service.

But warnings lose their value when they are read as more than what they are. Notably, the reliability concerns currently being raised by some observers about EPA’s Clean Power Plan presume inflexible implementation, are based on worst-case scenarios, and assume that policy makers, regulators and market participants will stand on the sidelines until it is too late to act.

There is no historical basis for these assumptions. Reliability issues will be worked out by the dynamic interplay of actions by regulators, entities responsible for reliability, and market participants, all proceeding in parallel to find solutions.

EPA’s proposed carbon-pollution rule provides states and power plant owners with the means to prevent reliability problems by giving them a wide range of compliance options and plenty of operational discretion (including various market-based approaches, other means to allow emissions trading among power plants, and flexibility on deadlines to meet interim targets). And EPA Administrator McCarthy has stated repeatedly that her agency will write a final rule that reflects the importance of a reliable grid and provides the appropriate flexibility.

One of the best ways to assure electric reliability will be for states to actively avail themselves of the Clean Power Plan’s flexibility, rather than “just say no.” States that do not take advantage of this flexibility and then suggest that EPA’s regulations led to unreliable and uneconomic outcomes may be courting a self-fulfilling prophecy. The more states sit in the driver seat and figure out how to arrive at the emissions-reduction destination in a manner consistent with their goals and preferences, the more likely it is that they’ll accomplish them.


Three Climate Leadership Openings Corporate America Can’t Afford to Miss

By Ben Ratner, Senior Manager, Corporate Partnerships Program

Too much ink has been spilled on the anti-climate furor of the Koch brothers. If we lose on climate, it won’t be because of the Koch brothers or those like them.

It will be because too many potential climate champions from the business community stood quietly on the sidelines at a time when America has attractive policy opportunities to drive down economy-endangering greenhouse gas emissions.

Corporate executives have the savvy to understand the climate change problem and opportunity. They have the incentive to tackle it through smart policy, and the clout to influence politicians and policy makers. Perhaps most importantly, they can inspire each other.

And today, they have a chance to do what they do best: lead. Corporate climate leadership has nothing to do with partisanship – it’s ultimately about business acumen.

For starters, here are three immediate opportunities smart companies won’t want to miss.

1. Clean Power Plan: Will spur new jobs and investments.

The Obama administration’s plan will cut carbon pollution from the nation’s power plants by 30 percent below 2005 levels by 2030. This is expected to trigger a wave of clean energy investment and job creation. It will also seize energy efficiency opportunities and take advantage of America’s abundant and economic supply of natural gas.

Every company with an energy-related greenhouse gas footprint has something to gain from a cleaner power mix. Each one of those companies therefore has a stake in the Clean Power Plan.

Google and Starbucks – two large and profitable American companies by any standard – are among more than 200 businesses that have already stepped up to voice their support.

Who will follow them?

2. First-ever methane rules: Will make industry more efficient.

The U.S. Environmental Protection Agency’s upcoming methane emission rules are another opportunity for business leaders to weigh in.

The rules are part of a White House plan that seeks to reduce methane emissions – a major contributor to global warming and resource waste – by almost half in the oil and gas industry.

Globally, an estimated 3.6 trillion cubic feet of natural gas leaks from the sector each year. This wasted resource would be worth about $30 billion in new revenue if sold on the energy market.

Some oil and gas companies that have already taken positive steps include Anadarko, Noble and Encana, which helped develop the nation’s first sensible methane rules in Colorado.

Engaging to support strong and sensible national standards is a good next step for companies in this space, as well as for others with a stake in cleaning up natural gas, such as chemical companies and manufacturers and users of natural gas vehicles.

3. New truck standards: Can help companies cut expenses and emissions.

New clean truck standards are scheduled for release this summer. Consumer goods companies and other manufacturers stand to see significant dollar and emission savings as they move their goods to market.

Cummins, Wabash, FedEx, Con-way, Eaton and Waste Management are among those that applauded the decision to move forward with new standards.

Putting capitalism to work

American business leadership is still the global standard and will remain so if it adds climate policy to its to-do list. While it will take time to build the bi-partisan momentum for comprehensive national climate legislation, there are immediate opportunities to move the needle.

Which companies will take the field?

Image source: Flickr/Don McCullough

This post originally appeared on our EDF Voices blog.


NERC’s Report is Flawed: We Can Reduce Climate Pollution and Ensure Electric Reliability

If reducing climate pollution from power plants were a football game, the U.S. team would be halfway to the goal line while fans were still singing the national anthem.

That is, we have already gotten about halfway to the expected goals of the Clean Power Plan – before the rule is even final.

The Clean Power Plan is the U.S. Environmental Protection Agency’s (EPA) historic effort to place the first-ever limits on climate pollution from our country’s existing fleet of fossil fuel-fired power plants. When it’s finalized this summer, it’s expected to call for a 30 percent reduction in carbon emissions compared to 2005 levels — but U.S. power plant emissions have already fallen 15 percent compared to 2005 levels.

That’s because renewable energy, energy efficiency resources, and natural gas generation have been steadily deployed and growing for years. Even conservative estimates forecast continued growth of these resources — which makes last week’s report from the North American Electric Reliability Corporation (NERC) seem really strange.

NERC’s report about the Clean Power Plan’s impacts on electric grid reliability makes predictions that contrast starkly with the progress we’re already seeing.

How did this departure from reality happen?

It’s due in large part to severely flawed assumptions underlying NERC’s analysis, which yield unrealistic results.

Those flawed assumptions cause NERC to greatly overstate the generation mix changes required to meet the Clean Power Plan. The NERC Assessment’s assumptions regarding energy efficiency, renewable energy deployment, and retirement modeling are at odds with both recent experience and current trends.

Unrealistically Low Energy Efficiency Gains

NERC assumes that demand for electricity will grow at an average of one percent per year through 2030, even after accounting for growth in energy efficiency investments. That growth rate is more than 40 percent higher than the U.S. Energy Information Administration (EIA) predicts.
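That gap compounds over the planning horizon. As an illustration (the ~0.7% EIA-consistent rate is backed out from "more than 40 percent higher," since 0.7 × 1.4 ≈ 1.0; neither rate below is EIA's published number):

```python
# Cumulative demand growth over a 15-year horizon (roughly 2015-2030)
# under NERC's 1%/yr assumption vs. an EIA-consistent ~0.7%/yr.
# Both rates are illustrative back-outs from the text.

years = 15
nerc_index = 1.010 ** years   # NERC assumption: 1% per year
eia_index = 1.007 ** years    # implied EIA-consistent rate

print(f"NERC 2030 demand index: {nerc_index:.3f}")
print(f"EIA-consistent index:   {eia_index:.3f}")
# NERC's assumption yields roughly 5% more demand by 2030, which
# inflates the generation changes its model says are required.
```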

It also fails to reflect likely energy efficiency growth. An analysis by McKinsey & Company found that implementing only those efficiency measures that pay for themselves would reduce the nation’s total end-use energy consumption by 23 percent by 2020.

Arbitrary and Unrealistic Projections on Wind and Solar Expansion  

NERC predicts expansions of wind and solar power that are far below those observed in recent years.

U.S. solar capacity stood at 20.5 gigawatts at the end of 2014. The NERC Assessment predicts an addition of 13 to 20 gigawatts of solar energy between 2016 and 2030 — when solar capacity is expected to grow by 20 gigawatts over the next two years alone.

The U.S. wind industry is also expected to add 18 gigawatts of new capacity in the next two years.

NERC’s low-ball assumptions greatly limit renewable energy deployment in their study. This in turn greatly increases the burden on other compliance options, namely coal-to-gas generation shifting.

Failure to Account for Dynamic Grid Reliability Management Tools

NERC assumes that the Clean Power Plan will drive coal power plant retirements over its entire life-span. However, numerous studies, including one by The Brattle Group and three by the Analysis Group, show that total output and emissions from coal units can decrease without retiring units that are needed to operate on occasion in order to maintain electric reliability.

There are also numerous tools and processes available to grid operators to ensure reliability in light of dynamic market, technological and regulatory change, including capacity and energy markets, resource adequacy forecasting, and reliability must-run contracts.

These instruments, for example, have worked well to maintain adequate capacity during the recent wave of coal-fired power plant retirements, so much so that the electric grid has added an average of roughly 30 gigawatts of total power every year since 2000. The NERC Assessment, however, finds only 11 to 12 gigawatts of total power will be added every year – a significant departure from the past 15 years of evidence.

A History of Inaccurate Assessments

This report is not the first time that NERC has issued an inaccurate assessment of threats to reliability.

NERC has assessed previous public health and environmental safeguards, each time raising reliability concerns that were not borne out in reality.

  • In 2011, NERC issued its Long-Term Reliability Assessment, which looked at the Mercury and Air Toxics Standards, the Cross State Air Pollution Rule, the Clean Water Act Cooling Water Intake Structures rule, and the Coal Combustion Residuals rule. NERC raised numerous reliability concerns about these protections, which the EPA noted at the time were flawed and exaggerated. None of NERC’s concerns have manifested during implementation of these standards.
  • In a 2011 companion study, NERC issued its Potential Impacts of Future Environmental Regulations about the Mercury and Air Toxics Standards and a number of other regulations. NERC again raised reliability concerns, none of which have occurred in practice.
  • In its 2007 Long-Term Reliability Assessment, NERC predicted several regions, including New England and New York State, would drop below target capacity margins, threatening reliability. NERC’s prediction was based on a number of factors, including proposed environmental protections. Some power generators used the report to oppose the Regional Greenhouse Gas Initiative. NERC’s predicted reliability shortfalls did not occur, nor has the Regional Greenhouse Gas Initiative caused reliability issues – even while emissions fell almost 50 percent below the region-wide emissions cap.
  • In 2000, NERC drafted a review of EPA’s nitrogen oxide emissions standards for eastern power plants, known as the NOx SIP Call. Yet again, NERC predicted a number of reliability concerns that did not occur after the rule was implemented.

NERC has repeatedly produced analyses indicating that public health and environmental safeguards will come at the expense of electric reliability – and these analyses have consistently been contradicted by reality. In fact, emission standards have never caused a reliability problem in the more than four decades that EPA has been administering the Clean Air Act.

NERC’s newest report is no better. It gives no solid reasons to doubt that the Clean Power Plan will be compatible with a reliable electric grid.  

For a clearer picture of the link between reliability and environmental protections, read this post by my colleague Cheryl Roberto, a former Commissioner of the Ohio Public Utilities Commission and electric system operator.

You might also like EDF’s fact sheet about the Clean Power Plan and the latest flawed NERC report.

The progress made in the past demonstrates that our nation is already approaching the goal line under the Clean Power Plan. The tremendous flexibility that the Clean Power Plan provides to states and power companies alike, together with time-tested grid management tools, provides the framework we need to reach the goal line — protecting our communities and families from dangerous carbon pollution, strengthening our economy, and providing a steady flow of cost-effective electricity.


New Climate-Economic Thinking

By Gernot Wagner and Martin Weitzman

Each ton of carbon dioxide emitted into the atmosphere today causes about $40 worth of damages. So at least says standard economic thinking.

A lot goes into calculating that number. You might call it the mother of all benefit-cost analyses. It’s bean-counting on a global scale, extending out decades and centuries. And it’s a process that requires assumptions every step along the way.

The resulting $40 figure should be taken for what it is: the central case presented by the U.S. Government Interagency Working Group on Social Cost of Carbon when using its preferred 3% discount rate for all future climate damages. But it is by no means the full story.

Choose a different discount rate, get a different number. Yale economist Bill Nordhaus uses a discount rate of slightly above 4%. His resulting price is closer to $20 per ton of carbon dioxide. The Stern Review on the Economics of Climate Change uses 1.4%. The resulting price per ton is over $80.
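The sensitivity is easy to see in miniature. A sketch, using illustrative numbers only ($100 of damage a century out, discounted at rates approximating Stern's 1.4%, the Working Group's 3%, and Nordhaus's roughly 4%):

```python
# How the discount rate drives the social cost of carbon: the present
# value of a fixed far-future damage swings by an order of magnitude
# across the rates discussed above. Figures are illustrative.

def present_value(damage, rate, years):
    """Discount a future damage back to today at a constant annual rate."""
    return damage / (1 + rate) ** years

for rate in (0.014, 0.03, 0.04):
    pv = present_value(100.0, rate, 100)
    print(f"rate {rate:.1%}: ${pv:.2f} today")
# roughly $25, $5, and $2 respectively: the same future damage,
# valued more than tenfold apart depending on the rate chosen
```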

And the discount rate is not the only assumption that makes this kind of a difference. In Climate Shock, we present the latest thinking on why and how we should worry about the right price for each ton of carbon dioxide, and other greenhouse gases, emitted into the atmosphere. There are so many uncertainties at every step—from economic projections to emissions, from emissions to concentrations, from concentrations to temperatures, and back to economics in the form of climate damages—that pointing to one single, final number is false precision, misleading, or worse.

Of course, that does not mean that we shouldn’t attempt to make this calculation in the first place. The alternative to calculating the cost of carbon is to use a big fat zero in government benefit-cost calculations. That’s clearly wrong.

Most everything we know about what goes into calculating the $40 figure leads us to believe that $40 is the lower bound for sensible policy action. Most everything we know that is left out would push the number higher still, perhaps much higher.

As just one example, zero in on the link between carbon concentrations in the atmosphere and eventual temperature outcomes. We know that increasing concentrations will not decrease global temperatures. Thank you, high school chemistry and physics. The lower bound for the temperature impact when carbon concentrations in the atmosphere double can be cut off at zero.
In fact, we are pretty sure it can be cut off at 1°C or above. Global average temperatures have already warmed by over 0.8°C, and we haven’t even doubled carbon concentrations from preindustrial levels. Moreover, the temperature increases in this calculation should happen ‘eventually’—over decades and centuries. Not now.

What’s even more worrying is the upper tail of that temperature distribution. There’s no similarly definitive cut-off for the worst-case scenario. In fact, our own calculations (based on an International Energy Agency (IEA) scenario that greenhouse gas concentrations will end up around 700 parts per million) suggest a greater-than-10% chance of eventual global average warming of 6°C or above.
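The flavor of that tail calculation can be reconstructed with a simple model. Everything below is an assumption for illustration, not the book's actual calibration: climate sensitivity (warming per CO2 doubling) is taken as lognormal with median 3°C, spread so that 2–4.5°C is the central 66% range, and forcing at 700 ppm uses the standard logarithmic formula against a 280 ppm preindustrial baseline.

```python
# Rough sketch of a fat-tail warming calculation, with all
# parameters assumed for illustration (see lead-in above).
from math import log
from statistics import NormalDist

mu = log(3.0)                                          # lognormal median: 3 degrees C
sigma = (log(4.5) - mu) / NormalDist().inv_cdf(0.83)   # P(S <= 4.5) ~ 0.83

forcing_700 = 5.35 * log(700 / 280)   # W/m^2 at 700 ppm vs. preindustrial
per_doubling = 5.35 * log(2)          # forcing of one CO2 doubling
doublings = forcing_700 / per_doubling

# Eventual warming exceeds 6 degrees C when sensitivity S exceeds this:
s_needed = 6.0 / doublings
p_exceed = 1 - NormalDist(mu, sigma).cdf(log(s_needed))
print(f"P(eventual warming > 6 C at 700 ppm) ~ {p_exceed:.0%}")
```

Even under these tame-looking assumptions, the exceedance probability comes out well above 10 percent, which is the qualitative point.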

Focus on the bottom row in this table. If you do, you are already ahead of others, most of whom focus on averages, here depicted as “median Δ°C” (eventual changes in global average surface temperatures). The median is what we would expect to exceed half the time, given particular greenhouse gas concentrations in the atmosphere. And it’s bad enough.

But what really puts the “shock” into Climate Shock is the rapid increase in probabilities of eventual temperatures exceeding 6°C, the bottom row. While average temperatures go up steadily with rising concentrations, the chance of true extremes rises rapidly.

That 6°C is an Earth-as-we-know-it-altering temperature increase. Think of it as a planetary fever. Normal body temperatures hover around 37°C. Anything above 38°C and you have a fever. Anything above 40°C is life-threatening.

Global average warming of 3°C wouldn’t be unprecedented for the planet as a whole, in all of its geological history. For human society, it would be. And that’s where we are heading at the moment—on average, already assuming some ‘new policies’ to come into play that aren’t currently on the books.

It’s the high-probability averages rather than low-probability extremes that drive the original $40 figure. Our table links greenhouse gas concentrations to worryingly high probability estimates for temperatures eventually exceeding 6°C, an outcome that clearly would be catastrophic for human society as we know it.

Instead of focusing on averages, then, climate change ought to be treated as a risk management problem. Some greenhouse gas concentration thresholds should simply not be crossed. The risks are too high.

This kind of focus on temperature extremes is far from accepted wisdom. We argue it ought to be.

Gernot Wagner and Martin L. Weitzman are co-authors of Climate Shock (Princeton University Press, 2015). This post was originally published by The Institute for New Economic Thinking.


Electric Reliability and the Clean Power Plan: Perspectives of a Former Regulator

Wind turbines and power lines, East Sussex, England. Source: Wikimedia Commons

There is no great disagreement that the U.S. energy system is transforming. With or without additional environmental regulations, like the U.S. Environmental Protection Agency’s (EPA) proposed Clean Power Plan, this transition is occurring. Our history and experience have demonstrated that we can weather it without threatening our uniform and non-negotiable commitment to reliability.

But to do that, we need to tap all of the tools at our disposal to ensure a robust, reliable, and integrated energy system that is no longer dependent exclusively upon centralized, fossil fuel generation. Done right, the resulting change can deliver benefits to customers, the economy, the environment, electric companies, innovators, and workers alike.

EPA’s proposed Clean Power Plan would place national limits on carbon pollution from existing fossil fuel power plants for the first time ever. In doing so, it would create long-term market signals that will help drive investments in energy efficiency, demand response, and renewable energy for years to come – not only reducing carbon pollution from the power sector to 30 percent below 2005 levels by 2030, but also putting us on a path to a more reliable and resilient energy system.

As a former Commissioner of the Ohio Public Utilities Commission and electric system operator, I understand preserving the reliability of electric service is a paramount public responsibility for energy and environmental regulators, and for the power companies they oversee. As a Commissioner, I served as vice chair of the Critical Infrastructure Committee, a member of the Electricity Committee, and on the Task Force for Environmental Regulation and Generation within the National Association of Regulatory Utility Commissioners (NARUC). I co-chaired the National Electricity Forum 2012 to modernize the nation’s electricity infrastructure. At the request of the Federal Energy Regulatory Commission (FERC) and the U.S. Senate Committee on Energy and Natural Resources, I have provided testimony on reliability of the bulk power system before both of those bodies.

Prior to my appointment to the Commission, I served for six years as the Deputy Director and then Director of the City of Columbus, Ohio Department of Public Utilities. My duties there included running the City’s electric distribution utility. This hands-on experience meeting the daily needs of electricity customers as both a regulator and a system operator – while protecting the financial integrity of the system – gives me a keen appreciation for the real-world demands and importance of system reliability.

From that perspective, perhaps the most critical feature of the proposed Clean Power Plan is the flexibility it provides to states and power companies to craft individualized compliance plans that reduce pollution while preserving and strengthening electric reliability. EPA’s approach gives clear guidance on what limits and metrics must be met, but leaves states the flexibility to design solutions that will boost the economy and meet those requirements as they see fit.

That flexibility acts as a built-in “safety valve,” affording each state multiple pathways for compliance and providing leeway for states to make plans that are appropriate to their unique circumstances. Moreover, this flexibility complements the robust framework of operating practices, market instruments, and planning processes that already exist to address short-term and long-term reliability issues.

Leading experts on energy policy and electric reliability have recently weighed in to confirm reducing carbon pollution goes hand in hand with electric reliability, thanks to the flexible structure of the Clean Power Plan and our existing reliability tools and processes. According to a recent report by The Brattle Group, the combination of the ongoing transformation of the power sector, the steps already taken by system operators, the large and expanding set of technological and operational tools available, and the flexibility under the Clean Power Plan are likely sufficient to ensure compliance will not come at the cost of reliability.

And, just last week, Dr. Susan Tierney – a former Assistant Secretary for Policy at the U.S. Department of Energy and former Commissioner of the Massachusetts Department of Public Utilities – joined two other energy policy experts in sending a letter and report to the Chairman of the Federal Energy Regulatory Commission (FERC) concluding:

Evidence does not support the argument that the proposed CPP will result in a general and unavoidable decline in reliability.

The report provides examples of recent instances in which grid operators, FERC, and other entities have effectively used existing processes and tools to deftly address other kinds of reliability challenges in recent years, some of which were significant and unanticipated.

In 45 years of implementing the Clean Air Act, clean air standards have never caused the lights to go out. And nothing about the proposed Clean Power Plan – with all of its tremendous flexibility – will alter that record.

That’s a remarkable testament to the institutions and processes that exist to protect reliability, as well as the careful process EPA uses in developing clean air standards – and it is great news for families and communities who want and deserve clean air in addition to reliable, affordable electricity. The Clean Power Plan, like our other vital clean air standards, will help deliver both.
