Nuclear power plant performance includes both operational and safety aspects and is an outcome of numerous elements, such as the reliability of equipment, reduction in challenges to plant operations, protection of workers, and proficiency of operations. These elements are inextricably linked to each other and to the safety of each facility. In short, a well-run plant is a safe plant for the workers and the public, and a well-run plant is an efficient plant. By-products of high performance include improved regulatory performance, worker safety, plant reliability, and, most important, public health and safety.
Many factors have influenced this performance improvement, including the cultivation of a strong safety and reliability culture by utilities, a strong independent nuclear regulator in the Nuclear Regulatory Commission, an independent industry excellence organization in the Institute of Nuclear Power Operations (INPO), and the development and application of risk-informed programs. A fundamental tenet of U.S. nuclear regulation is that the licensees (utility owner-operators) are responsible for safety. The U.S. nuclear power industry takes this responsibility seriously, and safe operation is and will always be the number one priority. Over 40 years ago, the nuclear power industry’s chief executive officers established INPO to independently drive the industry to higher levels of excellence. The INPO model includes a level of accountability for achieving excellence perhaps not present in any other industry in the world.
The NRC has also played an important role in ensuring the protection of the public health and safety through the licensing, inspection, and oversight of the U.S. nuclear fleet. The NRC has a long history of evolving its regulatory programs to enhance safety. Many of these programs have adopted a “risk-informed” approach that identifies and focuses time and attention on what is most important to safety. This approach improves safety by avoiding the diversion of resources to items of low significance. Over the past 20 years, improving plant performance has been coupled with the enhanced safety focus provided by this risk-informed approach such that the U.S. nuclear industry is performing at the highest levels of safety and reliability in the world.
This article assembles all the relevant individual and aggregate performance indicators available from publicly available sources, including the NRC and INPO, spanning the past several decades. Its purpose is to illuminate the performance improvements achieved and demonstrate the connection between this improved performance and safety.
The unequivocal picture provided by these performance indicators includes the following:
U.S. industry performance is at an all-time high—Industry overall performance has dramatically improved over the past 20 years, continuing a trend started in the 1980s. This performance trend is so significant that over the past 20 years, every INPO and NRC performance indicator has improved.
Industry performance levels improve safety—The breadth of improved industry performance has directly led to improved safety and reduced risk. The NRC’s own data and risk models show that reduced challenges and improved equipment reliability have lowered risk levels by a factor of three to seven for a spectrum of representative plants.
Risk-informed focus improves safety—The safety focus enabled by the NRC’s establishment and support of a risk-informed framework and the industry’s adoption of risk-informed approaches over the past 25 years has further improved safety. A broad spectrum of risk-informed approaches has been shown to improve safety and operational focus.
Performance and safety are inextricably linked. A risk-informed safety focus further improves plant safety.
40 years of improvement
To help evaluate the nexus between industry performance and safety, all the relevant publicly available individual and aggregate performance indicators were collected from INPO and the NRC and analyzed holistically. The data demonstrate that the U.S. nuclear power industry has a long history of performance improvement, spanning nearly 40 years.
In 2000, the U.S. nuclear industry marked the completion of two decades of performance improvement and celebrated its highest levels of safety and reliability. The NRC documented the improved safety performance of the industry through its Performance Indicator Program.1 The indicators, depicted in Fig. 1, were used in conjunction with other tools, such as the results of routine and special inspections and the systematic assessment of licensee performance programs, for providing input to the NRC’s management decisions regarding the need to adjust plant-specific regulatory programs.
In a May 2001 article in Nuclear News, INPO Executive Vice President Alfred Tollison stated, “The strong 2000 WANO performance indicator results for U.S. plants cap an outstanding decade of performance for the industry.”
The industry’s 2000 median capability factor of 91.1 percent was the highest since INPO began collecting data and significantly improved from the 62.7 percent in 1980. For the third straight year, the median value of unplanned automatic plant shutdowns was zero, whereas in 1980, the median value of unplanned automatic scrams was 7.3.
In the two decades since 2000, industry performance has continued to improve, as depicted in Fig. 2. In his November 13, 2019, testimony before the Senate Environment and Public Works Committee, Adm. Robert Willard, president and chief executive officer of INPO, stated, “By many measures, the U.S. nuclear industry is in its seventh consecutive year of improving performance. It is also by WANO standards the highest-performing national nuclear industry in the world” [emphasis added].
Like the WANO indicator in Fig. 2, the INPO Performance Indicator Index (PII) by Quartile depicted in Fig. 3 shows not only an overall improvement in average performance but also a broad-based improvement: the current lower quartile now exceeds the median performance of 2006.
The INPO PII covers a broad range of performance measures, including reactor performance, safety system performance, occupational safety, chemistry performance, and radiation exposure. All of the INPO performance indicators show the improvement of the U.S. nuclear industry and are depicted in NEI 20-04, The Nexus Between Safety and Operational Performance in the U.S. Nuclear Industry.2 A subset is displayed in Figs. 4 through 9.
In its input to the NRC’s report for the Convention on Nuclear Safety (NUREG-1650, Rev. 7, January 2019), INPO states: “At the end of 2018, the U.S. nuclear industry was performing at its highest levels ever. Today, the median industry capacity factor is above 93 percent, most plants experience no automatic scrams in a year, there were no significant operational events in 2018, and collective radiation dose and industrial accident rates are both lower by a factor of seven when compared with the rates of the 1980s industry.”
Figure 4 shows the industry average of the INPO Fuel Performance Indicator, which reflects the percentage of units with no failures in the metal barrier surrounding fuel and has shown steady improvement. The industry’s long-term goal is that units operate with zero fuel failures.
Figure 5 shows the industry average of the INPO Safety System Performance Indicator, which reflects the availability of three standby safety systems used to respond to unusual situations. The graph shows the percentage of units achieving availability goals.
Figures 6 and 7 show the industry average of the INPO Chemistry Effectiveness Indicators for pressurized water reactors and boiling water reactors, respectively. These indicators are comprehensive measures of overall chemistry performance as related to long-term material degradation. They are based on industry guidelines for water chemistry control and use a set of five conditions. Lower numbers represent better chemistry control.
Figures 8 and 9 show the industry average of the INPO Collective Radiation Exposure Indicators for PWRs and BWRs, respectively. The indicators reflect the effectiveness of practices that reduce radiation exposure at plants. Low exposure indicates strong management attention to radiation protection.
NRC Reactor Oversight Process
The Reactor Oversight Process (ROP) is the NRC’s program to inspect, measure, and assess the safety and security of operating commercial nuclear power plant licensees and to predictably respond to declining performance. The program was implemented in 2000 with the goal of providing an objective, risk-informed, understandable, and predictable approach to nuclear power plant oversight. The then-new ROP acknowledged the improved performance of the nuclear industry in the 1980s and 1990s and used performance in the 1997–1998 time frame to establish the expected performance threshold for many of the new ROP indicators. With the ROP, the NRC implemented a process that places each plant in one of five columns of an action matrix based on its performance, from Column 1 for high performers to Column 5 for troubled performers. The process allows the NRC to focus more of its resources on the relatively small number of plants that evidence performance problems.
The ROP uses overall performance to establish the regulatory response, ranging from conducting a baseline number of hours of inspection (i.e., the baseline inspection program) for plants in Column 1 to potentially revoking a license to operate in Column 5. Columns 1 and 2 include plants that are fully meeting the NRC’s safety objectives, the difference being that plants in Column 1 receive the baseline while plants in Column 2 receive additional inspection above the baseline to ensure that robust margins to safety are being maintained. Plants in Column 3 are meeting the NRC’s safety objectives with a minimal reduction in margin, warranting a larger number of inspection hours beyond that received in Column 2. Margin to safety is the critical figure of merit, and in Columns 4 and 5, the NRC has determined that the margin has been reduced significantly or is unacceptable, respectively, and takes appropriate regulatory action. This process of increasing inspection and regulatory action as a function of declining performance is effective and has had the intended effect of addressing outlying performers well before public health and safety are affected.
In addition, the NRC uses a color-based system to track performance. The “Green-White” program establishes the transition from a region of expected performance (Green) to a region of performance that is outside an expected range of nominal utility performance (White), but related cornerstone objectives are still being met. In establishing the Green-White program, the NRC used 1997–1998 industry performance to establish expected performance by setting the threshold at a point where approximately 95 percent of the industry would be Green at any given time.
During the first year of the ROP, approximately 75 percent of nuclear plants were performing in a manner such that all indicators and all inspection findings were Green, placing them in Column 1 of the action matrix. Since that time, industry performance has progressively improved. In its report for the fourth quarter of 2019, the NRC determined that all performance indicators are Green, all inspection findings are Green or “No Color,” and all operating nuclear plants are in Column 1 of the NRC action matrix. Figure 10 shows the percentage of operating plants in Column 1 since the start of the ROP. This increasing trend is a clear indication of continued improvement in industry performance.
Operational performance has improved along with safety performance. One way the energy industry measures operational performance is capacity factor: a measure of how consistently a power plant runs over a given period, expressed as a percentage and calculated by dividing the actual unit electricity output by the maximum possible output. In 2017, U.S. reactors operated at a record-setting 92.2 percent capacity factor. In 2018, they set a new record of 93.4 percent. The average capacity factor for U.S. reactors over the past two decades has been above 90 percent (Fig. 11). To put this into perspective, most utility-scale generators operate at much lower capacity factors (natural gas, around 60 percent; wind, around 30 percent; solar, around 20 percent).
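The capacity factor arithmetic is straightforward; a minimal sketch (the unit size and generation figures below are hypothetical, chosen only to illustrate the calculation, not data from the article):

```python
# Capacity factor = actual energy generated / maximum possible generation
# over the same period, expressed as a percentage.
# All figures below are hypothetical, for illustration only.

def capacity_factor(actual_mwh: float, rated_mw: float, hours: float) -> float:
    """Return capacity factor as a percentage."""
    max_possible_mwh = rated_mw * hours
    return 100.0 * actual_mwh / max_possible_mwh

# A hypothetical 1,000-MWe unit generating 8.18 million MWh
# over a full year (8,760 hours):
cf = capacity_factor(actual_mwh=8_180_000, rated_mw=1_000, hours=8_760)
print(f"{cf:.1f}%")  # about 93.4%
```

A unit that skips a refueling outage in a given calendar year can exceed the fleet average this way, which is why multi-year averages are the more meaningful comparison.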
Nuclear energy facilities have annually produced about 20 percent of America’s electricity supplies for the past two decades. Because of their electric sector–leading capacity factors, nuclear power plants have done so even though they constitute only about 10 percent of the nation’s installed electric generating capacity.
There are two key contributors to the high capacity factor demonstrated by the U.S. nuclear power industry. The first is effective management and coordination of refueling outages. Since the 1990s, the average refueling outage duration has dropped by a factor of roughly three, from approximately 100 days to about 34 days.
The second key contributor is a reduction in the number of unplanned reactor trips. Since the late 1980s, the number of unplanned reactor trips has fallen by a factor of six (Fig. 12). This reduction has resulted, in part, from a more effective management of systems and equipment (INPO’s focus on single-point vulnerabilities in plant designs).
The link between safety performance and operational performance should not be surprising, as they are closely related. When considering modifications to equipment, processes, and procedures, plants evaluate the impact on all aspects of operation and utilize plant probabilistic risk assessment (PRA) models to assess the impact on plant safety. Thus, when changes in processes or equipment are made that result in improvements to equipment reliability/availability, the outcome has a positive impact on both safety performance and operational performance.
As depicted in Fig. 13, a focus on enhancing equipment reliability translates to fewer equipment failures and higher equipment availability. Both improve operational performance, and both result in safer plant operation. A reduction in unplanned reactor trips, reduced in part by high equipment reliability, leads to a higher capacity factor and reduces the potential for transients that can challenge safe plant operation. The sharing and evaluation of operating experience ensures that challenges are avoided to the benefit of both operational and safety performance. The use of plant PRAs early in the development of process improvements provides assurance that changes provide the desired benefits without compromising plant safety. Maintaining a risk-informed focus in all phases of operation provides a common benefit to both operational and safety performance.
The same performance improvements that have improved operational and safety metrics are reflected in reductions in risk. The primary risk metrics used today are core damage frequency (CDF) and large early release frequency (LERF). These metrics are obtained for each plant from PRA models established for each plant.
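Conceptually, a PRA builds CDF by summing, over the modeled accident sequences, the initiating event frequency multiplied by the conditional failure probabilities of the mitigating systems along each sequence. A toy sketch of that structure (all frequencies and probabilities below are made up for illustration; real plant models contain thousands of sequences):

```python
# Toy PRA structure: CDF as a sum over accident sequences of
# (initiating event frequency per year) x (product of conditional
# failure probabilities of the mitigating systems on that sequence).
# All numbers are illustrative, not from any actual plant model.

sequences = [
    # (initiating event freq /yr, [failure probs of mitigating layers])
    (1e-2, [1e-3, 1e-2]),  # e.g., a transient with two failed mitigation layers
    (1e-4, [1e-2]),        # e.g., a rarer initiator with one failed layer
]

cdf = 0.0
for ie_freq, failure_probs in sequences:
    seq_freq = ie_freq
    for p in failure_probs:
        seq_freq *= p
    cdf += seq_freq

print(f"CDF = {cdf:.1e} per reactor-year")
```

The structure makes the nexus visible: lowering initiating event frequencies (fewer trips and transients) and lowering component failure probabilities (better equipment reliability) each directly reduce the computed CDF.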
It is important to note the connection between the risk metrics generated by PRA models and compliance with the regulations. A PRA model assumes that the plant is in full compliance with the regulations and its licensing basis, so the calculated CDF and LERF values represent the residual plant risk that exists once the regulations are met. The NRC uses risk to determine whether additional regulatory requirements are necessary, as specified in 10 CFR 50.109, “Backfitting,” issued in 1988. For example, the NRC used risk insights to determine that the residual risk stemming from the failure to insert all control rods upon a shutdown signal warranted additional regulatory attention, and it issued 10 CFR 50.62, “Requirements for Reduction of Risk from Anticipated Transients without Scram (ATWS) Events for Light-water Cooled Nuclear Power Plants,” in 1984. In addition, the NRC issued 10 CFR 50.63, “Loss of All Alternating Current Power,” in 1988 to reduce the residual risk from station blackout events.
The NRC and the industry recognize the complementary nature of risk and compliance and use both in ensuring adequate protection of public health and safety. PRA is also used to determine the significance of noncompliance with a regulatory requirement or element of a licensing basis as is done in the Significance Determination Process of the ROP. This result is then used to determine the appropriate regulatory action to address the noncompliance based on the incremental increase in residual risk.
Figure 14 shows that the industry average CDF has improved by a factor of 10. This steady reduction in CDF and LERF since the early 1990s has been driven primarily by risk-informed initiatives, continued plant and equipment performance improvements, and plant enhancements. In addition, improvements to PRA models have given a clearer picture of actual risk.
The improvement in safety is further illustrated by results from a sensitivity study (Fig. 15) that compared the safety impact of changes in initiating event frequencies and equipment performance since the early 1990s. This evaluation eliminates the impact of PRA model changes by relying on NRC Standardized Plant Analysis Risk (SPAR) models and NRC equipment reliability data exclusively. Thus, the NRC’s own data and models show that improvements in equipment and operational performance have improved safety by a factor of three to seven for a broad range of plant designs.
As shown in Fig. 14, the industry average CDF has decreased by an order of magnitude since the early 1990s. While this improvement can be attributed to many factors, a key driver has been a number of risk-informed initiatives that enable plants to identify potential vulnerabilities, outcomes, and consequences, and then prioritize the actions that provide the greatest benefit to plant safety.
Since the early 1990s, U.S. nuclear power plants have undertaken several risk-informed activities aimed at improving plant safety performance. These include plant modifications in response to Generic Letter 88-20, “Individual Plant Examination for Severe Accident Vulnerabilities”; actions taken in accordance with 10 CFR 50.65, “Requirements for Monitoring the Effectiveness of Maintenance at Nuclear Power Plants,” the so-called Maintenance Rule; and plant changes enabled by 10 CFR 50.48(c), “Alternate Fire Protection Rule,” and NFPA-805, “Performance-Based Standard for Fire Protection.” These activities have increased the availability and reliability of systems and components and have reduced the likelihood of initiating events.
The benefits of risk-informed initiatives include both safety and operational benefits. Risk-informed initiatives allow the industry and the NRC to focus on issues that are most important to safety, yielding safety benefits and reducing risk. Operational performance benefits arise from increased flexibility, higher quality maintenance, greatly reduced focus on non-safety-significant systems, and reduced outages.
Each of the risk-informed activities implemented since the 1990s brings its own unique blend of operational and safety benefits, but they all share a common value: an improved focus on issues that are important to safety.
Maintenance Rule: The Maintenance Rule (10 CFR 50.65) involves the identification and monitoring of risk-significant structures, systems, and components (SSCs). Operationally, the focus on safety-significant SSCs has led to a decreased focus on unimportant SSCs, which has improved operational efficiency and effectiveness. In addition, the rule supported the conduct of online maintenance, allowing licensees to conduct more routine maintenance between refueling outages. Online maintenance reduces availability while the maintenance is performed but increases equipment reliability; the net outcome has been an overall improvement in the total availability of key equipment and a net risk reduction across the industry.
Reactor Oversight Process: As previously described, the ROP adopted a risk-informed approach to oversight and assessment of licensee performance. The ROP’s performance indicators incentivize improvements in the reliability of risk-significant SSCs and the effectiveness of key processes that have, in turn, resulted in a net risk reduction and increased focus on safety-significant activities at operating plants. Improvements in industry performance are reflected in all NRC performance indicators, as shown in NEI 20-04.
The inspection program uses a risk-informed approach to select areas to inspect, based on their importance to potential risk, past operational experience, and regulatory requirements. The Significance Determination Process, a risk-informed process, is used to determine the risk significance of inspection findings, assigning a color to each finding based on its safety or security significance. Industry improvement in this area is readily seen in the January 2020 summary of inspection results for 2019 (Fig. 16). This summary shows either no findings (Grey) or only Green findings; no inspection finding in 2019 had greater than very low safety or security significance.
Technical Specification Enhancements: A number of risk-informed technical specification changes have been developed through industry initiatives and plant-specific actions. Many of the changes have improved maintenance management on key equipment and risk-significant SSCs. Through the use of a configuration risk-management process, proper control of plant configuration during key maintenance activities is maintained to minimize the impact of out-of-service equipment and testing. These controls do not exist in traditional deterministic technical specifications. Operationally, risk-informed technical specifications provide greater flexibility in maintenance scheduling, ensure that higher quality maintenance is performed, and enable shorter, less complex outages.
Risk-informed in-service inspections: Traditional in-service inspection (ISI) programs identify the required inspections based on deterministic criteria and/or a random selection process. Risk-informed ISI uses operating experience and risk insights to target pipe segments and components that present the greatest risk, which considers both the likelihood and consequences of failure, so that the most time and attention are focused on the most risk-significant welds. The result is improved safety and fewer inspections performed during outages that are not safety significant, lowering personnel exposures.
Special treatment requirements: Under 10 CFR 50.69, codified in 2004, licensees are able to categorize SSCs based on their safety significance and modify their treatment accordingly. Under this regulation, special treatment requirements can be reduced for safety-related SSCs determined to be of low safety significance and increased for non-safety-related SSCs that are safety significant. This improves safety by focusing time and attention on the most safety-significant SSCs and enables reductions and associated savings in in-service testing, local leak-rate testing, Maintenance Rule scope, parts procurement, work control, and preventive maintenance tasks.
Fire protection: The NRC issued a fire protection rule change that provided an optional (voluntary) approach to demonstrating post-fire safe shutdown capability that involved a risk-informed, performance-based process. This option allowed licensees to commit to the provisions of 10 CFR 50.48(c) instead of the deterministic provisions associated with 10 CFR 50.48(b). Approximately half of the U.S. fleet took advantage of this option and submitted license amendment requests to change to this optional approach. As part of this license change process, most licensees did not rely on previously granted exemptions from the fire protection regulation. Instead, the licensees performed a comprehensive risk-informed, performance-based examination of the in situ plant configuration. Nearly all plants that transitioned to 10 CFR 50.48(c) performed plant modifications that resulted in a decrease in the plant core damage frequency.
Plants that transitioned to 10 CFR 50.48(c) were also required to report the post-transition plant risk metrics and the net change in risk as compared to a hypothetical plant configuration that complies with 10 CFR 50.48(b). As noted in NRC Regulatory Guide 1.205, Risk-Informed, Performance-Based Fire Protection for Existing Light-Water Nuclear Power Plants, any risk increase is required to be consistent with the provisions of NRC Regulatory Guide 1.174, An Approach for Using Probabilistic Risk Assessment in Risk-informed Decisions on Plant-specific Changes to the Licensing Basis. While some plants reported risk increases that were within the allowable limits provided in Regulatory Guide 1.174, about half of the transitioning plants reported a net reduction in overall plant risk. When the entire population of transitioning plants is evaluated on a collective basis, the average risk was reduced by roughly 50 percent.
To demonstrate the impact of risk-informed initiatives on plant performance and safety over the past two decades, a sensitivity study was performed to assess the change in CDF over the period from the early 1990s to the present day. To conduct the study, NRC SPAR models were used for six common plant types: Westinghouse 4-loop, Babcock & Wilcox, Combustion Engineering, and General Electric BWR 4, GE BWR 5, and GE BWR 6.
Two sets of cases were performed. The first set, referred to as the “Current CDF,” used SPAR models containing data that reflect the current performance at the six plants. For the second set, referred to as the “Pre-RI CDF,” the initiating event frequencies and component data were replaced with NRC-published data that preceded the implementation of risk-informed initiatives. The sensitivity study addressed only the changes in the data and took no credit for other safety enhancements implemented over the past 30 years, such as hardware and procedure changes. These changes would further increase the CDF improvement factor shown by the study.
Table 1 and Fig. 17 present a summary of CDF results for each plant. On average, CDF was shown to have improved by a factor of 4.3. This improvement reflects the reduction in initiating event frequencies (improvement factor of 3.0, with a range of 1.8 to 4.9) and improved component reliability (improvement factor of 2.1, with a range of 1.3 to 3.4). Test and maintenance unavailability increased slightly, due primarily to risk-informed initiatives that extend allowed outage times and permit more online maintenance. The composite values are shown in Fig. 18. The impact of the small increase in planned unavailability is outweighed by the more pronounced reduction in failure rates, yielding an overall positive improvement factor for component-based data: the combined component-based improvement factor is 1.9, with a range of 1.3 to 2.6.
| Plant Type | Pre-RI CDF | Current CDF | Improvement Factor |
| --- | --- | --- | --- |
| Babcock & Wilcox | 7.46 × 10⁻⁵ | 2.12 × 10⁻⁵ | 3.5 |
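The improvement factor in Table 1 is presumably just the ratio of the two CDF values; a quick check of the Babcock & Wilcox row:

```python
# Improvement factor = Pre-RI CDF / Current CDF.
# Values taken from the Babcock & Wilcox row of Table 1.
pre_ri_cdf = 7.46e-5   # core damage frequency before risk-informed era, /yr
current_cdf = 2.12e-5  # current core damage frequency, /yr

improvement = pre_ri_cdf / current_cdf
print(f"{improvement:.1f}")  # 3.5, matching Table 1
```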
The U.S. nuclear industry’s focus on performance has led to improvements in both safety and operational performance. Over the past 20 years, improving plant performance has been coupled with the enhanced safety focus provided by a risk-informed approach that focuses resources on the most safety significant issues. Today, the U.S. nuclear industry is performing at the highest levels of safety and reliability in the world.
Doug True is Chief Nuclear Officer and Senior Vice President of Generation and Suppliers at the Nuclear Energy Institute. John Butler is a Senior Technical Advisor at NEI.
Figures: NEI 20-04 – The Nexus Between Safety and Operational Performance in the US Nuclear Industry
1. NUREG-1187, Vol. 3, Performance Indicators for Operating Commercial Nuclear Power Reactors
2. NEI 20-04, The Nexus Between Safety and Operational Performance in the U.S. Nuclear Industry