Currently operating U.S. nuclear power plants provide base-load electricity efficiently and at low cost. The nuclear industry relies on total annual power output (availability) as a measure of success, while the government regulator uses the rate of plant failures (reliability) as an indicator of safety, the performance metric it considers more important. This paper investigates the effect of extended power uprates (EPUs) on the reliability of U.S. boiling water reactors (BWRs), as measured by the frequency of licensee event report submission by the plants under study. The possibility of selection bias was investigated by comparing the reliability of BWRs that did not perform an extended power uprate with that of BWRs that would uprate in the future. The control plants exhibited higher reliability in the period 1990 to 2011 than the pre-EPU plants [mean time between failures (MTBF) 49.1 versus 34.3; p = 0.009]. Finally, the reliability of the plants was investigated before and after the uprates. Because large power uprates are a relatively recent phenomenon, much less data are available for the post-EPU period, which enlarges the confidence intervals around the MTBF estimates. The beta parameter (the slope of the cumulative failure rate) is therefore used to compare the pre- and post-EPU periods. The analysis shows that the reliability of the studied BWRs improved following the implementation of large power uprates (β = 0.63 versus 0.56; p = 0.043). This result indicates that the effect of replacing and refurbishing plant equipment as part of the power uprate outweighs the effect of the higher power level on plant reliability.
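
For readers unfamiliar with the beta parameter, a common way to define it is through a power-law (Crow-AMSAA) reliability-growth model; the abstract does not state the exact formulation used, so the following is a sketch under that assumption, with lambda as an illustrative scale parameter.

% Assumed power-law model for the expected cumulative number of failures
% N(t) by cumulative operating time t (not the paper's stated formulation):
\[
  E[N(t)] = \lambda t^{\beta},
  \qquad
  \log E[N(t)] = \log \lambda + \beta \log t .
\]
% Under this model, beta is the slope of the cumulative failure count on
% log-log axes; beta < 1 corresponds to a decreasing failure intensity
% (improving reliability), so, on this reading, a smaller post-EPU beta
% would indicate faster reliability improvement after the uprate.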