Jaques Reifman, Javier E. Vitela
Nuclear Technology | Volume 106 | Number 2 | May 1994 | Pages 225-241
Technical Paper | Reactor Control | doi.org/10.13182/NT94-A34978
Articles are hosted by Taylor & Francis Online.
The method of conjugate gradients is used to expedite the learning process of feedforward multilayer artificial neural networks and to systematically update both the learning parameter and the momentum parameter at each training cycle. The mechanism for the occurrence of premature saturation of the network nodes observed with the backpropagation algorithm is described, suggestions are made to eliminate this undesirable phenomenon, and the reason why this phenomenon is precluded in the method of conjugate gradients is presented. The proposed method is compared with the standard backpropagation algorithm in the training of neural networks to classify transient events in nuclear power plants simulated by the Midland Nuclear Power Plant Unit 2 simulator. The comparison results indicate that the rate of convergence of the proposed method is much greater than that of the standard backpropagation algorithm, that it reduces both the number of training cycles and the CPU time, and that it is less sensitive to the choice of initial weights. The advantages of the method are more noticeable and important for problems where the network architecture consists of a large number of nodes, the training database is large, and a tight convergence criterion is desired.
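The idea in the abstract can be illustrated with a minimal sketch (not the authors' code): a small feedforward network trained by nonlinear conjugate gradients, where the learning parameter is chosen by a line search at every training cycle and a Polak-Ribière coefficient plays the role of the momentum parameter, compared against plain backpropagation with a fixed learning rate and momentum. The XOR data, the network size, and the specific Polak-Ribière and backtracking choices are illustrative assumptions, not details taken from the paper.

```python
# Sketch of conjugate-gradient training of a feedforward network versus
# standard backpropagation with fixed learning rate and momentum.
# Toy problem and hyperparameters are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # toy inputs (XOR)
T = np.array([[0], [1], [1], [0]], dtype=float)               # toy targets

def unpack(w, n_in=2, n_hid=4, n_out=1):
    """Split a flat weight vector into the two layer matrices (with biases)."""
    k = (n_in + 1) * n_hid
    return w[:k].reshape(n_in + 1, n_hid), w[k:].reshape(n_hid + 1, n_out)

def forward(w, X):
    W1, W2 = unpack(w)
    Xb = np.hstack([X, np.ones((len(X), 1))])        # append bias input
    H = np.tanh(Xb @ W1)                             # hidden activations
    Hb = np.hstack([H, np.ones((len(H), 1))])
    Y = 1.0 / (1.0 + np.exp(-(Hb @ W2)))             # sigmoid output
    return Xb, H, Hb, Y

def loss_and_grad(w, X, T):
    """Sum-of-squares error and its gradient (a standard backprop pass)."""
    Xb, H, Hb, Y = forward(w, X)
    E = 0.5 * np.sum((Y - T) ** 2)
    dY = (Y - T) * Y * (1.0 - Y)                     # output-layer delta
    W1, W2 = unpack(w)
    gW2 = Hb.T @ dY
    dH = (dY @ W2[:-1].T) * (1.0 - H ** 2)           # hidden delta (tanh derivative)
    gW1 = Xb.T @ dH
    return E, np.concatenate([gW1.ravel(), gW2.ravel()])

def line_search(w, d, X, T, E0, g0, alpha=1.0, c=1e-4):
    """Backtracking line search: a simple stand-in for choosing the
    learning parameter systematically at each training cycle."""
    slope = g0 @ d
    while alpha > 1e-10:
        E, _ = loss_and_grad(w + alpha * d, X, T)
        if E <= E0 + c * alpha * slope:
            break
        alpha *= 0.5
    return alpha

def train_cg(n_cycles=200):
    w = rng.normal(scale=0.5, size=(2 + 1) * 4 + (4 + 1) * 1)
    E, g = loss_and_grad(w, X, T)
    d = -g
    for _ in range(n_cycles):
        alpha = line_search(w, d, X, T, E, g)        # learning parameter per cycle
        w = w + alpha * d
        E_new, g_new = loss_and_grad(w, X, T)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # Polak-Ribiere coefficient
        d = -g_new + beta * d                        # conjugate search direction
        E, g = E_new, g_new
    return w, E

def train_backprop(n_cycles=200, lr=0.2, momentum=0.8):
    w = rng.normal(scale=0.5, size=(2 + 1) * 4 + (4 + 1) * 1)
    v = np.zeros_like(w)
    for _ in range(n_cycles):
        E, g = loss_and_grad(w, X, T)
        v = momentum * v - lr * g                    # fixed learning rate and momentum
        w = w + v
    return w, E

if __name__ == "__main__":
    _, E_cg = train_cg()
    _, E_bp = train_backprop()
    print(f"final error  CG: {E_cg:.4f}   backprop: {E_bp:.4f}")
```

The two final errors printed at the end give a rough sense of the convergence-rate comparison the abstract describes; the exact values depend on the random initial weights, and a full reproduction would require the plant-transient training data and network architectures used in the paper.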