ANS is committed to advancing, fostering, and promoting the development and application of nuclear sciences and technologies to benefit society.
Conference Spotlight
2026 ANS Annual Conference
May 31–June 3, 2026
Denver, CO | Sheraton Denver
Latest News
Going Nuclear: Notes from the officially unofficial book tour
I work in the analytical labs at one of Europe’s oldest and largest nuclear sites: Sellafield, in northwestern England. I spend my days at the fume hood front, pipette in one hand and radiation probe in the other (and dosimeter pinned to my chest, of course). Outside the lab, I have a second job: I moonlight as a writer and public speaker. My new popular science book—Going Nuclear: How the Atom Will Save the World—came out last summer, and it feels like my life has been running at full power ever since.
Paul Seurin, Koroush Shirvan
Nuclear Science and Engineering | Volume 200 | Number 3 | March 2026 | Pages 574–605
Research Article | doi.org/10.1080/00295639.2025.2488702
Articles are hosted by Taylor & Francis Online.
Optimizing the fuel cycle cost through nuclear reactor core loading patterns (LPs) involves multiple objectives and constraints, producing a space of candidate solutions far too large to enumerate explicitly. To advance the state of the art in core reload patterns, we have developed methods based on deep Reinforcement Learning (RL) for both single- and multi-objective optimization. Our previous research laid the groundwork for these approaches and demonstrated their ability to discover high-quality patterns within a reasonable time frame. Stochastic Optimization (SO) approaches, by contrast, are commonly used in the literature, but no rigorous comparison establishes which approach is better in which scenario. In this paper, we demonstrate the advantage of our RL-based approach, specifically Proximal Policy Optimization (PPO), against the most commonly used SO-based methods: Genetic Algorithm, Parallel Simulated Annealing with mixing of states, and Tabu Search, as well as an ensemble-based method, the Prioritized replay Evolutionary and Swarm Algorithm. We found that the LP scenarios derived in this paper benefit from a global search to identify promising regions rapidly, followed by a transition to local search to exploit those regions efficiently and avoid getting stuck in local optima. PPO adapts its search behavior through a policy with learnable weights, allowing it to function as both a global and a local search method. Comparing all algorithms against PPO in long runs further widened the differences seen in the shorter cases. Overall, the work demonstrates the statistical superiority of PPO over the other algorithms considered.
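The paper's actual methods run PPO and parallel SO algorithms against real core-physics simulators; those are not reproduced here. As a toy illustration of the global-then-local search behavior the abstract describes, the following sketch anneals a stand-in permutation objective, where a high early temperature explores broadly and a low late temperature refines locally. The objective, parameter values, and function names are all hypothetical, chosen only to make the idea concrete.

```python
import math
import random

def toy_cost(pattern):
    # Hypothetical stand-in for a core objective: penalize placing
    # "hot" assemblies (large values) next to each other by summing
    # products of adjacent entries. Lower is better.
    return sum(a * b for a, b in zip(pattern, pattern[1:]))

def anneal(values, steps=20000, t_start=5.0, t_end=0.01, seed=0):
    """Simulated annealing over permutations of `values`."""
    rng = random.Random(seed)
    pattern = list(values)
    rng.shuffle(pattern)
    cost = toy_cost(pattern)
    best, best_cost = pattern[:], cost
    for k in range(steps):
        # Geometric cooling: early high-temperature steps accept many
        # uphill swaps (global search); late low-temperature steps
        # accept almost none (local refinement).
        t = t_start * (t_end / t_start) ** (k / steps)
        i, j = rng.sample(range(len(pattern)), 2)
        pattern[i], pattern[j] = pattern[j], pattern[i]
        new_cost = toy_cost(pattern)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
            cost = new_cost
            if cost < best_cost:
                best, best_cost = pattern[:], cost
        else:
            pattern[i], pattern[j] = pattern[j], pattern[i]  # revert swap
    return best, best_cost

if __name__ == "__main__":
    best, best_cost = anneal(list(range(1, 9)))
    print(best, best_cost)
```

The point of the sketch is the temperature schedule: a fixed-temperature search would be either all-global or all-local, whereas PPO, per the abstract, achieves a similar transition adaptively through its learnable policy rather than through an explicit cooling schedule.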