Y. Y. Azmy, B. L. Kirk
Nuclear Science and Engineering | Volume 120 | Number 1 | May 1995 | Pages 1-17
Technical Paper | doi.org/10.13182/NSE95-A24102
Mathematical performance models are developed for the parallel algorithm used to solve the neutron diffusion equation on message-passing and shared-memory multiprocessors, represented by the Intel iPSC/860 and the Sequent Balance 8000, respectively. The models are validated against several test problems and then used to estimate the performance of each architecture in situations typical of practical applications, such as fine meshes and large numbers of participating processors. While message-passing computers are capable of producing speedup, parallel efficiency deteriorates rapidly as the number of processors increases, and the speedup fails to improve appreciably on massively parallel machines, so only small- to medium-sized message-passing multiprocessors offer a reasonable platform for this algorithm. In contrast, the model for the shared-memory architecture predicts very high efficiency over the full range of processor counts reasonable for that architecture. Moreover, the modeled efficiency of the Sequent remains superior to that of the hypercube even when its model parameters are adjusted to make its processors as fast as those of the iPSC/860. It is concluded that shared-memory computers are better suited to this parallel algorithm than message-passing computers.
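The paper's actual performance models are architecture-specific and are not reproduced here. As a generic illustration of the qualitative behavior the abstract describes, the toy fixed-problem-size model below charges each message-passing processor a communication cost that grows with log2(p), while the shared-memory variant pays only a small constant synchronization cost. All parameter values (`t_work`, `t_msg`, `t_sync`) are hypothetical and chosen only to make the trend visible; they are not taken from the paper.

```python
import math

def speedup_message_passing(p, t_work=1.0, t_msg=0.01):
    """Speedup under a toy model where per-processor communication
    cost grows as log2(p), as on a hypercube-style network.
    (Illustrative parameters, not the paper's fitted model.)"""
    t_parallel = t_work / p + t_msg * math.log2(p)
    return t_work / t_parallel

def speedup_shared_memory(p, t_work=1.0, t_sync=0.001):
    """Speedup under a toy model with a small, constant
    synchronization overhead per iteration."""
    t_parallel = t_work / p + t_sync
    return t_work / t_parallel

def efficiency(speedup, p):
    """Parallel efficiency: speedup divided by processor count."""
    return speedup / p

if __name__ == "__main__":
    for p in (4, 16, 64):
        e_mp = efficiency(speedup_message_passing(p), p)
        e_sm = efficiency(speedup_shared_memory(p), p)
        print(f"p={p:3d}  message-passing eff={e_mp:.2f}  shared-memory eff={e_sm:.2f}")
```

With these illustrative parameters, message-passing efficiency collapses as p grows (the log2(p) communication term swamps the shrinking per-processor work), while shared-memory efficiency stays near unity across the whole range, mirroring the contrast the abstract draws between the iPSC/860 and the Sequent.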