Nuclear Technology / Volume 170 / Number 1 / April 2010 / Pages 80-89
Technical Paper / Special Issue on the 2008 International Congress on Advances in Nuclear Power Plants / Radiation Measurements and Instrumentation / dx.doi.org/10.13182/NT10-A9447
Verification and validation (V&V) of nuclear data are critical to the accuracy of both stochastic and deterministic particle transport codes. To effectively test a set of nuclear data, the data must be applied to a wide variety of transport problems, and performing this task manually is tedious and time consuming. The nuclear data team at Los Alamos National Laboratory, in collaboration with the University of Florida, has developed a methodology to automate the process of nuclear data V&V. This automated V&V process can efficiently test a number of data libraries using well-defined benchmark experiments, such as those in the International Criticality Safety Benchmark Experiment Project. The process is implemented through an integrated set of Python scripts. Material and geometry data are read from an existing medium or given directly by the user to generate a benchmark experiment template file. The user specifies the choice of benchmark templates, codes, and libraries to form a V&V project. The Python scripts automatically generate input decks for multiple transport codes, run and monitor individual jobs, and parse the relevant output. The output can then be used to generate reports directly or can be stored in a database for later analysis. This methodology eases the burden on the user by reducing the time and effort required to obtain and compile calculation results. The resource savings from this automated methodology could make it an enabling technology for more sophisticated data studies, such as nuclear data uncertainty quantification. Once deployed, this tool will allow the nuclear data community to test data libraries more thoroughly, leading to higher-fidelity data in the future.
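The template-fill, job-generation, and output-parsing steps described above can be sketched in Python. This is a minimal illustration only: the template text, the function names, and the simulated output format are hypothetical and do not reproduce the actual LANL scripts or any real transport-code input syntax.

```python
# Hypothetical sketch of the automated V&V workflow: fill a benchmark
# template with a chosen data library, then parse a result (here, k-eff)
# from a transport-code output listing. Names and formats are illustrative.
from string import Template

# A benchmark "template" with placeholders for the library under test.
DECK_TEMPLATE = Template("""\
c  Benchmark: $benchmark
c  Library:   $library
m1   92235.$suffix  1.0
kcode 5000 1.0 50 250
""")

def make_input_deck(benchmark: str, library: str, suffix: str) -> str:
    """Generate a transport-code input deck from the benchmark template."""
    return DECK_TEMPLATE.substitute(
        benchmark=benchmark, library=library, suffix=suffix)

def parse_keff(output_text: str) -> float:
    """Extract k-eff from a (simulated) transport-code output listing."""
    for line in output_text.splitlines():
        if line.strip().startswith("final k-eff"):
            return float(line.split("=")[1])
    raise ValueError("k-eff not found in output")

# Generate one input deck and parse one simulated output.
deck = make_input_deck("jezebel", "ENDF/B-VII.0", "70c")
result = parse_keff("run complete\nfinal k-eff = 0.99987\n")
```

In the actual tool, a loop over all (template, code, library) combinations in a V&V project would generate the decks, submit and monitor the jobs, and collect parsed results into reports or a database.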