The theoretical basis for the application of sensitivity and uncertainty (S/U) analysis methods to the validation of benchmark data sets for use in criticality safety applications is developed. Sensitivity analyses produce energy-dependent sensitivity coefficients that give the relative change in the system multiplication factor, keff, per relative change in the cross-section data, by isotope, reaction, and energy. Integral indices are then developed that use this sensitivity information to quantify the similarity between pairs of systems, typically a benchmark experiment and a design system. Uncertainty analyses provide an estimate of the uncertainty in the calculated system keff due to cross-section uncertainties, as well as the correlations in keff uncertainty between systems. These uncertainty correlations provide an additional measure of system similarity. The use of the similarity measures from both S/U analyses in the formal determination of areas of applicability for benchmark experiments is developed. Furthermore, the use of these similarity measures as trending parameters for the estimation of the computational bias and uncertainty is explored. The S/U analysis results, along with the calculated and measured keff values and estimates of the measurement uncertainties, are used in this work to demonstrate the application of the generalized linear-least-squares methodology (GLLSM) to data validation for criticality safety studies.
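As a concrete sketch, the quantities described above are conventionally written as follows; the notation here is assumed for illustration rather than quoted from this report:

```latex
% Conventional definitions (assumed notation, not quoted from this report).
% Relative sensitivity of k_eff to the cross section of isotope i,
% reaction x, in energy group g:
S_{k,\sigma} = \frac{\sigma_{i,x,g}}{k_{\mathrm{eff}}}
               \frac{\partial k_{\mathrm{eff}}}{\partial \sigma_{i,x,g}}

% Uncertainty in k_eff propagated through the cross-section covariance
% matrix C_{\alpha\alpha} (the "sandwich rule"), with S the row vector of
% sensitivity coefficients:
\sigma_k^2 = \mathbf{S}\, C_{\alpha\alpha}\, \mathbf{S}^{\mathsf{T}}

% Correlation of the cross-section-induced k_eff uncertainties of two
% systems, a similarity measure bounded by -1 <= c_k <= 1:
c_k = \frac{\mathbf{S}_1\, C_{\alpha\alpha}\, \mathbf{S}_2^{\mathsf{T}}}
           {\sigma_{k_1}\, \sigma_{k_2}}
```

A c_k value near unity indicates that cross-section uncertainties perturb both systems' keff values in nearly the same way, which is the sense in which the correlation serves as a similarity measure.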

An illustrative example is used to demonstrate the application of these S/U analysis procedures to actual criticality safety problems. Computational biases, uncertainties, and upper subcritical limits for the example application are determined with the new methods and compared with those obtained through traditional criticality safety validation techniques.
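The traditional trending approach referenced here can be illustrated with a minimal sketch: regress the calculated-to-experimental keff ratios of a benchmark suite against a similarity parameter, evaluate the trend at the application point, and subtract margins to form an upper subcritical limit (USL). All numerical values below are hypothetical, and a production analysis would use a formal tolerance-band method rather than this bare regression:

```python
# Minimal sketch of trending-based bias estimation (hypothetical data;
# real USL derivations apply statistical tolerance bands, not a bare fit).
import numpy as np

# Per-benchmark data: similarity index ck with respect to the application,
# and calculated/experimental keff ratios (C/E). All values are made up.
ck = np.array([0.82, 0.88, 0.90, 0.93, 0.95, 0.97, 0.99])
c_over_e = np.array([0.9968, 0.9975, 0.9971, 0.9983, 0.9986, 0.9990, 0.9992])

# Linear trend of C/E versus ck:  C/E ~ a*ck + b
a, b = np.polyfit(ck, c_over_e, deg=1)

# Evaluate the trend at the application's similarity value
ck_application = 1.0                 # the application compared with itself
bias = a * ck_application + b - 1.0  # negative bias means underprediction

# Scatter of the benchmarks about the trend, taken here as the validation
# uncertainty (two fitted parameters removed from the degrees of freedom)
residuals = c_over_e - (a * ck + b)
sigma = residuals.std(ddof=2)

# Simplified USL: unity plus any penalizing (negative) bias, minus an
# uncertainty allowance and an assumed administrative margin of 0.05
usl = 1.0 + min(bias, 0.0) - 2.0 * sigma - 0.05
print(f"bias = {bias:+.4f}, sigma = {sigma:.4f}, USL = {usl:.4f}")
```

Note that positive biases are conventionally not credited, which is why the sketch takes min(bias, 0.0) when forming the limit.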

The GLLSM procedure is also applied to determine cutoff values for the similarity indices such that applicability of a benchmark experiment to a criticality safety design system can be assured. Additionally, the GLLSM procedure is used to determine how many applicable benchmark experiments exceeding a certain degree of similarity are necessary for an accurate assessment of the computational bias.
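For reference, the standard generalized linear-least-squares update on which such a procedure rests can be sketched as follows; the notation is assumed here and may differ from that used in the body of the report:

```latex
% Standard GLLS update (assumed notation).
% d: measured-minus-calculated k_eff discrepancies for the benchmarks,
% S: matrix of benchmark sensitivity coefficients,
% C_{\alpha\alpha}: cross-section covariance matrix,
% C_{mm}: covariance of the experimental (and computational) uncertainties.
\Delta\alpha = C_{\alpha\alpha}\, S^{\mathsf{T}}
               \left( S\, C_{\alpha\alpha}\, S^{\mathsf{T}} + C_{mm} \right)^{-1} d

% Consistency of the benchmark set with the nuclear data:
\chi^2 = d^{\mathsf{T}}
         \left( S\, C_{\alpha\alpha}\, S^{\mathsf{T}} + C_{mm} \right)^{-1} d

% Predicted computational bias for an application with sensitivities S_a:
\frac{\Delta k_a}{k_a} = S_a\, \Delta\alpha
```

Because the predicted application bias depends on the benchmark sensitivities through the update, the adequacy of a benchmark set, both its degree of similarity and its size, can be judged by how the predicted bias and its uncertainty converge as benchmarks are added.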