Understanding the precision and accuracy of expensive computer models is critical to their use across areas of science and engineering. Part of this is understanding the uncertainty in the calibration parameters of the computer model and how it contributes to the overall predictive uncertainty, as well as probing the ways in which the model fails to replicate experimental observations. We present a generalization of the Kennedy-O'Hagan calibration framework that handles the count structure of our observed data, along with a sampling approach for Bayesian inference under this model. The methodology allows closer fidelity to the experimental system and provides flexibility for identifying different forms of discrepancy between the simulator and the experiment. This work is motivated by research at the Center for Exascale Radiation Transport (CERT) on validating their radiation transport model across a hierarchy of physical experiments.
Speaker Biography:
Mike Grosskopf will be completing his PhD in Statistics at Simon Fraser University in the summer of 2017. His research interests include computer model calibration and validation, statistical methods for the physical sciences and engineering, and Bayesian computation. Before his life as a statistics doctoral candidate, Mike gained 8 years of experience as a research engineer in a laboratory astrophysics group, running expensive computer simulators in support of laser experiments. Five of those years were spent with an NNSA PSAAP center, CRASH, dedicated to uncertainty quantification and predictive science for its radiation hydrodynamics model. His current research is done in collaboration with another PSAAP center, the Center for Exascale Radiation Transport (CERT) at Texas A&M University.