A chain of computational models can be used to simulate the effects of a chemical release
The release of a chemical or biological agent into the environment, and the subsequent effects on humans, can be divided into stages, many of which can be simulated independently. To what extent does varying the parameters of each of these models lead to variation in the final prediction? Answering this question is the goal of sensitivity analysis.
Stephen Gow, a member of NGCM Cohort 1, delivered a seminar introducing the tools and techniques involved in sensitivity analysis, and explained how it forms the backbone of his research. His work is part-sponsored by DSTL.
Many computational models take a number of variables as input and return a single value as output. Determining the effect of each variable on the output has several advantages. Sensitivity analysis:
- Provides insight into the way the model behaves overall,
- Allows errors in the model to be found quickly,
- Can help to simplify the model, if a variable is found not to affect the output significantly,
- Can help researchers to prioritise further research on particular inputs to the model, if they have an unusual or significant effect on the output.
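The write-up does not name a specific technique, but variance-based Sobol indices are one widely used way to quantify how much of the output's variance each input is responsible for. The sketch below is illustrative only: the toy model, sample sizes, and the assumption of inputs uniform on [0, 1] are all choices made here, not details from the seminar.

```python
import numpy as np

def first_order_sobol(model, n_inputs, n_samples=20000, seed=None):
    """Estimate first-order Sobol indices S_i = Var(E[Y|X_i]) / Var(Y)
    with a Monte Carlo pick-freeze scheme, inputs uniform on [0, 1]."""
    rng = np.random.default_rng(seed)
    A = rng.random((n_samples, n_inputs))
    B = rng.random((n_samples, n_inputs))
    yA, yB = model(A), model(B)
    var_y = np.var(np.concatenate([yA, yB]))
    indices = []
    for i in range(n_inputs):
        ABi = A.copy()
        ABi[:, i] = B[:, i]              # vary only input i between runs
        yABi = model(ABi)
        Vi = np.mean(yB * (yABi - yA))   # estimate of Var(E[Y | X_i])
        indices.append(Vi / var_y)
    return np.array(indices)

def toy(X):
    """Toy model: input 0 dominates, input 1 is weak, input 2 is inert."""
    return X[:, 0] + 0.1 * X[:, 1]

S = first_order_sobol(toy, n_inputs=3, seed=0)
```

For the toy model, `S[0]` comes out close to 1 while `S[2]` is near zero, flagging the third input as a candidate for removal, exactly the kind of simplification mentioned above.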
In many cases, the computational cost of running a model many times in order to gain accurate statistics is prohibitive. This cost can be mitigated by creating an emulator: a computationally cheap approximation to the model, which can be 'trained' using data from the original model. The emulator's output at points between the training data cannot be known exactly, but the probability that it matches the true model for any given input can be estimated. In Stephen's work, this emulation provided a 70-fold reduction in the number of model runs required.
[Figure: Example emulator function]
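The seminar summary does not say which kind of emulator was used; Gaussian process regression is a common construction that matches the description here, since it interpolates the training runs and attaches an uncertainty to every prediction in between. The following is a minimal sketch under that assumption, with the kernel, lengthscale, and stand-in "expensive model" all chosen for illustration.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=0.3, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = X1[:, None] - X2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_emulator(x_train, y_train, x_test, jitter=1e-8):
    """Posterior mean and variance of a zero-mean Gaussian process
    fitted to a handful of real model runs."""
    K = rbf_kernel(x_train, x_train) + jitter * np.eye(len(x_train))
    Ks = rbf_kernel(x_test, x_train)
    Kss = rbf_kernel(x_test, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha                         # cheap prediction
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(Kss) - np.sum(v ** 2, axis=0)  # predictive uncertainty
    return mean, var

def expensive_model(x):
    """Stand-in; in practice each call would be a full simulation."""
    return np.sin(3 * x)

x_train = np.linspace(0, 1, 8)     # only 8 real model runs needed
y_train = expensive_model(x_train)
x_test = np.linspace(0, 1, 50)
mean, var = gp_emulator(x_train, y_train, x_test)
```

The variance returned alongside each prediction is what lets one estimate the probability that the emulator matches the true model at an untried input: it collapses to near zero at the training points and grows between them.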