Scientists from Imperial College London have developed a way to make the conclusions drawn from mathematical models more reliable.
The work has implications for fields as diverse as medical research and ecology.
Most scientists choose to work with a single mathematical model, changing its input parameters to see what different outcomes result. The new approach will allow scientists to rapidly create a large set of plausible alternative models – in some cases up to 40 million – and then identify the handful, usually between 10 and 20, that are most appropriate for the system they are studying.
They will then have the option of running their experimental data through multiple models and comparing the outcomes to see where there is consensus, rather than relying on just one model from which to draw their conclusions.
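The workflow described above – score many candidate models against the same data, keep the plausible ones, then check whether they agree – can be sketched in a few lines. This is only an illustration, not the method from the paper: the candidate models, the data points, and the error threshold below are all invented for the example.

```python
import math

# Hypothetical observed data: (time, measurement) pairs.
data = [(0, 1.0), (1, 2.6), (2, 6.1), (3, 12.5), (4, 19.0)]

# Three alternative candidate models of the same system.
def exponential(t):
    return math.exp(0.75 * t)

def logistic(t):
    return 20.0 / (1 + 19.0 * math.exp(-1.2 * t))

def linear(t):
    return 1.0 + 4.5 * t

candidates = {"exponential": exponential, "logistic": logistic, "linear": linear}

def sum_sq_error(model):
    # Total squared mismatch between the model and the observations.
    return sum((model(t) - y) ** 2 for t, y in data)

# Keep only models whose fit error falls below an (invented) threshold.
plausible = {name: m for name, m in candidates.items() if sum_sq_error(m) < 20.0}

# Consensus check: do the surviving models agree on a prediction at t = 5?
predictions = {name: m(5) for name, m in plausible.items()}
```

Here the surviving models disagree sharply about the future (the exponential model predicts a value over 40 at t = 5, the logistic one under 20), which is exactly the kind of model-dependent outcome that comparing an ensemble, rather than trusting one model, is meant to expose.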
Professor Michael Stumpf, from the Department of Life Sciences at Imperial College London, explains: “We rely on mathematical models to help us understand the workings of complex biological systems, such as how stem cells work, how the weather will change, or how we can control an outbreak of a disease like Ebola. But models are, by necessity, gross simplifications and, as such, there is always the risk that the model – and so the conclusions we draw – are wrong.
“Our approach makes it possible for scientists to quickly create many valid models and then see where there is agreement between them, which means we’re more likely to end up with reliable information on which to base important decisions.”
The research, published today [15 December] in PNAS, used models of gene expression and competitive population dynamics to test the new approach.
Dr Ann Babtie, from the Department of Life Sciences at Imperial College London, says: “All models are based on assumptions which affect the results they provide, so drawing conclusions based on the outcomes of just one model can be misleading. Using our approach we identified some outcomes that only appeared if one specific model was chosen. Other outcomes reoccurred across different models, which meant they weren’t obviously dependent on one set of assumptions and so are probably more likely to reflect what happens in reality.”
The new approach will also make it easier for scientists to quantify and communicate the uncertainty inherent in conclusions drawn from mathematical models, as Professor Stumpf explains:
“In some circumstances, where 90 per cent of the models have a common outcome, it might be valid to make a decision – such as which gene to target for a new therapy – based on those results,” he says. “However, in other situations, a one in ten risk of being wrong might be too serious to contemplate – which is why being able to quantify uncertainty can be of vital importance, both in biology and beyond.”
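The decision rule Professor Stumpf describes – act when enough of the models agree, hold back when the disagreement is too risky – amounts to measuring the fraction of plausible models that share an outcome. A minimal sketch, with an invented set of model outcomes and an invented threshold:

```python
# Hypothetical: each of 10 plausible models predicts whether a candidate
# gene is a suitable therapeutic target (True) or not (False).
outcomes = [True, True, True, True, True, True, True, True, True, False]

# Agreement is simply the fraction of models sharing the common outcome.
agreement = outcomes.count(True) / len(outcomes)  # 9 of 10 models agree

# Whether 90 per cent consensus justifies acting depends on the stakes;
# the 0.9 threshold here is illustrative, not prescriptive.
decision = "act" if agreement >= 0.9 else "gather more evidence"
```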
‘Topological sensitivity analysis for systems biology’ by A.C. Babtie, P. Kirk and M.P.H. Stumpf is published in the Proceedings of the National Academy of Sciences (PNAS).