In his new post on Monster Models, Ben Recht explains why systems physiology modeling is so hard, and why just leaning harder into standard engineering principles cannot possibly be the way forward. Locking down a water-tight mathematical description of the biophysics of an isolated sub-system (the workhorse of traditional mathematical biology) is noble work, but there's no guarantee that when you connect multiple such sub-systems together you will get a good model of the full system. That is, the whole appears to be larger (dynamically) than the sum of its (mathematical) parts. This reductionist approach to modeling doesn't just falter against the layered complexities of biological systems designed with all the foresight of a blind watchmaker; Ben points out that it's hard even to simulate electronics that people designed precisely so that other people could understand them! I couldn't agree more, and grappling with the fact that rigorous sub-system modeling doesn't readily translate into systems-level models is one of the central open questions in computational physiology.
I would offer only one correction to Ben's post (or maybe that's too strong – an addendum?): the model doesn't have to be a monster before the limits of mathematical divide-and-conquer start creeping into systems physiology modeling. The example Ben used in his post is the truly gargantuan HumMod model, but I would argue that the problem begins as early as three or four interacting sub-systems. You don't need a system of 20 ODEs with feedback loops before connecting different sub-systems (i.e., modules) makes it very difficult to properly handle the system-level behavior. So, the problem that Ben highlights is even bigger than he makes it out to be, because these issues plague almost all systems-level physiology models.
One reason the problem of combining sub-systems (or modules) into larger models starts happening with so few modules is that, if we're honest with ourselves, we don't usually have water-tight descriptions of all the modules. This is especially true in systems physiology. We usually have a very good description of the module we're interested in (which is why we started building whatever model we're working on in the first place), and many of the other modules are added as just-so heuristics to get the full system running. The errors in the weaker modules then propagate through the full system and either cause unwanted behavior or force us to change the centerpiece module post hoc to absorb the incoming errors. For example, we might be focusing on a nerve model and neglect the biomechanics of the muscle it innervates (McGee & Grill 2016), or focus on detailed biomechanics and treat the entire nervous system as a bang-bang controller (Bastiaanssen et al 1996).
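To make that concrete, here's a toy sketch (my own illustration, not any of the cited models): a carefully built "nerve" module coupled to a muscle module that is either detailed (saturating) or a linear just-so heuristic. The nerve module is identical in both runs, but the heuristic's error feeds back through the coupling and shifts the behavior of the module we actually care about.

```python
# Toy illustration of error propagation between coupled modules.
import numpy as np
from scipy.integrate import solve_ivp

def nerve_rhs(t, a, muscle_force, drive=1.0, feedback_gain=0.8, tau=0.1):
    # "Detailed" module: first-order activation dynamics with force feedback.
    return (-a + drive - feedback_gain * muscle_force(a)) / tau

detailed_muscle  = lambda a: np.tanh(2.0 * a)   # saturating force-activation curve
heuristic_muscle = lambda a: 1.2 * a            # linear stand-in, "close enough" near rest

t_eval = np.linspace(0.0, 1.0, 200)
sol_detailed  = solve_ivp(nerve_rhs, (0, 1), [0.0], t_eval=t_eval, args=(detailed_muscle,))
sol_heuristic = solve_ivp(nerve_rhs, (0, 1), [0.0], t_eval=t_eval, args=(heuristic_muscle,))

# Same nerve module in both runs, yet its steady state shifts because the
# weaker muscle module's error propagates back through the coupling.
print("steady-state activation, detailed muscle :", sol_detailed.y[0, -1])
print("steady-state activation, heuristic muscle:", sol_heuristic.y[0, -1])
```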
I’m well aware of these issues because my team has been trying to chart a course to a systems-level computational model of the lower urinary tract, and it has been quite tricky.
What we're trying to do is build a framework where we can be honest with ourselves about which modules in the larger system we aren't confident in, then train ANNs to approximate them. The ANNs are constrained by the biophysics we do know (and can express confidently as ODEs) to generate physiologically meaningful outputs. The ANNs aren't quite allowed to run their normal black-box magic, because they are fed by ODEs and their outputs are inputs to yet other ODEs, so the ANN hallucinations have to at least be compatible with all the biophysics we understand. We're "only" a few years into building this framework, but our hope is that it will let us scale up our traditional progress on individual modules into more reliable progress on the full system.
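For the curious, here's a minimal sketch of the general idea (my shorthand reconstruction, not the actual code behind the papers linked below): the biophysics we trust stays as explicit ODE terms, a small ANN stands in for the module we don't trust, and because the ANN's output is just another term inside the ODE right-hand side, it is trained end-to-end through the solver against measurements of the well-understood states, so it can't drift arbitrarily far from the physics.

```python
# Minimal hybrid ODE-ANN sketch: known dynamics as explicit terms, an ANN for
# the uncertain module, trained end-to-end through a differentiable solver.
import torch
import torch.nn as nn

class HybridRHS(nn.Module):
    def __init__(self):
        super().__init__()
        # Uncertain module (e.g. an unmodelled feedback pathway) as a small ANN.
        self.uncertain = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))

    def forward(self, x):
        x1, x2 = x[..., 0:1], x[..., 1:2]
        dx1 = -x1 + x2                       # known biophysics, kept as an ODE term
        dx2 = -0.5 * x2 + self.uncertain(x)  # ANN output enters the known dynamics
        return torch.cat([dx1, dx2], dim=-1)

def integrate(rhs, x0, dt=0.01, steps=100):
    # Plain explicit Euler so the whole trajectory stays differentiable.
    xs, x = [x0], x0
    for _ in range(steps):
        x = x + dt * rhs(x)
        xs.append(x)
    return torch.stack(xs)

rhs = HybridRHS()
x0 = torch.tensor([[1.0, 0.0]])
opt = torch.optim.Adam(rhs.parameters(), lr=1e-2)

# Hypothetical training loop: fit only the state we can measure (here x1),
# letting gradients flow through the solver into the uncertain ANN module.
target_x1 = torch.linspace(1.0, 0.2, 101).reshape(-1, 1, 1)  # stand-in measurements
for _ in range(200):
    traj = integrate(rhs, x0)
    loss = ((traj[..., 0:1] - target_x1) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The point of the sandwich structure is that the ANN never produces a free-floating prediction; whatever it outputs is immediately filtered through the ODE terms we trust before it can influence anything downstream.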
Links to the technical stuff:
> A Hybrid ODE-NN Framework for Modeling Incomplete Physiological Systems
> Learning Physics Informed Neural ODEs With Partial Measurements