A general method for numerically simulating the stochastic time evolution of coupled chemical reactions
Abstract
An exact method is presented for numerically calculating, within the framework of the stochastic formulation of chemical kinetics, the time evolution of any spatially homogeneous mixture of molecular species which interreact through a specified set of coupled chemical reaction channels. The method is a compact, computer-oriented, Monte Carlo simulation procedure. It should be particularly useful for modeling the transient behavior of well-mixed gas-phase systems in which many molecular species participate in many highly coupled chemical reactions. For “ordinary” chemical systems in which fluctuations and correlations play no significant role, the method stands as an alternative to the traditional procedure of numerically solving the deterministic reaction rate equations. For nonlinear systems near chemical instabilities, where fluctuations and correlations may invalidate the deterministic equations, the method constitutes an efficient way of numerically examining the predictions of the stochastic master equation. Although fully equivalent to the spatially homogeneous master equation, the numerical simulation algorithm presented here is more directly based on a newly defined entity called “the reaction probability density function.” The purpose of this article is to describe the mechanics of the simulation algorithm, and to establish in a rigorous, a priori manner its physical and mathematical validity; numerical applications to specific chemical systems will be presented in subsequent publications.
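The "reaction probability density function" named in the abstract, P(tau, mu) = a_mu * exp(-a0 * tau) with a0 the sum of all propensities a_nu, leads directly to what is now called the direct method: draw the waiting time tau from an exponential distribution with parameter a0, pick channel mu with probability a_mu / a0, update the state by the stoichiometry of channel mu, and repeat. The sketch below illustrates this procedure in Python; the two-channel network, rate constants, and initial state are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gillespie_direct(x0, stoich, propensities, t_max, rng=None):
    """Minimal direct-method SSA: repeatedly sample (tau, mu) from the
    joint reaction probability density P(tau, mu) = a_mu * exp(-a0 * tau)."""
    rng = np.random.default_rng() if rng is None else rng
    t, x = 0.0, np.asarray(x0, dtype=float)
    times, states = [t], [x.copy()]
    while True:
        a = np.array([f(x) for f in propensities])   # propensities a_mu(x)
        a0 = a.sum()
        if a0 <= 0.0:                                # no channel can fire
            break
        tau = rng.exponential(1.0 / a0)              # waiting time ~ Exp(a0)
        if t + tau > t_max:                          # no reaction before t_max
            break
        mu = rng.choice(len(a), p=a / a0)            # channel mu w.p. a_mu/a0
        t += tau
        x += stoich[mu]                              # apply stoichiometry
        times.append(t); states.append(x.copy())
    return np.array(times), np.array(states)

# Illustrative (assumed) two-channel network: A -> B with c1, and the
# dimerization 2A -> C with c2 (propensity c2 * X_A * (X_A - 1) / 2).
c1, c2 = 1.0, 0.002
stoich = np.array([[-1.0, 1.0, 0.0],     # A -> B
                   [-2.0, 0.0, 1.0]])    # 2A -> C
propensities = [lambda x: c1 * x[0],
                lambda x: c2 * x[0] * (x[0] - 1) / 2.0]
times, states = gillespie_direct([1000, 0, 0], stoich, propensities, t_max=5.0)
```

Each step costs O(M) in the number of reaction channels M, and because (tau, mu) is drawn exactly from P(tau, mu), the simulated trajectories are exact samples from the same process that the spatially homogeneous master equation describes.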
Cited by (5025)
A detailed sensitivity analysis identifies the key factors influencing the enzymatic saccharification of lignocellulosic biomass
2024, Computational and Structural Biotechnology Journal
Corn stover is the most abundant form of crop residue that can serve as a source of lignocellulosic biomass in biorefinery approaches, for instance for the production of bioethanol. In such biorefinery processes, the constituent polysaccharide biopolymers are typically broken down into simple monomeric sugars by enzymatic saccharification, for further downstream fermentation into bioethanol. However, the recalcitrance of this material to enzymatic saccharification invokes the need for innovative pre-treatment methods to increase sugar conversion yield. Here, we focus on experimental glucose conversion time-courses for corn stover lignocellulose that has been pre-treated with different acid-catalysed processes and intensities. We identify the key parameters that determine enzymatic saccharification dynamics by performing a Sobol' sensitivity analysis on the comparison between the simulation results from our complex stochastic biophysical model and the experimental data that we accurately reproduce. We find that the parameters relating to cellulose crystallinity and those associated with cellobiohydrolase activity predominantly drive the enzymatic saccharification dynamics. We confirm our computational results using mathematical calculations for a purely cellulosic substrate. On the one hand, having identified that only five parameters drastically influence the saccharification dynamics allows us to reduce the dimensionality of the parameter space (from nineteen to five parameters), which we expect will significantly speed up our fitting algorithm for comparison of experimental and simulated saccharification time-courses. On the other hand, these parameters directly highlight key targets for experimental endeavours in the optimisation of pre-treatment and saccharification conditions. Finally, this systematic and two-fold theoretical study, based on both mathematical and computational approaches, provides experimentalists with key insights that will support them in rationalising their complex experimental results.
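First-order Sobol' indices of the kind used above can be estimated with a generic Saltelli-type Monte Carlo scheme: evaluate the model on two independent sample matrices and on hybrids that swap one column at a time. The sketch below is a minimal illustration with a toy model; the model function, the uniform parameter ranges, and the sample size are assumptions for demonstration only and have nothing to do with the saccharification model of the paper.

```python
import numpy as np

def sobol_first_order(model, n_params, n_samples=10000, rng=None):
    """Saltelli-style Monte Carlo estimate of first-order Sobol' indices.
    Inputs are sampled uniformly on [0, 1]^d; rescale inside `model`."""
    rng = np.random.default_rng(0) if rng is None else rng
    A = rng.random((n_samples, n_params))
    B = rng.random((n_samples, n_params))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))          # total output variance
    S = np.empty(n_params)
    for i in range(n_params):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                         # swap in column i from B
        fABi = model(ABi)
        S[i] = np.mean(fB * (fABi - fA)) / var      # Saltelli (2010) estimator
    return S

# Toy stand-in (assumed) for a saccharification yield model: the output
# depends strongly on p0 and p1 and only weakly on the remaining inputs.
def toy_model(P):
    return 5.0 * P[:, 0] + 3.0 * P[:, 1] ** 2 + 0.1 * P[:, 2:].sum(axis=1)

print(sobol_first_order(toy_model, n_params=5))
```

Parameters whose index is near zero can be frozen at nominal values, which is the same reasoning that reduces the parameter space from nineteen to five dimensions above.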
Automated importance sampling via optimal control for stochastic reaction networks: A Markovian projection–based approach
2024, Journal of Computational and Applied Mathematics
We propose a novel alternative approach to our previous work (Ben Hammouda et al., 2023) to improve the efficiency of Monte Carlo (MC) estimators for rare event probabilities for stochastic reaction networks (SRNs). In the same spirit as Ben Hammouda et al. (2023), an efficient path-dependent measure change is derived based on a connection between determining optimal importance sampling (IS) parameters within a class of probability measures and a stochastic optimal control formulation, corresponding to solving a variance minimization problem. In this work, we propose a novel approach to address the encountered curse of dimensionality by mapping the problem to a significantly lower-dimensional space via a Markovian projection (MP) idea. The output of this model reduction technique is a low-dimensional SRN (potentially even one-dimensional) that preserves the marginal distribution of the original high-dimensional SRN system. The dynamics of the projected process are obtained by solving a related optimization problem via a discrete regression. By solving the resulting projected Hamilton–Jacobi–Bellman (HJB) equations for the reduced-dimensional SRN, we obtain projected IS parameters, which are then mapped back to the original full-dimensional SRN system, resulting in an efficient IS-MC estimator for rare event probabilities of the full-dimensional SRN. Our analysis and numerical experiments reveal that the proposed MP-HJB-IS approach substantially reduces the MC estimator variance, resulting in a lower computational complexity in the rare event regime than standard MC estimators.
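The core idea of a path-wise change of measure for SRNs fits in a few lines: simulate under tilted propensities and multiply each path by its exact likelihood ratio, so the estimator stays unbiased while the variance depends on the tilt. The sketch below does this for a one-dimensional pure birth process with an assumed fixed tilt; it is a minimal illustration of IS for SRNs, not the Markovian-projection/HJB construction of the paper, which chooses the change of measure automatically.

```python
import numpy as np

def rare_prob_is(c, c_tilt, x0, T, N, n_paths=20000, rng=None):
    """Estimate p = P(X_T >= N) for a pure birth process X -> X + 1 with
    propensity c * x by simulating under the tilted propensity c_tilt * x
    and reweighting each path with its exact likelihood ratio."""
    rng = np.random.default_rng(1) if rng is None else rng
    est = 0.0
    for _ in range(n_paths):
        t, x, logw = 0.0, x0, 0.0
        while True:
            a, a_t = c * x, c_tilt * x                 # true / tilted rates
            tau = rng.exponential(1.0 / a_t)
            if t + tau > T:                            # no more jumps before T
                logw += -(a - a_t) * (T - t)           # jump-free LR factor
                break
            logw += np.log(a / a_t) - (a - a_t) * tau  # per-jump LR factor
            t, x = t + tau, x + 1
        if x >= N:
            est += np.exp(logw)                        # indicator * weight
    return est / n_paths

# Illustrative numbers: with c = 0.5 and T = 1, reaching N = 12 from x0 = 1
# is rare (~3e-5); tilting toward c_tilt = 2.5 concentrates samples there.
print(rare_prob_is(c=0.5, c_tilt=2.5, x0=1, T=1.0, N=12))
```

Each jump contributes the factor (a / a_t) * exp(-(a - a_t) * tau) and the final jump-free interval contributes exp(-(a - a_t) * (T - t)); their product is the Radon-Nikodym weight of the sampled path, so the reweighted indicator has the rare-event probability as its mean under the tilted dynamics.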
Multiscale insights into the radiation effect of semiconductor materials
2024, Nuclear Instruments and Methods in Physics Research, Section B: Beam Interactions with Materials and Atoms
We develop a multiscale framework capturing the primary interaction, displacement cascade generation and evolution, and the realistic observable damaged structure, based on Monte Carlo, Molecular Dynamics, and Object Kinetic Monte Carlo with an effective defect-information transfer scheme. The radiation effects of phosphorus-doped n-type silicon materials are simulated with this multiscale framework, and the results are consistent with experimental observations. The simulations show that the incident particle type has a large effect on the concentration and distribution of defects, which is closely related to the Primary Knock-On Atom (PKA) energy spectra and the evolution of the defects. A negative correlation between defect concentration and fluence rate is attributed to the dissipation of subsequent PKA kinetic energy in the pre-cascade region. By comparing interatomic bond lengths, we show that the doped atom can change the displacement threshold energy, thereby affecting the defect concentration.
A low-rank complexity reduction algorithm for the high-dimensional kinetic chemical master equation
2024, Journal of Computational Physics
It is increasingly realized that taking stochastic effects into account is important in order to study biological cells. However, the corresponding mathematical formulation, the chemical master equation (CME), suffers from the curse of dimensionality and thus solving it directly is not feasible for most realistic problems. In this paper we propose a dynamical low-rank algorithm for the CME that reduces the dimensionality of the problem by dividing the reaction network into partitions. Only reactions that cross partitions are subject to an approximation error (everything else is computed exactly). This approach, compared to the commonly used stochastic simulation algorithm (SSA, a Monte Carlo method), has the advantage that it is completely noise-free. This is particularly important if one is interested in resolving the tails of the probability distribution. We show that in some cases (e.g. for the lambda phage) the proposed method can drastically reduce memory consumption and run time and provide better accuracy than SSA.
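To make the curse of dimensionality concrete: for d species, each truncated at n_max copies, the CME probability vector has (n_max + 1)^d entries. The sketch below assembles and solves the full, unreduced CME as a linear ODE dp/dt = A p for an assumed two-species toy network; the reactions, rates, and truncation bound are illustrative only, and the point is the size of the generator, not the paper's low-rank method.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative (assumed) network:
#   0 -> A (k1),  A -> 0 (k2*a),  A -> B (k3*a),  B -> 0 (k4*b)
k1, k2, k3, k4 = 4.0, 1.0, 2.0, 1.0
n = 32                                        # states per species (truncation)
idx = lambda a, b: a * n + b                  # flatten (a, b) -> vector index

A = np.zeros((n * n, n * n))                  # CME generator, n^2 x n^2
for a in range(n):
    for b in range(n):
        i = idx(a, b)
        moves = [(k1, a + 1, b) if a + 1 < n else None,          # 0 -> A
                 (k2 * a, a - 1, b),                             # A -> 0
                 (k3 * a, a - 1, b + 1) if b + 1 < n else None,  # A -> B
                 (k4 * b, a, b - 1)]                             # B -> 0
        for m in moves:
            if m is None or m[0] == 0.0:      # disabled at boundary / rate 0
                continue
            rate, a2, b2 = m
            A[idx(a2, b2), i] += rate         # probability inflow to target
            A[i, i] -= rate                   # outflow from state (a, b)

p0 = np.zeros(n * n); p0[idx(0, 0)] = 1.0     # start with no molecules
p = expm(A * 2.0) @ p0                        # p(t = 2) via matrix exponential
P = p.reshape(n, n)
print("boundary mass:", P[-1, :].sum() + P[:, -1].sum())  # small => truncation ok
```

Already at two species the generator is a 1024 x 1024 matrix; at ten species the same truncation would give roughly 10^15 states, which is exactly the scaling that the partition-based low-rank approximation is designed to avoid.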
Multilevel optimization for policy design with agent-based epidemic models
2024, Journal of Computational Science
Epidemiological modeling has a long history and is often used to forecast the course of infectious diseases or pandemics. These models come in different complexities, ranging from systems of simple ordinary differential equations (ODEs) to complex agent-based models (ABMs). The former allow a fast and straightforward optimization, but are limited in accuracy, detail, and parameterization, while the latter can resolve spreading processes in detail, but are extremely expensive to optimize. Epidemiological modeling can also be used to propose and design non-pharmaceutical interventions such as lockdowns. In general, their optimal design often leads to nonlinear optimization problems. We consider policy optimization in a prototypical situation modeled as both ODE and ABM, review numerical optimization approaches, and propose a heterogeneous multilevel approach based on combining a fine-resolution ABM and a coarse ODE model. Numerical experiments, in particular with respect to convergence speed, are given for illustrative examples.
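The division of labor in such a multilevel scheme is easiest to see at the coarse level: a cheap ODE model sits inside the optimization loop, and the expensive ABM is consulted only to check or correct candidate optima. The sketch below shows the coarse level only, with an assumed SIR model, a contact-reduction control u, and an ad hoc cost that trades the epidemic peak against intervention severity; every number here is an illustrative assumption, not the paper's setup.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize_scalar

beta, gamma, N = 0.3, 0.1, 1e5    # assumed SIR parameters and population

def peak_infected(u):
    """Coarse SIR model; the control u in [0, 1] scales down contacts."""
    rhs = lambda t, y: [-(1 - u) * beta * y[0] * y[1] / N,
                        (1 - u) * beta * y[0] * y[1] / N - gamma * y[1],
                        gamma * y[1]]
    sol = solve_ivp(rhs, (0, 300), [N - 10, 10, 0], max_step=1.0)
    return sol.y[1].max()                      # peak number of infected

def cost(u):
    # Assumed trade-off: epidemic peak (fraction of N) vs. control effort.
    return peak_infected(u) / N + 0.1 * u**2

res = minimize_scalar(cost, bounds=(0.0, 1.0), method="bounded")
print(f"coarse-level optimal intervention intensity u* = {res.x:.3f}")
```

In a full multilevel run, the u* found here would seed a small number of far more expensive ABM evaluations, so most optimizer iterations never touch the agent-based model.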
Koopman-based surrogate models for multi-objective optimization of agent-based systems
2024, Physica D: Nonlinear Phenomena
Agent-based models (ABMs) provide an intuitive and powerful framework for studying social dynamics by modeling the interactions of individuals from the perspective of each individual. In addition to simulating and forecasting the dynamics of ABMs, the demand to solve optimization problems to support, for example, decision-making processes naturally arises. Most ABMs, however, are non-deterministic, high-dimensional dynamical systems, so objectives defined in terms of their behavior are computationally expensive. In particular, if the number of agents is large, evaluating the objective functions often becomes prohibitively time-consuming. We consider data-driven reduced models based on the Koopman generator to enable the efficient solution of multi-objective optimization problems involving ABMs. In a first step, we show how to obtain data-driven reduced models of non-deterministic dynamical systems (such as ABMs) that depend potentially nonlinearly on control inputs. We then use them in the second step as surrogate models to solve multi-objective optimal control problems. We first illustrate our approach using the example of a voter model, where we compute optimal controls to steer the agents to a predetermined majority, and then using the example of an epidemic ABM, where we compute optimal containment strategies in a prototypical situation. We demonstrate that the surrogate models effectively approximate the Pareto-optimal points of the ABM dynamics by comparing the surrogate-based results with test points, where the objectives are evaluated using the ABM. Our results show that when objectives are defined by the dynamic behavior of ABMs, data-driven surrogate models support or even enable the solution of multi-objective optimization problems.
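The data-driven ingredient behind such surrogates can be illustrated with plain EDMD: fit a finite matrix K so that a dictionary of observables evolves linearly in conditional expectation under the dynamics, then work with K instead of the expensive model. The sketch below uses an assumed scalar noisy map and a monomial dictionary; the paper's approach is based on the Koopman generator with control-dependent dynamics, which this sketch deliberately omits.

```python
import numpy as np

rng = np.random.default_rng(2)

def step(x):
    """Assumed noisy one-step dynamics, a stand-in for an expensive ABM."""
    return 0.9 * x - 0.1 * x**3 + 0.05 * rng.standard_normal(x.shape)

def dictionary(x):
    """Monomial dictionary psi(x) = [1, x, x^2, x^3]."""
    return np.stack([np.ones_like(x), x, x**2, x**3], axis=1)

# Snapshot pairs (x_k, y_k = step(x_k)) sampled across the state space.
X = rng.uniform(-2, 2, size=5000)
Y = step(X)
PsiX, PsiY = dictionary(X), dictionary(Y)

# EDMD: least-squares fit of K with psi(y) ~ psi(x) K (row convention), so
# expected dictionary values evolve linearly; K is the cheap surrogate.
K, *_ = np.linalg.lstsq(PsiX, PsiY, rcond=None)

# Surrogate forecast of E[x_10] from x0 = 1.5 via repeated application of K.
psi = dictionary(np.array([1.5]))
for _ in range(10):
    psi = psi @ K
print("surrogate E[x_10] ~", psi[0, 1])   # component 1 of psi is x itself
```

Because the monomial dictionary is not invariant under the cubic map, the forecast is only approximate; in practice the dictionary (and, in the generator setting, the control dependence) is chosen so that the surrogate is accurate over the region the optimizer actually explores.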