1 Introduction

The recurrent, unprovoked seizures associated with epilepsy can have a devastating effect on those with this disorder. Basic parts of everyday life, such as driving and obtaining employment, become very difficult. While many people with epilepsy can control their seizures with medication, roughly thirty percent do not respond to this type of treatment and therefore seek out alternatives such as surgery (The Epilepsy Foundation 2009). The surgical procedure involves resecting the seizing portion of the brain, often part of the cortex, while avoiding any areas that provide vital functions such as speech, memory, and vision (The Epilepsy Foundation 2009). Because this is a very invasive procedure that does not guarantee success, other alternatives are being investigated. One of these is automatic feedback control, in which subdural electrodes on the cortical surface would detect the seizure and apply an electrical signal to disrupt the abnormal electrocorticogram (ECoG) activity.

This method of treatment is currently being studied by experimentalists. It has been shown that the application of electric fields to rat cortex in vitro can modulate the behavior of seizure-like waves (Richardson et al. 2005). In vivo experiments on rats demonstrated that stimulation via proportional feedback can temporarily suppress seizure activity (Gluckman et al. 2001). A subsequent set of experiments showed that an increase in the magnitude of the proportional feedback gain corresponds to decreases in both seizure amplitude (measured as a reduction in the amplitude variance) and Teager energy (Colpan et al. 2007). The Teager energy can be reduced by a decrease in the amplitude of a signal or a lowering of the frequencies in its power spectrum.

While these experiments suggest further exploration, they are all restricted to animal models. Therefore, to gain insight into the feasibility of human use, we turn first to mathematical models. The stochastic partial differential equation (SPDE) model of the cortex used here can support seizure-like oscillations that are qualitatively and quantitatively similar, in frequency of maximum power and propagation speed, to those seen in humans with epilepsy (Kramer et al. 2005). We previously demonstrated that various methods of feedback control can suppress these simulated seizures (Kramer et al. 2006), and we added the capability of studying spatial properties of feedback control, such as electrode size and spacing (Lopour and Szeri 2008).

Our aim in the present work is to make this model more biologically relevant by refining the representation of feedback control. This will facilitate future comparison with experimental data. There are four key improvements in our approach that are presented in this article:

  1. We verify the strong convergence of the numerical solution to the SPDE model.

  2. Based on the studies of convergence, we utilize a smaller step size in our simulations, thereby allowing the inclusion of a more detailed electrode profile.

  3. We develop a better-motivated model of the signal measured by an electrode on the cortical surface. This model is used to calculate the applied electrical signal for feedback control.

  4. Feedback control is performed with a new algorithm incorporating an integral component. This ensures that the applied signal is charge-balanced, which is thought to minimize damage to cortical tissue.

The organization of the paper is as follows. We first briefly review the SPDE cortical model that will be used in our simulations (Section 2), and then we discuss the convergence of its numerical solutions (Section 3). Next, we present the new model for electrode measurements (Section 4). Finally, we incorporate these improvements into simulations of feedback control, while implementing a new integral control law (Section 5).

2 Cortical model

To model the electrical activity of the human cortex, including seizure waves, we choose a set of stochastic PDEs that has been developed and adapted over the past decade (Liley et al. 1999, 2002; Steyn-Ross et al. 2003). The mesoscale nature of this model makes it well suited to EEG-based applications such as epilepsy, sleep (Wilson et al. 2006), and anesthesia (Steyn-Ross et al. 2004) because it is based on length scales similar to those of commercial electrode arrays. It is a mean-field model, meaning that all of its variables represent spatially averaged properties of populations of neurons. This is similar to the manner in which an electrode provides a measurement based on the collective behavior of many neurons.

The equations were restated in a dimensionless form by Kramer et al. (2007). This is the formulation of the model we will use here; it is a system of eight coupled nonlinear PDEs with stochastic inputs:

$$\frac{\partial \tilde{h}_e}{\partial \tilde{t}} = 1 - \tilde{h}_e + \Gamma_e \big(h^0_e - \tilde{h}_e\big) \tilde{I}_{ee} + \Gamma_i \big(h^0_i - \tilde{h}_e\big) \tilde{I}_{ie} + u$$
(1)
$$\frac{\partial \tilde{h}_i}{\partial \tilde{t}} = 1 - \tilde{h}_i + \Gamma_e \big(h^0_e - \tilde{h}_i\big) \tilde{I}_{ei} + \Gamma_i \big(h^0_i - \tilde{h}_i\big) \tilde{I}_{ii}$$
(2)
$$\left( \frac{1}{T_e} \frac{\partial}{\partial \tilde{t}} + 1 \right)^2 \tilde{I}_{ee} = N^{\beta}_e \tilde{S}_e\big[\tilde{h}_e\big] + \tilde{\phi}_e + P_{ee} + \tilde{\Gamma}_1$$
(3)
$$\left( \frac{1}{T_e} \frac{\partial}{\partial \tilde{t}} + 1 \right)^2 \tilde{I}_{ei} = N^{\beta}_e \tilde{S}_e\big[\tilde{h}_e\big] + \tilde{\phi}_i + P_{ei} + \tilde{\Gamma}_2$$
(4)
$$\left( \frac{1}{T_i} \frac{\partial}{\partial \tilde{t}} + 1 \right)^2 \tilde{I}_{ie} = N^{\beta}_i \tilde{S}_i\big[\tilde{h}_i\big] + P_{ie} + \tilde{\Gamma}_3$$
(5)
$$\left( \frac{1}{T_i} \frac{\partial}{\partial \tilde{t}} + 1 \right)^2 \tilde{I}_{ii} = N^{\beta}_i \tilde{S}_i\big[\tilde{h}_i\big] + P_{ii} + \tilde{\Gamma}_4$$
(6)
$$\left( \frac{1}{\lambda_e} \frac{\partial}{\partial \tilde{t}} + 1 \right)^2 \tilde{\phi}_e = \frac{1}{\lambda_e^2} \frac{\partial^2 \tilde{\phi}_e}{\partial \tilde{x}^2} + \left( \frac{1}{\lambda_e} \frac{\partial}{\partial \tilde{t}} + 1 \right) N^{\alpha}_e \tilde{S}_e\big[\tilde{h}_e\big]$$
(7)
$$\left( \frac{1}{\lambda_i} \frac{\partial}{\partial \tilde{t}} + 1 \right)^2 \tilde{\phi}_i = \frac{1}{\lambda_i^2} \frac{\partial^2 \tilde{\phi}_i}{\partial \tilde{x}^2} + \left( \frac{1}{\lambda_i} \frac{\partial}{\partial \tilde{t}} + 1 \right) N^{\alpha}_i \tilde{S}_e\big[\tilde{h}_e\big]$$
(8)

All variables are dimensionless and are functions of time (\(\tilde{t}\)) and one spatial dimension (\(\tilde{x}\)). The \(\tilde{h}\) state variable is the mean soma potential for a neuronal population, while \(\tilde{I}\) represents postsynaptic activation due to local, long-range, and subcortical inputs, and \(\tilde{\phi}\) is a long-range (corticocortical) input. The subscripts e and i denote affiliation with the excitatory and inhibitory neuron populations, respectively; variables with two subscripts represent the transmission of information from one population to another, e.g. \(\tilde{I}_{ie}\) is the postsynaptic activation of the excitatory population due to inputs from the inhibitory population. In Eq. (1) we have added the variable u to represent the signal applied by a cortical surface electrode for feedback control. This will be discussed further in Section 5. For descriptions of all model variables and parameters, please refer to Table 1.

Table 1 Dimensionless variables and parameters of the SPDE cortical model

To appreciate the model as a whole, let us first look at the equations governing the excitatory neuron population, depicted graphically in Fig. 1. Equation (1) for \(\tilde{h}_e\) is reminiscent of the leaky integrate-and-fire model of a neuron, where the derivative of the membrane potential equals the resting potential minus the membrane potential plus any existing current inputs (Dayan and Abbott 2001). Here, the resting potential is “1” due to the dimensionless nature of the system. The inputs are \(\tilde{I}_{ee}\) and \(\tilde{I}_{ie}\), which evolve according to (3) and (5), respectively, based on three types of synaptic input: local, long-range, and subcortical.

  1. Local inputs, such as those from within the same macrocolumn, are represented by terms of the form \(N^{\beta}_e \tilde{S}_e [\tilde{h}_e]\), where \(\tilde{S}_e\) is a dimensionless sigmoid function:

    $$ \tilde{S}_e \big[\tilde{h}_e\big] = \frac{1}{1+\exp\big[-\tilde{g}_e \big(\tilde{h}_e - \tilde{\theta}_e\big)\big]} $$
    (9)

    This converts the mean soma potential of the excitatory population to its mean firing rate.

  2. Long-range inputs represent signals from other cortical macrocolumns and are defined by \(\tilde{\phi}_e\). The behavior of this variable is governed by (7). Note the similarity of this equation to the standard PDE wave equation; the idea that cortical tissue can support wave propagation is central to our simulation of epileptic seizures.

  3. Subcortical inputs are predominantly from the thalamus and contain both constant (\(P_{ee}\)) and stochastic (\(\tilde{\Gamma}_1\)) parameters. We define the stochastic term by

    $$ \tilde{\Gamma}_1 = \alpha_{ee} \sqrt{P_{ee}} \, \xi_1 \big[\tilde{x}, \tilde{t}\big] \;, $$
    (10)

    where \(\alpha_{ee}\) is a constant and \(\xi_1\) is zero-mean, Gaussian white noise in time and one spatial dimension. When the SPDEs are solved numerically, the cumulative effect of this stochastic process will be Brownian motion. To ensure that the properties of this signal remain constant regardless of step size, we scale the discrete randomly generated numbers R(m,n) by the simulation time step:

    $$ \xi_1 \big[\tilde{x},\tilde{t}\big] = \frac{R(m,n)}{\sqrt{\Delta \tilde{t}}} \;. $$
    (11)

    The variables m and n are indices of space and time, so a single point is represented by spatial position \(\tilde{x} = m \Delta \tilde{x}\) at time \(\tilde{t} = n \Delta \tilde{t}\). Note that the discrete update form of (11) is

    $$ \xi_1 \big[\tilde{x},\tilde{t}\big]\Delta \tilde{t} = \sqrt{\Delta \tilde{t}} \, R(m,n) \;, $$
    (12)

    which will be used in all numerical experiments.

    Fig. 1

    Flow chart representation of model Eqs. (1), (3), (5), and (7), which govern the excitatory population. The boxes describe the physiological significance of the model variables and parameters listed beneath them. Note that feedback occurs through the sigmoid function \(\tilde{S}_e\), which is a function of \(\tilde{h}_e\). This population is also coupled to the inhibitory population through local inputs described by \(\tilde{S}_i[\tilde{h}_i]\). For a cell-based depiction of the model, see Steyn-Ross et al. (1999)

Thus, Eqs. (1), (3), (5), and (7) govern the excitatory population, while the remaining equations represent the inhibitory population and have exactly the same form. Together they compose the full cortical model.
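For illustration, the MATLAB fragment below sketches how the sigmoid of Eq. (9) and the scaled noise increments of Eq. (12) might be generated in a discrete simulation. This is a minimal sketch rather than the published solver; the variable names and example grid sizes are our own, and the actual parameter values are those listed in Table 1.

```matlab
% Sketch: sigmoid firing-rate function of Eq. (9) and noise increments of
% Eq. (12). Names and example values are placeholders; see Table 1.
Se = @(he, ge, thetae) 1 ./ (1 + exp(-ge .* (he - thetae)));  % mean firing rate

M  = 100;                 % number of spatial grid points (example)
N  = 1000;                % number of time steps (example)
dt = 1e-4;                % dimensionless time step (example)

R  = randn(M, N);         % independent standard Gaussian draws R(m,n)
dW = sqrt(dt) * R;        % xi_1*dt = sqrt(dt)*R(m,n), the update of Eq. (12)

alpha_ee = 1.0;  Pee = 11.0;              % example constants
noise_ee = alpha_ee * sqrt(Pee) * dW;     % per-step contribution of Gamma_1 to Eq. (3)
```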

There are several parameters that are especially relevant to the following numerical studies. The parameter \(P_{ee}\) represents input from the population of subcortical excitatory neurons (such as those in the thalamus), and \(\Gamma_e\) denotes the influence of synaptic input on the mean soma potential. Changes in these parameters allow for transition between normal cortical function (\(P_{ee} = 11.0\) and \(\Gamma_e = 1.42 \times 10^{-3}\)) and the hyperexcited “seizure” state of the SPDE model (say, \(P_{ee} = 548.0\) and \(\Gamma_e = 0.8 \times 10^{-3}\)). At low levels of excitation corresponding to low levels of \(P_{ee}\), the mean soma potential of the excitatory neurons \(\tilde{h}_e\) produces random fluctuations similar to those seen in an EEG measurement. However, at increased levels of subcortical excitation, the simulated cortex develops large amplitude seizure-like oscillations. Our goal is to suppress this pathological behavior via feedback control consisting of measurements from the cortical surface and the application of a potential based on those measurements.

3 Strong convergence of numerical solutions

Before performing simulations of feedback control, we must ensure that we can obtain accurate numerical solutions to this system of SPDEs. We will use a predictor-corrector algorithm written in MATLAB, so the accuracy of the solution will be determined by our choice of step sizes in space and time. In addition to considering the system equations and solution method, we shall assume that a typical cortical surface electrode is of order 1 cm in diameter. While the previously used step size of 7 mm (Lopour and Szeri 2008; Kramer et al. 2006) may have accurately solved the differential equations, it was not small enough for sufficient spatial resolution of the behavior of the cortical tissue underneath the electrodes. We will use much smaller step sizes in order to achieve both of these objectives.

3.1 Method

To determine the magnitude of these step sizes, we will examine the strong convergence of solutions to the cortical model. This will be accomplished by generating equivalent Brownian paths at several step sizes and demonstrating that the solutions converge as the step size decreases (note that this differs from weak convergence, which looks at the expected value of the solution over all possible Brownian paths) (Higham 2001). This task will be complicated by the fact that both the stochastic inputs and the numerical solutions vary in space and time.

Recall from (12) that the grid of stochastic inputs is defined by R, which consists of M independent Brownian paths, each of length N. This corresponds to M points in space at a reference step size of \(\Delta x = \Delta x_0\) and N points in time at a reference step size of \(\Delta t = \Delta t_0\). Therefore, we denote individual points by R(m,n), ∀ m = 1,...,M and ∀ n = 1,...,N. Then we can represent the same Brownian paths at a coarser step size \(2\Delta t_0\) by adding together every two adjacent elements in time:

$$ \tilde{R}(m,n) = R(m,2n-1) + R(m,2n)\;, $$
(13)

where n = 1,...,N/2 (Gaines 1995). We do not need any special scaling factors here because this combination of neighboring terms is consistent with the definition of a Brownian path. Similarly, we can represent the stochastic input at step size \(2\Delta x_0\) by adding together adjacent elements in space and scaling to keep the variance constant (Gaines and Lyons 1997):

$$ \tilde{R}(m,n) = \frac{1}{\sqrt{2}} \left(R(2m-1,n) + R(2m,n) \right), $$
(14)

where m = 1,...,M/2. The factor \(1/\sqrt{2}\) is necessary because the stochastic inputs are independent in the spatial direction; it will be used for the relative scaling of the inputs at different step sizes for the purpose of determining convergence, but will not be present in a typical simulation of feedback control.

Now we can directly compare numerical solutions at decreasing step sizes (e.g. \(\Delta x = 4\Delta x_0\), \(2\Delta x_0\), \(\Delta x_0\)) under equivalent stochastic inputs. We want to verify that the solution converges as we approach \(\Delta x_0\).
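For concreteness, the coarsening operations of Eqs. (13) and (14) can be written in a few lines of MATLAB; the sketch below assumes the fine-grid increments are stored in an M-by-N array R (space by time) with M and N even, and the array names are our own.

```matlab
% Sketch of Eqs. (13)-(14): represent the same Brownian forcing on grids that
% are twice as coarse in time or in space. R is M-by-N (space x time).
M = 64;  N = 1000;
R = randn(M, N);                          % fine-grid increments R(m,n)

% Coarsen in time (Eq. 13): sum adjacent pairs of increments; no rescaling is
% needed because summing increments is consistent with a Brownian path.
R_time = R(:, 1:2:end) + R(:, 2:2:end);   % size M-by-N/2

% Coarsen in space (Eq. 14): sum adjacent rows and rescale by 1/sqrt(2) so
% that the variance at each grid point is unchanged.
R_space = (R(1:2:end, :) + R(2:2:end, :)) / sqrt(2);   % size M/2-by-N
```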

3.2 Results

First, we look at the convergence in time using the method described above. We remove the spatial terms from the cortical model to reduce it to an ODE and then perform simulations with decreasing values of the time step. These indicate that the solution converges around \(\Delta t = 5\times 10^{-4}\) s (Fig. 2). The two smallest time steps in the figure, \(\Delta t = 5\times 10^{-4}\) and \(2.5\times 10^{-4}\) s, give very similar results for \(h_e\).

Fig. 2

Convergence of numerical solutions as the time step is decreased using the method described in Section 3.1. Here the spatial terms in the model have been removed to reduce it to an ODE. We used \(\Delta t_0 = 2.5\times 10^{-4}\) s, and we plotted the solutions for \(\Delta t = 8\Delta t_0\), \(4\Delta t_0\), \(2\Delta t_0\), and \(\Delta t_0\). The two smallest time steps give overlapping results, indicating that the solution has converged. This study was done with typical excitation \(P_{ee} = 11.0\) and \(\Gamma_e = 1.42\times 10^{-3}\)

Next, we study the convergence in space. We begin with a Δx that is smaller than 1 cm because we desire to resolve the solution across an electrode. As Δx decreases, we see that the accuracy of the solution improves; however, this does not give us a clear indication of which step size to choose. The amount of improvement seems to be the same for each reduction in Δx. We solve this problem by looking at the numerical worst-case scenario—a sharp transition between uncontrolled cortex and a single electrode with proportional feedback. We then choose Δx based on its ability to resolve this sharp spatial change (Fig. 3). While this figure shows that the differences between the step sizes are still subtle, it seems that the largest one, Δx = 0.448 mm, does not provide enough detail to show the sharp transition between cortex and electrode. The smaller step sizes appear to be more accurate and provide very similar solutions. Because it will be less computationally intensive to use Δx = 0.224 mm (or \(\Delta \tilde{x} = 0.0008\) dimensionless), we choose this as the step size for our simulations.

Fig. 3

Convergence of numerical solutions over an electrode nonlinearity as the spatial step size is decreased using the method from Section 3.1. The full simulation spanned 22.4 mm, but here we show only 3 mm of uncontrolled cortex and 2 mm of cortex underneath an electrode while proportional feedback is applied. This figure shows solutions for \(\Delta x = 4\Delta x_0\) (dashed), \(2\Delta x_0\) (solid), and \(\Delta x_0\) (solid), where \(\Delta x_0 = 0.112\) mm. Note that the two smallest spatial step sizes give very similar, overlapping solutions for \(\tilde{h}_e\), while \(4\Delta x_0\) gives an inaccurate result. Therefore, we choose the step size \(\Delta x = 2\Delta x_0 = 0.224\) mm as a balance between accuracy and simulation cost. Other relevant parameters were \(\Delta t = 2\times 10^{-6}\) s, N = 80,000, \(P_{ee} = 548.0\), and \(\Gamma_e = 0.8\times 10^{-3}\)

In order to verify this in the typical case with no feedback control, we plot numerical solutions with Δx = 0.448, 0.224, and 0.112 mm (Fig. 4). Because the spatial solution appears to have converged at these step sizes, our choice of Δx = 0.224 mm is valid. The last task is to choose a value of Δt. We would like to use the largest time step for which the solution converges because this will result in the shortest computation time; this value is \(\Delta \tilde{t} = 0.0001\) (dimensionless) or \(\Delta t = 4\times 10^{-6}\) s. Because we already showed that accurate solutions can be obtained with much larger values of Δt, this is an acceptable choice.

Fig. 4

Convergence of numerical solutions as the spatial step size is decreased using the method described in Section 3.1. Here we used \(\Delta x_0 = 0.112\) mm, and we plotted the solutions for \(\Delta x = 4\Delta x_0\) (dashed), \(2\Delta x_0\) (solid), and \(\Delta x_0\) (solid). The two smallest spatial step sizes give very similar results, indicating that the solution has converged; this justifies our choice of step size, Δx = 0.224 mm. This figure was created with typical excitation \(P_{ee} = 11.0\) and \(\Gamma_e = 1.42\times 10^{-3}\), and N = 80,000 time steps were used at \(\Delta t = 2\times 10^{-6}\) s

Therefore, the step sizes used in all of the following cortical simulations will be Δx = 0.224 mm and \(\Delta t = 4\times 10^{-6}\) s.

4 Model of feedback control

Our previous simulations of feedback control (Kramer et al. 2006; Lopour and Szeri 2008) utilized two key assumptions: (1) the signal measured by an electrode on the cortical surface is proportional to \(h_e\), the mean soma potential of the excitatory neuron population, and (2) a voltage applied to the surface of the cortex via electrode directly affects the average soma voltage in that region. The first assumption allows us to define the control effort u in terms of \(\tilde{h}_e\) (in this case, \(\tilde{h}_e\) would represent the measured voltage), and the second assumption implies that the expression for u can be added directly to the SPDE model in Eq. (1). While the latter assumption appears to be valid, there is evidence that we cannot write u as an explicit function of \(\tilde{h}_e\) as the first assumption suggests. It is likely that the voltage sensed by a surface electrode differs from the averaged soma voltage, \(\tilde{h}_e\).

First, it is important to realize that the signal measured by an electrode is a function of the extracellular currents in the tissue, rather than the intracellular somatic potential (Nunez and Srinivasan 2006). We define the signal sensed at a point on the cortical surface to be \(\tilde{h}_m\). Then, to understand the difference between \(\tilde{h}_e\) and \(\tilde{h}_m\), we consider a pyramidal neuron in the cortex with one excitatory synapse as shown in Fig. 5. Say that the pyramidal neuron receives excitatory input due to a proximal synapse in layer 4 (Fig. 5(a)); this will cause intracellular flow of ions that will induce a current dipole with sources (+) on the apical dendrite near the surface and sinks (−) near the soma. The surface electrode \(\tilde{h}_m\) will sense the extracellular current source near the surface and will thus depolarize. The soma potential \(\tilde{h}_e\) will also depolarize due to the excitatory input; therefore, in this case, both \(\tilde{h}_e\) and the surface electrode show a depolarization. On the other hand, suppose that the pyramidal neuron receives excitatory input due to a distal synapse in layer 1 (Fig. 5(b)). Because the input is still excitatory, the neuron will depolarize, and this will be reflected in the soma potential \(\tilde{h}_e\). However, this input will cause an extracellular current dipole with reverse polarity; there will be a source (+) near the soma and current sinks (−) near the surface. This means that the voltage sensed by the surface electrode will show a hyperpolarization. Therefore, in this case, the deflection of \(\tilde{h}_e\) and the signal seen by the surface electrode are different (Kandel et al. 2000).

Fig. 5

Relationship of the sensed signal \(\tilde{h}_m\) to the mean soma potential \(\tilde{h}_e\). In (a), an excitatory input synapses near the soma, causing a depolarization in \(\tilde{h}_e\). Similarly, the dipole current generated by this synaptic event involves current sources near the cortical surface, which are manifested as a depolarization in the electrode measurement \(\tilde{h}_m\). However, when the excitatory input occurs near the surface as in (b), the current dipole is reversed, causing opposite deflections in \(\tilde{h}_e\) and \(\tilde{h}_m\). This figure is modeled after Box 46-1 in Kandel et al. (2000)

This implies that we should no longer use \(\tilde{h}_e\) as the measured electrical potential in our expression for the control effort u. Instead, the measurement will be a function of the currents in the cortex due to synaptic inputs, denoted as \(\tilde{h}_m\). We will refer to this as the sensed signal. Then, for the purposes of feedback control, the applied effort u will be a function of \(\tilde{h}_m\).

4.1 Basic form of model

To determine the composition of the sensed signal, we must consider the extracellular current flows due to three types of synaptic input (local intracortical input, long-range corticocortical input, and subcortical input). To do this, we need to know whether the inputs are excitatory or inhibitory and whether they synapse near the surface or near the soma. In what follows, we take care to distinguish depolarization in the electrode measurement from depolarization of the soma.

  1. Local intracortical inputs, \(N^{\beta}_e \tilde{S}_e\) and \(N^{\beta}_i \tilde{S}_i\). Within a cortical macrocolumn, excitatory synapses tend to occur close to the surface, while inhibitory synapses are located near the soma (Nunez and Srinivasan 2006; Spruston 2008). Thus, the excitatory inputs are as depicted in Fig. 5(b), and the inhibitory inputs have the geometry of Fig. 5(a) but with the opposite sign (because Fig. 5 depicts excitatory inputs). Each of these configurations will cause a hyperpolarization in the electrode measurement; therefore, both terms will have negative signs in the measurement model: \(- A N^{\beta}_e \tilde{S}_e - B N^{\beta}_i \tilde{S}_i\), where A and B are positive constant weights to be determined.

  2. Long-range corticocortical input, \(\tilde{\phi}_e\). Corticocortical inputs are exclusively excitatory (Steyn-Ross et al. 2003; Liley et al. 2002; Nunez and Srinivasan 2006) and tend to synapse near the surface (Nunez and Srinivasan 2006; Spruston 2008). More specifically, layers 2 and 3 of the cortex seem to have a higher density of corticocortical inputs (Kandel et al. 2000; Nieuwenhuys 1994). As shown in Fig. 5(b), this input type will cause a hyperpolarization in the electrode signal and will thus have a negative sign in the measurement model: \(- C \tilde{\phi}_e\), where C is a positive constant weighting factor to be determined.

  3. Subcortical inputs, \((P_{ee}+\tilde{\Gamma}_1)\) and \((P_{ie}+\tilde{\Gamma}_3)\). While the distribution of these synapses is not clear-cut, it seems that subcortical inputs terminate most densely in layer 4 near the soma (Kandel et al. 2000; Nieuwenhuys 1994). Because \(P_{ee}\) is an excitatory input of the type shown in Fig. 5(a), it will have a depolarizing effect on the electrode measurement; therefore, we give it a positive sign: \(+D(P_{ee}+\tilde{\Gamma}_1)\), where D is a positive constant weight. On the other hand, \(P_{ie}\) is inhibitory and will thus have a hyperpolarizing effect: \(- E(P_{ie}+\tilde{\Gamma}_3)\), where E is a constant weighting factor. The values of D and E are to be determined.

Incorporating all three types of synaptic input gives us this basic expression:

$$\text{measured current} \sim -A N^{\beta}_e \tilde{S}_e - B N^{\beta}_i \tilde{S}_i - C \tilde{\phi}_e + D\big(P_{ee}+\tilde{\Gamma}_1\big) - E\big(P_{ie}+\tilde{\Gamma}_3\big),$$
(15)

with A, B, C, D, E of positive sign but (so far) unknown magnitude. The consequence of these inputs is only evident after synaptic transmission. Therefore, we include a rate constant for this process by using an equation similar to that of \(\tilde{I}_{ee}\) in the SPDE model. Let \(\tilde{I}_m\) represent the current measured at the cortical surface and \(T_m\) represent a rate constant. Then

$$\left( \frac{1}{T_m} \frac{\partial}{\partial \tilde{t}} + 1 \right)^2 \tilde{I}_m = F \Big( -A N^{\beta}_e \tilde{S}_e - B N^{\beta}_i \tilde{S}_i - C \tilde{\phi}_e + D\big(P_{ee}+\tilde{\Gamma}_1\big) - E\big(P_{ie}+\tilde{\Gamma}_3\big) \Big) \;,$$
(16)

where A, B, C, D, E, and F are positive constant weights. We choose \(T_m = 12.0\) to match the rate constant of the excitatory population, \(T_e\). The values of A through E will depend on the number of synapses of each type and the average distance of the synapse from the soma. The coefficient F is a gain parameter that will scale the magnitude of all the synaptic inputs; this ensures that they have the appropriate amount of influence over the electrode measurement \(\tilde{I}_m\). In addition, we can think of F as containing the effective resistance of the cortex. Recall that the electrode measurement is determined by currents in the cortex, yet the components on the right side of (16) are based on voltages. Because the currents produced by these voltages can be calculated with Ohm’s Law (Kandel et al. 2000), the gain parameter F provides the necessary conversion.

To complete the model of the electrode measurement \(\tilde{h}_m\), we must account for the reversal potential of the cortical neurons. This determines the direction of current flow associated with the inputs described above (we previously assumed that the neurons were at resting potential). We once again take our cue from the SPDE model and define

$$ \tilde{h}_m \equiv \big(h^0_e - \tilde{h}_e\big) \tilde{I}_m = \left( \frac{45 - h_e}{-70} \right) \tilde{I}_m \;. $$
(17)

Thus, (16) and (17) comprise a complete model of the potential sensed by a cortical surface electrode, \(\tilde{h}_m\). In our simulations of feedback control, the applied electric field u will be a function of this variable.
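As an illustration of how Eqs. (16) and (17) might be advanced in time, the following MATLAB sketch uses a simple forward-Euler step; the published simulations use a predictor-corrector scheme, and all names and placeholder values here are our own (the drive terms would come from the cortical state at each time step, and the coefficients A through E are discussed in Section 4.2).

```matlab
% Sketch of one forward-Euler update of the measurement model, Eqs. (16)-(17).
% All values below are placeholders so the fragment runs on its own.
A = 0.41; B = 0.09; C = 0.46; D = 0.034; E = 0.006; F = 1;   % example weights
Tm = 12.0;  dt = 1e-4;                                       % rate constant, time step
he0 = 45/(-70);           % dimensionless excitatory reversal potential, per Eq. (17)
Nbe = 1; Nbi = 1; Se_e = 0.5; Se_i = 0.5; phie = 0.1;        % placeholder state
Pee = 11.0; Pie = 11.0; G1 = 0; G3 = 0; he = 1.0;            % placeholder state
Im = 0; Im_dot = 0;                                          % filter state

% Right-hand side of Eq. (16): weighted sum of the three synaptic input types.
drive = -A*Nbe*Se_e - B*Nbi*Se_i - C*phie + D*(Pee + G1) - E*(Pie + G3);

% (1/Tm d/dt + 1)^2 Im = F*drive, rewritten as two first-order updates.
Im_ddot = Tm^2 * (F*drive - Im) - 2*Tm*Im_dot;
Im_dot  = Im_dot + dt * Im_ddot;
Im      = Im     + dt * Im_dot;

% Eq. (17): sensed signal, weighted by the reversal-potential term.
hm = (he0 - he) * Im;
```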

4.2 Estimation of coefficients

We have not yet assigned numerical values to the coefficients A through E. To do this, we think of them as the percentage of pyramidal neuron synapses due to each source. For example, A will represent the percentage of synapses on any given pyramidal neuron that come from other excitatory neurons in the same macrocolumn. There are three physiological relationships that allow us to determine these values:

  1. The number of synapses on pyramidal cells due to local cortical neurons is roughly equal to the number of synapses due to cortical neurons in other macrocolumns or in the contralateral hemisphere (Abeles 1991). This implies that A + B = C.

  2. Approximately 98 percent of synapses on pyramidal cells are corticocortical, while 2 percent are thalamocortical (Abeles 1991; Nunez and Srinivasan 2006). This implies that A + B + C = 0.98 and D + E = 0.02.

  3. Roughly 90 percent of all cortical synapses are excitatory and 10 percent are inhibitory (Abeles 1991; Braitenberg and Schüz 1998). This implies that A = 9B and D = 9E.
After solving the above equations, we account for the fact that synapses near the soma will have a greater influence on the electrode measurement (Nunez and Srinivasan 2006) by multiplying B, D, and E by a factor of two.
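The calculation implied by these three relationships, plus the factor-of-two weighting above, is short enough to state explicitly. The MATLAB sketch below reproduces our reading of that procedure; the printed values are illustrative, and the published Table 2 entries may be rounded or scaled slightly differently.

```matlab
% Sketch: solve the constraints of Section 4.2 for the coefficients A-E.
%   (1) A + B = C                        (local = long-range synapse counts)
%   (2) A + B + C = 0.98, D + E = 0.02   (corticocortical vs. thalamocortical)
%   (3) A = 9B, D = 9E                   (excitatory vs. inhibitory synapses)
C = 0.98 / 2;               % from (1) and (2): 2C = 0.98
B = C / 10;   A = 9*B;      % from (3) applied to A + B = C
E = 0.02 / 10; D = 9*E;     % from (3) applied to D + E = 0.02

% Weight perisomatic synapses more heavily (factor of two on B, D, E), then
% rescale so that the coefficients sum to 1, as in Table 2.
B = 2*B;  D = 2*D;  E = 2*E;
w = [A B C D E];  w = w / sum(w);
fprintf('A=%.3f  B=%.3f  C=%.3f  D=%.3f  E=%.3f\n', w);
```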

While this method of estimation may seem crude, others have achieved similar results through more detailed probabilistic analysis (Liley and Wright 1994). Both are listed in Table 2 for comparison; note that each set of coefficients has been scaled to add to 1. In simulation, when comparing the two sets of coefficients A through E, the only difference in the resulting \(\tilde{h}_m\) appears to be an offset. Because offsets are not reflected in EEG measurements, this difference is inconsequential. Hence, two completely independent approaches provide essentially equivalent coefficients for the sensed signal.

Table 2 Values of the coefficients for Eq. (16)

4.3 Verification of full model

We can verify our model by returning to the hypothetical pyramidal neuron described at the beginning of Section 4 and based on Box 46-1 in Kandel et al. (2000). As before, say that we are modeling the electrode measurement of a pyramidal neuron with excitatory inputs in both cortical layers 1 and 4. If a majority of the inputs occur in layer 1 near the cortical surface as in Fig. 5(b), \(\tilde{h}_e\) and \(\tilde{h}_m\) will have similar dynamics, but a hyperpolarization in one signal will be a depolarization in the other; the signals will be negatively correlated. This behavior is seen in our model for \(\tilde{h}_m\). If we run a simulation with the typical set of parameters (where local and corticocortical connections dominate because \(P_{ee}\) is low), we see that \(\tilde{h}_e\) and \(\tilde{h}_m\) are negatively correlated (top of Fig. 6). On the other hand, when the strongest input is near the soma in layer 4 as in Fig. 5(a), it will have the same effect on both signals, and they will be positively correlated. This, too, is demonstrated by the measurement model. We can simulate a large excitatory input near the soma by increasing the value of \(P_{ee}\); when we run the simulation with this change, we see that \(\tilde{h}_e\) and \(\tilde{h}_m\) become positively correlated (bottom of Fig. 6). Thus, the measurement model accurately reproduces the physiological effects of varying cortical inputs, and we will use it in subsequent simulations of feedback control. The applied electric field u will be a function of \(\tilde{h}_m\) as opposed to \(\tilde{h}_e\).
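The correlation itself is straightforward to quantify; a minimal sketch (with placeholder time series standing in for the simulation output at a single spatial point) is:

```matlab
% Sketch of the correlation check described above. Replace the placeholder
% signals with h_e and h_m extracted from the simulation at one grid point.
t    = (0:1e-4:1)';
he_t =  sin(40*t) + 0.1*randn(size(t));   % placeholder stand-in for h_e
hm_t = -sin(40*t) + 0.1*randn(size(t));   % placeholder stand-in for h_m

he_t = he_t - mean(he_t);                 % remove offsets, as in Fig. 6
hm_t = hm_t - mean(hm_t);
c = corrcoef(he_t, hm_t);
fprintf('correlation(h_e, h_m) = %.3f\n', c(1,2));  % < 0 at low Pee, > 0 at high Pee
```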

Fig. 6

Comparison of \(\tilde{h}_m\) (solid) and \(\tilde{h}_e\) (dashed) with two sets of parameters. At typical levels of excitation where \(P_{ee} = 11.0\) and \(\Gamma_e = 1.42\times10^{-3}\), the two signals are negatively correlated (top subfigure), as predicted by our physiological analysis. This negative correlation is especially noticeable if the synaptic time delay is taken into account by shifting \(\tilde{h}_e\) slightly to the left. However, when we simulate a strong excitatory input near the soma by setting \(P_{ee} = 1{,}000.0\), the signals become positively correlated (bottom subfigure). This was also predicted by the physiology of the cortex, and thus helps justify our model of \(\tilde{h}_m\). Note that for both sets of parameters, the signal offsets (means) were removed to facilitate a direct comparison

For reference, we compare \(\tilde{h}_e\) and \(\tilde{h}_m\) at seizure parameters (\(\Gamma_e = 0.8 \times 10^{-3}\) and \(P_{ee} = 548.0\)) in Fig. 7. In this case, the signals have a large negative correlation, but very similar dynamics. This suggests that it will be possible to suppress seizures with feedback control based on \(\tilde{h}_m\) (as it was with \(\tilde{h}_e\) in Kramer et al. 2006), although we may need to use a gain of the opposite sign.

Fig. 7

Comparison of the electrode measurement \(h_m\) (mV, solid) and the mean soma potential \(h_e\) (mV, dashed) at levels of subcortical excitation that cause seizure-like oscillations, \(P_{ee} = 548.0\) and \(\Gamma_e = 0.8\times10^{-3}\). Here we see that the two signals are negatively correlated, but have very similar dynamics. This indicates that it should be possible to perform feedback control using our new measurement model \(\tilde{h}_m\) (as we did previously with \(\tilde{h}_e\) in Kramer et al. 2006), although a positive gain may be necessary

5 Simulation of integral control

In choosing a function to represent the applied electric field u, we start with the concept of proportional feedback control. This is the simplest and most common type of control—intuitively, the applied effort should be proportional to the error between the measured signal and its desired value. In this case, we can define proportional control as

$$ u = a(x,t) \big(\bar{h}_m + b\big) \;, $$
(18)

where a(x,t) is the control gain and \(\bar{h}_m\) is the measured electrode potential; it is calculated by taking the spatial average of \(\tilde{h}_m\) (17) under each electrode. The parameter b is a constant offset that can be tuned to achieve the desired equilibrium value of \(\tilde{h}_e\). While this control algorithm is able to suppress the seizure-like oscillations of the model, it would be difficult to implement safely. Whenever stimulation is applied, it is important that the process be chemically reversible in order to prevent damage due to the production of new chemical species. There is a threshold for reversibility called the “reversible charge injection limit,” which signifies the maximum allowable charge injection before the polarity is reversed (Robblee and Rose 1990). Because a proportional controller does not penalize the amount of effort used (i.e. the magnitude of the applied electric field), it relies on large signals of only one sign, which would exceed this threshold over time. The chemical processes associated with this type of stimulation would therefore be irreversible and damaging to cortical tissue. The simulation results based on this type of control have been presented in previous publications (Kramer et al. 2006), and we do not repeat them here.

To improve on this method, we may consider adding a derivative or integral component to the controller, or even using all three terms to create a proportional-integral-derivative (PID) controller (Franklin et al. 2002). The derivative term increases or decreases the control effort based on the rate of change of the error. This can reduce the response time of the controller because the derivative term “anticipates” the behavior of the system. Simulation results with a PD controller were presented in Kramer et al. (2006).

Because a derivative (PD) controller still relies on the same harmful, unbalanced voltages mentioned previously, we choose instead to implement a controller with an integral term:

$$ u = a(x,t) \big(\bar{h}_m + b\big) + c(x,t) \int u \,dt \;. $$
(19)

Here, c(x,t) is another gain term. It will be negative, meaning that this new term will oppose the total integral of the applied voltage u. If the integral of u is positive, it will add a negative component to the applied voltage, and vice versa; in this way, it pushes the integral of u to zero. In other words, it forces the applied signal to be charge-balanced and thus safe for cortical tissue. Because we have also included the proportional control term, this feedback setup will still suppress the seizure oscillations. Note that this is different from traditional integral control, which is based on the integral of the error between the desired value of the signal and its actual value.
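In discrete time, the control law (19) amounts to a two-line update per electrode. The sketch below is illustrative only (the full spatiotemporal implementation is provided as Online Resource 1), the variable names are our own, and the gains follow the example values given in the Fig. 9 caption.

```matlab
% Sketch of Eq. (19) at a single electrode. u_int carries the running
% time-integral of the applied signal and is zero before control is switched on.
a = 8;  b = -0.1;  c = -8;    % example gains (cf. Fig. 9 caption)
dt = 4e-6;                    % time step [s], from Section 3
u_int = 0;

% ... inside the simulation loop, after computing hm_bar, the spatial average
% of hm under this electrode at the current time step:
hm_bar = 0;                               % placeholder value for illustration
u      = a * (hm_bar + b) + c * u_int;    % applied potential this step
u_int  = u_int + u * dt;                  % accumulate the integral of u
```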

In addition to adding the integral term and using the new measurement model \(\tilde{h}_m\), we incorporate the step sizes determined in Section 3. Because we are using smaller increments of space, we can make one further improvement: we add a more detailed electrode profile to the feedback simulation. We previously assumed that the electrodes maintained a constant profile across their surface while measuring or applying the stimulus (i.e. every point on the electrode sensed or provided the same value) and that no tissue beyond the edge of the electrode was affected by its activity. However, it has been shown experimentally that this is not the case (Suesserman et al. 1991). Here, we take the first step towards a realistic electrode model by including a smooth falloff at the electrode edges. The falloff is incorporated into the function a(x,t), so that the gain varies between zero (over uncontrolled cortex) and \(a_{\max}\) (under the electrodes) via a hyperbolic tangent function. This gain function is used in the application of control to indicate that the influence of the electrode decreases with distance, and it is also applied during sensing with \(a_{\max} = 1\) to indicate that the cortical tissue has less impact on the electrode measurement as distance increases. We have not yet included any variation over the surface of the electrode, but this is certainly an adjustment that can be considered in the future.
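One simple way to realize such a profile is with a difference of hyperbolic tangents, as in the sketch below; the falloff width w is our own illustrative choice rather than a value taken from the simulations.

```matlab
% Sketch of a spatial gain profile a(x) that is approximately a_max over the
% electrode and falls smoothly to zero outside its edges.
dx    = 0.224;                 % spatial step [mm], from Section 3
x     = 0:dx:22.4;             % positions along the simulated cortex [mm]
xl    = 5.6;   xr = 16.8;      % hypothetical electrode edges (11.2 mm wide)
w     = 0.5;                   % falloff width [mm] (assumed)
a_max = 8;                     % gain under the electrode (a_max = 1 when sensing)

a_x = a_max * 0.5 * (tanh((x - xl)/w) - tanh((x - xr)/w));
plot(x, a_x); xlabel('x [mm]'); ylabel('a(x)');   % visualize the profile
```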

With this approach, we are in a position to simulate the suppression of seizure waves using total integral feedback control as defined in (19). The results shown here were generated using an Intel Core 2 Duo 2.13 GHz processor, and the calculations took roughly 5 min at the highest spatial and temporal resolutions. The code was written and executed in MATLAB and has been provided as supplementary online material (Online Resource 1).

Figure 8 shows an example of the seizure-like oscillations, represented by the mean soma potential of the excitatory population \(h_e\) as it varies in space and time. More specifically, we have simulated an uncontrolled strip of cortex 200 mm long over 0.5 s, and the value of \(h_e\) [mV] is represented by grayscale. The seizure waves occur due to our choice of \(\Gamma_e = 0.8\times10^{-3}\) and a Gaussian \(P_{ee}\) distribution with a maximum value of 548.0. They spontaneously arise in locations with sufficiently high \(P_{ee}\) (here, this “hot spot” is at x = 100 mm) and travel outward until the level of excitation is too low to support them. This is why the waves terminate before they reach the edges of the simulation space.

Fig. 8

(a) Seizure waves traveling on the simulated cortex, with parameters \(P_{ee} = 548.0\) and \(\Gamma_e = 0.8\times10^{-3}\). The characteristics of the wave are determined by the distribution of \(P_{ee}\); here it is a Gaussian curve, so the wave starts in the center where \(P_{ee}\) is at its maximum (548.0) and propagates outward until the level of excitation is too low to support it. In this case, no feedback control is applied, so the waves will reoccur indefinitely. (b) Plot of \(h_e\) in time extracted from (a) at 100.8 mm

Figure 9 shows the effect of total integral feedback control on this seizure-like behavior. In the first half of the time interval, we see a seizure wave emanating from the center of the space; then, at 0.25 s the control is switched on, and the pathological seizure activity quickly disappears. In this case, the feedback stimulation is applied via five electrodes that are 11.2 mm across. The geometry of these electrodes can be clearly seen in Fig. 10, which shows the value of the applied signal u [mV] for the same simulation. Here, we see that the applied voltage is zero for the first half of the time interval, and then each electrode varies between positive and negative voltages once the controller is turned on. While the largest potential applied by a single electrode is roughly 60 mV, the total signal applied by each electrode is very close to zero, due to our choice of control law u. This is the desired result because it indicates that each electrode applies a balanced signal. Lastly, note that it is not necessary to have electrodes covering the entire length of the seizure wave. When we place the electrodes at the center of the “hot spot,” they are able to halt the outward motion of the wave. For more information on the parameters used to create the figures, please refer to the figure captions.
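The charge-balance property can be checked directly from the stored control signal: the time-integral of u under each electrode should be close to zero once the controller has been active. A minimal sketch of that check (with placeholder arrays so that it runs on its own) is:

```matlab
% Sketch: integrate the applied signal over time for each electrode. In the
% actual simulation, U (space x time), t, and electrode_idx would come from
% the stored results; placeholders are used here.
t = 0:4e-6:0.5;  U = zeros(100, numel(t));  electrode_idx = {1:10, 21:30};

for k = 1:numel(electrode_idx)
    uk      = mean(U(electrode_idx{k}, :), 1);   % electrode-averaged signal
    total_k = trapz(t, uk);                      % time-integral of applied u
    fprintf('electrode %d: integral of u = %.4g mV*s\n', k, total_k);
end
```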

Fig. 9

(a) Plot of \(h_e\) [mV] in space and time. In the first 0.25 s, we see a seizure wave traveling on the simulated cortex because we have set the excitation parameters to \(P_{ee} = 548.0\) and \(\Gamma_e = 0.8\times10^{-3}\). At 0.25 s (dashed line), the integral controller (19) is turned on, and we see that the seizure-like waves are immediately suppressed. For this simulation, we used \(\tilde{h}_m\) as the electrode measurement, and control was applied via five electrodes of width 11.2 mm with a profile defined by the hyperbolic tangent function. The controller gains were \(a_{\max} = 8\), b = −0.1, and c = −8. (b) Plot of \(h_e\) in time extracted from (a) at 100.8 mm

Fig. 10

(a) Plot of applied effort u [mV] in space and time, corresponding to the simulation of feedback control in Fig. 9. This shows that the potential applied by the electrodes is zero until t = 0.25 s, when the controller is turned on (dashed line). Then each electrode oscillates between positive and negative signals, indicative of the “charge-balanced” nature of the control. For this simulation, the average magnitude of total applied effort over all five electrodes was 0.23 mV. This value will approach zero as t increases, and it can also be reduced by increasing the integral control gain, c. Note that, for clarity, the grayscale in this figure is the opposite of the ones used in Figs. 8 and 9. (b) Stimulation applied by the center electrode in (a)

Thus, in Figs. 9 and 10 we have demonstrated that the new model for electrode measurements \(\tilde{h}_m\) can be used to suppress seizure waves via feedback control. If the controller u contains the integral of the total applied effort, then this can be done in a manner that is thought to be safe for cortical tissue.

6 Discussion

Here we have presented several novel approaches to the exploration of a model of feedback control for epileptic seizures in humans. We first verified the strong convergence of numerical solutions to the model of the cortex, paying special attention to discontinuities that may occur at electrode edges. This allowed us to choose appropriate step sizes for our simulations; because the spatial step size Δx was small relative to the size of the electrode, we were able to incorporate a more detailed electrode profile into the simulation. Then, based on evidence that the mean soma potential \(\tilde{h}_e\) cannot be used as the measurement for feedback control, we developed a new model \(\tilde{h}_m\) to represent the measurement of cortical surface electrodes. This model was based on the currents flowing in the cortex and was used for all simulations of feedback control. Those simulations utilized a new control algorithm containing the total integral of the applied potential u. Not only did this succeed in suppressing the seizure-like oscillations, but it also guaranteed that the applied signal would be charge-balanced and therefore safe for cortical tissue.

Of course, there are always improvements to be made. In this work, we have assumed that each electrode can be simultaneously sensing and applying the control signal. This is not realistic; ideally, we would model separate electrodes for these two tasks, with the geometric properties chosen to match existing experimental setups. Also, as mentioned previously, it would be possible for us to improve the electrode profile used in simulation. Here, we incorporated a simple profile to demonstrate our capability to do so, but it would be more accurate to base our choice on existing theories of the potential difference across an electrode surface (Rubinstein et al. 1987). It may also be possible to account for electrochemical changes that occur in the vicinity. Lastly, we note that our use of the phrase “charge-balanced” should be taken lightly. Our controller measures and applies a voltage, so the integral term pushes the total voltage (over time) towards zero. Although, in concept, this is similar to having a charge-balanced signal, it does not guarantee that the applied charges will be balanced and safe. This could be remedied by utilizing a controller that measures cortical potential and applies a current. Not only would this be more accurate, but it would facilitate future comparisons with experiments, most of which are done in this manner (Colpan et al. 2007; Sunderam et al. 2006).

Validation via experimentation is only one of the many possible future directions of this work. For example, we could extend our model to two dimensions and use it to study seizure waves, or we could simulate experimental phenomena such as irregular, spiral, and plane cortical waves (Schiff et al. 2007). Related theoretical work has suggested that pre-processing of data using a Kalman filter can provide greater flexibility in the control of waves while minimizing the amount of energy needed to do so (Schiff and Sauer 2008). This concept could be readily applied to the simulations discussed here. Another possible avenue of future work is the investigation of spatial properties of our feedback model. We have the capability to do simulations with any number of electrodes at any size and spacing, which is a luxury not afforded to experimentalists. Theoretical work in this area may provide insight into important questions such as: how does the size of a seizure relate to the number and sizes of the electrodes needed to control it successfully? Where should the electrodes be placed for maximum effectiveness? What are the necessary resolutions for sensing and actuation? It is our hope that this work will act as a stepping stone to such intriguing questions.