WELLS STEPHEN (GB)
TAYLOR JOSEPH (GB)
MORRIS PAUL (GB)
Claims

1. A computer-implemented method for determining values of biological parameters of a cell, comprising: receiving membrane voltage readings of the cell corresponding to N time points; for a state vector comprising the biological parameters and a plurality of interdependent and time-dependent state variables, including a membrane voltage variable: setting an initial guess of the state vector; performing an optimization of the state vector by: setting the membrane voltage variable to be equal to a corresponding membrane voltage reading at a number of time points within the N time points; and determining an updated value of the state vector by optimizing an objective function that operates on values of the membrane voltage variable and corresponding membrane voltage readings; and repeating the optimization with: the initial guess of the state vector set to the updated value of the state vector; and the membrane voltage variable set to be equal to a corresponding membrane voltage reading at fewer time points within the N time points.

2. The computer-implemented method of claim 1, wherein the cell is a neuron, wherein the plurality of biological parameters relate to one or more ion channels of the neuron, and wherein the state vector is of a conductance model of the one or more ion channels of the neuron.

3. The computer-implemented method of claim 1 or claim 2, wherein: performing the optimization of the state vector comprises setting the membrane voltage variable to be equal to a corresponding membrane voltage reading at intervals of M time points within the N time points, wherein M is less than N; and repeating the optimization comprises using an increased value of M.

4. The computer-implemented method of claim 3, comprising repeating the optimization over each time point of the time window using increasing values of M until M is equal to or greater than N.

5. The computer-implemented method of any preceding claim, wherein the objective function includes a term based on a square of the difference between the membrane voltage variable and the corresponding membrane voltage reading for each time point.

6. The computer-implemented method of any preceding claim, wherein optimizing the objective function comprises performing a constrained non-linear optimization over each time point to determine an updated value of the state vector that minimises the objective function.

7. The computer-implemented method of claim 6, wherein constraints of the constrained non-linear optimization comprise time-dependent rate equations of the state variables, wherein the rate equations define a relationship of a value of a state variable at a time point to values of the state vector at one or more previous or subsequent time points.

8. The computer-implemented method of claim 6 or claim 7, wherein constraints of the constrained non-linear optimization comprise search ranges comprising an upper search boundary and a lower search boundary for each state variable and each biological parameter.

9. The computer-implemented method of any preceding claim, wherein optimizing the objective function comprises adjusting values of the plurality of biological parameters and/or state variables until the objective function satisfies a termination condition.

10. The computer-implemented method of claim 9, wherein the termination condition comprises one of: performing a maximum number of iterations of the constrained non-linear optimization; a value of the objective function being less than a threshold; and a gradient of the objective function being less than a threshold.

11. The computer-implemented method of any preceding claim, wherein setting the initial guess of the state vector comprises: setting a search range for each of the state variables and each of the biological parameters; and setting the value of each of the state variables at each time point and each of the biological parameters as a value within the corresponding search range.

12. The computer-implemented method of any preceding claim, wherein the repeated optimization is performed iteratively, a plurality of times.

13. The computer-implemented method of any preceding claim, wherein the repeated optimization results in recursive piecewise data assimilation.

14. The computer-implemented method of any preceding claim, further comprising: identifying one or more ion channels of the cell affected by disease based on a comparison between the updated state vector and an expected state vector.

15. The computer-implemented method of claim 14, comprising identifying one or more ion channels of the cell affected by application of a medicament based on a comparison between a state vector related to the application of the medicament and a state vector of the cell without the application of the medicament.

16. The computer-implemented method of any preceding claim, further comprising simulating the effect of a medicament on the cell using the updated state vector.

17. A data structure defining a state vector including values of biological parameters of a cell determined by the computer-implemented method of any of claims 1 to 13.

18. A non-transitory computer-readable storage medium comprising computer program code configured to cause one or more processors to execute the computer-implemented method of any of claims 1 to 16.

19. An apparatus comprising: one or more processors; and a memory comprising computer program code configured to cause the one or more processors to execute the computer-implemented method of any of claims 1 to 16.

20. The apparatus of claim 19, further comprising data logging hardware for receiving membrane voltage readings.

21. A method for determining values of biological parameters of a cell, comprising: obtaining, ex vivo, membrane voltage readings from a cell corresponding to N time points; and using the apparatus of claim 19 or claim 20 to determine the values of the biological parameters of the cell.
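To make the iterative loop of claims 1, 3 and 4 concrete, the following is a minimal Python sketch on a toy one-parameter model. Everything here (the one-variable decay model, the function names, the doubling schedule for M, and SciPy's bounded scalar minimiser standing in for the optimisation) is an illustrative assumption, not the claimed implementation:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def evolve_piecewise(theta, readings, dt, M):
    """Evolve a toy model dV/dt = -theta*V over N points, setting the
    membrane voltage variable equal to the reading every M time points."""
    v = np.empty_like(readings)
    v[0] = readings[0]
    for i in range(1, len(readings)):
        # pin the voltage variable to the data at the start of each segment
        prev = readings[i - 1] if (i - 1) % M == 0 else v[i - 1]
        v[i] = prev - theta * prev * dt
    return v

def fit_recursive(readings, dt, M0):
    """Claim 1 loop: optimise, then repeat with the updated guess and
    fewer pinned time points (larger M) until M reaches N (claim 4)."""
    N, M, theta = len(readings), M0, 0.5        # 0.5: arbitrary initial guess
    while M <= N:
        objective = lambda th: np.sum(
            (evolve_piecewise(th, readings, dt, M) - readings) ** 2)
        theta = minimize_scalar(objective, bounds=(0.0, 10.0),
                                method='bounded').x
        M *= 2                                   # fewer re-injection points
    return theta
```

On synthetic readings generated with a known decay rate, fit_recursive recovers that rate to within the discretisation error.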
The m, n are activation state variables and the h are inactivation variables. Their generic voltage and time dependence is given by Eqs. 5 and 6 with the appropriate parameters. Activation variables have a positive voltage-slope parameter whereas inactivation variables have a negative one. In some examples, the method may be used to perform constrained nonlinear optimization to determine the values of the state variables and biological parameters that minimize the cost function (Eq. 1). The minimisation may be subject to the equality constraints given by the conductance model (Eq. 2) and the inequality constraints set by the search ranges of the biological parameters and the intervals of variation of the state variables. The function to minimize is the Lagrangian, where the λi are the Lagrange multipliers associated with the equality constraints and the μj are the inequality barrier parameters. In some examples, a primal-dual approach is used to introduce the vector z dual to the inequality barrier μ, which minimizes the violation of the barrier. The Lagrangian combining the primal (x, λ) and dual (z) variables may be converted into a linear system satisfying the Karush-Kuhn-Tucker conditions, whose solution gives the search direction of the state vector at the next iteration. Here H is the Hessian of the Lagrangian, J = ∇g is the Jacobian of the constraints, I is the identity matrix and E is a matrix of all 1s. The system in Eq. 11 can be solved iteratively, generating the new state vector and Lagrange multipliers by stepping a distance α along the search direction. In some examples, the system may be solved with the interior point optimization code (IPOPT) incorporating the MA97 solver for sparse linear systems.
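As a concrete, if toy, illustration of the constrained minimisation described above, the snippet below uses SciPy's SLSQP solver in place of IPOPT/MA97; the two-variable objective, the equality constraint and the box bounds are all illustrative assumptions, not the model of Eqs. 1 and 2:

```python
import numpy as np
from scipy.optimize import minimize

reading = 0.7   # a single "membrane voltage reading" (illustrative)

result = minimize(
    # objective: synchronisation error plus a second quadratic term
    lambda x: (x[0] - reading) ** 2 + (x[1] - 0.2) ** 2,
    x0=[0.5, 0.5],                                   # initial estimate of the state vector
    method='SLSQP',
    bounds=[(0.0, 1.0), (0.0, 1.0)],                 # inequality constraints: search ranges
    constraints=[{'type': 'eq',                      # equality constraint: a stand-in
                  'fun': lambda x: x[1] - 2 * x[0]}] # for the model equations
)
```

The Karush-Kuhn-Tucker stationarity condition gives x = (0.22, 0.44) for this toy problem, which the solver reproduces.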
1.2. Equality and inequality constraints in conductance models

As mentioned above, in some examples the optimisation is bounded by search ranges (also referred to as inequality constraints) for the state variables and the biological parameters. Example search ranges are defined in Table 2.

Inequality constraints:
for each component of the state vector. The choice of the lower and upper bounds of each biological parameter range may be guided by biological considerations. For example, the neuron membrane voltage typically oscillates between -90 mV and +45 mV, which can define the maximum possible search range for activation and inactivation voltage thresholds. As a further example, ion channel maximal conductances typically lie between 0 and 500 mS/cm². As a further example, the recovery time constants are generally commensurate with the width of action potentials (0-3 ms) for fast ion channels (Na) and with the oscillation period (1-100 ms) for the other ion channels. The selected boundaries should bracket the solution. The penalty for selecting too wide a search range is a longer computation time. Otherwise, the choice of boundaries, or for that matter of the initial estimates of the state variables and biological parameters within the boundaries, is independent of the eventual solution. Section 2 below demonstrates this independence: piecewise data assimilation converges towards the same biological parameter solution irrespective of the choice of the initial estimate of the state vector or of the search ranges.
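The search ranges, and an initial estimate drawn inside them, can be set up as below; the parameter names and ranges simply echo the biological considerations above and are illustrative, not the contents of Table 2:

```python
import numpy as np

# Illustrative search ranges guided by the biology discussed above
search_ranges = {
    'V_threshold_mV':   (-90.0, 45.0),   # activation/inactivation voltage thresholds
    'g_max_mS_per_cm2': (0.0, 500.0),    # ion channel maximal conductance
    'tau_Na_ms':        (0.0, 3.0),      # fast (Na) recovery time constant
    'tau_other_ms':     (1.0, 100.0),    # other ion channel time constants
}

rng = np.random.default_rng(seed=1)
# initial estimate: any value within the corresponding search range
initial_estimate = {name: rng.uniform(lo, hi)
                    for name, (lo, hi) in search_ranges.items()}
```

Any draw inside the ranges is acceptable as a starting point, consistent with the independence of the solution from the initial estimate demonstrated in section 2.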
Equality constraints: As mentioned above, constraints of the optimisation may include rate equations (Eq. 2) of the conductance model that define relationships between a state variable at a time point and the state vector at previous time points. In some examples, the rate equations of Eq. 2 may be linearized over 3 data points (Simpson) to define the equality constraints. In some examples, the rate equations of Eq. 2 may be linearized over 5 data points (Boole) if higher accuracy is needed. (i) Simpson's method linearizes the equations of motion over 3 points, t_i, t_i+1, t_i+2. In some examples, the mid-point at t_i+1 can be further interpolated to second order in Δt by cubic Hermite interpolation (Eq. 14). Mid-point interpolation can advantageously avoid high-frequency oscillations in the hidden state variables (the gate variables). Figure 2 shows an example Lorenz model constructed by synchronizing a variable x to data, with spurious oscillations of the hidden state variables y and z. The figure compares the original Lorenz model 218-1, 218-2, 218-3 with the prediction of its twin model constructed by data assimilation 220-1, 220-2, 220-3. The data assimilation protocol omits the Hermite polynomial interpolation of the mid-point in Simpson's method (Eq. 14). Data assimilation synchronizes the state variable x(t) to the data (218-1, top panel). State variables y and z are not synchronized to their respective data in the middle and bottom panels but are inferred from the synchronization of the x variable in the top panel. The absence of Hermite interpolation of the mid-point t_i+1 in the (t_i, t_i+1, t_i+2) interval generates a model that fits the x(t) data correctly but has incorrect parameters. As a result, the prediction of the model with these incorrect parameters shows spurious oscillations in the y and z variables (220-2, 220-3, middle and bottom panels).
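The 3-point Simpson constraint and the Hermite mid-point interpolation can be written as residuals of a generic equation of motion dx/dt = f(x); the exact residual forms below are assumptions consistent with the scheme described above, not the patent's equations verbatim:

```python
import math

def simpson_residual(x0, x1, x2, f, dt):
    """Equality-constraint residual over (t_i, t_i+1, t_i+2): zero when the
    triple is consistent with dx/dt = f(x) under Simpson's rule."""
    return (x2 - x0) - (dt / 3.0) * (f(x0) + 4.0 * f(x1) + f(x2))

def hermite_midpoint(x0, x2, f, dt):
    """Cubic Hermite estimate of the mid-point from the end points and
    their derivatives (second order in dt)."""
    return 0.5 * (x0 + x2) + (dt / 4.0) * (f(x0) - f(x2))
```

For dx/dt = -x, the exact solution exp(-t) satisfies both relations to high order in dt.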
Proof of removal of oscillations: Taylor expand x(t) at t_i and t_i+2 about the mid-point t_i+1 and sum the two expressions, retaining second-order terms:

x_i = x_i+1 - Δt ẋ_i+1 + (Δt²/2) ẍ_i+1, and x_i+2 = x_i+1 + Δt ẋ_i+1 + (Δt²/2) ẍ_i+1.

Summing gives: x_i + x_i+2 = 2 x_i+1 + Δt² ẍ_i+1. Interpolating with a second-order polynomial gives a constant second derivative over the 3-point interval; substituting this derivative yields the interpolated mid-point x_i+1.

(ii) Boole's method linearizes the equations of motion over 5 consecutive points, t_i, t_i+1, t_i+2, t_i+3, t_i+4. The 3 mid-points are interpolated in the same manner. In practice, interpolating only two consecutive mid-points in the Boole method can prevent the spurious oscillations seen in Figure 2; omitting one mid-point interpolation saves significant computation time without creating spurious oscillations or reducing accuracy. A comparison of the accuracy of the Boole and Simpson methods is shown in Table 3. As illustrated by the table, the Boole method can provide significantly better accuracy when Δt is larger.

Table 3: Comparing the accuracy of parameters estimated with Simpson and Boole interpolation at different time steps Δt. The parameters compared are parameters of the Lorenz system.

1.3. Adaptive step size Δt

When assimilating biological data with approximate models, longer data assimilation windows have the merit of reducing uncertainty in the biological parameter estimates. This is because statistical averaging of experimental error over more data points reduces parameter uncertainty as σ/√N. Secondly, longer windows can increase the curvature of the cost function at the minimum, giving a deeper and narrower minimum. The combination of both effects increases the accuracy of the parameter solutions. However, increasing N may make the Jacobian of the constraints singular (Eq. 11) by introducing redundant information; this is a failure to satisfy the observability criterion. Increasing N also increases the size of the system (Eq. 11) and hence the computation time and rounding error.
Therefore, in some examples, the duration of the assimilation window may be extended while keeping N constant. This trade-off is possible because most model parameters are constrained by action potential waveforms localised in time (~2 ms), which are a small fraction of the assimilation window (~1000 ms). A comparatively small number of parameters control sub-threshold oscillations. Therefore, some examples may use an adaptive mesh that samples action potentials with a fine mesh size Δt when the membrane voltage reading exceeds a threshold, and sub-threshold oscillations with a coarser mesh size mΔt otherwise. This test on the membrane voltage reading can be implemented in the example code below by setting the multiplier m according to the value of Vmem. The assimilation of noisy data has shown that doubling the duration of the assimilation window can provide a ten-fold increase in parameter accuracy. Therefore, in some examples, the method may include setting an adaptive step size (or mesh size) of the N time points based on a value of a membrane voltage waveform. In some examples, setting an adaptive step size may comprise: setting a finite time step, Δt, if the value of the membrane voltage waveform is greater than a threshold membrane voltage; and setting a larger time step equal to an integer multiple of the finite time step, i.e. mΔt, if the membrane voltage is less than or equal to the threshold membrane voltage. In this way, action potentials (corresponding to Vmem > threshold membrane voltage) of the membrane voltage waveform are sampled with greater temporal resolution than sub-threshold oscillations (corresponding to Vmem ≤ threshold membrane voltage). As a result, the assimilation window that the N time points span can be lengthened, resulting in a more accurate estimation of the biological parameters.
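The adaptive step-size rule can be sketched as follows; the threshold of -20 mV and the multiplier m = 4 are illustrative assumptions:

```python
import numpy as np

def adaptive_mesh(v, v_threshold=-20.0, m=4):
    """Return the indices at which a membrane-voltage trace v is sampled:
    every time step where v exceeds the threshold (action potentials),
    every m-th step elsewhere (sub-threshold oscillations)."""
    idx = [0]
    i, n = 0, len(v)
    while i < n - 1:
        step = 1 if v[i] > v_threshold else m   # fine mesh only above threshold
        i = min(i + step, n - 1)
        idx.append(i)
    return np.array(idx)
```

A flat trace with one spike is then sampled densely only across the spike, so a longer window costs few additional mesh points.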
1.4. Recursive piecewise data assimilation (DA)

In section 1.4.1 below, the recursive piecewise data assimilation approach is described and illustrated. In section 1.4.2, an exemplary computational implementation is described.

1.4.1. Recursive piecewise DA

Conventional data assimilation evolves a conductance model continuously from the initial estimate of the state vector provided by a user (for example based on the values in Table 2). The state vector is then interpolated over every time point of the data assimilation window. In contrast, the piecewise approach re-injects the observed membrane voltage into the model's membrane voltage variable, x1(t), every M+1 time points, while interpolating the state vector normally in between. Figure 3 illustrates an example piecewise data assimilation (DA) fit over three segments. The values of the membrane voltage variable, x1(t), 322 and the corresponding membrane voltage readings 324 are plotted as a function of time for N time points. Each segment contains M time points (M=10). The constraints of Eqs. 15 and 16 connect the neuron state (state vector values) across the M time points of each segment. At the beginning of each segment, the membrane voltage variable x1(t) is set equal to the observed membrane voltage reading (circled points 325). The other state variables x2 … xL are connected to the previous time step using Eqs. 15 and 16. At the end of each segment, the membrane voltage state variable may be discontinuous. The membrane voltage variable 322 is set to the membrane voltage reading 324 every 10 time points (M=10), as indicated by the circled points. This periodic constraint can introduce a discontinuity in the membrane voltage variable at the end of each interval of M time points. The next model point for the membrane voltage variable in the following segment is evolved from the preceding voltage reading rather than from the preceding model point.
The unobserved state variables x2, …, xL may be interpolated normally across the full assimilation window. Re-injecting data into the model (setting the membrane voltage variable to the membrane voltage reading every M time points) biases the state vector solution towards the biological solution, because at every M points the observed components of the state vector are set equal to the measured data, rather than the intermediate state depending on the initial conditions. In the limit M → N this bias vanishes from the optimisation problem, except insofar as it is reflected in the initial value of the state vector. At low values of M, the membrane voltage variable x1(t) will present approximately N/M discontinuities over the assimilation window of N time points. As discussed below, in some examples a larger starting value of M may be selected if this large number of discontinuities leads to instability in the gradient descent of the parameter search (Eqs. 12 and 13). The instabilities can be resolved by reducing the bias of the model towards the data, i.e. by starting from the larger value of M. The minimum possible segment length M corresponds to the number of steps over which the equations of motion are linearised: 2 intervals for Simpson's method or 4 for Boole's. In some examples, the segment length must always be a multiple of this minimum. A segment length M=N makes the piecewise method identical to global data assimilation. Recursive piecewise data assimilation iteratively carries out piecewise data assimilation on the same assimilation window, starting from a small segment length M = M0 and progressively incrementing M in each iteration, for example until M = N. Recursive piecewise data assimilation may be implemented through the 3 algorithms illustrated in Figures 4 to 6. The three algorithms are nested within each other (A inside B inside C): C is the top level and A the lowest level.
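The re-injection schedule and the stride rule described above can be expressed as a small helper; the function name is an assumption for illustration:

```python
def reinjection_points(N, M, stride=2):
    """Indices at which the membrane voltage variable is set to the reading.
    M must be a multiple of the linearisation stride (2 for Simpson,
    4 for Boole); roughly N/M such points occur in the window."""
    if M % stride != 0:
        raise ValueError("segment length M must be a multiple of the stride")
    return list(range(0, N, M))
```

With N=100 and M=10 this yields 10 pinned points, i.e. about N/M discontinuities; at M=N only the initial point remains pinned, recovering global data assimilation.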
Figure 4 illustrates the lowest-level algorithm, A, 444, defining a generic data assimilation solver that synchronizes the neuron model to the data. The DA solver is applied to a segment of M data points and minimizes the objective function under the constraints. Starting from a user-provided initial estimate 426 (e.g. based on Table 2), the solver repeatedly modifies 430 the biological model parameters by iteratively solving 428 Eqs. 12 and 13. The algorithm has three possible outcomes:
- If the solver can satisfy all constraints (checked at decision 432) to within the user's specified tolerances and reduce the objective function to a minimum value (for example, to below a preset tolerance), it completes successfully 434 and outputs the optimal set of parameters.
- If it cannot do so, and has exceeded a specified maximum number of iterations or has spent multiple iterations without any improvement in constraint violations (checked at decision point 436), it still outputs a parameter set but signals unsuccessful completion 438. This might indicate that the solver arrived at a local minimum of the cost function, which is addressed by the piecewise DA method (algorithm B).
- The third possibility is that the solver halts due to a numerical failure. For example, dealing with ill-defined systems (Eq. 11) or the discontinuities of Figure 3 may increase memory requirements and cause the program to abort 440. This is addressed by resetting the initial value M = M0 in piecewise DA (see Figure 6).
Figure 5 illustrates the mid-level algorithm, B, of recursive piecewise data assimilation 555. Algorithm A of Figure 4 is embedded in the "Attempt numerical solution" block 544. Algorithm B shows how the data assimilation solver of Figure 4 may be recursively called to fit segments of length M covering the assimilation window. The value of M and the initial estimate of the state vector are provided at block 542.
A central loop 544, 546, 548, 550 iteratively attempts a solution using algorithm A 544, saves the resulting biological parameters and state vector 546, determines whether M is greater than N 548 and, if not, increases M 550; otherwise the algorithm completes 551. The purpose of the central loop is to increase the length of the data segments until M=N, each time re-injecting the output parameters as input to the new parameter search. The central loop can be halted prematurely by a numerical failure 552 in the solver, in which case a new, increased initial value M0 can be provided using algorithm C, illustrated in Figure 6. The central loop, or "segment recursion loop", sets the initial values of the biological parameters and state variables to the values output by the previous iteration of M. In this way, the successive biological parameter estimates carry the "bias toward data" as M increases. Figure 6 illustrates the top-level algorithm, C, of recursive piecewise DA, which includes the resetting of the initial segment size M0 (the initial value of M). Algorithm C is optional and anticipates the possibility of numerical failure arising from excessive biasing of the model towards the data. The lowest initial value M0 is set at block 654, along with the initial values of the state vector. At block 655, recursive piecewise DA is performed according to algorithm B. At block 656, a determination is made whether a successful fit was obtained. In the event that a numerical failure halts the loop in algorithm B, algorithm C increases M0 at block 658 to the next value and begins execution of recursive piecewise DA 655 anew. This approach provides 100% convergence with a successful outcome 660 while biasing the solution towards the biologically relevant parameters. Consider the case in which M0 was at first set to K and in which the loop in algorithm B (Figure 5) explored successive values of M = K, 2K, 3K, 4K ...
If this loop fails, the outer loop in algorithm C sets M0 = 2K and has the inner loop in algorithm B explore values of M = 2K, 3K, 4K ... until success or failure. Crucially, this iteration of the outer loop of algorithm C explores different parameter and hidden-state estimates at each value of M from those explored in the first execution, because the system state (the value of the state vector) when starting at an initial value M0 = 2K is not the same as the system state at M = 2K reached in the inner loop after starting at M0 = K. Therefore, in some examples, the method may comprise setting an initial value of M (i.e. M0) and restarting the method with a larger initial value of M if the method terminates due to a numerical failure.

1.4.2. Example implementation of piecewise DA in computer code

The following description of a computer code implementation of recursive piecewise DA is merely exemplary. Other implementations may be defined without departing from the scope of the invention, which is defined by the appended claims. Key sections of the example code implementing the recursive piecewise data assimilation method are presented in the inset text below. The exemplary implementation is based on the pyDSI code (https://github.com/coin-or/Ipopt). The pyDSI code is written in Python 3 and reads information about a dynamical system in the form of a text file specifying the model equations, "dyn_sys.txt", and a file specifying the starting value and search range of the model parameters, "bounds.txt". It calls SymPy to perform symbolic differentiation of the Hessian of the Lagrangian and of the Jacobian of Eq. 11. pyDSI then generates C++ code expressing the dynamical system in a form compatible with an IPOPT solver, specifically using IPOPT's TNLP base class (https://coin-or.github.io/Ipopt/classIpopt_1_1TNLP.html). The C++ code is then compiled to create an executable.
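The nesting of algorithms A, B and C (Figures 4 to 6) can be sketched as follows; the stub solver, its failure condition, and the use of an exception to signal numerical failure are illustrative assumptions:

```python
class NumericalFailure(Exception):
    """Signals the abort path of algorithm A (block 440)."""

def algorithm_A(M, estimate):
    """Stub DA solver (Figure 4): returns an updated estimate or raises.
    A real implementation would run IPOPT on segments of length M."""
    if M < 4 and estimate == 'unstable':   # illustrative failure at small M
        raise NumericalFailure
    return f'estimate(M={M})'

def algorithm_B(M0, dM, N, estimate):
    """Segment recursion loop (Figure 5): fit with M = M0, M0+dM, ... up to N,
    re-injecting each output as the initial estimate of the next pass."""
    M = M0
    while M <= N:
        estimate = algorithm_A(M, estimate)
        M += dM
    return estimate

def algorithm_C(M0, dM, N, estimate):
    """Outer restart loop (Figure 6): on numerical failure, increase the
    initial segment size M0 to the next value and restart algorithm B."""
    while True:
        try:
            return algorithm_B(M0, dM, N, estimate)
        except NumericalFailure:
            M0 += dM
```

Starting from M0 = K = 2 with increment 2, a failure at M = 2 restarts with M0 = 4, and the inner loop then explores M = 4, 6, 8, mirroring the K, 2K, 3K ... schedule described above.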
When executed, the executable reads data from two text files: one holding the time series of current injected into the neuron and the other the membrane voltage time series recorded by the current clamp. IPOPT then solves the sparse linear system (Eq. 11) discretized at each point of the time window t1 … tN. IPOPT uses Newton's method to optimally synchronize the membrane voltage variable x1 to the observed membrane voltage. IPOPT varies the system parameters within the bounds specified by the "bounds.txt" file (for example, based on the values of Table 2). Parameters governing the behaviour of IPOPT are provided in a text file "ipopt.opt". Parameters for the problem definition, e.g. what part of the data range is to be fitted, are provided in a text file "problem_info.txt". Parameters for the recursive piecewise DA method, such as the starting segment length and its increment, are given in a text file "mss_info.txt". These files are read at the start of execution of the C++ executable. Within the C++ code:
- IPOPT TNLP class: the IPOPT TNLP is a virtual base class from which the user implements a derived class. In our case we make use of a class

    class TestMSS_NLP : public TNLP

which is a version of an original Test_NLP running non-piecewise DA.
- A single-column array x of double-precision floating point numbers stores the state. The first N vector components hold the value of the first state variable (the membrane voltage variable) across the assimilation window. The next N vector components hold the second state variable, and so on, so that the first block of LN values holds the L state variables. The next 2N block holds the current protocol and the voltage recorded over the data assimilation window. The following 2N block holds the control term u and its derivative du, and a final block of size P holds the P parameters of the model, for N(L+4)+P numbers in all. We refer to this array x as the "state" of the system in general.
- The number of steps M in the data segments of the piecewise method is coded as the integer n_mss.

The problem is defined numerically in five member functions of the class, as follows:
- The objective function is defined in eval_f.
- The gradient of the objective function is defined in eval_grad_f.
- The constraints are defined in eval_g.
- The Jacobian of the constraints is defined in eval_jac_g.
- The Hessian of the Lagrangian of the problem is defined in eval_h.

These are declared as virtual functions in IPOPT's base class and provided by the user to implement their numerical problem in a form that IPOPT can attempt to solve. The role of the five member functions in (non-piecewise) data assimilation (DA), and how they can be modified for piecewise DA, is as follows:

- The objective function: The objective function (to be minimised) contains two components. The first is the sum of squares of the error terms (data_[j] - x[j]) and the second is the sum of squares of the values of the control variable u (here x[i+L*nObs_]; in this instance the model includes seven state variables before the control variable). In the C++ code a loop over an index variable sums all the terms contributing to this scalar over a total of nObs_ points.

bool TestMSS_NLP::eval_f(Index n, const Number* x, bool new_x, Number& obj_value)
{
  obj_value = 0.;
  for (Index i = 0; i < nObs_; i++) {
    obj_value += ((data_[i] - x[i]) * (data_[i] - x[i])
                  + x[i + 7*nObs_] * x[i + 7*nObs_]) / 2;
  }
  return true;
}

The code implementing the objective function is provided as an illustration. This objective function definition does not change in the piecewise DA approach.

- The gradient of the objective function: The gradient of the objective function is an array with the same size as x. The gradient has nonzero components only with respect to those parts of the state x on which the objective explicitly depends.
// return the gradient of the objective function grad_{x} f(x)
bool TestMSS_NLP::eval_grad_f(Index n, const Number* x, bool new_x, Number* grad_f)
{
  // initialise gradient of objective function
  for (Index i = 0; i < n; i++) {
    grad_f[i] = 0.0;
  }
  // specify non-zero values of grad f
  for (Index i = 0; i < nObs_; i++) {
    grad_f[i] = (-2*data_[i] + 2*x[i]) / nObs_;
    grad_f[i + 7*nObs_] = (2*x[i + 7*nObs_]) / nObs_;
  }
  return true;
}

- The constraints: The equation of motion of each state variable is discretized (under Simpson interpolation) according to two equations, Eqs. 13 and 14. Hermite interpolation in the form of Eq. 14 is applied to the control term u to induce its smooth variation. As a result, the system has 2L+1 constraints every 2 steps of the data assimilation window. The constraint array eval_g is a sparse (2L+1)*(N-1)/2 array storing the coefficients of the constraint equations, Eqs. 13 and 14. This incidentally requires the number of data points N to be odd, so that it can be indexed by an integer number of strides of length 2.

// return the value of the constraints: g(x)
bool TestMSS_NLP::eval_g(Index n, const Number* x, bool new_x, Index m, Number* g)
{
  Index con = 0;
  for (Index j = 0; j < (nObs_ - 2); j += 2) {
    if (j % n_mss == 0 and j > 0) {
      g[con + 0*(nObs_-1)/2] = …
      g[con + 1*(nObs_-1)/2] = -0.25*h[j]*(x[j+7*nObs_]*(0.0) …
      …
      g[con + 14*(nObs_-1)/2] = …
    } else {
      g[con + 0*(nObs_-1)/2] = …
      g[con + 1*(nObs_-1)/2] = -0.25*h[j]*(x[j+7*nObs_]*(data_[j] - x[j]) …
      …
      g[con + 14*(nObs_-1)/2] = …
    }
    con++;
  }
  return true;
}

The full equations for an exemplary system are too long to reproduce here, but the nature of the alteration made to implement the piecewise DA system can be illustrated with the code fragment shown above. The constraint array is constructed by a loop over an index variable j:

for (Index j = 0; j < (nObs_ - 2); j += 2)

The constraints themselves are indexed by a variable con which is incremented by 1 in each iteration of the loop; therefore con = j/2 in each iteration of the loop.
The fifteen successive blocks of constraints are indexed by con plus an integer multiple of (N-1)/2. The modification of the constraints occurs through a conditional which detects whether j is a multiple of the segment size M, here written n_mss:

if (j % n_mss == 0 and j > 0)

When this conditional is false, the constraint equations are written as usual. When the conditional is true, the constraint equations replace x[j] with data_[j]. This implements the "restarting" of the evolution of x[j+1] and x[j+2] from a data value rather than a model value. The replacement is handled in the Python code by a string substitution, illustrated here:

strings2.append( foo.replace( "x[j]", "data_[j]" ).replace( "data_[j] - data_[j]", "0.0" ) )

which writes a version of the equation in which x[j] is replaced by data_[j]. Since j is always even in this loop construction, n_mss must also be an even number. An effect of this alteration is that, when the conditional is true, the constraints are no longer functions of x[j]. This has a consequence for the Jacobian of the constraints, as discussed below.

Methods and apparatus of the present disclosure can infer changes in ion channel conductances, kinetics and/or voltage thresholds from changes in the electrical activity of a cell. By linking functional changes at the cell level to changes in ion channel proteins, the disclosed methods and apparatus can identify gene mutations responsible for decay in cognitive functions.

- The Jacobian of the constraints: The modifications made to this function for piecewise DA are twofold. Firstly, each loop must be modified in the same way as eval_g to include a conditional based on n_mss. When the conditional is true, x[j] is replaced with data_[j] in the analytical expression of the Jacobian output by symbolic differentiation. This alteration is handled using the same string manipulation as above.
Secondly, when the conditional is true, entries corresponding to differentiation by x[j] must be zero, as these expressions are no longer functions of x[j]. This avoids the IPOPT solver attempting to satisfy a constraint that does not depend on x[j] by varying x[j]. This is implemented within the code using a Boolean filter array with one entry for each entry in the Jacobian. This filter array is set to "true" in the cases where the constraint index con, obtained from the row index by modular arithmetic, and the column index j satisfy the con = j/2 relation, and the MSS conditional is also true. Entries where the filter array is true have their values set to zero before eval_jac_g returns the value array. The effect of this filtering is that constraints which are explicitly functions of x[j], x[j+1] and x[j+2] are differentiated by all three variables, while constraints which have become functions of data_[j], x[j+1] and x[j+2] are differentiated only by x[j+1] and x[j+2]. A C++ code fragment from the formation of the Jacobian illustrates how entries requiring this correction are identified based on their row and column indices:

…
//d/dx[j] becomes zero when data_[j] replaces x[j]
correctJacobian = new bool[nele_jac]; //array now ready
int modulus = (nObs_-1)/2; //to obtain con from iRow
int thisCon;
for ( Index inzz = 0 ; inzz < nele_jac ; inzz++){
  correctJacobian[inzz] = false;
  if (jCol[inzz] >= nObs_ or jCol[inzz] == 0) continue; //not a d/dx[j] entry, or j=0 case
  thisCon = iRow[inzz] % modulus;
  if ( jCol[inzz] != 2*thisCon ) continue; //not d/dx[j]
  if ( jCol[inzz] % n_mss != 0 ) continue; //not an MSS point
  correctJacobian[inzz] = true; //a case for treatment
…
for ( Index inzz = 0 ; inzz < nele_jac ; inzz++ ){
  if ( correctJacobian[inzz] ){
    values[inzz] = 0.0;
  }
}
…

The Hessian of the Lagrangian:

IPOPT also makes use of the Hessian of the Lagrangian of the problem.
This entity, as discussed in the IPOPT documentation, is a matrix of second derivatives of both the objective function and the constraints. Like the Jacobian of the constraints, it is specified in a sparse matrix format by arrays recording row, column and value entries. Like the Jacobian, the Hessian is constructed by a series of loops; the Python code writes a loop in the C++ code for each instance in which a second derivative could be nonzero. Thus the same loop modification using a conditional is made, so that data_[j] is used instead of x[j] when appropriate, and so that terms whose derivative is now zero are set to zero. The following code fragment illustrates how relevant terms in the Hessian are incremented, unless a derivative with respect to x[j] has become zero, in which case the term is not incremented; an effect achieved by commenting out the addition line.

…
for (Index j=0; j<(nObs_-2); j+=2){
  inz = 0*nObs_;
  //in this case myInt is zero, so correct the Hessian for d/dx[j]
  if ( j % n_mss == 0 and j > 0 ){
    //values[inz+j] += lambda[con+12*(nObs_-1)/2]*…
    inz++;
    //values[inz+j] += lambda[con+12*(nObs_-1)/2]*…
    inz++;
    //values[inz+j] += lambda[con+12*(nObs_-1)/2]*…
    inz++;
  } else {
    values[inz+j] += lambda[con+12*(nObs_-1)/2]*…
    inz++;
    values[inz+j] += lambda[con+12*(nObs_-1)/2]*…
    inz++;
    values[inz+j] += lambda[con+12*(nObs_-1)/2]*…
    inz++;
  }
}…

Iterating over segment lengths to increase the accuracy of the biological solution:

The principle of iterating over gradually increasing segment lengths M has been described above (Figure 5, Algorithm B). The purpose of this approach is to reconcile two requirements:

- firstly, to identify, among all local minima of the cost function, the minimum that corresponds to the biological solution. A priori this is likely to be the global minimum, but in ill-defined problems this may be a local minimum.
- secondly, to track the solution in the vicinity of the biological minimum as the data bias is progressively removed (the biological parameter solution approaches the mathematical solution of the problem).

The iteration is achieved by a loop in the main function of the C++ code:

…
//loop, increasing n_mss every time (done by finalize)
int nLoops = 0;
while ( mynlp->n_mss < mynlp->nObsVisible ){
  nLoops += 1;
  cout << "Attempting loop " << nLoops << endl;
  cout << "Current n_mss " << mynlp->n_mss << endl;
  status = app->OptimizeTNLP(mynlp);
  if (status == Solve_Succeeded) {
    printf("\n\n*** The problem solved!\n");
  } else {
    printf("\n\n*** The problem FAILED!\n");
  }
}
…

The TNLP object is created once and then called repeatedly. The initial segment value is M=2. In each successive iteration, the state vector x is passed on to preserve the memory of the biological minimum, except for the control variable u (and du), which is reset to zero. The observed voltage state variable is set to the values in the data file, data_[i]. These alterations are handled by modifications to the constructor of our TNLP object and to its member functions get_starting_point and finalize_solution. These member functions are called automatically when IPOPT begins and ends an attempt at a numerical solution. The constructor reads user-provided starting values of the segment length M (an integer) and a multiplier, which should be a real number greater than 1, from a text file, mss_info.txt. It also creates an array saved_state of size equal to the variable array x. Starting values for the state variables and parameters, obtained from a bounds.txt file, are written into this saved_state array.

- The get_starting_point function is called once at the beginning of each biological parameter search. This function copies values from saved_state into the variable array x, initialising it. It then sets the first N entries of x (the voltage state (membrane voltage variable)) equal to the corresponding values in data_[i].
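The loop comment above notes that n_mss is increased between solver calls (by finalize_solution). A minimal sketch of such an update rule, assuming only the stated requirements that M stays even and strictly increases; the function name and its exact form are assumptions, not the disclosure's code:

```cpp
#include <cassert>

// Sketch of the segment-length update performed between solver calls:
// multiply by a user-supplied factor, cast to int, force the result even
// (j is always even in the constraint loop), and guarantee progress.
int nextSegmentLength(int n_mss, double multiplier) {
    int next = static_cast<int>(n_mss * multiplier);
    if (next % 2 != 0) next += 1;        // M must stay even
    if (next <= n_mss) next = n_mss + 2; // increase by at least 2
    return next;
}
```

With a multiplier only slightly greater than 1 (e.g. 1.001), the cast leaves M unchanged, so the minimum-increment rule fires every time and produces a sweep over successive even values 2, 4, 6, …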
- The finalize_solution function outputs the parameters from the state vector solution x obtained by IPOPT in the format of the input bounds.txt file. Modifications made to implement piecewise DA update the value of n_mss and copy x into saved_state. These values are in turn passed back into the x array at the start of the next iteration, when get_starting_point is called.

- Parameter values which have hit the lower or upper bound of their search range are reset to the midpoint of the search range before the next iteration.

- In the exemplary implementation, M is increased from one iteration to the next by multiplying the current value by a multiplier and casting the result as an integer. Two checks are then implemented. Firstly, because M must be an even number, an odd value is incremented by 1 to make it even. Secondly, if the new value is not greater than the old value, the new value becomes the old value incremented by 2. Thus M increases by at least 2 in every cycle. A "sweep" over successive even values of M can be carried out by setting the multiplier to a value only slightly greater than 1 (e.g. 1.001), so that the increment behaviour is triggered.

- The output parameters.dat file records the values of n_mss and the objective function, as well as the parameter values themselves, at the end of each iteration, in order to evaluate the convergence of the parameters.

2. Numerical Results - Demonstration of convergence probability, parameter accuracy and identifiability criteria

This section describes numerical results based on an example recursive piecewise DA model and computer program according to an embodiment of the disclosure. The results illustrate how the method handles the choice of the initial piecewise interval M to achieve convergence with 100% probability. The results also illustrate that once the biological solution has been selected at smaller values of M, there is a rapid convergence over successive iterations as M approaches N.
The results also demonstrate that piecewise data assimilation estimates parameters with very high accuracy. Parameters of the representative RVLM neuron model are estimated within 0.1% of the true parameter values used to synthesize the data. This high accuracy is maintained over 9 different assimilation windows. This demonstrates the fulfilment of identifiability criteria, since the biological parameters are uniquely determined by the current protocol. The results also demonstrate that the shape of the current waveform does not affect the accuracy of the parameters, as long as the current protocol probes the full dynamic range of the neuron. Finally, the results show that the estimated parameters are independent of the choice of starting conditions of the biological parameters and the state variables (the initial estimate of the state vector). The results demonstrate that piecewise data assimilation generates a unique set of biological parameter solutions, which are both accurate and physical.

Multiple criteria are used to assess convergence of the biological parameter estimation. These include:

(i) the final value of the objective function;

(ii) the difference between the observed and fitted membrane voltage over the assimilation window;

(iii) the value of the control variable u(t) across the window: u(t) vanishes when convergence is successful;

(iv) in the case of "twin" experiments, where the true parameter values used to construct the model data are known, the closeness of the estimated parameters to the true parameters.

These metrics are used below to demonstrate the convergence of recursive piecewise DA.

2.1. Example of successful convergence from a single starting point M0=2 (algorithm B)

Figure 7 illustrates the convergence of a recursive piecewise DA method according to an embodiment.
In this example, PyDSI21 is executed starting from M=2, with initial gate variable and parameter values halfway between the lower and upper bounds of the search ranges shown in Table 2. The value of the objective function (cost function) 762 is shown as M approaches N = 10,001 (top panel) and over the first few iterations M = 2, … (bottom panel). Figure 7 illustrates convergence of the piecewise DA parameter search as M: 2 → N. The global minimum is reached at M = 8 and the value of the cost function rapidly stabilizes to less than 10^-6.

Figure 8 illustrates vanishing values of the control variable u(t) 864 and the fitting error (mismatch between model and observed membrane voltage) 866 over the assimilation window at the end of the fitting process of Figure 7, demonstrating the accuracy of the model.

2.2. Successful convergence from multiple starting points M0=2,4,6 (algorithm C)

Figure 9 illustrates convergence of recursive piecewise data assimilation starting from 3 successive values of M0. A first sequence of iterations 968 starts at M0 = 2 and halts at M = 8. A second sequence of iterations 970 restarts at M0 = 4 and again halts before completion. The third and final sequence of iterations 972 is initiated at M0 = 6 and successfully runs until completion at M = N. In this example, reinjection of the membrane voltage data into the model (i.e. setting the membrane voltage variable to the corresponding membrane voltage reading every M time steps) generates numerical instability at M0 = 2. Therefore, Algorithm C can be implemented to restart iterations at larger values of M0 = 4, 6, to reduce the bias towards the biological solution while letting the mathematical optimisation proceed as naturally as possible.

2.3. Accuracy of estimated parameters (<0.1%) and fulfilment of identifiability criteria

Figure 10 illustrates the model data series that was used to investigate the dependence of the biological parameters determined by piecewise DA on the waveform of the current protocol.
Such analysis enables the characterization of pharmacological ion channel inhibition, as described subsequently in section 3 below. The data series includes a current protocol 1076 (bottom panel) and corresponding membrane voltage readings 1074 (top panel). Model data were generated by forward integrating the RVLM model with the current protocol 1076. The sampling rate was 50 kHz and the epoch had 50,000 data points in total. Some of the initial model parameters used to generate the model data series are shown in Table 4 (right hand column, "True"). Within this epoch, 9 assimilation windows were defined, each with N=10,001 data points and shifted by T=0, 5000, 10,000, …, 40,000 data points from the start of the epoch.

Table 4 shows representative parameter solutions of the RVLM model output by the piecewise DA method. Each column gives the biological parameters estimated from one of the nine assimilation windows, together with the reference parameters (right column). The biological parameters are representative of conductances, activation voltages and widths of open-to-close transitions, recovery time constants (which are particularly hard to fit), and reversal potentials. The estimated biological parameters in the first column deviate by less than 0.1% from the true values, showing the very high accuracy of the method. The same accuracy is observed independently of the assimilation window. Hence piecewise DA estimates and uniquely identifies the true parameters independently of the waveform of the current protocol. This demonstrates that the identifiability condition is fulfilled across a full epoch, since the accuracy of the parameters remains outstanding (<0.1%) across all windows.

Table 4: Parameters estimated from assimilation windows of constant duration (N=10,001) offset by T=0, 5000, …, 40,000 data points from the origin. The maximum deviation of the estimated parameters from the true parameters (right column) is 0.1%.

2.4.
100% convergence probability, independent of initial conditions

Table 5 illustrates a study of the dependence of the biological parameter estimates on the initial estimate of the state vector (state variables and biological parameter values) at the start of piecewise data assimilation. The table shows results of the piecewise DA method for a statistical sample of 20 initial biological parameter estimates. The model used was the RVLM neuron model. The first set of 10 assimilation runs (Rand1, …, Rand10) initialized the biological parameters at random within the search interval [p_L, p_U]. The second set of 10 assimilation runs had parameters positioned uniformly at p_L + (p_U - p_L)i/10, where i = 1, …, 10 (Uni1, …, Uni10). The final series of assimilation runs used random parameters over different assimilation windows (Skip05, …, Skip40), as described above in relation to Table 4. The largest initial value of M0 corresponds to the one that led to convergence (via Algorithm C). The value of M at convergence measures the time the parameter search has taken to converge: cost function < 10^-6, parameter error < 0.1%. Table 5 illustrates a convergence rate of 100%.
Table 5: Searches required for the parameter search to converge to the true parameter solutions when the initial conditions are varied.

In Table 5, convergence is defined as a cost function below 10^-6 and parameters within 0.5% of the true parameter values. The table shows that the rate of convergence depends on the choice of initial conditions (the initial estimate of the state vector). However, even the slowest converging process converges well before the final value of M = N = 10,001 is reached. This demonstrates that all processes comfortably converge to the correct solution independently of the choice of initial conditions. This 100% probability of convergence, uniqueness of solutions and <0.1% accuracy on all parameters is an outstanding achievement of piecewise data assimilation with respect to the literature (Taylor et al., 2020; Nogaret et al., 2016; and C. D. Meliza, M. Kostuk, H. Huang, A. Nogaret, D. Margoliash, H. D. I. Abarbanel, "Estimating parameters and predicting membrane voltages with conductance-based neuron models", Biological Cybernetics, vol. 108, pp. 495-516 (2014)), where the convergence rate was below 60%, and solutions were many-fold and dependent on initial conditions.

Figure 11 shows an example of the convergence of a biological parameter (the epsilon_n intra-parameter 1178) as a function of iterations on M. This example was chosen because the time constants of conductance models are the most difficult variables to constrain and determine. This is because the (in)activation time constants enter as coefficients of an exponential time dependence of the gate kinetics. The most difficult parameter to constrain is the epsilon_n intra-parameter, which determines the voltage dependence of the time constant. In known methods, estimation of this parameter nearly always fails, as the parameter search systematically hits the boundaries of the search interval. Remarkably, Figure 11 shows that recursive piecewise data assimilation constrains the epsilon_n time-constant parameter extremely well.
The value is rapidly determined to less than 0.1% early in the iterations on M, and the accuracy further increases as M → 10,001. The true value used to generate the model data was 4.314 ms.

2.5. Longer assimilation windows increase accuracy on parameters

- Increasing the data assimilation window from 10,001 to 15,001 data points produces essentially identical results in terms of biological parameter estimates.

- Shorter assimilation windows can still give excellent accuracy on biological parameters if at least five or six action potentials are included in the time series data. An N=5,001 assimilation window spanning two or three action potentials can estimate channel conductances and reversal potentials with values similar to the original; however, inaccuracies may arise in estimates of recovery time constants. An assimilation window of order 1-1.5 × 10^4 points may be sufficient for obtaining maximum parameter accuracy.

- Long assimilation windows can increase the number of spurious local minima in the cost function by introducing redundant information in Eq.11 and making the Jacobian singular. However, implementing the optional adaptive step size described above in section 1.3 can address this issue.

The result is a piecewise data assimilation method that extracts the ionic conductances, gate recovery times and activation thresholds across the full complement of ion channels by analysing the membrane voltage oscillations of an electrically active cell. The parameter solution is single-valued and sufficiently accurate that it can be used in a model to detect disease-induced mutations in ion channel proteins or pharmacologically induced changes, as discussed below.

3. Application to the identification of pharmacologically modified ion channels

The computer-implemented method has a number of practical applications.
For example, the updated state vector that is provided as a result of the method may be used to identify whether the one or more ion channels associated with the state vector are affected by disease, based on a comparison between the updated state vector and an expected state vector. In another example, the effect of a medicament on the cell may be simulated using the updated state vector. The results of the use of the method in such applications are discussed in further detail below with reference to Figures 12 to 14 to provide proof of concept.

A contribution of the invention relates to the ability to provide a state vector for use in the models with sufficient accuracy to describe the cell. Once the accurate state vector is known, conventional simulation techniques can be applied to it to model changes due to drugs or disease state. In this way, data assimilation may be used in an analytical method for predicting pharmacologically induced changes in specific ion channel currents in an excitable cell. Predictions can be made by estimating the changes in total ionic charge transferred through the cell membrane under a common stimulation protocol.

As described above, the method obtains the parameters of a conductance-based neuron model that synchronise the model to observed membrane voltage oscillations. These parameters include the ionic conductances, gate recovery times and voltage thresholds of individual ion channels. Models configured with optimal parameters have been found to accurately predict the state of the neuron. In one example, the biological parameters are determined using the method (for example using the neuron equations (e.g. Eq.7 and 8)). These equations are then integrated using a 5th-order adaptive step size Runge-Kutta method, or the odeint() routine in Python, to predict the exact response of the neuron to a current stimulus.
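The forward integration step can be sketched as follows. This is a minimal illustration only: the disclosure uses a 5th-order adaptive Runge-Kutta method or odeint(), whereas the sketch uses a fixed-step RK4, and a single leak conductance dV/dt = (-gL(V - EL) + I)/C stands in for the full neuron equations; all names are assumptions.

```cpp
#include <cassert>
#include <cmath>

// Right-hand side of a single-compartment leak model standing in for the
// completed conductance-based neuron model.
double dVdt(double V, double gL, double EL, double I, double C) {
    return (-gL * (V - EL) + I) / C;
}

// Fixed-step 4th-order Runge-Kutta forward integration of the model,
// returning the membrane voltage after the given number of steps.
double integrate(double V0, double gL, double EL, double I, double C,
                 double dt, int steps) {
    double V = V0;
    for (int n = 0; n < steps; ++n) {
        double k1 = dVdt(V, gL, EL, I, C);
        double k2 = dVdt(V + 0.5 * dt * k1, gL, EL, I, C);
        double k3 = dVdt(V + 0.5 * dt * k2, gL, EL, I, C);
        double k4 = dVdt(V + dt * k3, gL, EL, I, C);
        V += dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4);
    }
    return V;
}
```

With I = 0 the toy model has the analytic solution V(t) = EL + (V0 - EL)e^(-gL t / C), which provides a direct accuracy check on the integrator before the estimated parameters are substituted in.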
Save for the determination of the accurate biological parameters using the methods described herein, such techniques are known in the art and the skilled person may readily use alternatives. Any change in neuron oscillations due to the application of a medicament or to a disease state may cause very visible alterations when it is compared to the output of the digital twin (the completed neuron model) integrating the same current protocol.

In one example, the dynamics of the individual ionic currents which underlie the electrical function of a hippocampal CA1 neuron were reconstructed. This had the advantage of integrating the functional overlap of individual parameters into the calculation of these currents. As a result, the uncertainty on the reconstructed currents was reduced by 55% on average relative to the uncertainty on the parameter estimates. The method was validated by estimating the amount of charge transferred through the neuron membrane before and after various ion channel inhibitory drugs were applied; good agreement was found between the predicted reduction in charge transferred per ion channel and the selectivity and potency of drugs for the BK, SK, A-type K+, and HCN currents. The selectivity of new compounds could therefore also be assessed in this way: because all ionic currents are obtained simultaneously, relative changes can be compared over all modelled channels concurrently.

This approach provides a quantitative method linking changes in the electrical function of a cell to changes in specific ion channel currents. By elucidating the effect of both drug action and disease on the range of modelled ion channels, the method may enable a faster functional evaluation of drug binding, as well as revealing new targets and treatment strategies in diseases involving ion channel dysfunction.

Figure 12 illustrates various profiles comparing recorded and predicted data regarding the medicament iberiotoxin (IbTX), an ion channel inhibitor.
Figure 12 shows the total predicted charge across the 2000 ms prediction window for assimilations from pre-drug data (n=15), and that in 100 nM IbTX (n=15), for all channels (Figure 12A) and for the BK ion channel only (Figure 12D). The data are 'spike normalised' by dividing by the total number of spikes across the prediction window. Horizontal lines represent median values. Asterisks represent multiplicity-adjusted q values from multiple Mann-Whitney U tests using a False Discovery Rate approach (Q) of 1%. It is apparent from comparing the predicted pre- and post-drug values that the drug does not have a visible effect on the NaT, K, A-type K+ and SK ion channels shown in Figure 12A, but does have a noticeable effect on the BK ion channel shown in Figures 12A and 12D. Figure 12B shows an example action potential in recorded current clamp data before and during IbTX application. Figure 12C shows averaged predicted waveforms of action potentials from all assimilations from pre-drug data, and those in IbTX. There is a qualitative similarity between the predicted and recorded values. Figure 12E is a magnified view of Figure 12A focussing on the drop in ionic current through the BK channel before and after the drug was applied, and represents the mean predicted BK current under the same two conditions as Figures 12B and 12C.

Figure 13 illustrates various profiles comparing recorded and predicted data for cells affected by apamin. The figure shows the total predicted charge across the 2000 ms prediction window for assimilations from pre-drug data (n=18), and that in 150 nM apamin (n=18), for all channels (Figure 13A) and for the SK ion channel only (Figure 13D). As in the corresponding profiles of Figure 12, the data are 'spike normalised' by dividing by the total number of spikes across the prediction window. Horizontal lines represent median values. Asterisks represent multiplicity-adjusted q values from multiple Mann-Whitney U tests using a False Discovery Rate approach (Q) of 1%.
It is apparent from comparing the predicted pre- and post-drug values that the drug does not have a visible effect on the NaT, K, A-type K+, BK and Ca ion channels shown in Figure 13A, but does have a noticeable effect on the SK ion channel shown in Figures 13A and 13D. Figures 13B and 13E show example action potentials in recorded current clamp data before and during apamin application. Figure 13C shows averaged predicted waveforms of action potentials from all assimilations from pre-drug data, and those in apamin. Figure 13F represents the mean predicted SK current under these same two conditions.

Figure 14 illustrates various profiles comparing recorded and predicted data for cells affected by 4-aminopyridine (4-AP). The figure shows the total predicted charge across the 2000 ms prediction window for assimilations from pre-drug data (n=19), and that in 300 μM 4-AP (n=18), for all channels (Figure 14A) and for the A-type K+ ion channel only (Figure 14D). As in the corresponding profiles of Figures 12 and 13, the data are 'spike normalised' by dividing by the total number of spikes across the prediction window. Horizontal lines represent median values. Asterisks represent multiplicity-adjusted q values from multiple Mann-Whitney U tests using a False Discovery Rate approach (Q) of 1%. Figures 14B and 14E show example action potentials in the recorded current clamp data before and during 4-AP application. Figure 14C shows averaged predicted waveforms of action potentials from all assimilations from pre-drug data, and those in 4-AP. Figure 14F represents the mean predicted A-type current under these same two conditions.
Table 5: A comparison of predicted inhibitions with literature values. For each channel, the percentage inhibition was read from the inhibition curve at the listed drug concentration, and the citation is given. The results for comparison are listed, with both the median and mean predicted inhibition reported. (+)/(++)/(+++) indicates the relative amount of expression of the SK and A-type K+ channel types in CA1 neurons. Using the median predicted inhibition, the drug concentration corresponding to that degree of inhibition is inferred back, where possible, from the inhibition curve in each study.

Figure 15 illustrates a schematic diagram providing an overview of stages according to some aspects of the invention and their implementation. In this example, a proof of concept for inferring the potency and selectivity of ion channel blocker drugs from the electrical waveforms of CA1 neurons is provided. In an experimental phase 1580, which may be performed in vitro or ex vivo, membrane voltage oscillations of a CA1 neuron 1582 are induced by a current protocol 1586 designed to optimally constrain ion channel parameters 1584. CA1 neuron recordings were taken before and repeated 1590 after blocking 1588 the BK channel with iberiotoxin. In an in silico phase 1590, the assimilation method infers the biological parameters by synchronizing the neuron equations to the membrane voltage oscillations, as described above. The in silico phase comprises setting an initial guess 1591 at the biological parameter values and performing an optimization procedure 1592 to provide an updated state vector solution 1593 that can be used to define the neuron to generate predictions 1594 of its behaviour. In a statistical analysis phase 1595, the ionic charge transferred by each ion channel per action potential is calculated 1597 from the estimated parameters for the ionic charge transferred in a cell before 1598 and after 1599 applying a medicament (e.g.
iberiotoxin), or applying a modification due to a disease state. The method may be used to infer changes across a plurality of, or even the full complement of, ion channels in one go. Only the Na+ and K+ channels are shown for clarity in this example.

Figure 16 illustrates a schematic block diagram of a system 1600 which may be used to implement the methods described previously. The system 1600 comprises one or more processors 1602 in communication with memory 1604. The memory 1604 is an example of a computer readable storage medium. The one or more processors 1602 are also in communication with one or more input devices 1606 and one or more output devices 1608. The various components of the system 1600 may be implemented using generic means for computing known in the art. For example, the input devices 1606 may comprise a keyboard or mouse, and the output devices 1608 may comprise a monitor or display and an audio output device such as a speaker. In addition, the system 1600 comprises data logging hardware, which may be conventional, in order to capture measured voltage readings at least partially under the control of the one or more processors 1602. The system may be used to determine the values of biological parameters of a cell based on membrane voltage readings from a cell ex vivo using the data logging hardware.

Figure 17 discloses a computer readable storage medium 1700. The computer readable storage medium may be used to store computer program code configured to cause a processor to perform a method disclosed herein. In addition or alternatively, the computer readable storage medium may be used to store the data structure defining a state vector including values of biological parameters of a cell determined by the computer-implemented method.