Stochastic Biomathematical Models: with Applications to Neuronal Modeling


Combining ideas of periodic and stochastic averaging introduced previously, we present here theoretical results concerning multiscale SDEs driven by an external time-periodic input. The following assumption is made to ensure existence and uniqueness of a strong solution to system 4.
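As a rough, hypothetical illustration of such a slow-fast setting (not taken from the text), the following Python sketch integrates a toy two-dimensional system with Euler-Maruyama: the fast variable relaxes on a time-scale eps and is driven by a sinusoidal input and noise, while the slow variable is driven by the fast one. All parameter values and the coupling are invented for the example.

```python
# Minimal sketch (invented parameters): Euler-Maruyama simulation of a
# slow-fast SDE driven by a time-periodic input.
import numpy as np

def simulate(eps=0.01, T=2.0, dt=1e-4, sigma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    w = 0.0                          # slow variable
    v = 0.0                          # fast variable
    path_w = np.empty(n)
    for k in range(n):
        t = k * dt
        u = np.sin(2 * np.pi * t)    # time-periodic external input
        # fast dynamics: linear relaxation driven by the input and noise
        v += (-v + w + u) * dt / eps \
             + sigma * np.sqrt(dt / eps) * rng.standard_normal()
        # slow dynamics: hypothetical coupling to the fast variable
        w += (-w + v * v) * dt
        path_w[k] = w
    return path_w

print(simulate()[-1])
```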

Definition 2. We are now able to present our main mathematical result.

We refer to [6] for more details about the full mathematical proof of this result. Moreover, in terms of applications, this parameter has a relatively natural interpretation as the ratio of time-scales between intrinsic neuronal activity and typical stimulus time-scales in a given situation. The question of going beyond the zeroth-order limit is a difficult one which may deserve further analysis.

A particular role is played by the frozen periodically-forced SDE 6. By rescaling the frozen process 6, one deduces the following scaling relationships. However, this extension requires non-trivial technical improvements of [5] which are beyond the scope of this paper. We are interested in the large-time behavior of the law of v_t, which is a time-inhomogeneous Ornstein-Uhlenbeck process. Hence, in the linear case, the averaged vector field of equation 7 becomes.
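To make the large-time behavior concrete, here is a minimal numerical check on a hypothetical instance dv = (-l v + A sin(omega t)) dt + sigma dB of such a time-inhomogeneous Ornstein-Uhlenbeck process: the empirical mean of many simulated paths is compared with the explicit asymptotic periodic mean A (l sin(omega t) - omega cos(omega t)) / (l^2 + omega^2), and the empirical variance with sigma^2 / (2 l). All numerical values are assumptions made for the example.

```python
import numpy as np

l, A, omega, sigma = 1.0, 1.0, 2 * np.pi, 0.3
dt, T, n_paths = 1e-3, 20.0, 2000
rng = np.random.default_rng(1)
v = np.zeros(n_paths)
t = 0.0
for _ in range(int(T / dt)):
    u = A * np.sin(omega * t)                       # periodic forcing
    v += (-l * v + u) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    t += dt

mean_th = A * (l * np.sin(omega * T) - omega * np.cos(omega * T)) / (l**2 + omega**2)
print(v.mean(), mean_th)            # empirical vs. asymptotic periodic mean
print(v.var(), sigma**2 / (2 * l))  # empirical vs. asymptotic variance
```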

In the case where G is quadratic in v, this remark implies that one can perform the integrals over time and over R^p independently in formula 10, noting that the cross term has zero average. In this case, the contributions from the periodic input and from the noise appear in the averaged vector field in an additive way. More precisely, we have the following three regimes:

Such a system may have exponentially growing trajectories. To make this argument more rigorous, we suggest the following definition. Remark 2.


Moreover, if one considers non-linear models for the variable v, then the Gaussian property may be lost; however, adding a sigmoidal non-linearity generally has the effect of bounding the dynamics, making these moment assumptions reasonable to verify in most models of interest.

Property 2. Here, we show that it applies to the system under consideration. Then, as a special case of 10, we obtain the following averaged system. First, we design a generic learning model and show that one can formally define an averaged system via equation 7. However, going beyond the mere definition of the averaged system seems very difficult, and we only manage to obtain explicit results for simple systems in which the fast activity dynamics is linear.

In the last three subsections, we carry the analysis further for three examples of increasing complexity. In the following, we always assume that the initial connectivity is 0. We now introduce a large class of stochastic neuronal networks with learning models. Each neuron variable v_i is assumed to follow an SDE. The input u_i to neuron i has two main components: the external input u_i^ext and the input coming from other neurons in the network, u_i^syn. The latter is a priori a complex combination of post-synaptic potentials coming from many other neurons.

Note that neurons can be connected to themselves, i.e., self-connections (diagonal entries of the connectivity) are allowed. Thus, we can write the synaptic input in terms of the other neurons' activities passed through transfer functions; in practical cases, these are often taken to be sigmoidal. Together with a slow generic learning rule, this leads to defining a stochastic learning model as the following system of SDEs. This model is a non-autonomous, stochastic, non-linear slow-fast system.
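A minimal sketch of such a stochastic learning model is given below, with invented parameters and a tanh non-linearity standing in for the sigmoid: the fast activity is integrated on a small time-scale eps while the connectivity follows a slow Hebbian-type rule with linear decay (the learning rule is left generic in the text, so this particular rule is only an assumption for the example).

```python
import numpy as np

def simulate_learning(n=5, eps=5e-3, T=2.0, dt=1e-4, l=1.0, kappa=1.0,
                      sigma=0.1, seed=2):
    rng = np.random.default_rng(seed)
    W = np.zeros((n, n))              # slow connectivity (initially 0)
    v = np.zeros(n)                   # fast neuronal activity
    S = np.tanh                       # sigmoidal transfer function
    for k in range(int(T / dt)):
        t = k * dt
        u_ext = np.sin(2 * np.pi * t + np.arange(n))   # external input
        u_syn = W @ S(v)                                # synaptic input
        # fast activity: leaky integration of the inputs, with noise
        v += (-l * v + u_syn + u_ext) * dt / eps \
             + sigma * np.sqrt(dt / eps) * rng.standard_normal(n)
        # slow Hebbian-type learning with a linear decay term
        W += (-kappa * W + np.outer(S(v), S(v))) * dt
    return W

print(np.round(simulate_learning(), 3))
```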

However, one should keep in mind that these assumptions are only sufficient, and that the double averaging principle may also work in systems which do not readily satisfy them, as we will show in Section 3. Purely time-delayed systems, however, do not enter the scope of this theory, although it might be possible to derive an analogous averaging method in that framework. We will also analyze the resulting averaged system, describing the slow dynamics of the connectivity matrix in the limit of perfect time-scale separation, and, in particular, study the convergence of this averaged system to an equilibrium point.

The first question that arises concerns the well-posedness of the system: on what time interval are the solutions of system 12 defined? Do they explode in finite time? At first sight, it seems there may be a runaway of the solution if the largest real part among the eigenvalues of W grows larger than l.

In fact, it turns out this scenario can be avoided if the following assumption, linking the parameters of the system, is satisfied. Assumption 3. It corresponds to making sure the external input remains suitably controlled by the other parameters of the system. This is summarized in the following theorem. Theorem 3. In the following, we focus on the averaged system described above. Its right-hand side is made of three terms: a linear and homogeneous decay, a correlation term, and a noise term. The last two terms are made explicit in the following. The correlation term corresponds to the auto-correlation of the neuronal activity.
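The three-term structure of the averaged right-hand side can be mimicked numerically. In the sketch below, the correlation and noise terms are hypothetical placeholders (a leading-order constant built from an input correlation matrix and an isotropic term), not the explicit expressions derived here; it only illustrates how the averaged connectivity dynamics can be integrated once those terms are available.

```python
import numpy as np

# Hypothetical stand-ins for the correlation and noise terms of the averaged
# right-hand side; in the text these are explicit functionals of W and of
# the input statistics.
def correlation_term(W, C_input, l=1.0):
    # leading-order (weakly connected) placeholder: input correlations
    # filtered by the intrinsic leak
    return C_input / (2 * l)

def noise_term(W, sigma=0.1, l=1.0):
    return (sigma**2 / (2 * l)) * np.eye(W.shape[0])

def integrate_averaged(C_input, kappa=1.0, T=50.0, dt=1e-2):
    n = C_input.shape[0]
    W = np.zeros((n, n))
    for _ in range(int(T / dt)):
        W += (-kappa * W + correlation_term(W, C_input) + noise_term(W)) * dt
    return W

C_input = np.array([[1.0, 0.2], [0.2, 1.0]])   # example input correlation
print(integrate_averaged(C_input))
```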

One can in fact perform an expansion of the correlation term with respect to a small parameter, corresponding to a weakly connected expansion. Most terms vanish if the connectivity W is small compared to the strength l of the intrinsic decay dynamics of the neurons. It is common knowledge, see [17] for instance, that this term gathers information about the correlation of the inputs. In fact, without the assumption of a slow input, lagged correlations of the input appear in the averaged system.

Before giving the expression of these temporal correlations, we need to introduce some notation. This is the correlation matrix of the inputs filtered by two different functions. It is easy to show that this is equivalent to computing the cross-correlation of the inputs with the inputs filtered by another function. This notation is deliberately similar to that of the transpose operator used in the proofs. We have not found a way to make these terms explicit; therefore, the following remarks are simply based on numerical illustrations.
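As an illustration of this notion, the following sketch computes the correlation matrix of a periodic input filtered by two different kernels; the choice of exponential filters, the input, and all parameters are assumptions made for the example, not the filters used in the proofs.

```python
import numpy as np

def filtered_correlation(u, dt, tau_a, tau_b):
    """Correlation matrix of the inputs filtered by two exponential kernels.

    u : array of shape (T, p), the input trajectory over one period
    tau_a, tau_b : time constants of the two (hypothetical) filters
    """
    T, p = u.shape
    t = np.arange(T) * dt
    ka = np.exp(-t / tau_a) / tau_a        # first filter
    kb = np.exp(-t / tau_b) / tau_b        # second filter
    ua = np.array([np.convolve(u[:, i], ka)[:T] * dt for i in range(p)]).T
    ub = np.array([np.convolve(u[:, i], kb)[:T] * dt for i in range(p)]).T
    return ua.T @ ub * dt / (T * dt)       # time average over the period

dt = 1e-2
t = np.arange(0, 10, dt)
u = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)], axis=1)
print(filtered_correlation(u, dt, tau_a=0.5, tau_b=1.0))
```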

We intend to express the correlation term as a convergent infinite sum involving these filtered correlations. To this end, we use a result proved in [25] to write the solution of a general class of non-autonomous linear systems. Lemma 3. This is a decomposition of the solution of a linear differential system on a basis of operators in which the spatial and temporal parts are decoupled.

This important step in a detailed study of the averaged equation cannot be achieved easily in models with non-linear activity. Everything is now in place to introduce the explicit expansion of the correlation term used in what follows.


Indeed, we use the previous result to rewrite the correlation term as follows. Property 3. This infinite sum of convolved filters is reminiscent of a property of Hawkes processes, which have a linear input-output gain [26]. In particular, the regime where the inputs are much slower than the neuronal activity time-scale is of special interest. The above decomposition has been used as the basis for the numerical computation of trajectories of the averaged system.

The inputs have a random but frozen spatial structure and evolve according to a sinusoidal function.


Now that we have found an explicit formulation for the averaged system, it is natural to study its dynamics. In fact, we prove in the following that if the connectivity W is kept sufficiently small compared to l, the averaged dynamics is contracting; one can then prove the uniqueness of the fixed point with a Banach fixed-point argument and exhibit an energy function for the system. When the network is weakly connected, the high-order terms in expansion 15 may be neglected. In this section, we follow this idea and find an explicit expansion of the equilibrium connectivity in which the strength of the connectivity is the small parameter enabling the expansion.
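Numerically, the Banach fixed-point argument translates into a plain fixed-point iteration W <- F(W), which converges whenever F is a contraction, i.e. in the weakly connected regime. The sketch below uses a hypothetical map F (a constant input-correlation term plus a small quadratic correction), not the actual averaged map.

```python
import numpy as np

def equilibrium_connectivity(F, n, tol=1e-10, max_iter=10000):
    """Fixed-point iteration W <- F(W); converges when F is a contraction,
    which is the regime where the connectivity stays small compared to l."""
    W = np.zeros((n, n))
    for _ in range(max_iter):
        W_new = F(W)
        if np.max(np.abs(W_new - W)) < tol:
            return W_new
        W = W_new
    return W

# Hypothetical example: equilibrium of dW/dt = -kappa*W + C + g(W),
# i.e. W* = (C + g(W*)) / kappa, with a small quadratic correction g.
kappa = 1.0
C = np.array([[1.0, 0.2], [0.2, 1.0]])
F = lambda W: (C + 0.05 * W @ W) / kappa
print(equilibrium_connectivity(F, 2))
```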

The weaker the connectivity, the more terms can be neglected in the expansion. Strictly speaking, it is not natural to speak of a weakly connected learning network, since the connectivity is a variable. However, we are able to identify a weak-connectivity index which controls the strength of the connectivity.

We say the connectivity is weak when it is negligible compared to the intrinsic leak term. We show in the Appendix that this weak-connectivity index depends only on the parameters of the network and can be written explicitly. At first order, the final connectivity is C_{0,0}, the filtered correlation of the inputs convolved with a bell-shaped, centered temporal profile. The input is made of two spatially random patterns that are presented alternately. The off-diagonal terms are null because the two patterns are spatially orthogonal.

Not only is the spatial correlation encoded in the weights, but some information about the temporal correlation of the inputs is captured as well. In this section, we study an improvement of the learning model obtained by adding a certain form of history dependence to the system, and we explain how it changes the results of the previous section.

Indeed, this class of systems contains models which are biologically more relevant than the previous one and which exhibit interesting additional functional behaviors. In particular, it covers the following features:


It is likely that a biological learning rule integrates the activity over a short time window. Rolls and Deco show numerically [15] that this temporal convolution, leading to spatio-temporal learning, makes it possible to perform invariant object recognition. Many neurons also have an oscillatory behavior. Although we cannot capture this in a one-dimensional linear model, we can model a neuron as a damped oscillator, which also introduces an important new time-scale into the system. Adding adaptation to the neuronal dynamics is an elementary way to implement this idea: this corresponds to modeling a single neuron without inputs by the equivalent formulations below.
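A minimal sketch of such a two-dimensional formulation is given below: a leaky voltage variable coupled to a slower adaptation variable, with invented parameters chosen so that the pair behaves as a damped oscillator.

```python
import numpy as np

def neuron_with_adaptation(l=1.0, a_gain=2.0, tau_a=5.0, T=20.0, dt=1e-3,
                           v0=1.0):
    """Single neuron without inputs, written as a two-dimensional system:
    a leaky voltage v coupled to a slower adaptation variable a.  For these
    (invented) parameters the linearized system has complex eigenvalues,
    i.e. the trajectory is a damped oscillation."""
    n = int(T / dt)
    v, a = v0, 0.0
    traj = np.empty((n, 2))
    for k in range(n):
        dv = (-l * v - a) * dt                  # leak plus adaptation feedback
        da = (a_gain * v - a) / tau_a * dt      # adaptation tracks the voltage
        v, a = v + dv, a + da
        traj[k] = v, a
    return traj

print(neuron_with_adaptation()[-1])
```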

It is commonly given as in [10]. There are two types of errors associated with the parabolic surface: random and non-random. The first type of error comprises apparent changes in the sun's width and scattering effects caused by random slope errors associated with the reflective surface; these can be represented by normal probability distributions.

The second class of errors depends on the manufacturing and operation of the collector. These errors are due to reflector profile imperfections, misalignment, and receiver location errors [11]. Random errors are modeled statistically to calculate the standard deviation of the distribution of the total energy reflected at normal incidence [12]. This standard provides a widely known method to obtain the thermal efficiency of solar energy collectors that use single-phase fluids, so that they can be compared with similar solar collectors [13].
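For concreteness, a commonly used steady-state definition of collector thermal efficiency divides the useful heat gained by the working fluid by the direct radiation intercepted by the aperture. The sketch below implements this generic definition with invented operating values; the exact equation prescribed by the standard cited here may differ in detail.

```python
def thermal_efficiency(m_dot, c_p, T_in, T_out, A_a, G_b):
    """Instantaneous thermal efficiency: useful heat gained by the working
    fluid divided by the direct solar radiation on the collector aperture.

    m_dot : mass flow rate [kg/s], c_p : fluid heat capacity [J/(kg K)],
    T_in, T_out : inlet/outlet fluid temperatures [degC or K],
    A_a : aperture area [m^2], G_b : direct solar radiation [W/m^2].
    """
    q_useful = m_dot * c_p * (T_out - T_in)   # useful heat gain [W]
    return q_useful / (A_a * G_b)

# Hypothetical operating point of a parabolic trough collector
print(thermal_efficiency(m_dot=0.02, c_p=4186.0, T_in=25.0, T_out=35.0,
                         A_a=2.0, G_b=900.0))
```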

Artificial neural networks (ANN) are adaptive systems developed in the form of computational algorithms or electric circuits, inspired by the operation of biological neural systems.


An ANN is composed of a large number of interconnected units called neurons, which have a natural tendency to learn information from the outside world [14]. These structures are used to estimate or approximate functions that may depend on a large number of variables which are generally unknown, which is why ANNs have been used in many practical applications such as pattern recognition, time-series estimation and modeling of nonlinear processes [15].

An ANN model can be seen as a black box that is fed a database composed of a series of input variables; each of these input variables is assigned an appropriate weighting factor called the weight W. The sum of the weighted inputs, together with a bias b used for adjustment, produces a value that is applied to a transfer function to generate an output (Figure 3).
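This computation is easy to write down explicitly; the following sketch implements a single neuron with invented weights, bias and a tanh transfer function.

```python
import numpy as np

def neuron(x, W, b, transfer=np.tanh):
    """Single artificial neuron: weighted sum of the inputs plus a bias,
    passed through a transfer function."""
    return transfer(np.dot(W, x) + b)

x = np.array([0.5, -1.2, 3.0])       # input variables
W = np.array([0.8, 0.1, -0.4])       # weights
b = 0.2                              # bias
print(neuron(x, W, b))
```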

The main feature of these models is that they do not require specific information about the physical behavior of the system or about how the data were obtained [16]. Among the many existing ANN models, the most widely used is known as the multi-layer perceptron (MLP) [17], which is used to solve multivariable nonlinear problems through a process called training. The training process is performed with specific learning algorithms, of which the most widely used is known as back-propagation through time [18].

The architecture of an MLP is usually divided into three parts: an input layer, a hidden layer and an output layer. During the training process, the MLP learns from past errors in order to obtain a model that describes the nonlinear phenomenon as closely as possible. To do so, during the training phase the weight and bias parameters are adapted until the approximation error is minimized [19]. A recurrent neural network (RNN) responds temporally to an external input signal, and the feedback allows the RNN to have a state-space representation; this versatility makes RNNs convenient for diverse applications in modeling, optimization and control.

The order in an RNN refers to the form in which the neuron activation potential is defined [ 20 ].

When the local activation potential is combined with products of signals coming from the feedback, or when products are formed between the latter and the external input signals to the network, a high-order neural network emerges, where the order represents the number of signals that are multiplied. In this work, in order to model the process, a high-order recurrent neural network is designed. The structure is composed of an input vector, one hidden layer and an output layer consisting of a single neuron with a linear activation function.
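A sketch of such a high-order recurrent step is given below: the hidden layer receives the fed-back state, the external inputs, and their pairwise products (a second-order network), and a single linear neuron produces the output. The dimensions and weight values are invented, and the exact architecture of Figure 4 may differ.

```python
import numpy as np

def horn_step(x_prev, u, W_hidden, w_out, b_out):
    """One step of a (second-order) recurrent network: the hidden layer sees
    the previous state, the external inputs and their products, and a single
    linear output neuron reads out the hidden activations."""
    z = np.concatenate(([1.0], x_prev, u, np.outer(x_prev, u).ravel()))
    x = np.tanh(W_hidden @ z)            # new hidden state
    y = w_out @ x + b_out                # linear output neuron
    return x, y

rng = np.random.default_rng(3)
n_hidden, n_in = 4, 5                    # e.g. 5 measured input variables
x = np.zeros(n_hidden)
W_hidden = 0.1 * rng.standard_normal((n_hidden, 1 + n_hidden + n_in + n_hidden * n_in))
w_out = 0.1 * rng.standard_normal(n_hidden)
for u in 0.1 * rng.standard_normal((10, n_in)):   # a short input sequence
    x, y = horn_step(x, u, W_hidden, w_out, b_out=0.0)
print(y)
```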

In Figure 4, the designed recurrent neural network architecture is depicted. For the set of weights, we construct a weight vector that will be estimated by means of Kalman filtering. The Kalman filter (KF) algorithm was first designed as a method to estimate the state of a system subject to noise in both the process and the measurements.

Consider a linear, discrete-time dynamical system described by a process equation and a measurement equation, both corrupted by noise.



To deal with the nonlinear structure of the recurrent neural network mapping, an extended Kalman filter (EKF) algorithm is developed. The learning algorithm for the recurrent neural network based on the EKF is described below. The H_k matrix is defined entry-wise as the derivative of one of the neural network outputs, y_i, with respect to one neural network weight, w_j, as follows:
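A generic EKF-style weight update for a neural network is sketched below; the covariances R and Q and the learning rate eta are tuning parameters, the Jacobian H is filled with a random placeholder instead of the actual network derivatives, and the exact recursion used here may differ in notation.

```python
import numpy as np

def ekf_update(w, P, H, e, R, Q, eta=1.0):
    """One EKF weight update.

    w : weight vector, P : weight covariance, H : Jacobian of the network
    outputs with respect to w, e : output error (target minus prediction),
    R, Q : measurement and process noise covariances, eta : learning rate.
    """
    S = R + H @ P @ H.T                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    w = w + eta * (K @ e)                # weight correction
    P = P - K @ H @ P + Q                # covariance update
    return w, P

# Hypothetical dimensions: 10 weights, one scalar output
n_w = 10
w = np.zeros(n_w)
P = 100.0 * np.eye(n_w)
Q = 1e-4 * np.eye(n_w)
R = np.array([[1.0]])
H = np.random.default_rng(4).standard_normal((1, n_w))  # placeholder d(y)/d(w)
e = np.array([0.3])                                     # current output error
w, P = ekf_update(w, P, H, e, R, Q)
print(w[:3])
```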

In order to solve the proposed optimization problem, a computational intelligence methodology is developed. The approach is divided into two parts: the first is the generation of a mathematical model by an RNN, and the second performs the optimization process through an inverse neural network. Figure 5 displays the methodology divided into three steps: (i) the first step consists in generating a database with the most important parameters that could affect the desired output, which in our case is the thermal efficiency (this step is schematized in Figure 6); (ii) in the second step, an RNN model is trained to obtain the best approximation error relating the inputs to the desired outputs; (iii) in the last step, one of the RNN inputs is selected to serve as a control variable for the optimization process, which is carried out by an inverse neural network architecture.
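Step (iii) can be illustrated with a simple sweep: once a model of the efficiency is available, the chosen control input is varied over its admissible range while the other inputs are held fixed, and the value giving the highest predicted efficiency is retained. The model below is a toy stand-in for the trained RNN, and all numerical values are invented.

```python
import numpy as np

def optimize_control_input(model, fixed_inputs, candidate_values):
    """Inverse use of a trained model: sweep one control input over a range
    of candidate values, keep the remaining inputs fixed, and return the
    value that maximizes the predicted thermal efficiency."""
    best_value, best_eff = None, -np.inf
    for value in candidate_values:
        eff = model(np.append(fixed_inputs, value))
        if eff > best_eff:
            best_value, best_eff = value, eff
    return best_value, best_eff

# Toy stand-in for a trained model mapping 6 inputs to efficiency
model = lambda x: 0.7 - 0.002 * (x[-1] - 20.0) ** 2
fixed = np.array([25.0, 40.0, 900.0, 22.0, 2.5])     # other measured inputs
flow_candidates = np.linspace(5.0, 40.0, 100)        # control variable sweep
print(optimize_control_input(model, fixed, flow_candidates))
```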

Experimental database provided by Jaramillo et al.

The parameter measurements are divided into two categories: operational variables, comprising the working-fluid inlet temperature T_i, outlet temperature T_o and flow F_w; and environmental variables, comprising the ambient temperature T_a, direct solar radiation G_b and wind velocity V_w. Table 2 shows the six parameters that form the database and the minimum and maximum ranges of each one.
