Biological neuron model
Biological neuron models, also known as spiking neuron models,[1] are mathematical descriptions of neurons. In particular, these models describe how the electrical potential across the cell membrane changes over time. In an experimental setting, stimulating a neuron with an electrical current generates an action potential (or spike) that propagates down the neuron's axon. This axon can branch out and connect to a large number of downstream neurons at sites called synapses. At these synapses, the spike can trigger the release of a biochemical substance (neurotransmitter), which in turn can change the membrane potential of downstream neurons, potentially leading to spikes in those neurons and thus propagating the signal. As many as 85% of neurons in the neocortex, the outermost layer of the mammalian brain, are excitatory pyramidal neurons,[2][3] and each pyramidal neuron receives tens of thousands of inputs from other neurons.[4] Thus, spiking neurons are a major information-processing unit of the nervous system.
Spiking neuron models span a wide range of detail. One example is a highly detailed mathematical model that includes spatial morphology. Another is a conductance-based model that treats the neuron as a point and describes the membrane voltage dynamics as a function of transmembrane currents. A mathematically simpler "integrate-and-fire" model significantly simplifies the description of ion-channel and membrane-potential dynamics (initially studied by Lapicque in 1907).[5][6]
Introduction: Biological background, classification, and aims of neuron models
Non-spiking cells, spiking cells, and their measurement
Not all cells of the nervous system produce the type of spike that defines the scope of spiking neuron models. For example, cochlear hair cells, retinal receptor cells, and retinal bipolar cells do not spike. Furthermore, many cells in the nervous system are not classified as neurons but instead are classified as glia.
Neuronal activity can be measured with different experimental techniques, such as the whole-cell measurement technique, which captures the spiking activity of a single neuron and produces full-amplitude action potentials.
With extracellular measurement techniques an electrode (or array of several electrodes) is located in the extracellular space. Spikes, often from several spiking sources, depending on the size of the electrode and its proximity to the sources, can be identified with signal processing techniques. Extracellular measurement has several advantages:
- It is easier to obtain experimentally;
- It is robust and lasts for a longer time;
- It can reflect the dominant effect, especially when conducted in an anatomical region with many similar cells.
Overview of neuron models
Neuron models can be divided into two categories according to the physical units of the interface of the model. Each category can be further subdivided according to the level of abstraction/detail:
- Electrical input–output membrane voltage models – These models produce a prediction for membrane output voltage as a function of electrical stimulation given as current or voltage input. The various models in this category differ in the exact functional relationship between the input current and the output voltage and in the level of detail. Some models in this category predict only the moment of occurrence of output spike (also known as "action potential"); other models are more detailed and account for sub-cellular processes. The models in this category can be either deterministic or probabilistic.
- Natural stimulus or pharmacological input neuron models – The models in this category connect the input stimulus which can be either pharmacological or natural, to the probability of a spike event. The input stage of these models is not electrical but rather has either pharmacological (chemical) concentration units, or physical units that characterize an external stimulus such as light, sound or other forms of physical pressure. Furthermore, the output stage represents the probability of a spike event and not an electrical voltage.
Although it is not unusual in science and engineering to have several descriptive models for different abstraction/detail levels, the number of different, sometimes contradictory, biological neuron models is exceptionally high. This situation is partly the result of the many different experimental settings, and the difficulty of separating the intrinsic properties of a single neuron from measurement effects and interactions of many cells (network effects). To accelerate the convergence toward a unified theory, we list several models in each category and, where applicable, references to supporting experiments.
Aims of neuron models
Ultimately, biological neuron models aim to explain the mechanisms underlying the operation of the nervous system. However, several approaches can be distinguished, from more realistic models (e.g., mechanistic models) to more pragmatic models (e.g., phenomenological models).[7][better source needed] Modeling helps to analyze experimental data and address questions. Models are also important in the context of restoring lost brain functionality through neuroprosthetic devices.
Electrical input–output membrane voltage models
The models in this category describe the relationship between neuronal membrane currents at the input stage and membrane voltage at the output stage. This category includes (generalized) integrate-and-fire models and biophysical models inspired by the work of Hodgkin–Huxley in the early 1950s, using an experimental setup that punctured the cell membrane and allowed the experimenter to impose a specific membrane voltage or current.[8][9][10][11]
Most modern electrical neural interfaces apply extra-cellular electrical stimulation to avoid membrane puncturing which can lead to cell death and tissue damage. Hence, it is not clear to what extent the electrical neuron models hold for extra-cellular stimulation (see e.g.[12]).
Hodgkin–Huxley
| Property of the H&H model | References |
| --- | --- |
| The shape of an individual spike | [8][9][10][11] |
| The identity of the ions involved | [8][9][10][11] |
| Spike speed across the axon | [8] |
The Hodgkin–Huxley model (H&H model)[8][9][10][11] describes the relationship between the flow of ionic currents across the neuronal cell membrane and the membrane voltage of the cell. It consists of a set of nonlinear differential equations describing the behavior of ion channels that permeate the cell membrane of the squid giant axon. Hodgkin and Huxley were awarded the 1963 Nobel Prize in Physiology or Medicine for this work.
The voltage–current relationship is central to the model: multiple voltage-dependent currents charge the cell membrane of capacitance Cm
- [math]\displaystyle{ C_\mathrm{m} \frac{d V(t)}{d t} = -\sum_i I_i (t, V). }[/math]
The above equation is the time derivative of the law of capacitance, Q = CV where the change of the total charge must be explained as the sum over the currents. Each current is given by
- [math]\displaystyle{ I(t,V) = g(t,V)\cdot(V-V_\mathrm{eq}) }[/math]
where g(t,V) is the conductance, or inverse resistance, which can be expanded in terms of its maximal conductance ḡ and the activation and inactivation fractions m and h, respectively, that determine how many ions can flow through available membrane channels. This expansion is given by
- [math]\displaystyle{ g(t,V)=\bar{g}\cdot m(t,V)^p \cdot h(t,V)^q }[/math]
and the gating fractions follow first-order kinetics
- [math]\displaystyle{ \frac{d m(t,V)}{d t} = \frac{m_\infty(V)-m(t,V)}{\tau_\mathrm{m} (V)} = \alpha_\mathrm{m} (V)\cdot(1-m) - \beta_\mathrm{m} (V)\cdot m }[/math]
with similar dynamics for h, where either τ and m∞ or α and β can be used to define the gating fractions.
The Hodgkin–Huxley model may be extended to include additional ionic currents. Typically, these include inward Ca2+ and Na+ input currents, as well as several varieties of K+ outward currents, including a "leak" current.
Even a single-compartment model of this kind can involve on the order of 20 parameters that must be estimated or measured for an accurate model. In a model of a complex system of neurons, numerical integration of the equations is computationally expensive. Careful simplifications of the Hodgkin–Huxley model are therefore needed.
The model can be reduced to two dimensions thanks to the dynamic relations that can be established between the gating variables.[13] It is also possible to extend the model to take into account the evolution of ionic concentrations (considered fixed in the original model).[14][15]
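The coupled equations above can be integrated numerically. The sketch below is a forward-Euler toy implementation: the function names, the 0 mV spike-detection level, and the use of the classic squid-axon parameters and rate functions are illustrative choices, not part of any standard library. It reproduces the qualitative behavior: no spikes at rest, repetitive firing for a sufficiently strong step current.

```python
import math

def hh_step(V, m, h, n, I_ext, dt):
    """One forward-Euler step of the Hodgkin-Huxley equations
    (standard squid-axon parameters; V in mV, t in ms, I in uA/cm^2)."""
    # voltage-dependent rate constants alpha/beta for each gate
    a_m = 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
    b_m = 4.0 * math.exp(-(V + 65.0) / 18.0)
    a_h = 0.07 * math.exp(-(V + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
    a_n = 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
    b_n = 0.125 * math.exp(-(V + 65.0) / 80.0)
    # ionic currents I = g_bar * m^p * h^q * (V - V_eq)
    I_Na = 120.0 * m ** 3 * h * (V - 50.0)
    I_K = 36.0 * n ** 4 * (V + 77.0)
    I_L = 0.3 * (V + 54.387)
    dV = I_ext - I_Na - I_K - I_L          # C_m = 1 uF/cm^2
    return (V + dt * dV,
            m + dt * (a_m * (1.0 - m) - b_m * m),
            h + dt * (a_h * (1.0 - h) - b_h * h),
            n + dt * (a_n * (1.0 - n) - b_n * n))

def hh_spike_count(I_ext, t_max=50.0, dt=0.01):
    """Integrate from rest and count upward crossings of 0 mV."""
    V, m, h, n = -65.0, 0.05, 0.6, 0.32
    spikes, above = 0, False
    for _ in range(int(t_max / dt)):
        V, m, h, n = hh_step(V, m, h, n, I_ext, dt)
        if V > 0.0 and not above:
            spikes, above = spikes + 1, True
        elif V < 0.0:
            above = False
    return spikes
```

With zero input the model stays at rest; with a sustained 10 µA/cm² step it fires repetitively.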
Perfect Integrate-and-fire
One of the earliest models of a neuron is the perfect integrate-and-fire model (also called non-leaky integrate-and-fire), first investigated in 1907 by Louis Lapicque.[16] A neuron is represented by its membrane voltage V, which evolves in time during stimulation with an input current I(t) according to
- [math]\displaystyle{ I(t)=C \frac{d V(t)}{d t} }[/math]
which is just the time derivative of the law of capacitance, Q = CV. When an input current is applied, the membrane voltage increases with time until it reaches a constant threshold Vth, at which point a delta function spike occurs and the voltage is reset to its resting potential, after which the model continues to run. The firing frequency of the model thus increases linearly without bound as input current increases.
The model can be made more accurate by introducing a refractory period tref that limits the firing frequency of a neuron by preventing it from firing during that period. For constant input I(t) = I, the threshold voltage is reached after an integration time tint = CVth/I, starting from zero. After a reset, the refractory period introduces a dead time so that the total time until the next firing is tref + tint. The firing frequency is the inverse of the total inter-spike interval (including the dead time). The firing frequency as a function of a constant input current is therefore
- [math]\displaystyle{ \,\! f(I)= \frac{I} {C_\mathrm{} V_\mathrm{th} + t_\mathrm{ref} I}. }[/math]
A shortcoming of this model is that it describes neither adaptation nor leakage. If the model receives a below-threshold short current pulse at some time, it will retain that voltage boost forever, until another input later makes it fire. This characteristic is not in line with observed neuronal behavior. The following extensions make the integrate-and-fire model more plausible from a biological point of view.
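The firing-frequency formula above can be checked against a direct simulation. The following sketch (parameter values and function names are arbitrary illustrative choices) integrates the non-leaky model with a refractory period and compares the measured rate with f(I) = I/(C Vth + tref I).

```python
def perfect_if_rate(I, C=1.0, V_th=1.0, t_ref=2.0, t_max=1000.0, dt=0.01):
    """Simulate the non-leaky integrate-and-fire neuron with a refractory
    period; returns the measured firing rate (spikes per unit time)."""
    V, spikes, dead = 0.0, 0, 0.0
    for _ in range(int(t_max / dt)):
        if dead > 0.0:
            dead -= dt                # refractory: integration is paused
        else:
            V += dt * I / C           # C dV/dt = I
            if V >= V_th:
                V, spikes, dead = 0.0, spikes + 1, t_ref
    return spikes / t_max

def perfect_if_rate_theory(I, C=1.0, V_th=1.0, t_ref=2.0):
    """Closed-form rate f(I) = I / (C V_th + t_ref I) from the text."""
    return I / (C * V_th + t_ref * I) if I > 0 else 0.0
```

For I = 0.5 and the default parameters both the simulation and the formula give a rate of 0.25.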
Leaky integrate-and-fire
The leaky integrate-and-fire model, which can be traced back to Louis Lapicque,[16] contains, compared to the non-leaky integrate-and-fire model, a "leak" term in the membrane potential equation, reflecting the diffusion of ions through the membrane. The model equation looks like[1]
- [math]\displaystyle{ C_\mathrm{m} \frac{d V_\mathrm{m} (t)}{d t}= I(t)-\frac{V_\mathrm{m} (t)}{R_\mathrm{m}} }[/math]
where Vm is the voltage across the cell membrane and Rm is the membrane resistance. (The non-leaky integrate-and-fire model is retrieved in the limit Rm → ∞, i.e., if the membrane is a perfect insulator.) The model equation is valid for arbitrary time-dependent input until a threshold Vth is reached; thereafter the membrane potential is reset.
For constant input, the minimum input to reach the threshold is Ith = Vth / Rm. Assuming a reset to zero, the firing frequency thus looks like
- [math]\displaystyle{ f(I) = \begin{cases} 0, & I \le I_\mathrm{th} \\ \left[ t_\mathrm{ref}-R_\mathrm{m} C_\mathrm{m} \log\left(1-\tfrac{V_\mathrm{th}}{I R_\mathrm{m}}\right) \right]^{-1}, & I \gt I_\mathrm{th} \end{cases} }[/math]
which converges for large input currents to the previous leak-free model with the refractory period.[17] The model can also be used for inhibitory neurons.[18][19]
The biggest disadvantage of the leaky integrate-and-fire neuron is that it does not contain neuronal adaptation, so it cannot describe an experimentally measured spike train in response to constant input current.[20] This disadvantage is removed in generalized integrate-and-fire models that also contain one or several adaptation variables and are able to predict spike times of cortical neurons under current injection to a high degree of accuracy.[21][22][23]
Adaptive integrate-and-fire
| Adaptive integrate-and-fire model | References |
| --- | --- |
| Sub-threshold voltage for time-dependent input current | [22][23] |
| Firing times for time-dependent input current | [22][23] |
| Firing patterns in response to step current input | [24][25][26] |
Neuronal adaptation refers to the fact that even in the presence of a constant current injection into the soma, the intervals between output spikes increase. An adaptive integrate-and-fire neuron model combines the leaky integration of voltage V with one or several adaptation variables wk (see Chapter 6.1. in the textbook Neuronal Dynamics[27])
- [math]\displaystyle{ \tau_\mathrm{m} \frac{d V_\mathrm{m} (t)}{d t} = R I(t)- [V_\mathrm{m} (t) - E_\mathrm{m} ]- R \sum_k w_k }[/math]
- [math]\displaystyle{ \tau_k \frac{d w_k (t)}{d t} = - a_k [V_\mathrm{m} (t) - E_\mathrm{m} ]- w_k + b_k \tau_k \sum_f \delta (t-t^f) }[/math]
where [math]\displaystyle{ \tau_m }[/math] is the membrane time constant, wk is adaptation current number k, [math]\displaystyle{ \tau_k }[/math] is the time constant of adaptation current wk, Em is the resting potential, tf is the firing time of the neuron, and δ denotes the Dirac delta function. Whenever the voltage reaches the firing threshold, the voltage is reset to a value Vr below the firing threshold. The reset value is one of the important parameters of the model. The simplest model of adaptation has only a single adaptation variable w, and the sum over k is removed.[28]
Integrate-and-fire neurons with one or several adaptation variables can account for a variety of neuronal firing patterns in response to constant stimulation, including adaptation, bursting, and initial bursting.[24][25][26] Moreover, adaptive integrate-and-fire neurons with several adaptation variables are able to predict spike times of cortical neurons under time-dependent current injection into the soma.[22][23]
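A minimal simulation illustrates the effect. The sketch below uses a single adaptation variable with the sign conventions of the equations above and a spike-triggered jump w → w + b (obtained by integrating the delta term); all parameter values are illustrative. Under constant input, the interspike intervals grow, which is the signature of adaptation.

```python
def adaptive_if_isis(I, R=1.0, tau_m=10.0, tau_w=100.0, a=0.0, b=0.5,
                     E_m=0.0, V_th=1.0, V_r=0.0, t_max=500.0, dt=0.01):
    """Integrate the adaptive model (single adaptation variable w, sign
    conventions as in the text, spike-triggered jump w -> w + b) and
    return the list of interspike intervals."""
    V, w, t, last, isis = E_m, 0.0, 0.0, None, []
    for _ in range(int(t_max / dt)):
        t += dt
        dV = (R * I - (V - E_m) - R * w) / tau_m
        dw = (-a * (V - E_m) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= V_th:
            V = V_r
            w += b                     # integrating the delta term gives +b
            if last is not None:
                isis.append(t - last)
            last = t
    return isis
```

With I = 2 the first intervals are short and later intervals are markedly longer, mimicking spike-frequency adaptation.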
Fractional-order leaky integrate-and-fire
Recent advances in computational and theoretical fractional calculus have led to a new form of model, called the fractional-order leaky integrate-and-fire model.[29][30] An advantage of this model is that it can capture adaptation effects with a single variable. The model has the following form[30]
- [math]\displaystyle{ I(t)-\frac{V_\mathrm{m} (t)}{R_\mathrm{m}} = C_\mathrm{m} \frac{d^{\alpha} V_\mathrm{m} (t)}{d^{\alpha} t} }[/math]
Once the voltage hits the threshold it is reset. Fractional integration has been used to account for neuronal adaptation in experimental data.[29]
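One common numerical approach is the Grünwald–Letnikov discretization of the fractional derivative; the sketch below uses it with illustrative parameters and keeps the voltage history across resets (studies differ in how the memory trace is handled at reset). For α = 1 the scheme reduces exactly to the forward-Euler leaky integrate-and-fire model, which gives a simple consistency check.

```python
def frac_lif(I, alpha, R=1.0, C=10.0, V_th=1.0, V_r=0.0, t_max=100.0, dt=0.1):
    """Fractional-order LIF via the Grunwald-Letnikov discretization
    (one common scheme; here only the voltage is reset at a spike and
    the past voltage values are kept).  Returns the spike times."""
    n = int(t_max / dt)
    # weights c_k = (-1)^k * binom(alpha, k), via the standard recurrence
    c = [1.0]
    for k in range(1, n):
        c.append(c[-1] * (1.0 - (alpha + 1.0) / k))
    V_hist, spikes = [0.0], []
    for step in range(1, n):
        # memory term of the fractional derivative over past voltages
        memory = sum(c[k] * V_hist[-k] for k in range(1, len(V_hist) + 1))
        V = (dt ** alpha / C) * (I - V_hist[-1] / R) - memory
        if V >= V_th:
            spikes.append(step * dt)
            V = V_r
        V_hist.append(V)
    return spikes
```

For α = 1 the first spike time agrees with the analytic leaky integrate-and-fire crossing time −RC·ln(1 − Vth/(IR)) ≈ 6.93 for these parameters.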
'Exponential integrate-and-fire' and 'adaptive exponential integrate-and-fire'
| Adaptive exponential integrate-and-fire | References |
| --- | --- |
| The sub-threshold current–voltage relation | |
| Firing patterns in response to step current input | [26] |
| Refractoriness and adaptation | [31] |
In the exponential integrate-and-fire model,[32] spike generation is exponential, following the equation:
- [math]\displaystyle{ \tau_m \frac{dV}{dt} = R I(t) + \left[ E_m-V+\Delta_T \exp \left( \frac{V - V_T} {\Delta_T} \right) \right]. }[/math]
where [math]\displaystyle{ V }[/math] is the membrane potential, [math]\displaystyle{ V_T }[/math] is the intrinsic membrane potential threshold, [math]\displaystyle{ \tau_m }[/math] is the membrane time constant, [math]\displaystyle{ E_m }[/math] is the resting potential, and [math]\displaystyle{ \Delta_T }[/math] is the sharpness of action potential initiation, usually around 1 mV for cortical pyramidal neurons.[33] Once the membrane potential crosses [math]\displaystyle{ V_T }[/math], it diverges to infinity in finite time.[34] In numerical simulations the integration is stopped if the membrane potential hits an arbitrary threshold (much larger than [math]\displaystyle{ V_T }[/math]), at which point the membrane potential is reset to a value Vr. The voltage reset value Vr is one of the important parameters of the model. Importantly, the right-hand side of the above equation contains a nonlinearity that can be directly extracted from experimental data.[33] In this sense the exponential nonlinearity is strongly supported by experimental evidence.
In the adaptive exponential integrate-and-fire neuron [31] the above exponential nonlinearity of the voltage equation is combined with an adaptation variable w
- [math]\displaystyle{ \tau_m \frac{dV}{dt} = R I(t) + \left[ E_m-V+\Delta_T \exp \left( \frac{V - V_T} {\Delta_T} \right) \right] - R w }[/math]
- [math]\displaystyle{ \tau \frac{d w (t)}{d t} = - a [V_\mathrm{m} (t) - E_\mathrm{m} ]- w + b \tau \sum_f \delta (t-t^f) }[/math]
where w denotes the adaptation current with time scale [math]\displaystyle{ \tau }[/math]. Important model parameters are the voltage reset value Vr, the intrinsic threshold [math]\displaystyle{ V_T }[/math], the time constants [math]\displaystyle{ \tau }[/math] and [math]\displaystyle{ \tau_m }[/math] as well as the coupling parameters a and b. The adaptive exponential integrate-and-fire model inherits the experimentally derived voltage nonlinearity [33] of the exponential integrate-and-fire model. But going beyond this model, it can also account for a variety of neuronal firing patterns in response to constant stimulation, including adaptation, bursting, and initial bursting.[26] However, since the adaptation is in the form of a current, aberrant hyperpolarization may appear. This problem was solved by expressing it as a conductance.[35]
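A compact simulation of the adaptive exponential model is sketched below. The explicit numerical ceiling V_peak stands in for the divergence to infinity, the R·w product is absorbed into w (so w is in voltage units), and all parameter values are illustrative. Under constant drive the interspike intervals lengthen, showing spike-frequency adaptation.

```python
import math

def adex_spikes(RI, tau_m=10.0, E_m=-70.0, V_T=-50.0, Delta_T=2.0,
                tau_w=100.0, a=0.0, b=2.0, V_r=-65.0, V_peak=0.0,
                t_max=300.0, dt=0.01):
    """Adaptive exponential integrate-and-fire.  RI is the input drive
    in mV; w is expressed in mV as well (R*w absorbed into w).  The
    finite ceiling V_peak stands in for the divergence to infinity;
    returns the list of spike times."""
    V, w, t, spikes = E_m, 0.0, 0.0, []
    for _ in range(int(t_max / dt)):
        t += dt
        dV = (RI + E_m - V + Delta_T * math.exp((V - V_T) / Delta_T) - w) / tau_m
        dw = (-a * (V - E_m) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= V_peak:                # numerical stand-in for +infinity
            V = V_r
            w += b                     # spike-triggered adaptation jump
            spikes.append(t)
    return spikes
```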
Stochastic models of membrane voltage and spike timing
The models in this category are generalized integrate-and-fire models that include a certain level of stochasticity. Cortical neurons in experiments are found to respond reliably to time-dependent input, albeit with a small degree of variability between one trial and the next if the same stimulus is repeated.[36][37] Stochasticity in neurons has two important sources. First, even in a very controlled experiment where input current is injected directly into the soma, ion channels open and close stochastically,[38] and this channel noise leads to a small amount of variability in the exact value of the membrane potential and the exact timing of output spikes. Second, for a neuron embedded in a cortical network, it is hard to control the exact input because most inputs come from unobserved neurons somewhere else in the brain.[27]
Stochasticity has been introduced into spiking neuron models in two fundamentally different forms: either (i) a noisy input current is added to the differential equation of the neuron model;[39] or (ii) the process of spike generation is noisy.[40] In both cases, the mathematical theory can be developed for continuous time, which is then, if desired for the use in computer simulations, transformed into a discrete-time model.
The relation of noise in neuron models to the variability of spike trains and neural codes is discussed in Neural Coding and in Chapter 7 of the textbook Neuronal Dynamics.[27]
Noisy input model (diffusive noise)
A neuron embedded in a network receives spike input from other neurons. Since the spike arrival times are not controlled by an experimentalist, they can be considered stochastic. Thus a (potentially nonlinear) integrate-and-fire model with nonlinearity f(v) receives two inputs: an input [math]\displaystyle{ I(t) }[/math] controlled by the experimentalist and a noisy input current [math]\displaystyle{ I^{\rm noise}(t) }[/math] that describes the uncontrolled background input.
- [math]\displaystyle{ \tau_m \frac{dV}{dt} = f(V) + R I(t) + R I^\text{noise}(t) }[/math]
Stein's model[39] is the special case of a leaky integrate-and-fire neuron and a stationary white noise current [math]\displaystyle{ I^{\rm noise}(t) = \xi(t) }[/math] with mean zero and unit variance. In the subthreshold regime, these assumptions yield the equation of the Ornstein–Uhlenbeck process
- [math]\displaystyle{ \tau_m \frac{dV}{dt} = [E_m-V] + R I(t) + R \xi(t) }[/math]
However, in contrast to the standard Ornstein–Uhlenbeck process, the membrane voltage is reset whenever V hits the firing threshold Vth .[39] Calculating the interval distribution of the Ornstein–Uhlenbeck model for constant input with threshold leads to a first-passage time problem.[39][41] Stein's neuron model and variants thereof have been used to fit interspike interval distributions of spike trains from real neurons under constant input current.[41]
In the mathematical literature, the above equation of the Ornstein–Uhlenbeck process is written in the form
- [math]\displaystyle{ dV = [E_m-V + R I(t)] \frac{dt}{\tau_m} + \sigma \, dW }[/math]
where [math]\displaystyle{ \sigma }[/math] is the amplitude of the noise input and dW are increments of a Wiener process. For discrete-time implementations with time step Δt, the voltage updates are[27]
- [math]\displaystyle{ \Delta V = [E_m-V + R I(t)] \frac{\Delta t}{\tau_m} + \sigma \sqrt{\Delta t}\, y }[/math]
where y is drawn from a Gaussian distribution with zero mean and unit variance. The voltage is reset when it hits the firing threshold Vth.
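The discrete-time update above is straightforward to implement. The sketch below (illustrative parameters; the threshold is disabled by default so the pure subthreshold fluctuations are visible) integrates the noisy leaky integrator; for the resulting Ornstein–Uhlenbeck process the stationary standard deviation should be close to σ√(τm/2).

```python
import random

def noisy_lif_trace(E_m=-65.0, tau_m=10.0, R=1.0, I0=0.0, sigma=0.5,
                    V_th=None, t_max=5000.0, dt=0.1, seed=1):
    """Euler-Maruyama integration of
        dV = [E_m - V + R I0] dt / tau_m + sigma dW.
    With V_th=None the threshold is switched off, so the returned trace
    shows the pure subthreshold (Ornstein-Uhlenbeck) fluctuations."""
    rng = random.Random(seed)
    V, trace = E_m, []
    sqrt_dt = dt ** 0.5
    for _ in range(int(t_max / dt)):
        V += (E_m - V + R * I0) * dt / tau_m \
             + sigma * sqrt_dt * rng.gauss(0.0, 1.0)
        if V_th is not None and V >= V_th:
            V = E_m                    # reset at threshold crossing
        trace.append(V)
    return trace
```

For σ = 0.5 and τm = 10 the stationary standard deviation is close to σ√(τm/2) ≈ 1.12 mV.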
The noisy input model can also be used in generalized integrate-and-fire models. For example, the exponential integrate-and-fire model with noisy input reads
- [math]\displaystyle{ \tau_m \frac{dV}{dt} =E_m-V+\Delta_T \exp \left( \frac{V - V_T} {\Delta_T} \right) + R I(t) + R\xi(t) }[/math]
For constant deterministic input [math]\displaystyle{ I(t)=I_0 }[/math] it is possible to calculate the mean firing rate as a function of [math]\displaystyle{ I_0 }[/math].[42] This is important because the frequency–current relation (f–I curve) is often used by experimentalists to characterize a neuron.
The leaky integrate-and-fire model with noisy input has been widely used in the analysis of networks of spiking neurons.[43] Noisy input is also called 'diffusive noise' because it leads to a diffusion of the subthreshold membrane potential around the noise-free trajectory (Johannesma[44]). The theory of spiking neurons with noisy input is reviewed in Chapter 8.2 of the textbook Neuronal Dynamics.[27]
Noisy output model (escape noise)
In deterministic integrate-and-fire models, a spike is generated if the membrane potential V(t) hits the threshold [math]\displaystyle{ V_{th} }[/math]. In noisy output models, the strict threshold is replaced by a noisy one as follows. At each moment in time t, a spike is generated stochastically with instantaneous stochastic intensity or 'escape rate' [27]
- [math]\displaystyle{ \rho(t) = f(V(t)-V_{th}) }[/math]
that depends on the momentary difference between the membrane voltage V(t) and the threshold [math]\displaystyle{ V_{th} }[/math].[40] A common choice for the 'escape rate' [math]\displaystyle{ f }[/math] (that is consistent with biological data[22]) is
- [math]\displaystyle{ f(V-V_{th}) = \frac{1}{\tau_0} \exp[\beta(V-V_{th})] }[/math]
where [math]\displaystyle{ \tau_0 }[/math] is a time constant that describes how quickly a spike is fired once the membrane potential reaches the threshold and [math]\displaystyle{ \beta }[/math] is a sharpness parameter. For [math]\displaystyle{ \beta\to\infty }[/math] the threshold becomes sharp and spike firing occurs deterministically at the moment when the membrane potential hits the threshold from below. The sharpness value found in experiments[22] is [math]\displaystyle{ 1/\beta\approx 4\,\mathrm{mV} }[/math], which means that neuronal firing becomes non-negligible as soon as the membrane potential is a few mV below the formal firing threshold.
The escape rate process via a soft threshold is reviewed in Chapter 9 of the textbook Neuronal Dynamics.[27]
For models in discrete time, a spike is generated with probability
- [math]\displaystyle{ P_F(t_n) = F[V(t_n)-V_{th}] }[/math]
that depends on the momentary difference between the membrane voltage V at time [math]\displaystyle{ t_n }[/math] and the threshold [math]\displaystyle{ V_{th} }[/math].[49] The function F is often taken as a standard sigmoidal [math]\displaystyle{ F(x) = 0.5[1 + \tanh(\gamma x)] }[/math] with steepness parameter [math]\displaystyle{ \gamma }[/math],[40] similar to the update dynamics in artificial neural networks. But the functional form of F can also be derived from the stochastic intensity [math]\displaystyle{ f }[/math] in continuous time introduced above as [math]\displaystyle{ F(y_n)\approx 1 - \exp[-f(y_n)\,\Delta t] }[/math] where [math]\displaystyle{ y_n = V(t_n)-V_{th} }[/math] is the threshold distance.[40]
Integrate-and-fire models with output noise can be used to predict the PSTH of real neurons under arbitrary time-dependent input.[22] For non-adaptive integrate-and-fire neurons, the interval distribution under constant stimulation can be calculated from stationary renewal theory.[27]
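The escape-rate mechanism can be sketched in a few lines (parameter values are illustrative; β = 0.25 corresponds to the 1/β ≈ 4 mV sharpness quoted above). Clamping the voltage and sampling the discrete-time firing probability recovers the underlying intensity f.

```python
import math
import random

def escape_rate(V, V_th=-50.0, tau0=1.0, beta=0.25):
    """Stochastic intensity f(V - V_th) = (1/tau0) exp[beta (V - V_th)]."""
    return math.exp(beta * (V - V_th)) / tau0

def spike_prob(V, dt=0.1, **kw):
    """Discrete-time firing probability P_F = 1 - exp(-f * dt)."""
    return 1.0 - math.exp(-escape_rate(V, **kw) * dt)

def empirical_rate(V, t_max=2000.0, dt=0.1, seed=2, **kw):
    """Empirical firing rate of the escape mechanism at clamped voltage V."""
    rng = random.Random(seed)
    p = spike_prob(V, dt=dt, **kw)
    n = sum(1 for _ in range(int(t_max / dt)) if rng.random() < p)
    return n / t_max
```

The firing probability grows with the voltage, and the sampled rate at a fixed voltage stays close to the intensity f, as expected for small Δt.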
Spike response model (SRM)
| Spike response model | References |
| --- | --- |
| Sub-threshold voltage for time-dependent input current | [23][22] |
| Firing times for time-dependent input current | [23][22] |
| Firing patterns in response to step current input | [50][51] |
| Interspike interval distribution | [50][40] |
| Spike-afterpotential | [23] |
| Refractoriness and dynamic firing threshold | [23][22] |
Main article: Spike response model
The spike response model (SRM) is a general linear model for the subthreshold membrane voltage combined with a nonlinear output noise process for spike generation.[40][52][50] The membrane voltage V(t) at time t is
- [math]\displaystyle{ V(t)= \sum_f \eta(t-t^f) + \int_0^\infty \kappa(s) I(t-s)\,ds + V_\mathrm{rest} }[/math]
where tf is the firing time of spike number f of the neuron, Vrest is the resting voltage in the absence of input, I(t-s) is the input current at time t-s and [math]\displaystyle{ \kappa(s) }[/math] is a linear filter (also called kernel) that describes the contribution of an input current pulse at time t-s to the voltage at time t. The contributions to the voltage caused by a spike at time [math]\displaystyle{ t^f }[/math] are described by the refractory kernel [math]\displaystyle{ \eta(t-t^f) }[/math]. In particular, [math]\displaystyle{ \eta(t-t^f) }[/math] describes the reset after the spike and the time course of the spike-afterpotential following a spike. It therefore expresses the consequences of refractoriness and adaptation.[40][23] The voltage V(t) can be interpreted as the result of an integration of the differential equation of a leaky integrate-and-fire model coupled to an arbitrary number of spike-triggered adaptation variables.[24]
Spike firing is stochastic and happens with a time-dependent stochastic intensity (instantaneous rate)
- [math]\displaystyle{ f(V-\vartheta(t)) = \frac{1}{\tau_0} \exp[\beta(V-\vartheta(t))] }[/math]
with parameters [math]\displaystyle{ \tau_0 }[/math] and [math]\displaystyle{ \beta }[/math] and a dynamic threshold [math]\displaystyle{ \vartheta(t) }[/math] given by
- [math]\displaystyle{ \vartheta(t)= \vartheta_0 + \sum_f \theta_1(t-t^f) }[/math]
Here [math]\displaystyle{ \vartheta_0 }[/math] is the firing threshold of an inactive neuron and [math]\displaystyle{ \theta_1(t-t^f) }[/math] describes the increase of the threshold after a spike at time [math]\displaystyle{ t^f }[/math].[22][23] In case of a fixed threshold, one sets [math]\displaystyle{ \theta_1(t-t^f) }[/math]=0. For [math]\displaystyle{ \beta \to \infty }[/math] the threshold process is deterministic.[27]
The time course of the filters [math]\displaystyle{ \eta,\kappa,\theta_1 }[/math] that characterize the spike response model can be directly extracted from experimental data.[23] With optimized parameters the SRM describes the time course of the subthreshold membrane voltage for time-dependent input with a precision of 2 mV and can predict the timing of most output spikes with a precision of 4 ms.[22][23] The SRM is closely related to linear-nonlinear-Poisson cascade models (also called Generalized Linear Models).[48] The estimation of parameters of probabilistic neuron models such as the SRM using methods developed for Generalized Linear Models[53] is discussed in Chapter 10 of the textbook Neuronal Dynamics.[27]
The name spike response model arises because in a network, the input current for neuron i is generated by the spikes of other neurons so that in the case of a network the voltage equation becomes
- [math]\displaystyle{ V_i(t)= \sum_f \eta_i(t-t_i^f) + \sum_{j=1}^N w_{ij} \sum_{f'}\varepsilon_{ij}(t-t_j^{f'}) + V_\mathrm{rest} }[/math]
where [math]\displaystyle{ t_j^{f'} }[/math] are the firing times of neuron j (i.e., its spike train), [math]\displaystyle{ \eta_i(t-t^f_i) }[/math] describes the time course of the spike and the spike-afterpotential for neuron i, and [math]\displaystyle{ w_{ij} }[/math] and [math]\displaystyle{ \varepsilon_{ij}(t-t_j^{f'}) }[/math] describe the amplitude and time course of an excitatory or inhibitory postsynaptic potential (PSP) caused by the spike [math]\displaystyle{ t_j^{f'} }[/math] of the presynaptic neuron j. The time course [math]\displaystyle{ \varepsilon_{ij}(s) }[/math] of the PSP results from the convolution of the postsynaptic current [math]\displaystyle{ I(t) }[/math] caused by the arrival of a presynaptic spike from neuron j with the membrane filter [math]\displaystyle{ \kappa(s) }[/math].[27]
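A discrete-time toy implementation of the single-neuron SRM is sketched below. The exponential shapes for κ and η, the escape-rate parameters, and all numerical values are our illustrative choices; the model itself only requires some linear filters plus a stochastic threshold.

```python
import math
import random

def srm_simulate(I, t_max=200.0, dt=0.1, V_rest=-65.0, theta0=-50.0,
                 tau_m=10.0, R=1.0, eta0=8.0, tau_eta=20.0,
                 tau0=1.0, beta=2.0, seed=3):
    """Discrete-time spike response model with illustrative exponential
    kernels (our choice, not prescribed by the model):
        kappa(s) = (R / tau_m) exp(-s / tau_m)   input filter
        eta(s)   = -eta0 exp(-s / tau_eta)       spike-afterpotential
    Spikes are drawn from the escape rate (1/tau0) exp[beta (V - theta0)].
    Returns (spike_times, voltage_trace) for a constant input current I."""
    rng = random.Random(seed)
    n = int(t_max / dt)
    # kappa sampled on the grid, including the ds of the integral
    kappa = [(R / tau_m) * math.exp(-k * dt / tau_m) * dt for k in range(n)]
    spikes, trace, kappa_sum = [], [], 0.0
    for i in range(n):
        t = i * dt
        kappa_sum += kappa[i]          # running integral of kappa
        v_input = kappa_sum * I        # convolution with constant input
        v_spike = sum(-eta0 * math.exp(-(t - tf) / tau_eta) for tf in spikes)
        V = V_rest + v_input + v_spike
        trace.append(V)
        p = 1.0 - math.exp(-math.exp(beta * (V - theta0)) / tau0 * dt)
        if rng.random() < p:
            spikes.append(t)
    return spikes, trace
```

With no input the voltage stays far below threshold and essentially never fires; with a constant suprathreshold drive the afterpotential spaces the spikes out.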
SRM0
The SRM0[50][54][55] is a stochastic neuron model related to time-dependent nonlinear renewal theory and a simplification of the Spike Response Model (SRM). The main difference to the voltage equation of the SRM introduced above is that in the term containing the refractory kernel [math]\displaystyle{ \eta(s) }[/math] there is no summation sign over past spikes: only the most recent spike (denoted as the time [math]\displaystyle{ \hat{t} }[/math]) matters. Another difference is that the threshold is constant. The model SRM0 can be formulated in discrete or continuous time. For example, in continuous time, the single-neuron equation is
- [math]\displaystyle{ V(t)= \eta(t-\hat{t}) + \int_0^\infty \kappa(s) I(t-s) \, ds + V_\mathrm{rest} }[/math]
and the network equations of the SRM0 are[50]
- [math]\displaystyle{ V_i(t\mid\hat{t}_i) = \eta_i(t-\hat{t}_i) + \sum_j w_{ij} \sum_f \varepsilon_{ij}(t-\hat{t}_i,t-t_j^{f}) + V_\mathrm{rest} }[/math]
where [math]\displaystyle{ \hat{t}_i }[/math] is the last firing time of neuron i. Note that the time course of the postsynaptic potential [math]\displaystyle{ \varepsilon_{ij} }[/math] is also allowed to depend on the time since the last spike of neuron i, so as to describe a change in membrane conductance during refractoriness.[54] The instantaneous firing rate (stochastic intensity) is
- [math]\displaystyle{ f(V-V_{th}) = \frac{1}{\tau_0} \exp[\beta(V-V_{th})] }[/math]
where [math]\displaystyle{ V_{th} }[/math] is a fixed firing threshold. Thus spike firing of neuron i depends only on its input and the time since neuron i has fired its last spike.
With the SRM0, the interspike-interval distribution for constant input can be mathematically linked to the shape of the refractory kernel [math]\displaystyle{ \eta }[/math] .[40][50] Moreover the stationary frequency-current relation can be calculated from the escape rate in combination with the refractory kernel [math]\displaystyle{ \eta }[/math].[40][50] With an appropriate choice of the kernels, the SRM0 approximates the dynamics of the Hodgkin-Huxley model to a high degree of accuracy.[54] Moreover, the PSTH response to arbitrary time-dependent input can be predicted.[50]
Galves–Löcherbach model
The Galves–Löcherbach model[56] is a stochastic neuron model closely related to the spike response model SRM0[55][50] and to the leaky integrate-and-fire model. It is inherently stochastic and, just like the SRM0, linked to time-dependent nonlinear renewal theory. Given the model specifications, the probability that a given neuron [math]\displaystyle{ i }[/math] spikes at a time step [math]\displaystyle{ t }[/math] may be described by
- [math]\displaystyle{ \mathop{\mathrm{Prob}}(X_{t}(i) = 1\mid \mathcal{F}_{t-1}) = \varphi_i \Biggl( \sum_{j\in I} W_{j \rightarrow i} \sum_{s=L_t^i}^{t-1} g_j(t-s) X_s(j),~~~ t-L_t^i \Biggr), }[/math]
where [math]\displaystyle{ W_{j \rightarrow i} }[/math] is a synaptic weight, describing the influence of neuron [math]\displaystyle{ j }[/math] on neuron [math]\displaystyle{ i }[/math], [math]\displaystyle{ g_j }[/math] expresses the leak, and [math]\displaystyle{ L_t^i }[/math] provides the spiking history of neuron [math]\displaystyle{ i }[/math] before [math]\displaystyle{ t }[/math], according to
- [math]\displaystyle{ L_t^i =\sup\{s\lt t:X_s(i)=1\}. }[/math]
Importantly, the spike probability of neuron i depends only on its spike input (filtered with a kernel [math]\displaystyle{ g_{j} }[/math] and weighted with a factor [math]\displaystyle{ W_{j\to i} }[/math]) and the timing of its most recent output spike (summarized by [math]\displaystyle{ t-L_t^i }[/math]).
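A discrete-time simulation of a small Galves–Löcherbach network can be sketched as follows; the weight matrix, the leak kernel g, the sigmoidal rate function φ, and the two-step absolute refractoriness are all illustrative assumptions:

```python
import numpy as np

# Sketch of a small Galves-Loecherbach network in discrete time.
# Weights W, leak kernel g, and rate function phi are illustrative choices.
rng = np.random.default_rng(1)
N, T = 5, 200                          # neurons, time steps
W = rng.normal(0.5, 0.1, size=(N, N))  # synaptic weights W[j, i]
np.fill_diagonal(W, 0.0)               # no self-connections

def g(age):                # leak kernel: older spikes contribute less
    return np.exp(-0.2 * age)

def phi(u, age):           # spiking probability, bounded in (0, 1)
    base = 1.0 / (1.0 + np.exp(-(u - 1.0)))
    return base if age > 2 else 0.0    # absolute refractoriness of 2 steps

X = np.zeros((T, N), dtype=int)        # spike trains X[t, i]
last = np.zeros(N, dtype=int)          # L_t^i: last spike time of each neuron
X[0] = 1                               # initialize: every neuron spikes at t = 0
for t in range(1, T):
    for i in range(N):
        # input filtered since neuron i's last spike (the sum over s >= L_t^i)
        u = sum(W[j, i] * sum(g(t - s) * X[s, j] for s in range(last[i], t))
                for j in range(N) if j != i)
        if rng.random() < phi(u, t - last[i]):
            X[t, i] = 1
            last[i] = t
print(X.sum())
```

Note how each neuron's membrane summation is restarted at its own last spike time, which is exactly the role of [math]\displaystyle{ L_t^i }[/math] in the equation above.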
Didactic toy models of membrane voltage
The models in this category are highly simplified toy models that qualitatively describe the membrane voltage as a function of input. They are mainly used for didactic reasons in teaching but are not considered valid neuron models for large-scale simulations or data fitting.
FitzHugh–Nagumo
Sweeping simplifications to Hodgkin–Huxley were introduced by FitzHugh and Nagumo in 1961 and 1962. Seeking to describe "regenerative self-excitation" by a nonlinear positive-feedback membrane voltage and recovery by a linear negative-feedback gate voltage, they developed the model described by[57]
- [math]\displaystyle{ \begin{array}{rcl} \dfrac{d V}{d t} &=& V-V^3/3 - w + I_\mathrm{ext} \\ \tau \dfrac{d w}{d t} &=& V-a-b w \end{array} }[/math]
where we again have a membrane-like voltage and input current, a slower general gate voltage w, and experimentally determined parameters a = -0.7, b = 0.8, τ = 1/0.08. Although not derivable from biology, the model allows for a simplified, immediately available dynamic without being a trivial simplification.[58] The experimental support is weak, but the model is useful as a didactic tool to introduce the dynamics of spike generation through phase plane analysis. See Chapter 7 in the textbook Methods of Neuronal Modeling.[59]
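With the parameter values quoted above (a = −0.7, b = 0.8, τ = 1/0.08), the equations can be integrated with a simple forward-Euler scheme; the constant input current and the initial conditions below are illustrative choices:

```python
# Forward-Euler integration of the FitzHugh-Nagumo equations.
# Input current and initial conditions are illustrative assumptions.
a, b, tau = -0.7, 0.8, 1.0 / 0.08
I_ext = 0.5                    # constant input, chosen in the oscillatory regime
dt, T = 0.01, 200.0
V, w = -1.0, -0.5
trace = []
for _ in range(int(T / dt)):
    dV = V - V**3 / 3 - w + I_ext          # fast positive feedback in V
    dw = (V - a - b * w) / tau             # slow linear recovery variable
    V += dt * dV
    w += dt * dw
    trace.append(V)
print(min(trace), max(trace))
```

For this input the resting state is unstable and the trajectory settles onto a relaxation-oscillation limit cycle, the model's caricature of repetitive spiking.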
Morris–Lecar
In 1981 Morris and Lecar combined the Hodgkin–Huxley and FitzHugh–Nagumo models into a voltage-gated calcium channel model with a delayed-rectifier potassium channel, represented by
- [math]\displaystyle{ \begin{align} C\frac{d V}{d t} &= -I_\mathrm{ion}(V,w) + I \\[6pt] \frac{d w}{d t} &= \varphi \cdot \frac{w_\infty - w}{\tau_{w}} \end{align} }[/math]
where [math]\displaystyle{ I_\mathrm{ion}(V,w) = \bar{g}_\mathrm{Ca} m_\infty \cdot(V-V_\mathrm{Ca}) + \bar{g}_\mathrm{K} w\cdot(V-V_\mathrm{K}) + \bar{g}_\mathrm{L}\cdot(V-V_\mathrm{L}) }[/math].[17] The experimental support of the model is weak, but the model is useful as a didactic tool to introduce dynamics of spike generation through phase plane analysis. See Chapter 7[60] in the textbook Methods of Neuronal Modeling.[59]
A two-dimensional neuron model very similar to the Morris-Lecar model can be derived step-by-step starting from the Hodgkin-Huxley model. See Chapter 4.2 in the textbook Neuronal Dynamics.[27]
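A forward-Euler simulation of the Morris–Lecar equations requires the steady-state functions m∞(V), w∞(V) and the time scale τw(V), which are not given in the text; the sketch below uses the commonly quoted tanh/cosh forms together with a standard illustrative parameter set:

```python
import numpy as np

# Forward-Euler simulation of the Morris-Lecar equations. The tanh/cosh
# steady-state functions and all parameter values are a commonly used
# illustrative set, not values fixed by the text.
C = 20.0                          # membrane capacitance (uF/cm^2)
g_Ca, g_K, g_L = 4.4, 8.0, 2.0    # maximal conductances (mS/cm^2)
V_Ca, V_K, V_L = 120.0, -84.0, -60.0
V1, V2, V3, V4 = -1.2, 18.0, 2.0, 30.0
phi, I = 0.04, 90.0               # gate rate scale and input current

def m_inf(V):  return 0.5 * (1 + np.tanh((V - V1) / V2))
def w_inf(V):  return 0.5 * (1 + np.tanh((V - V3) / V4))
def tau_w(V):  return 1.0 / np.cosh((V - V3) / (2 * V4))

dt, steps = 0.05, 20000           # 1000 ms of simulated time
V, w = -60.0, 0.0
trace = []
for _ in range(steps):
    I_ion = (g_Ca * m_inf(V) * (V - V_Ca)     # instantaneous Ca current
             + g_K * w * (V - V_K)            # delayed-rectifier K current
             + g_L * (V - V_L))               # leak
    V += dt * (-I_ion + I) / C
    w += dt * phi * (w_inf(V) - w) / tau_w(V)
    trace.append(V)
print(min(trace), max(trace))
```

The calcium activation m is treated as instantaneous (m = m∞(V)), which is what reduces the model to two dimensions and makes it amenable to phase plane analysis.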
Hindmarsh–Rose
Building upon the FitzHugh–Nagumo model, Hindmarsh and Rose proposed in 1984[61] a model of neuronal activity described by three coupled first-order differential equations:
- [math]\displaystyle{ \begin{align} \frac{d x}{d t} &= y+3x^2-x^3-z+I \\[6pt] \frac{d y}{d t} &= 1-5x^2-y \\[6pt] \frac{d z}{d t} &= r\cdot (4(x + \tfrac{8}{5})-z) \end{align} }[/math]
with r ≈ 10−2, so that the z variable changes only very slowly. This extra mathematical complexity allows a great variety of dynamic behaviors for the membrane potential, described by the x variable of the model, including chaotic dynamics. This makes the Hindmarsh–Rose neuron model very useful: it is still simple and allows a good qualitative description of the many different firing patterns of the action potential observed in experiments, in particular bursting. Nevertheless, it remains a toy model and has not been fitted to experimental data. It is widely used as a reference model for bursting dynamics.[61]
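The three coupled equations can be integrated directly; the input current I = 3.25 and r = 0.005 below are commonly used illustrative values that produce bursting, not values fixed by the text:

```python
# Forward-Euler integration of the Hindmarsh-Rose equations.
# I, r, and the initial conditions are illustrative values chosen
# in the bursting regime.
I, r = 3.25, 0.005
dt, steps = 0.01, 60000        # 600 time units
x, y, z = -1.6, -10.0, 2.0     # start near the quiescent phase
xs = []
for _ in range(steps):
    dx = y + 3 * x**2 - x**3 - z + I   # membrane potential variable
    dy = 1 - 5 * x**2 - y              # fast recovery variable
    dz = r * (4 * (x + 8.0 / 5.0) - z) # slow adaptation variable
    x += dt * dx
    y += dt * dy
    z += dt * dz
    xs.append(x)
print(min(xs), max(xs))
```

Because z evolves on a timescale 1/r ≈ 200 time units, it slowly turns spiking on and off, producing the alternation between bursts and quiescent phases.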
Theta model and quadratic integrate-and-fire
The theta model, or Ermentrout–Kopell canonical Type I model, is mathematically equivalent to the quadratic integrate-and-fire model which in turn is an approximation to the exponential integrate-and-fire model and the Hodgkin-Huxley model. It is called a canonical model because it is one of the generic models for constant input close to the bifurcation point, which means close to the transition from silent to repetitive firing.[62][63]
The standard formulation of the theta model is[27][62][63]
- [math]\displaystyle{ \frac{d\theta(t)}{d t} = (I-I_0) [1+ \cos(\theta)] + [1- \cos(\theta)] }[/math]
The equation for the quadratic integrate-and-fire model is (see Chapter 5.3 in the textbook Neuronal Dynamics[27])
- [math]\displaystyle{ \tau_\mathrm{m} \frac{d V_\mathrm{m} (t)}{d t} = (I-I_0) R + [V_\mathrm{m} (t) - E_\mathrm{m} ][V_\mathrm{m} (t) - V_\mathrm{T} ] }[/math]
The equivalence of the theta model and quadratic integrate-and-fire is for example reviewed in Chapter 4.1.2.2 of Spiking Neuron Models.[1]
For input I(t) that changes over time or is far away from the bifurcation point, it is preferable to work with the exponential integrate-and-fire model (if one wants to stay in the class of one-dimensional neuron models), because real neurons exhibit the nonlinearity of the exponential integrate-and-fire model.[33]
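A minimal sketch of the quadratic integrate-and-fire model with a numerical reset rule is given below; the threshold, reset, and drive values are illustrative assumptions, and a prefactor of 1/mV is implied in the quadratic term so that the units balance:

```python
# Quadratic integrate-and-fire with numerical spike cutoff and reset.
# All parameter values are illustrative assumptions; a 1/mV prefactor
# is implied in the quadratic term.
tau_m = 10.0                   # membrane time constant (ms)
E_m, V_T = -65.0, -50.0        # resting and critical (unstable) voltage (mV)
V_cut, V_reset = 0.0, -70.0    # numerical spike cutoff and reset value (mV)
drive = 100.0                  # (I - I0) * R, strong enough for repetitive firing
dt, T = 0.01, 100.0
V = E_m
spikes = []
for step in range(int(T / dt)):
    t = step * dt
    # tau_m dV/dt = (I - I0) R + (V - E_m)(V - V_T)
    V += dt * (drive + (V - E_m) * (V - V_T)) / tau_m
    if V >= V_cut:             # register a spike and reset
        spikes.append(t)
        V = V_reset
print(len(spikes))
```

Above the critical voltage V_T the quadratic term makes V diverge in finite time; the cutoff-and-reset rule stands in for the downstroke of the action potential.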
Sensory input-stimulus encoding neuron models
The models in this category were derived following experiments involving natural stimulation such as light, sound, touch, or odor. In these experiments, the spike pattern resulting from each stimulus presentation varies from trial to trial, but the averaged response from several trials often converges to a clear pattern. Consequently, the models in this category describe a probabilistic relationship between the input stimulus and spike occurrences. Importantly, the recorded neurons are often located several processing steps after the sensory neurons, so that these models summarize the effects of the sequence of processing steps in a compact form.
The non-homogeneous Poisson process model (Siebert)
Siebert[64][65] modeled the neuron spike firing pattern using a non-homogeneous Poisson process model, following experiments involving the auditory system.[64][65] According to Siebert, the probability of a spiking event in the time interval [math]\displaystyle{ [t, t+\Delta_t] }[/math] is proportional to a non-negative function [math]\displaystyle{ g[s(t)] }[/math], where [math]\displaystyle{ s(t) }[/math] is the raw stimulus:
- [math]\displaystyle{ P_\text{spike}(t\in[t',t'+\Delta_t])=\Delta_t \cdot g[s(t)] }[/math]
Siebert considered several functions as [math]\displaystyle{ g[s(t)] }[/math], including [math]\displaystyle{ g[s(t)] \propto s^2(t) }[/math] for low stimulus intensities.
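Spike generation under Siebert's model amounts to thinning a fine time grid with the instantaneous rate g[s(t)]; the sketch below uses g[s(t)] ∝ s²(t), as in the low-intensity case, with a toy sinusoidal stimulus and an assumed gain constant:

```python
import numpy as np

# Inhomogeneous Poisson spike generation in the spirit of Siebert's model,
# with g[s(t)] proportional to s^2(t). The stimulus and the gain constant
# are illustrative assumptions.
rng = np.random.default_rng(2)
dt, T = 0.001, 2.0                    # time step and duration (s)
t = np.arange(0.0, T, dt)
s = np.sin(2 * np.pi * 5 * t)         # a 5 Hz toy stimulus
gain = 50.0                           # proportionality constant (spikes/s)
rate = gain * s**2                    # g[s(t)] ~ s^2(t)
# thinning: a spike in each bin with probability rate * dt
spikes = t[rng.random(t.size) < rate * dt]
print(spikes.size)
```

Because the process is memoryless, the spike count in any window is Poisson-distributed with mean equal to the integral of the rate, here about 50 spikes over the 2 s stimulus.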
The main advantage of Siebert's model is its simplicity. The shortcoming of the model is its inability to reflect properly the following phenomena:
- The transient enhancement of the neuronal firing activity in response to a step stimulus.
- The saturation of the firing rate.
- The values of the inter-spike-interval histogram at short intervals (close to zero).
These shortcomings are addressed by the age-dependent point process model and the two-state Markov Model.[66][67][68]
Refractoriness and age-dependent point process model
Berry and Meister[69] studied neuronal refractoriness using a stochastic model that predicts spikes as a product of two terms, a function f(s(t)) that depends on the time-dependent stimulus s(t) and a recovery function [math]\displaystyle{ w(t-\hat{t}) }[/math] that depends on the time since the last spike:
- [math]\displaystyle{ \rho(t) = f(s(t))w(t-\hat{t}) }[/math]
The model is also called an inhomogeneous Markov interval (IMI) process.[70] Similar models have been used for many years in auditory neuroscience.[71][72][73] Since the model keeps a memory of the last spike time, it is non-Poissonian and falls in the class of time-dependent renewal models.[27] It is closely related to the model SRM0 with exponential escape rate.[27] Importantly, it is possible to fit parameters of the age-dependent point process model so as to describe not just the PSTH response, but also the interspike-interval statistics.[70][71][73]
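The product form of the hazard can be sketched in discrete time as follows; the choices of the rate function f, the exponential recovery function w, and the toy stimulus are illustrative assumptions:

```python
import numpy as np

# Sketch of the age-dependent point process (IMI) model: the hazard is
# rho(t) = f(s(t)) * w(t - t_hat). The forms of f and w and the stimulus
# are illustrative assumptions.
rng = np.random.default_rng(3)
dt, T = 0.001, 5.0             # time step and duration (s)
tau_ref = 0.01                 # assumed recovery time constant (s)

def f(s):                      # stimulus-dependent rate (spikes/s)
    return 40.0 * max(s, 0.0)

def w(age):                    # recovery function: 0 right after a spike, -> 1
    return 1.0 - np.exp(-age / tau_ref)

t_hat = -1.0                   # last spike time (start fully recovered)
spikes = []
for k in range(int(T / dt)):
    t = k * dt
    s = 1.0 + np.sin(2 * np.pi * 2 * t)    # toy stimulus
    rho = f(s) * w(t - t_hat)              # hazard = stimulus term x recovery term
    if rng.random() < rho * dt:
        spikes.append(t)
        t_hat = t
print(len(spikes))
```

The recovery factor suppresses very short interspike intervals, which is exactly the refractory effect the plain Poisson model misses.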
Linear-nonlinear Poisson cascade model and GLM
The linear-nonlinear-Poisson cascade model is a cascade of a linear filtering process followed by a nonlinear spike generation step.[74] In the case that output spikes feed back, via a linear filtering process, we arrive at a model that is known in the neurosciences as the Generalized Linear Model (GLM).[48][53] The GLM is mathematically equivalent to the spike response model (SRM) with escape noise; but whereas in the SRM the internal variables are interpreted as the membrane potential and the firing threshold, in the GLM the internal variables are abstract quantities that summarize the net effect of input (and recent output spikes) before spikes are generated in the final step.[27][48]
The two-state Markov model (Nossenson & Messer)
The spiking neuron model by Nossenson & Messer[66][67][68] produces the probability of the neuron firing a spike as a function of either an external or pharmacological stimulus.[66][67][68] The model consists of a cascade of a receptor layer model and a spiking neuron model, as shown in Fig 4. The connection between the external stimulus to the spiking probability is made in two steps: First, a receptor cell model translates the raw external stimulus to neurotransmitter concentration, and then, a spiking neuron model connects neurotransmitter concentration to the firing rate (spiking probability). Thus, the spiking neuron model by itself depends on neurotransmitter concentration at the input stage.[66][67][68]
An important feature of this model is the prediction of the neuron's firing-rate pattern, which captures, using a low number of free parameters, the characteristic edge-emphasized response of neurons to a stimulus pulse, as shown in Fig. 5. The firing rate is identified both as a normalized probability for neural spike firing and as a quantity proportional to the current of neurotransmitters released by the cell. The expression for the firing rate takes the following form:
- [math]\displaystyle{ R_\text{fire}(t)=\frac{P_\text{spike}(t;\Delta_t)}{\Delta_t}=[y(t)+R_0] \cdot P_0(t) }[/math]
where,
- P0 is the probability of the neuron being "armed" and ready to fire. It is given by the following differential equation:
- [math]\displaystyle{ \dot{P}_0=-[y(t)+R_0+R_1] \cdot P_0(t) +R_1 }[/math]
P0 can generally be calculated recursively using the Euler method, but in the case of a pulse of stimulus, it yields a simple closed-form expression.[66][75]
- y(t) is the input of the model and is interpreted as the neurotransmitter concentration in the cell's surroundings (in most cases glutamate). For an external stimulus, it can be estimated through the receptor layer model:
- [math]\displaystyle{ y(t) \simeq g_\text{gain} \cdot \langle s^2(t)\rangle, }[/math]
with [math]\displaystyle{ \langle s^2(t)\rangle }[/math] being a short temporal average of stimulus power (given in watts or another unit of energy per unit time).
- R0 corresponds to the intrinsic spontaneous firing rate of the neuron.
- R1 is the recovery rate of the neuron from the refractory state.
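The Euler recursion for P0 mentioned above can be sketched for a rectangular stimulus pulse; the rate constants R0, R1 and the pulse amplitude are illustrative assumptions. The onset overshoot of the computed firing rate reproduces the edge-emphasized response described in the text:

```python
import numpy as np

# Euler integration of dP0/dt = -[y(t) + R0 + R1] P0 + R1 and the firing
# rate R_fire = [y(t) + R0] * P0 for a rectangular stimulus pulse.
# Rate constants and pulse amplitude are illustrative assumptions.
R0, R1 = 5.0, 50.0            # spontaneous-firing and recovery rates (1/s)
dt, T = 1e-4, 1.0
steps = int(T / dt)
y = np.zeros(steps)
y[int(0.2 / dt):int(0.6 / dt)] = 100.0   # neurotransmitter input pulse

P0 = R1 / (R0 + R1)           # start at the no-stimulus steady state
rate = np.empty(steps)
for k in range(steps):
    rate[k] = (y[k] + R0) * P0           # R_fire(t) = [y(t) + R0] * P0(t)
    P0 += dt * (-(y[k] + R0 + R1) * P0 + R1)
print(rate.max())
```

At pulse onset P0 is still high, so the rate jumps; P0 then relaxes to a lower steady state, so the rate decays toward a plateau — the characteristic transient enhancement at stimulus edges.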
Other predictions by this model include:
1) The averaged evoked response potential (ERP) due to the population of many neurons in unfiltered measurements resembles the firing rate.[68]
2) The voltage variance of activity due to multiple neuron activity resembles the firing rate (also known as Multi-Unit-Activity power or MUA).[67][68]
3) The inter-spike-interval probability distribution takes the form of a gamma-distribution-like function.[66][75]
| Property of the Model by Nossenson & Messer | References | Description of experimental evidence |
| --- | --- | --- |
| The shape of the firing rate in response to an auditory stimulus pulse | [76][77][78][79][80] | The firing rate has the same shape as Fig. 5. |
| The shape of the firing rate in response to a visual stimulus pulse | [81][82][83][84] | The firing rate has the same shape as Fig. 5. |
| The shape of the firing rate in response to an olfactory stimulus pulse | [85] | The firing rate has the same shape as Fig. 5. |
| The shape of the firing rate in response to a somatosensory stimulus | [86] | The firing rate has the same shape as Fig. 5. |
| The change in firing rate in response to neurotransmitter application (mostly glutamate) | [87][88] | The firing rate changes in response to neurotransmitter (glutamate) application. |
| Square dependence between an auditory stimulus pressure and the firing rate | [89] | The firing rate depends linearly on the square of the sound pressure, i.e., linearly on the stimulus power. |
| Square dependence between visual stimulus electric field (volts) and the firing rate | [82] | The firing rate depends linearly on the square of the stimulus electric field, i.e., linearly on the stimulus power. |
| The shape of the inter-spike-interval statistics (ISI) | [90] | The ISI histogram resembles a gamma-distribution-like function. |
| The ERP resembles the firing rate in unfiltered measurements | [91] | The shape of the averaged evoked response potential in response to a stimulus resembles the firing rate (Fig. 5). |
| MUA power resembles the firing rate | [68][92] | The shape of the empirical variance of extracellular measurements in response to a stimulus pulse resembles the firing rate (Fig. 5). |
Pharmacological input stimulus neuron models
The models in this category produce predictions for experiments involving pharmacological stimulation.
Synaptic transmission (Koch & Segev)
According to the model by Koch and Segev,[17] the response of a neuron to individual neurotransmitters can be modeled as an extension of the classical Hodgkin–Huxley model with both standard and nonstandard kinetic currents. Four receptor types primarily mediate synaptic transmission in the CNS: AMPA/kainate receptors are fast excitatory mediators, while NMDA receptors mediate considerably slower currents. Fast inhibitory currents go through GABAA receptors, while GABAB receptors act via secondary G-protein-activated potassium channels. This range of mediation produces the following current dynamics:
- [math]\displaystyle{ I_\mathrm{AMPA}(t,V) = \bar{g}_\mathrm{AMPA} \cdot [O] \cdot (V(t)-E_\mathrm{AMPA}) }[/math]
- [math]\displaystyle{ I_\mathrm{NMDA}(t,V) = \bar{g}_\mathrm{NMDA} \cdot B(V) \cdot [O] \cdot (V(t)-E_\mathrm{NMDA}) }[/math]
- [math]\displaystyle{ I_\mathrm{GABA_A}(t,V) = \bar{g}_\mathrm{GABA_A} \cdot ([O_1]+[O_2]) \cdot (V(t)-E_\mathrm{Cl}) }[/math]
- [math]\displaystyle{ I_\mathrm{GABA_B}(t,V) = \bar{g}_\mathrm{GABA_B} \cdot \tfrac{[G]^n}{[G]^n+K_\mathrm{d}} \cdot (V(t)-E_\mathrm{K}) }[/math]
where ḡ is the maximal[8][17] conductance (around 1 S) and E is the equilibrium potential of the given ion or transmitter (AMPA, NMDA, Cl, or K), while [O] describes the fraction of open receptors. For NMDA, there is a significant effect of the magnesium block, which depends sigmoidally on the concentration of extracellular magnesium through B(V). For GABAB, [G] is the concentration of the G-protein, and Kd describes the dissociation of G in binding to the potassium gates.
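The four current equations can be evaluated directly for a given membrane voltage and receptor state. In the sketch below, the conductances, reversal potentials, Hill coefficient, and the Jahr–Stevens form used for the magnesium block B(V) are illustrative assumptions:

```python
import numpy as np

# Evaluating the four synaptic currents for a given voltage and receptor
# state. Conductances, reversal potentials, Kd, n, and the Jahr-Stevens
# form of the magnesium block B(V) are illustrative assumptions.
def B(V, Mg=1.0):
    # sigmoidal voltage dependence of the NMDA magnesium block
    return 1.0 / (1.0 + np.exp(-0.062 * V) * Mg / 3.57)

def synaptic_currents(V, O_ampa, O_nmda, O_gaba_a, G, *,
                      g_ampa=1.0, g_nmda=0.5, g_gaba_a=1.0, g_gaba_b=0.2,
                      E_ampa=0.0, E_nmda=0.0, E_cl=-70.0, E_k=-90.0,
                      Kd=100.0, n=4):
    I_ampa = g_ampa * O_ampa * (V - E_ampa)
    I_nmda = g_nmda * B(V) * O_nmda * (V - E_nmda)
    I_gaba_a = g_gaba_a * O_gaba_a * (V - E_cl)
    I_gaba_b = g_gaba_b * G**n / (G**n + Kd) * (V - E_k)
    return I_ampa, I_nmda, I_gaba_a, I_gaba_b

print(synaptic_currents(-65.0, 0.2, 0.2, 0.2, 2.0))
```

At a hyperpolarized voltage such as −65 mV, B(V) strongly attenuates the NMDA current even with receptors open, illustrating why NMDA receptors act as coincidence detectors of transmitter release and depolarization.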
The dynamics of this more complicated model have been well studied experimentally and produce important results in terms of very quick synaptic potentiation and depression, that is, fast short-term learning.
The stochastic model by Nossenson and Messer translates neurotransmitter concentration at the input stage to the probability of releasing neurotransmitter at the output stage.[66][67][68] For a more detailed description of this model, see the two-state Markov model section above.
HTM neuron model
The HTM neuron model was developed by Jeff Hawkins and researchers at Numenta and is based on a theory called Hierarchical Temporal Memory, originally described in the book On Intelligence. It is based on neuroscience and the physiology and interaction of pyramidal neurons in the neocortex of the human brain.
Artificial Neural Network (ANN):
- Few synapses
- No dendrites
- Sum input × weights
- Learns by modifying the weights of synapses

Neocortical Pyramidal Neuron (Biological Neuron):
- Thousands of synapses on the dendrites
- Active dendrites: the cell recognizes hundreds of unique patterns
- Co-activation of a set of synapses on a dendritic segment causes an NMDA spike and depolarization at the soma
- Sources of input to the cell:
- Learns by growing new synapses

HTM Model Neuron:
- Inspired by the pyramidal cells in neocortex layers 2/3 and 5
- Thousands of synapses
- Active dendrites: the cell recognizes hundreds of unique patterns
- Models dendrites and NMDA spikes, with each array of coincident detectors having a set of synapses
- Learns by modeling the growth of new synapses
Applications
Spiking neuron models are used in a variety of applications that need encoding into or decoding from neuronal spike trains in the context of neuroprostheses and brain-computer interfaces, such as retinal prostheses[12][93][94][95] or artificial limb control and sensation.[96][97][98] Applications are not part of this article; for more information on this topic, please refer to the main article.
Relation between artificial and biological neuron models
The most basic model of a neuron consists of an input with some synaptic weight vector and an activation function or transfer function inside the neuron determining output. This is the basic structure used for artificial neurons, which in a neural network often looks like
- [math]\displaystyle{ y_i = \varphi\left( \sum_j w_{ij} x_j \right) }[/math]
where yi is the output of the i th neuron, xj is the jth input neuron signal, wij is the synaptic weight (or strength of connection) between the neurons i and j, and φ is the activation function. While this model has seen success in machine-learning applications, it is a poor model for real (biological) neurons, because it lacks time-dependence in input and output.
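The rate equation above is a one-liner in code; the logistic activation chosen here is one common (illustrative) option for φ:

```python
import numpy as np

# The rate-based artificial neuron: output = phi(sum_j w_ij x_j).
# The logistic activation is an illustrative choice for phi.
def phi(u):
    return 1.0 / (1.0 + np.exp(-u))

def neuron_output(w, x):
    return phi(np.dot(w, x))

w = np.array([0.5, -0.3, 0.8])   # synaptic weights
x = np.array([1.0, 2.0, 0.5])    # input signals
# no time dependence: the same input always yields the same output
print(neuron_output(w, x))
```

The absence of any state or time variable in this function is precisely the limitation discussed in the following paragraphs: a biological neuron's response to the same input depends on its recent history.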
When an input is switched on at a time t and kept constant thereafter, biological neurons emit a spike train. Importantly this spike train is not regular but exhibits a temporal structure characterized by adaptation, bursting, or initial bursting followed by regular spiking. Generalized integrate-and-fire models such as the Adaptive Exponential Integrate-and-Fire model, the spike response model, or the (linear) adaptive integrate-and-fire model can capture these neuronal firing patterns.[24][25][26]
Moreover, neuronal input in the brain is time-dependent. Time-dependent input is transformed by complex linear and nonlinear filters into a spike train in the output. Again, the spike response model or the adaptive integrate-and-fire model enables prediction of the spike train in the output for arbitrary time-dependent input,[22][23] whereas an artificial neuron or a simple leaky integrate-and-fire model does not.
If we take the Hodgkin–Huxley model as a starting point, generalized integrate-and-fire models can be derived systematically in a step-by-step simplification procedure. This has been shown explicitly for the exponential integrate-and-fire[32] model and the spike response model.[54]
In the case of modeling a biological neuron, physical analogs are used in place of abstractions such as "weight" and "transfer function". A neuron is filled and surrounded with water-containing ions, which carry electric charge. The neuron is bound by an insulating cell membrane and can maintain a concentration of charged ions on either side that determines a capacitance Cm. The firing of a neuron involves the movement of ions into the cell that occurs when neurotransmitters cause ion channels on the cell membrane to open. We describe this by a physical time-dependent current I(t). With this comes a change in voltage, or the electrical potential energy difference between the cell and its surroundings, which is observed to sometimes result in a voltage spike called an action potential which travels the length of the cell and triggers the release of further neurotransmitters. The voltage, then, is the quantity of interest and is given by Vm(t).[19]
If the input current is constant, most neurons emit after some time of adaptation or initial bursting a regular spike train. The frequency of regular firing in response to a constant current I is described by the frequency-current relation which corresponds to the transfer function [math]\displaystyle{ \varphi }[/math] of artificial neural networks. Similarly, for all spiking neuron models the transfer function [math]\displaystyle{ \varphi }[/math] can be calculated numerically (or analytically).
Cable theory and compartmental models
All of the above deterministic models are point-neuron models because they do not consider the spatial structure of a neuron. However, the dendrite contributes to transforming input into output.[99][59] Point-neuron models are a valid description in three cases: (i) if input current is directly injected into the soma; (ii) if synaptic input arrives predominantly at or close to the soma (closeness is defined by the length scale [math]\displaystyle{ \lambda }[/math] introduced below); (iii) if synapses arrive anywhere on the dendrite, but the dendrite is completely linear. In the last case, the cable acts as a linear filter; these linear filter properties can be included in the formulation of generalized integrate-and-fire models such as the spike response model.
The filter properties can be calculated from a cable equation.
Let us consider a cell membrane in the form of a cylindrical cable. The position on the cable is denoted by x and the voltage across the cell membrane by V. The cable is characterized by a longitudinal resistance [math]\displaystyle{ r_l }[/math] per unit length and a membrane resistance [math]\displaystyle{ r_m }[/math]. If everything is linear, the voltage changes as a function of time and position according to
- [math]\displaystyle{ \frac{r_m}{r_l} \frac{\partial ^2 V}{\partial x^2}=c_m r_m \frac{\partial V}{\partial t}+ V }[/math]
We introduce a length scale [math]\displaystyle{ \lambda^2 = {r_m}/{r_l} }[/math] on the left side and time constant [math]\displaystyle{ \tau = c_m r_m }[/math] on the right side. The cable equation can now be written in its perhaps best-known form:
- [math]\displaystyle{ \lambda^2 \frac{\partial ^2 V}{\partial x^2}=\tau \frac{\partial V}{\partial t}+ V }[/math]
The above cable equation is valid for a single cylindrical cable.
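The cable equation can be checked numerically with an explicit finite-difference scheme: clamping one end of a long cable and letting it relax reproduces the exponential steady-state decay over the length constant λ. The grid sizes and the crude boundary handling below are illustrative choices:

```python
import numpy as np

# Explicit finite-difference solution of lambda^2 V_xx = tau V_t + V on a
# long cable with the proximal end clamped. Parameter values and boundary
# handling are illustrative choices.
lam, tau = 1.0, 10.0          # length constant (mm) and time constant (ms)
dx, dt = 0.05, 0.01           # chosen so the explicit scheme is stable
nx, steps = 200, 20000        # 10 length constants of cable, 200 ms
V = np.zeros(nx)
inj = 1.0                     # clamped voltage at x = 0 (arbitrary units)
for _ in range(steps):
    Vxx = np.empty(nx)
    Vxx[1:-1] = (V[2:] - 2 * V[1:-1] + V[:-2]) / dx**2
    Vxx[0] = Vxx[-1] = 0.0    # crude ends; the far end simply relaxes to rest
    V += (dt / tau) * (lam**2 * Vxx - V)
    V[0] = inj                # re-clamp the proximal end every step
# steady state decays roughly as exp(-x/lambda) along the cable:
print(V[round(1.0 / dx)])     # value one length constant away from the clamp
```

For a cable much longer than λ, the steady-state profile approaches exp(−x/λ), so the voltage one length constant away from the clamped end is close to 1/e of the clamped value.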
Linear cable theory describes the dendritic arbor of a neuron as a cylindrical structure undergoing a regular pattern of bifurcation, like branches in a tree. For a single cylinder or an entire tree, the static input conductance at the base (where the tree meets the cell body or any such boundary) is defined as
- [math]\displaystyle{ G_{in} = \frac{G_\infty \tanh(L) + G_L}{1+(G_L / G_\infty )\tanh(L)} }[/math],
where L is the electrotonic length of the cylinder, which depends on its length, diameter, and resistance. A simple recursive algorithm scales linearly with the number of branches and can be used to calculate the effective conductance of the tree. This is given by
- [math]\displaystyle{ \,\! G_D = G_m A_D \tanh(L_D) / L_D }[/math]
where AD = πld is the total surface area of the tree of total length l, and LD is its total electrotonic length. For an entire neuron in which the cell body conductance is GS and the membrane conductance per unit area is Gmd = Gm / A, we find the total neuron conductance GN for n dendrite trees by adding up all tree and soma conductances, given by
- [math]\displaystyle{ G_N = G_S + \sum_{j=1}^n A_{D_j} F_{dga_j}, }[/math]
where we can find the general correction factor Fdga experimentally by noting GD = GmdADFdga.
The linear cable model makes several simplifications to give closed analytic results, namely that the dendritic arbor must branch in diminishing pairs in a fixed pattern and that dendrites are linear. A compartmental model[59] allows for any desired tree topology with arbitrary branches and lengths, as well as arbitrary nonlinearities. It is essentially a discretized computational implementation of nonlinear dendrites.
Each piece, or compartment, of a dendrite is modeled by a straight cylinder of arbitrary length l and diameter d which connects with fixed resistance to any number of branching cylinders. We define the conductance ratio of the ith cylinder as Bi = Gi / G∞, where [math]\displaystyle{ G_\infty=\tfrac{\pi d^{3/2}}{2\sqrt{R_i R_m}} }[/math] and Ri is the resistance between the current compartment and the next. We obtain a series of equations for conductance ratios in and out of a compartment by making corrections to the normal dynamic Bout,i = Bin,i+1, as
- [math]\displaystyle{ B_{\mathrm{out},i} = \frac{B_{\mathrm{in},i+1}(d_{i+1}/d_i)^{3/2} }{ \sqrt{R_{\mathrm{m},i+1}/R_{\mathrm{m},i}} } }[/math]
- [math]\displaystyle{ B_{\mathrm{in},i} = \frac{ B_{\mathrm{out},i} + \tanh X_i }{ 1+B_{\mathrm{out},i}\tanh X_i } }[/math]
- [math]\displaystyle{ B_\mathrm{out,par} = \frac{B_\mathrm{in,dau1} (d_\mathrm{dau1}/d_\mathrm{par})^{3/2}} {\sqrt{R_\mathrm{m,dau1}/R_\mathrm{m,par}}} + \frac{B_\mathrm{in,dau2} (d_\mathrm{dau2}/d_\mathrm{par})^{3/2}} {\sqrt{R_\mathrm{m,dau2}/R_\mathrm{m,par}}} + \ldots }[/math]
where the last equation deals with parents and daughters at branches, and [math]\displaystyle{ X_i = \tfrac{l_i \sqrt{4R_i}}{\sqrt{d_i R_m}} }[/math]. We can iterate these equations through the tree until we get the point where the dendrites connect to the cell body (soma), where the conductance ratio is Bin,stem. Then our total neuron conductance for static input is given by
- [math]\displaystyle{ G_N = \frac{A_\mathrm{soma}}{R_\mathrm{m,soma}} + \sum_j B_{\mathrm{in,stem},j} G_{\infty,j}. }[/math]
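The recursion above can be sketched for the simplest case of a single unbranched chain of compartments, iterated from the sealed distal tip toward the soma; the geometry and resistivity values are illustrative assumptions:

```python
import numpy as np

# Iterating the conductance-ratio recursion from the sealed distal tip of
# a single unbranched chain of compartments toward the soma. Geometry and
# resistivities are illustrative assumptions (uniform R_i and R_m).
Ri, Rm = 100.0, 10000.0        # axial (ohm*cm) and membrane (ohm*cm^2) resistivity

def G_inf(d):                  # input conductance of a semi-infinite cylinder
    return np.pi * d**1.5 / (2.0 * np.sqrt(Ri * Rm))

def X(l, d):                   # electrotonic length of one compartment
    return l * np.sqrt(4.0 * Ri) / np.sqrt(d * Rm)

# compartments listed from distal tip to soma: (length, diameter) in cm
chain = [(0.01, 2e-4), (0.01, 3e-4), (0.01, 4e-4)]
B_out = 0.0                    # sealed distal end: no conductance beyond the tip
for i, (l, d) in enumerate(chain):
    tx = np.tanh(X(l, d))
    B_in = (B_out + tx) / (1.0 + B_out * tx)       # B_in from B_out and tanh(X)
    if i + 1 < len(chain):
        d_next = chain[i + 1][1]
        # B_out of the next (more proximal) compartment; uniform R_m assumed
        B_out = B_in * (d / d_next)**1.5
G_stem = B_in * G_inf(chain[-1][1])  # this dendrite's contribution at the soma
print(G_stem)
```

At the soma, contributions of the form B_in,stem · G∞ from each dendritic stem are summed together with the somatic membrane conductance, as in the expression for G_N above.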
Importantly, static input is a very special case. In biology, inputs are time-dependent. Moreover, dendrites are not always linear.
Compartmental models make it possible to include nonlinearities via ion channels positioned at arbitrary locations along the dendrites.[99][100] For static inputs, it is sometimes possible to reduce the number of compartments (and so increase the computational speed) and still retain the salient electrical characteristics.[101]
Conjectures regarding the role of the neuron in the wider context of the brain principle of operation
The neurotransmitter-based energy detection scheme
The neurotransmitter-based energy detection scheme[68][75] suggests that the neural tissue chemically executes a radar-like detection procedure.
As shown in Fig. 6, the key idea of the conjecture is to account for neurotransmitter concentration, neurotransmitter generation, and neurotransmitter removal rates as the important quantities in executing the detection task, while referring to the measured electrical potentials as a side effect that only under certain conditions coincides with the functional purpose of each step. The detection scheme is similar to a radar-like "energy detection" because it includes signal squaring, temporal summation, and a threshold switch mechanism, just like the energy detector, but it also includes a unit that emphasizes stimulus edges and a variable memory length. According to this conjecture, the physiological equivalent of the energy test statistic is neurotransmitter concentration, and the firing rate corresponds to neurotransmitter current. The advantage of this interpretation is that it leads to a unit-consistent explanation that allows bridging between electrophysiological measurements, biochemical measurements, and psychophysical results.
The evidence reviewed in[68][75] suggests the following association between functionality and histological classification:
- Stimulus squaring is likely to be performed by receptor cells.
- Stimulus edge emphasizing and signal transduction is performed by neurons.
- Temporal accumulation of neurotransmitters is performed by glial cells. Short-term neurotransmitter accumulation is likely to occur also in some types of neurons.
- Logical switching is executed by glial cells, and it results from exceeding a threshold level of neurotransmitter concentration. This threshold crossing is also accompanied by a change in neurotransmitter leak rate.
- Physical all-or-none movement switching is due to muscle cells and results from exceeding a certain neurotransmitter concentration threshold in the muscle's surroundings.
Note that although the electrophysiological signals in Fig. 6 are often similar to the functional signal (signal power / neurotransmitter concentration / muscle force), there are some stages in which the electrical observation differs from the functional purpose of the corresponding step. In particular, Nossenson et al. suggested that glia threshold crossing has a completely different functional operation compared to the radiated electrophysiological signal and that the latter might only be a side effect of glia break.
General comments regarding the modern perspective of scientific and engineering models
- The models above are still idealizations. Corrections must be made for the increased membrane surface area given by numerous dendritic spines, temperatures significantly hotter than room-temperature experimental data, and nonuniformity in the cell's internal structure.[17] Certain observed effects do not fit into some of these models. For instance, the temperature cycling (with minimal net temperature increase) of the cell membrane during action potential propagation is not compatible with models that rely on modeling the membrane as a resistance that must dissipate energy when current flows through it. The transient thickening of the cell membrane during action potential propagation is also not predicted by these models, nor is the changing capacitance and voltage spike that results from this thickening incorporated into these models. The action of some anesthetics such as inert gases is problematic for these models as well. New models, such as the soliton model attempt to explain these phenomena, but are less developed than older models and have yet to be widely applied.
- Modern views regarding the role of the scientific model suggest that "All models are wrong but some are useful" (Box and Draper, 1987, Gribbin, 2009; Paninski et al., 2009).
- Recent conjecture suggests that each neuron might function as a collection of independent threshold units. It is suggested that a neuron could be anisotropically activated following the origin of its arriving signals to the membrane, via its dendritic trees. The spike waveform was also proposed to be dependent on the origin of the stimulus.[102]
External links
- Neuronal Dynamics: from single neurons to networks and models of cognition (W. Gerstner, W. Kistler, R. Naud, L. Paninski, Cambridge University Press, 2014).[27] In particular, Chapters 6 - 10, html online version.
- Spiking Neuron Models[1] (W. Gerstner and W. Kistler, Cambridge University Press, 2002)
See also
- Binding neuron
- Bayesian approaches to brain function
- Brain-computer interfaces
- Free energy principle
- Models of neural computation
- Neural coding
- Neural oscillation
- Quantitative models of the action potential
- Spiking Neural Network
References
- ↑ Gerstner, Wulfram; Kistler, Werner M. (2002). Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge, U.K.: Cambridge University Press. ISBN 0-511-07817-X. OCLC 57417395. https://www.worldcat.org/oclc/57417395.
- ↑ DeFelipe, Javier; Farinas, Isabel (1992). "The pyramidal neuron of the cerebral cortex: morphological and chemical characteristics of the synaptic inputs". Progress in Neurobiology 39 (6): 563–607. doi:10.1016/0301-0082(92)90015-7. PMID 1410442.
- ↑ Markram, Henry; Muller, Eilif; Ramaswamy, Srikanth; Reimann, Michael; Abdellah, Marwan (2015). "Reconstruction and simulation of neocortical microcircuitry". Cell 163 (2): 456–492. doi:10.1016/j.cell.2015.09.029. PMID 26451489.
- ↑ Wong, R. K. S.; Traub, R. D. (2009-01-01), Schwartzkroin, Philip A., ed. (in en), NETWORKS | Cellular Properties and Synaptic Connectivity of CA3 Pyramidal Cells: Mechanisms for Epileptic Synchronization and Epileptogenesis, Oxford: Academic Press, pp. 815–819, doi:10.1016/b978-012373961-2.00215-0, ISBN 978-0-12-373961-2, http://www.sciencedirect.com/science/article/pii/B9780123739612002150, retrieved 2020-11-18
- ↑ Lapicque, LM (1907). "Recherches quantitatives sur l'excitation electrique des nerfs". J Physiol Paris 9: 620–635.
- ↑ Abbott, Larry (1999). "Lapicque's introduction of the integrate-and-fire model neuron (1907)". Brain Research Bulletin 50 (5): 303–304. doi:10.1016/S0361-9230(99)00161-6. PMID 10643408.
- ↑ Gauld, Christophe; Brun, Cédric; Boraud, Thomas; Carlu, Mallory; Depannemaecker, Damien (2022-01-14). "Computational Models in Neurosciences Between Mechanistic and Phenomenological Characterizations". doi:10.20944/preprints202201.0206.v1. https://www.preprints.org/manuscript/202201.0206/v1.
- ↑ 8.0 8.1 8.2 8.3 8.4 8.5 8.6 "A quantitative description of membrane current and its application to conduction and excitation in nerve". The Journal of Physiology 117 (4): 500–44. August 1952. doi:10.1113/jphysiol.1952.sp004764. PMID 12991237.
- ↑ 9.0 9.1 9.2 9.3 9.4 "Measurement of current-voltage relations in the membrane of the giant axon of Loligo". The Journal of Physiology 116 (4): 424–48. April 1952. doi:10.1113/jphysiol.1952.sp004716. PMID 14946712.
- ↑ 10.0 10.1 10.2 10.3 10.4 "Currents carried by sodium and potassium ions through the membrane of the giant axon of Loligo". The Journal of Physiology 116 (4): 449–72. April 1952. doi:10.1113/jphysiol.1952.sp004717. PMID 14946713.
- ↑ 11.0 11.1 11.2 11.3 11.4 "The components of membrane conductance in the giant axon of Loligo". The Journal of Physiology 116 (4): 473–96. April 1952. doi:10.1113/jphysiol.1952.sp004718. PMID 14946714.
- ↑ 12.0 12.1 "Photovoltaic Retinal Prosthesis with High Pixel Density". Nature Photonics 6 (6): 391–397. June 2012. doi:10.1038/nphoton.2012.104. PMID 23049619. Bibcode: 2012NaPho...6..391M.
- ↑ Dynamical systems in neuroscience: the geometry of excitability and bursting. Cambridge, MA: MIT Press. 2010. ISBN 978-0-262-51420-0. OCLC 457159828.
- ↑ "The influence of sodium and potassium dynamics on excitability, seizures, and the stability of persistent states: I. Single neuron dynamics". Journal of Computational Neuroscience 26 (2): 159–70. April 2009. doi:10.1007/s10827-008-0132-4. PMID 19169801.
- ↑ "A unified physiological framework of transitions between seizures, sustained ictal activity and depolarization block at the single neuron level" (in en). bioRxiv: 2020.10.23.352021. 2021-02-17. doi:10.1101/2020.10.23.352021. https://www.biorxiv.org/content/10.1101/2020.10.23.352021v2.
- ↑ 16.0 16.1 "Lapicque's introduction of the integrate-and-fire model neuron (1907)". Brain Research Bulletin 50 (5–6): 303–4. 1999. doi:10.1016/S0361-9230(99)00161-6. PMID 10643408. http://neurotheory.columbia.edu/~larry/AbbottBrResBul99.pdf.
- ↑ 17.0 17.1 17.2 17.3 17.4 Methods in neuronal modeling: from ions to networks (2nd ed.). Cambridge, Massachusetts: MIT Press. 1999. pp. 687. ISBN 978-0-262-11231-4. http://www.klab.caltech.edu/MNM/. Retrieved 2013-01-10.
- ↑ "Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons". Journal of Computational Neuroscience 8 (3): 183–208. 2000-05-01. doi:10.1023/A:1008925309027. PMID 10809012.
- ↑ 19.0 19.1 "Simple capacitor-switch model of excitatory and inhibitory neuron with all parts biologically explained allows input fire pattern dependent chaotic oscillations". Scientific Reports 10 (1): 7353. April 2020. doi:10.1038/s41598-020-63834-7. PMID 32355185. Bibcode: 2020NatSR..10.7353C.
- ↑ "Interpretation of the repetitive firing of nerve cells". The Journal of General Physiology 45 (6): 1163–79. July 1962. doi:10.1085/jgp.45.6.1163. PMID 13895926.
- ↑ "Minimal models of adapted neuronal response to in vivo-like input currents". Neural Computation 16 (10): 2101–24. October 2004. doi:10.1162/0899766041732468. PMID 15333209. https://doi.org/10.1162/0899766041732468.
- ↑ 22.00 22.01 22.02 22.03 22.04 22.05 22.06 22.07 22.08 22.09 22.10 22.11 22.12 22.13 "Predicting spike timing of neocortical pyramidal neurons by simple threshold models". Journal of Computational Neuroscience 21 (1): 35–49. August 2006. doi:10.1007/s10827-006-7074-5. PMID 16633938. http://infoscience.epfl.ch/record/97835.
- ↑ 23.00 23.01 23.02 23.03 23.04 23.05 23.06 23.07 23.08 23.09 23.10 23.11 23.12 23.13 "Temporal whitening by power-law adaptation in neocortical neurons". Nature Neuroscience 16 (7): 942–8. July 2013. doi:10.1038/nn.3431. PMID 23749146. http://infoscience.epfl.ch/record/189518.
- ↑ 24.0 24.1 24.2 24.3 "What matters in neuronal locking?". Neural Computation 8 (8): 1653–76. November 1996. doi:10.1162/neco.1996.8.8.1653. PMID 8888612. http://infoscience.epfl.ch/record/97772.
- ↑ 25.0 25.1 25.2 "Simple model of spiking neurons". IEEE Transactions on Neural Networks 14 (6): 1569–72. November 2003. doi:10.1109/TNN.2003.820440. PMID 18244602.
- ↑ 26.0 26.1 26.2 26.3 26.4 26.5 "Firing patterns in the adaptive exponential integrate-and-fire model". Biological Cybernetics 99 (4–5): 335–47. November 2008. doi:10.1007/s00422-008-0264-7. PMID 19011922.
- ↑ 27.00 27.01 27.02 27.03 27.04 27.05 27.06 27.07 27.08 27.09 27.10 27.11 27.12 27.13 27.14 27.15 27.16 27.17 Neuronal dynamics : from single neurons to networks and models of cognition. Cambridge, United Kingdom. 24 July 2014. ISBN 978-1-107-06083-8. OCLC 861774542. https://www.worldcat.org/oclc/861774542.
- ↑ "From subthreshold to firing-rate resonance". Journal of Neurophysiology 89 (5): 2538–54. May 2003. doi:10.1152/jn.00955.2002. PMID 12611957.
- ↑ 29.0 29.1 "Fractional differentiation by neocortical pyramidal neurons". Nature Neuroscience 11 (11): 1335–42. November 2008. doi:10.1038/nn.2212. PMID 18931665.
- ↑ 30.0 30.1 "Neuronal spike timing adaptation described with a fractional leaky integrate-and-fire model". PLOS Computational Biology 10 (3): e1003526. March 2014. doi:10.1371/journal.pcbi.1003526. PMID 24675903. Bibcode: 2014PLSCB..10E3526T.
- ↑ 31.0 31.1 "Adaptive exponential integrate-and-fire model as an effective description of neuronal activity". Journal of Neurophysiology 94 (5): 3637–42. November 2005. doi:10.1152/jn.00686.2005. PMID 16014787. http://infoscience.epfl.ch/record/97829.
- ↑ 32.0 32.1 "How spike generation mechanisms determine the neuronal response to fluctuating inputs". The Journal of Neuroscience 23 (37): 11628–40. December 2003. doi:10.1523/JNEUROSCI.23-37-11628.2003. PMID 14684865.
- ↑ 33.0 33.1 33.2 33.3 Cite error: invalid <ref> tag; no text was provided for ref named ":11".
- ↑ "How connectivity, background activity, and synaptic properties shape the cross-correlation between spike trains". The Journal of Neuroscience 29 (33): 10234–53. August 2009. doi:10.1523/JNEUROSCI.1275-09.2009. PMID 19692598.
- ↑ "Conductance-Based Adaptive Exponential Integrate-and-Fire Model". Neural Computation 33 (1): 41–66. January 2021. doi:10.1162/neco_a_01342. PMID 33253029.
- ↑ "Spike initiation by transmembrane current: a white-noise analysis". The Journal of Physiology 260 (2): 279–314. September 1976. doi:10.1113/jphysiol.1976.sp011516. PMID 978519.
- ↑ "Reliability of spike timing in neocortical neurons". Science 268 (5216): 1503–6. June 1995. doi:10.1126/science.7770778. PMID 7770778. Bibcode: 1995Sci...268.1503M.
- ↑ "Detecting and estimating signals in noisy cable structure, I: neuronal noise sources". Neural Computation 11 (8): 1797–829. November 1999. doi:10.1162/089976699300015972. PMID 10578033. https://authors.library.caltech.edu/28336/. Retrieved 2021-04-04.
- ↑ 39.0 39.1 39.2 39.3 "A Theoretical Analysis of Neuronal Variability". Biophysical Journal 5 (2): 173–94. March 1965. doi:10.1016/s0006-3495(65)86709-1. PMID 14268952. Bibcode: 1965BpJ.....5..173S.
- ↑ 40.0 40.1 40.2 40.3 40.4 40.5 40.6 40.7 40.8 "Associative memory in a network of 'spiking' neurons". Network: Computation in Neural Systems 3 (2): 139–164. January 1992. doi:10.1088/0954-898X_3_2_004. ISSN 0954-898X. http://infoscience.epfl.ch/record/97753.
- ↑ 41.0 41.1 "Estimation of the input parameters in the Ornstein–Uhlenbeck neuronal model". Physical Review E 71 (1 Pt 1): 011907. January 2005. doi:10.1103/PhysRevE.71.011907. PMID 15697630. Bibcode: 2005PhRvE..71a1907D.
- ↑ "Firing-rate response of linear and nonlinear integrate-and-fire neurons to modulated current-based and conductance-based synaptic drive". Physical Review E 76 (2 Pt 1): 021919. August 2007. doi:10.1103/PhysRevE.76.021919. PMID 17930077. Bibcode: 2007PhRvE..76b1919R.
- ↑ "Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons". Journal of Computational Neuroscience 8 (3): 183–208. 2000-05-01. doi:10.1023/A:1008925309027. PMID 10809012. https://doi.org/10.1023/A:1008925309027.
- ↑ "Diffusion models for the stochastic activity of neurons". Neural Networks. Springer. 1968. pp. 116–144. ISBN 9783642875960. https://books.google.com/books?id=EsWqCAAAQBAJ&pg=PA116.
- ↑ "Associative memory in a network of 'spiking' neurons". Network: Computation in Neural Systems 3 (2): 139–164. 1992-01-01. doi:10.1088/0954-898X_3_2_004. ISSN 0954-898X. https://doi.org/10.1088/0954-898X_3_2_004.
- ↑ "Time structure of the activity in neural network models". Physical Review E 51 (1): 738–758. January 1995. doi:10.1103/PhysRevE.51.738. PMID 9962697. Bibcode: 1995PhRvE..51..738G. https://infoscience.epfl.ch/record/97769/files/Gerstner95PRE.pdf.
- ↑ "A point process framework for relating neural spiking activity to spiking history, neural ensemble, and extrinsic covariate effects". Journal of Neurophysiology 93 (2): 1074–89. February 2005. doi:10.1152/jn.00697.2004. PMID 15356183.
- ↑ 48.0 48.1 48.2 48.3 "Spatio-temporal correlations and visual signalling in a complete neuronal population". Nature 454 (7207): 995–9. August 2008. doi:10.1038/nature07140. PMID 18650810. Bibcode: 2008Natur.454..995P.
- ↑ "A model of the peripheral auditory system". Kybernetik 3 (4): 153–75. November 1966. doi:10.1007/BF00290252. PMID 5982096.
- ↑ 50.0 50.1 50.2 50.3 50.4 50.5 50.6 50.7 50.8 "Population dynamics of spiking neurons: fast transients, asynchronous states, and locking". Neural Computation 12 (1): 43–89. January 2000. doi:10.1162/089976600300015899. PMID 10636933. https://infoscience.epfl.ch/record/97797/files/Gerstner00.pdf.
- ↑ "Coding and decoding with adapting neurons: a population approach to the peri-stimulus time histogram". PLOS Computational Biology 8 (10): e1002711. 2012-10-04. doi:10.1371/journal.pcbi.1002711. PMID 23055914. Bibcode: 2012PLSCB...8E2711N.
- ↑ "Why spikes? Hebbian learning and retrieval of time-resolved excitation patterns". Biological Cybernetics 69 (5–6): 503–15. October 1993. doi:10.1007/BF00199450. PMID 7903867.
- ↑ 53.0 53.1 "Maximum likelihood estimation of cascade point-process neural encoding models". Network: Computation in Neural Systems 15 (4): 243–62. November 2004. doi:10.1088/0954-898X_15_4_002. PMID 15600233.
- ↑ 54.0 54.1 54.2 54.3 "Reduction of the Hodgkin-Huxley Equations to a Single-Variable Threshold Model" (in en). Neural Computation 9 (5): 1015–1045. 1997-07-01. doi:10.1162/neco.1997.9.5.1015. ISSN 0899-7667. http://infoscience.epfl.ch/record/97776.
- ↑ 55.0 55.1 "Time structure of the activity in neural network models". Physical Review E 51 (1): 738–758. January 1995. doi:10.1103/PhysRevE.51.738. PMID 9962697. Bibcode: 1995PhRvE..51..738G. http://infoscience.epfl.ch/record/97769.
- ↑ "Infinite Systems of Interacting Chains with Memory of Variable Length — A Stochastic Model for Biological Neural Nets". Journal of Statistical Physics 151 (5): 896–921. 2013. doi:10.1007/s10955-013-0733-9. Bibcode: 2013JSP...151..896G.
- ↑ "Impulses and Physiological States in Theoretical Models of Nerve Membrane". Biophysical Journal 1 (6): 445–66. July 1961. doi:10.1016/S0006-3495(61)86902-6. PMID 19431309. Bibcode: 1961BpJ.....1..445F.
- ↑ "FitzHugh-Nagumo model". Scholarpedia 1 (9): 1349. 2006. doi:10.4249/scholarpedia.1349. Bibcode: 2006SchpJ...1.1349I.
- ↑ 59.0 59.1 59.2 59.3 Methods in neuronal modeling: from ions to networks. (02 ed.). [Place of publication not identified]: Mit Press. 2003. ISBN 0-262-51713-2. OCLC 947133821. https://www.worldcat.org/oclc/947133821.
- ↑ "Chapter 7: Analysis of Neural Excitability and Oscillations". Methods in Neuronal Modeling. MIT Press. August 1998. pp. 251. ISBN 978-0262517133.
- ↑ 61.0 61.1 "The development of the hindmarsh-rose model for bursting". Bursting. WORLD SCIENTIFIC. 2005-10-01. pp. 3–18. doi:10.1142/9789812703231_0001. ISBN 978-981-256-506-8.
- ↑ 62.0 62.1 "Parabolic Bursting in an Excitable System Coupled with a Slow Oscillation". SIAM Journal on Applied Mathematics 46 (2): 233–253. 1986. doi:10.1137/0146017. ISSN 0036-1399.
- ↑ 63.0 63.1 "Type I membranes, phase resetting curves, and synchrony". Neural Computation 8 (5): 979–1001. July 1996. doi:10.1162/neco.1996.8.5.979. PMID 8697231.
- ↑ 64.0 64.1 "Frequency discrimination in the auditory system: Place or periodicity mechanisms?". Proceedings of the IEEE 58 (5): 723–730. 1970-05-01. doi:10.1109/PROC.1970.7727. ISSN 0018-9219.
- ↑ 65.0 65.1 "Some implications of the stochastic behavior of primary auditory neurons". Kybernetik 2 (5): 206–15. June 1965. doi:10.1007/BF00306416. PMID 5839007.
- ↑ 66.0 66.1 66.2 66.3 66.4 66.5 66.6 66.7 66.8 66.9 "Modeling neuron firing pattern using a two-state Markov chain". 2010 IEEE Sensor Array and Multichannel Signal Processing Workshop. 2010. doi:10.1109/SAM.2010.5606761. ISBN 978-1-4244-8978-7.
- ↑ 67.0 67.1 67.2 67.3 67.4 67.5 67.6 "Optimal sequential detection of stimuli from multiunit recordings taken in densely populated brain regions". Neural Computation 24 (4): 895–938. April 2012. doi:10.1162/NECO_a_00257. PMID 22168560.
- ↑ 68.00 68.01 68.02 68.03 68.04 68.05 68.06 68.07 68.08 68.09 68.10 68.11 68.12 68.13 "Detection of stimuli from multi-neuron activity: Empirical study and theoretical implications.". Neurocomputing 174: 822–837. 2016. doi:10.1016/j.neucom.2015.10.007.
- ↑ "Refractoriness and neural precision". The Journal of Neuroscience 18 (6): 2200–11. March 1998. doi:10.1523/JNEUROSCI.18-06-02200.1998. PMID 9482804.
- ↑ 70.0 70.1 "A spike-train probability model". Neural Computation 13 (8): 1713–20. August 2001. doi:10.1162/08997660152469314. PMID 11506667.
- ↑ 71.0 71.1 "Stimulus and recovery dependence of cat cochlear nerve fiber spike discharge probability". Journal of Neurophysiology 48 (3): 856–73. September 1982. doi:10.1152/jn.1982.48.3.856. PMID 6290620.
- ↑ "A statistical study of cochlear nerve discharge patterns in response to complex speech stimuli". The Journal of the Acoustical Society of America 92 (1): 202–9. July 1992. doi:10.1121/1.404284. PMID 1324958. Bibcode: 1992ASAJ...92..202M.
- ↑ 73.0 73.1 "The transmission of signals by auditory-nerve fiber discharge patterns". The Journal of the Acoustical Society of America 74 (2): 493–501. August 1983. doi:10.1121/1.389815. PMID 6311884. Bibcode: 1983ASAJ...74..493J.
- ↑ "A simple white noise analysis of neuronal light responses". Network: Computation in Neural Systems 12 (2): 199–213. May 2001. doi:10.1080/713663221. PMID 11405422.
- ↑ 75.0 75.1 75.2 75.3 75.4 Nossenson N (2013). Model Based Detection of a Stimulus Presence from Neurophysiological Signals (PDF) (Ph.D. thesis). The Neiman Library of Exact Sciences & Engineering, Tel Aviv University: University of Tel-Aviv. Archived from the original (PDF) on 2017-03-05. Retrieved 2016-04-12.
- ↑ "Somatosensory inputs modify auditory spike timing in dorsal cochlear nucleus principal cells". The European Journal of Neuroscience 33 (3): 409–20. February 2011. doi:10.1111/j.1460-9568.2010.07547.x. PMID 21198989.
- ↑ "Stimulus-specific adaptations in the gaze control system of the barn owl". The Journal of Neuroscience 28 (6): 1523–33. February 2008. doi:10.1523/JNEUROSCI.3785-07.2008. PMID 18256273.
- ↑ "Sustained firing in auditory cortex evoked by preferred stimuli". Nature 435 (7040): 341–6. May 2005. doi:10.1038/nature03565. PMID 15902257. Bibcode: 2005Natur.435..341W.
- ↑ "Response properties of single auditory nerve fibers in the mouse". Journal of Neurophysiology 93 (1): 557–69. January 2005. doi:10.1152/jn.00574.2004. PMID 15456804.
- ↑ "Processing of learned information in paradoxical sleep: relevance for memory". Behavioural Brain Research. The Function of Sleep 69 (1–2): 125–35. 1995-07-01. doi:10.1016/0166-4328(95)00013-J. PMID 7546303.
- ↑ "Quantitative analysis of cat retinal ganglion cell response to visual stimuli". Vision Research 5 (11): 583–601. December 1965. doi:10.1016/0042-6989(65)90033-7. PMID 5862581.
- ↑ 82.0 82.1 "The control of retinal ganglion cell discharge by receptive field surrounds". The Journal of Physiology 247 (3): 551–78. June 1975. doi:10.1113/jphysiol.1975.sp010947. PMID 1142301.
- ↑ "Adaptation and dynamics of cat retinal ganglion cells". The Journal of Physiology 233 (2): 271–309. September 1973. doi:10.1113/jphysiol.1973.sp010308. PMID 4747229.
- ↑ "Stimulus size and intensity alter fundamental receptive-field properties of mouse retinal ganglion cells in vivo". Visual Neuroscience 22 (5): 649–59. 2005-09-01. doi:10.1017/S0952523805225142. PMID 16332276.
- ↑ "Biophysical mechanisms underlying olfactory receptor neuron dynamics". Nature Neuroscience 14 (2): 208–16. February 2011. doi:10.1038/nn.2725. PMID 21217763.
- ↑ "Response of anterior parietal cortex to cutaneous flutter versus vibration". Journal of Neurophysiology 82 (1): 16–33. July 1999. doi:10.1152/jn.1999.82.1.16. PMID 10400931.
- ↑ "Effects of the activity of the internal globus pallidus-pedunculopontine loop on the transmission of the subthalamic nucleus-external globus pallidus-pacemaker oscillatory activities to the cortex". Journal of Computational Neuroscience 16 (2): 113–27. 2004-03-01. doi:10.1023/B:JCNS.0000014105.87625.5f. PMID 14758061.
- ↑ "Glutamate evokes firing through activation of kainate receptors in chick accessory lobe neurons". Journal of Comparative Physiology A: Neuroethology, Sensory, Neural & Behavioral Physiology 199 (1): 35–43. January 2013. doi:10.1007/s00359-012-0766-6. PMID 23064516.
- ↑ "Rate-versus-level functions of primary auditory nerve fibres: evidence for square law behaviour of all fibre categories in the guinea pig". Hearing Research 55 (1): 50–6. September 1991. doi:10.1016/0378-5955(91)90091-M. PMID 1752794.
- ↑ "Analysis of discharges recorded simultaneously from pairs of auditory nerve fibers". Biophysical Journal 16 (7): 719–34. July 1976. doi:10.1016/s0006-3495(76)85724-4. PMID 938715. Bibcode: 1976BpJ....16..719J.
- ↑ "Comparative physiology of acoustic and allied central analyzers". Acta Oto-Laryngologica. Supplementum 532 (sup532): 13–21. 1997-01-01. doi:10.3109/00016489709126139. PMID 9442839.
- ↑ "Progressive changes in auditory response patterns to repeated tone during normal wakefulness and paralysis". Brain Research 16 (1): 133–48. November 1969. doi:10.1016/0006-8993(69)90090-0. PMID 5348845.
- ↑ "Update on retinal prosthetic research: the Boston Retinal Implant Project". Journal of Neuro-Ophthalmology 31 (2): 160–8. June 2011. doi:10.1097/wno.0b013e31821eb79e. PMID 21593628.
- ↑ "The Artificial Synapse Chip: a flexible retinal interface based on directed retinal cell growth and neurotransmitter stimulation". Artificial Organs 27 (11): 975–85. November 2003. doi:10.1046/j.1525-1594.2003.07307.x. PMID 14616516.
- ↑ "Microfluidic neurotransmiter-based neural interfaces for retinal prosthesis". 2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society. Minneapolis: IEEE Engineering in Medicine and Biology Society. 6–9 September 2009. pp. 4563–5. doi:10.1109/IEMBS.2009.5332694. ISBN 978-1-4244-3296-7.
- ↑ "Multichannel Intraneural and Intramuscular Techniques for Multiunit Recording and Use in Active Prostheses". Proceedings of the IEEE 98 (3): 432–449. 2010-03-01. doi:10.1109/JPROC.2009.2038613. ISSN 0018-9219.
- ↑ "Real-time control of hind limb functional electrical stimulation using feedback from dorsal root ganglia recordings". Journal of Neural Engineering 10 (2): 026020. April 2013. doi:10.1088/1741-2560/10/2/026020. PMID 23503062. Bibcode: 2013JNEng..10b6020B.
- ↑ "BrainGate - Home". 2015-12-04. http://braingate2.org/.
- ↑ 99.0 99.1 "Models of neocortical layer 5b pyramidal cells capturing a wide range of dendritic and perisomatic active properties". PLOS Computational Biology 7 (7): e1002107. July 2011. doi:10.1371/journal.pcbi.1002107. PMID 21829333. Bibcode: 2011PLSCB...7E2107H.
- ↑ "Reconstruction and Simulation of Neocortical Microcircuitry". Cell 163 (2): 456–92. October 2015. doi:10.1016/j.cell.2015.09.029. PMID 26451489.
- ↑ "Simulation of alcohol action upon a detailed Purkinje neuron model and a simpler surrogate model that runs >400 times faster". BMC Neuroscience 16 (27): 27. April 2015. doi:10.1186/s12868-015-0162-6. PMID 25928094.
- ↑ "New Types of Experiments Reveal that a Neuron Functions as Multiple Independent Threshold Units" (in En). Scientific Reports 7 (1): 18036. December 2017. doi:10.1038/s41598-017-18363-1. PMID 29269849. Bibcode: 2017NatSR...718036S.
Original source: https://en.wikipedia.org/wiki/Biological_neuron_model.