Phase reduction

Phase reduction is a method used to reduce a multi-dimensional dynamical system describing a nonlinear limit cycle oscillator to a one-dimensional phase equation.[1][2] Many rhythmic phenomena, such as chemical reactions, electric circuits, mechanical vibrations, cardiac cells, and spiking neurons, can be modelled as nonlinear limit cycle oscillators.[2]

History

The phase reduction method was first introduced in the 1950s, when the existence of periodic solutions of nonlinear oscillators under perturbation was discussed by Malkin.[3] In the 1960s, Winfree illustrated the importance of the notion of phase and formulated the phase model for a population of nonlinear oscillators in his studies on biological synchronization.[4] Since then, many researchers have analyzed a variety of rhythmic phenomena using phase reduction theory.

Phase model of reduction

Consider the dynamical system of the form

[math]\displaystyle{ \frac{dx}{dt}=f(x), }[/math]

where [math]\displaystyle{ x\in \mathbb{R}^N }[/math] is the oscillator state variable and [math]\displaystyle{ f(x) }[/math] is the baseline vector field. Let [math]\displaystyle{ \varphi:\mathbb{R}^N\times \mathbb{R} \rightarrow \mathbb{R}^N }[/math] be the flow induced by the system, that is, [math]\displaystyle{ \varphi(x_0,t) }[/math] is the solution of the system for the initial condition [math]\displaystyle{ x(0)=x_0 }[/math]. This system of differential equations can describe, for example, a conductance-based neuron model with [math]\displaystyle{ x=(V,n)\in \mathbb{R}^N }[/math], where [math]\displaystyle{ V }[/math] represents the voltage difference across the membrane and [math]\displaystyle{ n }[/math] is the [math]\displaystyle{ (N-1) }[/math]-dimensional vector of gating variables.[5] When a neuron is perturbed by a stimulus current, the dynamics of the perturbed system will no longer be the same as those of the baseline neural oscillator.

Figure 1. Isochrons and a stable limit cycle of the planar system [math]\displaystyle{ \dot{x}=x - y - x(x^2+y^2); \dot{y}= x + y - y(x^2+y^2) }[/math]. The system has a unique stable limit cycle (solid circle). Only the isochrons corresponding to the phases [math]\displaystyle{ nT/5, n=1, 2, 3, 4, 5 }[/math], where [math]\displaystyle{ T=2\pi }[/math] is the period of the orbit, are shown (dotted lines). Neighbouring trajectories (blue dotted curves) with different initial conditions are attracted to the cycle (except the origin).
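
As a minimal numerical sketch (not part of the original article; it assumes NumPy and SciPy are available), the planar system in Figure 1 can be integrated to confirm that a trajectory started off the cycle relaxes onto the unit circle:

```python
# Sketch: integrate the Figure 1 oscillator and check convergence to its limit cycle.
import numpy as np
from scipy.integrate import solve_ivp

def f(t, state):
    x, y = state
    r2 = x**2 + y**2
    return [x - y - x * r2, x + y - y * r2]

# Start well inside the cycle and integrate long enough to relax onto it.
sol = solve_ivp(f, (0.0, 50.0), [0.1, 0.0], rtol=1e-9, atol=1e-12)
x_end, y_end = sol.y[:, -1]
print("final radius:", np.hypot(x_end, y_end))  # ~1.0, i.e. on the unit-circle limit cycle
```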

The goal here is to reduce the system by defining a phase for each point in some neighbourhood of the limit cycle. A sufficiently small perturbation (e.g. external forcing or a stimulus applied to the system) may cause a lasting, and eventually large, deviation of the phase, whereas the amplitude is only slightly perturbed because trajectories are attracted back to the limit cycle.[6] Hence the definition of the phase is extended to points in the neighbourhood of the cycle by introducing the notion of asymptotic phase (or latent phase),[7] which assigns a phase to each point in the basin of attraction of the periodic orbit. The set of points in the basin of attraction of [math]\displaystyle{ \gamma }[/math] that share the same asymptotic phase [math]\displaystyle{ \Phi(x) }[/math] is called an isochron (see Figure 1); isochrons were first introduced by Winfree.[8] Isochrons can be shown to exist for such a stable hyperbolic limit cycle [math]\displaystyle{ \gamma }[/math].[9] For every point [math]\displaystyle{ x }[/math] in some neighbourhood of the cycle, the evolution of the phase [math]\displaystyle{ \varphi=\Phi(x) }[/math] is given by the relation [math]\displaystyle{ \frac{d\varphi}{dt}=\omega }[/math], where [math]\displaystyle{ \omega=\frac{2\pi}{T_0} }[/math] is the natural frequency of the oscillation.[5][10] By the chain rule, the equation governing the evolution of the phase of the neuron model is the phase model:

[math]\displaystyle{ \frac{d\varphi}{dt}=\nabla\Phi(x)\cdot f(x)=\omega, }[/math]

where [math]\displaystyle{ \nabla\Phi(x) }[/math] is the gradient of the phase function [math]\displaystyle{ \Phi(x) }[/math] with respect to the neuron's state vector [math]\displaystyle{ x }[/math] (for the derivation of this result, see[2][5][10]). This means that the [math]\displaystyle{ N }[/math]-dimensional system describing the oscillating neuron dynamics is reduced to a simple one-dimensional phase equation. Note that the full state of the oscillator [math]\displaystyle{ x }[/math] cannot be recovered from the phase [math]\displaystyle{ \Phi }[/math] alone, because [math]\displaystyle{ \Phi(x) }[/math] is not a one-to-one mapping.[2]
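
As an illustrative check (this concrete computation is not taken from the cited sources, but follows directly from the definitions above), consider the planar system of Figure 1. In polar coordinates it reads [math]\displaystyle{ \dot{r}=r(1-r^2),\ \dot{\theta}=1 }[/math], so the angular velocity is independent of the radius and the asymptotic phase is simply the polar angle, [math]\displaystyle{ \Phi(x,y)=\arctan(y/x) }[/math], with gradient [math]\displaystyle{ \nabla\Phi=\left(\tfrac{-y}{x^2+y^2},\tfrac{x}{x^2+y^2}\right) }[/math]. A direct computation then gives

[math]\displaystyle{ \nabla\Phi(x)\cdot f(x)=\frac{-y\,(x-y-x(x^2+y^2))+x\,(x+y-y(x^2+y^2))}{x^2+y^2}=\frac{x^2+y^2}{x^2+y^2}=1=\omega, }[/math]

in agreement with the phase model, since the period of the cycle is [math]\displaystyle{ T_0=2\pi }[/math].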

Phase model with external forcing

Consider now a weakly perturbed system of the form

[math]\displaystyle{ \frac{dx(t)}{dt}=f(x)+\varepsilon g(t), }[/math]

where [math]\displaystyle{ f(x) }[/math] is the baseline vector field and [math]\displaystyle{ \varepsilon g(t) }[/math] is a weak periodic external forcing (or stimulus effect) with period [math]\displaystyle{ T }[/math], which in general can be different from [math]\displaystyle{ T_0 }[/math], and frequency [math]\displaystyle{ \Omega=2\pi/T }[/math]; in general the forcing may also depend on the oscillator state [math]\displaystyle{ x }[/math]. Assume that the baseline neural oscillator (that is, the system with [math]\displaystyle{ \varepsilon=0 }[/math]) has an exponentially stable, normally hyperbolic limit cycle [math]\displaystyle{ \gamma }[/math] with period [math]\displaystyle{ T_0 }[/math] (for an example, see Figure 1).[11] It can then be shown that [math]\displaystyle{ \gamma }[/math] persists under small perturbations,[12] which implies that for a small perturbation the perturbed system remains close to the limit cycle. Hence such a limit cycle is assumed to exist for each neuron.

The evolution of the phase of the perturbed system, expressed in terms of the isochrons, is[13]

[math]\displaystyle{ \frac{d\varphi}{dt}=\omega +\varepsilon \, \nabla\Phi(x)\cdot g(t), }[/math]

where [math]\displaystyle{ \nabla\Phi(x) }[/math] is the gradient of the phase [math]\displaystyle{ \Phi(x) }[/math] with respect to the neuron's state vector [math]\displaystyle{ x }[/math], and [math]\displaystyle{ g(t) }[/math] is the stimulus effect driving the firing of the neuron as a function of time [math]\displaystyle{ t }[/math]. This equation is not yet a closed one-dimensional model: its right-hand side still depends on the full state [math]\displaystyle{ x }[/math], and the phase function [math]\displaystyle{ \Phi }[/math] itself is obtained by solving a partial differential equation (PDE).

For sufficiently small [math]\displaystyle{ \varepsilon\gt 0 }[/math], a reduced phase model can be obtained by evaluating the gradient on the limit cycle [math]\displaystyle{ \gamma }[/math] of the unperturbed system; up to first order in [math]\displaystyle{ \varepsilon }[/math], it is given by

[math]\displaystyle{ \frac{d\varphi}{dt}=\omega + \varepsilon \, Z(\varphi) \cdot g(t), }[/math]

where the function [math]\displaystyle{ Z(\varphi):=\left.\nabla\Phi(x)\right|_{x=\gamma(\varphi)} }[/math], with [math]\displaystyle{ \gamma(\varphi) }[/math] the point of the limit cycle [math]\displaystyle{ \gamma }[/math] at phase [math]\displaystyle{ \varphi }[/math], measures the normalized phase shift caused by a small perturbation delivered at that point; it is called the phase sensitivity function or infinitesimal phase response curve.[8][13]
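
As a hedged numerical sketch (not from the cited sources; it assumes SciPy and a simple forcing [math]\displaystyle{ g(t)=(\cos\Omega t,0) }[/math] applied to the Figure 1 oscillator, whose phase sensitivity function evaluates to [math]\displaystyle{ Z(\varphi)=(-\sin\varphi,\cos\varphi) }[/math] by the worked example above), the reduced phase model can be compared against the fully integrated forced system:

```python
# Sketch: compare the first-order reduced phase model with the full forced oscillator.
import numpy as np
from scipy.integrate import solve_ivp

eps, Omega, omega = 0.05, 1.05, 1.0   # weak forcing, slightly detuned frequency

def full(t, state):
    x, y = state
    r2 = x**2 + y**2
    return [x - y - x * r2 + eps * np.cos(Omega * t),   # forcing on the x-component
            x + y - y * r2]

def reduced(t, phi):
    # d(phi)/dt = omega + eps * Z(phi) . g(t), with Z(phi) = (-sin phi, cos phi)
    return [omega + eps * (-np.sin(phi[0])) * np.cos(Omega * t)]

t_span, t_eval = (0.0, 50.0), np.linspace(0.0, 50.0, 1001)
sol_full = solve_ivp(full, t_span, [1.0, 0.0], t_eval=t_eval, rtol=1e-9)
sol_red = solve_ivp(reduced, t_span, [0.0], t_eval=t_eval, rtol=1e-9)

# For this system the asymptotic phase is the polar angle, so it can be read off directly.
phase_full = np.unwrap(np.arctan2(sol_full.y[1], sol_full.y[0]))
print("max phase discrepancy:", np.max(np.abs(phase_full - sol_red.y[0])))
# Expected to be small (of order eps) compared with the ~50 radians of phase accrued.
```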

In order to analyze the reduced phase equation for the perturbed nonlinear system, it is convenient to simplify it further into an autonomous phase equation, which can be analyzed more easily.[13] Assuming that the frequency mismatch is sufficiently small, so that [math]\displaystyle{ \omega-\Omega=\varepsilon\delta }[/math], where [math]\displaystyle{ \delta }[/math] is [math]\displaystyle{ O(1) }[/math], we can introduce a new phase function [math]\displaystyle{ \psi(t)=\varphi(t)-\Omega t }[/math].[13]
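
Substituting [math]\displaystyle{ \varphi(t)=\psi(t)+\Omega t }[/math] into the reduced phase model gives (an intermediate step made explicit here for clarity)

[math]\displaystyle{ \frac{d\psi}{dt}=\omega-\Omega+\varepsilon \, Z(\psi+\Omega t)\cdot g(t)=\varepsilon\left[\delta+Z(\psi+\Omega t)\cdot g(t)\right], }[/math]

so [math]\displaystyle{ \psi }[/math] is a slow variable whose rate of change is of order [math]\displaystyle{ \varepsilon }[/math]; this is what justifies averaging the right-hand side over one period of the forcing.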

By the method of averaging,[14] assuming that [math]\displaystyle{ \psi(t) }[/math] is approximately constant over one forcing period [math]\displaystyle{ T }[/math], we obtain the approximated phase equation

[math]\displaystyle{ \frac{d\psi(t)}{dt}=\Delta_\varepsilon + \varepsilon\Gamma(\psi), }[/math]

where [math]\displaystyle{ \Delta_\varepsilon=\varepsilon\delta }[/math], and [math]\displaystyle{ \Gamma(\psi) }[/math] is a [math]\displaystyle{ 2\pi }[/math]-periodic function representing the effect of the periodic external forcing on the oscillator phase,[13] defined by

[math]\displaystyle{ \Gamma(\psi)= \frac 1 {2\pi} \int_0^{2\pi}Z(\psi+\eta)\cdot g\left(\frac\eta\Omega\right) \, d\eta . }[/math]

The graph of the function [math]\displaystyle{ \Gamma(\psi) }[/math], together with the detuning [math]\displaystyle{ \Delta_\varepsilon }[/math], determines the dynamics of the approximated phase model; in particular, stable fixed points of the averaged equation correspond to phase locking of the oscillator to the forcing. For more illustrations, see.[2]
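
As a concrete illustration (the forcing here is chosen for simplicity and is not taken from the cited sources), consider again the planar oscillator of Figure 1, for which [math]\displaystyle{ Z(\varphi)=(-\sin\varphi,\cos\varphi) }[/math], driven by [math]\displaystyle{ g(t)=(\cos\Omega t,0) }[/math]. The integral above evaluates to

[math]\displaystyle{ \Gamma(\psi)=\frac 1 {2\pi}\int_0^{2\pi}\left(-\sin(\psi+\eta)\right)\cos\eta \, d\eta=-\frac{1}{2}\sin\psi, }[/math]

so the averaged equation becomes an Adler-type equation [math]\displaystyle{ \frac{d\psi}{dt}=\Delta_\varepsilon-\frac{\varepsilon}{2}\sin\psi }[/math], which has a stable fixed point, i.e. the oscillator phase-locks to the forcing, whenever [math]\displaystyle{ |\Delta_\varepsilon|\lt \varepsilon/2 }[/math].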

Examples of phase reduction

For a sufficiently small perturbation of a given nonlinear oscillator, or of a network of coupled oscillators, the corresponding phase sensitivity function (infinitesimal PRC) [math]\displaystyle{ Z(\varphi) }[/math] can be computed: analytically for simple models, as in the planar example above, or numerically, for instance by the direct method of applying small perturbations on the limit cycle and measuring the resulting asymptotic phase shifts.
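
A minimal sketch of such a direct-method computation (not from the cited sources; it assumes SciPy, perturbs only the [math]\displaystyle{ x }[/math]-direction, and exploits the fact that for the Figure 1 oscillator the asymptotic phase is the polar angle, so the exact answer is [math]\displaystyle{ Z_x(\varphi)=-\sin\varphi }[/math]):

```python
# Sketch: estimate the x-component of the infinitesimal PRC of the Figure 1 oscillator
# by the direct method (small kick on the cycle, then measure the asymptotic phase shift).
import numpy as np
from scipy.integrate import solve_ivp

T0 = 2 * np.pi        # period of the unperturbed limit cycle
dp = 1e-4             # small perturbation amplitude

def f(t, s):
    x, y = s
    r2 = x**2 + y**2
    return [x - y - x * r2, x + y - y * r2]

def asymptotic_phase(state, t_relax=20 * T0):
    """Relax the trajectory onto the cycle, then read off its phase."""
    sol = solve_ivp(f, (0.0, t_relax), state, rtol=1e-10, atol=1e-12)
    x, y = sol.y[:, -1]
    # For this particular system the phase is the polar angle, so the initial
    # asymptotic phase is recovered by subtracting the elapsed time (mod 2*pi).
    return (np.arctan2(y, x) - t_relax) % (2 * np.pi)

for phi in np.linspace(0.0, 2 * np.pi, 8, endpoint=False):
    base = [np.cos(phi), np.sin(phi)]        # point on the cycle at phase phi
    kicked = [base[0] + dp, base[1]]         # same point, nudged in the x-direction
    shift = asymptotic_phase(kicked) - asymptotic_phase(base)
    shift = (shift + np.pi) % (2 * np.pi) - np.pi   # wrap into (-pi, pi]
    print(f"phi = {phi:.2f}   Z_x ~ {shift / dp: .3f}   exact {-np.sin(phi): .3f}")
```

For a general oscillator the phase of the relaxed trajectory would instead be read off from, e.g., the crossing times of a Poincaré section, since a closed-form asymptotic phase is rarely available.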

References

  1. Tang, Zhicheng; Geng, Dongsheng; Lu, Gongxuan (2005-05-01). "A simple solution-phase reduction method for the synthesis of shape-controlled platinum nanoparticles" (in en). Materials Letters 59 (12): 1567–1570. doi:10.1016/j.matlet.2005.01.024. 
  2. 2.0 2.1 2.2 2.3 2.4 H.Nakao (2017). "Phase reduction approach to synchronization of nonlinear oscillators". Contemporary Physics 57 (2): 188–214. doi:10.1080/00107514.2015.1094987. 
  3. F.C.Hoppensteadt and E.M.Izhikevich (1997). Weakly connected neural networks. Applied Mathematical Sciences. 126. Springer-Verlag, New York. doi:10.1007/978-1-4612-1828-9. ISBN 978-1-4612-7302-8.
  4. A.T.Winfree (2001). The Geometry of Biological Time. Springer, New York.
  5. 5.0 5.1 5.2 E.Brown, J.Moehlis, P.Holmes (2004). "On the Phase Reduction and Response Dynamics of Neural Oscillator Populations". Neural Computation 16 (4): 673–715. doi:10.1162/089976604322860668. PMID 15025826. 
  6. M.Rosenblum and A.Pikovsky (2003). "Synchronization: from pendulum clocks to chaotic lasers and chemical oscillators". Contemporary Physics 44 (5): 401–416. doi:10.1080/00107510310001603129. Bibcode2003ConPh..44..401R. 
  7. A.T.Winfree (2001). The Geometry of Biological Time. Interdisciplinary Applied Mathematics. 12. Springer. doi:10.1007/978-1-4757-3484-3. ISBN 978-1-4757-3484-3. https://www.springer.com/gp/book/9780387989921.
  8. 8.0 8.1 A.T.Winfree (1967). "Biological rhythms and the behavior of populations of coupled oscillators". Journal of Theoretical Biology 16 (1): 15–42. doi:10.1016/0022-5193(67)90051-3. PMID 6035757. Bibcode1967JThBi..16...15W. 
  9. J.Guckenheimer (1975). "Isochrons and phaseless sets". Journal of Mathematical Biology 1 (3): 259–273. doi:10.1007/BF01273747. PMID 28303309.
  10. 10.0 10.1 N.W.Schultheiss (2012). "The Theory of Weakly Coupled Oscillators". Phase Response Curves in Neuroscience. Springer Series in Computational Neuroscience. 6. pp. 3–31. doi:10.1007/978-1-4614-0739-3_1. ISBN 978-1-4614-0738-6. 
  11. J.Guckenheimer and P.Holmes (1983). Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields. Springer, NY. 
  12. N.Fenichel (1971). "Persistence and smoothness of invariant manifolds for flows". Indiana University Mathematics Journal 21 (3): 193–226. doi:10.1512/iumj.1972.21.21017. 
  13. 13.0 13.1 13.2 13.3 13.4 Y.Kuramoto (1984). Chemical oscillations, waves, and turbulence. Springer Series in Synergetics. 19. Springer-Verlag, Berlin. doi:10.1007/978-3-642-69689-3. ISBN 978-3-642-69691-6. 
  14. J.A.Sanders (2010). Averaging methods in nonlinear dynamical systems. Applied Mathematical Sciences. 59. Springer-Verlag, New York. doi:10.1007/978-0-387-48918-6. ISBN 978-0-387-48916-2.