BCM theory
BCM theory, BCM synaptic modification, or the BCM rule, named for Elie Bienenstock, Leon Cooper, and Paul Munro, is a physical theory of learning in the visual cortex developed in 1981. The BCM model proposes a sliding threshold for the induction of long-term potentiation (LTP) or long-term depression (LTD), and states that synaptic plasticity is stabilized by a dynamic adaptation of the time-averaged postsynaptic activity. According to the BCM model, when a presynaptic neuron fires, a postsynaptic neuron will tend to undergo LTP if it is in a high-activity state (e.g., firing at high frequency and/or with a high internal calcium concentration), or LTD if it is in a lower-activity state (e.g., firing at low frequency and/or with a low internal calcium concentration).[1] This theory is often used to explain how cortical neurons can undergo either LTP or LTD depending on the conditioning stimulus protocol applied to presynaptic neurons (usually high-frequency stimulation, or HFS, for LTP, or low-frequency stimulation, LFS, for LTD).[2]
Development
In 1949, Donald Hebb proposed a working mechanism for memory and computational adaptation in the brain, now called Hebbian learning, or the maxim that cells that fire together, wire together.[3] This notion is foundational in the modern understanding of the brain as a neural network, and though not universally true, remains a good first approximation supported by decades of evidence.[3][4]
However, Hebb's rule has problems, namely that it has no mechanism for connections to weaken and no upper bound on how strong they can become. In other words, the model is unstable, both theoretically and computationally. Later modifications gradually improved Hebb's rule, normalizing it and allowing for decay of synapses, whereby no activity or unsynchronized activity between neurons results in a loss of connection strength. New biological evidence brought this activity to a peak in the 1970s, when theorists formalized various approximations in the theory, such as the use of firing frequency instead of membrane potential in determining neuron excitation, and the assumption of ideal and, more importantly, linear synaptic integration of signals; that is, input currents sum linearly, with no unexpected behavior, in determining whether or not a cell will fire.
These approximations resulted in the basic form of BCM below in 1979; the final steps were mathematical analysis proving stability and computational analysis demonstrating applicability, culminating in Bienenstock, Cooper, and Munro's 1982 paper.
Since then, experiments have shown evidence for BCM behavior in both the visual cortex and the hippocampus, the latter of which plays an important role in the formation and storage of memories. Both of these areas are well-studied experimentally, but both theory and experiment have yet to establish conclusive synaptic behavior in other areas of the brain. It has been proposed that in the cerebellum, the parallel-fiber to Purkinje cell synapse follows an "inverse BCM rule", meaning that at the time of parallel fiber activation, a high calcium concentration in the Purkinje cell results in LTD, while a lower concentration results in LTP.[2] Furthermore, the biological implementation for synaptic plasticity in BCM has yet to be established.[5]
Theory
The basic BCM rule takes the form
- [math]\displaystyle{ \,\frac{d m_j(t)}{d t} = \phi(c(t))d_j(t)-\epsilon m_j(t), }[/math]
where:
- [math]\displaystyle{ m_j }[/math] is the synaptic weight of the [math]\displaystyle{ j }[/math]th synapse,
- [math]\displaystyle{ d_j }[/math] is the [math]\displaystyle{ j }[/math]th synapse's input current,
- [math]\displaystyle{ c(t) = \textbf{m}(t)\cdot\textbf{d}(t) = \sum_j m_j(t)d_j(t) }[/math] is the inner product of weights and input currents (weighted sum of inputs),
- [math]\displaystyle{ \phi(c) }[/math] is a non-linear function. This function must change sign at some threshold [math]\displaystyle{ \theta_M }[/math], that is, [math]\displaystyle{ \phi(c)\lt 0 }[/math] for [math]\displaystyle{ 0 \lt c \lt \theta_M }[/math] and [math]\displaystyle{ \phi(c)\gt 0 }[/math] for [math]\displaystyle{ c \gt \theta_M }[/math]. See below for details and properties.
- and [math]\displaystyle{ \epsilon }[/math] is the (often negligible) rate of uniform decay of all synapses.
This model is a modified form of the Hebbian learning rule, [math]\displaystyle{ \dot{m_j}=c d_j }[/math], and requires a suitable choice of function [math]\displaystyle{ \phi }[/math] to avoid the Hebbian problems of instability.
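For concreteness, the rule can be integrated numerically. The following is a minimal sketch (not from the original paper) of a single Euler step; the nonlinearity `phi`, decay rate `eps`, and step size `dt` are illustrative parameters left to the modeler:

```python
import numpy as np

def bcm_step(m, d, phi, eps=0.0, dt=0.01):
    """One Euler step of the basic BCM rule: dm_j/dt = phi(c) d_j - eps m_j."""
    c = np.dot(m, d)                       # weighted sum of inputs (output c)
    return m + dt * (phi(c) * d - eps * m)
```

Setting `phi` to the identity and `eps` to zero recovers the plain Hebbian rule [math]\displaystyle{ \dot{m_j}=c d_j }[/math], which makes the source of instability visible: every coactivation only ever increases the weights.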
Bienenstock et al.[6] rewrite [math]\displaystyle{ \phi(c) }[/math] as a function [math]\displaystyle{ \phi(c,\bar{c}) }[/math], where [math]\displaystyle{ \bar{c} }[/math] is the time average of [math]\displaystyle{ c }[/math]. With this modification, and discarding the uniform decay term, the rule takes the vector form:
- [math]\displaystyle{ \dot{\mathbf{m}}(t) = \phi(c(t),\bar{c}(t))\mathbf{d}(t) }[/math]
The conditions for stable learning are derived rigorously by Bienenstock et al., who note that, with [math]\displaystyle{ c(t)=\textbf{m}(t)\cdot\textbf{d}(t) }[/math] and with the approximation of the average output [math]\displaystyle{ \bar{c}(t) \approx \textbf{m}(t)\cdot\bar{\mathbf{d}} }[/math], it is sufficient that
- [math]\displaystyle{ \,\sgn\phi(c,\bar{c}) = \sgn\left(c-\left(\frac{\bar{c}}{c_0}\right)^p\bar{c}\right) ~~ \textrm{for} ~ c\gt 0, ~ \textrm{and} }[/math]
- [math]\displaystyle{ \,\phi(0,\bar{c}) = 0 ~~ \textrm{for} ~ \textrm{all} ~ \bar{c}, }[/math]
or equivalently, that the threshold [math]\displaystyle{ \theta_M(\bar{c}) = (\bar{c}/c_0)^p\bar{c} }[/math], where [math]\displaystyle{ p }[/math] and [math]\displaystyle{ c_0 }[/math] are fixed positive constants.[6]
When implemented, the theory is often taken such that
- [math]\displaystyle{ \,\phi(c,\bar{c}) = c(c-\theta_M) ~~~ \textrm{and} ~~~ \theta_M = \overline{c^2} = \frac{1}{\tau}\int_{-\infty}^t c^2(t^\prime)e^{-(t-t^\prime)/\tau}d t^\prime, }[/math]
where [math]\displaystyle{ \tau }[/math] is a time constant of selectivity.
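A minimal sketch of this common implementation follows (parameter values are illustrative, not from the source). The integral defining [math]\displaystyle{ \theta_M }[/math] satisfies [math]\displaystyle{ d\theta_M/dt = (c^2-\theta_M)/\tau }[/math], so the sliding threshold can be updated as a first-order low-pass filter of the squared output:

```python
import numpy as np

def bcm_update(m, d, theta, tau=100.0, lr=0.01, dt=1.0):
    """One update with phi(c) = c (c - theta_M) and a sliding threshold.

    theta_M tracks the exponentially weighted time average of c^2,
    discretizing d(theta_M)/dt = (c^2 - theta_M) / tau.
    """
    c = np.dot(m, d)                               # postsynaptic output
    theta = theta + (dt / tau) * (c ** 2 - theta)  # slide the LTP/LTD threshold
    m = m + lr * c * (c - theta) * d               # LTP if c > theta, LTD if c < theta
    return m, theta
```

Because the threshold rises with recent activity, sustained high output raises [math]\displaystyle{ \theta_M }[/math] until further potentiation stops; this is the stabilizing mechanism absent from the plain Hebbian rule.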
The model has drawbacks, as it requires both long-term potentiation and long-term depression, i.e., both increases and decreases in synaptic strength, something which has not been observed in all cortical systems. Further, it requires a variable activation threshold and depends strongly on the stability of the fixed points selected through the constants [math]\displaystyle{ c_0 }[/math] and [math]\displaystyle{ p }[/math]. However, the model's strength is that it incorporates all of these requirements from independently derived rules of stability, such as normalizability and a decay function with time proportional to the square of the output.[7]
Example
This example is a particular case of the one in the "Mathematical results" section of Bienenstock et al.'s work,[6] assuming [math]\displaystyle{ p=2 }[/math] and [math]\displaystyle{ c_0 = 1 }[/math]. With these values, [math]\displaystyle{ \theta_M=(\bar{c}/c_0)^p\bar{c}=\bar{c}^3 }[/math], and we choose [math]\displaystyle{ \phi(c,\bar{c}) = c (c - \theta_M) }[/math], which fulfills the stability conditions stated in the previous section.
Assume two presynaptic neurons providing inputs [math]\displaystyle{ d_1 }[/math] and [math]\displaystyle{ d_2 }[/math], with activity following a repetitive cycle: for half of the time [math]\displaystyle{ \mathbf{d}=(d_1,d_2)=(0.9,0.1) }[/math], and for the remaining time [math]\displaystyle{ \mathbf{d}=(0.2,0.7) }[/math]. The time average [math]\displaystyle{ \bar{c} }[/math] is taken to be the average of the [math]\displaystyle{ c }[/math] values from the first and second halves of a cycle.
Let the initial weights be [math]\displaystyle{ \mathbf{m}=(0.1,0.05) }[/math]. In the first half-cycle, with [math]\displaystyle{ \mathbf{d}=(0.9,0.1) }[/math] and [math]\displaystyle{ \mathbf{m}=(0.1,0.05) }[/math], the weighted sum [math]\displaystyle{ c }[/math] equals 0.095, and we use this same value as the initial average [math]\displaystyle{ \bar{c} }[/math]. This gives [math]\displaystyle{ \theta_M=0.001 }[/math], [math]\displaystyle{ \phi=0.009 }[/math], and [math]\displaystyle{ \dot{m}=(0.008,0.001) }[/math]. Adding 10% of the derivative to the weights, we obtain the new weights [math]\displaystyle{ \mathbf{m}=(0.101,0.050) }[/math].
In the second half-cycle, the inputs are [math]\displaystyle{ \mathbf{d}=(0.2,0.7) }[/math] and the weights [math]\displaystyle{ \mathbf{m}=(0.101,0.050) }[/math]. This gives [math]\displaystyle{ c=0.055 }[/math], an average [math]\displaystyle{ \bar{c} }[/math] over the full cycle of 0.075, [math]\displaystyle{ \theta_M=0.000 }[/math], [math]\displaystyle{ \phi=0.003 }[/math], and [math]\displaystyle{ \dot{m}=(0.001,0.002) }[/math]. Adding 10% of the derivative again yields [math]\displaystyle{ \mathbf{m}=(0.101,0.050) }[/math]; to three decimal places, the weights change only slightly within a single cycle.
Repeating this cycle for several hundred iterations, stability is reached with [math]\displaystyle{ \mathbf{m}=(3.246,-0.927) }[/math], [math]\displaystyle{ c=\sqrt{8}=2.828 }[/math] in the first half-cycle and [math]\displaystyle{ c=0.000 }[/math] in the second, [math]\displaystyle{ \bar{c}=\sqrt{8}/2=1.414 }[/math], [math]\displaystyle{ \theta_M = \sqrt{8} = 2.828 }[/math], [math]\displaystyle{ \phi=0.000 }[/math], and [math]\displaystyle{ \dot{m}=(0.000,0.000) }[/math].
Note how, as predicted, the final weight vector [math]\displaystyle{ \mathbf{m} }[/math] has become orthogonal to one of the input patterns, the final values of [math]\displaystyle{ c }[/math] in both intervals being zeros of the function [math]\displaystyle{ \phi }[/math].
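The example can be reproduced with a short script. This is a sketch under the example's assumptions ([math]\displaystyle{ p=2 }[/math], [math]\displaystyle{ c_0=1 }[/math], hence [math]\displaystyle{ \theta_M=\bar{c}^3 }[/math], and a 10% learning rate); the running average is taken over the two most recent half-cycles, which differs trivially from the text's very first step:

```python
import numpy as np

# Input patterns, initial weights, and learning rate from the example above.
d = [np.array([0.9, 0.1]), np.array([0.2, 0.7])]
m = np.array([0.10, 0.05])
lr = 0.1                                   # "adding 10% of the derivative"

c_half = [np.dot(m, di) for di in d]       # latest output in each half-cycle
for step in range(2000):                   # alternate between the half-cycles
    i = step % 2
    c = np.dot(m, d[i])
    c_half[i] = c
    c_bar = 0.5 * (c_half[0] + c_half[1])  # average output over one full cycle
    theta = c_bar ** 3                     # theta_M = (c_bar/c_0)^p c_bar
    m = m + lr * c * (c - theta) * d[i]    # m_dot = phi(c, c_bar) d

print(np.round(m, 3))  # expected, per the text: [ 3.246 -0.927]
```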
Experiment
The first major experimental confirmation of BCM came in 1992, in an investigation of LTP and LTD in the hippocampus. Serena Dudek's experimental work showed qualitative agreement with the final form of the BCM activation function.[8] This experiment was later replicated in the visual cortex, which BCM was originally designed to model.[9] This work provided further evidence of the necessity of a variable threshold function for stability in Hebbian-type learning (BCM or others).
Experimental evidence had been non-specific to BCM until Rittenhouse et al. confirmed BCM's prediction of synaptic modification in the visual cortex when one eye is selectively closed. Specifically,
- [math]\displaystyle{ \log\left(\frac{m_{\rm closed}(t)}{m_{\rm closed}(0)}\right) \sim -\overline{n^2}t, }[/math]
where [math]\displaystyle{ \overline{n^2} }[/math] describes the variance of the spontaneous activity, or noise, in the closed eye and [math]\displaystyle{ t }[/math] is the time since closure; that is, the closed-eye synaptic weights are predicted to decay exponentially at a rate set by the noise variance. Experiments agreed with the general shape of this prediction and provided an explanation for the differing dynamics of monocular eye closure (monocular deprivation) versus binocular eye closure.[10] The experimental results are far from conclusive, but so far have favored BCM over competing theories of plasticity.
Applications
While the algorithm of BCM is too complicated for large-scale parallel distributed processing, it has been put to use in lateral networks with some success.[11] Furthermore, some existing computational network learning algorithms have been made to correspond to BCM learning.[12]
References
1. Izhikevich, Eugene M.; Desai, Niraj S. (2003). "Relating STDP to BCM". Neural Computation 15 (7): 1511–1523. doi:10.1162/089976603321891783. ISSN 0899-7667. PMID 12816564.
2. Coesmans, Michiel; Weber, John T.; De Zeeuw, Chris I.; Hansel, Christian (2004). "Bidirectional Parallel Fiber Plasticity in the Cerebellum under Climbing Fiber Control". Neuron 44 (4): 691–700. doi:10.1016/j.neuron.2004.10.031. PMID 15541316.
3. Kandel, Eric R., ed. (2013). Principles of Neural Science (5th ed.). New York. ISBN 978-0-07-139011-8. OCLC 795553723.
4. Markram, Henry; Gerstner, Wulfram; Sjöström, Per Jesper (2012). "Spike-Timing-Dependent Plasticity: A Comprehensive Overview". Frontiers in Synaptic Neuroscience 4: 2. doi:10.3389/fnsyn.2012.00002. ISSN 1663-3563. PMID 22807913.
5. Cooper, L. N. (2000). "Memories and memory: A physicist's approach to the brain". International Journal of Modern Physics A 15 (26): 4069–4082. doi:10.1142/s0217751x0000272x. http://physics.brown.edu/physics/researchpages/Ibns/Lab%20Publications%20(PDF)/memoriesandmemory.pdf. Retrieved 2007-11-11.
6. Bienenstock, Elie L.; Cooper, Leon; Munro, Paul (January 1982). "Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex". The Journal of Neuroscience 2 (1): 32–48. doi:10.1523/JNEUROSCI.02-01-00032.1982. PMID 7054394. PMC 6564292. http://www.physics.brown.edu/physics/researchpages/Ibns/Cooper%20Pubs/070_TheoryDevelopment_82.pdf. Retrieved 2007-11-11.
7. Intrator, Nathan (2006–2007). "The BCM theory of synaptic plasticity". Neural Computation course notes. School of Computer Science, Tel-Aviv University. http://www.cs.tau.ac.il/~nin/Courses/NC05/BCM.ppt.
8. Dudek, Serena M.; Bear, Mark (1992). "Homosynaptic long-term depression in area CA1 of hippocampus and effects of N-methyl-D-aspartate receptor blockade". Proc. Natl. Acad. Sci. 89 (10): 4363–4367. doi:10.1073/pnas.89.10.4363. PMID 1350090. PMC 49082. Bibcode: 1992PNAS...89.4363D. http://www.pnas.org/cgi/reprint/89/10/4363.pdf. Retrieved 2007-11-11.
9. Kirkwood, Alfredo; Rioult, Marc G.; Bear, Mark F. (1996). "Experience-dependent modification of synaptic plasticity in rat visual cortex". Nature 381 (6582): 526–528. doi:10.1038/381526a0. PMID 8632826. Bibcode: 1996Natur.381..526K.
10. Rittenhouse, Cynthia D.; Shouval, Harel Z.; Paradiso, Michael A.; Bear, Mark F. (1999). "Monocular deprivation induces homosynaptic long-term depression in visual cortex". Nature 397 (6717): 347–350. doi:10.1038/16922. PMID 9950426. Bibcode: 1999Natur.397..347R.
11. Intrator, Nathan (2006–2007). "BCM Learning Rule, Comp Issues". Neural Computation course notes. School of Computer Science, Tel-Aviv University. http://www.cs.tau.ac.il/~nin/Courses/NC05/bcmppr.pdf.
12. Baras, Dorit; Meir, Ron (2007). "Reinforcement Learning, Spike-Time-Dependent Plasticity, and the BCM Rule". Neural Computation 19 (8): 2245–2279. doi:10.1162/neco.2007.19.8.2245. PMID 17571943. http://eprints.pascal-network.org/archive/00002561/01/RL-STDP_Final.pdf. Retrieved 2007-11-11.
External links
Original source: https://en.wikipedia.org/wiki/BCM_theory