Coarse-grained modeling

Coarse-grained modeling aims at simulating the behaviour of complex systems using simplified (coarse-grained) representations. Coarse-grained models are widely used for molecular modeling of biomolecules[1][2] at various granularity levels. A wide range of coarse-grained models have been proposed. They are usually dedicated to computational modeling of specific molecules: proteins,[1][2] nucleic acids,[3][4] lipid membranes,[2][5] carbohydrates[6] or water.[7] In these models, molecules are represented not by individual atoms, but by "pseudo-atoms" approximating groups of atoms, such as a whole amino acid residue. By decreasing the number of degrees of freedom, much longer simulation times can be studied at the expense of molecular detail. Coarse-grained models have found practical applications in molecular dynamics simulations.[1] Another case of interest is the simplification of a given discrete-state system, as very often descriptions of the same system at different levels of detail are possible.[8][9] An example is given by the chemomechanical dynamics of a molecular machine, such as kinesin.[8][10]
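The mapping from atoms to pseudo-atoms can be illustrated with a minimal sketch (toy coordinates and a simple one-bead-per-residue, centre-of-mass mapping; the function name and data are hypothetical and not tied to any particular force field):

  import numpy as np

  def coarse_grain(positions, masses, residue_ids):
      """Replace each residue's atoms by one pseudo-atom at its centre of mass.

      positions   : (n_atoms, 3) Cartesian coordinates
      masses      : (n_atoms,)   atomic masses
      residue_ids : (n_atoms,)   residue index of each atom
      """
      beads = []
      for res in np.unique(residue_ids):
          sel = residue_ids == res
          beads.append(np.average(positions[sel], axis=0, weights=masses[sel]))
      return np.array(beads)        # (n_residues, 3) pseudo-atom coordinates

  # Toy example: 4 atoms forming 2 residues
  pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                  [5.0, 0.0, 0.0], [6.0, 2.0, 0.0]])
  mass = np.array([12.0, 1.0, 12.0, 16.0])
  resid = np.array([0, 0, 1, 1])
  print(coarse_grain(pos, mass, resid))   # two beads instead of four atoms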

Coarse-grained modeling originates from the work of Michael Levitt and Ariel Warshel in the 1970s.[11][12][13] Coarse-grained models are presently often used as components of multiscale modeling protocols in combination with reconstruction tools[14] (from coarse-grained to atomistic representation) and atomistic resolution models.[1] Atomistic resolution models alone are presently not efficient enough to handle large system sizes and long simulation timescales.[1][2]

Coarse graining and fine graining in statistical mechanics address the subject of entropy [math]\displaystyle{ S }[/math], and thus the second law of thermodynamics. One has to realise that the concept of temperature [math]\displaystyle{ T }[/math] cannot be attributed to an arbitrarily microscopic particle, since such a particle does not radiate thermally like a macroscopic body or "black body". However, one can attribute a nonzero entropy [math]\displaystyle{ S }[/math] to an object with as few as two states, like a "bit" (and nothing else). The entropies of the two cases are called thermal entropy and von Neumann entropy respectively.[15] They are also distinguished by the terms coarse grained and fine grained respectively. This latter distinction is related to the aspect spelled out above and is elaborated on below.
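As a small numerical illustration of the bit example (a sketch using the standard formula [math]\displaystyle{ S=-\Sigma_i\rho_i\ln\rho_i }[/math] introduced below, with hypothetical probabilities):

  import numpy as np

  # Entropy S = -sum_i p_i ln p_i of a two-state system ("bit").
  def entropy(p):
      p = np.asarray(p, dtype=float)
      p = p[p > 0]                  # convention: 0 * ln 0 = 0
      return -np.sum(p * np.log(p))

  print(entropy([0.5, 0.5]))        # ln 2 ~ 0.693: maximally uncertain bit
  print(entropy([1.0, 0.0]))        # ~ 0: the bit is in a definite state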

The Liouville theorem (sometimes also called Liouville equation)

[math]\displaystyle{ \frac{d}{dt}(\Delta q\Delta p) = 0 }[/math]

states that a phase space volume [math]\displaystyle{ \Gamma }[/math] (spanned by [math]\displaystyle{ q }[/math] and [math]\displaystyle{ p }[/math], here in one spatial dimension) remains constant in the course of time, no matter where the point [math]\displaystyle{ q,p }[/math] contained in [math]\displaystyle{ \Delta q\Delta p }[/math] moves. This is a consideration in classical mechanics. In order to relate this view to macroscopic physics, one surrounds each point [math]\displaystyle{ q,p }[/math] e.g. with a sphere of some fixed volume, a procedure called coarse graining which lumps together points or states of similar behaviour. The trajectory of this sphere in phase space then also covers other points, and hence its volume in phase space grows. The entropy [math]\displaystyle{ S }[/math] associated with this consideration, whether zero or not, is called coarse grained entropy or thermal entropy. A large number of such systems, i.e. the one under consideration together with many copies, is called an ensemble. If these systems do not interact with each other or anything else, and each has the same energy [math]\displaystyle{ E }[/math], the ensemble is called a microcanonical ensemble. Each replica system appears with the same probability, and temperature does not enter.
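The growth of the coarse grained volume can be made concrete with a minimal numerical sketch (the pendulum Hamiltonian, blob size and grid spacing below are illustrative choices, not taken from the references): an initially compact blob of phase space points is evolved, and the number of grid cells it touches, a coarse grained measure of its volume, grows as the blob filaments, even though Liouville's theorem keeps the fine grained volume constant.

  import numpy as np

  # Illustrative pendulum, H = p^2/2 - cos(q), integrated with a symplectic
  # Euler step; the blob of points shears out and occupies ever more cells.
  rng = np.random.default_rng(0)
  q = rng.uniform(1.0, 1.2, 2000)       # small initial blob in phase space
  p = rng.uniform(0.0, 0.2, 2000)
  dt, cell = 0.05, 0.1                  # time step and coarse-graining cell size

  def occupied_cells(q, p):
      return len({(int(x // cell), int(y // cell)) for x, y in zip(q, p)})

  for step in range(2001):
      if step % 500 == 0:
          n = occupied_cells(q, p)
          print(f"t = {step * dt:6.1f}   cells = {n:5d}   S ~ ln(cells) = {np.log(n):.2f}")
      p = p - dt * np.sin(q)            # kick ...
      q = q + dt * p                    # ... then drift (area preserving)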

Now suppose we define a probability density [math]\displaystyle{ \rho(q_i,p_i,t) }[/math] describing the motion of the point [math]\displaystyle{ q_i,p_i }[/math] with phase space element [math]\displaystyle{ \Delta q_i\Delta p_i }[/math]. In the case of equilibrium or steady motion the equation of continuity implies that the probability density [math]\displaystyle{ \rho }[/math] is independent of time [math]\displaystyle{ t }[/math]. We take [math]\displaystyle{ \rho_i=\rho(q_i,p_i) }[/math] as nonzero only inside the phase space volume [math]\displaystyle{ V_{\Gamma} }[/math]. One then defines the entropy [math]\displaystyle{ S }[/math] by the relation

[math]\displaystyle{ S=-\Sigma_i\rho_i\ln\rho_i, \;\; }[/math] where [math]\displaystyle{ \;\;\Sigma_i\rho_i=1. }[/math]

Then, by maximisation for a given energy [math]\displaystyle{ E }[/math], i.e. linking [math]\displaystyle{ \delta S=0 }[/math] with [math]\displaystyle{ \delta }[/math] of the other sum equal to zero via a Lagrange multiplier [math]\displaystyle{ \lambda }[/math], one obtains (as in the case of a lattice of spins or with a bit at each lattice point)

[math]\displaystyle{ V_{\Gamma}= e^{(\lambda +1)}=\frac{1}{\rho} }[/math] [math]\displaystyle{ \;\;\; }[/math] and [math]\displaystyle{ \;\;\; }[/math] [math]\displaystyle{ S=\ln V_{\Gamma} }[/math],

the volume of [math]\displaystyle{ \Gamma }[/math] being proportional to the exponential of S. This is again a consideration in classical mechanics.
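For reference, the variational step leading to this result can be spelled out. Requiring

[math]\displaystyle{ \delta\left(-\Sigma_i\rho_i\ln\rho_i - \lambda\Sigma_i\rho_i\right)=0 }[/math] [math]\displaystyle{ \;\;\; }[/math] gives [math]\displaystyle{ \;\;\; }[/math] [math]\displaystyle{ -\ln\rho_i-1-\lambda=0, \;\;\rho_i=e^{-(\lambda+1)}, }[/math]

i.e. [math]\displaystyle{ \rho_i }[/math] is the same constant [math]\displaystyle{ \rho }[/math] for every state in [math]\displaystyle{ \Gamma }[/math]; the normalisation [math]\displaystyle{ \Sigma_i\rho_i=\rho V_{\Gamma}=1 }[/math] then yields [math]\displaystyle{ V_{\Gamma}=1/\rho=e^{\lambda+1} }[/math] and [math]\displaystyle{ S=-\ln\rho=\ln V_{\Gamma} }[/math].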

In quantum mechanics the phase space becomes a space of states, and the probability density [math]\displaystyle{ \rho }[/math] an operator with a subspace of states [math]\displaystyle{ \Gamma }[/math] of dimension or number of states [math]\displaystyle{ N_{\Gamma} }[/math] specified by a projection operator [math]\displaystyle{ P_{\Gamma} }[/math]. Then the entropy [math]\displaystyle{ S }[/math] is (obtained as above)

[math]\displaystyle{ S= -Tr\rho\ln\rho = \ln N_{\Gamma}, }[/math]

and is described as fine grained or von Neumann entropy. If [math]\displaystyle{ N_{\Gamma}=1 }[/math], the entropy vanishes and the system is said to be in a pure state. Here the exponential of S is proportional to the number of states. The microcanonical ensemble is again a large number of noninteracting copies of the given system and [math]\displaystyle{ S }[/math], energy [math]\displaystyle{ E }[/math] etc. become ensemble averages.
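A short numerical check of this formula (a sketch with a toy 4-dimensional state space; the matrices are purely illustrative):

  import numpy as np

  # Fine-grained (von Neumann) entropy S = -Tr(rho ln rho), evaluated via
  # the eigenvalues of the density matrix rho.
  def von_neumann_entropy(rho):
      evals = np.linalg.eigvalsh(rho)
      evals = evals[evals > 1e-12]      # convention: 0 * ln 0 = 0
      return -np.sum(evals * np.log(evals))

  pure = np.zeros((4, 4))
  pure[0, 0] = 1.0                      # projector onto a single state: pure state
  mixed = np.eye(4) / 4.0               # P_Gamma / N_Gamma with N_Gamma = 4

  print(von_neumann_entropy(pure))      # ~ 0: pure state
  print(von_neumann_entropy(mixed))     # ln 4 ~ 1.386 = ln N_Gamma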

Now consider the interaction of a given system with another one - or, in ensemble terminology, the given system and the large number of replicas all immersed in a big one called a heat bath characterised by [math]\displaystyle{ \rho }[/math]. Since the systems interact only via the heat bath, the individual systems of the ensemble can have different energies [math]\displaystyle{ E_i, E_j, ... }[/math] depending on which energy state they are in. This interaction is described as entanglement, and the ensemble as a canonical ensemble (the macrocanonical ensemble also permits exchange of particles).

The interaction of the ensemble elements via the heat bath leads to temperature [math]\displaystyle{ T }[/math], as we now show.[16] Considering two elements with energies [math]\displaystyle{ E_i,E_j }[/math], the probability of finding these in the heat bath is proportional to [math]\displaystyle{ \rho(E_i)\rho(E_j) }[/math], and this is proportional to [math]\displaystyle{ \rho(E_i+E_j) }[/math] if we consider the binary system as a system in the same heat bath defined by the function [math]\displaystyle{ \rho }[/math]. It follows that [math]\displaystyle{ \rho(E)\propto e^{-\mu E} }[/math] (the only way to satisfy the proportionality), where [math]\displaystyle{ \mu }[/math] is a constant. Normalisation then implies

[math]\displaystyle{ \rho(E_i) = \frac{e^{-\mu E_i}}{\Sigma_j e^{-\mu E_j}}, \Sigma_i \rho(E_i) =1. }[/math]
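A quick numerical sanity check of this normalisation (toy energy levels in arbitrary units, with [math]\displaystyle{ k_B=1 }[/math] and [math]\displaystyle{ \mu=1/T }[/math]; the values are purely illustrative):

  import numpy as np

  # Canonical probabilities rho(E_i) = exp(-mu E_i) / sum_j exp(-mu E_j).
  E = np.array([0.0, 1.0, 2.0, 3.0])    # toy energy levels
  mu = 1.0 / 2.0                        # inverse temperature, T = 2

  weights = np.exp(-mu * E)
  rho = weights / weights.sum()         # normalisation: probabilities sum to 1

  print(rho, rho.sum())                 # the sum is 1 as required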

Then, in terms of ensemble averages,

[math]\displaystyle{ {\overline S} =-{\overline {\ln\rho}}, }[/math] [math]\displaystyle{ \;\;\; }[/math] and [math]\displaystyle{ \;\;\; }[/math] [math]\displaystyle{ \mu\equiv\frac{1}{T}, \;k_B=1, }[/math]

the latter identification following by comparison with the second law of thermodynamics, as spelled out below. [math]\displaystyle{ {\overline S} }[/math] is now the entanglement entropy or fine grained von Neumann entropy. It is zero if the system is in a pure state, and nonzero when the system is in a mixed (entangled) state.
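The comparison can be made explicit: inserting [math]\displaystyle{ \rho(E_i) }[/math] from above into the definition of the entropy gives

[math]\displaystyle{ {\overline S}=-\Sigma_i\rho(E_i)\ln\rho(E_i)=\mu{\overline E}+\ln\Sigma_j e^{-\mu E_j}, }[/math]

so that, for fixed energy levels [math]\displaystyle{ E_j }[/math], [math]\displaystyle{ d{\overline S}=\mu\, d{\overline E} }[/math]; comparison with [math]\displaystyle{ dS=dE/T }[/math] of the second law (with [math]\displaystyle{ k_B=1 }[/math]) identifies [math]\displaystyle{ \mu=1/T }[/math].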

Above we considered a system immersed in another huge one called a heat bath, with the possibility of heat exchange between them. Frequently one considers a different situation, i.e. two systems A and B with a small hole in the partition between them. Suppose B is originally empty but A contains an explosive device which fills A instantaneously with photons. Originally A and B have energies [math]\displaystyle{ E_A }[/math] and [math]\displaystyle{ E_B }[/math] respectively, and there is no interaction. Hence originally both are in pure quantum states and have zero fine grained entropies. Immediately after the explosion A is filled with photons, the energy still being [math]\displaystyle{ E_A }[/math] and that of B still [math]\displaystyle{ E_B }[/math] (no photon has yet escaped). Since A is filled with photons, these obey a Planck distribution law, and hence the coarse grained thermal entropy of A is nonzero (recall: there are many possible configurations of the photons in A, i.e. many states, one of them the most probable), although the fine grained quantum mechanical entropy of A is still zero (A is still in the same, pure, energy state), as is that of B.

Now allow photons to leak slowly (i.e. with no disturbance of the equilibrium) from A to B. With fewer photons in A, its coarse grained entropy diminishes while that of B increases. This entanglement of A and B implies that they are now quantum mechanically in mixed states, and so their fine grained entropies are no longer zero. Finally, when all photons are in B, the coarse grained entropy of A as well as its fine grained entropy vanish and A is again in a pure state (its ground state), but with new energy. B, on the other hand, now has an increased thermal entropy, but since the entanglement is over it is quantum mechanically again in a pure state, and that has zero fine grained von Neumann entropy. Consider B: in the course of the entanglement with A it started and ended in a pure state (thus with zero fine grained or entanglement entropy). Its coarse grained entropy, however, rose from zero to its final nonzero value. Roughly half way through the procedure the entanglement entropy of B reaches a maximum and then decreases to zero at the end.

The classical coarse grained thermal entropy of the second law of thermodynamics is not the same as the (generally smaller) quantum mechanical fine grained entropy. The difference is called information. As may be deduced from the foregoing arguments, this difference is roughly zero before the entanglement entropy (which is the same for A and B) attains its maximum. An example of coarse graining is provided by Brownian motion.[17]

Software packages

  • Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS)
  • Extensible Simulation Package for Research on Soft Matter (ESPResSo)

References

  1. "Coarse-Grained Protein Models and Their Applications". Chemical Reviews 116 (14): 7898–936. July 2016. doi:10.1021/acs.chemrev.6b00163. PMID 27333362.
  2. "The power of coarse graining in biomolecular simulations". Wiley Interdisciplinary Reviews: Computational Molecular Science 4 (3): 225–248. May 2014. doi:10.1002/wcms.1169. PMID 25309628.
  3. "SimRNA: a coarse-grained method for RNA folding simulations and 3D structure prediction". Nucleic Acids Research 44 (7): e63. April 2016. doi:10.1093/nar/gkv1479. PMID 26687716. 
  4. "Recent successes in coarse-grained modeling of DNA" (in en). Wiley Interdisciplinary Reviews: Computational Molecular Science 3 (1): 69–83. 2013-01-01. doi:10.1002/wcms.1114. ISSN 1759-0884. 
  5. "Comparison of thermodynamic properties of coarse-grained and atomic-level simulation models". ChemPhysChem 8 (3): 452–61. February 2007. doi:10.1002/cphc.200600658. PMID 17290360. https://pure.rug.nl/ws/files/3623816/2007ChemPhysChemBaronSupp.pdf. 
  6. "Martini Coarse-Grained Force Field: Extension to Carbohydrates". Journal of Chemical Theory and Computation 5 (12): 3195–210. December 2009. doi:10.1021/ct900313w. PMID 26602504. https://figshare.com/articles/Martini_Coarse_Grained_Force_Field_Extension_to_Carbohydrates/2807734. 
  7. "Coarse-Grained Molecular Models of Water: A Review". Molecular Simulation 38 (8–9): 671–681. July 2012. doi:10.1080/08927022.2012.671942. PMID 22904601. 
  8. "Coarse graining of biochemical systems described by discrete stochastic dynamics". Physical Review E 102 (6–1): 062149. December 2020. doi:10.1103/PhysRevE.102.062149. PMID 33466014. Bibcode: 2020PhRvE.102f2149S.
  9. "Optimal Dimensionality Reduction of Multistate Kinetic and Markov-State Models". The Journal of Physical Chemistry B 119 (29): 9029–37. July 2015. doi:10.1021/jp508375q. PMID 25296279. 
  10. "Kinesin's network of chemomechanical motor cycles". Physical Review Letters 98 (25): 258102. June 2007. doi:10.1103/PhysRevLett.98.258102. PMID 17678059. Bibcode2007PhRvL..98y8102L. 
  11. "Computer simulation of protein folding". Nature 253 (5494): 694–8. February 1975. doi:10.1038/253694a0. PMID 1167625. Bibcode1975Natur.253..694L. 
  12. "Theoretical studies of enzymic reactions: dielectric, electrostatic and steric stabilization of the carbonium ion in the reaction of lysozyme". Journal of Molecular Biology 103 (2): 227–49. May 1976. doi:10.1016/0022-2836(76)90311-9. PMID 985660. 
  13. "Birth and future of multiscale modeling for macromolecular systems (Nobel Lecture)". Angewandte Chemie 53 (38): 10006–18. September 2014. doi:10.1002/anie.201403691. PMID 25100216. 
  14. "Computational reconstruction of atomistic protein structures from coarse-grained models". Computational and Structural Biotechnology Journal 18: 162–176. 2020. doi:10.1016/j.csbj.2019.12.007. PMID 31969975. 
  15. Black Holes, Information and the String Theory Revolution. World Scientific. 2005. pp. 69–77. ISBN 981-256-131-5. 
  16. Basics of Statistical Physics (2nd ed.). World Scientific. 2013. pp. 28–31, 152–167. ISBN 978-981-4449-53-3. 
  17. Macroscopic and Large Scale Phenomena: Coarse Graining, Mean Field Limits and Ergodicity. Springer. 2016. ISBN 978-3-319-26883-5.