Effective complexity
Effective complexity is a measure of complexity defined in a 1996 paper by Murray Gell-Mann and Seth Lloyd that attempts to quantify the amount of non-random information in a system.[1][2] It has been criticised for depending on subjective decisions about which parts of the information in the system are to be discounted as random.[3]
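The 1996 paper frames effective complexity as one part of a decomposition of an entity's total information. Stated roughly, and in informal notation rather than the paper's exact symbols, the total information $\Sigma$ splits into the effective complexity $\mathcal{E}$ and an entropy term $S$:

$$\Sigma \;\approx\; \mathcal{E} + S,$$

where $\mathcal{E}$ is the algorithmic information content (Kolmogorov complexity) of a concise description of the entity's regularities, and $S$ is the entropy of the ensemble used to model the remaining, apparently random, component. The criticism cited above targets precisely this split: different judgements about what counts as a regularity shift information between $\mathcal{E}$ and $S$.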
See also
- Kolmogorov complexity
- Excess entropy
- Logical depth
- Rényi information
- Self-dissimilarity
- Forecasting complexity
References
1. Gell-Mann, Murray; Lloyd, Seth (1996). "Information Measures, Effective Complexity, and Total Information". Complexity 2 (1): 44–52. doi:10.1002/(SICI)1099-0526(199609/10)2:1<44::AID-CPLX10>3.0.CO;2-X. Bibcode: 1996Cmplx...2a..44G. https://philpapers.org/rec/GELIME.
2. Ay, Nihat; Müller, Markus; Szkoła, Arleta (2010). "Effective Complexity and Its Relation to Logical Depth". IEEE Transactions on Information Theory 56 (9): 4593–4607. doi:10.1109/TIT.2010.2053892.
3. McAllister, James W. (2003). "Effective Complexity as a Measure of Information Content". Philosophy of Science 70 (2): 302–307. doi:10.1086/375469. https://philpapers.org/rec/MCAECA.
External links
Original source: https://en.wikipedia.org/wiki/Effective_complexity