Computational irreducibility

Computational irreducibility is one of the main ideas proposed by Stephen Wolfram in his 2002 book A New Kind of Science, although the concept goes back to studies from the 1980s.

The idea

Computational irreducibility is meant to explain observed limitations of existing mainstream science. According to Wolfram, many systems whose behavior is generated by simple rules are computationally irreducible: there is no shortcut or closed-form description that compresses their evolution, so the only general way to determine their future state is to trace through every intermediate step. For such systems, prediction is no cheaper than simulation, and only observation, experiment, or explicit step-by-step computation can reveal how they behave.
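
Wolfram's standard illustration is the Rule 30 elementary cellular automaton: its update rule is trivial to state, yet no shortcut for predicting its far-future state is known, so in practice one simply runs it forward. The following sketch (in Python; the row width, step count, and periodic boundary are arbitrary illustrative choices, not part of Wolfram's definition) makes the point concrete: obtaining the row at step 50 involves computing all 50 intermediate rows.

    # Minimal sketch of computational irreducibility using Rule 30, the
    # elementary cellular automaton Wolfram uses as his standard example.
    # As far as is known, there is no general shortcut for the state at
    # step t: one has to compute every intermediate step.

    def rule30_step(cells):
        """Apply one synchronous Rule 30 update to a row of 0/1 cells."""
        n = len(cells)
        # Rule 30: new cell = left XOR (center OR right), periodic boundary
        return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
                for i in range(n)]

    def evolve(cells, steps):
        """Evolve the row step by step; there is no known way to skip ahead."""
        for _ in range(steps):
            cells = rule30_step(cells)
        return cells

    if __name__ == "__main__":
        width = 101
        row = [0] * width
        row[width // 2] = 1                 # single black cell in the middle
        final = evolve(row, 50)             # runs all 50 intermediate steps
        print("".join("#" if c else "." for c in final))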

Implications

  • Behavior that seems complex admits no easy, shortcut theory.
  • The essential features of complex behavior can nevertheless be captured by models with simple underlying structures.
  • A system built from simple underlying structures can still exhibit overall behavior that cannot be described by any reasonably "simple" set of laws.

Analysis

Navot Israeli and Nigel Goldenfeld studied whether computationally irreducible systems can still be predicted approximately by coarse-graining them. They found that some less complex systems behaved simply and predictably after coarse-graining (and thus admitted approximations), whereas more complex systems remained computationally irreducible and unpredictable. It is not known in general what conditions would allow complex phenomena to be described simply and predictably.
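
Their question can be pictured as asking whether coarse-graining commutes with the dynamics: project the detailed state onto supercells, evolve it with a candidate coarse rule, and compare the result with the projection of the fully evolved detailed state. The sketch below is only a schematic version of that test for elementary cellular automata; the block-majority projection, the specific rule numbers, the lattice width, and the convention of one coarse step per `block` fine steps are illustrative assumptions rather than details taken from Israeli and Goldenfeld's papers.

    import random

    def ca_step(cells, rule):
        """One update of an elementary cellular automaton with the given rule number."""
        n = len(cells)
        table = [(rule >> k) & 1 for k in range(8)]
        return [table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
                for i in range(n)]

    def coarse_grain(cells, block):
        """Project each block of cells onto one supercell by majority vote (ties -> 0)."""
        return [1 if 2 * sum(cells[i:i + block]) > block else 0
                for i in range(0, len(cells), block)]

    def commutes(fine_rule, coarse_rule, block, width=60, trials=200):
        """Check coarse(evolve_fine(x)) == evolve_coarse(coarse(x)) on random states."""
        for _ in range(trials):
            fine = [random.randint(0, 1) for _ in range(width)]
            evolved = fine
            for _ in range(block):           # `block` fine steps per coarse step
                evolved = ca_step(evolved, fine_rule)
            if coarse_grain(evolved, block) != ca_step(coarse_grain(fine, block), coarse_rule):
                return False
        return True

    if __name__ == "__main__":
        # Rule 128 projected onto itself is a simple case that passes this test;
        # it is chosen only to make the sketch runnable, not quoted from the papers.
        print(commutes(fine_rule=128, coarse_rule=128, block=2))

In this picture, a coarse description that passes such a test gives an approximate, computationally reducible account of the system, while a system for which no such projection exists remains effectively irreducible at every level of description.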

Compatibilism

Marius Krumm and Markus P. Müller tie computational irreducibility to compatibilism.[1] They refine the idea through an intermediate notion they call computational sourcehood: the requirement that predicting a process demands an essentially full, almost-exact representation of the features of that process, together with a complete, no-shortcut computation. The approach frames the issue through a "no shortcuts" metaphor, which can be likened to cooking: obtaining the desired dish requires every ingredient in the recipe as well as following the cooking schedule. The analogy also points to the deeper distinction between similarity and identity.

References

  1. Marius Krumm and Markus P. Müller, "Computational irreducibility and compatibilism: towards a formalization" (2021). https://arxiv.org/pdf/2101.12033.pdf