Simulation hypothesis

This article is speculative. It covers topics that are not straightforwardly derivable from current knowledge. Take it with a grain of salt. See: "exploratory engineering" for what can and cannot be predicted.

This is all deep speculation, mostly unrelated to APM. Please treat it just as food for thought. Other parts of the wiki are much more serious.


Practical applications have shown again and again that nature's behavior is well amenable to approximation by simple rules (natural laws) which can be described by math (paper: "The Unreasonable Effectiveness of Mathematics in the Natural Sciences") and simulated with programs.
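As a toy illustration of that last point (nothing in this article depends on it, and all constants are arbitrary example values): a handful of Python lines suffice to simulate one such simple rule, here constant-gravity free fall via explicit Euler integration.

 # Toy illustration: a simple natural law (constant gravity) simulated
 # by a simple rule (explicit Euler integration).
 G = 9.81   # gravitational acceleration in m/s^2
 DT = 0.01  # integration time step in seconds

 def simulate_fall(height):
     """Integrate free fall from rest; return (time, height) samples."""
     t, v, h, samples = 0.0, 0.0, height, []
     while h > 0.0:
         v += G * DT   # the "natural law": dv/dt = g
         h -= v * DT   # position update
         t += DT
         samples.append((t, h))
     return samples

 print(simulate_fall(10.0)[-1])  # last sample: roughly t = 1.43 s, h ~ 0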

Further (initially phenomenological) refinement of models to better match reality may complicate the mathematical models at first, but once understanding widens (unifying theoretical models) the mathematical descriptions usually can be made compact again. (Also a lesson from humanity's practical experience.) In a way we are reverse engineering the inner workings of the universe. The "source code" of our universe becomes increasingly clear to us.

Question: Is there some hardware "in existence" that executes this source code?
Question: If this hardware does not interact with us at all, isn't the notion of its "existence" (in a relational sense) flawed?
Idea: In contrast to the dominant view of one single parent universe running the simulation, our universe may be an equivalence class / quantum superposition of many (transfinitely many) simulations.

Relation to entropy-dip big-bang

The aforementioned reverse-engineerability of natural law points to ultra-highly-compressed models (ultra-high-level source code), compatible with the idea of "big bang as spontaneous demixing event". (Minimal entropy in terms of total bits & maximal entropy in the data-compression sense??)

Quantum randomness as the most sophisticated PRNG in the universe?

Minimizing the depth of the entropy dip leads to the need to compress even quantum randomness (besides the randomness from the big bang). The only true random number generator (TRNG) that we know of today would then actually also be a deterministic pseudo random number generator (PRNG). Would the many-worlds interpretation then break down? Hidden variables become necessary, but local hidden variables are ruled out by now. Global hidden variables?? But then again there is the possibility of an equivalence class of simulations.
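For readers unfamiliar with the term: a minimal sketch of what "deterministic yet random-looking" means here. The constants are the classic Numerical Recipes choice for a linear congruential generator; nothing about them is specific to this article.

 # A fully deterministic rule whose output nevertheless looks random:
 # a linear congruential generator (LCG).
 def lcg(seed):
     state = seed
     while True:
         state = (1664525 * state + 1013904223) % 2**32
         yield state / 2**32  # uniform-looking floats in [0, 1)

 rng = lcg(seed=42)
 print([round(next(rng), 3) for _ in range(5)])  # reproducible "randomness"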

Nesting of simulated universes

Idea: If every possible universe exists, then for every universe there is a universe simulating it. (Just benign circular logic?)
Even if for every universe there are universes simulating it, those simulations are not the cause of its existence, since the entropy dip required for the simulating universes may be much bigger than the one required for the simulated universe.

Question A: Does a universe-simulating universe need a much bigger entropy dip than a simulated universe?
If so, universe-simulating universes may be much more unlikely to "spontaneously form" than the universes they simulate.

If yes to A:
Idea: If we are a simulation, we must be an uncontrolled and uncontrollable "waste product" in a long, purely side-effect-free stretch of (reversible or quantum) computation.
If one simulates things, one usually does so because one wants to tamper with the simulation. Severe tampering though ("flying pigs", air molecules spontaneously deciding to collaborate and punch you) would increase the magnitude of the entropy dip to the level of the simulating universe, which may be power-towers more unlikely.
Question: Can it be called a simulation if no one can interfere?

If no to A, then maybe yes to these?
Question: If our universe has not used quantum computation to any significant depth so far (no natural quantum computers and no advanced quantum-computer-building aliens), would it be possible to simulate our own (mostly classical) universe on a quantum computer?
Question: If we are simulated on quantum computers and build at least one powerful quantum computer ourselves, can we overload our simulating hardware? Or would we just lose one of the many universes that are simulating us, indistinguishably from our point of view?
Question: If quantum computation shows fundamental limits (TODO: find and add hypothesis - about decoherence setting in at some scale?), can conclusions be drawn for the simulation hypothesis (simulation hardware limits)?

Accidental universes vs. accidentally simulated brains

Idea: Contrary to the "widespread" assumption, a "Boltzmann brain" (the simulation of a life experience instead of a universe) may need a much bigger entropy dip than the one necessary for the simulation of a whole universe that contains this (and many more) brains/experiences.

Simulating all life experiences of an individual person/animal/decently-intelligent creature requires the encoding / plain storage of lots of facts without reason (giant mountains of data). There is a large, stateful informational surface area.

An analogy: compact, clean, modularized, high-level program source code compiled down to a big entangled mess of assembly code. Cropping out a random small patch of the assembly code (or even of its execution pattern) may need much more storage space than the original source code.

As is well known, even very simple programs can create very complicated patterns when executed (e.g. Conway's Game of Life and many more). Now imagine what a program the size of today's operating systems could encode (not create - execution is a different story) if it were known how to encode structure with near "maximal intelligence": a rather small, very compact, enormously high-level, high-information-density, wisdom-packed piece of source code.
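To make the Game of Life point concrete, here is a minimal sketch in Python. The grid is represented as a set of live-cell coordinates; two short rules suffice, and the standard "glider" pattern reappears translated diagonally after four generations.

 # Conway's Game of Life: two short rules, endlessly rich behavior.
 from collections import Counter

 def step(live):
     """One generation: birth on exactly 3 neighbours, survival on 2 or 3."""
     counts = Counter(
         (x + dx, y + dy)
         for (x, y) in live
         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
         if (dx, dy) != (0, 0)
     )
     return {cell for cell, n in counts.items()
             if n == 3 or (n == 2 and cell in live)}

 glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
 for _ in range(4):     # after 4 generations the glider has moved by (1, 1)
     glider = step(glider)
 print(sorted(glider))  # same shape as the start, shifted diagonally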

Question: Is it unreasonable to assume that an immensely wisely arranged, OS-sized amount of data (as of 2017) (far beyond anything humans will ever be capable of) would suffice for encoding a (our) whole universe (consistent with all of humanity's observations)?

Here's the source code for the multiverse – Warning! Execute at your own risk

One can construct programs that systematically construct and execute all constructible programs in a semi-parallel, Cantor-diagonal-like fashion. (This is very elegantly doable in the lambda calculus, which is Turing complete.) Such a program contains (countably) infinitely many halting problems => infinitely many truths without reason => infinitely many axioms. It could be seen as a kind of "theory of everything", but since these programs are the very definition of inefficiency and uselessness, it might be better to call them a "theory of nothing". (TODO: Many programs can do that. Are there any programs that create product-programs that other programs don't create? There shouldn't be, otherwise there seems to be trouble with Turing completeness.)
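A sketch of such a dovetailer, using Python rather than the lambda calculus mentioned above, with Brainfuck (minus I/O, to keep it harmless and self-contained) as the enumerated Turing-complete toy language; the round count is finite here only so the sketch terminates.

 # "Run everything" dovetailer: enumerate all programs of a tiny
 # Turing-complete language and execute each under a growing step budget,
 # Cantor-diagonal style, so no single non-halting program stalls the scheme.
 from itertools import count, product

 ALPHABET = "+-<>[]"  # Brainfuck minus the I/O commands . and ,

 def programs():
     """Enumerate every string over ALPHABET, shortest first."""
     for length in count(1):
         for chars in product(ALPHABET, repeat=length):
             yield "".join(chars)

 def bracket_map(prog):
     """Match each [ with its ]; return None if brackets are unbalanced."""
     stack, pairs = [], {}
     for i, c in enumerate(prog):
         if c == "[":
             stack.append(i)
         elif c == "]":
             if not stack:
                 return None
             j = stack.pop()
             pairs[i], pairs[j] = j, i
     return None if stack else pairs

 def run(prog, max_steps):
     """Interpret prog for at most max_steps steps; True means it halted."""
     pairs = bracket_map(prog)
     if pairs is None:
         return True                   # ill-formed: treat as trivially done
     tape, ptr, pc = {}, 0, 0
     for _ in range(max_steps):
         if pc >= len(prog):
             return True               # ran off the end: halted
         op = prog[pc]
         if op == "+": tape[ptr] = (tape.get(ptr, 0) + 1) % 256
         elif op == "-": tape[ptr] = (tape.get(ptr, 0) - 1) % 256
         elif op == ">": ptr += 1
         elif op == "<": ptr -= 1
         elif op == "[" and tape.get(ptr, 0) == 0: pc = pairs[pc]
         elif op == "]" and tape.get(ptr, 0) != 0: pc = pairs[pc]
         pc += 1
     return False                      # budget exhausted: still running

 # Dovetail: in round n, give each of the first n programs a budget of n steps.
 enumerated, gen = [], programs()
 for n in range(1, 30):                # bounded here; unbounded "in spirit"
     enumerated.append(next(gen))
     halted = sum(run(p, n) for p in enumerated)
     print(f"round {n}: {halted}/{n} programs halted within {n} steps")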

Notes

  • In some sense math, including in the form of (pure) programming, is more fundamental than (known) physics. Is there any chance to bridge the gap from the simulation hypothesis to fundamental physics in a way that shows at least theoretical signs of working? Or, much better, even with physically testable hypotheses?

Related

External links

Boltzmann brain:
What if the encoding of the universe requires less of an entropy dip than a human Boltzmann brain? (Reasons why this could be so are given in the text above.) That would resolve that conundrum. Right? But if universes capable of supporting universe-simulating computations inside them necessarily require a much bigger entropy dip, then wouldn't those universes be susceptible to the Boltzmann brain problem? At least at some point in an upwards-infinite hierarchy of simulations, Boltzmann brains must become more likely than the entropy dip that was necessary for our universe. And what does that even mean??