A true but useless theory of everything

Or: "The program that executes all programs"
Or: "The program that constructs and executes in parallel all programs that are constructible"

Claim/Conjecture: Turing completeness allows for simulating all possible universes

Turing completeness allows for all possible programs and,
given one interprets the simulation hypothesis in a non-mainstream way as a consequence rather than a cause,
then Turing completeness also allows for the simulation of all possible universes.

As far as we know today, a Turing complete machine can compute everything that is in principle computable. So by ...

  • constructing all possible programs that a Turing machine can represent (Cantor diagonally) and
  • executing all possible programs that a Turing machine can represent (Cantor diagonally)

any part of our universe that we have ever observed, do observe, and will ever observe,
all of that will eventually be simulated to arbitrary accuracy somewhere within this transfinite "program of all programs".
Plus anything that any other sentient intelligent entities can observe in their respective realms.

Of course this totally ignores the ...

  • maximally brute-force nature of the computation and
  • computing inefficiency and the uber-astronomical scale of the task.

Simulation hypothesis as a consequence rather than as the cause

There is no need to assume one of the classical forms of the simulation hypothesis here because:

  • Simulations as a consequence of spontaneous de-mixing
  • Presence of equivalence classes of simulations

Simulations as a consequence of spontaneous de-mixing:
When assuming the big bang to be a spontaneous de-mixing event,
then every single additional bit of encoding in the/a universe makes it half as likely.
This speaks incredibly strongly for an ultra-highly compressed/encoded form of the/a universe. Meaning:

  • (self-deploying) computing substrate, code, and initial conditions (all maximally compressed)
  • and just deterministic execution from there.
    Deterministic execution (superdeterminism) seems at odds with truly random quantum randomness (TRNG).
    This tension is only resolvable when assuming global hidden variables in quantum mechanics (making quantum randomness in effect a PRNG).
    Hidden globally over very vast stretches of spacetime, as cosmic Bell experiments show.

Note: It may well be that the encoding of our whole universe is so incredibly efficient that
it is much, much likelier than the encoding of a mere human experience (Boltzmann brain).
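
A rough way to make this quantitative (an assumption in the spirit of algorithmic probability, not stated above): if a configuration needs K bits to specify, its chance of arising by spontaneous de-mixing scales roughly as 2^(-K). A universe-program whose description is, say, 1000 bits shorter than a bit-by-bit Boltzmann-brain state would then be about 2^1000 (roughly 10^301) times likelier. That is the sense in which maximal compression dominates.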

Presence of equivalence classes of simulations:
For anything simulated there will be infinitely many other places in the "program of all programs" that

  • execute the exact same simulation or
  • execute a simulation similar enough to be perceptually indistinguishable from within.

Note: Simulation to arbitrary accuracy means that for all sentient intelligent entities in there,
there is a point beyond which different programs become perceptually indistinguishable.
An equivalence class of simulations.

Constructing and executing the program that constructs and executes all logically possible programs

Lambda calculus is Turing complete. Ignoring evaluation strategies (just as we ignore computing inefficiencies), lambda calculus is extremely simple. Just three sorts of things (variables, abstractions, and applications) combine into abstract syntax trees that can then be executed.

Lambda calculus provides a nice and straightforward way to construct all possible programs systematically without leaving any possible program out. Concretely, one constructs all possible abstract syntax trees by going through all combinations. One gets an infinite list of all possible program starting states.
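
A minimal sketch of such an enumeration (an illustration, not code from this page; it assumes de Bruijn indices for variables, so no names have to be invented). Every closed lambda term appears at some finite position in the stream, ordered by term size:

  # Enumerate every closed lambda-calculus abstract syntax tree exactly once,
  # smallest terms first, using de Bruijn indices for variables.

  def terms_of_size(n, depth=0):
      """Yield all lambda terms with exactly n nodes under `depth` enclosing binders."""
      if n == 1:
          # a variable is a de Bruijn index pointing at one of the enclosing binders
          for i in range(depth):
              yield ("var", i)
      else:
          # an abstraction wraps a body that sees one more binder
          for body in terms_of_size(n - 1, depth + 1):
              yield ("lam", body)
          # an application splits the remaining nodes between function and argument
          for k in range(1, n - 1):
              for f in terms_of_size(k, depth):
                  for x in terms_of_size(n - 1 - k, depth):
                      yield ("app", f, x)

  def all_terms():
      """Infinite stream of all closed lambda terms, ordered by size."""
      n = 1
      while True:
          yield from terms_of_size(n)
          n += 1

  # First few closed terms: ('lam', ('var', 0)), ('lam', ('lam', ('var', 0))), ('lam', ('lam', ('var', 1))), ...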

In parallel to the generation one can start executing all these programs. To execute a list of programs that grows towards infinity in parallel, one can do it in a Cantor-diagonalization-like fashion: after running one computation step on all programs that are running so far, run the first computation step on one more program.
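
A minimal sketch of this dovetailing (illustrative, assuming each program is wrapped as a Python generator that yields once per computation step; the step-wise evaluator `execute` is a hypothetical helper, only `all_terms` comes from the sketch above):

  def dovetail(program_stream):
      """Interleave an ever-growing set of programs, Cantor-diagonal style:
      admit one more program from the stream, then step every admitted program once."""
      running = []
      for new_program in program_stream:
          running.append(new_program)
          for i, prog in enumerate(running):
              if prog is None:
                  continue                 # this program already halted
              try:
                  next(prog)               # run exactly one computation step
              except StopIteration:
                  running[i] = None        # record termination
          yield sum(p is not None for p in running)  # how many are still running

  # usage sketch: dovetail(execute(term) for term in all_terms())
  # where execute(term) is assumed to beta-reduce one step per yield.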

Do not get confused here. Despite mentioning Georg Cantor's diagonalization, this has nothing to do with proving the existence of the uncountable infinite set of the real numbers. Intuitionistic and constructivist mathematics has problems accepting as real such numbers that are not constructible in a finite number of steps. But that's a different story.

  • The program described here can really be written and is surprisingly short.
  • The execution of this program though is pretty much the very definition of inefficiency itself.

Termination, nontermination in simple loops, nonperiodic nontermination, undecidable termination

Some of the programs should terminate or fall into a repeating loop pretty quickly.
For the programs that neither terminate nor fall into a clearly nonterminating repeating loop there is the question whether there is a general algorithm that can decide, without running the program, if that program will ever halt or will certainly never terminate. This question is the "Entscheidungsproblem" in the formulation of the "halting problem". It turned out (meaning it was proven) that there fundamentally cannot be such an algorithm.
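
The easy cases mentioned above can at least be filtered out mechanically. A minimal sketch (illustrative; `step`, `halted`, and a hashable program state are assumed helpers, and only exact state repetitions are caught, never the genuinely undecidable cases):

  def classify(state, step, halted, max_steps):
      """Return 'halts', 'loops' (exact state repetition, so provably nonterminating),
      or 'unknown' after max_steps steps."""
      seen = set()
      for _ in range(max_steps):
          if halted(state):
              return "halts"
          if state in seen:
              return "loops"        # revisits an earlier state, so it can never halt
          seen.add(state)
          state = step(state)
      return "unknown"              # the interesting residue where the halting problem bites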

Instead, the termination behavior of seemingly non-terminating and nonperiodic programs needs to be proven in a program-by-program (or program-class by program-class) manner. For some programs with unclear termination properties it is fundamentally non-provable whether they are terminating or non-terminating. (Violating the law of the excluded middle?) Thus these mysterious programs can (in some sense) be seen as equivalent to new axioms.

It looks like it won't terminate, we can't prove that it does not terminate, so let's just say it won't terminate and go with it for now. Yes, that is a deadly sin for math, but then again every axiom is a deadly sin for math, and with zero axioms there would be no such thing as math to begin with.

  • So the core of math is necessarily a deadly sin by its own judgement. Scary.
  • So the very deepest core of math is Faith, Magic, Lies, call it what you want.

In the end it may all come down to practicability and how well it matches our observed physical universe.

(Side-note – going meta: What about the provability of our proving capability for a particular program? ...)

Is this even good for anything?

Art as a possible application case

Fractals are pretty. Programs that neither terminate nor get into a clearly nonterminating repetitive loop are often of a fractal nature. There are, of course, also clearly non-terminating non-repetitive ones. Maybe this "program of all programs" can be used to discover some pretty fractals that have so far remained undiscovered.

Challenges:

  • For pictures there needs to be a mapping to 2D space. That would be pretty arbitrary.
  • This program of all programs is pretty much "the very definition of inefficiency".
    Almost all of these programs do nothing or almost nothing.

Related

Nonterminating nonperiodic programs

An interesting, clearly non-terminating, non-periodic program is the algorithm that generates the square root of two in binary. It achieves never-ending non-periodicity by taking as input ever longer sequences of bits from previously computed steps. This causes the algorithm to necessarily slow down as it progresses. Is this slowdown the case for all nonperiodic nonterminating programs? It certainly seems so from this example.
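
A minimal sketch of one such algorithm (the classic digit-by-digit binary square root, given here as an illustration of the description above): each new bit is decided by a trial squaring against the entire bit sequence computed so far, so every step works with longer numbers than the last and the program necessarily slows down.

  def sqrt2_binary_digits():
      """Yield the binary digits of sqrt(2) after the binary point, forever."""
      num = 2    # radicand, shifted left by two bits per emitted digit
      root = 1   # integer part of sqrt(num) so far; sqrt(2) = 1.0110101...
      while True:
          num *= 4                   # expose one more fractional bit
          candidate = 2 * root + 1   # try appending a 1-bit to everything computed so far
          if candidate * candidate <= num:
              root = candidate
              yield 1
          else:
              root = 2 * root
              yield 0

  # The first digits yielded are 0, 1, 1, 0, 1, ..., i.e. sqrt(2) = 1.01101...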

Another obvious example is pi. But here we are already at higher math concepts, much more specific than the general computation that falls out of the program of all programs described above.

Fractals

How do these relate to the program of all programs?

  • Penrose tiling – a non-periodic non-terminating structure
  • Mandelbrot set