# Foundations of mathematics

This article is speculative. It covers topics that are not straightforwardly derivable from current knowledge, so take it with a grain of salt. See "exploratory engineering" for a discussion of what can and cannot be predicted.

## Axioms - truths without reason

Without a basic set of axioms, math cannot be performed. By definition, axioms cannot be proven within their own formal system; if one does get proven, it becomes clear that it was not an axiom in the first place.

Axioms draw their value from trust built up through repeated successful application. In fact, these axioms represent the statements in which modern society puts its highest level of trust.

Still, there is no absolute guarantee: even the most fundamental axioms might turn out not to be axioms at all, and possibly even to be wrong. If so, they can only be wrong in very extreme cases; otherwise we would have recognized it much sooner.

Just as in physics, advancing capabilities may sometimes force one to change (refine) one's fundamental truths (there called "laws of nature" instead of axioms) so that they continue to be useful: from Newton to Einstein, from classical mechanics to quantum mechanics, from gravity and electromagnetism to those plus the strong and weak forces, and so on.

In the end, accepting axioms comes down to practicality and pragmatism.
It seems this somehow ties math to physics.

### How many axioms are there?

Math tries to keep the number of axioms as small as possible (and, of course, finite), and for good reason: in some sense axioms are the antithesis of science, the point where you are forced to switch to trust, faith, magic, and miracles.

But is the number of axioms really finite? There is a connection between axioms and the halting problem (or the Entscheidungsproblem) that suggests the existence of an infinite number of axioms.

By systematically (combinatorially) constructing all constructible programs (the lambda calculus is very suitable for this) and then executing them in parallel in a Cantor-style triangular fashion ("dovetailing")

• like so [p1-s1, p1-s2,p2-s1, p1-s3,p2-s2,p3-s1, p1-s4,p2-s3,p3-s2,p4-s1, ...] (p...program, s...execution step)

one runs into an unending number of programs that seem to refrain from stopping / terminating. Since it is provably impossible to have a program that can tell, for every such program (chosen from all the seemingly non-stopping ones), whether it halts or not, the only way to find out is to investigate them one by one.
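The triangular schedule above can be sketched in Python. The "programs" here are toy stand-ins (an assumption for illustration): program n halts after n steps when n is even and loops forever otherwise. The dovetailer interleaves them so that every program eventually receives arbitrarily many steps:

```python
def program(n):
    """A toy program, modelled as a generator: each `next()` is one step.
    Halts after n steps if n is even, loops forever otherwise."""
    steps = 0
    while True:
        if n % 2 == 0 and steps >= n:
            return  # halts (raises StopIteration in the caller)
        yield
        steps += 1

def dovetail(max_rounds):
    """Interleave programs in the triangular order
    [p1-s1 | p1-s2, p2-s1 | p1-s3, p2-s2, p3-s1 | ...]
    and report which ones were observed to halt."""
    running = {}   # program index -> generator (None once halted)
    halted = []
    for round_no in range(1, max_rounds + 1):
        for i in range(1, round_no + 1):  # in round k, step programs 1..k
            if i not in running:
                running[i] = program(i)
            gen = running[i]
            if gen is None:
                continue
            try:
                next(gen)
            except StopIteration:
                halted.append(i)
                running[i] = None
    return sorted(halted)

print(dovetail(10))  # → [2, 4]
```

Programs that never halt (here the odd-numbered ones) are simply observed to keep running; no finite amount of dovetailing can certify that they will never stop, which is exactly the point made above.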

It is often believed that Kurt Gödel thought the human mind is special for being capable of solving all these individual halting questions. This is not true. (TODO: find and add relevant citation and context clearing up the usual misunderstanding.) Back then, there was one physical way the human mind was (and mostly still is) special compared to computers, though: computers were strongly isolated from the universe. They had no notable sensors to speak of, and artificial intelligence was nowhere near capable. Both are requirements for judging the practicability of a new hypothesis, so computers could not, and still mostly cannot, bet on the acceptability of a new axiom.

• Related to "truths without reason" in math is "effect without cause" in physics. This seems to be present in quantum randomness and the big bang.
• It seems that what are axioms in one "framework" can be derivable in another framework with different axioms, and vice versa.

## Classic vs intuitionistic / constructive

Unconstrained (classical) math allows proof by contradiction without forcing one to give concrete examples. As a consequence, the existence of some mathematical objects can be proven even though they cannot (yet?) be constructed.
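A classic textbook illustration (a standard example, not taken from this article): one can prove that there exist irrational numbers $a, b$ with $a^b$ rational, without ever exhibiting them.

```latex
\text{Let } x = \sqrt{2}^{\sqrt{2}}.
\begin{cases}
x \in \mathbb{Q}: & \text{take } a = b = \sqrt{2}. \\
x \notin \mathbb{Q}: & \text{take } a = x,\ b = \sqrt{2}, \text{ then }
  a^b = \sqrt{2}^{\,\sqrt{2} \cdot \sqrt{2}} = \sqrt{2}^{\,2} = 2 \in \mathbb{Q}.
\end{cases}
```

The proof uses the excluded middle ($x$ is rational or it is not) and never decides which case actually holds, so no concrete pair $(a, b)$ is constructed.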

### From programming

Coming from the practical realm of programming (especially purely functional programming), it turns out that types correspond to logical propositions in math (the Curry-Howard isomorphism). Interestingly, this logic corresponds to a subset of math that comes without the aforementioned proof by contradiction.
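A minimal sketch of the correspondence, here using Python's `typing` module as the "logic" (types stand for propositions, total functions for proofs):

```python
from typing import Callable, Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

# Proposition: (A implies B) and (B implies C) together imply (A implies C).
# Under Curry-Howard, the proof is a program of the corresponding type:
# function composition.
def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    return lambda a: g(f(a))

# Proposition: (A and B) implies (B and A). Pairs play the role of "and";
# the proof simply swaps the components.
def swap(p: Tuple[A, B]) -> Tuple[B, A]:
    a, b = p
    return b, a

# Applying the composed "proof" to a witness:
length_of_repr: Callable[[int], int] = compose(str, len)
print(length_of_repr(12345))  # → 5
```

Notably, no total program inhabits the type corresponding to double-negation elimination, which reflects the absence of proof by contradiction in this logic.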

Practical programming can also be linked to category theory; this goes one step further (the Curry-Howard-Lambek isomorphism).

In category theory, the only base constructs are objects and morphisms, and objects have no internal structure. It is kind of like defining a concept not by what is inside it (intensional equality) but solely by how, and to what, it connects. It is kind of like defining existence itself: without any context, any concept would become completely meaningless and nonexistent. Given exhaustive context, missing content can be completely reconstructed. Or, in a practical setting: only if some context remains does one have a chance of doing some reconstruction. (A specific case: the total loss of a strongly-linked-to website does not remove all of its information.)

### Category theory

From the insights of category theory:

• one finds that besides the "product type" (the type of tuples), which is implemented in almost every programming language, there is also a symmetric counterpart called the "sum type" (an "either A or B or ..." type). This one is just as important (given the symmetry, this is perhaps unsurprising). The absence of this second one in many programming languages had, and still has, horrible consequences.
• one finds that one can even do calculus with types (the derivative of a datatype somewhat relates to data diffing). These things are further related to generating functions, which are used in physics to connect fundamental symmetries with conserved quantities (Noether's theorem).
• one finds many cases of the co/contra symmetry. One case is the division between "upward" constructive corecursion, building things up from the "initial element", and, in the other direction, ordinary recursion, evaluating the built-up structure. Sometimes one direction is completely natural while the other is rather surprising, or one does not even know yet what it might be useful for.
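The "calculus with types" can be made concrete in the standard presentation (assuming types written as formal power series, which goes slightly beyond what the text states): the derivative of a datatype is its type of one-hole contexts.

```latex
% A pair X \times X with one hole: the remaining component,
% plus a two-way choice of which slot the hole is in.
\partial_X (X \times X) = X + X = 2X

% Lists satisfy L(X) = 1 + X \cdot L(X), i.e. L(X) = \frac{1}{1 - X}.
% A list with one hole is the prefix before the hole
% together with the suffix after it:
\partial_X L(X) = \frac{1}{(1 - X)^2} = L(X) \times L(X)
```

This is the sense in which a derivative "points at one position" inside a data structure, which is what connects it to data diffing.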