Exotic math


This article is a stub. It needs to be expanded.

In statistical physics

A critical step in the derivation of thermodynamic potentials from microstates in statistical physics
involves a transformation from a statistically huge number of nested products into a statistically huge number of nested sums.
Conventional mathematical notation has no proper means to denote this step formally, so it comes across as hand-wavy.
It works, though.
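The article does not pin down which derivation is meant. As a hedged illustration, one common step of this kind is the interchange of the sum over all occupation-number configurations with the product over single-particle modes, as it appears in the grand partition function of a non-interacting quantum gas:

  \mathcal{Z} \;=\; \sum_{\{n_k\}} \prod_k x_k^{\,n_k}
              \;=\; \prod_k \sum_{n_k} x_k^{\,n_k},
  \qquad x_k = e^{-\beta(\epsilon_k - \mu)}

Here n_k runs over {0, 1} for fermions and {0, 1, 2, ...} for bosons. Taking the logarithm,

  \ln \mathcal{Z} \;=\; \sum_k \ln\!\Big( \sum_{n_k} x_k^{\,n_k} \Big),

turns the huge product back into a huge sum, which is what enters the thermodynamic potentials.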

In the classical scattering problem

In the case of the physical scattering problem, the standard solution approach involves Fourier transforming space back and forth.
There are several implicit limits involved in the classical scattering problem.
This makes a proper mathematical treatment very difficult (and tedious), and thus such a treatment is practically never done,
especially not in a time-limited educational context where students already struggle with the basic concepts.


On another note: there is a matrix in the denominator of a fraction involved, which is quite odd (a hedged sketch of how this typically shows up is given after the list below). Related:

  • Cauchy's integral theorem – it gets a very prominent application here, to solve an integral
  • Born–Oppenheimer approximation – and its deceptive pseudo-convergence – (to check)
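Assuming the quantum mechanical potential scattering problem is meant here (the article does not say so explicitly), the "matrix in the denominator" is typically the resolvent (Green's operator) of the free Hamiltonian, and one of the implicit limits is the i\epsilon prescription:

  G_0^{+}(E) \;=\; \frac{1}{E - H_0 + i\epsilon}

In position space this operator becomes a Fourier integral over momentum space, which is evaluated by closing the contour and applying Cauchy's theorem:

  G_0^{+}(\mathbf{r},\mathbf{r}';E)
  \;=\; \int \frac{d^3k}{(2\pi)^3}\,
        \frac{e^{i\mathbf{k}\cdot(\mathbf{r}-\mathbf{r}')}}{E - \hbar^2 k^2 / 2m + i\epsilon}
  \;=\; -\frac{m}{2\pi\hbar^2}\,
        \frac{e^{i k_0 |\mathbf{r}-\mathbf{r}'|}}{|\mathbf{r}-\mathbf{r}'|},
  \qquad k_0 = \frac{\sqrt{2mE}}{\hbar}

The limit \epsilon \to 0^+ is taken only at the very end – one of the implicit limits mentioned above.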

In generalized functions (distributions)

"Support functions" (zero everywhere except in a finite region, bounded, and infinitely often continuously differentiable)
Are not forced to be a constant function (f(x)=c) thereby seemingly contradicting Liouville's theorem (complex analysis). But they don't.
What was the reason here again? ...
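A standard textbook example of such a non-constant bump function (not taken from the article) is

  f(x) \;=\;
  \begin{cases}
    e^{-1/(1 - x^2)} & \text{for } |x| < 1 \\
    0                & \text{for } |x| \ge 1
  \end{cases}

It is infinitely differentiable everywhere and vanishes outside [-1, 1], yet it is not analytic at x = ±1: all of its derivatives vanish there while the function is not identically zero in any neighbourhood of those points.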

Related: repeated integration of the Thue–Morse sequence leading to the Fabius function, which, interestingly, is nowhere analytic.

Here's something relevant on wikipedia:
Non-analytic smooth function

In the theory of quantum chaos

In the theory of quantum chaos, obtaining the Lyapunov exponent
involves something even wilder than just matrices in exponents.
TODO: figure out what that was.

In the context of generating functions

A link between math, computer science, and physics?

In the Curry–Howard–Lambek correspondence (or isomorphism)
(the three names refer to programming-language types, logical propositions, and category-theory constructs, respectively),
data structures (like lists and trees) of a specific format (ADTs, algebraic data types – product types and sum types)
have a direct correspondence to algebraic polynomials.
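As a hedged sketch of this correspondence (standard type-theory folklore rather than anything specific to this article), here are a few algebraic data types in Haskell together with the polynomial each one denotes, writing a for the element type:

  -- ()          corresponds to  1        (exactly one value)
  -- Bool        corresponds to  1 + 1    (two values)
  -- Maybe a     corresponds to  1 + a    (nothing, or one element)
  -- (a, b)      corresponds to  a * b    (product type)
  -- Either a b  corresponds to  a + b    (sum type)

  -- A list satisfies  L(a) = 1 + a * L(a),
  -- i.e. the generating function  L(a) = 1/(1 - a) = 1 + a + a^2 + a^3 + ...
  data List a = Nil | Cons a (List a)

  -- A leaf-labelled binary tree satisfies  T(a) = a + T(a)^2.
  data Tree a = Leaf a | Branch (Tree a) (Tree a)

The closed form 1/(1 - a) is exactly the kind of "collapse to a generating function" mentioned below.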

Fascinating thing #1: These polynomials representing data structures can be collapsed down to generating functions.
And generating functions also appear in Noether's theorem
(which links invariances under transformations, a.k.a. symmetries, to conserved physical quantities).
So are there data structures corresponding to conserved physical quantities?!

Fascinating thing #2: Differentiating data structures gives new data structures with holes as derivatives,
so-called zippers. Possibly relevant for efficient data storage (diffing).
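A hedged sketch (standard zipper folklore, not from this article): differentiating the list type's generating function L(a) = 1/(1 - a) gives L'(a) = 1/(1 - a)^2 = L(a) * L(a), i.e. a one-hole context in a list is a pair of lists – the elements before the hole and the elements after it:

  -- One-hole context ("zipper" context) for a list: the reversed prefix
  -- before the hole and the suffix after it.  Matches L'(a) = L(a) * L(a).
  data ListContext a = ListContext [a] [a] deriving Show

  -- Plugging an element back into the hole recovers a plain list.
  plug :: a -> ListContext a -> [a]
  plug x (ListContext before after) = reverse before ++ x : after

  -- Example:  plug 3 (ListContext [2,1] [4,5])  ==  [1,2,3,4,5]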

Fascinating thing #3:
While finding the category-theoretical analogues of programming-language types is straightforward
(note: functions and data are treated uniformly as one and the same thing),
the reverse direction – going from category theory back to programming-language types – leads to some unexpectedly present blank spots being filled.
It's about inverse operations that at first glance don't seem to make sense.
(Maybe just like matrices in exponents and in denominators at first glance don't seem to make sense.)
But maybe these new operations do make sense in some way. It seems likely.

Just playing around

In the context of generating functions there is an analog with products instead of sums.
What was this all about again? ...

Limits of math

The program that constructs and executes in parallel all possibly constructible programs.
See: A true but useless theory of everything << Warning! You are moving into more speculative areas.

Interesting lesser-known fractals

  • Devil's staircase (Cantor function) – a small computational sketch follows after this list
  • fractal from (x ± 1) polynomials ?? ... (possibly the set of complex roots of all polynomials with coefficients ±1, which forms a well-known fractal)
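As a hedged sketch (a standard construction, not from the article): the Devil's staircase is the Cantor function, obtained by reading off the ternary digits of x and reinterpreting them as binary digits, stopping at the first digit 1. A small Haskell approximation:

  -- Approximate the Cantor function ("Devil's staircase") on [0, 1] by
  -- recursing through the first 40 ternary digits of x:
  --   digit 0 -> descend into the left third  (value halves)
  --   digit 1 -> middle third, value is exactly 1/2 at this level
  --   digit 2 -> descend into the right third (value is 1/2 plus half the rest)
  cantor :: Double -> Double
  cantor = go (40 :: Int)
    where
      go 0 _ = 0
      go n x
        | d == 1    = 0.5
        | otherwise = fromIntegral d / 4 + go (n - 1) (y - fromIntegral d) / 2
        where
          y = 3 * x
          d = min 2 (floor y) :: Int

  -- Examples:  cantor (1/3) == 0.5,  cantor (1/9) == 0.25,  cantor 1 ~= 1.0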

Related

  • In contrast, here is more generally useful math, especially for the context of APM: Useful math

External links

  • Fractional calculus – https://en.wikipedia.org/wiki/Fractional_calculus
  • Fabius function – https://en.wikipedia.org/wiki/Fabius_function
  • Flat function – https://en.wikipedia.org/wiki/Flat_function
  • Lambert W function – https://en.wikipedia.org/wiki/Lambert_W_function
  • Gamma function – https://en.wikipedia.org/wiki/Gamma_function