Purely functional programming
Purely functional programming (PFP) seems to be of high relevance in the context of APM.
For reasons why that is see the main page: Relations of APM to purely functional programming
This page is about the very basics of PFP.
For diving deeper into the topic the author advises consulting material specifically dedicated to it, of which there exists a huge amount.
Basics
Pureness
Basic (mutually implying) properties:
- forbidden: updating variables aka destructive assignments -- e.g. x := x+1
- required: "referential transparency" -- f(x) = y -- the same x in f's domain always leads to the same y in f's range -- no matter where in the code and when in the execution -- see the sketch right after this list
- allowed: "mathematical reasoning" -- e.g. substituting equals for equals: a=b ∧ b=c ⇒ a=c -- code order does not matter
For people coming from "normal" (imperative) programming it can at first be quite flabbergasting how one could possibly program without ever being allowed to overwrite a thing. (And without loops.)
Purity means that there are areas of the code where these properties are a mathematically guaranteed fact.
There are several ways to get that kind of isolation (or to not need it in the first place):
- pure Haskell uses monads for 100% tight isolation (always tying entry and exit point together) -- see the sketch right after this list
- pure Microsoft Excel (and open clones) do not face the problem of needing isolation, since all state is visible all the time
- pure OpenSCAD does not face the problem because it does batch processing only.
- pure functional reactive programming (FRP) libraries manage to do this without monads and have managed to overcome PFP's "batch processing only" stigma.
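A minimal Haskell sketch of the monadic isolation from the first list item: the IO type in a signature marks (and contains) the effectful parts, while everything without it is guaranteed pure (the function name greeting is just illustrative):

  -- Pure: no IO in the type, guaranteed free of side effects.
  greeting :: String -> String
  greeting name = "Hello, " ++ name ++ "!"

  -- Effectful: the IO type ties the effects together and keeps them visible.
  main :: IO ()
  main = do
    name <- getLine            -- effect: read a line from stdin
    putStrLn (greeting name)   -- the pure function is called from inside IO,
                               -- never the other way around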
Purity demotes debugging from an art to a technique. (Just like the difference in difficulty between integration and differentiation.) Narrowing down on an error always leads to the error.
Functionalness
It's about treating functions just as regular values (one says as "first class citizens") and thus allowing for functions to be passed as arguments to other functions. This is a powerful abstraction tool.
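A minimal Haskell sketch of functions as first class values (the names twice, increments, and demo are just illustrative):

  -- twice takes a function as an argument and applies it two times.
  twice :: (a -> a) -> a -> a
  twice f x = f (f x)

  -- Functions can also be stored in data structures like any other value.
  increments :: [Int -> Int]
  increments = [(+ 1), (+ 10), (+ 100)]

  demo :: Int
  demo = twice (+ 3) 1   -- 7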
Note that functionalness is neither a necessary nor a sufficient property for purity.
(As a name "pure and functional" would be better than the historic accident: "purely functional")
There are languages that are pure but not functional and vice versa.
Pureness and functionalness combined
There are no loops. One uses recursion instead. Usually in an implicit and meaningful form.
Preferably recursion hidden in functions that take functions as arguments (aka higher order functions). That gives: maps, folds, scans, zips, .... So explicit loops fall away as a consequence. They get replaced by more descriptive names. "Mystery loops" go the way of the "goto". Extinct.
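A minimal Haskell sketch of such named recursion schemes replacing loops (the example data is made up):

  import Data.List (foldl')

  xs :: [Int]
  xs = [1 .. 10]

  squares :: [Int]
  squares = map (^ 2) xs    -- the "transform every element" loop

  total :: Int
  total = foldl' (+) 0 xs   -- the "accumulate a result" loop

  pairs :: [(Int, Int)]
  pairs = zip xs squares    -- the "walk two sequences in lockstep" loop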
Preferably tail recursion, to avoid overflowing the stack. In case this is not possible, so called "trampolines" can help. A bit of an issue is that current HW (the Von Neumann architecture) is not optimized for PFP languages. There were some interesting experiments. (wiki-TODO: add references)
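A minimal Haskell sketch of a tail recursive sum with an explicit accumulator (the strictness annotation is the usual extra step in a lazy language; the names sumAcc and go are just illustrative):

  {-# LANGUAGE BangPatterns #-}

  -- The recursive call is the last thing that happens, so no stack frame
  -- per list element needs to be kept around.
  sumAcc :: [Int] -> Int
  sumAcc = go 0
    where
      go !acc []       = acc
      go !acc (x : xs) = go (acc + x) xs   -- tail call with updated accumulator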
Costs of purely functional programming
If PFP is better than classical imperative programming then why didn't it go mainstream? It may just have been ahead of its time (and it may still be a bit). Concurrency has become a major pain point just recently. "Recently" in terms of programming language design timescales which are very long for the digital space.
There are several old problems that have become ingrained in the collective consciousness of the programmer / developer part of global society. Many of them so deeply that they pertinaciously stay, even though technology has moved on and solved many of these old problems. The perceptual problem may have some similarities with the one that APM faces.
The batch processing stigma
A naive look suggests that IO necessarily and unavoidably breaks purity, leaving PFP forever in an isolated non-interactive space of "pure thinking" and batch processing. This manifested in the half joke: "purely functional programming is poorly functional".
Functional reactive programming (FRP) has made great progress in this area.
The general inferiority stigma
Especially in the early days there was a huge lack of purely functional data structures and the belief that they were not doable at all. By now there are a lot of purely functional data structures available, some of which were formerly thought to be impossible.
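One classic example is the persistent FIFO queue built from two lists (a simplified relative of the banker's queue from Okasaki's work on purely functional data structures); a minimal sketch with illustrative names:

  -- Front list for dequeuing, back list (kept reversed) for enqueuing.
  data Queue a = Queue [a] [a]

  emptyQ :: Queue a
  emptyQ = Queue [] []

  -- Every operation returns a new queue; the old value stays fully usable (persistence).
  enqueue :: a -> Queue a -> Queue a
  enqueue x (Queue front back) = Queue front (x : back)

  dequeue :: Queue a -> Maybe (a, Queue a)
  dequeue (Queue []      [])   = Nothing
  dequeue (Queue []      back) = dequeue (Queue (reverse back) [])  -- amortized O(1)
  dequeue (Queue (x : f) back) = Just (x, Queue f back)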
On a closer look at a lower level one can indeed prove that PFP is fundamentally weaker than non-pure programming. That is because purity has reversibility as a prerequisite. And if reversible programming were just as powerful as irreversible programming, then one could crack all asymmetric problems (which are the basis for cryptography). (wiki-TODO: this needs more elaboration & add reference)
But this does not seem to be a notable constraint for practical problems. To gain more safety some programmers are willing to accept a lot more self-constraint by giving up even on Turing-completeness. (See: "total programming".)
Even things that at first glance seem only doable imperatively (like e.g. the generation of mazes) have turned out to be very doable in PFP languages and are even elegant and comprehensible at that.
The static typing stigma
- Early forms of static typing gave more pain than benefit and left a very bad and lasting after-taste.
- Type inference made things a lot better. (Still much room for improvement e.g. in terms of error message comprehensibility)
Static typing promises all kinds of automated refactoring without the IDE needing to reverse engineer the code first. See: Visually augmented purely functional programming
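A minimal Haskell sketch of what type inference and compile time checking buy (the names applyAll and broken are just illustrative):

  -- No type signature is written; GHC infers the most general type
  --   applyAll :: [a -> a] -> a -> a
  -- entirely on its own.
  applyAll fs x = foldr ($) x fs

  -- Uncommenting the next line gets rejected at compile time, long before
  -- any test run or debugging session:
  -- broken = applyAll [(+ 1), length] 'x'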
Static typing based structural editors:
This is an area where a lot of still unfathomed potential seems to lie quite close at hand.
If done well, structured editing promises the "impossibilification" of any and all syntax errors while not at all restricting expressiveness or creativity. Also you can get maximally fine grained incremental debugging. So getting slews of consequential errors puked in your face would be mostly a thing of the past.
With this, programming could quite literally become like playing a puzzle game, with the libraries as construction kits.
(No, most definitely NOT referring to "Scratch" and the like here!)
You can only put things together that do actually fit together
but you can do so in all directions. Meaning at all open ends aka "type holes".
And for the spots where things don't fit together you can chain together adapters that make the ends meet.
Or you can break whole pieces up in the middle and thereby create two pieces with matching "type holes".
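GHC's "typed holes" already give a taste of this today: leaving an underscore where code is still missing makes the compiler report the exact type of the missing piece, plus bindings that would fit. A minimal sketch (the function name pairUp is just illustrative, and the reported message is paraphrased):

  -- The underscore is a deliberate placeholder; compilation stops with an
  -- informative report instead of running.
  pairUp :: [a] -> [b] -> [(a, b)]
  pairUp xs ys = _ xs ys

  -- GHC answers roughly:
  --   Found hole: _ :: [a] -> [b] -> [(a, b)]
  --   Valid hole fits include: zip
  -- i.e. it reports the exact shape of the missing puzzle piece.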
More visual representations come quite naturally. And since they are isomorphic to the code they cannot be weaker than the code.
Visual programming languages come with a HUGE stigma of being non-scalable toys for beginners only.
They usually grow into monstrous krakens and fail miserably at scaling.
What few realize is that almost all of them were mostly "invented and not discovered". Existing visual programming languages are often pure (they lend themselves to that - dataflow) but they are very rarely functional (functional meaning, in a graphical sense, that graphs can flow through graphs).
John Tromp's pretty compact lambda diagrams [1] may be an interesting starting point for a visualization that may be useful for certain kinds of tasks.
With visual representations (if done right) many things can become much more clear and obvious. Like e.g. the duality between constructive corecursion and deconstructive recursion.
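A minimal Haskell sketch of that duality: foldr consumes (deconstructs) a finite list step by step, while unfoldr produces (constructs) a possibly infinite list step by step (the names sumR and countdown are just illustrative):

  import Data.List (unfoldr)

  -- Deconstructive recursion: tear a list down into a summary value.
  sumR :: [Int] -> Int
  sumR = foldr (+) 0

  -- Constructive corecursion: build a list up from a seed value.
  countdown :: Int -> [Int]
  countdown = unfoldr step
    where
      step 0 = Nothing           -- stop producing
      step n = Just (n, n - 1)   -- emit n, then continue from n - 1

  -- countdown 5        ==  [5, 4, 3, 2, 1]
  -- sumR (countdown 5) ==  15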
Relation to OOP (object oriented programming)
Bartosz Milewski: >> "Unfortunately, objects don't compose in the presence of concurrency. They hide the wrong kind of things. They hide sharing and mutation. Let me quote the definition of data race: Two or more threads accessing the same piece of memory at the same time, at least one of them writing. In other words: Sharing + Mutation = Data Race. Nothing in the object’s interface informs you about the possibility of sharing and mutation inside the object’s implementation. Each object in isolation may be data-race-free but their composition may inadvertently introduce data races. And you won’t know about it unless you study the details of their implementation down to every single memory access." Source: [2]
Warning! You are moving into more speculative areas.
- Rust may have solved the problem by introducing purity-like guarantees via its ownership and borrowing system (akin to linear types)??
- But if it did, it seems that came at the big expense of significantly reduced expressiveness??
(wiki-TODO: look deeper into that)
Related
- Software in general
- Relations of APM to purely functional programming
- Visually augmented purely functional programming
- Reversible computing
- General software issues
- Programming languages
- Mathematical substitutability
External links
- the-downfall-of-imperative-programming by Bartosz Milewski
Wikipedia: