Visually augmented purely functional programming


Programming the physical world entails a lot of 3D-modelling.

  • 3D-models for production are usually non-interactive.
  • 3D-models are usually described with scene graphs, which have no notion of time or execution order - they are declarative (animation can be treated statically too, as just one more dimension).

This made it easy to create 3D-modelling languages (e.g. OpenSCAD, a quasi-standard in the RepRap maker community) that build on scene graphs and remain declarative in nature.

Declarative programming has a number of advantages, including bug-tracking that is guaranteed to converge, since there are no implicit global cross-interactions. Its disadvantages include data structures with random access being difficult to implement and, as already mentioned, the lack of interactivity. Thus more and more non-declarative (imperative) wrapper languages appear that output auto-generated OpenSCAD code.

I'll argue that this is the wrong way to go. Instead, the declarative language OpenSCAD should be upgraded to a fully fledged purely functional programming language (functions treated as values, aka functions as "first-class citizens" - see the sketch after the following list). While retaining the benefits of declarative programming, this would eliminate these problems:

  • advanced functional datastructures -- like in Haskell
  • multimedia interactivity -- like in Elm (+ live program transparency)
  • truly scalable graphical programming -- like in Conal Elliott's toy demo Eros
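
To make "functions treated as values" a bit more concrete, here is a minimal Haskell sketch of a scene-graph-like type together with a higher-order modelling combinator. All names (Solid, pillar, rowOf) are hypothetical and only illustrate the idea; they are not OpenSCAD:

  -- Hypothetical, minimal scene-graph type in the spirit of OpenSCAD's CSG tree.
  data Solid
    = Cube Double                               -- edge length
    | Translate (Double, Double, Double) Solid
    | Union [Solid]
    deriving Show

  -- Because functions are plain values, a parametric part is just a function ...
  pillar :: Double -> Solid
  pillar h = Translate (0, 0, h / 2) (Cube h)

  -- ... and higher-order combinators can capture repetition declaratively.
  rowOf :: Int -> Double -> (Double -> Solid) -> Solid
  rowOf n spacing part =
    Union [ Translate (fromIntegral i * spacing, 0, 0) (part 1.0) | i <- [0 .. n - 1] ]

  main :: IO ()
  main = print (rowOf 3 2.0 pillar)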

Today's graphical programming languages (2016) are usually considered a rather sparse notation, yet a picture is said to be worth a thousand words - so clearly we must be doing something wrong. Images can carry a higher information density than words and are often easier for our minds to process.

Extending Conal Elliott's concept and Elm's first-order functional reactive programming, I think a lot more could be done:

  • the standard program transformations that all programmers regularly perform - usually in a far too labour-intensive way and without even thinking about them - could be simplified to the bare minimum of work
  • a graphical visualization of modern type systems - with type inference, type classes, types of types (aka kinds), and so on - could be made (see the sketch below)
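
As a reminder of what such a visualization would have to cover, a small Haskell sketch of type inference, a type class, and a kind annotation (all names are made up for illustration):

  {-# LANGUAGE KindSignatures #-}
  import Data.Kind (Type)

  -- Type inference: no signature needed, GHC infers  twice :: (a -> a) -> a -> a
  twice f = f . f

  -- A type class groups types by capability.
  class Pretty a where
    pretty :: a -> String

  instance Pretty Bool where
    pretty True  = "yes"
    pretty False = "no"

  -- "Types of types" (kinds): Maybe has kind  Type -> Type;  here the type
  -- constructor parameter f is annotated with that same kind.
  newtype Wrapped (f :: Type -> Type) a = Wrapped (f a)

  main :: IO ()
  main = putStrLn (pretty True ++ " / " ++ show (twice (+ 1) (0 :: Int)))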

The resulting software production system would be applicable not just to 3D-modelling but to just about any problem, including future highly interactive macroscale nanosystems. It may even be a revolution in software development.

Program Transformations

Drag and drop to:

  • change between tuple-input and separate function parameters -- aka curry/uncurry (see the sketch after this list)
  • change the order of the function arguments or the order of the tuple elements (self-propagating)
  • move a helper-function one level more general/global* - or all the way to the global top level -- (*if helper functions have helper functions)
  • move a function one level more specialized/local (if it is used in multiple places, independently changeable copies will emerge)
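
The first two drag-and-drop moves correspond to the standard Haskell Prelude functions curry, uncurry and flip; a minimal sketch with a made-up example function:

  -- A function taking two separate parameters ...
  area :: Double -> Double -> Double
  area w h = w * h

  -- ... and the same function taking a tuple: exactly what uncurry/curry do.
  areaTupled :: (Double, Double) -> Double
  areaTupled = uncurry area            -- parameters dragged into a tuple

  areaAgain :: Double -> Double -> Double
  areaAgain = curry areaTupled         -- and back to separate parameters

  -- Swapping the order of two arguments is flip (self-propagating at all call sites).
  areaFlipped :: Double -> Double -> Double
  areaFlipped = flip area

  main :: IO ()
  main = mapM_ print [area 3 4, areaTupled (3, 4), areaAgain 3 4, areaFlipped 4 3]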

Click to:

  • consistently rename variables (self-propagating) -- aka alpha conversion
  • substitute variables with their definition -- aka beta reduction
  • make anonymous functions explicit or vice versa -- aka eta conversion (see the sketch after this list)
  • do simple out-of-context unit testing
    decouple a single function argument, i.e. replace only one parameter with a test input
    decouple all function arguments, i.e. replace all parameters with test inputs
  • open a single function context for visualization and complexity control of function parameters and the helper-function-context
    [let x1=..,x2=..,xN=.. in -- function_definition(xi,yi) -- where y1=..,y2=..,yN=..]
  • open one or all sub-contexts of the in-focus-function (sub-contexts = definitions of the functions which define the in-focus-function) -- (only the first layer - no sub-sub-contexts)
  • open one or all super-contexts of the in-focus-function (super-contexts = the functions which use the in-focus-function to define themselves) -- (only the first layer - no super-super-contexts)
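
For reference, the conversions behind the first three click actions, plus the partial application behind decoupling a single test input, written out as plain Haskell (a sketch with made-up example functions):

  -- alpha conversion: consistent renaming of a bound variable.
  double1, double2 :: Int -> Int
  double1 = \x -> x + x
  double2 = \y -> y + y            -- the same function, binder renamed

  -- beta reduction: substitute a variable by its definition (inlining).
  before, after :: Int
  before = let n = 2 + 3 in n * n
  after  = (2 + 3) * (2 + 3)       -- n replaced by its definition

  -- eta conversion: make the anonymous function explicit, or drop it again.
  succExplicit, succPointFree :: Int -> Int
  succExplicit  = \x -> succ x     -- eta-expanded
  succPointFree = succ             -- eta-reduced

  -- decoupling a single argument for an out-of-context test is partial application.
  scale :: Double -> Double -> Double
  scale factor x = factor * x

  scaleTest :: Double -> Double
  scaleTest = scale 2.0            -- one parameter replaced by a test input

  main :: IO ()
  main = do
    print (double1 5, double2 5)
    print (before, after)
    print (succExplicit 1, succPointFree 1)
    print (scaleTest 10)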

Regarding the last two, note that:

  • breadth search corresponds to divide-and-conquer "wishful thinking", i.e. assuming that subproblems can be solved and delaying that task for later - applies to both program creation and program execution -- (aka normal-order evaluation - lazy evaluation - short-circuit evaluation)
  • depth search corresponds to constructive buildup from simple building blocks -- (aka applicative-order evaluation - eager evaluation) - see the sketch below
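
Since Haskell itself evaluates in normal (lazy) order, the "wishful thinking" correspondence can be demonstrated directly; a small sketch, using seq to force the eager flavour for contrast:

  -- Normal-order (lazy) evaluation: the "wishful thinking" branch.
  -- A subproblem may be named without being solved, as long as its value is never demanded.
  lazyOk :: Int
  lazyOk =
    let expensive = error "never needed" :: Int
    in  const 42 expensive                        -- fine: expensive is never forced

  -- Applicative-order (eager) flavour: seq forces the argument first.
  eagerFails :: Int
  eagerFails =
    let expensive = error "needed now" :: Int
    in  expensive `seq` const 42 expensive        -- would crash when evaluated

  main :: IO ()
  main = print lazyOk   -- prints 42; printing eagerFails instead would raise the error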

Regarding the reverse of beta reduction:

  • substitute equal definitions with a new variable -- aka "beta expansion" -- the IDE could try to find and show the differences that need to be removed to achieve extensional equality (equality in the outside behavior, not necessarily in the inside implementation), but this is an art rather than a technique, since it is provable that functions fundamentally cannot be tested for equality (halting problem) - see the sketch below
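
A sketch of what such an IDE could and could not do here: two textually different but extensionally equal definitions whose equality can only be sampled, never proven in general (the function names and test inputs are made up for illustration):

  -- Two textually different definitions with the same outside behaviour.
  sumTo1 :: Int -> Int
  sumTo1 n = sum [1 .. n]

  sumTo2 :: Int -> Int
  sumTo2 n = n * (n + 1) `div` 2

  -- "Beta expansion" would merge both under one shared name; the IDE can only
  -- gather evidence for extensional equality, e.g. by sampling inputs:
  agreeOn :: [Int] -> Bool
  agreeOn xs = all (\x -> sumTo1 x == sumTo2 x) xs

  main :: IO ()
  main = print (agreeOn [0 .. 100])   -- True, yet no finite test proves equality in general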

Note

This outline is based on a mapping of lambda calculus and a tiny generic bit of Haskell to a graphically augmented programming system.

In plain lambda calculus all functions, without exception, have exactly one input and exactly one output. The most obvious difference between Haskell functions and plain lambda calculus is that all lambdas of a function are written up front. This splits the lambda calculus tree at the application nodes (?) (lambdas buried in let-in and where clauses).
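
For concreteness, a small sketch of this correspondence: the usual Haskell surface syntax with a where clause, next to the same function written as nested one-argument lambdas with the helper as an explicitly bound lambda:

  -- Surface Haskell: all parameters written in front, plus a where clause.
  f :: Int -> Int -> Int
  f x y = helper x + y
    where helper a = a * a

  -- The same function spelled out as nested one-argument lambdas, with the
  -- helper turned into an explicitly bound ("buried") lambda.
  f' :: Int -> Int -> Int
  f' = \x -> \y -> let helper = \a -> a * a in helper x + y

  main :: IO ()
  main = print (f 3 4, f' 3 4)   -- both 13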

Related

External Links