Data decompression chain

This article is a stub. It needs to be expanded.

This article defines a novel term (that is hopefully sensibly chosen). The term is introduced to make a concept more concrete and to clarify its interrelationships with other topics related to atomically precise manufacturing. For details go to the page: Neologism.

The "data decompression chain" is the sequence of expansion steps from

  • very compact, highest-level abstract blueprints of technical systems to
  • discrete and simple lowest-level instances with a much larger data footprint.

3D modeling

(Figure: Constructive solid geometry graph (CSG graph). Today (2017) often still at the top of the chain.)

Programmatic high-level 3D modelling representations (i.e. code) can
be considered a highly compressed data representation of the target product.

The principal rule of programming, "don't repeat yourself" (DRY), applies here:

  • Multiply occurring objects (including e.g. rigid body crystolecule parts) are specified only once, plus the locations and orientations (poses) of their occurrences (a minimal sketch follows below this list).
  • Curves are specified in a not yet discretized (e.g. not yet triangulated) way. See: Non-destructive modelling
  • Complex (and perhaps even dynamic) assemblies are also encoded compactly, such that they unfold into their full complexity on code execution.
    An example: laying out gemstone based metamaterials in complex dynamically interdigitating/interlinking/interweaving ways.
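
A minimal sketch of this "specify once, give poses for each occurrence" idea, in Python (all names here are hypothetical illustrations, not an existing API):

  from dataclasses import dataclass
  from typing import List, Tuple

  @dataclass(frozen=True)
  class Part:
      name: str
      triangle_count: int  # stand-in for the full geometry data

  @dataclass(frozen=True)
  class Pose:
      position: Tuple[float, float, float]
      rotation_deg: Tuple[float, float, float]  # Euler angles, for brevity

  @dataclass
  class Assembly:
      part: Part         # the geometry, stored exactly once
      poses: List[Pose]  # one lightweight entry per occurrence

  gear = Part("crystolecule_gear", triangle_count=5000)
  rack = Assembly(gear, [Pose((i * 2.0, 0.0, 0.0), (0.0, 0.0, 0.0)) for i in range(100)])

  # 100 occurrences, but the heavy geometry data exists only once:
  print(len(rack.poses), "poses reference", rack.part.triangle_count, "triangles stored once")

The compression win grows with the number of occurrences: each additional instance costs only one pose, not another copy of the geometry.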

Note: "Programmatic" does not necessarily mean purely textual and in: "good old classical text editors".
Structural editors might and (as to the believe of the author) eventually will take over
allowing for an optimal mixing of textual and graphical programmatic representation of target products in the "integrated deveuser interfaces".

The decompression chain in gem-gum factories (and 3D printers)

The list goes:

  • from top: high level, small data footprint
  • to bottom: low level, large data footprint

  • high-level language 1: functional, logical, connection to a computer algebra system
  • high-level language 2: imperative, functional
  • Volume based modeling with the "level set method" or even "signed distance fields"
    (organized in CSG graphs which reduce to the three operations: sign-flip, sum, and maximum; see the sketch after this list)
  • Surface based modeling with parametric surfaces (organized in CSG graphs)
  • quadric nets, C1 (rarely employed today, 2017)
  • triangle nets, C0
  • tool-paths
  • Primitive signals: step-signals, rail-switch-states, clutch-states, ...
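
As a minimal sketch of the volume based step (assuming the common signed distance convention: negative inside, positive outside), intersection is a pointwise maximum, the complement is a sign-flip, and union then follows from these two via De Morgan:

  import math

  def sphere(radius):
      return lambda p: math.dist(p, (0.0, 0.0, 0.0)) - radius

  def translate(field, offset):
      return lambda p: field(tuple(pi - oi for pi, oi in zip(p, offset)))

  def sign_flip(field):        # complement of the enclosed volume
      return lambda p: -field(p)

  def maximum(f, g):           # intersection of two volumes
      return lambda p: max(f(p), g(p))

  def union(f, g):             # min(f, g) == -max(-f, -g) (De Morgan)
      return sign_flip(maximum(sign_flip(f), sign_flip(g)))

  shape = union(sphere(1.0), translate(sphere(1.0), (1.5, 0.0, 0.0)))
  print(shape((0.75, 0.0, 0.0)) < 0.0)  # True: the point lies inside the union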

(TODO: add details to decompression chain points)

Quadric nets

This is highly unusual but seems interesting (state 2021).
One could say that quadric nets are, in some way, an intermediary representation of 3D geometry lying between

  • a general, arbitrary function representation and
  • a triangulated representation.

Piecewise defined quadric mesh surfaces can be made once continuously differentiable (C1).
See: Wikipedia: Differentiability classes
(TODO: Find out whether pretty much all surfaces of practical interest can be "quadriculated" just as they can be "triangulated".)

The second derivative of a function representation (in the form of a scalar field) gives the Hesse matrix, where

  • its eigenvectors give the principal curvature directions (with the eigenvalues giving the corresponding curvatures) and
  • its determinant gives, up to a positive normalization factor, the Gaussian curvature (less than zero: saddle; equal to zero: valley/ridge/plane; greater than zero: hill/trough).
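
A small symbolic sketch of this with sympy (an illustrative saddle-shaped quadric height field; the sign classification above holds because the omitted normalization factor is always positive):

  import sympy as sp

  x, y = sp.symbols("x y")
  f = x**2 - y**2            # a saddle-shaped quadric height field

  H = sp.hessian(f, (x, y))  # Hesse matrix of second derivatives
  print(H)                   # Matrix([[2, 0], [0, -2]])
  print(H.eigenvals())       # {2: 1, -2: 1}: curvatures of opposite sign
  print(H.det())             # -4 < 0  ->  saddle, matching the rule above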

Side-note: Projections from 3D down to 2D turn quadrics into conic sections of the same or lower degree.
These can be re-extruded back to 3D. It may be interesting to implement this functionality in programmatic 3D modelling tools.

It seems that exactly calculating convex hulls of quadric nets is possible only in a few special cases.
This is different for triangle meshes, where convex hulls can always be computed.
Convex hulls can be quite useful in 3D modelling, but it seems they are only applicable quite far down the decompression chain.

Triangle nets

  • There is an enormous mountain of theoretical work about them.
  • Often it might be desirable to skip this step and go directly
    –– from some functional representation
    –– to tool-paths.
    Today's (2021) FDM/FFF 3D printers pretty much all go through the intermediary representation of triangle meshes, though.
  • They are difficult to handle since they offer a gazillion ways of producing bad geometry
    (degenerate triangles, more than two faces meeting at one edge, vertices lying on edges, mesh holes, flipped normals, ...); a minimal validity check is sketched below this list.
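
A minimal validity check in plain Python (covering only some of the failure modes listed above: degenerate triangles, hole edges, and non-manifold edges; detecting flipped normals would need an additional orientation-consistency pass):

  from collections import Counter

  def undirected_edges(tri):
      a, b, c = tri
      return [tuple(sorted(e)) for e in ((a, b), (b, c), (c, a))]

  def check_mesh(triangles):
      """Each triangle is a triple of vertex indices."""
      problems = []
      for t in triangles:
          if len(set(t)) < 3:  # repeated vertex index
              problems.append(("degenerate triangle", t))
      # On a closed 2-manifold every edge belongs to exactly two triangles.
      counts = Counter(e for t in triangles for e in undirected_edges(t))
      for edge, n in counts.items():
          if n == 1:
              problems.append(("boundary/hole edge", edge))
          elif n > 2:
              problems.append(("non-manifold edge (>2 faces)", edge))
      return problems

  # A tetrahedron with one face missing: three hole edges get reported.
  tet_minus_one_face = [(0, 1, 2), (0, 1, 3), (0, 2, 3)]
  print(check_mesh(tet_minus_one_face))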

Tool-paths

Related:

Primitive signals

These are the many places where the control subsystem finally comes together with the power subsystem.
Signals are amplified to drive the motions, but possibly also to recuperate energy from the motions of the nano-robotics.
Related:

(Compilation) Targets

Besides the actual physical product, another desired output of the code is a mere digital preview.
So there are several desired outputs for one and the same code (a toy sketch follows after the list below).
Maybe useful for compiling the same code to different targets (as present in this context): Compiling to categories (Conal Elliott)

Possible desired outputs include but are not limited to:

  • the actual physical target product object
  • a virtual simulation of the potential product (2D or some 3D format)
  • an approximation of the output in the form of utility fog?
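
A much simplified toy sketch of the multiple-targets idea (hypothetical types, not the actual approach of the compiling-to-categories paper): one and the same product description gets interpreted once as a preview and once as stand-in build instructions:

  from dataclasses import dataclass

  @dataclass(frozen=True)
  class Sphere:
      radius: float

  @dataclass(frozen=True)
  class Union:
      left: object
      right: object

  def to_preview(node, indent=0):
      # Target 1: a textual preview of the design tree.
      pad = "  " * indent
      if isinstance(node, Sphere):
          return f"{pad}sphere r={node.radius}"
      return (f"{pad}union\n" + to_preview(node.left, indent + 1)
              + "\n" + to_preview(node.right, indent + 1))

  def to_build_steps(node):
      # Target 2: a stand-in for lower-level fabrication instructions.
      if isinstance(node, Sphere):
          return [f"fabricate sphere r={node.radius}"]
      return to_build_steps(node.left) + to_build_steps(node.right) + ["join parts"]

  design = Union(Sphere(1.0), Sphere(0.5))
  print(to_preview(design))
  print(to_build_steps(design))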

3D modeling & functional programming

Modeling of static 3D geometry is purely declarative.

...

Similar situations in today's computer architectures

  • high-level language ->
  • compiler infrastructure (e.g. LLVM) ->
  • assembly language ->
  • actual actions of the target data processing machine

Bootstrapping of the decompression chain

One of the concerns regarding the feasibility of advanced productive nanosystems is the worry that all the necessary data cannot be fed to the nanomachinery quickly enough.

(The lowest levels of the chain are mostly hard coded and don't need much data, by the way.)

For example, the size comparison in E. Drexler's TEDx talk (2015) at 13:35 can (if taken too literally)
lead to the misjudgment that there is a fundamentally insurmountable data bottleneck.
Of course, trying to feed yottabits per second over those few pins would be ridiculous and impossible, but that is not what is planned.
(wiki-TODO: move this topic to Data IO bottleneck)

We already know how to avoid such a bottleneck.
Although we program computers with our fingers, delivering just a few bits per second,
computers now move petabits per second internally.

The goal is reachable by gradually building up a hierarchy of decompression steps.
The lowest-level, highest-volume data is generated internally and locally, very near to where it is finally "consumed".
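
A toy calculation with made-up numbers to illustrate the point:

  # Each stage of the decompression chain expands the data it receives by some
  # local factor, so the externally supplied data rate can stay tiny while the
  # lowest-level data rate becomes huge. (Illustrative numbers only.)
  input_rate_bit_per_s = 10.0                    # roughly "typing speed"
  expansion_factors = [1e3, 1e3, 1e3, 1e3, 1e3]  # five hypothetical stages

  rate = input_rate_bit_per_s
  for i, factor in enumerate(expansion_factors, start=1):
      rate *= factor
      print(f"after stage {i}: {rate:.1e} bit/s (generated locally)")
  # 10 bit/s * (10^3)^5 = 10^16 bit/s; none of it squeezed through the input pins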

Related

External Links

Wikipedia