Data decompression chain


This article is a stub. It needs to be expanded.

This article defines a novel term (that is hopefully sensibly chosen). The term is introduced to make a concept more concrete and to clarify its interrelationship with other topics related to atomically precise manufacturing. For details go to the page: Neologism.

The "data decompression chain" is the sequence of expansion steps from

  • very compact highest level abstract blueprints of technical systems to
  • discrete and simple lowest level instances that are much larger in size.
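
To make that size gap concrete, here is a hedged back-of-the-envelope sketch in Python. The blueprint size and bits-per-placement figures are illustrative assumptions; only the atom count of diamond follows from its textbook density.

 # Back-of-the-envelope estimate of the expansion along the chain.
 # Assumed: ~1 kB blueprint and ~1 byte per atom placement; the atom count
 # for 1 cm^3 of diamond (~1.76e23) follows from its density of 3.51 g/cm^3.
 blueprint_bits = 1_000 * 8             # assumed compact high-level blueprint
 atoms = 1.76e23                        # atoms in 1 cm^3 of diamond
 placement_bits = atoms * 8             # assumed ~1 byte per placement step
 print(f"expanded data:   {placement_bits:.2e} bits")
 print(f"expansion ratio: {placement_bits / blueprint_bits:.1e}x")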

3D modeling

Constructive solid geometry graphs (CSG graphs). Today (2017) these are often still at the top of the chain.

(TODO: add details)

  • high-level language 1: functional, logical, connection to a computer algebra system
  • high-level language 2: imperative, functional
  • Volume-based modeling with the "level set method" or even "signed distance fields"
    (organized in CSG graphs, which reduce to the three operations sign-flip, sum, and maximum; see the sketch after this list)
  • Surface-based modeling with parametric surfaces (organized in CSG graphs)
  • quadric nets, C1 continuity (rarely employed today, 2017)
  • triangle nets, C0 continuity
  • tool-paths
  • Primitive signals: step-signals, rail-switch-states, clutch-states, ...
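
As a minimal sketch of the signed-distance-field bullet above: an SDF maps a point to its signed distance from the surface (negative inside), and the usual sharp CSG set operations then reduce to sign-flips and maxima (sums show up in coordinate offsets such as translation). The function names are illustrative, not any particular library's API.

 from math import sqrt

 # Each shape is an SDF: point -> signed distance (negative inside).
 def sphere(r):
     return lambda x, y, z: sqrt(x*x + y*y + z*z) - r

 def translate(f, dx, dy, dz):           # sums: coordinate offsets
     return lambda x, y, z: f(x - dx, y - dy, z - dz)

 def complement(f):                      # sign-flip: inside <-> outside
     return lambda x, y, z: -f(x, y, z)

 def intersection(f, g):                 # maximum
     return lambda x, y, z: max(f(x, y, z), g(x, y, z))

 def union(f, g):                        # minimum = sign-flip + maximum
     return complement(intersection(complement(f), complement(g)))

 def difference(f, g):                   # intersect with the complement
     return intersection(f, complement(g))

 # A tiny CSG graph: a unit sphere with a smaller off-center sphere cut away.
 model = difference(sphere(1.0), translate(sphere(0.5), 0.7, 0.0, 0.0))
 print(model(0.0, 0.0, 0.0))             # negative: the origin lies inside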

Targets

  • physical object
  • virtual simulation

Maybe useful for compiling the same code to the different targets present in this context: Compiling to categories (Conal Elliott)

3D modeling & functional programming

Modeling of static 3D models is purely declarative.

  • example: OpenSCAD (see the sketch below)
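
A hedged illustration of what "purely declarative" means here: the model is plain data (a CSG expression tree) rather than a sequence of commands, so it can be inspected, analyzed, or compiled without being executed. The node encoding below is an illustrative assumption, not OpenSCAD's actual representation.

 # The model is a value, not a program: a nested-tuple CSG expression tree.
 model = ("difference",
          ("sphere", 1.0),
          ("translate", (0.7, 0.0, 0.0), ("sphere", 0.5)))

 def depth(node):
     """Walk the tree without evaluating it; declarative data allows such analysis."""
     children = [c for c in node[1:]
                 if isinstance(c, tuple) and isinstance(c[0], str)]
     return 1 + max(map(depth, children), default=0)

 print(depth(model))  # 3: difference -> translate -> sphere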

...

Similar situations in today's computer architectures

  • high-level language ->
  • compiler infrastructure (e.g. LLVM) ->
  • assembly language ->
  • actual actions of the target data processing machine (rough size sketch below)
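
The data expansion along that chain can be sketched the same way; the byte counts below are rough assumptions for a small example program, not measurements.

 # Illustrative data growth along a compilation chain (assumed sizes).
 stages = [
     ("high-level source (one line)",       40),
     ("compiler IR (e.g. LLVM IR)",        400),
     ("assembly listing",                 1200),
     ("executed machine actions (trace)", 200_000),
 ]

 prev = None
 for name, size in stages:
     growth = f" ({size / prev:.0f}x)" if prev else ""
     print(f"{name:36s} ~{size:>8,} bytes{growth}")
     prev = size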

Bootstrapping of the decompression chain

One of the concerns regarding the feasibility of advanced productive nanosystems is the worry that all the necessary data cannot be fed to

  • the mechanosynthesis cores and
  • the crystolecule assembly robotics.

The former are mostly hard coded and don't need much data, by the way.

For example, the size comparison in E. Drexler's TEDx talk (2015) 13:35 can (if taken too literally)
lead to the misjudgment that there is a fundamentally insurmountable data bottleneck.
Of course trying to feed yottabits per second over those few pins would be ridiculous and impossible, but that is not what is planned.
(wiki-TODO: move this topic to Data IO bottleneck)

We already know how to avoid such a bottleneck.
Although we program computers with our fingers, delivering just a few bits per second,
computers now move petabits per second internally.
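
Rough arithmetic behind that comparison, with order-of-magnitude assumptions for both rates:

 typing_bits_per_s = 10         # assumed: a few bits per second from our fingers
 internal_bits_per_s = 1e15     # assumed: on the order of petabits per second
 print(f"amplification: {internal_bits_per_s / typing_bits_per_s:.0e}x")  # ~1e14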

The goal is reachable by gradually building up a hierarchy of decompression steps.
The lowest-level, highest-volume data is generated internally and locally, very near to where it is finally "consumed".
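
A minimal sketch of why such a hierarchy suffices: per-stage expansion factors multiply, so a handful of modest stages spans the whole gap. The factors below are illustrative assumptions.

 # Assumed per-stage expansion factors: blueprint -> ... -> primitive signals.
 stage_factors = [1e2, 1e3, 1e3, 1e3, 1e3]

 total = 1.0
 for i, factor in enumerate(stage_factors, start=1):
     total *= factor
     print(f"after stage {i}: cumulative expansion {total:.0e}x")
 # Five modest stages already span ~1e14x, matching the gap estimated above.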

Related

External Links

Wikipedia