Data IO bottleneck

This article is a stub. It needs to be expanded.

There's an incremental path to the more advanced systems. So we'll end up with a hierarchical data management and transmission subsystem in these systems, with local data caches and so on.
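
A minimal sketch of what such a local data cache in one node of that hierarchy could look like (a pure toy model in Python; names like <code>AssemblyNode</code> and all sizes are hypothetical illustrations, not taken from any concrete design):

<syntaxhighlight lang="python">
# Hypothetical sketch: one node in a hierarchical nanofactory control tree.
# The node keeps a local cache of already expanded instruction blocks, so
# repeated requests for standard parts cost almost no top-down data traffic.

class AssemblyNode:
    def __init__(self, name, expand_fn):
        self.name = name
        self.expand_fn = expand_fn   # turns a compact part ID into detailed instructions
        self.cache = {}              # local data cache: part ID -> expanded instructions
        self.bytes_received = 0      # top-down traffic actually needed

    def request(self, part_id):
        """Return detailed instructions for part_id, expanding only on a cache miss."""
        self.bytes_received += len(part_id)   # only the compact ID travels down
        if part_id not in self.cache:
            self.cache[part_id] = self.expand_fn(part_id)
        return self.cache[part_id]


def expand(part_id):
    # Stand-in for local generation of detailed placement instructions.
    return [f"{part_id}:step{i}" for i in range(1000)]


node = AssemblyNode("block-assembler-7", expand)
for _ in range(10_000):              # the same standard part requested many times
    instructions = node.request("gear_v1")

print(node.bytes_received)           # ~70 kB of IDs instead of 10,000 x 1000 detailed steps
</syntaxhighlight>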

If one wants a simple block of featureless material, all the nanomachinery at the bottom needs to do the same operations. There's no need to control each and every manipulator from the very top of this hierarchy. Threading all that data all the way through from top to bottom fully in parallel would indeed be impossible.
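
A back-of-the-envelope illustration of that point, with purely made-up numbers for the unit count and routine size:

<syntaxhighlight lang="python">
# Illustrative only: top-down data for a featureless block when one operation
# sequence is broadcast versus naively sent to each manipulator individually.

N_UNITS = 10**12         # assumed number of bottom-level placement units
OP_SEQ_BYTES = 10**3     # assumed size of one shared placement routine

broadcast_data = OP_SEQ_BYTES            # one copy, replicated locally down the hierarchy
per_unit_data = OP_SEQ_BYTES * N_UNITS   # threading everything through top to bottom

print(f"broadcast: {broadcast_data:.1e} B, per unit: {per_unit_data:.1e} B")
# broadcast: 1.0e+03 B, per unit: 1.0e+15 B -- the latter is what would be impossible to thread through
</syntaxhighlight>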

In case one wants more complicated heterogeneous structures, the amount of data won't rise by too much either. Why is that? It's because of something like reverse data compression.

To elaborate:

Although we often feed our computers exclusively via the keyboard, with just a few keystrokes per minute (no high data rate inputs here like e.g. images), the amount of data and the seeming complexity that computers generate for us from that input can be gigantic. One example with absolutely minimalistic input and very big output would be the generation of those pretty Mandelbrot set zoom videos.
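
As a concrete toy illustration of this "tiny input, gigantic output" effect, the following sketch (plain Python; zoom target and resolution chosen arbitrarily) expands a handful of parameters into tens of thousands of structured pixel values:

<syntaxhighlight lang="python">
# A few dozen bytes of parameters ...
CENTER = complex(-0.743643887, 0.131825904)   # arbitrary zoom target
SCALE = 0.002                                 # half-width of the viewed region
WIDTH, HEIGHT, MAX_ITER = 300, 200, 200

def escape_time(c, max_iter):
    """Iterations until z = z^2 + c escapes |z| > 2 (the Mandelbrot membership test)."""
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return i
    return max_iter

# ... expand into tens of thousands of pixel values ("de novo data decompression").
pixels = []
for y in range(HEIGHT):
    row = []
    for x in range(WIDTH):
        re = CENTER.real + (x / WIDTH - 0.5) * 2 * SCALE
        im = CENTER.imag + (y / HEIGHT - 0.5) * 2 * SCALE
        row.append(escape_time(complex(re, im), MAX_ITER))
    pixels.append(row)

print(f"input: a handful of parameters; output: {WIDTH * HEIGHT} structured values")
</syntaxhighlight>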

Now imagine what amounts of data and degrees of complexity may arise from some more practical program that generates instructions for the nanomanufacturing of products. Even when the program is just somewhat more complex than the Mandelbrot example, incredible seeming complexity can emerge.

It's the magic of emergent structure from chaotic systems: "de novo data decompression".


In any advanced high throughput nanofactory there is necessarily an [[Atom placement frequency|enormous effective total atom placement frequency]]. Most of these atom placement processes, though, will be:

* in local hardware pre-programmed/"pre-hard-matter-coded" (single function mill style factories for standard parts) and
* driven from local integrated computation. Almost no data is threaded through from the very top level down to the very bottom. (A toy sketch of such local expansion follows below.)
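
A toy model of such local expansion, assuming a made-up three level hierarchy and invented part names (nothing here is from an actual nanofactory design):

<syntaxhighlight lang="python">
# Hypothetical toy model of a data decompression chain: a compact product spec
# is expanded locally at each level, so almost no data has to be threaded
# through from the very top down to the very bottom.

def top_level(spec):
    """Top level: a tiny product description, e.g. 'n x n x n lattice of standard part P'."""
    return spec

def mid_level(spec):
    """Mid level: expand the spec into placement sites, computed locally."""
    n = spec["n"]
    return [((x, y, z), spec["part"]) for x in range(n) for y in range(n) for z in range(n)]

def bottom_level(sites, routine_len=50):
    """Bottom level: each site triggers a pre-programmed placement routine (mill style)."""
    return sum(routine_len for _ in sites)   # total count of atom placement operations

spec = {"part": "housing_v2", "n": 50}       # what actually travels from the very top
sites = mid_level(top_level(spec))
ops = bottom_level(sites)

print(f"top-down data: ~{len(str(spec))} bytes; bottom-level placement operations: {ops:,}")
# ~30 bytes in, 6,250,000 operations out -- the 'decompression' happens locally, not by transmission.
</syntaxhighlight>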

== Related ==

* [[Data decompression chain]]
* [[Relativity of complexity]] philosophical topic {{speculativity warning}}


[[Category:Programming]]
[[Category:Information]]