Data IO bottleneck

This article is a stub. It needs to be expanded.

There is an incremental path toward the more advanced systems. So these systems will end up with a hierarchical data management and transmission subsystem, including local data caches and the like.

If one wants a simple block of featureless material, all the nanomachinery at the bottom needs to perform the same operations. There is no need to control each and every manipulator from the very top of this hierarchy. Threading all that data from top to bottom fully in parallel would indeed be impossible.
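To make the contrast concrete, here is a minimal back-of-the-envelope sketch in Python. All numbers in it (manipulator count, operation rate, bits per instruction) are illustrative assumptions, not figures from this article; it only contrasts per-manipulator top-down streams with a single broadcast stream:

```python
# Data rate for driving every manipulator individually from the top
# versus broadcasting one shared instruction stream to all of them.

N_MANIPULATORS = 1e18   # manipulators in a desktop-scale factory (assumption)
OPS_PER_SECOND = 1e6    # placement operations per manipulator per second (assumption)
BITS_PER_OP = 100       # bits to encode one placement instruction (assumption)

# Fully parallel top-down control: every manipulator gets its own stream.
parallel_rate = N_MANIPULATORS * OPS_PER_SECOND * BITS_PER_OP  # bit/s

# Featureless block: all manipulators perform identical operations,
# so a single broadcast stream suffices regardless of manipulator count.
broadcast_rate = OPS_PER_SECOND * BITS_PER_OP  # bit/s

print(f"fully parallel: {parallel_rate:.0e} bit/s")   # ~1e26 bit/s -- infeasible
print(f"broadcast:      {broadcast_rate:.0e} bit/s")  # ~1e8  bit/s -- trivial
```

The broadcast rate is independent of the manipulator count, which is exactly why a featureless block needs no per-manipulator control from the top.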

If one wants more complicated heterogeneous structures, the required data rate still does not rise by much. Why is that? It is because of something like reverse data compression.

To elaborate:

Although we often feed our computers exclusively via the keyboard, with just a few keystrokes per minute (no high-data-rate inputs such as images), the amount of data and the seeming complexity that computers generate from that input can be gigantic. One example with absolutely minimal input and very large output is the generation of those pretty Mandelbrot set zoom videos.
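As a hedged illustration, the following self-contained Python sketch renders a Mandelbrot set image: its entire input is a handful of parameters, yet the output is tens of thousands of intricately structured values (the resolution and iteration limit chosen here are arbitrary):

```python
# Tiny input, enormous generated output: the whole "description"
# of the image is about five parameters.

def mandelbrot(width=200, height=150, center=(-0.75, 0.0), scale=3.0, max_iter=100):
    rows = []
    for j in range(height):
        row = []
        for i in range(width):
            # Map the pixel to a point c in the complex plane.
            c = complex(center[0] + (i / width - 0.5) * scale,
                        center[1] + (j / height - 0.5) * scale)
            z = 0j
            n = 0
            # Iterate z -> z^2 + c until escape or the iteration cap.
            while abs(z) <= 2.0 and n < max_iter:
                z = z * z + c
                n += 1
            row.append(n)
        rows.append(row)
    return rows

image = mandelbrot()
values = sum(len(row) for row in image)
print(values, "output values generated from ~5 input parameters")
```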

Now imagine what amounts of data and degrees of complexity may arise from a more practical program that generates instructions for nanomanufacturing a product. Even when the program is only somewhat more complex than the Mandelbrot example, incredible seeming complexity can emerge.

It is the magic of emergent structure from chaotic systems: "de novo data decompression".
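A hedged sketch of what such decompression could look like for manufacturing instructions: the function below (all names and the instruction format are hypothetical, invented purely for illustration) expands one compact "fill this region with a standard part" command into a long stream of per-site placement instructions:

```python
# One compact command expands into a million placement instructions.

def expand_block(origin, extent, spacing, part="standard_part"):
    """Expand one compact 'fill this region' command into per-site placements."""
    ox, oy, oz = origin
    nx, ny, nz = extent
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                # One placement instruction per lattice site.
                yield (part, (ox + i * spacing, oy + j * spacing, oz + k * spacing))

# Compact input: a single command, a few dozen bytes.
commands = [((0.0, 0.0, 0.0), (100, 100, 100), 0.5)]

# Expanded output: a million placement instructions.
total = sum(1 for cmd in commands for _ in expand_block(*cmd))
print(total, "placement instructions from", len(commands), "compact command(s)")
```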


In any advanced high-throughput nanofactory there is necessarily an enormous effective total atom placement frequency. Most of these atom placement processes, though, will be:

  • pre-programmed / "pre-hard-matter-coded" in local hardware (single-function mill-style factories for standard parts), and
  • driven from local integrated computation. Almost no data is threaded through from the very top level down to the very bottom (see the rough estimate below).
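The rough estimate below puts illustrative numbers on this. Every figure in it (atoms per kilogram, bits per placement, design file size, build time) is an order-of-magnitude assumption, not a value from this article:

```python
# Effective atom placement frequency versus the data rate that actually
# needs to cross the top-level interface when most placements are
# locally pre-programmed or locally computed.

ATOMS_PER_KG = 5e25            # order of magnitude for light-element products (assumption)
THROUGHPUT_KG_PER_HOUR = 1.0   # desktop-scale nanofactory throughput (assumption)
placement_freq = ATOMS_PER_KG * THROUGHPUT_KG_PER_HOUR / 3600.0  # placements per second

BITS_PER_PLACEMENT = 100       # if every placement were individually specified (assumption)
naive_top_rate = placement_freq * BITS_PER_PLACEMENT  # bit/s for full top-down control

# With pre-programmed standard parts and local computation, the top level
# only ships a compact product description over the whole build time.
PRODUCT_DESCRIPTION_BITS = 8e9  # ~1 GB design file (assumption)
BUILD_TIME_S = 3600.0
hierarchical_top_rate = PRODUCT_DESCRIPTION_BITS / BUILD_TIME_S

print(f"effective placement frequency: {placement_freq:.1e} per second")
print(f"naive top-down data rate:      {naive_top_rate:.1e} bit/s")
print(f"hierarchical top-level rate:   {hierarchical_top_rate:.1e} bit/s")
```

The many orders of magnitude between the naive and the hierarchical rate are the whole point: the data IO bottleneck only exists if one insists on threading every placement through the top of the hierarchy.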

Related