Data IO bottleneck
There's an incremental path to the more advanced systems, so these systems will end up with a hierarchical data management and transmission subsystem, with local data caches and so on.
If one wants a simple block of featureless material, all the nanomachinery at the bottom needs to do the same operations. There's no need to control each and every manipulator from the very top of this hierarchy. Threading all that data fully in parallel from top to bottom would indeed be impossible.
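As a minimal sketch of this idea (all class and field names below are hypothetical illustrations, not a real control protocol): one shared instruction block is handed to the root of a control tree and replicated into local caches on the way down, so no single level ever carries a separate data stream per manipulator.

```python
# Hypothetical sketch: broadcasting one shared instruction block down a
# control tree instead of wiring every manipulator to the top level.
from dataclasses import dataclass, field

@dataclass
class ControllerNode:
    children: list = field(default_factory=list)
    cache: bytes = b""      # local data cache at this hierarchy level
    manipulators: int = 0   # leaf controllers drive this many manipulators

    def broadcast(self, instructions: bytes) -> int:
        """Store instructions in the local cache, fan them out to the
        children, and return how many manipulators were reached."""
        self.cache = instructions
        reached = self.manipulators
        for child in self.children:
            reached += child.broadcast(instructions)
        return reached

# A tiny three-level tree: 1 root -> 4 mid controllers -> 16 leaf
# controllers, each leaf driving 1000 manipulators.
leaves = [ControllerNode(manipulators=1000) for _ in range(16)]
mids = [ControllerNode(children=leaves[i * 4:(i + 1) * 4]) for i in range(4)]
root = ControllerNode(children=mids)

# For featureless material every manipulator runs the same operations.
ops = b"deposit; step; deposit; step"
print(root.broadcast(ops), "manipulators reached from one instruction block")
```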
If one wants more complicated heterogeneous structures, the amount of data still won't rise by too much. Why is that? It's because of something like reverse data compression.
To elaborate:
Although we often feed our computers exclusively via the keyboard with just a few keystrokes per minute (no high-data-rate inputs here like e.g. images), the amount of data and the seeming complexity that computers generate from that input can be gigantic. One example with absolutely minimalistic input and very big output would be the generation of those pretty Mandelbrot set zoom videos.
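For illustration, a minimal Mandelbrot renderer sketch: a handful of parameter bytes expand into tens of kilobytes of pixel data per frame, and a zoom video is just this run repeatedly with a shrinking scale parameter (sizes and parameter names here are arbitrary choices).

```python
# Minimal sketch: a few bytes of "input" (the parameters below) expand
# into a large block of image data -- data decompression in effect.
def mandelbrot_frame(width=320, height=240, max_iter=60,
                     cx=-0.743, cy=0.131, scale=3.0):
    """Render one grayscale frame; a zoom video just shrinks `scale`."""
    pixels = bytearray()
    for row in range(height):
        for col in range(width):
            # Map the pixel to a point in the complex plane.
            c = complex(cx + (col / width - 0.5) * scale,
                        cy + (row / height - 0.5) * scale)
            z = 0j
            n = 0
            while abs(z) <= 2.0 and n < max_iter:
                z = z * z + c
                n += 1
            pixels.append(255 * n // max_iter)  # escape-time shade
    return bytes(pixels)

frame = mandelbrot_frame()
print(len(frame), "bytes of output from a handful of input parameters")
```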
Now imagine what amounts of data and degrees of complexity may arise from a more practical program that generates instructions for the nanomanufacturing of products. Even when the program is just somewhat more complex than the Mandelbrot example, incredible seeming complexity can emerge.
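As a toy illustration (purely hypothetical, not an actual nanomanufacturing instruction format): a rewrite system only slightly more complex than the Mandelbrot loop, where a two-rule table acts as the compressed form of an exponentially growing, structured instruction stream.

```python
# Toy sketch: a tiny L-system-style rewrite table as an "instruction
# generator". The rule set is the compressed form of the output stream.
rules = {
    "B": "B[d]B",  # a build step spawns two builds around a deposit
    "d": "dm",     # a deposit is followed by a move
}

def expand(axiom: str, steps: int) -> str:
    """Apply the rewrite rules `steps` times to the starting string."""
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

for n in (1, 5, 10, 15):
    print(n, "rewrite steps ->", len(expand("B", n)), "instruction symbols")
```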
It's the magic of emergent structure from chaotic systems: "de novo data decompression".
Related
- Data decompression chain
- Relativity of complexity (philosophical topic; warning: you are moving into more speculative areas)