Assembly layer
Revision as of 10:42, 22 February 2015

Cross section through a nanofactory showing the lower assembly levels stacked vertically on top of each other. Image from the official "productive nanosystems" video. Most of the stack consists of the bottom layers; convergent assembly occupies only a very thin region at the top.
This is an extremely simplified model of the layer structure of an AP small scale factory. The stack of bottom layers (the fine tubes in the image) is reduced in height by two to three orders of magnitude! The size steps above will likely be bigger than x4 in a practical system. Size steps of x32 allow for easy reasoning since they are nicely visualizable, and two such steps (32² ≈ 1000) give roughly a thousandfold size increase.

The layers in a stratified nanofactory are the assembly levels mapped onto physical assembly layers, interspersed with lock-out, routing, and other auxiliary layers. Note that here "levels" refers to the abstract order of assembly while "layers" refers to the physically stacked parallel sheets.

Layers as natural choice

Scaling laws say that when halving the size of a generalized assembly unit, one can fit four such units below it. Each of these is twice as fast and produces an eighth of the amount of product the upper unit produces. Multiplying these factors together (4 × 2 × 1/8 = 1) shows that the top layer and the layer of half-size units below it have exactly the same throughput. This works not just with halving the size but with any subdivision factor.

All layers in an arbitrarily deep stack of cube-shaped units (with equivalent step sizes) have equal throughput.
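The scaling argument above can be checked with a few lines of arithmetic. The following is a minimal sketch (the function name and the generalization to an arbitrary subdivision factor k are my own, not from the article):

```python
# Toy check of the scaling argument: subdividing a cube-shaped assembly
# unit by a factor k gives k^2 units under its footprint, each k times
# faster, each producing a product with 1/k^3 of the mass.

def layer_throughput_ratio(k: float) -> float:
    """Throughput of a sublayer relative to the layer above it."""
    units_below = k ** 2        # k^2 smaller units fit under one unit's footprint
    speed_factor = k            # smaller manipulators cycle k times faster
    mass_factor = 1 / k ** 3    # each product has 1/k^3 the volume and mass
    return units_below * speed_factor * mass_factor

for k in (2, 4, 32):
    print(k, layer_throughput_ratio(k))   # always 1.0: equal throughput per layer
```

The factors cancel for every k, which is exactly why the statement holds for any subdivision, not just halving.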

The upper convergent assembly layers in particular behave in a largely scale-invariant way. At the bottommost assembly layers the lower physical size limit becomes relevant: manipulators cannot be as small as or smaller than the moieties they handle. This, together with the fact that one needs to slow down slightly from m/s to cm/s or mm/s speeds to prevent excessive waste heat, distorts this scale invariance somewhat. At the bottom, stacks of identical layers that are threaded through by finished DMEs are sensible.

Slowdown through stepsize

Increasing the size of a step between layers reduces throughput because the number of manipulators per surface area shrinks. In the extreme case one has a single scanning probe microscope for a whole mole of particles; there it would take times far beyond the age of the universe to assemble anything human-hand sized. This, by the way, is why massive parallelism, gained either by exponential assembly or by self-replication, is an absolute necessity.
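The extreme case can be put in numbers. A rough estimate, with an assumed placement rate of one particle per second (my assumption; optimistic for serial scanning-probe work, and not a figure from the article):

```python
# Rough estimate: how long would a single scanning probe tip need to
# place one mole of particles, one at a time? (Rate is an assumed value.)

AVOGADRO = 6.022e23          # particles per mole
RATE_PER_SECOND = 1.0        # assumed: one placement per second
SECONDS_PER_YEAR = 3.156e7

years = AVOGADRO / RATE_PER_SECOND / SECONDS_PER_YEAR
AGE_OF_UNIVERSE_YEARS = 1.38e10

print(f"{years:.1e} years")  # on the order of 1e16 years
print(f"about {years / AGE_OF_UNIVERSE_YEARS:.1e} times the age of the universe")
```

Even raising the assumed rate by several orders of magnitude leaves the serial approach hopeless, which motivates the massive parallelism mentioned above.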

Increased step sizes bring the benefit of fewer design restrictions in the products (fewer borders). The slowdown incurred by bigger step sizes can, within bounds, be compensated by parallelism in parts assembly. To avoid a bottleneck, all step sizes in the stack should be similar.
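A toy model of this trade-off (my own sketch under assumed proportionalities, not from the article): with a step factor s between layers, each unit must join roughly s³ smaller parts into one product, so assembly time per product grows like s³ divided by the number of manipulators working in parallel inside the unit.

```python
# Toy model: relative time to assemble one product from s^3 parts,
# with optional parallelism in parts assembly inside the unit.

def relative_assembly_time(s: int, manipulators_per_unit: int = 1) -> float:
    parts_per_product = s ** 3           # a step factor s means s^3 parts per product
    return parts_per_product / manipulators_per_unit

print(relative_assembly_time(4))         # 64 joining operations, one manipulator
print(relative_assembly_time(4, 16))     # parallelism recovers much of the slowdown
```

This illustrates why the slowdown can only be compensated "within bounds": the manipulator count per unit cannot grow without limit.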

Consequence of lack of layers

Using a tree structure instead of a stack means that halving the size leads to more than four subunits, so the upper convergent assembly layers can potentially become a bottleneck.

Since every layer has the same productivity (mass per time), the very thin bottommost layer has the same productivity as the (practically or hypothetically implemented) uppermost convergent assembly layer: a single cube with the side length of the whole factory. This makes the density of productivity (productivity per volume) explode, but there are issues.
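The size of that density explosion follows directly from the layer geometry. A sketch with assumed example dimensions (the one-metre factory and micron-scale bottom layer are illustrative values of mine, not from the article):

```python
# Equal throughput per layer means the bottommost layer packs the whole
# factory's output into a much thinner slice, so productivity per volume
# scales with the ratio of factory height to layer thickness.

factory_side_m = 1.0               # assumed: a 1 m cube factory
bottom_layer_thickness_m = 1e-6    # assumed: micron-scale bottommost layer

density_ratio = factory_side_m / bottom_layer_thickness_m
print(f"bottom layer: ~{density_ratio:.0e} times the productivity per volume")
```

This is the ratio that tempts one to fill the whole volume with bottom layers, and the next section explains why that fails.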

Maximizing productivity

For all but the most extreme applications a stratified design will work well. Going beyond that becomes tedious. As an analogy, one can compare it to going from very versatile PCs to more specialised graphics cards.

Filling the whole factory volume with the immense productivity density of the bottommost layer(s) of a stratified design would lead to unmanageable requirements for product expulsion (too-high accelerations) and ridiculously high waste heat that, even with advanced heat transportation, could probably only be handled for very short peaks. (Todo: check when this becomes possible.) See: productivity explosion. Sensible working designs for continuous maximum performance cannot fill a whole cube volume with an implementation of the bottommost assembly levels but need a complicated and inflexible 3D fractal design. Pushed to the limits, the cooling facilities become far bigger than the factory itself.

Deviating from the stack structure, in order to devote more volume to actual mechanosynthesis than the bottommost layer of a stratified design provides,

  • makes system design considerably harder (less scale invariance, harder post-design system adjustability)
  • may lead to a bottleneck at the upper convergent assembly levels.

Assemblers

The early (and probably outdated) "universal molecular assembler" concept also fills space, but has a much lower density of locations where mechanosynthesis takes place, so there might not be a bottleneck problem. The actual problems are:

  • the integration of the basic assembly levels (not layers) and, if "simplified", the severely complicated product design caused by the lack of intermediate standard part handling
  • the obstructive scaffold, which needs mobility logic at or above that of microcomponent maintenance units

Large scale

What about building whole houses, skyscrapers, cities, or even giant space stations?

At large scales mass becomes increasingly relevant and may begin to pose a top-level bottleneck, but this is far off. When time is not critical, unplanned operation is likely to emerge: many nanofactories operated at different locations at the same time in a semi-manual style. This is a fractal style of manufacturing at the macro scale, with very low throughput compared to what would be possible with even a single simple stratified nanofactory.