Assembly layer
The layers in a stratified nanofactory are the assembly levels mapped onto physical assembly layers, interspersed with lockout, routing, and other layers. Note that here the "levels" refer to the abstract order of assembly while the "layers" refer to physically stacked parallel sheets.
== Layers as a natural choice ==
Scaling laws say that (assuming scale-invariant operation speeds!) when halving the size of some generalized assembly unit, one can put four such units below it. Each of those is twice as fast and produces an eighth of the amount of product the upper unit produces. Multiplying these factors together (4 × 2 × 1/8 = 1) one sees that the top layer and the layer of half-sized units below it have exactly the same throughput. This works not just with halving the size but with any subdivision.
'''All layers in an arbitrarily deep stack (with equivalent step sizes) of cube-shaped units have equal throughput.'''
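This can be checked with a one-line scaling argument (symbols chosen here purely for illustration: <math>L</math> is the factory's side length, <math>s</math> the unit size, <math>v</math> the scale-invariant operation speed, <math>\rho</math> the product density). A layer holds <math>n = (L/s)^2</math> units, each cycling at a frequency <math>f \propto v/s</math> and handling product chunks of mass <math>m \propto \rho s^3</math>:

<math>T_\text{layer} = n \cdot f \cdot m \propto \frac{L^2}{s^2} \cdot \frac{v}{s} \cdot \rho s^3 = \rho v L^2</math>

The unit size <math>s</math> cancels out, so every layer in the stack delivers the same throughput.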
Especially the upper convergent assembly layers behave very much scale-invariantly. At the bottommost assembly layers the lower physical size limit becomes relevant: manipulators cannot be as small as or smaller than the moieties they handle. This, together with the fact that one needs to slow down slightly from m/s to cm/s or mm/s speeds to prevent excessive waste heat, distorts this scale invariance somewhat. Stacks of identical layers that pass finished DMEs through are sensible at the bottom.
== Layers as a limitation ==
If power dissipation per volume is the parameter one wants to keep constant, instead of operation speeds, then the speeds must be raised with progressing convergent assembly steps. Bearing surface per volume falls quickly with growing unit size, which would make friction losses fall too if speeds were kept constant. And if the bearing surface is kept constant but the total sliding speed is distributed over many bearing surfaces, as in infinitesimal bearings, power dissipation falls even faster.
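A sketch of why (assuming, as is commonly done for superlubricating interfaces, that friction power scales with bearing area times the square of the sliding speed, <math>P \propto A v^2</math>): bearing area per volume scales as <math>A/V \propto 1/s</math>, so

<math>\frac{P}{V} \propto \frac{v^2}{s} = \text{const} \quad\Rightarrow\quad v \propto \sqrt{s}</math>

that is, under a constant power-density budget the larger units of the upper layers may (and for full utilization must) run faster than the small units at the bottom.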
When using higher speeds at the higher convergent assembly levels one can either accept being able to use those speeds only for recycling pre-produced parts, or one needs to change the nanofactory to a more fractal design with increasing branching at the bottom end of the convergent assembly chain.
At some point speeds become limited by acceleration forces (a spinning thin-walled tube ring made from nanotubes ruptures at around 3 km/s, independent of scale). Much sooner, mechanical resonances and probably some other problems (acceleration and braking losses?) will occur.
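The scale independence of that limit follows from the hoop stress of a thin spinning ring, <math>\sigma = \rho v^2</math>, which contains no radius:

<math>v_\text{max} = \sqrt{\frac{\sigma}{\rho}}</math>

The burst speed depends only on the strength-to-density ratio of the material. Plugging in illustrative numbers for nanotube material (an assumed usable strength of roughly 13 GPa at a density of roughly 1300 kg/m³) reproduces the quoted ~3 km/s rim speed.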
== Slowdown through step size ==
Increasing the size of a step between layers slows down the throughput due to a shrinking number of manipulators per surface area. In the extreme case one has a single scanning probe microscope for a whole mole of particles. There it would take times far beyond the age of the universe to assemble anything of human hand size. This, by the way, is the reason why the massive parallelism gained by either exponential assembly or self-replication is an absolute necessity.
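A back-of-the-envelope check of the mole example (a minimal sketch; the placement rates are assumptions for illustration, with 1 Hz already generous for today's probes and 1 MHz very optimistic for a single tip):

<syntaxhighlight lang="python">
# Time for a single scanning probe tip to place one mole of particles,
# at several assumed placement rates.
AVOGADRO = 6.022e23        # particles per mole
SECONDS_PER_YEAR = 3.156e7

for rate_hz in (1.0, 1e3, 1e6):
    years = AVOGADRO / rate_hz / SECONDS_PER_YEAR
    print(f"{rate_hz:>9.0e} placements/s -> {years:.1e} years")
</syntaxhighlight>

Even at a million placements per second the job takes about 1.9 × 10¹⁰ years, longer than the age of the universe (~1.4 × 10¹⁰ years); at 1 Hz it takes about 1.9 × 10¹⁶ years.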
Increased step sizes bring the benefit of fewer design restrictions in the products (fewer part borders). The slowdown incurred by bigger step sizes can, within bounds, be compensated with parallelism in part assembly. To avoid a bottleneck, all step sizes in the stack should be similar.
== Consequences of a lack of layers ==
Using a tree structure instead of a stack means that halving the size leads to more than four subunits, and the upper convergent assembly layers can potentially become a bottleneck.
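To see this, compare the combined throughput of the <math>b</math> half-sized subunits of a unit with the throughput of that unit itself (using the same scaling as above: each subunit runs twice as fast and handles an eighth of the product mass):

<math>\frac{T_\text{children}}{T_\text{parent}} = b \cdot 2 \cdot \frac{1}{8} = \frac{b}{4}</math>

A stack has <math>b = 4</math> and the ratio is exactly one. A tree with <math>b > 4</math> lets the lower levels outproduce the level above them, which then acts as a bottleneck.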
Since every layer has the same productivity (mass per time), the very thin bottommost layer has the same productivity as the (practically or only hypothetically implemented) uppermost convergent assembly layer: a single cube with a side length equal to that of the whole factory. This lets the density of productivity (productivity per volume) explode, but there are issues.
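A rough magnitude for this explosion (illustrative numbers: factory side length <math>L</math>, bottommost layer thickness <math>h</math>): both have throughput <math>T</math>, but the bottom layer packs it into volume <math>L^2 h</math> instead of <math>L^3</math>, so its productivity density is higher by the factor

<math>\frac{T / (L^2 h)}{T / L^3} = \frac{L}{h}</math>

For a 1 m factory with a 1 µm bottommost assembly layer that is a factor of a million.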
== Maximizing productivity ==
For all but the most extreme applications a stratified design will work well. Going beyond that becomes tedious. As an analogy one can compare it to going from very versatile PCs to more specialized graphics cards.
Filling the whole factory volume with the immense productivity density of the bottommost layer(s) of a stratified design would lead to unmanageable requirements for product expulsion (too high accelerations) and ridiculously high waste heat that, probably even with advanced heat transportation, could only be handled for very short peaks. (Todo: check when it becomes possible.) See: productivity explosion. Sensible working designs for continuous maximum performance cannot fill a whole cube volume with an implementation of the bottommost assembly levels but need a complicated and inflexible 3D fractal design. If one goes to the limits, the cooling facilities will become far bigger than the factory itself.
Deviating from the stack structure to gain more volume for actual mechanosynthesis than the bottommost layer of a stratified design provides:
* makes system design considerably harder (less scale invariance, harder post-design system adjustability)
* may lead to a bottleneck at the upper convergent assembly levels
== Assemblers ==
The early (and probably outdated) "universal molecular assembler" concept also fills space, but it has a much lower density of locations where mechanosynthesis takes place, so there might not be a bottleneck problem. The actual problems are:
* the integration of all basic assembly levels (not layers), and, if "simplified", the severely complicated product design caused by the lack of intermediate standard part handling
* the obstructive scaffold that needs mobility logic at or above that of microcomponent maintenance units
== Large scale ==
What about building whole houses, skyscrapers, cities, or even giant space stations?
At large scales mass becomes increasingly relevant and may begin to pose a top-level bottleneck. But this is far off. Unplanned working is likely to emerge when time is not critical. This would look like many nanofactories operated at different locations at the same time in a semi-manual style. This is a fractal style of manufacturing at the macro scale, with very low throughput compared to what would be possible with even a single simple stratified nanofactory.

== Related ==

* [[Convergent assembly]]
* [[Assembly levels]]

[[Category:General]]
[[Category:Nanofactory]]
[[Category:Site specific definitions]]