# Assembly layer

This article defines a novel term (that is hopefully sensibly chosen). The term is introduced to make a concept more concrete and to clarify its interrelationship with other topics related to atomically precise manufacturing. For details see the page: Neologism.
Cross section through a nanofactory showing the lower assembly levels stacked vertically on top of each other. Image from the official "Productive Nanosystems" video. Most of the stack consists of the bottom layers; convergent assembly happens in a very thin region at the top.
This is an extremely simplified model of the layer structure of an AP small scale factory. The stack of bottom layers (the fine tubes in the image) is reduced in height by two to three orders of magnitude! The size steps above will likely be bigger than ×4 in a practical system. Size steps of ×32 allow for easy reasoning since they are nicely visualizable and two steps (32²) make roughly a round 1000-fold size increase.

The layers in a stratified nanofactory are the assembly levels mapped to physical assembly layers, interspersed by lockout, routing, and other layers. Note that here "levels" refers to abstract order and "layers" to physical parallel stacked sheets.

## Layers as natural choice

Scaling laws say that (assuming scale-invariant operation speeds!) when halving the size of some generalized assembly unit one can put four such units below. Those are twice as fast and each produces an eighth of the amount of product the upper unit produces. Multiplied together one sees that the top layer and the layer of half-size units below it have exactly the same throughput. This works not just with halving the size but with any subdivision.

All layers in an arbitrarily deep stack (with equivalent step sizes) of cube shaped units have equal throughput.
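The scaling argument above can be sketched with a few lines of arithmetic. The function below is an illustrative model (not from the original text): cycle frequency is assumed proportional to 1/size, product mass per cycle to size³, and unit count per layer to 1/size².

```python
# Sketch of the equal-throughput scaling law, assuming scale-invariant
# operating speeds: halving the unit size doubles the cycle frequency,
# cuts the mass per cycle to 1/8, and fits 4x as many units per layer.
def layer_throughput(top_rate, top_mass, top_count, halvings):
    """Relative mass throughput of the layer reached after `halvings`
    successive size-halving steps below the top layer."""
    rate = top_rate * 2**halvings    # cycles per second (speed / size)
    mass = top_mass / 8**halvings    # product mass per cycle (~ size^3)
    count = top_count * 4**halvings  # units per layer (area / size^2)
    return rate * mass * count

throughputs = [layer_throughput(1.0, 1.0, 1, n) for n in range(6)]
# every layer delivers the same relative throughput
```

The factors 2 × (1/8) × 4 = 1 cancel exactly at every depth, which is the whole point of the stratified layout.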

Especially the upper convergent assembly layers behave very much scale invariantly. At the bottommost assembly layers the lower physical size limit becomes relevant: manipulators cannot be as small as or smaller than the moieties they handle. This, and the fact that one needs to slow down slightly from m/s to cm/s or mm/s speeds to prevent excessive waste heat, distorts this scale invariance somewhat. Stacks of identical layers through which finished DMEs are threaded are sensible at the bottom.

### Layers as a limitation

If power dissipation per volume is the parameter that one wants to keep constant instead of operation speeds, then speeds must be raised with progressing convergent assembly steps. Bearing surface per volume falls quickly, which would make losses fall too if speeds were kept constant. But if the bearing surface is kept constant and the total constant speed is distributed over many bearing surfaces in infinitesimal bearings, power dissipation falls even faster.
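The infinitesimal-bearing effect can be illustrated with a deliberately simple drag model. The speed-squared loss law used below (P ~ k·A·v² per sliding interface) is an assumption for illustration only, not a derived result from the original text:

```python
# Hedged sketch: with a simple speed-squared drag model (an assumption),
# splitting the same total relative speed v across n nested interfaces
# of an "infinitesimal bearing" divides the total dissipation by n,
# because each interface only slides at v/n.
def dissipation(total_speed, area, n_interfaces, k=1.0):
    v_per_interface = total_speed / n_interfaces
    return n_interfaces * k * area * v_per_interface**2

p_single = dissipation(1.0, 1.0, 1)    # one interface carries all speed
p_split = dissipation(1.0, 1.0, 10)    # same speed over 10 interfaces
# p_split is one tenth of p_single under this model
```

This is why keeping bearing surface constant while subdividing the sliding makes losses fall faster than the naive surface-area scaling alone.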

When using higher speeds at the higher convergent assembly levels one can either accept being able to use those speeds only for recycling of prefabricated parts, or one needs to change the nanofactory to a more fractal design with increasing branching at the bottom end of the convergent assembly chain.

At some point speeds become limited by acceleration forces (a spinning thin-walled tube ring made from nanotubes ruptures at around 3 km/s, independent of scale); much sooner, mechanical resonances and probably some other problems will occur (acceleration and braking losses?).
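The scale independence of the rupture speed follows from hoop stress in a thin spinning ring: σ = ρ·v², so the burst speed is v = √(σ/ρ) with no radius term. The material numbers below are illustrative assumptions chosen to land near the ~3 km/s figure quoted above, not measured nanotube data:

```python
import math

# Burst (rim) speed of a thin spinning ring: hoop stress sigma = rho * v^2,
# so v_burst = sqrt(sigma / rho) -- independent of the ring's radius.
# Tensile strength and density below are illustrative assumptions.
def burst_speed(tensile_strength_pa, density_kg_m3):
    return math.sqrt(tensile_strength_pa / density_kg_m3)

v = burst_speed(20e9, 2000.0)  # ~3.2 km/s for these assumed values
```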

## Slowdown through stepsize

Increasing the size of a step between layers slows down the throughput due to a shrinking number of manipulators per surface area. In the extreme case one has a single scanning probe microscope for a whole mole of particles; there it would take times far beyond the age of the universe to assemble anything human-hand-sized. This, by the way, is the reason why massive parallelism gained by either exponential assembly or self-replication is an absolute necessity.
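The mole-of-particles claim is easy to check with back-of-envelope arithmetic. The one-placement-per-second rate below is an assumed (and generous) figure for a single scanning probe tip:

```python
# Back-of-envelope: one scanning-probe tip placing one particle per
# second (an assumed rate) versus a mole of building blocks.
N_AVOGADRO = 6.022e23            # particles in one mole
rate_hz = 1.0                    # placements per second (assumption)

seconds = N_AVOGADRO / rate_hz
years = seconds / (3600 * 24 * 365.25)
age_of_universe_years = 1.38e10
universe_ages = years / age_of_universe_years
# on the order of a million ages of the universe
```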

Increased step sizes bring the benefit of fewer design restrictions in the products (fewer borders). The slowdown incurred by bigger step sizes can, within bounds, be compensated by parallelism in part assembly. To avoid a bottleneck, all step sizes in the stack should be similar.
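The trade-off between step size and parallelism can be sketched as follows. This is a hypothetical counting model (parts per product scale as the cube of the step factor; manipulator count is a free design parameter), not a figure from the original text:

```python
# Hedged sketch: with a step factor s between layers, each assembly unit
# must place about s**3 parts per product. Within bounds, the extra
# placements can be compensated by giving each unit several parallel
# manipulators.
def cycles_per_product(step_factor, manipulators=1):
    parts = step_factor ** 3          # parts per product (~ volume ratio)
    return parts / manipulators       # sequential placement cycles needed

cycles_per_product(2)        # 8 cycles: the minimal halving step
cycles_per_product(32)       # 32768 cycles with a single manipulator
cycles_per_product(32, 64)   # 512 cycles with 64 parallel manipulators
```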

## Consequence of lack of layers

Using a tree structure instead of a stack means halving the size leads to more than four subunits, and the upper convergent assembly layers can potentially become a bottleneck.
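Why four is the break-even branching factor can be made explicit. The model below repeats the scaling assumptions from the layer argument (2× speed, 1/8 mass per cycle at half size) as an illustration:

```python
# Sketch: in a size-halving step, each half-size child runs 2x as fast
# but moves 1/8 the mass per cycle, so 4 children exactly match the
# parent's throughput. A tree with branching factor b > 4 gives the
# children b/4 times excess capacity per level, so the upper levels
# throttle the whole system.
def capacity_ratio(branching_factor, levels):
    """Combined child capacity `levels` below the top, relative to
    the top unit's own throughput."""
    per_child = 2 * (1 / 8)           # 2x speed, 1/8 mass per cycle
    return (branching_factor * per_child) ** levels

capacity_ratio(4, 5)   # stack-equivalent: exactly 1.0 at every depth
capacity_ratio(8, 5)   # octree-like: 2**5 = 32x excess capacity below
```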

Since every layer has the same productivity (mass per time), the very thin bottommost layer has the same productivity as the (practically or hypothetically implemented) uppermost convergent assembly layer: a single cube with the side length of the factory. This lets the density of productivity (productivity per volume) explode, but there are issues.

## Maximizing productivity

For all but the most extreme applications a stratified design will work well. Going beyond that becomes tedious. As an analogy, one can compare it to going from very versatile PCs to more specialized graphics cards.

Filling the whole factory volume with the immense productivity density of the bottommost layer(s) of a stratified design would lead to unmanageable requirements for product expulsion (too high accelerations) and ridiculously high waste heat that, probably even with advanced heat transportation, could be handled only for very short peaks. (Todo: check when this becomes possible.) See: productivity explosion. Sensible working designs for continuous maximum performance cannot fill a whole cube volume with an implementation of the bottommost assembly levels but need a complicated and inflexible 3D fractal design. If one goes to the limits, the cooling facilities will become far bigger than the factory.

Deviating from the stack structure to get more volume for actual mechanosynthesis than the bottommost layer of a stratified design provides

• makes system design considerably harder (less scale invariance, harder post-design system adjustability)
• may lead to a bottleneck at the upper convergent assembly levels.

### Assemblers

In the early (and now outdated) "universal diamondoid molecular assembler" concept, space is also filled completely with production machinery. But in this concept the volumetric density of locations where mechanosynthesis takes place is much lower than in gem-gum factories. That is, molecular assemblers would feature few, big, and slow mechanosynthesis cores due to their general-purpose applicability requirement. So for the molecular assembler concept there might not be a bottleneck problem despite the productive devices filling the whole volume.

The actual problems with molecular assemblers are:

• Limited space for the integration of a second assembly level beyond the first one, here not organizable as layers. And …
• If assembly levels are "simplified" then product assembly design gets severely complicated due to the lack of intermediate standard part handling capabilities.
• The product-growth-obstructing molecular assembler crystal scaffold needs logic for mobility and coordinated motion whose complexity lies at or even above what would be needed for microcomponent maintenance microbots.

## Delineation to microcomponent maintenance microbots

• In the diamondoid molecular assembler concept they are supposed to be capable of self-replication given just molecular feedstock.
• Microcomponent maintenance microbots are usually incapable of self-replication. And if they are capable, they need their crystolecules supplied as vitamins.

In the diamondoid molecular assembler concept they are usually assumed to perform assembly at the first assembly level and perhaps the second assembly level at best. Assembly should happen in their internal building chamber. In the past it was usually assumed that there is just a single internal building chamber, and outside of it a volume that is somehow very well sealed off against inundation by air.

Microcomponent maintenance microbots would usually not feature any mechanosynthetic capabilities on the first assembly level. They would rather feature assembly capabilities only at the second assembly level and third assembly level.

## Large scale

What about building whole houses, skyscrapers, cities, or even giant space stations?

At large scales mass becomes increasingly relevant and may begin to pose a top-level bottleneck. But this is far off. Unplanned working is likely to emerge when time is not critical. This would look like many nanofactories operated at different locations at the same time in a semi-manual style. This is a fractal style of manufacturing at the macro scale, with very low throughput compared to what would be possible with even a single simple stratified nanofactory.