Convergent assembly

From apm
Note that in a practical design the size steps would be much bigger than mere doubling steps. Also, it is not a necessity that the topmost convergent assembly levels have the same size as the whole factory device, as shown here. Devices can be made thin and flat. Coplanar layers are natural when all assembly levels operate at equal speeds.

In advanced atomically precise manufacturing systems, convergent assembly is the general process of taking small parts and putting them together into bigger parts, then taking those bigger parts and putting them together into even bigger parts, and so on.

Convergent assembly must not be confused with exponential assembly (a concept for bootstrapping atomically precise manufacturing).

Motivations for convergent assembly

  • Avoiding unnecessary disassembly when reconfiguring already produced products in a way that just swaps big chunks.
  • Allowing the assembly of unstable overhangs or otherwise impossible undercuts without scaffolds (stalactite-like structures).
  • The possibility of keeping everything in a vacuum/cleanroom until the final product release. This should not be necessary, though, and may decrease the incentive for the creation of systems that are capable of recycling.
  • Nontrivial effects on speed.

General

In an advanced gem-gum-factory the convergent assembly levels can be identified with the abstract assembly levels. Note that those are not tied to any specific geometric layout.

Stacking those levels into layers as a concrete implementation is a good first approximation (especially in the mid-range size levels, where scale invariant design holds for a decent range of orders of magnitude). It yields a nanofactory that is practical and conveniently also reasonably easy to analyze. For optimal performance (in efficiency or throughput), deviations from a design with coplanar layers may be necessary.

Both at the very small scales and at the very large scales a highly optimized nanofactory design may strongly deviate from a simple stack of layers. On very large scales the system's own mass starts to play a role, at very very large scales even abundant materials can get scarce, and in the most extreme far-off cases self-gravity kicks in.

Degree and stepsize

Two important parameters characterize convergent assembly: its "degree" and its "stepsize".

  • The "degree" of convergent assembly, in terms of the number of convergent assembly levels until cutoff, has little effect on speed! But not none.
  • The "stepsize" of (convergent) assembly, in terms of the ratio of product size to resource size in each assembly step, has a huge effect on speed. Note though that this parameter is present for all "degrees" of convergent assembly, even degree one or zero. Meaning it is also present in productive nanosystems that lack convergent assembly. (In systems with more than one convergent assembly level unequal stepsizes can occur.)
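The interplay of these two parameters can be sketched with a toy model. All figures below (placement rate, sizes) and the scaling assumptions are hypothetical illustrations, not numbers from this article: each level is assumed to join stepsize³ smaller parts into one bigger part, and manipulator operation frequency is assumed to scale inversely with part size (scale invariant design).

```python
import math

def convergent_assembly_time(product_size_m, part_size_m, stepsize,
                             ops_per_sec_at_1nm=1e6):
    """Toy model: time for one product to climb the assembly hierarchy.

    Hypothetical assumptions, for illustration only:
    - each level joins stepsize**3 smaller parts into one bigger part
    - operation frequency scales inversely with part size
      (scale invariant design: bigger manipulators cycle slower)
    """
    n_levels = math.ceil(math.log(product_size_m / part_size_m, stepsize))
    total_time, size = 0.0, part_size_m
    for _ in range(n_levels):
        size *= stepsize                          # parts grow by one step
        ops_per_sec = ops_per_sec_at_1nm * (1e-9 / size)
        total_time += stepsize**3 / ops_per_sec   # placements at this level
    return n_levels, total_time

# Doubling steps: many levels, 1 nm parts up to a 10 cm product.
levels2, t2 = convergent_assembly_time(0.1, 1e-9, 2)
# Much bigger steps: far fewer levels, but stepsize**3 placements per level.
levels32, t32 = convergent_assembly_time(0.1, 1e-9, 32)
```

Because the per-level time grows geometrically with part size in this model, the sum is dominated by the topmost level: adding more levels below (raising the degree) barely changes the total, while the stepsize enters cubed and therefore dominates.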

Influence of the degree of convergent assembly on throughput speed

Convergent assembly per se is not faster than just using the highly parallel bottom layer(s) to assemble the final product in one fell swoop. Assembling the final product in one fell swoop right from a naive general purpose highly parallel bottom-most layer would be just as fast as a system with the same bottom layer that has a convergent assembly hierarchy stacked on top. There are, however, indirect aspects of convergent assembly that provide speedups.
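One way to make this plausible in numbers: in steady state the mass throughput of the whole stack equals the mass throughput of the bottom-most mechanosynthesis layer, since every level above merely repackages the same mass flow into bigger parts. The sketch below uses entirely hypothetical figures (site pitch, placement rate, part mass), just to show the shape of the calculation.

```python
def bottom_layer_throughput_kg_s(area_m2, site_pitch_m=1e-9,
                                 placements_per_sec=1e6,
                                 part_mass_kg=1e-24):
    """Mass flow out of a highly parallel bottom layer (hypothetical numbers).

    In a balanced pipeline every convergent assembly level stacked on top
    passes on this same kg/s, so stacking levels does not change throughput.
    """
    n_sites = area_m2 / site_pitch_m**2           # manipulators in the layer
    return n_sites * placements_per_sec * part_mass_kg

throughput = bottom_layer_throughput_kg_s(1.0)    # for a 1 m^2 bottom layer
```

Whether zero or ten convergent assembly levels sit on top of such a layer, a balanced pipeline cannot output more than this bottom-layer figure per unit of factory footprint.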

Speedup of recycling (recomposing product updates) by enabling partial top down disassembly

  • Simpler decomposition into standard assembly-groups that can be put together again in completely different ways.
  • Automated management of bigger logical assembly-groups

Full convergent assembly all the way up to the macro-scale allows one to perform rather trivial automated macroscopic reconfigurations with the available macroscopic manipulators. Otherwise it would be necessary to fully disassemble the product almost down to the molecular level, which would be wasteful in energy and time. In short: just silly.

Choosing to leave out just the topmost one to three convergent assembly layers could provide the huge portability benefit of a flat form factor (without significant loss of reconfiguration speed). Alternatively, with a bit more design effort, the topmost convergent assembly layers could be made collapsible/foldable.

Convergent assembly makes low-level specialization possible => speedup

Putting the first convergent assembly layers right above the bottom layers allows for specialized production units (mechanosynthesis cores specialized for specific molecular machine elements) that can operate faster than general purpose production units. The pre-produced standard parts get redistributed from where they are made to where they are needed by an intermediary transport layer and are then assembled by the next layer in the convergent assembly hierarchy.

Component routing logistics

Between the layers of convergent assembly there is the opportunity to nestle in transport layers that are potentially non-local.

If necessary, the products output by the small assembly cells below one bigger associated upper assembly cell may be routed beyond the bounds of that assembly cell lying directly above them, provided the geometric layout decisions allow this (which seems relatively easy in, e.g., a stratified nanofactory design). This allows the upper bigger assembly cells to receive more part types than the limited number of associated lower special purpose mill outputs alone would allow. The low lying crystolecule routing layer is especially critical in this regard.

Comparison to specialization on the macroscale

In today's industry of non atomically precise production, convergent assembly is the rule, but in most cases it is just not fully automated. An example is the path from raw materials to electronic parts to printed circuit boards and finally to complete electronic devices. The reason for convergent assembly here is that the separate parts require many specialized production sites. The parts just can't be produced directly in place in the final product.

Usually one needs a welter of completely identical building components in a product; connection pins are a good example. Single atoms are completely identical too, but they lack variety in their independent function. Putting together standard parts in place with a freely programmable general purpose manipulator amounts to a waste of space and time. General purpose manipulators are misused that way.

Even in general purpose computer architectures there are, if one takes a closer look, specially optimized areas for special tasks. Specialization at a higher abstraction level is usually removed from the hardware and put into software.

(In a physically producing personal fabricator there is a far wider palette of possibilities for physical specialization than in a data-shuffling microprocessor, since there are so many possible diamondoid molecular elements that can be designed.)

Bigger assembly groups provide more design freedom and, for better or worse, the freedom of format proliferation. Here the speed gain from specialization drops and the space usage explodes exponentially because of the combinatoric possibilities. For this reason this is the place where to switch back to hardware generalization, compensated by newly introduced software specialization.

Thus in a personal fabricator most if not all of the specialization is located in the bottom-most layers. Further up the assembly levels, specialization is no longer a motivation for convergent assembly; some of the other motivations may prevail. Higher convergent assembly levels (layers) quickly lose their logistic importance (the transport distances shrink relative to the part sizes). The main distribution action takes place in the first three logistic layers.

Side-notes:

Related

[Todo: investigate this further]

External links