Why ultra-compact molecular assemblers are too inefficient

Up: Molecular assembler

Why too inefficient as a far-term ideal target

For a discussion regarding near-term systems see further below.

Too low reaction density in spacetime

Mechanochemistry & mechanosynthesis has the (overcompensatable) disadvantage compared to liquid phase chemistry that reaction sites are far apart, due to the volume of machinery per reaction site. This volume scales with the cube of the linear size of the machinery per reaction site. Speeding up to compensate for that is not an option, as friction grows quadratically with speed. This might be another reason why critics assumed GHz frequencies (which were never proposed) to be necessary for practical throughput, which would lead to way too much waste heat.
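To make the two scalings concrete, here is a minimal back-of-envelope sketch in Python (not from this page; all numbers are illustrative placeholders, not proposed design values):

    def reaction_site_density(machinery_size_nm):
        """Reaction sites per cubic nanometer, assuming one reaction site
        per block of machinery. Density falls with the cube of size."""
        return 1.0 / machinery_size_nm**3

    def friction_power(speed_mm_s, k=1.0):
        """Relative friction losses, assuming dissipation that grows
        quadratically with sliding speed (P ~ k * v^2)."""
        return k * speed_mm_s**2

    # Doubling the machinery size per reaction site cuts density 8-fold ...
    print(reaction_site_density(10) / reaction_site_density(20))  # 8.0
    # ... and compensating with an 8x speedup costs 64x the friction losses.
    print(friction_power(8 * 5) / friction_power(5))              # 64.0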

To get to a higher spatial reaction site density one eventually needs to specialize the mechanosynthesis and hard-code the operations. This leads to factory-style systems with production lines for specialized parts.

Specialized assembly lines pay off big time. Specialized assembly lines are not necessarily restrictive in part types. More specialized assembly lines just make a minimal complete nanofactory pixel a bit bigger. Well, there will be a few software-controllable general purpose mechanosynthesis units, akin to just the manipulator part of molecular assemblers. There is no need for one "replication backpack" per manipulator. And there will be only rather few of these compared to the other, specialized ones.

As an interesting side note: streaming kinds of motions are an option, but since inertia plays little role at the proposed 5 mm/s and diamond stiffness is high, smooth motions along curves are by far not as critical as in macroscale factories. Smoother trajectories may still be preferable; a thing to investigate.

Related: There is limited room at the bottom, Atom placement frequency,
Bottom scale assembly lines in gem-gum factories, Molecular mill

Replication backpack overhead (BIG ISSUE)

Every assembler needs a "replication backpack". That is: for each general purpose mechanosynthesis manipulator stage (plus stick-n-place stage), the entire machinery for self-replication needs to be added. This makes the already big volume of the non-specialized manipulator blow up further by a significant factor and makes the reaction site density drop down to abysmally low levels. The result is an extremely slow system, even when pushed to the limits, which makes it run hot and wasteful.
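A minimal sketch of that blow-up; the backpack-to-manipulator volume ratio below is an assumed placeholder, not a figure from any actual proposal:

    manipulator_volume = 1.0   # one general purpose stage (relative units)
    backpack_factor = 10.0     # assumed: replication machinery ~10x the manipulator

    volume_per_site_bare = manipulator_volume
    volume_per_site_assembler = manipulator_volume * (1.0 + backpack_factor)

    # Reaction site density is inversely proportional to volume per site:
    print(volume_per_site_assembler / volume_per_site_bare)  # 11.0 => density drops 11-fold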

Mobility subsystem

In the case of mobile molecular assemblers (most proposals), the replication backpack also includes a mobility system for every single mechanosynthesis manipulator stage, adding further bloat. For an example of non-mobile molecular assemblers (featuring all the issues discussed here) see Chris Phoenix's early "nanofactory" analysis. (wiki-TODO: add link) It is not really a nanofactory, as specialization, optimization, and the open modularity of a factory are all absent.

Level throughput balancing mismatch

If the proposal even suggests avoiding the likely hellishly difficult direct in-place mechanosynthesis, then (due to the by-definition ultra-compactness requirement) there is only space left for a single stick-n-place crystolecule manipulator. This manipulator is massively starved. Most of the time it needs to wait around for the next crystolecule to be finished.
See related page: level throughput balancing
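A toy duty-cycle estimate of that starvation; all rates below are made-up placeholders, only their ratio matters:

    atoms_per_crystolecule = 10_000   # assumed typical part size
    mechanosynthesis_rate = 1_000     # atoms per second (assumed)
    stick_n_place_time = 0.1          # seconds per part placement (assumed)

    synthesis_time = atoms_per_crystolecule / mechanosynthesis_rate   # 10 s per part
    duty_cycle = stick_n_place_time / (synthesis_time + stick_n_place_time)
    print(f"stick-n-place stage busy {duty_cycle:.1%} of the time")   # ~1.0%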

Abusive underuse of general purpose capability

In a productive nanosystem of larger scale there will be a need for incredibly large numbers of exactly equal standard parts. Using a general purpose assembler for cranking out gazillions of exactly equal standard parts does not make use of the general purpose capability at all.

Unnecessarily long travel distance for molecule fragments

A general purpose mechanosynthesis manipulator is necessarily significantly bigger than a highly optimized special purpose station hard-coded for a specific mechanosynthetic operation. By the way: hard-coding only means lack of live reprogrammability during runtime. Disassembly and reassembly in a nanofactory can reprogram that hard-coding in a good design.
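A rough sketch of the resulting travel time penalty; the machine sizes below are assumptions for illustration, only the ~5 mm/s tip speed is the often-cited figure:

    tip_speed = 5e-3               # m/s, the often-cited ~5 mm/s
    general_purpose_size = 100e-9  # m, assumed general purpose manipulator reach
    specialized_size = 10e-9       # m, assumed hard-coded mill station reach

    # Fragment travel time per operation scales with machine size:
    print(general_purpose_size / tip_speed)  # 2e-05 s per move
    print(specialized_size / tip_speed)      # 2e-06 s per move, 10x faster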

Analogy with electronic chip design having large specialized subsystems for efficiency

Molecular assemblers are like computronium in the sense of maximal small-scale homogeneity. Closest to computronium come PLAs (programmable logic arrays), which can implement any logical function and, with feedback, can form finite state machines. In practice there are FPGAs, which have a little bit of specialization beyond a plain monolithic PLA. FPGAs are practical and useful in niche application cases.

The inefficiency overhead of assemblers over nanofactories is likely much bigger, though.
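To illustrate the analogy in code: a PLA-style programmable sum-of-products can compute any boolean function, but it drags the whole term-matching machinery along for every evaluation, while a hard-coded gate carries none of it. A minimal sketch (illustrative only):

    from itertools import product

    def pla(and_plane, inputs):
        """OR together all product terms that match the inputs.
        Each term is a dict {input_index: required_value}."""
        return any(all(inputs[i] == v for i, v in term.items())
                   for term in and_plane)

    # Programmed to compute XOR of two inputs:
    xor_terms = [{0: 1, 1: 0}, {0: 0, 1: 1}]
    for a, b in product((0, 1), repeat=2):
        assert pla(xor_terms, (a, b)) == bool(a ^ b)

    # The hard-coded equivalent carries none of the programmability overhead:
    hardcoded_xor = lambda a, b: bool(a ^ b)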

Mismatch to physical scaling laws

The scaling law of higher throughput of smaller machinery tells us that a well designed system can have very high throughput densities (like an on-chip nanofactory).
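A sketch of that scaling law, assuming each machine handles parts of roughly its own size and the tip speed stays constant:

    # Hedged sketch: frequency ~ speed/size, product volume per cycle ~ size^3,
    # machines per volume ~ 1/size^3, hence throughput per volume ~ speed/size.

    def throughput_density(size_m, tip_speed=5e-3):
        frequency = tip_speed / size_m          # cycles per second
        product_per_cycle = size_m**3           # product volume per cycle
        machines_per_volume = 1.0 / size_m**3   # dense packing assumed
        return frequency * product_per_cycle * machines_per_volume

    # Shrinking the machinery 10x raises throughput density ~10x:
    print(throughput_density(10e-9) / throughput_density(100e-9))  # ~10.0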

Why too inefficient as a near-term bootstrapping pathway

This is in the context of the direct path. Similar points to the above hold, with a few caveats. Efficiency is obviously less important in early systems; there the difficulty of building them is a bigger factor.
See: Why ultra-compact molecular assemblers are too difficult

Still, if inefficiencies make speeds drop by a giant amount, as with the replication backpack overhead, it is a problem.

Obviously, full-blown hyperspecialized assembly lines are totally not an option in early systems, BUT …

  • there is no need to have one replication backpack per mechanosynthetic manipulator stage
  • there is no need to have a closed system (adjacent systems can interleave)
  • there is no need for an expanding vacuum shell – See: Vacuum lockout
  • Also, systems can be granted the size they actually want to have to become self-replicative,
    instead of stubbornly attempting to force self-replicativity into a package of predefined size
    given by one's non-replicative preproduction limits using a single macroscale SPM needle-tip.

When going for such a more sensible early nanofactory pixel (direct path), what one gets …

  • is an early, rather inefficient, quite compact but not ultra-compact nanofactory pixel
  • is most definitely no longer a bunch of "molecular assemblers tacked to a surface".

See: Why nanofactories are not molecular assemblers tacked to a surface.

Nonlinear increase in inefficiency and building difficulty:
For self-replicating systems, their inefficiency and their "self-replication for the sake of self-replication" factor grow nonlinearly with system compactness. Well, it is worse for system construction difficulty. There the nonlinearity is hyperbolic, with a singularity: beyond a certain compactness it is fundamentally impossible. And with pre-defining the size (often suggested: a cube with 100 nm side length) and insisting that full monolithic self-replicativity must fit within, the difficulty might be stratospheric. Especially with early primitive means.
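One hedged way to picture this, with a purely illustrative functional form and an assumed minimal feasible size below which self-replication cannot fit:

    # Hedged toy model: construction difficulty diverging hyperbolically
    # as system size approaches an assumed minimal feasible size.

    s_min = 50e-9   # assumed minimal side length that can still hold self-replication

    def difficulty(side_length_m):
        if side_length_m <= s_min:
            return float("inf")               # fundamentally impossible below s_min
        return 1.0 / (side_length_m - s_min)  # hyperbolic, singular at s_min

    print(difficulty(1e-6) < difficulty(100e-9) < difficulty(60e-9))  # True
    print(difficulty(40e-9))                                          # inf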

Related

TODO