Why ultra-compact molecular assemblers are too difficult

(Image caption) OUTDATED CONCEPT: Several highly conceptual illustrations of diamondoid molecular assemblers. Common traits: just one or two manipulators, a gas-tight hull enclosing everything, full self-replicativity, very small size (~100 nm), and more or less optionally some form of mobility (some swimming, others unspecified).
(Image caption) OUTDATED CONCEPT: Artistic depiction of a mobile assembler unit capable of self-replication (linked to a "crystal" of assemblers and thus not free floating). An outdated idea.

Up: Molecular assembler

Why too difficult as a near-term bootstrapping pathway

Note: this is in the context of the direct path.

The molecular assembler concept comes strongly tied to
the hard insistence that a monolithic self-replicating unit
must unconditionally fit into a very small given volume, typically (100 nm)³ or a bit above.
This volume is determined by the throughput of a single macroscale SPM needle tip.
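
As a rough plausibility sketch of that throughput argument (hedged: the placement rate below is an assumed round number, and error correction is ignored), consider how long a single tip would need for a (100 nm)³ diamondoid block:

 # Rough plausibility sketch, not a measurement: why single-tip SPM
 # throughput pins the assumed assembler volume near (100 nm)^3.
 atoms_per_nm3 = 176        # atomic number density of diamond, ~176 atoms/nm^3
 side_nm = 100              # proposed cube side length in nm
 placement_rate = 1.0       # ASSUMED atom placements per second (optimistic)

 atoms_total = atoms_per_nm3 * side_nm**3
 years = atoms_total / placement_rate / (3600 * 24 * 365)
 print(f"{atoms_total:.1e} atoms -> about {years:.1f} years per unit")
 # -> ~1.8e8 atoms, i.e. about 5.6 years per unit even at this optimistic rate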

Means of non-replicative pre-scaling that would lift this restriction are not considered in molecular assembler concepts.
This is perhaps an additional reason for criticism of the direct path, which is often associated with molecular assemblers.
See: Bootstrapping, Bridging the gaps, Parallel macroscale SPM scaling
Asking for a smaller volume makes the problem nonlinearly harder (hyperbolically so; there is a wall).
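
To make that "wall" picture concrete, here is a minimal illustrative formulation (the functional form and the minimal volume V_min are assumptions for illustration, not derived results): with V_min the smallest volume in which a monolithic self-replicating unit is physically possible at all, design difficulty plausibly behaves like

 D(V) \propto \frac{1}{V - V_{\mathrm{min}}} \qquad \text{with} \qquad D(V) \to \infty \;\text{as}\; V \to V_{\mathrm{min}}^{+}

That is, difficulty diverges at V_min rather than merely growing exponentially as the demanded volume shrinks.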

Unreliability of estimates due to inapplicability of EE

For good reason there is no book like Nanosystems that analyzes molecular assemblers.
Work on (guess)estimating the minimal molecular assembler size is limited, likely for the same reason.
All the work that does exist may well suffer from underestimation of size (and of building difficulty).
That is not a failure of analysis but a fundamental lack of reliable analyzability.
The "good reason" is that exploratory engineering (EE) can't be well applied here because …

  • … there is no large safety margin (forced by the assumed throughput constraints in proto-system production)
  • … errors compound heavily due to subsystem interdependencies (see the sketch after this list)
  • … the degree of uncertainty is usually not quantifiable
  • … there is even qualitative uncertainty, as one might eventually find out that
    one unexpectedly needs some more subsystems that so far were not considered at all.
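
A minimal sketch of the compounding point (hedged: both numbers are made-up assumptions, not derived from any concrete assembler proposal): if each of N tightly coupled subsystems contributes an independent multiplicative uncertainty factor f to the size estimate, the overall spread grows like f^N.

 # Illustrative only: compounding of multiplicative size-estimate errors
 # across interdependent subsystems. Both numbers are ASSUMPTIONS.
 per_subsystem_factor = 1.5   # assumed size uncertainty per subsystem (+/-50%)
 n_subsystems = 10            # assumed number of tightly coupled subsystems

 total_spread = per_subsystem_factor ** n_subsystems
 print(f"overall size-estimate spread: about {total_spread:.0f}x")
 # -> about 58x: a "(100 nm)^3" guess could be off by more than an order
 #    of magnitude, and that is before any qualitative surprises.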

Because exploratory engineering fails in this context,
predictions about the minimal size of an ultra-compact monolithic replicating nanobot
are for the most part a house of cards on which one should probably not base decisions.

Much the same holds for a (more realistic and promising) early nanosystem pixel (direct path).
But that one can simply grow to whatever size it naturally needs.

It gets even more unpredictable going to early nanofactories with assembly level 3 and above.
Side note: the linked page "assembly level 3" discusses the topic in the context of
advanced far-term systems, which are much more amenable to exploratory engineering.

For a (not very useful) example of an unintentional analysis of molecular assemblers see
Chris Phoenix's "Primitive Nanofactory Design" (October 2003) on the page
Discussion of proposed nanofactory designs.
That design is pretty much about molecular assemblers tacked to a surface:
small, ultra-compact, self-replicative units at the very base.
Putting focus on either more near-term or more far-term system analysis
is a more productive use of time than putting focus on
the maximally blurry spot in the middle of the bootstrapping pathway.

Consequences of system isolation & predefined size

Lack of system modularity (BIG ISSUE)

Squeezing a system into the absolute minimal space/volume tends to make it non-modular.
It causes a lot of conflation of concerns, with a long trail of consequences.

More complicated design-time expandability

If more space is needed, the whole thing needs a redesign.
Parametric designs are possible, but they are not as trivial as just
expanding an open grid of tracks to gain the additional space needed,
as one could in an early nanosystem pixel (direct path).
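
As a toy contrast (all names here are hypothetical, purely for illustration): in an open grid layout, gaining space is one local append, while in a tightly packed layout every subsystem position depends on its neighbors, so a size change cascades into a redesign.

 # Toy illustration (hypothetical names): design-time expandability of an
 # open grid layout vs. a tightly packed monolithic layout.

 # Open grid (nanosystem-pixel style): more space is one local change.
 grid = [["track"] * 4 for _ in range(4)]
 grid.append(["track"] * 4)   # appended a row; nothing else moves

 # Packed monolith: positions are chained, so growing the "mill" by one
 # unit shifts everything downstream, i.e. forces a global re-layout.
 mill_w = 3
 conveyor_x = mill_w          # depends on mill width
 hull_x = conveyor_x + 1      # depends on conveyor position, and so on
 print(grid[4], conveyor_x, hull_x)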

Some concrete issues - not necessarily major

Molecular assemblers often come with the requirement of an expanding vacuum hull,
a quite complicated subsystem that adds to the replication backpack overhead.
This can be entirely avoided. See: Vacuum lockout

Rather minor: mobile molecular assemblers would need a potentially
more complicated motion system to move the whole monolithic unit around.

Capabilities and limits of SPM technology

Progress towards 3D capabilities

Recent experiments with qPlus nc-AFM have demonstrated:
– subatomic-resolution scanning of the tops of very flat-topped
tetramantane molecules that are higher than the picked-up CO molecule used for scanning, and
– the capability to pick up a C60 and covalently fuse it to a 3D graphene nanoribbon.
So going 3D has, as of 2025, finally been rudimentarily shown to be experimentally possible.
(wiki-TODO: Add references to these two experimental demonstration papers.)

Some form of SPM focus fusion might even allow
subatomic-resolution qPlus nc-AFM scanning of the tops of surfaces
that are not extremely flat, e.g. across an atomic step.

STM can image atomic steps, but the structures are more often than not
not properly interpretable by human intuition.
Rather, they tend to be highly deceiving, which can be a big problem.

Anything notably beyond single atomic steps will stay closed off for both AFM and STM
for a long time still, as SPM needle tips of current technology are
pretty much giant balls rather than the sharp tips that
many (almost all) illustrations misleadingly suggest.
See page: Tip surface folding shadow
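
A hedged back-of-the-envelope illustration of that "giant ball" point (the tip radius is an assumed typical value, not a measurement): a roughly spherical tip of radius R tracing a feature of height h smears it laterally over about 2·sqrt(R² − (R − h)²).

 import math

 # Illustrative only: lateral smearing ("tip dilation") from a blunt,
 # roughly spherical SPM tip. Both numbers are ASSUMED typical values.
 R = 20.0   # assumed tip apex radius in nm (commercial tips often ~10-30 nm)
 h = 0.3    # height of a single atomic step in nm (order of magnitude)

 # The sphere contacts the step edge sideways before its center is there:
 x = math.sqrt(R**2 - (R - h)**2)
 print(f"apparent lateral smearing: about {2 * x:.1f} nm")
 # -> about 6.9 nm, i.e. tens of atomic spacings, hence "giant ball"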

Remaining severe limitations even in the best case

Still, even with all of that solved, anything below the topmost surface,
and everything in the SPM tip's tip surface folding shadow,
is permanently buried and inaccessible to imaging.

There are only very few viable nondestructive methods (if any at all)
to find out what is going on below the disjoint imageable patches
of the very topmost imageable surface.

Yet finding out what is going on beneath the topmost surface is crucially important
for debugging early prototype systems.

This is related to data IO, and it is one of the biggest challenges.
The more distributed, flat, and spread out the early bootstrapping systems are,
the more accessible they are to our current suite of analytic capabilities.

This also opens up more economic side-use opportunities.

Concluding with identification of much better approaches

Alternative systems that follow these constraints
(distributed, flat, and spread out) will be much more viable.
See pages:
Early diamondoid nanosystem pixel (direct path) & Mixed path

When too difficult as a far-term ideal target

With advanced gemstone-based APM and nanofactories
already being a fact of reality, designing and building things becomes much easier.
So there is a good chance of someone coming along to design a compact
self-replicating KSRM-type molecular assembler just to show that it is
indeed physically possible (albeit not economically sensible).

Such a unit will likely be notably bigger than the 100 nm side-length cube proposals.
Mostly because …
– size is not a problem at that point, and …
– designing bigger ones is hyperbolically easier.

This of course leads to safety concerns. See: Reproduction hexagon
By that point we may already have had to deal with more severe calibers of malicious nanosystems.

No matter how far the technology advances, there will always be a certain volume below which
any kind of monolithic self-replicative molecular assembler is truly fundamentally impossible.
We have a true singularity here (compare the illustrative formula further up),
and thus a hyperbolic rather than a merely exponential drop in difficulty with increasing size.

It may well be that the current ultra-compact (100 nm)³ molecular assembler proposals
remain hard even with mature technology (and AI designers).
Time may tell.

Related