Reproduction hexagon

From apm
This article defines a novel term (that is hopefully sensibly chosen). The term is introduced to make a concept more concrete and to clarify its interrelationships with other topics related to atomically precise manufacturing. For details go to the page: Neologism.
The reproduction hexagon, as an analog to the combustion triangle, lists all the requirements that must be met simultaneously for unbounded and uncontrollable growth to happen. [todo: add SVG]
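To make the combustion-triangle analogy concrete, the hexagon can be sketched as a conjunction of six boolean requirements (the field names below are hypothetical, taken from the section headings of this article): uncontrolled growth is possible only when all six hold at once.

```python
# Hypothetical sketch: the reproduction hexagon as a conjunction of six
# requirements. Removing "sufficient_adaptivity" yields the
# "replication pentagon" discussed in this article.
from dataclasses import dataclass, fields

@dataclass
class ReproductionHexagon:
    replication_capability: bool
    building_block_availability: bool
    energy_supply: bool
    blueprint_data_mobility: bool
    replicator_mobility: bool
    sufficient_adaptivity: bool

    def uncontrolled_growth_possible(self) -> bool:
        # Like the combustion triangle: removing ANY one side
        # prevents the runaway scenario.
        return all(getattr(self, f.name) for f in fields(self))

# A nanofactory-style system: no mutations, replication only as a
# whole macroscopic device.
nanofactory = ReproductionHexagon(
    replication_capability=True,
    building_block_availability=True,
    energy_supply=True,
    blueprint_data_mobility=True,
    replicator_mobility=False,    # replicates only as a whole macroscopic device
    sufficient_adaptivity=False,  # identical copies, no compact evolution
)
print(nanofactory.uncontrolled_growth_possible())  # -> False
```

The AND-structure is the point of the analogy: a designer only needs to reliably deny one of the six sides to rule out runaway growth.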

Reproduction vs Replication

The term "reproduction" was consciously chosen to point out that sufficient adaptability (e.g. via mutations) is included in the hexagon. Removing this requirement from the hexagon, one ends up with the replication pentagon, where the term "replication" is used in the sense of exactly identical copies.

Adaptability by mutations is a property that today is observed only in living systems. It is not at all necessary for advanced productive nanosystems like nanofactories.

The six requirements

Replication capability

The system needs some minimal form of self-replication capability in the first place. See: Self replication (TODO: elaborate)

Building block availability

Early APM systems depend on prefabricated building blocks such as pre-folded foldamers.

Scavenging building material from arbitrary sources can be extremely difficult. (See: Unknown matter claimer) There are, however, some giant natural reservoirs of standardized building blocks.

  • One is the carbon dioxide in the atmosphere and hydrosphere (this carries no energy).
  • Another is the sugar in the blood of animals and humans (this carries energy when combined with oxygen).
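The energy difference between these two reservoirs can be illustrated with rough standard enthalpies of combustion (approximate literature values; the exact figures are not from this article):

```python
# Approximate standard enthalpies of combustion (kJ/mol, rounded
# literature values). Illustrates why atmospheric CO2 carries no
# chemical energy while blood sugar (glucose) does when combined
# with oxygen; ethyne is included as an energy-rich feedstock
# mentioned in the energy-supply section.
combustion_energy_kj_per_mol = {
    "CO2":     0.0,     # already fully oxidized -- no energy to extract
    "glucose": 2803.0,  # C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O
    "ethyne":  1301.0,  # C2H2 + 2.5 O2 -> 2 CO2 + H2O
}

for block, energy in combustion_energy_kj_per_mol.items():
    status = "energy carrier" if energy > 0 else "energy sink (needs input)"
    print(f"{block:8s} {energy:7.1f} kJ/mol  {status}")
```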

Advanced macro-sized APM systems do not need any scarce elements for self-replication, and a dependence on scarce elements cannot be enforced as a safeguard either (as is often suggested in naive proposals). Advanced APM systems are limited in self-replication by some of the other requirements instead.

Energy supply

If carbon dioxide is used as the resource material, solar energy is needed to break up its bonds. This poses a challenge for compact autonomous self-replicators.

For microscopic autonomous bots there is the general issue that primary energy sources are usually macroscopic in nature.

Taking the biosphere as an example, the limited supply of energy is a harsh design restriction that has a major impact on its architecture.

  • Plants can be seen as solar cells (or deep-sea bacteria as primary chemo-reactors).
  • The food chain can be seen as an energy distribution/delivery network for the highest non-human predators. This chain grew from the bottom up, with the bottom levels supporting the parasitic levels higher up.

In advanced gem-gum factories (designed to be capable of macroscopic self-replication) the building material can supply the necessary energy along with it (as is the case for e.g. ethyne). Ideally, power consumption and production balance out to zero. This is easiest if the (e.g. desktop-sized) factories are connected to a global balancing network.
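A minimal sketch of such a balancing network, with entirely made-up power figures: factories processing energy-rich feedstock feed their surplus into the net, others draw from it, and the network-wide sum ideally nets to zero.

```python
# Hypothetical sketch: a global balancing network netting out the power
# of many desktop-sized gem-gum factories. Positive values = surplus
# fed into the net (e.g. from processing energy-rich feedstock such as
# ethyne); negative values = draw from the net. All figures invented.
def net_balance(factory_power_kw: list[float]) -> float:
    """Return the network-wide surplus (+) or deficit (-) in kW."""
    return sum(factory_power_kw)

factories = [+2.5, -1.0, -0.8, +0.3, -1.0]  # made-up per-factory figures
surplus = net_balance(factories)
print(f"network surplus: {surplus:+.1f} kW")  # -> +0.0 kW when ideally balanced
```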

In the very far term one might speculate about a multitude of self-replicative "parasites" leeching on the power distribution/balancing networks. But due to the presence of high intelligence the situation is completely different from the natural food chain. Parasites would not stack up into impressive new creatures like lions and sharks. In ultra-advanced artificial systems, new low-level parasites that start to leech on previously untapped (chemomechanical) power lines would only drag the top level down, so they would be heavily fought and would need to be quite elaborate (and big) to survive.

Blueprint data mobility

Copying the whole blueprint for an entire macroscopic replication-capable production device into each of its microscopic constituent units (as biology does) is obviously not something one would want to recreate in artificial systems. With the fast wired and wireless data transmission already available today (2017), excessive blueprint copying becomes an unnecessary waste of performance in several dimensions.
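A back-of-the-envelope comparison with entirely made-up figures shows why the biological strategy is wasteful for artificial systems: per-unit storage multiplies the blueprint size by the number of units, while a network fetch pays the transfer cost once.

```python
# Made-up figures: storing a full device blueprint in every microscopic
# constituent unit (the biological strategy) versus fetching it once
# over a fast link (the artificial strategy).
BLUEPRINT_BITS = 8e12    # hypothetical 1 TB blueprint
UNITS_PER_DEVICE = 1e12  # hypothetical microscopic units per device
LINK_BITS_PER_S = 1e9    # ~gigabit link, available today

redundant_storage_bits = BLUEPRINT_BITS * UNITS_PER_DEVICE
single_transfer_s = BLUEPRINT_BITS / LINK_BITS_PER_S

print(f"storage if every unit holds a copy: {redundant_storage_bits:.1e} bits")
print(f"one-time transfer over the link:    {single_transfer_s:.0f} s")
```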

Often not even a single copy per macroscopic replicative production device is necessary (even less so for passive products), but keeping everything on a centralized global server does increase existential risk and does undermine/sacrifice the desirable "disaster-proof" property that advanced APM systems have to offer.

Replicator mobility

The current far-term goal for advanced productive nanosystems features self-replicative capability only as a whole macroscopic device. Earlier systems may feature more compact self-replicative capability, but they lack in "building block availability" since they need prefabricated artificial foldamer blocks as "vitamins".

Sufficient adaptivity

Compact evolution via "random" mutations (for expansion into an ecological niche and robustness against environmental changes) is probably the point where artificial systems differ most severely from life. It is the main point that differentiates mere identical replication from adaptive reproduction.

There is no desire or need to program compact-evolution-like properties into productive nanosystems (just as there is no such desire in today's software). This does not hold for big non-compact systems though.

Only on the highest complexity levels is there something like "real technological evolution", and technological progress pushes it further up, not down. Today (2017) "technological evolution" is still mostly driven by human minds. But with decreasing isolation of computing systems from the "outside environment" (more sensors) and improving AI, this might soon change.

Evolution in general (both natural and technological) has aspects of brute-force breadth search, due to a lack of knowledge about where to look when standing at one particular far-out edge of knowledge. If we knew where to look from that particular edge, it would be directed design, not evolution. This is similar to the situation with unexpected/surprising discoveries in fundamental physics and science.

  • Science & research has some analogy to evolution (in both cases the attack happens where understanding is most lacking).
  • Engineering and development has some analogy to directed design (there is actually a decent understanding of what is being done).

  • Science and research produce abstract models based on the observation of physical systems
  • Engineering and development produce physical systems based on the application of abstract models

Since science & research (analogous to evolution) sit at the forefront bleeding edge of knowledge, and engineering and development (analogous to design) come only after the models produced by science and research have solidified enough to be applicable, one could judge that directed design was, is, and always will be wrapped in a shell of undirected evolution.

Conversely, newly engineered and developed (designed) physical systems extend our observational tools, pointing to new, further-out borders of knowledge in specific designed directions and providing new endpoints where one again does not know in which direction to look. Thus the complexity level at which "technological evolution" is applied will (on average) always go up, not down.

On a side note, evolution is somewhat complementary to exploratory engineering.

  • At the very furthest edge of knowledge we encounter things that we can do but do not yet understand: the surprise bag of evolution.
  • At the very innermost core of knowledge (wisdom?) we encounter things that we can understand but are not yet able to build: the reliable orientation helper called "exploratory engineering".

Resilience against harsh environmental conditions

Related

External links