Higher throughput of smaller machinery

When production machines are made smaller, they can produce massively more product per unit of time.

[Figure: Throughput of convergent assembly – Q ... throughput, s ... side length, f ... frequency]

When going down the convergent assembly levels in an advanced gem-gum factory, one finds that the throughput capacity of the next lower assembly stage (a mono-layer that may or may not conform to flat coplanar sheets) is "always" equal to the throughput capacity of the local stage. The important point is that this mono-"layer" has a smaller total volume than the local stage. Going down further assembly levels one finds mono-"layers" of even smaller thickness (and correspondingly smaller volume), yet the throughput capacity stays the same.

("always" ... that is when assuming all levels work at the same speed and throughput is balance is not influenced by other factors).

That is a very pleasant surprise! To a first approximation, halving the size of the manufacturing robotics doubles its throughput capacity per volume. Going down from one meter to one micrometer (a factor of a million) the throughput capacity per volume thus rises a whopping millionfold (a linear scaling law).
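
A minimal numerical sketch of this linear scaling law in Python (illustrative values; the fixed tool-tip speed v and the cube-shaped unit of side length s are modeling assumptions for this sketch, not figures from this page):

 # Sketch of the linear scaling law: a robotic unit of side length s fills a
 # volume s^3, processes blocks of volume ~s^3, and runs at a fixed tip speed v,
 # so its cycle frequency scales as f = v / s.
 def throughput_per_volume(s, v=1.0):
     f = v / s        # frequency rises as size falls (same tip speed)
     q = s**3 * f     # throughput of one unit: block volume times frequency
     return q / s**3  # normalized by the volume the unit occupies -> v / s
 for s in (1.0, 1e-3, 1e-6):  # 1 m, 1 mm, 1 um
     print(f"s = {s:g} m -> relative throughput per volume = {throughput_per_volume(s):g}")
 # prints 1, 1000 and 1e+06: the millionfold rise from one meter to one micrometer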

This can't be extended arbitrarily though. Below the micrometer level some effects (discussed further below) prevent a further rise.

Basic math

  • n = 1 ... one robotic unit – carried explicitly so it does not get mixed up with other dimensionless factors
  • s ... side length of the blocks that the robotic unit processes
    s^3 ... block volume
    rho*s^3 ... block mass
  • f ... operation frequency of the robotic unit

Throughput of one robotic cell:
Q = 1n * s^3 * f
Throughput of eight robotic cells of half the size in that same volume, operating at the same speed:
(same speed means the same magnitude of velocity, not the same frequency – so the smaller cells run at double the frequency)
Q' = (2^3 * n) * (s/2)^3 * (2f)
Q' = 2 * (1n * s^3 * f)
Q' = 2Q
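
The factor of two can also be checked numerically; a small Python sketch (symbol names follow the definitions above, concrete values are arbitrary):

 # Numerical check of Q' = 2Q
 n = 1    # one robotic unit
 s = 1.0  # side length of the blocks processed by the large cell
 f = 1.0  # operation frequency of the large cell
 Q = n * s**3 * f  # throughput of one robotic cell
 # eight cells of half the size in the same volume; same tip speed -> double frequency
 Q_prime = (2**3 * n) * (s / 2)**3 * (2 * f)
 print(Q_prime / Q)  # prints 2.0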

Getting silly – questionable and unnecessary productivity levels

Now what if one took a super thin microscale (possibly non-flat) assembly mono-"layer" from pretty far down the convergent assembly stack and filled a whole macroscopic volume with many copies of it?

The answer is (in the case of general purpose gem-gum factories) that the product could not be removed/expelled fast enough. One hits fundamental acceleration limits (even for the strongest available diamondoid metamaterials), and long before that, severe problems with mechanical resonances are likely to occur.

Note that the old and obsolete idea of packing a volume full of diamondoid molecular assemblers would not tap into that potential: these devices lie below the microscale, in the nanoscale, where the useful rise of throughput density with falling machinery size is hampered by other effects.

Antagonistic effects/laws – sub-microscale

The problem that emerges at the nanoscale is twofold.

  • falling size => rising bearing area per volume => rising friction => to compensate: lower operation speed (and frequency) – in summary: lower assembly event density in time
  • falling size => rising ratio of machinery size to part size (single atoms in the extreme case) – in summary: lower assembly site density in space

Due to the nature of superlubricating friction:

  • it scales with the square of speed (halving the speed quarters the friction losses)
  • it scales linearly with surface area (doubling the area doubles the friction)

It makes sense to slow down a bit and compensate by stacking layers for level throughput balancing. A combination of halving the speed and doubling the number of stacked identical mono-"layers" halves the friction losses while keeping the throughput constant.
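
As a rough numerical sketch of that trade-off (superlubric friction losses modeled simply as power ~ bearing area * speed^2, an illustrative simplification):

 # Halving speed while doubling the stacked layers: friction halves, throughput stays.
 def friction_power(area, speed):
     return area * speed**2  # superlubric losses ~ bearing area * speed^2
 def throughput(layers, speed):
     return layers * speed   # throughput ~ number of stacked layers * speed
 A, v = 1.0, 1.0                                  # baseline: one mono-layer at full speed
 P0, Q0 = friction_power(A, v), throughput(1, v)
 P1 = friction_power(2 * A, v / 2)                # doubled total bearing area, halved speed
 Q1 = throughput(2, v / 2)                        # doubled layer count, halved speed
 print(P1 / P0, Q1 / Q0)                          # prints 0.5 1.0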

Lessening the macroscale throughput bottleneck

There are also effects/laws (located at the macroscale) that can help increase throughput density beyond the first approximation above. Details on that can be found (for now) on the "Level throughput balancing" page.

Alternate names for this scaling law as a concept

  • Higher productivity of smaller machinery
  • Productivity explosion

The thing is that higher throughput does not necessarily mean higher productivity in the sense of generating useful products.
Thus the rename to the current page name "Higher throughput of smaller machinery".

Related