Higher throughput of smaller machinery

Revision as of 13:57, 1 April 2021

[Illustration: Q ... throughput, s ... side-length, f ... frequency]
(wiki-TODO: Resolve the issue with the text in this illustration!)

When production machines are made much smaller,
they can produce much more product per unit time.

Basic math

  • n = 1 ... number of robotic units (set to one; kept explicit so it is not mixed up with other dimensionless numbers)
  • s ... sidelength of the blocks that the robotic unit processes
    s^3 ... volume
    rho*s^3 ... mass
  • f ... frequency of operation of robotic unit
  • v ... constant speed

Original size

Throughput of one robotic cell:
Q = (n * s^3 * f)

Half size

Throughput of eight robotic cells that

  • are two times smaller each
  • fill the same volume
  • operate at the same speed:

(same speed means same magnitude of velocity, not same frequency)
Q' = (n' * s'^3 * f') where n' = 2^3 n and s' = s/2 and f' = 2f
Q' = (2^3 n) * (s/2)^3 * (2f)
Q' = 2 * (n * s^3 * f)
Q' = 2Q

1/10th size

Throughput of a thousand robotic cells that

  • are ten times smaller each
  • fill the same volume
  • operate at the same speed:

Q'_10 = (10^3 n) * (s/10)^3 * (10f)
Q'_10 = 10 * (n * s^3 * f)
Q'_10 = 10Q

Supplemental: Scaling of frequency

The scaling of frequency assumed above is rather intuitive:
going back and forth over half the distance at the same speed gives double the frequency.
More formally:
f = v/(constant*s) ~ v/s
f' ~ v/s' ~ v/(s/2) ~ 2f
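The derivations above can be sanity-checked numerically. A minimal sketch (function names are illustrative, not from this wiki):

```python
def throughput(n, s, f):
    """Total throughput Q = n * s^3 * f (processed volume per unit time)."""
    return n * s**3 * f

def scaled_throughput(k, n=1, s=1.0, v=1.0):
    """Throughput after shrinking cells by a factor k while filling the same
    total volume at the same constant speed v:
    n' = k^3 * n,  s' = s / k,  f' = v / s' = k * f  (with f = v / s)."""
    f = v / s                                # original frequency, f ~ v/s
    return throughput(k**3 * n, s / k, k * f)

Q = throughput(1, 1.0, 1.0)                  # original size
print(round(scaled_throughput(2) / Q, 9))    # half size    -> 2.0
print(round(scaled_throughput(10) / Q, 9))   # 1/10th size  -> 10.0
```

The k^3 gain in cell count is exactly eaten by the k^3 loss in cell volume, leaving only the linear frequency gain — hence Q' = kQ.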

From the perspective of diving down into a prospective nanofactory

Simplified nanofactory model

For simplicity, let's assume that the convergent assembly architecture in an advanced gem-gum factory is organized

  • in simple coplanarly stacked assembly layers
  • that are each only one assembly cell in height (monolayers, so to speak)
  • that all operate at the same speed

Model deviations ignored here

There are good reasons to deviate significantly from this simplest model,
especially for the lowest assembly levels.
But the focus here is on conveying a baseline understanding,
and for assembly layers above the lowermost one(s) the simple model above might hold quite well.

Same throughput of successively thinner layers

When going down the convergent assembly level layer stack …

  • from a higher layer with bigger robotic assembly cells down
  • to the next lower layer with (much) smaller robotic assembly cells

… then one finds that the throughput capacities of these two layers need to be equal.

  • If maximal throughput capacity rose when going down the stack, then the upper layers would form a bottleneck.
  • If maximal throughput capacity fell when going down the stack, then the upper layers would be underutilized.

See main article: Level throughput balancing

The important thing to recognize here is that
while all the monolayers have the same maximal product throughput,
these monolayers become thinner and thinner going down the stack.
More generally, the volume of these layers becomes smaller and smaller.
So the throughput per volume shoots through the roof.

That is a very pleasant surprise!
To a first approximation, halving the size of the manufacturing robotics doubles the throughput capacity per volume.
That means that going down from one meter to one micrometer (a factor of a million),
the throughput capacity per volume likewise explodes a whopping millionfold.
This is because it is a linear scaling law.
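The millionfold figure follows directly from the linear law: with n = V/s^3 cells filling a volume V, Q/V = f ~ v/s, so throughput per volume scales as 1/s. A quick sanity check (illustrative names and numbers, assuming the constant-speed model above):

```python
def throughput_per_volume(s, v=1.0):
    """Throughput capacity per machinery volume for assembly cells of
    side length s moving at constant speed v: Q/V = f ~ v/s."""
    return v / s

# Shrinking cell size from 1 m down to 1 micrometer (a factor of a million):
gain = throughput_per_volume(1e-6) / throughput_per_volume(1.0)
print(round(gain))  # -> 1000000
```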

As mentioned, this can't be extended arbitrarily though.
Below the micrometer level several effects (mentioned above) make
full exploitation of that rise in productivity per volume impossible.

Getting silly – questionable and unnecessary productivity levels

Now what if one took a super-thin microscale (possibly non-flat) assembly mono-"layer" of the kind found pretty far down the convergent assembly stack and filled a whole macroscopic volume with many copies of it?

The answer (in the case of general-purpose gem-gum factories) is that the product couldn't be removed/expelled fast enough. One hits fundamental acceleration limits (even for the strongest available diamondoid metamaterials), and long before that, severe problems with mechanical resonances are likely to occur.

Note that the old and obsolete idea of packing a volume full of diamondoid molecular assemblers wouldn't tap into that potential, because these devices sit below the microscale, in the nanoscale, where the useful physical behavior of rising throughput density with falling machinery size is hampered by other effects.

More on silly levels of throughput here:
Macroscale slowness bottleneck

Antagonistic effects/laws – sub-microscale

The problem that emerges at the nanoscale is twofold.

  • falling size => rising bearing area per volume => rising friction => to compensate: lower operation speed (and frequency) – in summary: lower assembly event density in time
  • falling size => rising ratio of machinery size to part size (atoms in the extreme case) – in summary: lower assembly site density in space

Due to the nature of superlubricating friction:

  • it scales with the square of speed (halving speed quarters friction losses)
  • it scales linearly with surface area (doubling area doubles friction)

It makes sense to slow down a bit and compensate by stacking layers for level throughput balancing. A combination of halving speed and doubling the number of stacked equal mono-"layers" halves friction while keeping throughput constant.
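Under these assumed scaling relations (per-layer friction losses ∝ v², per-layer throughput ∝ v, total area ∝ number of layers), the compensation trick can be sketched as follows; the model is deliberately simplistic:

```python
def relative_friction(layers, speed):
    """Total friction losses relative to the baseline: losses scale
    linearly with total bearing area (one unit per layer) and with
    the square of the operation speed."""
    return layers * speed**2

def relative_throughput(layers, speed):
    """Total throughput relative to the baseline: each layer's
    throughput scales linearly with its operation speed."""
    return layers * speed

# Baseline: one monolayer at full speed.
# Compensated: half the speed, twice the stacked layers.
print(relative_throughput(2, 0.5))  # -> 1.0 (throughput unchanged)
print(relative_friction(2, 0.5))    # -> 0.5 (friction losses halved)
```

Halving the speed cuts per-layer losses to a quarter; doubling the layer count brings total losses back up to only half, while the doubled layer count exactly restores the throughput lost to the slowdown.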

Lessening the macroscale throughput bottleneck

There are also effects/laws (at the macroscale) that can help increase throughput density beyond the first approximation. Details can be found (for now) on the "Level throughput balancing" page.

Alternate names for this scaling law as a concept

  • Higher productivity of smaller machinery
  • Productivity explosion

The thing is, higher throughput does not necessarily mean higher productivity in the sense of generating useful products.
Hence the rename to the current page name "Higher throughput of smaller machinery".

Related