Level throughput balancing

To a first approximation it is always one single layer of sub-assembly-level cells below the current assembly-level cell that has matching throughput, independent of the step size. When there are deviations such that lower layers are slower than this first approximation suggests, identical layers can be stacked for compensation.
(Figure legend: Q … throughput, s … side length, f … frequency)
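
A minimal Python sketch of that first approximation (my own illustration, with arbitrary units; it assumes a cell of side s cycles at frequency f = v/s for a constant operating speed v, and that a step size F means F³ sub-blocks per product with F² sub-cells fitting under one cell footprint):

 # Check that ONE layer of sub-cells matches the cell layer above,
 # independent of the convergent assembly step size F.
 def sub_layers_needed(F, s=1.0, v=1.0):
     f_cell = v / s                   # cycle frequency of the big cell
     f_sub = v / (s / F)              # sub-cells cycle F times faster
     demand = F**3 * f_cell           # sub-blocks consumed per unit time
     supply_per_layer = F**2 * f_sub  # sub-blocks one sub-layer makes per unit time
     return demand / supply_per_layer

 for F in (2, 10, 32):
     print(F, sub_layers_needed(F))   # -> 1.0 for every step size F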

In advanced gem-gum factories the production and consumption rates of adjoining assembly levels should roughly match such that no bottlenecks are present.

Chaining/(stacking) of equal assembly levels/(layers) for mismatch compensation

To compensate for throughput mismatches between the assembly cells of specific size levels, one can chain together several instances of the same assembly level, with transport paths running alongside.

In case the assembly levels are implemented as assembly layers, the chaining concretizes to stacking, and the transport paths become vertical shafts going up through the homogeneous layer stack.

This chaining/stacking approach works only as long as transport up the stack can be faster than the assembly in the stack, which is especially true for the bottommost assembly layers.
(TODO: check for upper layers)
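
A tiny sketch of the compensation arithmetic (hypothetical numbers, my own simplification):

 import math

 # Number of identical layers to stack to compensate a slowdown, where
 # 'slowdown' is the layer's actual speed relative to the one-sub-layer
 # first approximation.
 def layers_to_stack(slowdown):
     return math.ceil(1.0 / slowdown)

 print(layers_to_stack(1.0))  # 1  -- no mismatch, a single layer suffices
 print(layers_to_stack(0.1))  # 10 -- e.g. a tenfold slowdown at the bottommost layers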

Effects at larger scales that may influence how well throughput mismatches can be compensated include:

  • Infinitesimal bearings reducing friction in a way that changes the scaling law.
  • Streaming-style assembly (details further down) – (delaying scaling law?)
  • Low (energy-carrying) surface area per assembled volume, making high-efficiency assembly and disassembly easier to achieve.
  • The fundamental speed limit, though average operation likely won't come near it.

Accepted or even desired mismatch

In case the throughput capacity monotonically increases with rising assembly levels, the excess can at least speed up recycling, where old products don't need to go all the way down the convergent assembly stack. This situation could arise if a throughput capacity rise at the larger scales can't be compensated by chaining/stacking of the lower levels/layers.

A drop in throughput capacity with rising assembly levels is harder to justify. One motivation may be pre-assembled matter memory "caches" that convert themselves back and forth but in part never reach macroscopic dimensions.

It's hard to guess where in the stack the demands (lower bounds) will first push against the physical limitations (upper bounds).

Factors determining throughput rates for individual assembly level chambers

To match the throughputs of the assembly levels one needs to at least roughly estimate the actual production rates of the assembly levels of advanced gem-gum factories. These depend on several factors, some of which are listed in the following. Note: orthogonality, i.e. mutual independence of these factors, is not guaranteed!

Density in space

The density of operational spots of the assembly method:
hard-coded mill style (spots are dense) or general-purpose manipulator style (spots are sparse).

Density in time

Dissipation power (which depends on operation speed) and cooling system capacity.
At the lowest levels the surface area (per assembled volume) increases, thus one might want to slow down a bit (see the sketch below).
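
For intuition, a sketch assuming (as with superlubric friction discussed elsewhere on this wiki) that dissipation scales roughly with the square of the operating speed while throughput scales only linearly:

 # Halving the speed quarters the dissipated heat but only halves the
 # throughput -- the rationale behind a deliberate slowdown at the
 # lowest assembly levels. (Assumed scaling exponents, for illustration.)
 def relative_dissipation(speed_factor):
     return speed_factor ** 2

 def relative_throughput(speed_factor):
     return speed_factor

 print(relative_dissipation(0.5), relative_throughput(0.5))  # 0.25 0.5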

Step-size of convergent assembly

The ratio of sub-product sizes between the assembly layers (the "step size"): how many small parts get assembled into one bigger one.
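
As a rough illustration of how these factors combine (all numbers are hypothetical placeholders, not estimates from this page):

 # Throughput of one assembly cell as a product of the factors above.
 def cell_throughput(n_spots, f_op, part_volume):
     # n_spots:     operational spots per cell     (density in space)
     # f_op:        operations per second per spot (density in time)
     # part_volume: volume of one placed sub-part  (set by the step size)
     return n_spots * f_op * part_volume  # assembled volume per second

 # dense hard-coded mill style vs. sparse general-purpose manipulator style
 print(cell_throughput(n_spots=1000, f_op=1e5, part_volume=1e-27))  # m^3/s
 print(cell_throughput(n_spots=4,    f_op=1e4, part_volume=1e-27))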

Streaming assembly robotics

Especially in the bigger assembly chambers that lie in the higher assembly levels it becomes possible to do "streaming". First one merges the incoming small building parts into a single stream, then one feeds this stream through moving hinges in the assembly robotics, delivering the parts to their destinations in the block of the next assembly level up.

Note that the merging of streams (between assembly levels) in the bigger size range (towards the macroscale) is not for ordering. At these levels the parts can already be produced in the right order, making reordering unnecessary.

Even snake- or tentacle-like actuators feeding parts to their destinations are an option.

The streaming of filament in a thermoplastic 3D printer of today (2017) is a halfway correct analogy: there is streaming of building material, but no merging of streams of discrete building parts.

Less back and forth

Instead of going back and forth for each part, one can stream parts directly to the tip of the manipulator:

  • From: ... -> pick U-turn -> transport-move -> place U-turn -> empty-back-move -> pick U-turn -> ...
  • To: ... -> U-turn -> place -> place -> ... -> place -> place -> U-turn -> place -> ...

Streaming works only if there is enough space available, thus only in the higher assembly levels.

The gain in throughput rate should roughly be the ratio of two times the length scale of the assembly-level cell under consideration to one time the length scale of the sub-assembly cells (see the sketch below).
This gives:

  • Two times the convergent assembly step size from this effect, which is pretty significant.
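
A minimal sketch of that travel-distance argument (arbitrary units, my own simplification):

 # Pick-and-place travels about 2 * s_cell per part (transport-move plus
 # empty-back-move); streaming only advances the merged stream by about
 # 1 * s_sub per part.
 def streaming_gain(F):
     s_cell = 1.0
     s_sub = s_cell / F                 # sub-part size for step size F
     return (2 * s_cell) / (1 * s_sub)  # travel per part, old / new

 print(streaming_gain(10))  # -> 20.0, i.e. two times the step size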

Fewer U-turns

In the simple pick-and-place case one has two tight U-turns per placement operation, where the big manipulator of the assembly cell under consideration has to turn around on the smaller length scale of the sub-assembly cells. In the streaming case one has just one tight U-turn per row/column (whatever you want to call it) of the product part.

  • The number of necessary slowdowns divides by two times the convergent assembly step-size.
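
The same effect counted in tight U-turns per placed part (a simplified model of my own, assuming one row holds F parts):

 # Tight U-turns per placed part: pick-and-place vs. streaming.
 def u_turns_per_part(F, streaming):
     if streaming:
         return 1.0 / F  # one tight U-turn per row of F parts
     return 2.0          # pick U-turn plus place U-turn for every part

 F = 10
 print(u_turns_per_part(F, False) / u_turns_per_part(F, True))
 # -> 20.0: the number of necessary slowdowns divides by 2 * F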

Related

  • Convergent assembly & Assembly levels
  • Scaling laws
  • Deliberate slowdown at the lowest assembly level