Compute architectures
Contents
- 1 Mainstream
- 2 Reconfigurable Asynchronous Logic Automata (RALA)
- 3 Systolic arrays
- 4 Green arrays
- 5 Usually not seen as practical general-purpose compute architectures
- 6 Machines specifically designed for pure functional languages (lambda calculus)
- 7 Reversible computing architectures
- 8 External links
Mainstream
There are plenty of resources on the web, so no details here.
- Von Neumann architecture (today's CPUs, as of 2024)
- Graphics Processing Units (GPUs; still heavily specialized for triangle rasterization, but becoming more general as of 2024)
- FPGAs (Field-programmable gate arrays)
- Emerging: NPUs & TPUs for AI (not yet the more general neuromorphic computing)
Reconfigurable Asynchronous Logic Automata (RALA)
A physical computing model that aims to match computation to the 3D spatial constraints of the real world.
(wiki-TODO: Add details)
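Until details are added, here is a minimal toy sketch of the core idea: cells holding simple logic functions talk only to their nearest neighbours via tokens and fire as soon as their inputs have arrived, with no global clock. The cell set and wiring below are a hypothetical illustration, not the actual RALA cell types.

```python
from collections import deque

# Toy model of asynchronous logic automata: each cell talks only to a
# neighbour and fires as soon as all of its input tokens have arrived --
# there is no global clock.  (Illustrative; not the actual RALA cell set.)

GATES = {
    "AND": lambda a, b: a & b,
    "OR":  lambda a, b: a | b,
    "XOR": lambda a, b: a ^ b,
}

class Cell:
    def __init__(self, gate, out_cell=None):
        self.gate = gate          # logic function of this cell
        self.inputs = []          # tokens (bits) received so far
        self.out_cell = out_cell  # single downstream neighbour

def run(cells, initial_tokens):
    """Deliver tokens until no cell can fire any more."""
    ready = deque(initial_tokens)           # (target cell, bit) pairs
    outputs = []
    while ready:
        cell, bit = ready.popleft()
        cell.inputs.append(bit)
        if len(cell.inputs) == 2:           # both operands present -> fire
            result = GATES[cell.gate](*cell.inputs)
            cell.inputs.clear()
            if cell.out_cell is not None:
                ready.append((cell.out_cell, result))
            else:
                outputs.append(result)      # edge of the array: emit result
    return outputs

# (a AND b) XOR c, laid out as two neighbouring cells
xor_cell = Cell("XOR")
and_cell = Cell("AND", out_cell=xor_cell)
print(run([and_cell, xor_cell],
          [(and_cell, 1), (and_cell, 1), (xor_cell, 0)]))  # -> [1]
```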
Systolic arrays
(wiki-TODO: Add details)
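Pending details, a minimal sketch of the textbook systolic use case: matrix multiplication on a clocked 2D grid of multiply-accumulate cells, with operands marching one cell per tick through the array. This is an illustrative simulation of the scheme, not any particular hardware.

```python
import numpy as np

def systolic_matmul(A, B):
    """Simulate C = A @ B for n x n matrices on an n x n grid of MAC cells.

    On every clock tick each cell multiplies the value arriving from the
    left with the value arriving from above, adds the product to its
    accumulator, and passes both operands on (right and down).  Inputs
    are skewed in time so that matching operands meet in the right cell;
    this rigid, globally clocked data flow is what characterises a
    systolic array.
    """
    n = A.shape[0]
    acc = np.zeros((n, n))        # one accumulator per cell, holds C[i, j]
    a_reg = np.zeros((n, n))      # operand registers moving rightwards
    b_reg = np.zeros((n, n))      # operand registers moving downwards
    for t in range(3 * n - 2):    # enough ticks for the last products to form
        a_reg = np.roll(a_reg, 1, axis=1)   # step one cell to the right
        b_reg = np.roll(b_reg, 1, axis=0)   # step one cell downwards
        for i in range(n):                  # feed skewed inputs at the edges
            k = t - i                       # diagonal wavefront index
            a_reg[i, 0] = A[i, k] if 0 <= k < n else 0.0
            b_reg[0, i] = B[k, i] if 0 <= k < n else 0.0
        acc += a_reg * b_reg                # all cells fire on the same clock
    return acc

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])
print(systolic_matmul(A, B))                # same result as A @ B
```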
Green arrays
Compared to systolic arrays:
- Scale and generality: Green Arrays nodes are more general-purpose and typically deployed at a larger scale on a single chip.
- Asynchronous vs. synchronous: Green Arrays operates asynchronously, while systolic arrays are typically synchronous.
- Programming model: Green Arrays uses a Forth-inspired model, which is quite different from the typically fixed-function nature of systolic arrays.
- Data flow: Systolic arrays have a more rigid, predetermined data flow, while Green Arrays allows for more flexible data movement between nodes.
- Application scope: Systolic arrays are often optimized for specific algorithms, while Green Arrays aims for broader applicability.
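To make the Forth-inspired programming model above concrete, here is a minimal sketch of a stack-machine node with a one-way channel to a neighbour, loosely in the spirit of the GA144's arrayForth nodes. The word set, single stack, and non-blocking channel are simplifications for illustration, not the actual GreenArrays instruction set.

```python
from collections import deque

# Minimal stack-machine node, loosely Forth-flavoured.  Real GA144 nodes
# have two stacks, 18-bit words and blocking port reads; this sketch keeps
# only the essentials (hypothetical word set, not arrayForth).

class Node:
    def __init__(self, program, inbox=None, outbox=None):
        self.program = program                                  # list of words to execute
        self.stack = []                                         # single data stack
        self.inbox = inbox if inbox is not None else deque()    # messages from a neighbour
        self.outbox = outbox if outbox is not None else deque() # messages to a neighbour

    def run(self):
        for word in self.program:
            if isinstance(word, int):
                self.stack.append(word)                  # literal: push it
            elif word == "+":
                b, a = self.stack.pop(), self.stack.pop()
                self.stack.append(a + b)
            elif word == "*":
                b, a = self.stack.pop(), self.stack.pop()
                self.stack.append(a * b)
            elif word == "dup":
                self.stack.append(self.stack[-1])
            elif word == "send":
                self.outbox.append(self.stack.pop())     # hand a value to the neighbour
            elif word == "recv":
                self.stack.append(self.inbox.popleft())  # take a value from the neighbour
            else:
                raise ValueError(f"unknown word: {word}")
        return self.stack

# Two neighbouring nodes sharing a one-way channel:
channel = deque()
producer = Node([3, "dup", "*", "send"], outbox=channel)   # computes 3*3, sends 9
consumer = Node(["recv", 4, "+"], inbox=channel)           # receives 9, adds 4
producer.run()
print(consumer.run())   # -> [13]
```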
Green Arrays Bootstrapping Process
- Initial state: The chip starts with one active node (often called the "boot node"); all other nodes are in a dormant, unconfigured state.
- Propagation: The boot node begins by configuring its immediate neighbors, loading them with basic functionality and essentially "waking them up".
- Cascading configuration: The newly configured nodes then participate in configuring their own neighbors; the process cascades across the chip, with each node potentially configuring others.
- Dynamic programming: As the configuration spreads, nodes can be programmed with different functionalities, which allows the chip to configure itself for various tasks dynamically.
- Adaptive behavior: The configuration process can adapt based on the task at hand or the state of the chip, allowing for efficient use of resources and fault tolerance.
- Collective intelligence: The end result is a chip whose collective behavior emerges from the interaction of many simple, individually programmed nodes.
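A minimal sketch of the cascading configuration just described, modelled as a wave spreading over a small grid of nodes. Grid size, boot-node position, and the "program" assigned per node are purely illustrative (the real GA144 boots through specific I/O nodes and ports).

```python
from collections import deque

# Toy model of the bootstrapping cascade: one boot node wakes its
# neighbours, each newly configured node then wakes its own neighbours,
# until the whole array is configured.

ROWS, COLS = 4, 6
BOOT_NODE = (0, 0)

def neighbours(r, c):
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < ROWS and 0 <= nc < COLS:
            yield nr, nc

def bootstrap(programs):
    """Spread configuration from the boot node; return the map and the order."""
    configured = {BOOT_NODE: programs(BOOT_NODE)}   # boot node starts active
    wave = deque([BOOT_NODE])
    order = [BOOT_NODE]
    while wave:
        node = wave.popleft()
        for nb in neighbours(*node):
            if nb not in configured:                # dormant neighbour found
                configured[nb] = programs(nb)       # load it with its code
                wave.append(nb)                     # it now configures others
                order.append(nb)
    return configured, order

def role(node):
    """Assign different functionality depending on position (illustrative)."""
    return "edge-I/O" if 0 in node else "compute"

configured, order = bootstrap(role)
print(len(configured), "nodes configured")          # -> 24 nodes configured
print("first five in the wave:", order[:5])
```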
Usually not seen as practical general-purpose compute architectures
Cellular automata
While some are Turing complete (e.g. Conway's Game of Life),
they do not seem practical for general-purpose computation.
There are typically only simple rules per cell, thus very limited capabilities per cell.
But the complex emergent behaviour makes them interesting to study.
Physical implementations are obviously limited to 3D lattices.
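For reference, a minimal sketch of Conway's Game of Life, the Turing-complete example mentioned above: every cell applies the same simple local rule (birth on 3 live neighbours, survival on 2 or 3), and all complexity is emergent.

```python
# Conway's Game of Life: every cell applies the same trivial rule, yet
# gliders, oscillators and even universal computation emerge.

def step(live):
    """One generation; `live` is a set of (row, col) coordinates of live cells."""
    counts = {}
    for (r, c) in live:
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if (dr, dc) != (0, 0):
                    counts[(r + dr, c + dc)] = counts.get((r + dr, c + dc), 0) + 1
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: after 4 generations it reappears shifted by (1, 1).
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
print(state == {(r + 1, c + 1) for (r, c) in glider})   # -> True
```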
Machines specifically designed for pure functional languages (lambda calculus)
(wiki-TODO: Add details (SECD))
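As a stopgap for the TODO above, a minimal sketch of an SECD-style evaluator (Stack, Environment, Control, Dump) running a tiny arithmetic-plus-closures instruction set. The instruction names, the flat environment (instead of the usual (level, index) addressing), and the apply convention are simplified for illustration, not Landin's machine verbatim.

```python
# Minimal SECD-style evaluator: Stack, Environment, Control, Dump.
# (Simplified instruction set for illustration.)

def run(control, env=None):
    stack, env, dump = [], env or [], []
    control = list(control)
    while control:
        op = control.pop(0)
        if op[0] == "LDC":                 # push a constant
            stack.append(op[1])
        elif op[0] == "LD":                # push a variable, addressed by env index
            stack.append(env[op[1]])
        elif op[0] == "ADD":               # add the two topmost stack values
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op[0] == "LDF":               # push a closure: (code, captured env)
            stack.append(("closure", op[1], env))
        elif op[0] == "AP":                # apply a closure to an argument
            arg = stack.pop()
            _, body, cenv = stack.pop()
            dump.append((stack, env, control))   # save the caller's state
            stack, env, control = [], [arg] + cenv, list(body)
        elif op[0] == "RTN":               # return to the caller
            result = stack.pop()
            stack, env, control = dump.pop()
            stack.append(result)
        else:
            raise ValueError(f"unknown instruction: {op}")
    return stack[-1]

# ((lambda x. x + 1) 41): load the closure, load the argument, apply.
inc = [("LD", 0), ("LDC", 1), ("ADD",), ("RTN",)]
program = [("LDF", inc), ("LDC", 41), ("AP",)]
print(run(program))   # -> 42
```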
Reversible computing architectures
(wiki-TODO: Add details (pendulum))
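Pending details on the Pendulum reversible processor, a small sketch of the underlying gate-level idea: reversible gates such as the Toffoli (CCNOT) gate are bijections on their inputs, so no information is erased and, in principle, no Landauer heat must be dissipated; applying the gate twice restores the input.

```python
from itertools import product

# Toffoli (CCNOT) gate: flips the target bit c iff both controls a and b are 1.
# It is its own inverse, so applying it twice restores the input --
# the defining property of reversible logic.

def toffoli(a, b, c):
    return a, b, c ^ (a & b)

# Reversibility: the gate is a bijection on the 8 possible input triples ...
outputs = {toffoli(*bits) for bits in product((0, 1), repeat=3)}
assert len(outputs) == 8                      # no two inputs collide -> nothing erased

# ... and applying it twice is the identity.
for bits in product((0, 1), repeat=3):
    assert toffoli(*toffoli(*bits)) == bits

# A Toffoli with c = 0 computes AND reversibly: (a, b, 0) -> (a, b, a AND b).
print(toffoli(1, 1, 0))   # -> (1, 1, 1)
```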
External links
- https://en.wikipedia.org/wiki/Computer_architecture
- https://en.wikipedia.org/wiki/Von_Neumann_architecture (mainstream till today)
- https://en.wikipedia.org/wiki/Von_Neumann_architecture#von_Neumann_bottleneck (a big issue with it)
- 2010 RG – Reconfigurable Asynchronous Logic Automata (RALA)
- 2010 pdf – Reconfigurable Asynchronous Logic Automata (RALA)
- 2011 pdf – Aligning the representation and reality of computation with asynchronous logic automata – Neil Gershenfeld
- Green arrays
- https://en.wikipedia.org/wiki/Systolic_array
- https://en.wikipedia.org/wiki/SECD_machine (the letters stand for Stack, Environment, Control, Dump, the internal registers of the machine)
- https://en.wikipedia.org/wiki/CEK_Machine (based on SECD)