Compute architectures
Revision as of 12:13, 2 September 2024

This article is a stub. It needs to be expanded.

Mainstream

There are plenty of resources on the web, so not much detail is given here.

  • Von Neumann architecture (today's 2024 CPUs)
    – Issues here are the enforced serial nature of data processing and the "von Neumann bottleneck".
  • Graphics Processing Units (GPUs; still highly specialized for triangle processing, but changing as of 2024)
  • FPGAs (Field Programmable Gate Arrays)
  • Emerging: NPUs & TPUs (for AI) – (not yet more general neuromorphic computing)

Dedicated compute hardware specifically designed for pure functional languages (lambda calculus)

Term normalization by parallel asynchronous term rewriting

  • Landing page: https://higherorderco.com/
  • Code: https://github.com/HigherOrderCO
  • Virtual machine: https://github.com/HigherOrderCO/HVM
  • Associated proglang: https://github.com/HigherOrderCO/Bend

Older works - lambda calculus evaluating hardware

  • https://en.wikipedia.org/wiki/SECD_machine
    The letters stand for Stack, Environment, Control, Dump: the internal registers of the machine.
  • https://en.wikipedia.org/wiki/CEK_Machine (based on the SECD machine)

(wiki-TODO: Add details (SECD))
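As a toy illustration of normalization by term rewriting, here is a minimal sequential leftmost-outermost beta-reducer for untyped lambda terms. This is only a sketch of the general idea: HVM evaluates interaction nets in parallel, which this deliberately does not model, and the term encoding used here is made up for the example.

```python
# Toy normal-order beta-reducer for untyped lambda terms.
# Terms: ("var", name) | ("lam", name, body) | ("app", fun, arg)
# Illustrative only; not HVM's parallel interaction-net algorithm.

def subst(term, name, value):
    """Substitute value for name. Capture-avoidance is elided:
    all bound/free variable names are assumed distinct."""
    kind = term[0]
    if kind == "var":
        return value if term[1] == name else term
    if kind == "lam":
        return ("lam", term[1], subst(term[2], name, value))
    return ("app", subst(term[1], name, value), subst(term[2], name, value))

def step(term):
    """Perform one leftmost-outermost reduction step, or return None
    if the term is already in normal form."""
    kind = term[0]
    if kind == "app":
        f, a = term[1], term[2]
        if f[0] == "lam":              # beta-redex: (\x. b) a  ->  b[x := a]
            return subst(f[2], f[1], a)
        r = step(f)
        if r is not None:
            return ("app", r, a)
        r = step(a)
        if r is not None:
            return ("app", f, r)
    elif kind == "lam":
        r = step(term[2])
        if r is not None:
            return ("lam", term[1], r)
    return None

def normalize(term):
    """Rewrite until no redex remains."""
    while (r := step(term)) is not None:
        term = r
    return term

# (\x. x) y  ->  y
identity_app = ("app", ("lam", "x", ("var", "x")), ("var", "y"))
print(normalize(identity_app))  # ('var', 'y')
```

The point of hardware like HVM is that independent redexes in such a term can be rewritten simultaneously by many asynchronous workers, whereas this sketch picks one redex per step.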

Green arrays (related to stack based programming)

Compared to systolic arrays:

  • Scale and generality: Green Arrays nodes are more general-purpose and typically deployed at a larger scale on a single chip.
  • Asynchronous vs. synchronous: Green Arrays nodes operate asynchronously, while systolic arrays are typically synchronous.
  • Programming model: Green Arrays uses a Forth-inspired model, which is quite different from the typically fixed-function nature of systolic arrays.
  • Data flow: systolic arrays have a rigid, predetermined data flow, while Green Arrays allows more flexible data movement between nodes.
  • Application scope: systolic arrays are often optimized for specific algorithms, while Green Arrays aims for broader applicability.
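To make the "rigid, predetermined data flow" point concrete, here is a toy clock-driven pipeline in the style of a systolic FIR filter (transposed form). This is an illustrative sketch, not any specific chip's design: each cell holds a fixed weight, and partial sums shift exactly one cell per synchronous clock tick.

```python
# Toy systolic-style FIR pipeline: weights are stationary in the cells,
# partial sums march one cell per tick toward the output. Every cell
# does the same fixed operation on every tick, illustrating the rigid
# synchronous data flow contrasted with Green Arrays above.

def systolic_fir(weights, samples):
    n = len(weights)
    psum = [0] * n                       # partial-sum register in each cell
    outputs = []
    for x in samples + [0] * (n - 1):    # extra zero ticks flush the pipeline
        # One synchronous clock tick: each cell adds its local product
        # to the partial sum arriving from its left neighbor.
        new = [0] * n
        new[0] = weights[n - 1] * x
        for i in range(1, n):
            new[i] = psum[i - 1] + weights[n - 1 - i] * x
        outputs.append(new[n - 1])       # rightmost cell emits one result
        psum = new
    return outputs

# Convolving weights [1, 2, 3] with input [1, 1, 1]:
print(systolic_fir([1, 2, 3], [1, 1, 1]))  # [1, 3, 6, 5, 3]
```

Note that the data path is fixed at design time: the only freedom is in the weight values, which is exactly the kind of constraint that a Green Arrays node grid does not impose.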

Green Arrays Bootstrapping Process

Initial State:
The chip starts with one active node (often called the "boot node").
All other nodes are in a dormant or unconfigured state.

Propagation:
The boot node begins by configuring its immediate neighbors.
It loads them with basic functionality, essentially "waking them up".

Cascading Configuration:
The newly configured nodes then participate in configuring their own neighbors.
This process cascades across the chip, with each node potentially configuring others.

Dynamic Programming:
As the configuration spreads, nodes can be programmed with different functionalities.
This allows the chip to configure itself for various tasks dynamically.

Adaptive Behavior:
The configuration process can adapt based on the task at hand or the state of the chip.
This allows for efficient use of resources and fault tolerance.

Collective Intelligence:
The end result is a chip where the collective behavior emerges from the interaction of many simple, individually programmed nodes.
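The bootstrapping steps above can be sketched as a toy wavefront simulation. This is a hypothetical illustration of the cascade idea only, not GreenArrays' actual boot protocol; the function and grid model are made up for the example.

```python
# Toy simulation of a cascading boot process: one boot node configures
# its dormant orthogonal neighbors, which then configure theirs, until
# the whole grid of nodes is awake.

from collections import deque

def boot_cascade(rows, cols, boot=(0, 0)):
    """Return a dict mapping every node to the tick it was configured."""
    configured = {boot: 0}            # node -> configuration tick
    frontier = deque([boot])
    while frontier:
        r, c = frontier.popleft()
        tick = configured[(r, c)]
        # A configured node loads basic functionality into each
        # still-dormant neighbor on the following tick.
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in configured:
                configured[(nr, nc)] = tick + 1
                frontier.append((nr, nc))
    return configured

waves = boot_cascade(3, 3)
print(waves[(2, 2)])  # farthest corner of a 3x3 grid wakes at tick 4
```

In the real chip each newly woken node could be loaded with different code rather than a uniform "wake up" payload, which is what makes the dynamic, task-specific configuration described above possible.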

Reconfigurable Asynchronous Logic Automata (RALA)

A model of physical computing that aims to match the 3D spatial constraints of our real world.
(wiki-TODO: Add details)

Systolic arrays

(wiki-TODO: Add details)

Cellular automata

These are usually not seen as practical general-purpose compute architectures.
While some are Turing complete (e.g. Conway's Game of Life),
they seem not particularly suitable or practical for general-purpose computation.

Typically they feature simple rules per cell, so their expressive capabilities are limited.
But they exhibit complex emergent behaviour, which makes them interesting to study.
Physical implementations are obviously limited to lattices of at most three dimensions.
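As a concrete example of such simple per-cell rules, here is a minimal step function for Conway's Game of Life: each cell only counts its eight neighbors, yet complex behavior emerges globally.

```python
# One generation of Conway's Game of Life over a sparse set of live
# (row, col) cells. Rules: a dead cell with exactly 3 live neighbors
# is born; a live cell with 2 or 3 live neighbors survives.

from collections import Counter

def life_step(live):
    """Advance a set of live (row, col) cells by one generation."""
    neighbor_counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates between a horizontal and a vertical bar of 3:
blinker = {(1, 0), (1, 1), (1, 2)}
print(life_step(blinker) == {(0, 1), (1, 1), (2, 1)})  # True
print(life_step(life_step(blinker)) == blinker)        # True
```

Note how little logic each cell needs, which is why cellular automata map so naturally onto dense regular hardware lattices, even though programming useful computations into them is hard.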


Reversible computing architectures

See also: Reversible computing & Well merging

External links