General software issues

In a world where the digital and physical realms start to blend, that is, where physical products become networked, live-actuatable, and reconfigurable, software architecture / organisation / design (or however one may want to call it) becomes even more important than it already is today.

Key issues are:

  • stability (at best error-proofness) and correctness; research in Haskell
  • maintainability
  • extendability
  • modularity
  • diversity (keeping options open in case a route turns out to be an unexpected dead end)
  • generalisation that expands functionality while conserving existing optimised specialisation, and vice versa
  • highly complex version management (dependency hell)
  • ...

File systems

Currently used tree-based and machine-local file systems have their limits. Sorting the same data by multiple hierarchical criteria at the same time is impossible. To give an example: assume one owns a lot of image-file/text-file pairs. The small text files are of high importance (e.g. source code) while the huge images are of relatively low importance (e.g. rendered from that source). Archiving different backup levels (location, redundancy level) for file types of different importance isn't possible with the basic functionality of tree-based file systems.

File system indexing, use of meta-data, and file tagging only mend but do not solve the problem. Some kind of graph-based file system is needed. Graph databases (like Neo4j?) are interesting, but they remain crutches as long as they are implemented on top of tree-based file systems.
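To make the idea more concrete, here is a minimal sketch in Haskell (hypothetical, not an existing file system API), picking up the image/text example from above: files are nodes, and any number of classification edges (tags, importance) can point at the same file, so the same data can be selected per backup level and per project at once.

  import qualified Data.Map as Map
  import Data.Map (Map)
  
  data Importance = High | Low deriving (Eq, Show)
  
  -- A file is a node; tags act as edges to arbitrarily many classification nodes.
  data FileNode = FileNode
    { filePath   :: FilePath
    , importance :: Importance
    , tags       :: [String]
    } deriving Show
  
  -- Index the same set of files under every tag they carry.
  byTag :: [FileNode] -> Map String [FileNode]
  byTag fs = Map.fromListWith (++) [(t, [f]) | f <- fs, t <- tags f]
  
  -- Select files for a given backup level by importance rather than by location.
  backupSet :: Importance -> [FileNode] -> [FilePath]
  backupSet level fs = [filePath f | f <- fs, importance f == level]
  
  main :: IO ()
  main = do
    let files =
          [ FileNode "project/model.src"  High ["source", "project-a"]
          , FileNode "project/render.png" Low  ["rendered", "project-a"]
          ]
    print (backupSet High files)   -- the small, important files
    print (Map.keys (byTag files)) -- the same files, reachable per classification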

The limitation to mostly serial access to mass storage (now changing with random-access SSD drives) has led to systems that are still programmed close to the hardware and suffer from a lack of abstraction: data must be manually serialized for persistent storage. Net-based services like Google Drive and Facebook already hide this step from their users. Another interesting "on top" approach is Yesod with its integrated persistence layer.
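As a hedged illustration of what "integrated persistence" means in practice, the following sketch uses the persistent library that Yesod builds on (the entity and field names are made up for this example): the declarative entity block generates both the Haskell data type and the database schema, so no manual serialization code has to be written.

  {-# LANGUAGE EmptyDataDecls, FlexibleContexts, FlexibleInstances, GADTs,
               GeneralizedNewtypeDeriving, MultiParamTypeClasses,
               OverloadedStrings, QuasiQuotes, TemplateHaskell, TypeFamilies,
               DerivingStrategies, StandaloneDeriving, UndecidableInstances,
               DataKinds #-}
  import Control.Monad.IO.Class (liftIO)
  import Database.Persist
  import Database.Persist.Sqlite
  import Database.Persist.TH
  
  -- The quasi-quoted block below is the whole "schema"; persistent derives the
  -- record type, keys, and migrations from it.
  share [mkPersist sqlSettings, mkMigrate "migrateAll"] [persistLowerCase|
  Note
      title String
      body  String
      deriving Show
  |]
  
  main :: IO ()
  main = runSqlite ":memory:" $ do
    runMigration migrateAll
    noteId <- insert (Note "example" "persisted without hand-written serialization")
    stored <- get noteId
    liftIO (print stored)

The point here is only the shape of the approach: the storage layout is derived from the declaration instead of being written and kept in sync by hand.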

Recently Google made a move in this direction: Google Drive allows one to create folder structures that form a directed acyclic graph!

Dependency hell

Current package management systems suffer from the dreaded problem of dependency hell. Developers who install a lot of software packages in parallel are especially affected. Solution approaches include:

Further related information

Avoiding hidden state (a potential source of errors) can be compatible with interactive environments:
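For instance, a minimal sketch in plain Haskell (not tied to any particular environment): an interactive read-eval loop can thread its state explicitly as an argument instead of relying on hidden mutable globals.

  -- All state of this interactive loop lives in the 'count' argument;
  -- there is no hidden global variable that could drift out of sight.
  loop :: Int -> IO ()
  loop count = do
    line <- getLine
    case line of
      "quit" -> putStrLn ("final count: " ++ show count)
      _      -> do
        putStrLn ("inputs so far: " ++ show (count + 1))
        loop (count + 1)
  
  main :: IO ()
  main = loop 0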

CAD software

[Todo ...]
From Tom's machine phase blog comes an example of the usage of Nanoengineer-1: DNA origami: from design to product

Bad software design may undermine the "Disaster proof" property of globally used APM.

Relation of AP Technology to new computing paradigms

What is likely to be a necessity

  • reversible computing

Deleting (irreversibly erasing) data produces heat proportional to the operating temperature (Landauer's principle: at least kB·T·ln 2 per erased bit). In super high density computing this needs to be avoided.
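A quick back-of-the-envelope check of that limit, written as plain Haskell with the standard value of the Boltzmann constant:

  -- Landauer limit: minimum heat dissipated per irreversibly erased bit,
  -- E = kB * T * ln 2.  Only bit-erasing (irreversible) operations are forced
  -- to pay this cost; reversible logic can in principle avoid it.
  landauerLimit :: Double -> Double
  landauerLimit t = kB * t * log 2
    where kB = 1.380649e-23  -- Boltzmann constant in J/K
  
  main :: IO ()
  main = print (landauerLimit 300)  -- roughly 2.9e-21 J per erased bit at 300 K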

What is absolutely not a necessity but could boost development speed

  • quantum computing (depends on reversible design)
  • neural network computation (deep learning)
  • memcomputers (interspersing computing and memory can increase performance for certain algorithms)
  • lambda machines (historic alternative to Von Neumann architecture computers)
    Research 2015: Reduceron (webpage: https://www.cs.york.ac.uk/fp/reduceron/, GitHub: https://github.com/tommythorn/Reduceron), implemented on FPGAs
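To show the kind of program such a reduction machine evaluates, here is a minimal, hedged sketch of normal-order reduction for the untyped lambda calculus in Haskell; the names are illustrative and not taken from the Reduceron code base, which evaluates compiled functional programs by graph reduction on the FPGA.

  -- A tiny interpreter for the untyped lambda calculus, reduced in normal order.
  -- This is only a software sketch of the evaluation model; it assumes bound
  -- variable names are distinct (no capture-avoiding renaming).
  data Term = Var String | Lam String Term | App Term Term
    deriving Show
  
  subst :: String -> Term -> Term -> Term
  subst x s (Var y)   = if x == y then s else Var y
  subst x s (Lam y b) = if x == y then Lam y b else Lam y (subst x s b)
  subst x s (App f a) = App (subst x s f) (subst x s a)
  
  -- One normal-order reduction step, if any redex is left.
  step :: Term -> Maybe Term
  step (App (Lam x b) a) = Just (subst x a b)
  step (App f a)         = case step f of
                             Just f' -> Just (App f' a)
                             Nothing -> App f <$> step a
  step (Lam x b)         = Lam x <$> step b
  step (Var _)           = Nothing
  
  -- Repeatedly step until no redex remains (may diverge for some terms).
  normalize :: Term -> Term
  normalize t = maybe t normalize (step t)
  
  main :: IO ()
  main = print (normalize (App (Lam "x" (Var "x")) (Var "y")))  -- Var "y"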

Application optimization: circuitry; layout; optimal molecular topologies for bigger structural crystolecule brackets (aka Kaehler-brackets); ...