User interfaces for gem-gum on-chip nanofactories

This article is a stub. It needs to be expanded.

See also: Future of human computer interaction

For everyday usage

To a first approximation, gem-gum factories are basically ultra-advanced 3D printers.
As such one might expect somewhat similar interfaces, but the differences will be stark.

Given a problem or desire, one wants to find (or create) an object that can solve or fulfill it.

Rather than company-controlled "app stores of things", we are more likely to want
something like open, decentralized wikis of things that weave documentation together
with blueprint storage, making things more easily discoverable.
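
A minimal sketch of what such a wiki-of-things entry could look like, assuming a content-addressed blueprint store and simple keyword search over the documentation; the names ThingEntry and ThingIndex are hypothetical illustrations, not an existing system:

 # Minimal sketch of a "wiki of things" entry that weaves documentation
 # together with blueprint storage. ThingEntry and ThingIndex are
 # hypothetical illustrations, not an existing API.
 from dataclasses import dataclass
 import hashlib

 @dataclass
 class ThingEntry:
     name: str
     description: str          # human-readable documentation
     tags: list[str]           # free-form keywords for discovery
     blueprint: bytes          # the machine-readable build data itself

     @property
     def content_id(self) -> str:
         # Content addressing keeps storage decentralized and verifiable.
         return hashlib.sha256(self.blueprint).hexdigest()

 class ThingIndex:
     def __init__(self) -> None:
         self.entries: list[ThingEntry] = []

     def add(self, entry: ThingEntry) -> None:
         self.entries.append(entry)

     def search(self, keyword: str) -> list[ThingEntry]:
         # Naive keyword search over documentation and tags.
         kw = keyword.lower()
         return [e for e in self.entries
                 if kw in e.description.lower() or kw in (t.lower() for t in e.tags)]

 index = ThingIndex()
 index.add(ThingEntry("water filter", "Gravity-fed filter for drinking water.",
                      ["water", "survival"], b"placeholder blueprint data"))
 print([e.name for e in index.search("water")])

Content addressing lets independent mirrors verify that a piece of documentation and the blueprint it describes still belong together.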

Unlike today's (2022) 3D printers, portable nanofactories will often have highly advanced computers integrated,
and people will need suitable ways of interfacing with them.

Interfaces like displays directly on the devices will strongly depend on the form factor of gem-gum factories.
An obvious necessity is a 3D preview of what one is about to make.
Tools for 3D-modeling will also need to be extended so that they become accessible to users with lower skill levels.
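
One way to make 3D-modeling accessible at lower skill levels is to expose only a few named parameters with built-in guard rails and an automatically derived preview. The following sketch is purely illustrative; ParametricCup and its limits are made-up examples:

 # Minimal sketch of low-skill-level 3D-modeling: the user only touches a
 # few named parameters; ranges and derived values are checked for them.
 # ParametricCup and its limits are hypothetical examples.
 from dataclasses import dataclass
 import math

 @dataclass
 class ParametricCup:
     height_mm: float = 90.0
     diameter_mm: float = 75.0
     wall_mm: float = 2.0

     def validate(self) -> None:
         # Guard rails stand in for expert judgment.
         if not 40 <= self.height_mm <= 200:
             raise ValueError("height_mm must be between 40 and 200")
         if self.wall_mm < 1.0:
             raise ValueError("wall_mm must be at least 1.0")

     def preview_summary(self) -> str:
         # Text stand-in for an actual rendered 3D preview.
         self.validate()
         volume_ml = math.pi * (self.diameter_mm / 2 - self.wall_mm) ** 2 \
                     * (self.height_mm - self.wall_mm) / 1000.0
         return f"Cup, {self.height_mm:.0f} mm tall, holds about {volume_ml:.0f} ml"

 print(ParametricCup(height_mm=110).preview_summary())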

(wiki-TODO: Improve on this section)

See related page: Future of human computer interaction

For devices intended as technology backup for civilization

See main pages: Desert scenario & Disaster proof

Accessibility of the device

Neither the language, nor the level of education, nor possible disabilities of the user can be known in advance.
Thus the user interface system needs to come preloaded with the ability to deal with a wide variety of scenarios.
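
A minimal sketch of what "preloaded for a wide variety of scenarios" could mean in practice: start with every interaction channel enabled and only narrow down once the device learns something about the user. The observation fields and channel names are hypothetical:

 # Minimal sketch of a preloaded accessibility selection step.
 # Observation fields and channel names are hypothetical.
 from dataclasses import dataclass

 @dataclass
 class UserObservations:
     responds_to_speech: bool | None = None    # None = not yet known
     responds_to_display: bool | None = None
     language_detected: str | None = None

 def active_channels(obs: UserObservations) -> set[str]:
     channels = {"display_text", "symbols", "speech", "braille", "vibration"}
     # Only drop a channel once there is evidence it does not work.
     if obs.responds_to_speech is False:
         channels.discard("speech")
     if obs.responds_to_display is False:
         channels -= {"display_text", "symbols"}
     return channels

 print(active_channels(UserObservations(responds_to_display=False)))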

Communicating UI expandability of the device

For such devices a small form factor (perhaps around the size of a keyfob geo-tag) might be ideal,
since smaller means sturdier and more units can be made and distributed. This limits the display size available on first encounter
and makes communicating the option of expanding the user interface to
smartphone size, laptop size, or smart glasses a first priority.
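
A hedged sketch of how such a device might announce its UI expandability to larger nearby devices; the message format, field names, and transport hint below are assumptions for illustration, not a real protocol:

 # Minimal sketch of a UI-expansion announcement from a keyfob-sized device.
 # The message format and transport are hypothetical; a real device might
 # use something like local wireless advertising.
 import json

 def ui_expansion_offer(device_id: str) -> bytes:
     offer = {
         "device": device_id,
         "capability": "ui-expansion",
         # Larger form factors the compact device can project its UI onto.
         "targets": ["smartphone", "laptop", "smart-glasses"],
         "protocol_hint": "local-wireless",   # placeholder, not a real protocol name
     }
     return json.dumps(offer).encode()

 print(ui_expansion_offer("gemgum-factory-0001"))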

Communicating availability/presence of the device

The initially small and compact device can obviously call attention to itself via bright flashes and loud sounds, at a notable expenditure of energy. The question is: when should it?

  • How to detect that someone in the vicinity might be in trouble?
  • How to tell that it is not just a passerby who is totally fine and
    would only be annoyed by randomly scattered screaming and flashing devices?
  • How to balance detection against energy consumption
    (a sketch below illustrates this trade-off)?
    The small surface area gives little solar power for recharging, and even less when buried.

When buried, only small thermal gradients and ground vibrations remain as very minuscule energy sources.
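
The trade-off between detection and energy consumption could be handled with a duty-cycled loop like the following sketch; the energy figures and the distress heuristic are made-up placeholders, shown only to illustrate the structure of the trade-off:

 # Minimal sketch of duty-cycled distress detection under a tight energy
 # budget. Energy numbers and the distress heuristic are placeholders,
 # not real device figures.
 import random

 ENERGY_SENSE = 1.0     # cost of one listening window (arbitrary units)
 ENERGY_ALERT = 50.0    # cost of one flash-and-sound burst

 def distress_detected() -> bool:
     # Placeholder heuristic; a real device might look at vibration patterns,
     # voices, or repeated taps on its casing.
     return random.random() < 0.02

 def run(budget: float, cycles: int) -> None:
     for _ in range(cycles):
         if budget < ENERGY_SENSE:
             break                      # stay dormant, wait for recharge
         budget -= ENERGY_SENSE
         if distress_detected() and budget >= ENERGY_ALERT:
             budget -= ENERGY_ALERT
             print("alert burst, remaining energy:", round(budget, 1))

 run(budget=200.0, cycles=500)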

Redundant communication channels

Besides textual communication in various languages, voice input/output, and braille for touch reading,
the devices definitely need to include a symbolic language for
communicating at least the means to fulfill basic needs. This calls for universally understood symbols (skull, smile, …) rather than symbols that require knowledge of a specific culture to be understood.

Note that this is not about multimodality (communication channels complementing each other), which is also valuable.
Here, each communication channel alone should be able to convey as much as possible on its own.
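
A minimal sketch of this redundancy requirement: the same message is stored so that each channel can render its complete meaning independently. Channel names, language codes, and the symbol vocabulary are hypothetical:

 # Minimal sketch of redundant (not merely multimodal) messaging: every
 # channel carries the complete meaning on its own. Channel names and the
 # symbol vocabulary are hypothetical.
 from dataclasses import dataclass

 @dataclass
 class Message:
     meaning: str                      # canonical description of the content
     text: dict[str, str]              # full text per language code
     symbols: list[str]                # culture-independent pictograms
     speech: dict[str, str]            # full spoken phrasing per language

 WATER_HERE = Message(
     meaning="safe drinking water can be produced here",
     text={"en": "This device can make safe drinking water.",
           "es": "Este dispositivo puede producir agua potable."},
     symbols=["droplet", "cup", "check-mark"],     # no skull: nothing dangerous
     speech={"en": "This device can make safe drinking water."},
 )

 def render(msg: Message, channel: str, lang: str = "en") -> str:
     # Each branch must stand alone; none relies on another channel for context.
     if channel == "text":
         return msg.text.get(lang, msg.text["en"])
     if channel == "symbols":
         return " ".join(msg.symbols)
     if channel == "speech":
         return msg.speech.get(lang, msg.speech["en"])
     raise ValueError(f"unknown channel: {channel}")

 print(render(WATER_HERE, "symbols"))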

Artificial Intelligence

Given the potential for high-density, low-energy computing that becomes tappable with gem-gum technology,
adding AI/AGI assistance should be possible, even with a rather compact (keyfob-sized) form factor.

Related