
User:Rudxain/Permacomputing


Contents
  1. Why
  2. Guidelines
  3. Ease of re-creation
  4. Ubiquity
  5. Interoperability
  6. Archivability
  7. External links
  8. See also
  9. References

Permacomputing is a concept and movement centered on sustainable information technology (IT): long-lived devices and long-term data preservation. In short: permaculture applied to computing.

Permacomputing is the antithesis of planned obsolescence (and anything related to it) and bloat.

Why

Here's a quote from 100R (links added by me; typos corrected):[1]

Many of the tools that we thought we could rely on broke down, whether it is Apple products, or software that requires subscription services, DRM, etc. As an artist you spend time developing a skill; you become a Photoshop illustrator. When your connection to the internet fails and the software locks up, that skill that you thought was yours was actually entirely owned by someone, and can be taken away. Even though we've been paying for this sort of software for years, the moment that you can't have access to authenticate yourself, that skill is gone. We didn't expect this; it scared us.

Here's one from Maxime Chevalier-Boisvert (links added):[2]

We live in a world where software is increasingly complex, and increasingly fragile as a result. It's very easy to end up in a situation where software that was working just fine a few months ago can no longer compile and run due to broken dependencies. One of the main goals of UVM is to combat the phenomenon known as "code rot" or "bit rot".

[...]

There seems to be a growing interest in retrocomputing, and that interest likely stems, in large part, from the fact that the complexity of modern computer systems and their software environment is extremely high, and there is constant unnecessary churn, which becomes exhausting. At some point, programmers just want to build, and there is a natural desire to declutter and have fun.

Jeff Huang:[3]

Bookmark after bookmark led to dead link after dead link. What's vanished: unique pieces of writing on kuro5hin about tech culture; a collection of mathematical puzzles and their associated discussion by academics that my father introduced me to; Woodman's Reverse Engineering tutorials from my high school years, where I first tasted the feeling of control over software; even my most recent bookmark, a series of posts on Google+ exposing usb-c chargers' non-compliance with the specification, all disappeared. This is more than just link rot, it's the increasing complexity of keeping alive indie content on the web, leading to a reliance on platforms and time-sorted publication formats (blogs, feeds, tweets).

Guidelines

This is not a guide; it is a set of "universal" principles.

Ease of re-creation

Simple tools are easy to re-implement from scratch, so choose (and learn) simple tools whenever possible. If you ever find yourself in a situation where you have a computer with no software, or you don't even have a computer, you can rely on simple tools and skills to make what you need (or find someone else to do it).

If the tool you need is hard to re-implement manually, ensure it's easy to re-create automatically. For example, ensure the tool is widely-available in a source-code form that can be built by a simple compiler (such as TCC) or executed by a simple interpreter/CPU (interpreters are just virtual CPUs); this is source-level portability.
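To make source-level portability concrete, here's a minimal sketch (my own illustration, not something from the article): a program in plain C89, with no platform-specific headers and no build system, so even a tiny compiler such as TCC can rebuild it, or run it straight from source.

    /* Source-level portability sketch (illustration).
     * Plain C89, standard library only, no build system, so a minimal
     * compiler can rebuild it, or even run it directly:
     *   tcc -run hello.c
     *   cc -std=c89 -o hello hello.c
     */
    #include <stdio.h>

    int main(void) {
        puts("still buildable decades from now");
        return 0;
    }

The program itself is beside the point; what matters is that nothing in it depends on a particular vendor, OS, or decade.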

This principle is the basis of bootstrapping. Simple (and tedious) tools can be used to make useful (and complex) tools! This goes hand-in-hand with reproducible builds, which improves transparency.[4]
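As a small illustration of the reproducible-builds point (my example; SOURCE_DATE_EPOCH is the convention documented at reproducible-builds.org, which the article doesn't mention): embedding the build clock via __DATE__/__TIME__ makes every build differ, whereas taking a fixed timestamp from the build environment keeps the output bit-for-bit identical.

    /* Reproducibility sketch (illustration). Instead of baking in __DATE__ /
     * __TIME__, accept a fixed timestamp from the build system, e.g.:
     *   cc -DBUILD_EPOCH="$SOURCE_DATE_EPOCH" -o tool tool.c
     * Two people building the same source then get identical binaries. */
    #include <stdio.h>

    #ifndef BUILD_EPOCH
    #define BUILD_EPOCH 0 /* fall back to a constant, never to "now" */
    #endif

    int main(void) {
        printf("built at unix time %ld\n", (long)BUILD_EPOCH);
        return 0;
    }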

Examples of simple programming-languages (typically Turing tarpits):
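To give a sense of how simple such a language can be, here's a minimal interpreter sketch in C for brainfuck, the canonical Turing tarpit (my illustration; it has no bounds checking and assumes matched brackets, to keep it short):

    /* Minimal brainfuck-style interpreter (illustration). The program is
     * passed as the first command-line argument. */
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv) {
        static unsigned char tape[30000];            /* the data tape */
        const char *prog = argc > 1 ? argv[1] : "";  /* program text */
        size_t pc = 0, ptr = 0, len = strlen(prog);

        while (pc < len) {
            switch (prog[pc]) {
            case '>': ptr++; break;
            case '<': ptr--; break;
            case '+': tape[ptr]++; break;
            case '-': tape[ptr]--; break;
            case '.': putchar(tape[ptr]); break;
            case ',': { int c = getchar(); if (c != EOF) tape[ptr] = (unsigned char)c; } break;
            case '[':                 /* on zero, skip forward past matching ] */
                if (!tape[ptr]) {
                    int depth = 1;
                    while (depth && ++pc < len) {
                        if (prog[pc] == '[') depth++;
                        else if (prog[pc] == ']') depth--;
                    }
                }
                break;
            case ']':                 /* on nonzero, jump back to matching [ */
                if (tape[ptr]) {
                    int depth = 1;
                    while (depth && pc-- > 0) {
                        if (prog[pc] == ']') depth++;
                        else if (prog[pc] == '[') depth--;
                    }
                }
                break;
            }
            pc++;
        }
        return 0;
    }

For instance, running it with the program ++++++++[>++++++++<-]>+. prints "A".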

Ubiquity

If the tool is hard to re-create, ensure it's as close to omnipresent as possible. This is what I call "useful redundancy". That way, you can rely on it being available in almost any situation.

Examples:

Interoperability

Main article: wikipedia:Interoperability

Ensure the tool-set/tool-chain you use is like an orchestra: there must be some harmony. That is, ensure many tools can communicate with each other using common formats and protocols. Ensure those formats and protocols follow the same guidelines as the tools (simple, open, ubiquitous, etc.). In short, follow the Unix Philosophy. Even a tiny set of simple tools can be orders-of-magnitude more useful than a single simple tool.
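A minimal illustration of that philosophy (my sketch, not from the article): a filter that reads plain text on stdin and writes plain text on stdout, so it composes with any other tool through a pipe.

    /* Unix-philosophy sketch (illustration): a tiny filter that upper-cases
     * its input. It speaks the most ubiquitous "protocol" there is (a text
     * stream on stdin/stdout), so it chains with anything, e.g.:
     *   cat notes.txt | ./upper | sort | uniq
     */
    #include <stdio.h>
    #include <ctype.h>

    int main(void) {
        int c;
        while ((c = getchar()) != EOF)
            putchar(toupper(c));
        return 0;
    }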

Examples:

  • Zig, which has "reusable software" as part of its slogan. It also has C interop. However, its API is still unstable, so expect breaking changes until the language settles.

Archivability

(Yes, that's a real word.) This is more about data/info than about computing itself. When all else fails, at the very least, ensure data can be easily copied and archived. This isn't just for tools; it applies to any kind of data: personal memories, historical events, etc.
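One way to make "easily copied" concrete (my illustration; the article doesn't prescribe a method): after copying, verify the copy against the original before trusting it. POSIX already ships this as cmp(1); the sketch below just shows how little is behind such a tool.

    /* Copy-verification sketch (illustration): compare two files byte by
     * byte, so a silently corrupted copy is caught before the original is
     * discarded. Usage: ./verify original backup */
    #include <stdio.h>

    int main(int argc, char **argv) {
        FILE *a, *b;
        int ca, cb;
        if (argc != 3) { fprintf(stderr, "usage: %s original copy\n", argv[0]); return 2; }
        a = fopen(argv[1], "rb");
        b = fopen(argv[2], "rb");
        if (!a || !b) { perror("fopen"); return 2; }
        do {
            ca = getc(a);
            cb = getc(b);
            if (ca != cb) { puts("files differ"); return 1; }
        } while (ca != EOF);
        puts("files are identical");
        return 0;
    }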

Universal truths, such as those found in math and physics, can be "extracted" and re-derived at any time. But events are unique, so they are in great danger of being forgotten (see also: lost media). You don't have to become a data hoarder; just focus on important data.

BTW, there's a common misconception that physical media is inherently better than digital media. The main problem is centralized media (typically on the cloud). Physical media is usually still digital (unless it's an analog format, like cassette tapes or vinyl records); it's just stored outside of a device.

See also

References