Features

XXX.5 September - October 2023
Page: 44

The Purpose of a System Is How We Shape It


Authors:
Oliver Cox


Why are our information systems places where ideas go to die? Why do so few of us, and fewer organizations, gain any level of mastery over our documents, ideas, and data?

Insights

Our computers and their software are not currently ideisomorphic: They do not make it natural to express human ideas.
In terms of cybernetics, they are not "good regulators," which is why so much human energy is crushed by computers.
Because of the feedback loop that exists between computers and ideas, the danger is that we will eventually think like machines at the expense of the subtlety and surprise of human thinking.

The reason is that our information management systems are not shaped like human consciousness. Drawing upon the field of cybernetics, I claim that to manage and master today's immense variety of information, we need immense variety within our information systems. Without tools that mirror the range of human consciousness, our information will swallow us up; perhaps you feel like it already has.

What might tools with the nuance and humanity necessary to express our ideas look like? To answer this question, I aim to do a few things in this article. First, I'll try to establish a base of values that we ought to pursue in the technologies we develop. Then I'll narrowly define a term—ideisomorphism—that I have used a great deal in various articles up to this point to refer to tools that are naturally suited to the expression of human thought. With this definition established, I'll explain in terms of cybernetics why our information systems need ideisomorphism to function properly. Finally, I'll discuss a quantitative framework for measuring the extent to which tools are ideisomorphic.

Introduction: Values

The computer screen is the retina of the mind's eye. I've adapted this expression from David Cronenberg's wonderful movie Videodrome, in which Dr. Brian O'Blivion says, "The television screen is the retina of the mind's eye." In essence, multimedia computers are subtle enough to conjure any experiences in our consciousness. They might even engulf it.

How? Our ideas influence the sorts of computers we build. At the same time, the structure of our computers influences how we think. How we think, in turn, influences the ideas we come up with. The cycle then continues.

I call this cycle, which I defined back in 2020, the Three Worlds of Computers [1] (Figure 1). As a result, any trend, idea, or fact of computing has the potential to be reproduced ad infinitum by influencing how we think. Moreover, it may condition our overall perspective to the extent that we forget there are alternatives.

Figure 1. The Three Worlds of Computers cycle.

Computers cannot adapt, so when you use a computer you must adapt to it: You are teaching yourself to think like it. With that in mind, here are a few principles for how things should be.


With respect to how humans and computers interact:

  • Ideisomorphism. Our software should have the necessary fidelity to represent human thinking and should make it natural to do so.
  • Humanity. Our software should be humane, or as Jef Raskin puts it, "responsive to human needs and considerate of human frailties."
  • Pedagogy. Our software should be immediately approachable, difficult to break, and should guide a new user on a path of mastery.

With respect to computers themselves:

  • Addressability. Our software should make data atomically identifiable and addressable, and public data universally findable, regardless of perspective.
  • Relationships. Our software should reflect the fact that relationships are as important as things, and promote the user's ability to see the big picture.
  • Interoperability. Our software should manage information via a common, open data structure, with no unnecessary barriers between systems.

Whom Does This Affect?

Here are a few examples of how this manifests for real people, today—speaking in generalities with limited exceptions.

Researchers/academics. These individuals deal with a changing field of research papers, databases, conference presentations, and, of course, informal notes and ideas, all richly interconnected. They need a visibly connected network of items, with citations drawn between the specifically referenced passages, not just page numbers. Instead, they are forced to view one paper at a time and can see what a given paper cites but not what cites it.

Legal professionals. As with researchers and academics, lawyers, paralegals, and other legal professionals deal with a similarly interconnected and changing landscape, except with the relationships between precedents, statutes, guidelines, and, of course, the particulars of their current legal case.

Managers/executives/coordinators/project managers. Effective managers need information summaries with the facility to delve into attendant information. In reality, executive summaries are often totally cordoned off from their sources (and are therefore tools of manipulation) while corporate document systems are where information goes to die, as outlined by research from RingCentral [2]. What information one can actually find will be split among multiple incompatible data types and applications.

Software creators. Programmers, product managers, system architects, and other professionals manage thousands of functions, objects, features, and requirements with uncountable relationships and dependencies. Much software today is indeed very good, but, as Casey Muratori explains [3], modern systems are so complex that we have lost track of the overall structure, leading to instability and slowness.

Countless other professions have similar concerns and are similarly underserved by their software.

Background: Cybernetics and the Good Regulator

I'd like to draw upon the field and terminology of cybernetics to help me. Cybernetics is the study of maintaining a particular state of affairs within systems, whether they are machines, air traffic control organizations, companies, or countless other structures.

A regulator is a system designed to achieve a set of conditions within another system. A thermostat is a regulator designed to maintain a temperature, such as in a house. More-complex regulators include air traffic control organizations, highway management systems with variable speed limits, and enterprise resource management solutions.

The theorem of the "good regulator" by Roger C. Conant and W. Ross Ashby posits that "[a]ny regulator that is maximally both successful and simple must be isomorphic with the system being regulated" (emphasis mine) [4]. In common English: A regulator that can actually regulate the desired system, without being overly complicated, must share the structure of the system it regulates.

With this in mind, the nexus of written material, information, and databases that we need to parse, create, edit, reuse, and organize is the system (all, in essence, an outgrowth of consciousness) and our information tools are the regulators. A good regulator must be isomorphic with the system it regulates—information systems must therefore be isomorphic with human consciousness, or ideisomorphic.

The nature of our software manifests in the shape of our consciousness and of our ability as a species to build relationships based on human solidarity. Today's deranging software imprisons us within rigid data structures and destroys our connections with others.

Having talked about ideisomorphism [1] for some years now, I will in this article attempt to give it a mathematical definition and provide a means of quantifying it. We can use this as an index through which to judge our machines.

Isomorphism and Ideisomorphism

Isomorphism is a mathematical concept describing a relationship between two systems whose structures are the same. Two systems are isomorphic if you can map each element of one system onto each element in the other system and transform back and forth without losing information.

A parallelogram and a square are isomorphic: You can map each of the four sides of the parallelogram to the sides of a square, and its four corners to the corners of the square. However, a triangle and a square are not isomorphic, because the square's extra side and corner mean that a one-to-one mapping is not possible (Figure 2).

Figure 2. Isomorphism and non-isomorphism.
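This simplified notion of isomorphism can be sketched in a few lines of code. The sketch below is my own illustration (the names and the shapes-as-lists representation are assumptions, not from the article): a one-to-one mapping between two systems exists only when every element of one can be paired with exactly one element of the other.

```python
# Shapes represented as lists of their elements (illustrative only).
parallelogram = ["side1", "side2", "side3", "side4"]
square = ["top", "right", "bottom", "left"]
triangle = ["a", "b", "c"]

def bijection(xs, ys):
    """Pair elements one-to-one, or report that no such mapping exists."""
    if len(xs) != len(ys):
        return None  # a leftover element means no isomorphism
    return dict(zip(xs, ys))

print(bijection(parallelogram, square) is not None)  # → True
print(bijection(triangle, square) is not None)       # → False
```

The square's extra side is exactly the "leftover element" that makes the second mapping impossible.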

Ideisomorphism is the property of systems that 1) have the necessary fidelity to represent human thinking and 2) make it natural to do so. Mathematically, an ideisomorphic system must have an interface with a one-to-one correspondence to each of the important elements of the area of human thinking that it is attempting to model.

One can quantify the extent to which a given system is ideisomorphic by asking the following questions:

  • Of the objects in your system, how many are available in the software?
  • Of the relationships in your system, how many are available in the software?
  • Of the actions in your system that correspond to consistent (and therefore possible) software actions, how many are available in the software?

One expresses the ideisomorphism score as the number of things one can do out of the total number of things one should be able to do (e.g., 1 out of 3, or 33 percent).
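The scoring scheme is simple enough to state as code. Here is a minimal sketch, with illustrative names of my own choosing; the category counts are the ones derived for word processors later in this article.

```python
def ideisomorphism_score(available, total):
    """Fraction of a domain's elements that the software can represent."""
    if total == 0:
        raise ValueError("the domain must contain at least one element")
    return available / total

# Per-category scores for one hypothetical tool:
objects = ideisomorphism_score(1, 7)        # ~14 percent
relationships = ideisomorphism_score(1, 4)  # 25 percent
actions = ideisomorphism_score(3, 5)        # 60 percent

# Aggregate by pooling the raw counts rather than averaging percentages.
aggregate = ideisomorphism_score(1 + 1 + 3, 7 + 4 + 5)
print(f"{aggregate:.0%}")  # → 31%
```

Note the design choice in the last step: pooling counts weights each category by how many elements it contains, whereas averaging the three percentages would weight all categories equally.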

My foremost personal goal is to build ideisomorphic computer systems. To the extent that humanity's information systems lack this quality, our ideas must be distorted, even destroyed, to fit within computers. The more ideisomorphic we make our computers, the more natural it will be to work with them.

When ideisomorphic systems finally gain mass adoption, I expect people to experience something like William Blake described in The Marriage of Heaven and Hell: "If the doors of perception were cleansed every thing would appear to man as it is, Infinite. For man has closed himself up, till he sees all things thro' narrow chinks of his cavern" [5].

What might this experience be like, therefore? Imagine the difference between donning fashionable women's clothing from the 1700s—which through a corset and other devices restricts and changes the shape of the body—and acquiring a tailored suit. You must adapt to the former; the tailor adapts the latter to you.

To demonstrate, I will assess a human system and the attendant software: writing and word processing.

Writing Versus Word Processing

For the human art of writing we have software tools called word processors, the defining modern example of which is Microsoft Word, with Google Docs, LibreOffice, and others providing alternatives.

Objects. Limiting ourselves to English—I can speak only English well enough to advocate for it—the objects of the language are:

  • Characters
  • Morphological subword parts like roots and suffixes
  • Words
  • Grammatical subsentence constructs like phrases and clauses
  • Sentences
  • Paragraphs
  • Nested organizational groupings, such as subheadings, headings, chapters, etc.

A word processor can create characters, but no word processor that I know of can uniquely identify parts, words, subsentence units, sentences, paragraphs, sections, or chapters.

Therefore, its ideisomorphism score for objects would be 1 out of 7, or 14 percent.

Why does this matter? Currently, to achieve any transformation on a piece of writing in a computer, one must first translate the operation into an operation on a string of characters. For example, take the following sentence from the First Amendment to the U.S. Constitution:

Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.

If one wished to make the third clause "or the right of the people peaceably to assemble" into the second clause (replacing "or abridging the freedom of speech"), one must first manually select the text "or the right of the people peaceably to assemble," then cut this text, then paste it after the first semicolon.

This is incredibly tedious, rife with the possibility for error (missing characters due to imprecise selection, etc.) and usually requires cleanup (capital letters and periods often end up in the wrong places). This is, of course, a relatively simple edit.

This operation should be as simple as selecting the third clause with one click and then dragging it to the desired location. Computers can and should manage the attendant capitalization and punctuation.
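What would the clause-level operation look like in software? Here is a minimal sketch, under the assumption that a sentence is stored as an ordered list of clauses rather than as one character string; all function names are my own illustration, not a real word processor's API.

```python
def render(clauses):
    """Join clauses, regenerating capitalization and punctuation."""
    body = "; ".join(clauses)
    return body[0].upper() + body[1:] + "."

def move_clause(clauses, src, dst):
    """Move the clause at index src to index dst; no manual cleanup needed."""
    clauses = list(clauses)  # work on a copy
    clauses.insert(dst, clauses.pop(src))
    return clauses

first_amendment = [
    "congress shall make no law respecting an establishment of religion, "
    "or prohibiting the free exercise thereof",
    "or abridging the freedom of speech, or of the press",
    "or the right of the people peaceably to assemble, and to petition the "
    "Government for a redress of grievances",
]

# Make the third clause the second: one structured operation,
# with capitalization and punctuation handled by render().
reordered = move_clause(first_amendment, src=2, dst=1)
print(render(reordered))
```

The user's single drag becomes one `move_clause` call; the error-prone selection, cutting, pasting, and cleanup all disappear into `render`.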

There are two consequences of this omission for the user. One is that it is draining to use computers thus configured, slaving away at repetitive tasks that computers are designed to undertake. All this energy is sapped from our creative endeavors. The other consequence is that, to the extent that computers make us think about shunting characters, we are not thinking about what writing is about: structure and ideas.

Relationships. It is also important for word processors to represent several types of relationships, including the following:

  • Linear relationships. In word processors, the structure is always linear: One character leads to the next. The word processor genre therefore gets one point for satisfying this single aspect of written structure.
  • Grammar and morphology. However, there is a world of structure missing within the sentence and the word. See the breakdown of the sentence "John hit the ball" in Figure 3.
Figure 3. Grammatical breakdown of the sentence "John hit the ball."

This more natural and descriptive (and controversial) way of structuring writing is completely absent from our software, despite its immense promise as a tool for writing, editing, and teaching.

  • Reuse. The ability to reuse and organize preexisting material is also quite absent. By this I mean not copying and pasting, but rather referencing prior material to show it additional times—in other words, "this idea belongs in both these headings."

Modern word processors preclude a given sentence being in more than one place, although this is perfectly coherent and possible on our computers. Ted Nelson defined this concept [6] in the 1960s and called it transclusion (Figure 4).

Figure 4. The transclusion process.
  • Links. Nelson (again) explained decades ago that writing should be visibly connected to the material it cites, via links that are visible from all endpoints. Instead, today's links are one-way: Shakespeare can reference the King James Bible and link to it, but when you look at the Bible there's no evidence of that connection.
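Transclusion and two-way links can be sketched together with one small data structure: store each piece of content once, as an addressable atom, and record every reference in both directions. This is an illustrative design of my own, not Nelson's actual Xanadu implementation.

```python
atoms = {}      # atom id -> text (stored once, never copied)
documents = {}  # doc id -> ordered list of atom ids
backlinks = {}  # atom id -> set of doc ids that transclude it

def add_atom(atom_id, text):
    atoms[atom_id] = text
    backlinks[atom_id] = set()

def transclude(doc_id, atom_id):
    """Reference an atom from a document; record the reverse link too."""
    documents.setdefault(doc_id, []).append(atom_id)
    backlinks[atom_id].add(doc_id)

def render(doc_id):
    """A document is rendered from the atoms it references."""
    return " ".join(atoms[a] for a in documents[doc_id])

add_atom("kjv:psalm23", "The Lord is my shepherd.")
transclude("shakespeare-essay", "kjv:psalm23")
transclude("sermon-notes", "kjv:psalm23")

# Edit the original once and every transclusion sees the update;
# the source can also enumerate everything that cites it.
print(backlinks["kjv:psalm23"])
```

Because quoting documents hold references rather than copies, editing `atoms["kjv:psalm23"]` updates both documents, and the backlink set answers "what cites this?" without crawling anything.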

For relationships, the ideisomorphism score would be 1 out of 4, or 25 percent.

Why does this matter? Our flawed conception of links sets up documents as islands, rather than as nodes in a broader network, making it practically impossible to know reliably what links to what and what quotes what, much less visualize it. Word processors train us to think of writing as an "I'll take care of mine" type pursuit.

If we had real links and real transclusion, we could know instantly when someone thus interacts with our work. Today, one's best bet is to use proprietary or expensive tools like Google and Semrush, which unnecessarily visit every webpage they add to their incomplete index, at considerable cost.

Meanwhile, any material copied for quotation dies immediately. In most cases, it makes more sense to transclude a section and have that section update as the original updates, for example, for news, science, biographies, and so on.

We act like writing is a set of separate, sovereign documents, when in fact writing is a nexus of interconnected, interspersed material. While individuals are responsible for their material, the sum of connections among pieces of material is the responsibility of the community.


We are not masters of our information. We can be, though, if we build information systems shaped like human consciousness.


Actions. Obviously word processors can do a lot of what's needed for a system for writing, including:

  • Creating original material (typing characters into a document)
  • Editing material (delete, copy/paste, find/replace)
  • Undo/redo, track changes

For what they can't do, I'll limit myself to just two items here, for brevity:

  • Cut and paste (not to be confused with the modern construct of adding a single piece of content to memory to be output elsewhere). Ted Nelson explains that "the real work of writing is rewriting; and especially in big projects, is principally the overview and control of large-scale rearrangement" [7]. However, all modern writing tools lack the paper technique known as cut and paste: the art of cutting a piece of writing into pieces and freely moving them around into a new structure.

This is possible today only in a plodding, one-thing-at-a-time manner. Imagine being able to hit a button and explode your article into individual sentences and/or paragraphs, then drag them into a new arrangement.

  • Version history tree. Then there's another Nelson concept: the version history tree (Figure 5). You may have made some progress with a document (call it version 2), then hit undo a significant number of times to return to a previous state (version 1), then edited the document again (version 3).
Figure 5. Sketch of a simple version history tree.

This action (editing a document after hitting undo) destroys what was your latest version (2). Who on earth thought that version 2 wasn't worth preserving? Instead, every time you hit undo and make a new edit, the word processor should create a new branch of the version history tree and preserve that progress.
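The branching behavior is straightforward to implement, which makes its absence all the stranger. Here is a minimal sketch of a version history tree (an illustrative data structure of my own, not from any particular product): undo merely moves a pointer, and editing after undo creates a branch instead of destroying the newer version.

```python
class VersionTree:
    def __init__(self, initial_text=""):
        self.nodes = [{"text": initial_text, "parent": None}]
        self.current = 0  # index of the active version

    def edit(self, new_text):
        """Create a child of the current version; never overwrite."""
        self.nodes.append({"text": new_text, "parent": self.current})
        self.current = len(self.nodes) - 1

    def undo(self):
        """Move the pointer to the parent version, if one exists."""
        parent = self.nodes[self.current]["parent"]
        if parent is not None:
            self.current = parent

    @property
    def text(self):
        return self.nodes[self.current]["text"]

tree = VersionTree("draft")           # version 1
tree.edit("draft, version 2")         # version 2
tree.undo()                           # back to version 1
tree.edit("draft, version 3")         # branches; version 2 survives
print([n["text"] for n in tree.nodes])
# → ['draft', 'draft, version 2', 'draft, version 3']
```

Editing after undo appends a sibling branch rather than overwriting, so version 2 remains reachable in the tree.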

For actions, the ideisomorphism score would be 3 out of 5, or 60 percent.

Why does this matter? This is another instantiation of our misunderstanding of what writing is. Ted Nelson put it perfectly in Dream Machines:

Some people think you make an outline and follow it, filling out the details of the outline until the piece is finished. This is absurd…. Basically writing is: The try-and-try again interplay of parts and details against overall and unifying ideas which keep changing [7].

Nelson here expresses two invaluable premises:

  • You can't actually know for sure what's on your mind until you try to write or verbalize it.
  • As you undertake this process, the ideas you're trying to express themselves change.

Beyond saving time, real cut and paste will reduce the delay between imaginative creation or decision making and the completion of the required actions. Ideas decay quickly, and immediate action allows the writer better to capitalize on them and even to enter a state of "flow"—thinking about your software is a flow killer. The lack of a version history tree contributes to much the same malaise.

Both of these omissions limit what we might call the "scope of reorganization." Our tools—because they make it slow and painful to edit text—put dampers on what we imagine possible. Indeed, much writing today is marked by insufficient organization: beginning with "groping for the soap in the bath" attempts at getting started, finally getting going, then concluding with something of a summary (which should be moved to the beginning and adapted into an introduction), and almost always lacking a real synthesis by way of a conclusion.

This is not surprising given the tendency of the modern Google-optimized Web to encourage a spray-and-pray mentality, necessitating that writers produce reams of material at the expense of real refinement.

Let us now summarize the ideisomorphism scores for each of the three areas:

  • Objects: 1 out of 7
  • Relationships: 1 out of 4
  • Actions: 3 out of 5

That makes the aggregated total 5 out of 16, or 31 percent.

There you have it: As a genre, by my estimation, word processors are only about 31 percent capable of modeling human thinking.

Synthesis

I think the main argument against my approach is that it will increase complexity. Reducing complexity is a relevant goal in some circumstances. Randy Bush and David Meyer state in RFC 3439 that "complexity in networking systems is the primary mechanism that impedes efficient scaling" [8].

That said, if you wish to do complex things, the complexity has to go somewhere. Bush and Meyer note that "the complexity of the Internet belongs at the edges"—the computers on the Internet should accept the necessary complexity.

A corollary: Stupidly simple interfaces offload the complexity either to the user—who is stuck shunting text and copying and pasting, or who simply fails—or to other applications, which ends up giving us more complexity, incompatibility, and distortions, and, on the part of the user, misery.

Milton Glaser put it excellently:

Being a child of modernism I have heard this mantra all my life. Less is more…. But it simply does not obtain when you think about the visual of the history of the world. If you look at a Persian rug, you cannot say that less is more because you realize that every part of that rug, every change of colour, every shift in form is absolutely essential for its aesthetic success [9].

The trade-off in our software is rarely between simplicity and complexity, but rather between ugly complexity and graceful complexity. Graceful complexity is, I think, our main hope for not being washed away by the modern deluge of information and for using technology for its proper purpose: increasing the scope and leverage of human ideas and action.

Moderns are presented with a false prospectus not unlike the corseted women of the 1700s mentioned earlier: "We will allow you to participate in our system, if you wear a corset: It is necessary." Our computer norms are similarly fashions—presented to us as natural laws yet continuously toppled by those who think differently.

Meanwhile, I warn of a broader risk, prefigured in my injunction that the computer screen is the retina of the mind's eye: Computers are so totalizing that we may reach a point where our experience is mediated by them to the extent that we cannot go back. This would turn the whole project from something useful but awkward into an accidental tyranny so horrible that it would negate all the previous benefits.

As you might have gathered, I think reform in this area is the most important subject of human study.

References

1. Cox, O. Why the ethical character of the internet matters (the future of coordinated thinking pt. 3). Nov. 5, 2021; https://hyperstructure.media/2021/11/why-the-ethical-character-of-the-internet-matters/

2. RingCentral. From Workplace Chaos to Zen: How App Overload Is Reshaping the Digital Workplace; https://netstorage.ringcentral.com/documents/connected_workplace.pdf

3. Muratori, C. The thirty million line problem. Molly Rocket's YouTube page. May 12, 2018; https://www.youtube.com/watch?v=kZRE7HIO3vk&list=PLEMXAbCVnmY4JbNByvpgEzWsL-RKVaF_pk&index=2

4. Conant, R.C. and Ashby, W.R. Every good regulator of a system must be a model of that system. Int. J. Systems Sci. 1, 2 (1970), 89–97; http://pespmc1.vub.ac.be/books/Conant_Ashby.pdf

5. Blake, W. The Marriage of Heaven and Hell; https://www.bl.uk/collection-items/the-marriage-of-heaven-and-hell-by-william-blake

6. Nelson, T. Xanalogical structure, needed now more than ever: Parallel documents, deep links to content, deep versioning, and deep re-use. ACM Computing Surveys 31, 4 (Dec. 1999); https://cs.brown.edu/memex/ACM_HypertextTestbed/papers/60.html

7. Nelson, T. Computer Lib/Dream Machines, 1977.

8. Bush, R. and Meyer, D. Some Internet architectural guidelines and philosophy. Dec. 2002; https://www.rfc-editor.org/rfc/rfc3439

9. Glaser, M. Ten things I have learned (Part of an AIGA talk in London); https://www.miltonglaser.com/files/Essays-10things-8400.pdf

Author

Oliver Cox is founder and CEO of HSM, a hypertext start-up building software that allows users to express their ideas with high fidelity. He writes on the philosophy of technology, particularly the effects of computers on consciousness, calling for systems with the subtlety to represent human ideas that facilitate human solidarity. [email protected]


Copyright held by author. Publication rights licensed to ACM.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2023 ACM, Inc.
