
Everything is Beautiful All of the Time.

The wait for the successor to OK Computer was a tough time for Radiohead dorks like me. Coldplay and Muse might not exist if it weren’t for the superficial itch-scratching they delivered to us desperately impatient fans. But when Radiohead finally released an album in October of 2000 it was a shock to anyone who had enjoyed the one before it. The guitars were replaced by electronic tones and distorted vocals. Bleeping, blooping, bullshit.

After I let go of my expectations and gave it more time, those bleeps and bloops began coming together. Kid A grew on me and became a favourite album, one I still listen to over twenty years later. My perception of Radiohead changed from a band that was comparable to other alt-rock groups into something else. Other music I enjoyed before began to feel thin and limited. My mind had been opened a notch more than before.

Perseverance after the unexpected

What if, before Radiohead released the album, an AI system had generated a bunch of albums as possible successors to OK Computer and fans were asked to choose the ones they preferred? What are the chances they would pick something as confronting and unconventional as the one Radiohead created?

This isn’t an attempt to define art, nor is it an attempt to raise one form of art above another. The variations in how things are made are unquantifiable; unique to each maker. The same goes for how those things are received.

This is about what happens when a creative work gives us more after we give it more; and what makes us do that.

It has nothing to do with what prompts us to give something another chance; instead it’s about what we require in order to believe that it’s possible for more to be there. A friend might tell us about the subtext we missed in a boring film but we’re not going to rewatch it without a thought of the director or writer and their intent.

Nick Cave’s response to a song generated by AI in his style may be dramatic and overly fond of suffering as a prerequisite but he makes a strong point:

What makes a great song great is not its close resemblance to a recognisable work. Writing a good song is not mimicry, or replication, or pastiche, it is the opposite. It is an act of self-murder that destroys all one has strived to produce in the past. It is those dangerous, heart-stopping departures that catapult the artist beyond the limits of what he or she recognises as their known self.

Nick Cave – The Red Hand Files, January 2023

Cave confines his view to the interests of the artist but it applies as much to the receiver of the art. We give more because we hope to get something back and we hope it will be something new that catapults us forward.

Superficial beauty is pleasant but it can only hold you until it is replaced by something else. Superficial beauty can be automated because it follows conventions of composition. We can’t break conventions of composition without a genuine intent to communicate something in a way that is shaped by what is being communicated.

Sometimes we dig into a creative work but find nothing; an empty void. It doesn’t mean we failed to dig enough or that the artist failed in their communication. As long as we know something was made to express something other than superficial beauty, we’re left to like it or not; we can’t invalidate it as failure because that denies the artist the integrity of their intent. An unconventional composition created without any intent to communicate something is noise; any beauty would be accidental.

We don’t need to reduce artistic expression to a product of pain and suffering. If that were true we’d share our galleries with the works of elephants, tigers, and snails. Sympathy is not empathy.

When a chimpanzee throws its shit at a tree we don’t consider it a remarkable expression of the chimpanzee condition. Conversely, if we frame that shit and display it in the Louvre the chimpanzee won’t feel any more understood. We don’t know how a chimpanzee would express themselves through art because we don’t know what a chimpanzee wants to express and how they would express it.

Art communicates through a kinship between the artist and the observer. This translates into a relationship of trust because we have to believe that the artist created the work with genuine intent. Artists need to be observers as much as the observers they expect to appreciate their art. How can we trust the intent of an artist who can’t relate to a person’s appreciation of an old hand-woven rug over a mass-produced one?

A computer can’t synthesise that human artistic expression without synthesising the trust required for a human to appreciate it. That is a dishonest relationship which survives by keeping everything as conventional as possible. The result is infinite variations of aesthetic appeal which mask the fact that every variation is composed within a set of rules that gained legitimacy in the past. (See “Corporate Memphis”)

Consider how much the web has been visually standardised, to the point that it creates an almost unreasonable yearning for something different. That’s a symptom of the synthesised trust that conventional composition techniques result in. Jony Ive’s flattening of the iOS GUI; Google’s Material Design project; Facebook’s uncustomisable profile pages. They all contribute to a pool of automatable resources that can make a crypto exchange appear legitimate despite almost any suggestion to the contrary.

That yearning for something different is a yearning for those unconventional compositions and it feels unreasonable because we know deep down that they won’t make any sense if their purpose is purely ornamental.

Ineffective Automation

I would give anything to be able to pay $28,500 for my computer. Anything! And I can’t, I can only buy these frigging $1000 computers that don’t do what computers are really good for. If you think about it, this is completely fucked up. People are valuing their cars more than their computer? They don’t have any idea what a computer is, they’re just using it to play movies. If you think about this, this is the corruption of consumer electronics and that the computer is basically for convenience rather than for actually doing primary needs.

Alan Kay interviewed by Adam Fisher in Palo Alto, August 2014 – Valley of Genius podcast Season 1, Episode 5 (https://twit.tv/shows/valley-of-genius/episodes/5)

It’s hard to ignore the persistence and flourishing of printed books despite an abundance of digital alternatives. Since computers have had screens they’ve had text. Yet MP3s killed CDs and only the most dedicated buy their movies on discs instead of online.

This difference between books, music, and movies poses a question: what makes digital automation effective for us?

The easy answer is that automation makes things easy. It takes the laborious bits away so you can enjoy the bits you want.

The easy answer makes sense for music and movies because most of the laborious bits of physical media are peripheral to the listening and watching part, not part of it. But reading a physical book is different; it’s not quite as passive. The physical attributes of the printed book are intertwined with the reading part.

Here’s what I mean by that: to listen to recorded music, physical media or not, you use your hands—or voice—to start and stop the music, but your ears are all you need in between.

Printed books, though, need to be held up and open, and you can’t stop holding them until you’re finished reading. So the digitisation of physical media for music and movies is effective for us because its only change to the listening and watching part is the potential for higher quality audio and visuals. It automates away the peripheral bits between us and the good stuff because that’s where the physical media was most problematic.

For reading there’s an effectiveness of the printed book that no amount of paper-like display and skeuomorphic interface has managed to fully capture. E-Readers still need to be held like a book. They have pages like a book, and can be bookmarked too. You can highlight passages and write notes. They also do things books can’t do like having internet connectivity, instant purchases, and the ability to hold hundreds of books at a time; none of which have much to do with reading.

Consider the fact that it’s very difficult to do something else while reading a physical book because your hands are required at all times. It’s a binding that makes reading a deliberate activity; you can’t do a Sudoku while reading a book. Yet almost every feature an E-Reader brings to the party is an invitation to do something other than read.

So, what if that binding to the book is what makes the printed book effective for so many people?

This is where automation—or computation—can be seen from a different perspective, a counter-intuitive one. It’s what I think Alan Kay is referring to in his frustrated quote. What reason does the average person have to be interested in the potential of a computer when hardware upgrades are prompted by software that tells you to upgrade, either explicitly or implicitly? Outside of video games and 3D graphics, the computing power of a device does not correlate with any increase or decrease in tangible effectiveness. We are being sold speed without any concrete reason for why we need it.

It’s easy to say that automation is what makes things easy, but that’s a lazy way to describe the potential of computation. It ignores the potential for automation to make things more involved, more purposeful, more effective. It opens friction up to being something more than a UX dichotomy of good or bad, and instead a matter of varying degrees of effectiveness.

This doesn’t mean that E-Readers should have a spring-loaded cover that requires extra effort to hold them open. What it means is to recognise any exertion of energy as being everything it is in addition to being an exertion of energy. A fish swimming against the current could save plenty of energy by turning around, but it isn’t thinking about the swimming.

The effort exerted while reading a printed book minimises awareness and access to any potential outside of what is being read. The book wants your attention, it doesn’t work without it. We don’t long for some device that will hold the book and turn the pages for us because our hands are effective. We’re only aware of things that interfere with whatever we’re trying to do; obvious problems.

It’s not about having a sentimental attachment to the way things have been done. Far from it. This is a design concern; a way to design with a purpose constrained by the circumstances that are relevant rather than the ones that seem relevant.

Printed books live on because digital books automate all the things that seem relevant about reading a book. E-Reader technology is a collection of features that satisfy an array of purposes related to reading—the peripheral concerns—instead of having a purpose of making reading more effective. If we strip away the internet connectivity, bookmarking, highlighting, and dictionary lookups, we’re left with an inferior imitation of a printed book that needs to be recharged.

The Undeliberatable Means

This is a deliberative problem unlike deliberative problems of the past. In the past, deliberation led to decisions about means to be employed in given circumstances to achieve given and desired ends. Means were deliberated, but the circumstances and ends were not subject to deliberation. Today, deliberation is inverted. The computer provides new means — the means are given by technological development — but the circumstances and ends of computer use are, themselves, the subject of deliberation in the process of product development. This is a fundamental characteristic of our time, and it profoundly influences the development of human-computer communication.

Daniel Boyarski & Richard Buchanan — Computers and communication design: exploring the rhetoric of HCI, April 1994 (https://dl.acm.org/doi/10.1145/174809.174812)

The norm in the tech industry is to find ways to use computers to solve problems. “How can we use computers/the internet/software to solve a problem?”.

Treating computers as the undeliberatable means changes the way we design.

It’s difficult to consider computation as a building material, an option among all other building materials, like metals, woods, stones, and plastics, because of this tendency to start with computing and find things to build with it.

Solutionism is based on this inversion of deliberation where we find purposes we can satisfy with computation, and we squish and twist the purpose and its circumstances till the tech solution seems to be the most appropriate one.

When we say “we need to find a product/market fit” we are expressing this concept: we have built something that satisfies a purpose we haven’t found yet.

User Experience design as a bridge

Alternatively, if a purpose is identified and as part of the design process the circumstances of the purpose are understood to be human, the human needs are part of the design process. Any need to apply a human concern after the materials are selected would be a failure of the design.

For instance, suppose we believe — as I and others might argue — that the central charge to HCI is to nurture and sustain human dignity and flourishing. Note that this is not to say that HCI’s claim to legitimacy ought to be to nurture and sustain human dignity and flourishing, but rather that it always has been.

Paul Dourish — User experience as legitimacy trap, October 2019 (https://dl.acm.org/doi/10.1145/3358908)

As Dourish says, the central charge of HCI is to nurture and sustain human dignity. In this framing it seems strange for human dignity not to be a defined circumstance of a design purpose. Design is not defined by the purpose, but ignoring the circumstances of a purpose will either prolong the path to a good outcome or miss it altogether.

I see a lot of things, including the whole discipline of HCI, as a result of this inverse view of design. User Experience design aims to make the interactions between a user and some digital product as smooth and seamless as possible. Isn’t it strange that’s not something that occurs automatically because it’s an important part of satisfying the purpose?

It doesn’t mean that UX is a meaningless effort, we recognise this shift in the design process and UX is a result of that shift. It’s a bridge.

What happens when we stop deliberating over the means, or the materials, we use to satisfy a design purpose? Is it possible to have both a pre-determined means and a pre-determined purpose and design a way to make it work? I believe it is, but that’s what we usually call a hack, a kludge.

The right tools for the job, not the right job for the tools.

It’s important to be clear about what I mean when I say “material” because it suggests a physical entity, an object. If we’re talking about materials as options for satisfying a purpose, we’re talking about anything that has identifiable properties. Something that can be assessed for its pros and cons as they apply to the method being considered for satisfying a purpose. In this sense, a protest, forming a union, paying for a service, are all materials. They are options that should be candidates for selection just as much as the internet or software.

There are realities that make it difficult to consider this demotion of technology to an option on par with things like protesting, or… wood. The world is full of tech companies. Software development businesses that survive and thrive by finding purposes to satisfy with software. They can’t decide that an issue is solved more effectively by political effort than by a networked software solution. They can decide that it’s not a fitting purpose and move to the next one. They deliberate the purpose.

It doesn’t mean that software companies can’t satisfy purposes, we have plenty of evidence they can. But it does mean that they will try to satisfy purposes that software is just OK for.

Paul Dourish’s article “User Experience as a Legitimacy Trap” talks of usability as being the legitimising value of HCI in industry, therefore trapping it from realising the original HCI values of human flourishing.

But if we consider usability as just a thing that’s done to an existing thing, we can see that it doesn’t actually change the design, as in the method of satisfying a purpose. It is more like sanding the rough edges off a wooden table. Usability says nothing about which features are there; it just takes the features and makes them easy to use. It’s outside of the design process of the thing: a micro design process of the things between the thing and the subject of its purpose.

We’re almost proud that design is perceived as the shaping of something to be as inoffensive as possible. Our design influencers speak of things like “Human-Centered Design” or “Humane Interfaces” as if that’s some novel concept that designers of the past neglected to discover.

If someone needs to be told to think and work in a human-centered way when they are designing something, it should be a clear indication of how separated the discipline of design has become from what the design is being applied to.

Is there Dog-Centered Design in the dog product industry? Do the designers there need to be reminded of the purpose of the things they are designing? Do dog toy companies practice rubber-centrism, where they search for dog-related problems that strong, but malleable, rubber can solve?

In solutionism we have to remind ourselves that we make something for a human because we put the means high up on a pedestal, unquestionable in its power to deliver whatever we need, undeliberatable, and we hold it up there above the ends, the purposes, and the circumstances around them. We replace existing things with computational things and we consider any aspects of the old that can’t be replicated by the new to be irrelevant. We treat the ends like clay. We mould and cut off excess parts, so the inadequacies of the tech are less apparent. The ends are a prototype for the tech, something that accepts the tech. The prototype becomes the product when the tech has been accepted.

The Interface and the Potential

Talking to a computer is weird for me, and while I know that’s partly a matter of getting older and resisting change, the idea that it may not be weird for some seems worrying. The computer you can talk to is not invisible but a visible, human-imitating form, like The Thing. It becomes a presence that can’t be ignored because its usefulness as a tool is so general that you rely on it for tasks, not a task; the void felt in its absence should highlight how much of a presence it has. Computers are machines of potential. A hammer is a hammer, but a computer is whatever you need it to be.

Why should a computer be anything like a human being? Are airplanes like birds, typewriters like pens, alphabets like mouths, cars like horses? Are human interactions so free of trouble, misunderstanding, and ambiguity that they represent a desirable computer interface goal? Further, it takes a lot of time and attention to build and maintain a smoothly running team of people, even a pair of people. A computer that I must talk to, give commands to, or have a relationship with (much less be intimate with), is a computer that is too much the centre of attention.

Mark Weiser — The World Is Not A Desktop, ACM Interactions magazine, January 1994 (https://dl.acm.org/doi/10.1145/174800.174801)

Mark Weiser’s original ubiquitous computing ideas seem to rely on the interface as the main point of concern, where any attention directed to the interface is considered an unnecessary prevention of the tool itself becoming invisible.

But when we’re using a computer the interface isn’t the centre of attention as much as the potential of the computer is. When you are aware of it, potential is attractive and powerful. Computing potential is powerful in how generally useful it is. General usefulness is harder to ignore and its absence is easier to notice.

I’ve spent quite a bit of time trying to understand why it’s particularly hard for me to focus on tasks while on a computer. I noticed it was not the actual distractions from interfaces, like notifications or alerts, that broke my focus — they are easy to turn off. What hurt me more was the potential for distraction that lies behind the interface. Distraction is not always helpful, but as a form of escape it is useful.

A computer connected to the internet is the embodiment of potential. That connectivity, as potential for distraction, is always ready if you need it. Any challenging activity is haunted by that potential to escape it.

I’ve found that potential is more present and more powerful if the interface requires less effort from you. The effort you exert in using something also binds you to that thing you are doing. For example, a paperback book needs to be held up and open, otherwise it will close and fall. The interface of a paperback demands constant engagement with your hands.

Reading a text on a computer screen does not require active engagement from anything other than your eyes. It requires occasional input, a tap of the space bar or a scroll of the mouse wheel, but your hands play no part in holding the words up in front of your face.

The less an interface requires from us the more it invites us to split our attention — to never give all of it to one thing at a time.

Mark Weiser is right when he says the interface shouldn’t be the centre of attention but it doesn’t mean the interface should be passive and invisible. Interfacing is not just a way of doing, it’s an interaction. Every interaction involves actions that can directly and indirectly influence the exchange. Like facial expressions, hand gestures, or tone of voice in oral conversation.

If we reduce the word “friction” to its negative connotation, we think that holding a book up, and open, is some kind of unnecessary and laborious part of reading. We ignore the subtle values that come from increased involvement in an activity and we only see the positivist view of wasted energy and unnecessarily occupied appendages.

It’s easy to assume the friction between us and computers is in the effort we exert in using them, but I think it’s more nuanced than that.

When effort is required from us because of a shortcoming of the technology, such as learning how to speak in a way computers can understand, our efforts are less linked to what we need and more to how we get it. But the same is true of keyboards as an interface. So what’s the problem?

Keyboards wouldn’t exist without typewriters and computers, they are input devices. We don’t use them for face-to-face conversation with friends. To learn how to type is to learn something new, not to modify that which is normally used in some other context.

This means the keyboard can almost disappear once a certain level of skill is achieved because we know keyboards are a computer thing. We’ll only use them with a computer and so we don’t need to consider if they are plugged into a computer or plugged into a person each time we use them.

Our voice as an interface is shared between computers and people. As long as we continue to talk to people, the voice interface with computers can’t become invisible because we’ll always have to be aware of the need to switch context.

Maybe this is what Mark means when he says “VR, by taking the gluttonous approach to user interfaces design, continues to put the interface at the centre of attention” because in VR everything you do is a modified version of something you do in the physical context.