The Aura of Care.

False Legitimacy Gained Through UX

In November 2022, Brian Chesky, CEO of Airbnb, began a tweet thread with “I’ve heard you loud and clear” in response to a customer backlash over the way they hid additional costs until the checkout page. “You feel like prices aren’t transparent…starting next month, you’ll be able to see the total price you’re paying up front,” he said of a change that could be made urgently in a day, or carefully over a few.

When he said “I’ve heard you loud and clear” he was also telling his User Experience (UX) researchers and designers they were ignored, if they were heard at all. The dark pattern was no mistake. It was intentionally designed to deceive excited holiday planners and profit from their tendency to give in to the sunk cost fallacy. Instead of addressing the ridiculous additional fees, the company chose to trick customers into paying them. That’s not empathy; at best it’s apathy, at worst it’s hate. The decision to fix it only came after the balance of business value and public relations started to tip the wrong way. Chesky presented himself as a model CEO doing right by his customers, as if he wasn’t responsible for wronging them in the first place. People bought it too. He demonstrated how brightly a performative aura of care can shine, hiding questions about the business’s activity or even its legitimacy to exist.

In April of 2022 Twitter added the option to write short descriptions of the images you attach to a tweet. Those descriptions help vision-impaired people who rely on synthesised voice software to read out the contents of a page. The thing about image descriptions is that the World Wide Web Consortium’s (W3C) standards for HTML—the document structure language of the web (and Twitter)—have required them since 1999. When Twitter went live, that requirement was already seven years old; it was twenty-three years old by the time they obeyed it completely. To praise Twitter for recognition of vision-impaired people is like praising a heavy drinker for taking a hip flask to their kid’s school play instead of skipping out to the pub. They did the bare minimum, reluctantly, despite having UX researchers and designers on deck. For this they deserve no more than a collective why the fuck did it take so long?
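For anyone who hasn’t written HTML, the mechanism in question is a single attribute. A minimal sketch, with a hypothetical file name and description:

    <!-- The alt attribute holds the image description that
         synthesised voice software reads aloud -->
    <img src="holiday-photo.jpg" alt="Two friends eating gelato on a stone bridge in Venice">

That one attribute is all the standard has asked of anyone publishing images since 1999.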

Goodness in a product’s design tends to make more sense as a convenient side-effect of a business case. For Twitter, crowdsourced image descriptions, written for free, can make a nice data set to sell for machine learning.

If we look at industry-wide examples we can see how replacing intrinsic care with business incentives leads to low-quality black-and-white photocopies of the original ideas. Everything becomes optimised to meet business requirements and any sense of care that survives is there by chance.

Since the beginning of the web, writing W3C-compliant HTML has been highly regarded among developers. Standards-compliant code makes the web accessible, but the design philosophy of prioritising accessibility also led to a unique quality of HTML: it is forgiving when the standards are ignored. Showing something in a web browser is more accessible than showing nothing, so a web page will still look right even if the code is not perfect. In the early days, this meant that the quality of the HTML wasn’t factored into timelines and budgets because it was extra work that didn’t change how the site looked. If a site was built with standards-compliant code it was because the developers wanted it that way and did it on their own time.
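To make that forgiveness concrete, here is a deliberately sloppy fragment (a hypothetical illustration) that every browser will render without complaint:

    <!-- Unclosed tags, an unquoted attribute, no alt text:
         invalid HTML that still looks fine on screen -->
    <p>Welcome to our site
    <img src=logo.png>
    <p>Browse our <b>products

Browsers silently repair all of it, so nothing visible tells a client or a project manager that the code underneath is broken.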

That all changed in the early 2000s when Search Engine Optimisation (SEO) arrived. The techniques for improving the visibility of a site in Google’s search results included rules for the structure of the HTML. These rules took some W3C standards and tied them to a tangible business case of heightened search visibility. I remember the surreal experience of an SEO consultant presenting these rules to my web development team. We already knew everything they said because we understood web accessibility, but they were retelling these things as novel techniques for getting more sales leads from search.
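The overlap is easy to demonstrate. The structure below is what an accessibility-minded developer would already write; the consultant’s rules amounted to much the same thing (an illustrative sketch, not their actual checklist):

    <!-- One h1 per page, headings in order, descriptive link text:
         accessibility guidance first, repackaged later as SEO technique -->
    <h1>Garden Furniture</h1>
    <h2>Outdoor Tables</h2>
    <p>See our <a href="/tables/oak">range of oak tables</a>.</p>

A screen reader uses that heading hierarchy to navigate the page; a search crawler uses it to rank the page.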

Responsive Web Design (RWD)—a design philosophy for building sites that work for everyone regardless of the device they use or their connection speed—gained commercial adoption in a similar way, well after developers and designers had already seen its value as an empathetic design philosophy. Google announced that “mobile-friendly” sites would be preferred in search results and some, not all, RWD techniques became convenient. Now responsiveness in commercial web apps focuses mostly on being visually accessible to devices used by a target demographic. Anything outside that is considered an edge case and ignored, or again, supported by developers and designers taking initiative in their own time. That’s why some sites will crash the browser on your parents’ iPad, use up your mobile data before anything renders, and fail basic accessibility tests. Browsing the web has become a reason in itself to upgrade a device.
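The core RWD technique itself is tiny. A minimal sketch (the class name and breakpoint are hypothetical):

    <style>
      /* Single column by default: works on the oldest, smallest devices */
      .listing { display: block; }

      /* Wider viewports get a two-column layout as an enhancement */
      @media (min-width: 600px) {
        .listing { display: grid; grid-template-columns: 1fr 1fr; }
      }
    </style>

The empathy lives in the ordering: the simplest layout is the default and the enhancement is conditional, so no device is locked out. The commercial version inverts that, designing for the target devices first and letting everything else fall where it may.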

And yet… User Experience has become part of the everyday lexicon. Normal people who don’t make tech products say they prefer a product “for the UX”. Normal people who do make tech products say their product “has great UX”. It’s generally accepted as a measure of how easy something is to use, how little it gets in the way. Like usability before it, it takes something that has been a core concern of commercial product design—for as long as companies have sold products—and treats it like some novel modern add-on. But the real innovation is making it seem like the ease of use, the user experience, is the only thing that matters, because sometimes a product doesn’t offer much else.

Notion is a popular cloud-based product that is marketed with no purpose more specific than “productivity” or “collaboration”. It takes existing products like wikis, project management tools, and document editors and mashes them together into one window. Notion is the sum of products that were already legitimised as being useful by themselves. Despite inheriting all of its usefulness from other useful things, Notion’s success is the result of good usability design that makes it easy to use those things in one place. For Notion, the UX is the product.

Productivity and collaboration might seem like vague purposes to an individual but to a tech company they are compelling, concrete purposes. Businesses are sold corporate subscription plans for Notion and other products, like Slack and even Figma, which are imposed on staff as essential tools. For employees these products are universal tools of nothing in particular. Each collaboration feature makes the anxiety of productivity ubiquitous. Little floating heads always watching over the document you’re working on: a perfect simulation of what we used to call micromanagement. They are virtual open-plan offices where everything you create becomes littered with comments and conversations you didn’t ask for.

The thing they all have in common is how strikingly easy they are to use. Part of that comes from very good usability design and part from the fact that you use them for a purpose you define yourself. When they say it’s for productivity instead of doing your taxes, they are benefiting from a criterion for failure so abstract that it doesn’t really mean anything. If you want to use it to do your taxes you can go ahead. But if it can’t help with some obscure tax calculation, you’re an edge case. For a UX designer at Notion the concern is that it can be used easily, not how well it does a specific task for a specific domain of expertise.

And, look, I know how obvious and easy it is to dismiss this as how capitalism works. The problem is that the aura of care surrounding UX pretends capitalism can be coaxed into giving a shit. The industry chugs along as if UX designers and researchers are the ones who are going to cause a revolution of socialist CEOs who consider users beyond their money and their data. But the inside secret of commercial UX is that the empathy is just a posture, and businesses benefit from the aura of care without having to entertain it. In non-profit, government, or volunteer-based open source projects, the posture can, and usually does, match the reality, but in commercial tech it’s always contingent on the strength of a business case. The Google UX design course that says it will help you “empathise with users” is attracting the best-intentioned people and setting them up for a future of despair.

That’s why UX can help legitimise products that are intrinsically bad for people who use them. Tell someone that cigarettes are easy to use and they’ll ask about the reasons for using them, but tell them about the user experience of cigarettes and they’ll ask what makes the experience good.

Search Twitter for “FTX UX” and you’ll find no shortage of “it had a great UX” tweets published well after the fraud was exposed. It doesn’t matter which fraud or how obvious the scam was beforehand, the same search will yield the same results. The UX aura of care shines brighter.

The posture is strengthened by a UX community that seems open in its contradictions. The discipline is detached from the substance of the underlying products it is applied to, so empathy for users is mixed in with discourse about psychological exploits for increasing user engagement. There are Laws of UX that use psychology to design better products & services, and at the top of most UX book lists you’ll find Nir Eyal’s Hooked, to learn how to build habit-forming products. Nir says he wants to see people hooked on products that promote healthy habits, but of course the ones getting rich from a product are going to believe their own bullshit when they say it’s harmless, healthy, or going to save the world. Another seminal UX book is Steve Krug’s Don’t Make Me Think, which has popularised the relentless removal of “friction” from user interfaces for over two decades. When you’re trading crypto with your life savings you do want to think about everything you do, despite how much the product will be designed to avoid it.

Marketing is about attracting new customers and retaining existing ones, and commercial UX is concerned with removing the barriers that prevent both. UX is powerful because it doesn’t seem like marketing and the practitioners don’t see themselves as being in the marketing business.

Like the sales tough guy who demonstrates his versatility by saying he can sell you this fucking pen, UX doesn’t care what you hit with the baseball bat, it just makes sure you don’t get splinters from it. Web3, NFTs, and blockchain products need this product-agnostic approach that keeps everything in the realm of experience, because blurry, uncertain, or non-existent usefulness is a form of friction itself. Consider FTX and all the other centralised crypto exchange, trading, and lending platforms that turned out to be massive scams. Centralised crypto products come from a community-wide UX need to obscure necessary complexity rather than create usefulness that is concrete enough to justify it. Complexity justified by usefulness is obvious in products like Blender, where a terrifying interface hasn’t stopped it from becoming an industry standard. The evidence that gaining the expertise to use it will pay off is overwhelming.

It’s no wonder that crypto, metaverse, and now AI pushers are obsessed with UX. They talk about the user experience as a final barrier to adoption, as if people are clamouring behind a reinforced wall for a prize they can see and know they need. UX ignores questionable usefulness, and the bright aura of care distracts from real questions of ethics and harm. It hides the real intentions of the business, not just behind a posture, but behind UX professionals who have a genuine sense of care. UX researchers and designers talk about empathy because they are empathetic people. In a commercial context there is tension between that empathy and viable business activity, so the role becomes usability design by another name.

UX seniors working outside commercial constraints don’t help the situation. They push the “fight for the user” rhetoric in Medium articles, tweets, and LinkedIn posts. They goad young UX starters to push for empathetic values without acknowledging how few contexts those values are compatible with. For most, choosing where you work is a luxury. It’s going to be the commercial UX roles that pay the best every time. Designing socially beneficial products is something to strive for, but not something that should weigh on the shoulders of a junior UX designer while their manager is asking them to draw a dark pattern in Figma.

UX needs to make clear distinctions between commercial design work and design as a social good so the aura of care is not just an aura. Until that happens we’ll continue to see the worst companies hire the best people to help them make the worst things.

Reverse Vapourware.

Vapourware claims to solve a real problem in a way that seems impressive for its time. Your money is gone before the truth comes out: the purpose is real but the product is vapour. But vapourware doesn’t work with the subscription model of the Software-as-a-Service market. Our perception of software as a product has changed as well. Marketing for software emphasises features, properties, and potential rather than any concrete purpose. The products are real but the purpose is vaporising.

The internet, the web, and email give us unprecedented access to other people, information, and entertainment. They serve their purpose. That’s why they’re irresistible. In contrast, algorithmic content feeds induce engagement to supplement their purpose. They are irresistible by design.

Criticism of harmful tech should always be aware of this distinction. When the distinction is ignored, the criticism is easy to dismiss as anti-progress. Paul Graham does exactly this in his 2010 essay “The Acceleration of Addictiveness”.

“It’s the same process that cures diseases: technological progress. And when progress concentrates something we don’t want to want—when it transforms opium into heroin—it seems bad. But it’s the same process at work.”

Paul Graham – The Acceleration of Addictiveness, July 2010

The conflation of addictive usefulness and designed addictiveness strengthens his dismissive stance. It’s also important to recognise that his arguments hinge on technology and not products. AI doomerism fuelled by proprietary AI product releases does the same thing. Capitalist-owned AI products are not getting an inch out of control if no one is paying for them. But call them technologies and the business accountability vaporises.

“You can’t put the genie back in the lamp” builds on the technology generalisation to create a sense of inevitability. It implies we are the ones who need to adjust and adapt, not the genie. It implies that these things will hurt us but only if we don’t learn how to protect ourselves from them. Debates over abstract concepts like addictive technology or existential AI risk distract us from foul play on a product level.

Gamification and manipulative engagement techniques allow products to thrive without a concrete purpose. Marketing for Notion uses broad purposes like “productivity” and “collaboration”. People love using Notion but they have to define their own purposes that the software can serve.

Marketing for reverse vapourware contains no trace of purpose at all. Web3 may be the best example of this. Web3 marketing, CEOs, and VCs rarely claim a concrete purpose. If they do, it’s either dependent on some future event, described as unrealised potential, or doesn’t hold up to five minutes of critical thought.

We define our own purpose, and for a product that can cause harm, finding a purpose is a fool’s errand. The businesses behind the products choose the purposes they endorse and distance themselves from the ones they oppose.

They define the criteria for their own success and free themselves from any criteria for failure.

Everything is Beautiful All of the Time.

The wait for the successor to OK Computer was a tough time for Radiohead dorks like me. Coldplay and Muse might not be here if it weren’t for the superficial itch-scratching they delivered to us desperately impatient fans. But when Radiohead finally released an album in October of 2000, it was a shock to anyone who enjoyed the one before it. The guitars were replaced by electronic tones and distorted vocals. Bleeping, blooping, bullshit.

After I let go of my expectations and gave it more time those bleeps and bloops began coming together. Kid A grew on me and became a favourite album, one I listen to over twenty years later. My perception of Radiohead changed from a band that was comparable to other alt-rock groups into something else. Other music I enjoyed before began to feel thin and limited. My mind had been opened a notch more than before.

Perseverance after the unexpected

What if, before Radiohead released the album, an AI system had generated a bunch of albums as possible successors to OK Computer and fans had been asked to choose the ones they preferred? What are the chances they would pick something as confronting and unconventional as the one Radiohead created?

This isn’t an attempt to define art, nor is it an attempt to raise one form of art above another. The variations in how things are made are unquantifiable; unique to each maker. The same goes for how those things are received.

This is about what happens when a creative work gives us more after we give it more; and what makes us do that.

It has nothing to do with what prompts us to give something another chance; instead it’s about what we require in order to believe that it’s possible for more to be there. A friend might tell us about the subtext we missed in a boring film but we’re not going to rewatch it without a thought of the director or writer and their intent.

Nick Cave’s response to a song generated by AI in his style may be dramatic and overly fond of suffering as a prerequisite, but he makes a strong point:

What makes a great song great is not its close resemblance to a recognisable work. Writing a good song is not mimicry, or replication, or pastiche, it is the opposite. It is an act of self-murder that destroys all one has strived to produce in the past. It is those dangerous, heart-stopping departures that catapult the artist beyond the limits of what he or she recognises as their known self.

Nick Cave – The Red Hand Files, January 2023

Cave confines his view to the interests of the artist but it applies as much to the receiver of the art. We give more because we hope to get something back and we hope it will be something new that catapults us forward.

Superficial beauty is pleasant but it can only hold you until it is replaced by something else. Superficial beauty can be automated because it follows conventions of composition. We can’t break conventions of composition without a genuine intent to communicate something in a way that is shaped by what is being communicated.

Sometimes we dig into a creative work but find nothing; an empty void. It doesn’t mean we failed to dig enough or that the artist failed in their communication. As long as we know something was made to express something other than superficial beauty, we’re left to like it or not; we can’t invalidate it as failure because that denies the artist the integrity of their intent. An unconventional composition created without any intent to communicate something is noise; any beauty would be accidental.

We don’t need to reduce artistic expression to a product of pain and suffering. If that were true we’d share our galleries with the works of elephants, tigers, and snails. Sympathy is not empathy.

When a chimpanzee throws its shit at a tree we don’t consider it a remarkable expression of the chimpanzee condition. Conversely, if we frame that shit and display it in the Louvre the chimpanzee won’t feel any more understood. We don’t know how a chimpanzee would express themselves through art because we don’t know what a chimpanzee wants to express and how they would express it.

Art communicates through a kinship between the artist and the observer. This translates into a relationship of trust because we have to believe that the artist created the work with genuine intent. Artists need to be observers as much as the observers they expect to appreciate their art. How can we trust the intent of an artist who can’t relate to a person’s appreciation of an old hand-woven rug over a mass-produced one?

A computer can’t synthesise that human artistic expression without synthesising the trust required for a human to appreciate it. That is a dishonest relationship which survives by keeping everything as conventional as possible. The result is infinite variations of aesthetic appeal which mask the fact that every variation is composed within a set of rules that gained legitimacy in the past. (See “Corporate Memphis”)

Consider how much the web has been visually standardised, to the point that it creates an almost unreasonable yearning for something different. That’s a symptom of the synthesised trust that conventional composition techniques result in. Jony Ive’s flattening of the iOS GUI; Google’s Material Design project; Facebook’s uncustomisable profile pages. They all contribute to a pool of automatable resources that can make a crypto exchange appear legitimate despite almost any suggestion to the contrary.

That yearning for something different is a yearning for those unconventional compositions and it feels unreasonable because we know deep down that they won’t make any sense if their purpose is purely ornamental.

Ineffective Automation.

I would give anything to be able to pay $28,500 for my computer. Anything! And I can’t, I can only buy these frigging $1000 computers that don’t do what computers are really good for. If you think about it, this is completely fucked up. People are valuing their cars more than their computer? They don’t have any idea what a computer is, they’re just using it to play movies. If you think about this, this is the corruption of consumer electronics and that the computer is basically for convenience rather than for actually doing primary needs.

Alan Kay interviewed by Adam Fisher in Palo Alto, August 2014 — Valley of Genius podcast, Season 1, Episode 5 (https://twit.tv/shows/valley-of-genius/episodes/5)

It’s hard to ignore the persistence and flourishing of printed books despite an abundance of digital alternatives. Since computers have had screens they’ve had text. Yet MP3s killed CDs and only the most dedicated buy their movies on discs instead of online.

This difference between books, music, and movies poses a question: what makes digital automation effective for us?

The easy answer is that automation makes things easy. It takes the laborious bits away so you can enjoy the bits you want.

The easy answer makes sense for music and movies because most of the laborious bits of physical media are peripheral to the listening and watching part, not part of it. But reading a physical book is different, it’s not quite as passive. The physical attributes of the printed book are intertwined with the reading part.

Here’s what I mean by that: to listen to recorded music, physical media or not, you use your hands—or voice—to start and stop the music, but your ears are all you need in between.

Printed books, though, need to be held up and open and you can’t stop holding them until you’re finished reading. So the digitisation of physical media for music and movies is effective for us because its only change to the listening and watching part is the potential for higher quality audio and visuals. It automates away the peripheral bits between us and the good stuff because that’s where the physical media was most problematic.

For reading there’s an effectiveness of the printed book that no amount of paper-like display and skeuomorphic interface has managed to fully capture. E-Readers still need to be held like a book. They have pages like a book, and can be bookmarked too. You can highlight passages and write notes. They also do things books can’t do like having internet connectivity, instant purchases, and the ability to hold hundreds of books at a time; none of which have much to do with reading.

Consider the fact that it’s very difficult to do something else while reading a physical book because your hands are required at all times. It’s a binding that makes reading a deliberate activity; you can’t do a Sudoku while reading a book. Yet almost every feature an E-Reader brings to the party is an invitation to do something other than read.

So, what if that binding to the book is what makes the printed book effective for so many people?

This is where automation—or computation—can be seen from a different perspective, a counter-intuitive one. It’s what I think Alan Kay is referring to in his frustrated quote. What reason does the average person have to be interested in the potential of a computer when hardware upgrades are prompted by software that tells you to upgrade, either explicitly or implicitly? Outside of video games and 3D graphics, the computing power of a device does not correlate with any increasing or decreasing amount of tangible effectiveness. We are being sold speed without any concrete reason for why we need it.

Saying that automation makes things easy is a lazy way to describe the potential of computation. It ignores the potential for automation to make things more involved, more purposeful, more effective. It opens friction up to being something more than a UX dichotomy of good or bad, and instead something with varying degrees of effectiveness.

This doesn’t mean that E-Readers should have a spring-loaded cover that requires extra effort to hold them open. What it means is to recognise any exertion of energy as being everything it is in addition to being an exertion of energy. A fish swimming against the current could save plenty of energy by turning around, but it isn’t thinking about the swimming.

The effort exerted while reading a printed book minimises awareness and access to any potential outside of what is being read. The book wants your attention, it doesn’t work without it. We don’t long for some device that will hold the book and turn the pages for us because our hands are effective. We’re only aware of things that interfere with whatever we’re trying to do; obvious problems.

It’s not about having a sentimental attachment to the way things have been done. Far from it. This is a design concern; a way to design with a purpose constrained by the circumstances that are relevant rather than the ones that seem relevant.

Printed books live on because digital books automate all the things that seem relevant about reading a book. E-Reader technology is a collection of features that satisfy an array of purposes related to reading—the peripheral concerns—instead of having a purpose of making reading more effective. If we strip away the internet connectivity, bookmarking, highlighting, and dictionary lookups, we’re left with an inferior imitation of a printed book that needs to be recharged.

The Undeliberatable Means.

This is a deliberative problem unlike deliberative problems of the past. In the past, deliberation led to decisions about means to be employed in given circumstances to achieve given and desired ends. Means were deliberated, but the circumstances and ends were not subject to deliberation. Today, deliberation is inverted. The computer provides new means — the means are given by technological development — but the circumstances and ends of computer use are, themselves, the subject of deliberation in the process of product development. This is a fundamental characteristic of our time, and it profoundly influences the development of human-computer communication.

Daniel Boyarski & Richard Buchanan — Computers and communication design: exploring the rhetoric of HCI, April 1994 (https://dl.acm.org/doi/10.1145/174809.174812)

The norm in the tech industry is to find ways to use computers to solve problems. “How can we use computers/the internet/software to solve a problem?”

Treating computers as the undeliberatable means changes the way we design.

It’s difficult to consider computation as a building material, an option among all other building materials, like metals, woods, stones, and plastics, because of this tendency to start with computing and find things to build with it.

Solutionism is based on this inversion of deliberation, where we find purposes we can satisfy with computation, and we squish and twist the purpose and its circumstances until the tech solution seems to be the most appropriate one.

When we say “we need to find a product/market fit” we are expressing this concept by saying that we have satisfied a purpose that we haven’t found yet.

User Experience design as a bridge

Alternatively, if a purpose is identified and, as part of the design process, the circumstances of the purpose are understood to be human, then human needs are part of the design from the start. Any need to apply a human concern after the materials are selected would be a failure of the design.

For instance, suppose we believe — as I and others might argue — that the central charge to HCI is to nurture and sustain human dignity and flourishing. Note that this is not to say that HCI’s claim to legitimacy ought to be to nurture and sustain human dignity and flourishing, but rather that it always has been.

Paul Dourish — User experience as legitimacy trap, October 2019 (https://dl.acm.org/doi/10.1145/3358908)

As Dourish says, the central charge of HCI is to nurture and sustain human dignity. In this framing it seems strange for human dignity not to be a defined circumstance of a design purpose. Design is not defined by the purpose, but ignoring the circumstances of a purpose will either prolong the path to a good outcome or miss it altogether.

I see a lot of things, including the whole discipline of HCI, as a result of this inverse view of design. User Experience design aims to make the interactions between a user and some digital product as smooth and seamless as possible. Isn’t it strange that this doesn’t occur automatically, given it’s an important part of satisfying the purpose?

It doesn’t mean that UX is a meaningless effort; we recognise this shift in the design process and UX is a result of that shift. It’s a bridge.

What happens when we stop deliberating over the means, or the materials, we use to satisfy a design purpose? Is it possible to have both a pre-determined means and a pre-determined purpose and design a way to make it work? I believe it is, but that’s what we usually call a hack, a kludge.

The right tools for the job, not the right job for the tools.

It’s important to be clear about what I mean when I say “material” because it suggests a physical entity, an object. If we’re talking about materials as options for satisfying a purpose, we’re talking about anything that has identifiable properties. Something that can be assessed for its pros and cons as they apply to the method being considered for satisfying a purpose. In this sense, a protest, forming a union, paying for a service, are all materials. They are options that should be candidates for selection just as much as the internet or software.

There are realities that make it difficult to consider this demotion of technology to an option on par with things like protesting, or… wood. The world is full of tech companies. Software development businesses that survive and thrive by finding purposes to satisfy with software. They can’t decide that an issue is solved more effectively by political efforts than by a networked software solution. They can decide that it’s not a fitting purpose and move to the next one. They deliberate the purpose.

It doesn’t mean that software companies can’t satisfy purposes, we have plenty of evidence they can. But it does mean that they will try to satisfy purposes that software is just OK for.

Paul Dourish’s article “User Experience as a Legitimacy Trap” talks of usability as the legitimising value of HCI in industry, a trap that keeps it from realising the original HCI values of human flourishing.

But if we consider usability as just a thing that’s done to an existing thing we can see that it doesn’t actually change the design, as in the method of satisfying a purpose. It is more like sanding the rough edges off a wooden table. Usability says nothing about which features are there, it just takes the features and makes them easy to use. It’s outside of the design process of the thing. A micro design process of the things between the thing and the subject of its purpose.

We’re almost proud that design is perceived as the shaping of something to be as inoffensive as possible. Our design influencers speak of things like “Human-Centered Design” or “Humane Interfaces” as if that’s some novel concept that designers of the past neglected to discover.

If someone needs to be told to think and work in a human-centered way when they are designing something, it should be a clear indication of how separated the discipline of design has become from whatever the design is being applied to.

Is there Dog-Centered Design in the dog product industry? Do the designers there need to be reminded of the purpose of the things they are designing? Do dog toy companies practise rubber-centrism, searching for dog-related problems that strong, but malleable, rubber can satisfy?

In solutionism we have to remind ourselves that we make something for a human because we put the means high up on a pedestal, unquestionable in its power to deliver whatever we need, undeliberatable, and we hold it up there above the ends, the purposes, and the circumstances around them. We replace existing things with computational things and we consider any aspects of the old that can’t be replicated by the new to be irrelevant. We treat the ends like clay. We mould and cut off excess parts, so the inadequacies of the tech are less apparent. The ends are a prototype for the tech, something that accepts the tech. The prototype becomes the product when the tech has been accepted.

The Interface and the Potential.

Talking to a computer is weird for me and while I know that’s a part of getting older and resisting change, the idea that it may not be weird for some seems worrying. The computer you can talk to is not invisible but a visible human-imitating form, like The Thing. It becomes a presence that can’t be ignored because its usefulness as a tool is so general that you rely on it for tasks, not a task. The void felt in its absence should highlight how much of a presence it has. Computers are machines of potential. A hammer is a hammer, but a computer is whatever you need it to be.

Why should a computer be anything like a human being? Are airplanes like birds, typewriters like pens, alphabets like mouths, cars like horses? Are human interactions so free of trouble, misunderstanding, and ambiguity that they represent a desirable computer interface goal? Further, it takes a lot of time and attention to build and maintain a smoothly running team of people, even a pair of people. A computer that I must talk to, give commands to, or have a relationship with (much less be intimate with), is a computer that is too much the centre of attention.

Mark Weiser — The World Is Not A Desktop, January 1994, ACM Interactions magazine (https://dl.acm.org/doi/10.1145/174800.174801)

Mark Weiser’s original ubiquitous computing ideas seem to treat the interface as the main point of concern, where any attention directed to the interface is an unnecessary obstacle to the tool itself becoming invisible.

But when we’re using a computer the interface isn’t the centre of attention as much as the potential of the computer is. When you are aware of it, potential is attractive and powerful. Computing potential is powerful in how generally useful it is. General usefulness is harder to ignore and its absence is easier to notice.

I’ve spent quite a bit of time trying to understand why it’s particularly hard for me to focus on tasks while on a computer. I noticed it was not the actual distractions from interfaces, like notifications or alerts, that broke my focus — they are easy to turn off. What hurt me more was the potential for distraction that lies behind the interface. Distraction is not always helpful, but as a form of escape it is useful.

A computer connected to the internet is the embodiment of potential. That connectivity, as potential for distraction, is always ready if you need it. Any challenging activity is haunted by that potential to escape it.

I’ve found that potential is more present and more powerful if the interface requires less effort from you. The effort you exert in using something also binds you to that thing you are doing. For example, a paperback book needs to be held up and open, otherwise it will close and fall. The interface of a paperback demands constant engagement with your hands.

Reading text on a computer screen does not require active engagement from anything other than your eyes. It requires occasional input, a tap of the space bar or a scroll of the mouse wheel, but your hands play no part in holding the words up in front of your face.

The less an interface requires from us the more it invites us to split our attention — to never give all of it to one thing at a time.

Mark Weiser is right when he says the interface shouldn’t be the centre of attention, but that doesn’t mean the interface should be passive and invisible. Interfacing is not just a way of doing, it’s an interaction. Every interaction involves actions that can directly and indirectly influence the exchange, like facial expressions, hand gestures, or tone of voice in oral conversation.

If we reduce the word “friction” to its negative connotation, we think that holding a book up, and open, is some kind of unnecessary and laborious part of reading. We ignore the subtle values that come from increased involvement in an activity and we only see the positivist view of wasted energy and unnecessarily occupied appendages.

It’s easy to assume the friction between us and computers is in the effort we exert in using them, but I think it’s more nuanced than that.

When effort is required from us because of a shortcoming of the technology, such as learning how to speak in a way computers can understand, our efforts are less linked to what we need and more to how we get it. But the same is true of keyboards as an interface. So what’s the problem?

Keyboards wouldn’t exist without typewriters and computers; they are input devices. We don’t use them for face-to-face conversation with friends. To learn how to type is to learn something new, not to modify something that is normally used in some other context.

This means the keyboard can almost disappear once a certain level of skill is achieved because we know keyboards are a computer thing. We’ll only use them with a computer and so we don’t need to consider if they are plugged into a computer or plugged into a person each time we use them.

Our voice as an interface with computers is a method we share with talking to people. As long as we continue to talk to people, the voice interface with computers can’t become invisible because we’ll always have to be aware of the need to switch context.

Maybe this is what Mark means when he says “VR, by taking the gluttonous approach to user interfaces design, continues to put the interface at the centre of attention” because in VR everything you do is a modified version of something you do in the physical context.