
14 May 2025

The state of UX design, and a job I did not take

An old colleague reminded me of a story about a job I did not take. It’s partly a story about the state of user experience design circa 2010, partly a story about the state of user experience design circa 2025, and partly a story about professionalism & org development.

I interviewed to become the First UXD at a midsize company making niche consumer electronics. It had superb industrial design capacity but no UX design practice at all, and had realized it needed one. It was a good org in a lot of ways. I liked the people I met. They did not understand UXD, but they mostly knew that they didn’t know.

They gave me an offer. It was Grown-Up Money, but the number was weak, rolling back the odometer on my career by a decade.

I told the hiring manager that I would not take the offer. He was unsurprised, had warned HR that the salary was inadequate, and was confident that he could get them to improve it significantly.

I said:

Look, more money in itself won’t change my mind. That number reflects a different conception of what this job is than I thought we understood. That number is low for someone to do UX design, and you need someone to pioneer a UXD practice.

He replied:

You’re right. We don’t have a commitment to go heavy on UX design. We are adding a UX designer to the team as an experiment. If that works out, we want to build a practice.

I said:

That sounds reasonable. And it is doomed to fail.

If you don’t address how UX design connects to your design & development process, you won’t get much leverage even from a great designer. And with the pay you offered me, people in the org will smell that the person in that role is not worth listening to about the process & organizational elements of turning good UX design into actual products with good UX.

It is even worse if the person in this job manages to drive some product improvements in spite of that problem. The org will learn the wrong lesson. They will expect modest gains without org change. They will never invest organizational juice into supporting UXD properly.

This position is set up for failure. I don’t want that job, even if you pay me more to do it.

The hiring manager said:

That is very clarifying. I only half-understood that problem. I know I cannot address it from where I am sitting in the org.

I said:

You really need someone who has made it happen before, but there are all of a dozen people in the world with that on their résumé, and you’ll need a crowbar to pry them loose from where they work now. So the best move available to you is to roll the dice on investing a lot of trust in someone like me, who does not have that experience but has the skills and a fundamental understanding of the process & org challenges. But it is obvious why it is so hard to make a commitment to that. You need executive air support.

Because I like what y’all do and want to see y’all build a real UXD practice even if I don’t get to be the one to do it, I’ll offer you a bit of free consulting time. Put me in front of someone who can drive org change, and I will lay out the case for them, like I just did for you.

The hiring manager was enthusiastic about that being the Right Thing. He said that he would communicate my offer, but predicted that leadership would not bite. I was unsurprised that I never heard from them again. I needed a new job, but I felt relieved that I had dodged a bullet.

In the 2010s, a lot of orgs became enthusiastic about the importance of user experience. They hired a bunch of designers, then just sprinkled them into their organizations without changing any of the structures or processes.

My prophecy for the organization I interviewed with came true across the industry. User experience design “didn’t work”, as in the famous parable we tried baseball and it didn’t work.

The fanatical proponents of baseball tell us that it is a very exciting game, fun to play and fun to watch. They are clearly either stupid or evil or both, because we tried baseball and it didn’t work.

First of all, the requirements for the game are stupid: it does not scale. They say you need at least nine players on a side. That’s stupidly inefficient. The minimum number of players is clearly four: three men on and one batting. That’s how we played: four people on a side.

With only four players, we didn’t need all those bases ⋯

[⋯]

The thing that finally condemns the entire “baseball” idea, however, is this: even with all these improvements, the game is no fun at all.

We tried baseball, and it didn’t work.

Frankly, the UX design profession bears significant responsibility as well. We have not delivered clarity about what we need or even what we do, because we lack it as a professional community. Too many of us were insufficiently skilled. Too many of us accepted doomed positions because we felt we had to.

Now we have a lot of capable UX designers leaving the profession or having trouble staying in it … and a lot of crappy products & services. A tragedy.

04 February 2025

Artificial superintelligence

A start on an index of useful articles:

Wait But Why — The Artificial Intelligence Revolution 1 & 2

A lively and accessible introduction to the Bostrom-ish argument that artificial superintelligence is plausible, and why the prospect is scarier than it might first appear.

Superintelligence: The Idea That Eats Smart People

An argument that the Bostrom-ish vision of superintelligence is a seductive mirage.

31 January 2025

What motivates the “tech” right?

To support my index of resources about neoreaction and adjacent far right movements which overlap with “tech” culture, I have taken the liberty of capturing commentaries from other people tucked into Xitter & Facebook. These are not quite the same as my reading, but both are illuminating in exploring how “tech” executive culture itself understands the shift. Emphasis in quotes is the authors’, not mine.

Aaron Bartley

3 February 2025

Why are they doing this? What’s motivating the techno-feudalists (Musk, Thiel, Vance, Andreessen and their minions) to wreck our country and tank the economy?

I’ve subjected myself to hours of YouTube interviews and dozens of screeds by the tech overlords and their high priests. They have made no effort to hide their motivations. This is my attempt at a basic synthesis of how they explain their shift from neo-liberals to Trumpists:

  1. The techno-feudalists have billions invested in crypto and without a big push from the government and the removal of all regulations, their money will be lost. Crypto has not mainstreamed in the way they expected, both because of Biden-era regulations on speculative investments and because the public just isn’t into it. They need the state to manage the transition to crypto.
  2. Similarly, they’re gravely concerned about the trillions they’ve invested in AI. Any amount of regulation or constraints by the state is seen as a death-knell. They know that AI has prompted a speculative bubble and they need the state to manage the bubble through subsidies and contracts. They also need the state to aggressively shield the US from Chinese AI technology.
  3. Both crypto and AI are burning up the earth. The electricity demands of both AI and crypto are enormous. They need a completely deregulated energy industry and rapid shift away from any climate policy. Even the mention of climate change is a threat to their fortunes.
  4. China has caught up faster than they expected in all realms of tech. They need an ultra-protectionist/nationalist regime to keep Chinese technology at bay.
  5. Lastly, they resent their employees deeply for the political concerns they’ve been raising about climate change and DEI. They blame the “radicalizing” influences of elite institutions and want the government to shut these institutions down or reform them heavily.

(Edited: As Mike Puma and Frits Abell have commented, the over-arching motivation is to make the state a profit center for themselves through privatization, subsidies and contracts, just like the post-Soviet oligarchical takeover in Russia.)

A couple of supporting links from JK:

  • Blockchain in 10 minutes — my no-math primer on the tech behind cryptocurrencies
  • The Twin Insurgency — my single favorite examination of What Is Going On, addressing that final point about the plutocrats & criminals trying to hollow out and exploit state capacity
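For readers who skip the linked primer, the core mechanism it covers can be sketched in a few lines of Python: each block commits to the hash of its predecessor, so altering any earlier block invalidates every later link. This is an illustrative sketch only; the function names and block structure here are my own, not taken from the primer.

```python
import hashlib
import json

def block_hash(block):
    """Deterministically hash a block's contents (sorted keys for stability)."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def build_chain(items):
    """Chain blocks so each one's hash depends on everything before it."""
    chain = []
    prev = "0" * 64  # placeholder predecessor for the genesis block
    for item in items:
        block = {"data": item, "prev_hash": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def verify_chain(chain):
    """Tampering with any block breaks every later prev_hash link."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True
```

Changing the `data` of any block changes its hash, so the next block’s stored `prev_hash` no longer matches and `verify_chain` fails; that tamper-evidence is the essence of the idea.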

Where did the tech right come from?

Jasmine Sun 30 January 2025

Here’s a 4-part grand theory of the tech right.

“Tech disposition”

First, there is a unique “tech disposition”. David E. Broockman and Neil Malhotra surveyed 700 founders:

  1. they’re more liberal than most Dems on social issues + taxes, but very conservative on regulation + labor (even outside of tech)
  2. pro-market values trace to adolescence & cannot be explained by demographics or economics
  3. these beliefs do not resemble other economic elites

First, technology entrepreneurs are far from monolithically conservative; rather, they overwhelmingly support Democrats. This is not an artifact of our survey sample: campaign contributions from technology industry employees and ultra-wealthy technology entrepreneurs to Democrats have long exceeded those to Republicans. Although it is not surprising that individual wealthy Democrats exist, we show that the wealthy in an entire industry support the party pushing for higher taxes, deviating from the norm among the wealthy at large.

However, our findings are not so simple as that most technology entrepreneurs are liberals. Our second group of findings is that most technology entrepreneurs share a particular set of views across policy domains; that this set is conservative on many issues; and that this set is distinctive to technology entrepreneurs, being rare among other wealthy individuals, other Democrats, and other wealthy Democrats. In particular, on issues related to economic redistribution, globalization, and social issues, technology entrepreneurs are typically as or more liberal than Democratic citizens, Democratic wealthy individuals, and Democratic donors; they are also more liberal on all these issues than millionaires in the mass public. For example, 82% of technology entrepreneurs indicate support for universal healthcare even if it means raising taxes. However, technology entrepreneurs are very skeptical of government regulation. Indeed, technology entrepreneurs’ views on regulation closely resemble those of Republican donors, and are more conservative than those of other millionaires in the mass public, Democratic citizens, and wealthy Democrats. For example, 82% of technology entrepreneurs also think the government should make it easier to fire workers. These large differences persist even between technology entrepreneurs who identify as Democrats and other Democratic constituencies.

This finding is surprising in light of popular accounts that describe technology entrepreneurs as falling within categories familiar in American politics: as typically liberal, typically conservative, or typically libertarian. However, we show that most technology entrepreneurs have a pattern of views that does not fit in any of these categories, has not been seen elsewhere, and is not predicted by prior work: a majority of technology entrepreneurs explicitly describe their views as supporting redistribution of wealth but opposing regulation of business, approximately double the share as in any other group of citizens, donors, or wealthy individuals we surveyed.

Our third set of findings concerns suggestive evidence for the theoretical mechanisms we posit for why technology entrepreneurs have this unique pattern of views. Our theoretical argument, elaborated below, is that the wealthy from a particular industry may have a unique set of political views because of the distinctive set of political predispositions of the individuals who select into each industry, and further, the experiences they will tend to have working in it. Consistent with this argument, we show that technology entrepreneurs share a distinct pattern of values and predispositions that correspond with their views in related policy domains. For example, with a series of pre-registered comparisons and survey experiments, we show that it appears technology entrepreneurs’ opposition to government regulation can be traced to positive predispositions towards markets and entrepreneurship. We also cast doubt on alternative explanations for their views related to demographics, geography, and pure self-interest.

[ Sun’s thread includes screencaps from what appears to be a slightly different version of the linked paper; I have quoted the corresponding section of the paper I have access to — JK ]

Marc Andreessen’s recent [New York Times] interview with Ross Douthat is very illuminating. Andreessen describes this exact disposition: “pay taxes, support gay rights, get praised” was the implicit deal of the Clinton-Gore era:

As a result of that, the most natural thing in the world for somebody like me was, “Oh, of course, I’m a normie Democrat. I’ll be a normie Democrat forever.”

Normie Democrat is what I call the Deal, with a capital D. Nobody ever wrote this down; it was just something everybody understood: You’re me, you show up, you’re an entrepreneur, you’re a capitalist, you start a company, you grow a company, and if it works, you make a lot of money. And then the company itself is good because it’s bringing new technology to the world that makes the world a better place, but then you make a lot of money, and you give the money away. Through that, you absolve yourself of all of your sins.

Then in your obituary, it talks about what an incredible person you were, both in your business career and in your philanthropic career. And by the way, you’re a Democrat, you’re pro–gay rights, you’re pro-abortion, you’re pro all the fashionable and appropriate social causes of the time. There are no trade-offs. This is the Deal.

Then, of course, everybody knows Republicans are just knuckle-dragging racists. It was taken as given that there was going to be this great relationship. And of course, it worked so well for the Democratic Party. Clinton and Gore sailed to a re-election in ’96. And the Valley was locked in for 100 years to come to be straight-up conventional blue Democrat.

Yes, 100 percent. I would say even more than that. We all voluntarily live in California. We not only have the federal dimension of what you’re saying; we also live in these very high-tax cities — San Francisco, Palo Alto. And I think by paying higher taxes and not objecting to them, you prove you’re a good person. For that generation of enlightened centrist liberals, it was: Of course you pay higher taxes, because we’re the Democratic Party. As an agent of positive social change, of course you want to have a bigger safety net. Of course you want to fund all these programs, and you want to fund all these activist campaigns. Of course you want that.

The term “Camelot” was never used, but there was a Camelot feeling to it at the time that people must have felt in the early ’60s in the same way. Like, wow, yes, it’s all happening, and it’s all going to happen, and it’s going to be great. Yeah, they’re going to tax us, but it’s going to pay off. That was like a full-fledged part of the Deal.

Look, quite honestly, I am trying in none of this to claim moral high ground or moral sheen or anything, just to kind of take the edge off that, if that’s what I’ve come across.

Quite honestly, the tax rates didn’t really matter because when an internet company worked, it grew so fast and got so valuable that if you worked another three years, say, you’d make another 10X. Another 5 percent higher tax rate washed out in the numbers. So we weren’t forced to really think that hard about it. It just seemed like this was the formula that would result in everything working.

Social justice

Second, social justice fused with labor, nonprofits, & the state — fueling a wave of employee activism and speech / DEI / climate requirements. Structural issues demanded process solutions. Tech’s hatred of bureaucracy trumped their support for social justice.

See again the Douthat interview with Andreessen:

Andreessen

By 2013, the median newly arrived Harvard kid was like: “[expletive] it. We’re burning the system down. You are all evil. White people are evil. All men are evil. Capitalism is evil. Tech is evil.”

Douthat

But they’re working for you. These are people who are working for you.

Andreessen

Of course. So I had this moment with a senior executive, who I won’t name, but he said to me with a sense of dawning horror, “I think some of these kids are joining the company not with the intent of doing things for us but destroying us.”

They’re professional activists in their own minds, first and foremost. And it just turns out the way to exercise professional activism right now, most effectively, is to go and destroy a company from the inside. All-hands meetings started to get very contentious. Where you’d get berated at an all-hands meeting as a C.E.O., where you’d have these extremely angry employees show up and they were just completely furious about how there’s way too many white men on the management team. “Why are we a for-profit corporation? Don’t you know all the downstream horrible effects that this technology is having? We need to spend unlimited money in order to make sure that we’re not emitting any carbon.”

See comments from Paul Graham:

For the press there was money in wokeness. But they weren’t the only ones. That was one of the biggest differences between the two waves of political correctness: the first was driven almost entirely by amateurs, but the second was often driven by professionals. For some it was their whole job. By 2010 a new class of administrators had arisen whose job was basically to enforce wokeness. They played a role similar to that of the political commissars who got attached to military and industrial organizations in the USSR: they weren’t directly in the flow of the organization’s work, but watched from the side to ensure that nothing improper happened in the doing of it. These new administrators could often be recognized by the word “inclusion” in their titles. Within institutions this was the preferred euphemism for wokeness; a new list of banned words, for example, would usually be called an “inclusive language guide.”

This new class of bureaucrats pursued a woke agenda as if their jobs depended on it, because they did. If you hire people to keep watch for a particular type of problem, they’re going to find it, because otherwise there’s no justification for their existence. But these bureaucrats also represented a second and possibly even greater danger. Many were involved in hiring, and when possible they tried to ensure their employers hired only people who shared their political beliefs. The most egregious cases were the new “DEI statements” that some universities started to require from faculty candidates, proving their commitment to wokeness. Some universities used these statements as the initial filter and only even considered candidates who scored high enough on them. You’re not hiring Einstein that way; imagine what you get instead.

Dem pivot

Third, Dems started to pivot away from key Clintonite ideas:

  1. neoliberal economics would be good for america
  2. digital tech would spread good liberal values

Henry Farrell calls this the collapse of both the “neoliberal” and “Palo Alto consensus”:

If that has changed, it is not simply because progressives have moved away from Silicon Valley. It is because both the neoliberal consensus and the Palo Alto consensus have collapsed, leading the political economies of Washington DC and Silicon Valley to move in very different directions.

A lot of attention has been paid to the intellectual and political collapse of neoliberalism. This really got going thanks to Trump, but it transformed the organizing ideas of the Democratic coalition too. During the Trump era, card-carrying Clintonites like Jake Sullivan became convinced that old ideas about minimally regulated markets and trade didn’t make much sense any more. Domestically, they believed that the “China shock” had hollowed out America’s industrial heartland, opening the way for Trump style populism. Reviving U.S. manufacturing and construction might be facilitated through a “Green New Deal” that would both allow the U.S. to respond effectively to climate change, and revive the physical economy. Internationally, they believed that China was a direct threat to U.S. national security, as it caught up with the U.S. on technology, industrial capacity and ability to project military force. Finally, they believed that U.S. elites had become much too supine about economic power, allowing the U.S. economy to become dominated by powerful monopolies. New approaches to antitrust were needed to restrain platform companies which had gotten out of control. Unions would be Democrats’ most crucial ally in bringing back the working class.

Now they don’t. Authoritarian governments have turned out to be quite adept for the time being, not just at suppressing dissidence but at using these technologies for their own purposes. Platforms like Facebook have been used to mobilize ethnic violence around the world, with minimal pushback from the platform’s moderation systems, which were built on the cheap and not designed to deal with a complex world where people could do horrible things in hundreds of languages. And there are now a lot of people who think that Silicon Valley platforms are bad for stability in places like the U.S. and Western Europe where democracy was supposed to be consolidated.

My surmise is that this shift in beliefs has undermined the core ideas that held the Silicon Valley coalition together. Specifically, it has broken the previously ‘obvious’ intimate relationship between innovation and liberalism.

I don’t see anyone arguing that Silicon Valley innovation is the best way of spreading liberal democratic awesome around the world any more, or for keeping it up and running at home. Instead, I see a variety of arguments for the unbridled benefits of innovation, regardless of its benefits for democratic liberalism. I see a lot of arguments that AI innovation in particular is about to propel us into an incredible new world of human possibilities, provided that it isn’t restrained by DEI, ESG and other such nonsense. Others (or the same people) argue that we need to innovate, innovate, innovate because we are caught in a technological arms race with China, and if we lose, we’re toast. Others (sotto or brutto voce; again, sometimes the same people) - contend innovation isn’t really possible in a world of democratic restraint, and we need new forms of corporate authoritarianism with a side helping of exit, to allow the kinds of advances we really need to transform the world.

The Biden admin was far more pro-unions, tech regulation & antitrust. Dems probably needed to adjust (neoliberalism isn’t popular), but this broke the tech / liberal coalition — they were seen as abandoning business to back the activist class.

Douthat

So what, in concrete terms, does that mean? What are the policies that shocked or surprised you about the Biden administration?

Andreessen

They came for business in a very broad-based way. Everything that I’m going to describe also, it turns out, I found out later, it happened in the energy industry. And I think it happened in a bunch of other industries, but the C.E.O.s felt like they couldn’t talk about it.

The problem is the raw application of the power of the administrative state, the raw application of regulation and then the raw arbitrary enforcement and promulgation of regulation. It was increasing insertion into basic staffing. Government-mandated enforcement of D.E.I. in very destructive ways. Some of these agencies have their own in-house courts, which is bananas. Also just straight-up threats and bullying.

Mark Zuckerberg just talked about this on “Rogan.” Direct phone calls from senior members of the administration. Screaming executives ordering them to do things. Just full-on “[Expletive] you. We own you. We control you. You’re going to do what we want or we’re going to destroy you.”

Then they just came after crypto. Absolutely tried to kill us.

Trump

Fourth, tech found a home in Trump’s less ideological, more deal-friendly Republican party. Traditional conservatives were a poor fit, but Trump runs the country as a cult of personality, institutions be damned. And isn’t that what “founder mode” is? From Pirate Wires:

As President Trump begins his second term, he has the support of many business leaders, especially the entrepreneurs and founders who make up the ascendant ‘tech right.’ Why did this faction emerge and rally behind the president? Some point to policy issues such as regulation, taxation, government contracting, and antitrust. Others note that self-interest and perhaps self-preservation motivate its members as much as principle or policy.

But sincere converts to the tech right share at least one thing in common: a belief in founders — change agents capable of upending stale industries — taking on Goliaths, and reaching into the future to unite it to the present. The tech right sees founders, and the qualities they embody and inspire in others, as the key to company success. Conversely, founder-less institutions don’t work, like a body without a head — or perhaps, without a soul.

With its support of Trump, the tech right is just applying this model to politics, the ultimate stale industry, and Washington, D.C., the ultimate Goliath. In Donald Trump, the tech right and the American people see a leader. More to the point, we see a founder.

Other factors

There were other contributing factors: post-2016 techlash, defense money, mean journalists, COVID, plain old opportunism in the new Trump admin. Vocal Trump support from Andreessen, Sacks, Musk ignited a preference cascade.

But I maintain that SV elites’ deepest commitment is not right or left, but to unfettered innovation (& the wealth / growth that results). It’s the spirit of “get out of my way & let me cook” — they’ll align with whichever party makes that possible.

My overall thought on the sort of cultural versus material debate is like, the left liberal camp still underrates the cultural stuff. I think you’re right that considering the counterfactual, would Zuckerberg be being like, “Meta is too feminine now” if Kamala won — obviously no. At the same time, when I read PG’s essay, I was like, oh, this is a complaint about bureaucracy as much as it’s a complaint about politics.

It’s this cultural thing of builders in, prigs out. There are all these memes like “high agency” or “founder mode” or “live players” or “you can just do things.” My friend Clara, who edits Asterisk Magazine, was telling me about some conference where someone called it “Robert Moses libertarianism.” What does that even mean? And I think this is incoherent, except for that Moses was really going founder mode. He was a high agency guy. He’s very effective, right? And there is some disposition where tech, because it is the industry of innovators and disruptors, is just like: Anybody who’s super effective and hacks the system, whether it’s from the inside or the outside, whether it’s in tech or in politics, they get that respect.

That relates to why I see progress as like the primary coalition rather than like left or right. Or as Lonsdale calls it, the “builder class.” There is economic self-interest, obviously, but it’s also an aesthetic preference.

Another consequence is that this is causing real rifts between maga right and tech right — most notably over H1-B, but IMO that’s a sign of more fights to come.

The more traditional conservatives — not Hanania, that is — go on to express skepticism about the “Tech Right”’s lack of a moral compass (of the Christian sort); its willingness to endure short-term social harms for long-term economic gains; and its overall bias toward disruption — whatever chaos may come. They’re suspicious of tech’s talk about “human capital” and its flirtations with fascism. The essays read like a warning to fellow reactionaries: Can the tech titans be trusted to preserve American values? Or are they riding the Trump train on the way to a robot-ruled transhumanist utopia? Take slogans like Accelerate or die! — there’s nothing “conservative” about it.

11 June 2024

Good user experience design

It is hard to name the good-ness of good UXD.

Intuitive?

People outside the field often say that they want a user experience which is “intuitive”. I have talked before about how under-considered that word is.

When people say they want a system to be “intuitive,” they typically mean that users should immediately understand how a system works when they encounter it. But you cannot really achieve that with many systems … not even with most of the systems people name when you ask them for an example of something “intuitive.”

Consider the mouse-and-cursor. Most of us have forgotten the first time we encountered it, and thus forgotten how unintuitive we found it the first time we used it. A little box on a string with a button or three on top? If you have just arrived from the 23rd century, you might pick it up and try talking to it. But with ten seconds of demonstration you understand it completely and have some sophisticated applications of it immediately available to you, and even if you didn’t see a mouse again for the next ten years you would still remember how it worked.

There you have what people really mean by “intuitive:” easy to explain, powerful in its implications, impossible to forget. You get that through systems that possess a clear, coherent internal logic that feels natural and obvious. Of course, it can take hard work to figure out those “natural and obvious” behaviors; we interaction designers call that work “interaction design.”

So that is not quite enough.

Delightful?

Cyd Harrell has a good critique of facile uses of “delight”:

delight [is] an ambiguous word, referring to either a level of pleasing someone (a high level) or a way of pleasing them (charm, surprise, in any case a very conscious pleasure). adopting a high level of pleasing users as a goal is good - mostly - but when designers, through some kind of linguistic slippage, adopt the “way” sense of delight to inappropriate contexts, it’s like following the script of a romance when trying to get to know a colleague — awkward. that said, lots of designers meant the level.

in recent years I’ve come to understand that the level can also be a problem in a more subtle way. if delight is a conscious pleasure - the spirit stirred somehow - multiple “experiences” or products or whatever trying to stir our spirit can be taxing. it’s not always additive & if it happens to miss — if it’s the wrong way of pleasure for the context, or the experience is just trying to make sure that it visibly, maybe measurably, exerted a high level of pleasure on you - it asks for attention it may not deserve. collectively, it can be a burden.

sometimes, especially with a longterm relationship like, well, a longterm relationship or like belonging in an institution, what we really need is the background level of assumption that we matter & are cared for, & then the occasional sparks in a special smile, the bed, a voting booth if you’re talking about stirring the civic spirit (I have a story I tell about being overwhelmed with institutional belonging in a library) - those become reinforcing & sustaining.

spoiler: those are harder to design for. & they can’t be accomplished entirely through the tools of design; so that’s where I think delight is tempting - it is suited to our toolset, & we can push it towards measurable. but in doing so (sometimes) we can get on the wrong foot.

Giles Colborne has a rap about how designers tend to justify gimmicky, interesting design as pursuing “delight”, but when one asks people about delightful experiences, they often describe effortless resolution of anxiety, a good UX design goal.

Boring?

Ryan Bigge’s In Defence of Boring UX:

“Only when a product is functional, reliable, and usable can users appreciate the delightful, pleasurable, or enjoyable aspects of the experience,” notes Fessenden. In other words, boring underpins delight — and sometimes boring is delightful.

Cap Watkins praises The Boring Designer:

Maybe it’s born out of seeing apps choose flash over function, or trying to understand just one too many indecipherable icons-as-buttons. Whatever the case, here’s an ode to the boring designers among us. The designers who …

  • Choose obvious over clever every time.
  • Rarely stand their ground.
  • Are Practical.
  • Value Laziness.
  • Lead the team.

Delivering power & pleasure?

I used to talk about “systems which deliver power and pleasure to the people who use them”. In 1997, when I was at Alan Cooper’s studio — then the only shop exclusively dedicated to what we now call “UX design” — we had a lively conversation about our mission statement coïnciding with us rebranding from “Cooper Software” to “Cooper Interaction Design”. Alan Cooper had a draft mission statement which was pretty good, but I was uneasy with its allusion to “designing software which is easy to use”. We were simplicity radicals then (and still), but we also worked on a lot of desktop apps which were necessarily complex.

I proposed “systems which deliver power and pleasure to the people who use them”, which I look back on with a mix of pride and unease. It has some distinct advantages as a way to articulate good UX design, and for a while the Cooper studio used it a lot in our materials. (It didn’t work as branding, though. Google search results were … worrisome.)

These days we rightly criticize the concept of “user-centered design” — we need a more global and ethical ground than that implies. (Though fergawdsake in the world we have we need more designers who are at least advocates for users.) But at that time that turn of phrase was a clarifying place to stand, and it still grounds much of how I think about UX design solutions:

Deliver

A lot of tools promise things which they do not deliver, either because they simply do not deliver the right function, or because they are too clumsy in their execution. A feature one does not use just acts as clutter, getting in the way.

Power

UXD should aspire to make things that are effective and make people effective; a simple tool can be powerful if it is the right tool, and a tool should not shy from sophistication in the right context.

Pleasure

We need to talk about well-crafted design. This can mean fun, delight, or excitement, yes. But most often UX design should offer the subtler joy of an unobtrusively graceful tool.

31 January 2024

Apple Vision Pro, eye tracking, and the cursors of the future

I am fascinated by how the Apple Vision Pro identifies where the user is looking, treating that locus of attention much like the cursor used on the Mac and other desktop computers; one “clicks” with hand gestures. This is a cunning way to make desktop software usable on this very different platform, and discerning this by watching eye movements is an astonishing technological feat. It is not just a matter of precisely detecting where the eye is pointed, which would be hard enough; our eyes constantly jitter around in saccades, so the Vision Pro has to deduce from this complex unconscious movement where the user has their attention in their subjective experience.
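Apple has not published how the Vision Pro does this, but a toy illustration of the underlying problem — recovering a stable locus of attention from jittery gaze samples — is the dispersion-threshold fixation detector (I-DT) from eye-tracking research. This is only a sketch of that general technique, not Apple’s method; the thresholds are made-up numbers:

```python
# Toy sketch of dispersion-based fixation detection (I-DT), a standard
# technique from eye-tracking research. NOT Apple's actual algorithm,
# which is not public; the thresholds here are illustrative only.

def detect_fixation(samples, max_dispersion=0.02, min_samples=6):
    """Scan gaze samples (x, y) for a window whose spread stays small.

    Returns the centroid of the first qualifying window, or None.
    Coordinates are assumed normalized to 0..1 of the screen;
    min_samples stands in for a minimum fixation duration.
    """
    for start in range(len(samples) - min_samples + 1):
        window = samples[start:start + min_samples]
        xs = [p[0] for p in window]
        ys = [p[1] for p in window]
        dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
        if dispersion <= max_dispersion:
            # Grow the window while the points stay clustered.
            end = start + min_samples
            while end < len(samples):
                xs.append(samples[end][0])
                ys.append(samples[end][1])
                if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                    xs.pop()
                    ys.pop()
                    break
                end += 1
            return (sum(xs) / len(xs), sum(ys) / len(ys))
    return None  # gaze never settled: the eye was in motion throughout
```

The point of the sketch: saccades produce widely scattered samples that never satisfy the dispersion test, while a fixation produces a tight cluster whose centroid is the user’s subjective point of attention.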

Modifying desktop computer interfaces

It is fun to think about exotic alternatives to the conventional mouse/trackpad & cursor combination. The big gestural interfaces seen in science fiction movies mostly turn out to be a bad idea — Tom Cruise was exhausted after fifteen minutes of just pretending to use them in Minority Report — but I believe that there are opportunities for innovation. Clayton Miller’s 10/gui considers ways we might take advantage of a huge multi-touch surface instead of a little trackpad. Bruce Tognazzini’s Starfire from 1994 is still ahead of available technology, bursting with both good & bad ideas for combining direct manipulation with touchscreens & styluses together with indirect manipulation using a mouse or trackpad. Devices like the iPad have begun to unlock the promise of distinguishing fingers from styluses to create more graceful, complex interaction idioms by combining the two; a few specialists use stylus input tools like Wacom tablets at the desktop, and I feel an itch that more people might benefit from integration of stylus input into their desktop experience.

So might we just replace the mouse/trackpad & cursor with eye tracking? No. I cannot imagine that it could ever provide the fine precision of the mouse/trackpad (or a stylus). But I think eye tracking could combine well with those input tools to make some things more graceful. It would not require fine precision, just the ability to register which window the user is currently looking at.

Discussion with Raghav Agrawal underlines that I am proposing something I hope would deliver a fundamentally different experience than the Apple Vision Pro. A user of the Vision Pro feels that they control the system with their gaze. A user of the desktop should still feel that they control the system with the mouse, with the system aware of their gaze and using that to ensure that it Just Does The Right Thing.

Solving some multi-monitor challenges

I think this will prove especially valuable if one has multiple big screens, which I expect more and more people to do as screens get better and cheaper. I am a lunatic who uses a big wide monitor, a big tall monitor, my laptop’s 16" display, and a little teleprompter display at my desk. I love being able to look around at several open windows, and expect that many people will discover how good this is for work.

But using existing mouse-cursor-window interfaces with multiple big screens does come with disadvantages. Dragging-and-dropping across expansive screens gets clumsy. One can lose track of the cursor in all that space; even wiggling the cursor does not always make it easy to find. With a lot of windows open, one can easily lose track of which one is currently selected.

A radical proposal for multiple cursors

Rather than drive the cursor to appear at the point of one’s visual focus — one does not want the cursor racing back and forth across the screen every time one glances at information on another screen — I think it would work to have a cursor in each window, with mouse/trackpad actions affecting only the window one is currently looking at. When one looks away from a window, its cursor stays where one left it.

This puts a cursor within view wherever one looks, easy to find. Maybe on a big window, if one has not looked at it in a while, the cursor returns to the center or gets a momentary flash of emphasis when one looks back at that window.
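The core of the proposal reduces to a small routing rule: per-window cursor state, with mouse deltas delivered only to the gazed-at window. A minimal sketch, with every class and method name hypothetical (no real windowing API is involved):

```python
# Sketch of the multi-cursor proposal: each window keeps its own cursor,
# and mouse input is routed to whichever window the eye tracker reports
# the user is looking at. All names are hypothetical illustrations.

class Window:
    def __init__(self, name, width, height):
        self.name = name
        self.width, self.height = width, height
        # Each window remembers its own cursor, starting at the center.
        self.cursor = (width // 2, height // 2)

    def move_cursor(self, dx, dy):
        x, y = self.cursor
        # Confine the cursor to the window, so all four edges become
        # easy targets (the Menu Bar effect, on every edge).
        self.cursor = (max(0, min(self.width, x + dx)),
                       max(0, min(self.height, y + dy)))

class GazeRouter:
    def __init__(self, windows):
        self.windows = {w.name: w for w in windows}
        self.gazed = None  # the window currently being looked at

    def on_gaze(self, window_name):
        self.gazed = self.windows[window_name]

    def on_mouse_delta(self, dx, dy):
        # Mouse input affects only the gazed-at window; the cursors of
        # all other windows stay exactly where they were left.
        if self.gazed:
            self.gazed.move_cursor(dx, dy)
```

Scrolling and clicking would route the same way as mouse deltas, and drag-and-drop falls out naturally: the dragged object simply attaches to whichever window’s cursor the gaze selects next.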

The Mac puts the Menu Bar at the top of the screen because the edge prevents overshooting, making it easier to mouse decisively to an element there. Keeping the cursor confined to the boundaries of each window makes all four window edges this kind of convenient interface territory.

Integrating eye tracking also eliminates the need to have a selected window to keep track of. In existing systems, actions like using the mouse scroll wheel can produce an awkward surprise when they do not affect the document in view, instead disrupting the content of a window which one has forgotten remained selected. With eye tracking, user actions can always just affect the thing they have in view, eliminating that problem. (I will get to one important exception to this pattern in a moment.)

Acting across multiple windows

Confining input effects to within windows seems like it would break a lot of interaction gestures which require moving across multiple windows, but I think everything one must do that way now can work at least as well in my proposal.

Again, we do not need to move the cursor across windows to select one; attention tracking eliminates the need for a selected window.

One need not move the cursor across windows to do window management. The edges of windows remain drag handles for resizing them and moving them around, and as I said above, with the cursor confined to the window, these become easier targets. One can combine this with the buttons and other controls I envision putting at those edges: drag to affect the window, click to use the control. I am a crank who prefers a tiled display to overlapping windows, but handling overlapping windows is fine: look at the protruding bit and click to pop it to the front.

Drag-and-drop across windows would require a bit of an adjustment, but eye tracking enables an improvement. One starts dragging an object in one window — turns to the other window — and that window’s cursor is right there with the object attached, responding to mouse movements. This will be more graceful, with less mouse movement and less risk of dropping onto the wrong window when dragging between windows on separate screens.

Imagine working with two text documents, referencing an old one while authoring a new one, bouncing back-and-forth between the two. Turning from the new document to the old one briefly, one might scroll to the next page in the old document, use the cursor in that document to select something one wants to quote, copy it, then turn back to the new document to find the cursor waiting right where one left it, ready to paste in the quote.

Plain text as the input exception

Keyboard shortcuts would act on the window one is looking at, just like mouse movement and clicks. But plain text is a little trickier.

It should be obvious how in the new-and-old document example above one may want to type into the new document while looking at the old one. There are a lot of situations like that. Text input boxes need a standard interface element allowing one to lock the box as the target of plaintext input from the keyboard; when that is active, other text input boxes show as unavailable. So one need not hunt down the locked text input box to unlock it: any text box showing as unavailable would offer a control to unlock the old locked text box, allowing text input to go where one is looking ... or to immediately make that text box the new locked input target.
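The locking rule amounts to a one-line override of gaze routing: keystrokes go to the locked box if there is one, else to the box in view. A sketch, with hypothetical names throughout:

```python
# Sketch of the plaintext-input exception: keystrokes normally follow
# the gaze, unless the user has locked a particular text box as the
# keyboard target. All class and method names are hypothetical.

class TextBox:
    def __init__(self, name):
        self.name = name
        self.text = ""

class KeyboardRouter:
    def __init__(self):
        self.locked = None   # box locked as the keyboard target, if any
        self.gazed = None    # box in the window currently looked at

    def on_gaze(self, box):
        self.gazed = box

    def lock(self, box):
        # The standard control on a text box: pin it as the target.
        self.locked = box

    def unlock(self):
        # Offered on any box shown as unavailable, so the user never
        # has to hunt down the locked box itself to release it.
        self.locked = None

    def on_keystroke(self, char):
        # The whole mechanism: locked target wins, else follow the gaze.
        target = self.locked or self.gazed
        if target:
            target.text += char
```

With the new document locked, one can scroll and select in the old document by gaze while every keystroke still lands in the new one.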

Having proposed this interface idiom, I am tempted to want this ability to lock the text input target, overriding the selected window, in the conventional systems we have now!

19 January 2024

The Point Is

I had the good fortune to be at this talk at the 1996 Computer Game Developers’ conference, and it has stuck in my mind ever since. In retrospect, it anticipated not just Facebook’s Metaverse project but Facebook itself. Since Moriarty’s original post has linkrotted, I am re-posting it here for my convenience.

A few preparatory observations for young readers:

  • In 1996, it was still common for computers in an office to not even be networked to each other, much less to the internet. Few people used the internet at home. Those who did used modems to connect their computer over their phone line to a nerdy internet service provider ... or more likely AOL, which was still more a walled garden than a connection to the internet. Google had not yet been founded.
  • Seriously, those modems were slow. People used interlaced GIFs on web pages because they would give you a blurry version of the image first, while you waited another half a minute for the images to fully load.
  • It was a common practice for web pages to include a counter at the bottom of the page tallying how many times the page had ever been visited.
  • The Pentium was a new chip used in Windows computers. Incremental differences in chip speeds we would now consider laughable made a big difference in what computers could and could not do; just a few years earlier, the release of the computer game Doom with real-time 3D graphics — primitive as they were — was a happy shock about what had just become possible on ordinary home computers.
  • Sand Hill Road was (and still is) a street in Menlo Park with the offices of many venture capital firms funding Silicon Valley.
  • In this context, “MUD” stands for “multi-user dungeon/dimension/domain”, a very early form of proto-social-media proto-virtual-reality. A large number of people could communicate in text with a computer which had a map of a world with multiple connected spaces — the name comes from the earliest implementations which had maps of a Dungeons & Dragons type “dungeon” full of monsters to slay and magical items to find, but later versions offered an array of weird and interesting possibilities. When connected to a MUD, the system understood you as occupying a particular “room” on the map, and one could move through the map by telling the system to go through doors or whatever; if multiple people were “in” the same place, they would “see” each other in the description of the room, “hear” each other talking (in text chat), and could interact in other ways mediated by the system.


The Point Is — 1996 — Brian Moriarty

The Point Is was the first of my fully scripted lectures. Today, writing on the 10th anniversary of its premiere, it seems to me rather quaint, inspired by the psychedelic muse of San Francisco, and representative of the wide-eyed techno-optimism of the mid-1990s. Parts of it make me cringe. Other parts seem eerily prescient. I must have been on drugs, or something.

Version 1 was originally presented on 1 February 1996, at Mpath’s Internet Game Developers Conference in San Jose, California. On this occasion, I was fortunate enough to command a decent budget and broad control over the venue and equipment, allowing the lecture to be presented under nearly ideal conditions.

The large conference room was completely darkened. Suspended above the stage area was an equilateral triangle, one meter on a side, covered with black light-absorbing material. Affixed to the center of the triangle was a dime-sized dot of 3M retro-reflective plastic. The beam of a 3-watt liquid-cooled green argon laser was focused onto the reflective dot to produce a dazzling point of coherent radiation. Although the point was completely motionless throughout the performance, random wave interference caused it to twinkle and scintillate eerily. The live narration and recorded music were reproduced at room-shaking volume through a high-performance stereophonic sound system.

Excerpts of Version 1 were presented on 21 February 1996 at Imagina in Monte Carlo, and 3 March 1996 at the SPA Spring Symposium, San Francisco.

Version 2 (reproduced below) was given on 31 March 1996 at the Game Developers Conference in Santa Clara, California. The presentation was similar to that of 1 February, although the laser and sound system were less authoritative. George “Fat Man” Sanger provided an introduction.

Excerpts of Version 2 were presented on 5 November 1996 at Online Entertainment ’96 in London.




You are about to take part in an experiment.

An experiment in group attentiveness.

For this experiment to work, it is important that you not look at me while I am speaking.

I have extinguished the house lights to discourage you from watching me.

Instead of looking at me, or anything else, I invite you to contemplate this point of coherent light.

Fix your eyes on the point as you listen to the sound of my voice.

The point will not move while I am speaking.

But, if you gaze at it steadily, expectantly, it may appear to wander.

The point will not change color or size.

But, if you fix your eyes upon it conscientiously, it may appear to oscillate in diameter or hue.

These illusions will occur only if you still your mind, and fix your eyes on the point.

Regard the space around the point, and together we will share an illusion.

This is not an attempt to hypnotize you. I am not a hypnotist.

I am an engineer of illusions that can be shared.

We are all about to become engineers of shared illusion.

This particular illusion has been engineered to fascinate you.

To fascinate you with possibilities.

The possibilities of the Internet and the World Wide Web.

Engineered to suspend your skepticism.

To transfix your attention. Attention.

The point of this experiment is to achieve attention.

Attention has not been a problem for the Internet or the World Wide Web.

Everybody here knows that the Net has become too important to ignore.

What is surprising is that the Web is succeeding in spite of the fact that it is based on very primitive technology.

Almost all of the home computers connected to the Web employ modems and analog voice lines.

Even the fastest modems are too stately. Even 28.8 V.34 is too majestic.

Many consumers are using service providers with out-of-date equipment or inadequate infrastructure.

This makes the torpid Web experience even more frustrating.

We know these problems will go away soon.

Consumers are beginning to learn about digital alternatives to modems and voice lines.

The cost of these alternatives is plummeting faster than anyone dared to hope.

Soon modems will be doorstops.

Consumers are learning that some Internet service providers are more reliable than others.

Soon anything less than lively, robust access to the Web will be unacceptable.

These trends are encouraging.

There are also ways to maximize the performance of the Web.

Strategic deployment of servers. Enlightened application design.

Other approaches that make the most of existing hardware and connections.

These technologies are effective.

But the Web’s success isn’t relying on these trends or technologies.

The Web doesn’t need any new technology in order to become popular.

It already is popular.

Despite modems and analog voice lines and generally crummy service, the Web is one of the biggest success stories of the 90s.

Thousands of people sign up every day.

Over a hundred new Web sites appear every hour.

There must be something about the Web people like.

What is it that makes people like the Web?

What is it that makes people like anything?

What is the origin and purpose of pleasure?

Biologists tell us that our brains contain faculties for creating enjoyable sensations.

They have coined a sophisticated technical term for these faculties.

They call them “pleasure centers.”

Nobody knows how they really work.

But everyone knows that some behaviors feel better than others.

Behaviors like eating, gathering, talking, playing, and having sex.

Biologists explain pleasure by invoking a process they call “natural selection.”

This process is said to favor the evolution of brains that give positive feedback for behaviors which provide a survival advantage.

In other words, nature rewards life-affirming behaviors with pleasure.

That’s why it feels good to eat.

Nature rewards a healthy appetite.

That’s why it feels good to collect things.

Nature rewards acquisitiveness.

And that is why it feels good to talk and to play.

Nature rewards communicators.

It feels good to communicate.

People like the World Wide Web because it satisfies the need to communicate.

The need to feel connected.

The need to not be alone.

Yet today, the Web is a curiously lonely experience.

You are surrounded by millions of surfers, but you can’t wave to anybody.

All you get is a number at the bottom of the home page, indicating how many chances to meet someone you missed.

Happily, this is beginning to change.

Soon the Web will begin to notice you.

Soon the Web will be able to process commands faster than you can submit them, and formulate interesting responses in real time.

When that happens, the Web will become conversational.

When that happens, the Web will become interactive.

There will be two very popular uses for this new Interactive Web.

One will be conversation. The other will be multiplayer games.

Multiplayer games offer people the joy of communication.

Single-player games do not.

Multiplayer games are more involving.

More addictive. More delicious. More fun.

The designers and engineers of computer games have always known this.

The designers and engineers of computer games have always known that multiplayer games would someday challenge and eventually eclipse single-player games.

The designers and engineers of computer games have been waiting for over twenty years for a chance to become the designers and engineers of shared illusions.

All we need is an adequate network with an adequate number of users.

All we need is adequate computing power.

All we need is an adequate operating standard.

Thanks to the Internet, the wait is almost over.

Thanks to the Pentium, the wait is almost over.

And, like it or not, thanks to Windows, the wait is almost over.

So what are we waiting for now?

We are waiting for a chance to build the games.

For a visionary executive. For an enlightened venture partner.

Somebody who understands that a multiplayer option tacked on to a single-player game at the last minute and with great reluctance is not going to be good enough.

But history suggests that these things are unlikely to materialize on demand.

History suggests that we will probably have to wait for the first Big Online Hit.

Sometime soon, somebody, possibly somebody in this room, is going to create an online game that will capture the imagination of the world.

An online game that will become so popular the network will be threatened with collapse.

Congress will scramble to investigate it.

Wall Street and Sand Hill Road will scramble to invest in it.

Pundits will scramble to say they predicted it.

And everyone, everyone, everyone in this room will scramble to imitate it.

Sometime soon, the Gold Rush will begin.

The Big Online Hit will probably not be like Doom or Wing Commander or Super Mario World.

Although popular among hobbyists, these games are too brash and complicated for anybody else.

The Big Online Hit will probably reach beyond the hobbyists to a much larger group of people.

The people who loved Trivial Pursuit. The people who hated Rubik’s Cube.

The millions and millions of ordinary people who went crazy over Pac-Man.

Like these games, the Big Online Hit will be easy to learn, but difficult to master.

It will be approachable and inviting.

It will be abstract. It will be non-violent. It will be inexpensive.

It will be everywhere.

What kind of market will develop to support the Big Online Hit and its many imitators?

What will we be selling our customers on the Interactive Web?

To begin with, we will not be selling them things anymore.

Our products will be digital.

Digital products do not need to be wrapped in boxes if our customers are wired to our offices.

Everything we have to offer them could be downloaded.

Everything we build for them should be designed to be downloaded.

Unfortunately, the first online games will probably be too big.

Developers have yet to outgrow the fetish of gratuitous data streaming made possible by CD-ROM.

Publishers are still unwilling to face the fact that the parts of their games they’re spending the most on are the parts their customers care about least.

They will stubbornly continue to produce games that require a CD-ROM.

This is not a practice that should be encouraged.

This is our chance to transcend CD-ROM, and all physical packaging.

Well-designed online games will be entirely downloadable.

Don’t forget that modems will soon be a thing of the past.

How soon?

Within two product cycles, a twenty or thirty megabyte download will seem only slightly excessive.

And it only gets better from there.

The convenience and economics of online distribution are irresistible.

Everyone will be much happier when the age of online distribution begins.

Everyone, that is, except the distributors and retailers of boxes.

Let us have a moment of silence for the distributors and retailers of boxes.

     §

If we won’t be selling our customers things anymore, what will we be selling them?

We will be selling our customers the pleasure of communicating with each other.

We will be selling our customers to each other.

How can we best sell our customers to each other?

We should first recognize that we are about to enter into a profound social contract with our customers.

We’ll no longer have retailers to handle our customers for us.

Soon we’ll be handling them directly.

We’ll be handling their problems, handling their money directly.

But they’ll be entrusting us with more than just their problems and money.

They’ll be entrusting us with the way they look and sound online.

With the way they come across.

With their secrets. With their dignity.

Our customers will insist that we respect their privacy.

That we protect their anonymity.

That we allow them freedom of expression.

That we mirror those expressions without distortion.

That we treat adults like adults.

That we offer parents the tools they need to make informed choices for their children.

To serve our customers best, we have to get out of our customers’ way.

Our customers don’t want to remember that we are facilitating everything they say and do online.

The more we make them aware of our presence, the more they will resent our presence.

Our goal should be to magnify our customers’ presence while concealing our own.

Our customers will be attracted to those games and services that offer the highest quality of presence.

Ours will be an economy of presence.

How do we measure the quality of presence?

It can be measured by answering a few simple questions:

How easy is it to arrive?

How easy is it to congregate?

How easy is it to communicate?

In these pioneering days, it’s to be expected that some of the solutions being offered for managing the quality of presence will be somewhat primitive.

One of the most primitive is the prevailing aesthetic of virtual presence.

The aesthetic known as “virtual reality.”

Some definitions.

By virtual presence, I refer to any artificial extension of awareness.

By virtual reality, I refer to a particular style of virtual presence.

I refer to a simulation of perceptual reality that tries to make you feel as if you are someplace else.

So virtual presence and virtual reality are not necessarily the same thing.

Nevertheless, many people seem to believe that they are.

The aesthetic of virtual reality is overwhelmingly pervasive.

Especially among engineers, and also among the fans of a category of science fiction called cyberpunk.

In the cyberpunk novel Snow Crash, there is a very exclusive virtual night club called the Black Sun.

One of the things that makes the Black Sun so exclusive and special is that, unlike the low-rent portions of cyberspace where avatars pass through each other freely, patrons of the Black Sun must walk around one another or collide.

I read this description and thought it was a witty satire.

Unfortunately, many would-be engineers of shared illusion have read Snow Crash and adopted it as a specification.

The heresy needs to be spoken: The cyberpunk conception of virtual reality is not really very interesting.

Only a hacker would find the problem of avatar collision interesting.

Virtual reality imposes a materialistic space-time metaphor on the experience of virtual presence.

Space and time are exalted in virtual reality.

Even the word “cyber-SPACE” acknowledges their eminence.

But the space-time metaphor is not powerful enough.

Virtual reality is plagued with serious problems about how many people will fit in a conversation space and how to avoid overlaps and log-jams and backing yourself into a virtual corner.

Proposed solutions have been awkward and unsatisfying.

Distances and obstacle avoidance may be realistic, but they are not elegant.

Having to wait before you can arrive may be lifelike, but it is not efficient, or fun.

These constraints are appropriate if you’re building a flight simulator or some other model of a real space-time process.

But space-time simulation is being adopted as the metaphor of choice for all virtual presence.

Verisimilitude is not our concern. Simulation is not our business.

Communication is our business.

We are not necessarily selling our customers an alternative reality.

We are selling our customers to each other.

The space-time metaphor represents a monumental failure of the imagination.

This metaphor has exactly one benefit: it is familiar.

So how important is familiarity?

The history of motion pictures offers an amusing lesson.

In the early days, stories were adapted to the screen by framing a stationary camera view around the field of action.

This technique yielded a stage-like perspective that was familiar to audiences.

A few pioneers came along who thought it might be interesting to move the camera closer to the actors for dramatic emphasis.

A couple of real troublemakers wanted to move so close that only the actors’ heads would be visible.

Producers and exhibitors were aghast.

Producers and exhibitors thought the idea of showing parts of actors was macabre.

They actually declared that if the frameline divided an actor’s image at the neck, the audience might conclude that the actor had been decapitated.

They believed that film was a reproduction medium only.

But audiences were more sophisticated than that.

They were happy to embrace unfamiliar ideas if they made movies more powerful and interesting.

They came to understand that movies could be used not only to reproduce perception, but to represent the process of thought itself.

Now, if the hundred-year-old technology of perforated film can be used to represent the thought process, why can’t virtual presence?

Isn’t the thought process more interesting than collision detection?

We need to ask ourselves how to make virtual presence do what we really want, rather than how well we can make it approach what reality will always do better.

We are talking about the quality of presence again.

So ask the questions:

How easy is it to arrive?

How easy is it to congregate?

How easy is it to communicate?

We can begin by devising ways to transcend the conventions of space and time.

Our bodies and our brains seem to be trapped in space and time.

But the Web is not an environment for the body or the brain.

The Web is an environment for the mind.

For the mind.

We’ve been thinking about virtual presence as if we have to send our bodies out there.

We don’t.

If we could design reality for our minds, what powers would we grant ourselves?

The ability to be anywhere instantly would be a step in the right direction.

The ability to be everywhere all at once, without going mad, is the real challenge.

Why should our minds roll around like cameras, when they can zoom and focus like lenses?

Why settle for avatars, when we can be angels?

Our goal should not be virtual reality.

Our goal should be actual ubiquity.

Space and time are not intrinsic properties of virtual presence.

Space and time will not exist in virtual presence unless we bring them with us.

Space and time are boring.

Let’s not invite them.

     §

Now, it’s all very well to talk about evicting space and time from virtual presence.

It’s quite another to imagine what virtual presence would actually be like without them.

What will it be like when we transcend space and time?

How will we navigate without dimensions?

How will we tell things apart?

How will we separate things?

There can be no rational answers to these questions.

To formulate a response, we must be prepared to set aside rationality.

I first heard the word “strange” used this way to describe future technology by Peter Bergman of the Firesign Theater, at a lunch in 1994.

We must be prepared to embrace Strangeness.

I’m not talking about strangeness that is merely curious or weird.

Not strangeness as in bungee jumping or platypuses or Michael Jackson’s pet monkey.

I’m talking about the kind of Strangeness that goes beyond language.

The kind of Strangeness that is offensive to the intellect.

Strangeness so profoundly alien, a shift of consciousness is required to deal with it without going mad.

The kind of Strangeness that Lovecraft used to write about.

This is Strangeness with a capital S.

Strangeness like the square root of negative one.

Like superstrings and black holes.

Really Strange things seem to emerge from outside space and time.

From the realm of imagination.

So, when we ask what virtual presence would be like without space and time, the general answer is that it will be imaginary.

The precise answer is that it will be Strange.

Some of you are wondering why you should concern yourself with Strangeness.

Isn’t it enough to keep building knockoffs of the last big hit?

Isn’t it safer to let somebody else take the arrows in the back?

The truth is that every single one of you is going to be building Strange products.

The question is, who will have the vision to build them while they still seem Strange?

We must explore Strangeness because that is what the future would look like if we could see it today.

Embracing strangeness doesn’t mean abandoning the dictates of common sense or sound business practice.

It simply means you must be willing to take enough risk.

How do you know when you are taking enough risk?

If nobody is complaining about your work, you’re probably not taking enough risk.

If nobody is slapping themselves upside the head.

If nobody is saying, “I knew that!”

If anybody can afford to ignore what you are doing, you are not being Strange enough.

The format of this lecture may have seemed strange to you at the beginning.

By now you are probably getting used to it.

So let’s raise our collective attention to a higher level of Strangeness.

Keep your eyes on the point, and together we will explore Strangeness.

Archaeological and linguistic evidence suggests that ancient civilizations did not necessarily share our Western concept of the Self.

The concept that each of us is a separate, independent consciousness moving through space and time.

The basic dualism of the perceiver and the perceived.

The concept Freud called the Ego.

Most of us accept experiential dualism as a fundamental aspect of reality.

But some religious traditions maintain that Dualism and the Ego have nothing to do with reality.

These religions teach that separateness is only an illusion.

An illusion that can be dispelled by certain esoteric practices.

These religions possess Strange but demonstrably effective technologies that allow individuals to achieve states of consciousness that transcend the illusion of separateness.

And what do these transcendent individuals have to tell us?

They tell us that we are all deeply connected.

Part of a network of minds. A web of souls.

An economy of presence.

Does any of this sound familiar?

And they tell us that it is space and time that prevent us from realizing our essential unity.

That space and time are precisely the illusions that keep us apart.

Scientific materialists don’t like to hear this stuff.

A materialist might concede the possibility that space and time and separateness are some kind of mental illusions.

But they will expect us to postulate an evolutionary purpose for these illusions.

The doctrine of natural selection demands that our brains must have developed these basic organizing metaphors because they provide some kind of survival advantage.

What is the advantage of believing that we are all separate?

All different?

Consider that without differences, there could be no comparisons.

Without separateness, there could be no dissatisfaction.

There could be no striving or competition or progress.

Without separateness, we would never have bothered to leave the paradise of Africa.

We would never have become farmers or builders or warriors.

We would never have aspired to the Renaissance or the Industrial Revolution or, not one, but two devastating world wars.

Only arrogance and pride born of separateness could challenge space and time with railroads and airplanes and guided missiles.

Only righteous malevolence would presume to harness the Strange power of special relativity to create an atomic bomb.

Only hatred driven by the illusion of separateness could produce conflicts and weapons horrible enough to shake us all out of the collective nightmare we call history.

Only institutionalized greed could concentrate enough wealth to wire the entire planet.

We have taken the long way around, my friends.

Somehow, amazingly, perhaps undeservedly, we are about to achieve global interactive connectivity without blowing ourselves up first.

The ways of natural selection are indeed Strange.

Some of you are going to cash in on the Internet and attain positions of power and influence.

Nature rewards a healthy appetite.

Some of you will create new technologies and companies and become comfortably wealthy.

Nature rewards acquisitiveness.

Nature rewards life-affirming behaviors that lead to a survival advantage for the species.

The illusion of separateness may have been necessary to push us to this threshold of realization.

But before we can Realize, before we can evolve, we must discard that illusion.

We must discard it like the space shuttle discards its solid rocket boosters.

We must discard it like an inflamed appendix.

The illusion of separateness has outlived its usefulness.

It is turning against us and subverting our destiny.

It is time to outgrow it. It is killing us.

And we, in this room, are the architects of its demise.

The point is that we are beginning to remember something we have all forgotten about ourselves.

Something wonderful about ourselves.

We don’t have the words in our Western languages to describe what that something might be.

But we can feel it. We can feel it.

So we’re doing what anyone who is having trouble describing something does.

We are trying to draw a picture of it.

We are struggling to construct a model of it.

We have succeeded in laying the foundations of that model.

And we have given our model a name.

We call it the World Wide Web.

By using the Web, by thinking about its possibilities, and especially by helping to build the Web, you are changing the way you realize the world.

You are changing your mind about the world.

The Web isn’t just something that is happening in the world.

It’s something that’s happening in you.

When people set up email accounts or personal Web sites or join chat rooms or create MUD personas, what are they doing, really?

They are saying to the world, I Am.

I Am.

I signify. I am part of a larger community.

I am part of something bigger than myself.

These are empowering acts.

These acts are an expression of hope.

These are spiritual acts.

Why is a game designer talking to you about spirituality?

The adjective spiritual simply refers to things which have no body, form or substance.

So spirituality is about things that are disembodied.

Things that are formless. Things that are insubstantial.

Things that are virtual.

Spiritual experiences are nothing to be ashamed about.

Spiritual experiences are, in fact, quite practical.

Spiritual experiences are, in fact, our business.

Ours will be an economy of spirits.

Some of you are now thoroughly embarrassed by my pretensions.

Some of you think that spiritual purposes are fuzzy-minded, and not respectable.

If the development of an effective spiritual technology does not interest you, then do it for your resume.

If being part of something bigger than yourself doesn’t inspire you with hope, then do it for your portfolio.

The reasons you use to justify your investment are not important.

The name of the spirit that moves you is not important.

What is important is that you are moving.

What is important is that you are building this.

We have to build this.

We’re not out of the woods yet.

We could still blow ourselves up before we build enough of this thing to recognize ourselves in it.

Everything else we have ever built will be for nothing if we fail to build this now.

Build this. Just build this.

You came to this conference because you sensed opportunity.

Your instinct was profoundly correct.

Conversation and multiplayer games are indeed the killer apps of the Internet.

The next big things. The Trojan horses.

Multiplayer games will attract scores of millions to the World Wide Web.

They’re easy to understand. They’re compelling.

They’re lots and lots of fun.

They will be a potent catalyst for global interactive connectivity.

They will help to bring us all together, at last.

They will help to bring us all together, again.

Are you ready to be a part of something bigger than yourself?

Are you ready to suspend your skepticism and entertain possibilities?

The possibility that something Strange and wonderful is trying to happen?

The possibility that the World Wide Web may be one of its manifestations?

Are you ready for the shock of recognition?

The point is that we are instruments of something bigger than ourselves.

We don’t know what this something is.

But we can feel it. We can feel it.

I call it “the point,” because the point is to find out what the point is.

The point is urgently prodding us.

The point is pointing at itself.

The point is infinitely small, and mathematically perfect.

The point has no volume, but it is never silent.

The point has no radius, but it is eternally radiant.

It is the shining Void. The empty Source.

It is expanding and collapsing simultaneously.

It is all-encompassing, and coincident with every other point in the universe.

The point is that there are no coincidences.

The point is that our hand is being guided here.

The point is that our attention is being fixed.

The point of this experiment is to achieve attention. Attention.

The point is not familiar.

The point is Strange. The point is Other.

The point is not contingent upon anything. Not even itself.

It is the First Cause. The Prime Mover.

The point is emergent.

The point is an emergency.

Emergency! Attention! The point is on fire!

But the point is inextinguishable.

The point is the burning bush.

The point is the I Am.

The I Am.

The point is not good.

The point is not evil.

The point is not even indifferent.

The point simply is. The point is.

The point is not an object or a noun.

The point is an action. The point is a verb.

The verb is to be.

The point is Being. It is Being.

The point is that we don’t have to be alone anymore.

The point is that we have never been alone.

The point is that we can all be as one.

The point is that we are all one.

The point is that we are all one.

The point is that we are all one.

Fix your eyes upon the point, and together we will achieve attention.


Copyright © 1996-2016 Brian Moriarty. All rights reserved.

Doom and keeping company with skilled professionals

Capturing a Twitter thread I wrote on the 25th anniversary of the release of the computer game Doom.


25 years ago today, a colleague of mine interrupted people at their desks. “You have to come see this.” I was working at a little shop of about a dozen people making computer games. And one of the programmers had just downloaded the first level of Doom.

Computer programmers use fast computers (to save compile time, and because they are demanding prima donnas) but this was 1993 and Doom did not seem possible. Guys would look at the screen and 30 seconds later ask, “This is … realtime … ?”

Baffled amazement.

The guy who had downloaded Doom explained that the game’s designers had planned for people with slower computers. It was easy to re-size the display down so there were fewer pixels to render. On his machine, though, we got realtime 3D at fullscreen, as smooth as Frogger.

After a couple of minutes, the programmers started making little observations.

“You can’t tilt your head, can you?” “Yeah. There’s yaw, but no roll and no pitch.”

“The monsters always face you. The room is 3D but they are pre-rendered sprites.”

“That texture repeats. There’s actually only a few textures visible at any one time.”

“We go up and down stairs but I don’t think the map ever overlaps, vertically.”

My colleagues were finding little coding hacks that made Doom possible, by watching the gameplay.




The first time we racked the shotgun on Doom there was a wicked Cheer Of Awesome from a dozen professional computer game nerds. But by then we had already lost our innocence.

Baffled awe had given way to a rueful but delighted awe. My colleagues had reverse-engineered the fundamentals of Doom together, watching fifteen minutes of gameplay. Computers had been ready for us to make Doom, if we were sly enough. Nobody had realized it.

One of the great pleasures of life is seeing skilled professionals work a hard problem together.




I was blessed to see Doom for the first time that way, in a room full of computer game programmers seeing Doom for the first time.


That year at the Computer Game Developers’ Conference, the guys from id Software all wore jeans with T-shirts that said “DOOM” on the front … and “wrote it” on the back.

Everybody there allowed them a little swagger.

17 January 2024

Good rounded corners are hard

A Twitter thread from Ken Kocienda <@kocienda> I want to keep about rounded rectangles, a signature element of Apple from the Eldar Days:


Consider the round rectangle (roundrect), a four-sided figure with rounded off corners. They are often used in user interface designs… but they have a built-in aesthetic glitch which I’ll describe later. First, to make one.

Round rectangles are generally specified with a rectangle’s width and height plus a radius for the corners, like this 200×120 rectangle with a corner radius of 20.

A fully-rounded off roundrect has a corner radius equal to one-half the shorter side, like this 200×120 rectangle with a corner radius of 60.

Once you have one of these fully-rounded off roundrects, you can imagine making them in a different way. Start with two circles of radius 60, position their centers 80 units apart, and connect the top-most and bottom-most points of each circle with straight lines.

Then erase all the interior lines. This shape is also 200×120: two times the radius plus 80 gives 200, and two times the radius gives 120. QED.
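The arithmetic above can be sketched in a few lines of Python. This is my own framing, not code from the thread; the function name is made up, and the numbers (radius 60, centers 80 apart) are the ones in the example.

```python
def stadium_dimensions(radius, center_gap):
    """Bounding box of a fully-rounded roundrect (a "stadium") built from
    two circles of the given radius whose centers sit center_gap apart."""
    width = 2 * radius + center_gap   # straight span plus two half-circles
    height = 2 * radius               # the circles set the full height
    return width, height

print(stadium_dimensions(60, 80))  # (200, 120) -- the 200x120 roundrect
```

Setting `center_gap` to zero collapses the stadium back into a plain circle, which is the degenerate case of this construction.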

The problem is with those connecting lines. Those lines are straight, but the remaining parts of the circles are not. These lines are tangents to the circles, i.e. they meet each circle at a single point.

Crucially, at that point, there is a discontinuity between a straight element and a curved element. The effect becomes even more pronounced as the aspect ratio between the width and the height of the rectangle increases. Look at the points I didn’t circle. See it?

Once I saw this, it became hard to unsee. At least it was for me.
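One way to state the glitch numerically (my own framing, not from the thread): a straight line has curvature 0 everywhere, while a circular arc of radius r has constant curvature 1/r. Along a roundrect's outline, the curvature therefore steps abruptly at the tangent point rather than ramping smoothly.

```python
def roundrect_outline_curvature(t, radius):
    """Curvature along one edge of a roundrect, parameterized so that
    t < 0 is the straight segment and t >= 0 is the corner arc.
    A line has curvature 0; a circle of radius r has curvature 1/r."""
    return 0.0 if t < 0 else 1.0 / radius

# Sample curvature on both sides of the tangent point for radius 20.
samples = [roundrect_outline_curvature(t, 20) for t in (-0.01, -0.001, 0.001, 0.01)]
print(samples)  # [0.0, 0.0, 0.05, 0.05] -- a step, with no transition
```

The eye is sensitive to that step in curvature even though the outline itself is unbroken, which is why the corner reads as slightly "off" once you notice it.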

There are shapes that don’t have this glitch. Consider the superellipse.

This is the shape used for iOS icons (or very close to it). The industrial design team at Apple used this shape in many of their designs.

Inside Apple, we usually referred to these shapes as “continuous curves”, since they didn’t have this point discontinuity glitch. It’s a subtly more pleasant shape. By playing around with the numbers, it’s possible to approximate the original 200×120 roundrect without the glitch.

Note the comparison between the continuous curve (in blue) and the roundrect (in black). The roundrect sticks out right in an unsightly way at the point of the discontinuity.

Now the “problem” with continuous curves is that they aren’t built into any 2D computer graphics system in any operating system I know about, and they can only be approximated by bezier curves, which are a common 2D graphics feature. So, a lot of people are familiar with bezier curves, and they’re easier to work with, so that’s usually what gets used. Too bad.
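For the curious, here is a sketch of generating points on a superellipse, |x/a|^n + |y/b|^n = 1, the family of shapes the thread describes. The exponent 5 is only an illustrative choice of mine; the thread doesn't give Apple's actual parameters.

```python
import math

def superellipse_point(theta, a, b, n):
    """Point on the superellipse |x/a|^n + |y/b|^n = 1 at parameter theta.
    copysign handles all four quadrants; n = 2 gives an ordinary ellipse,
    larger n approaches a rectangle with ever-softer corners."""
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    x = math.copysign(abs(cos_t) ** (2.0 / n), cos_t) * a
    y = math.copysign(abs(sin_t) ** (2.0 / n), sin_t) * b
    return x, y

# Every sampled point satisfies the defining equation.
a, b, n = 100, 60, 5
for k in range(16):
    x, y = superellipse_point(2 * math.pi * k / 16, a, b, n)
    assert abs(abs(x / a) ** n + abs(y / b) ** n - 1) < 1e-9
```

Unlike the roundrect, this outline has no point where a straight segment meets an arc, so its curvature varies smoothly all the way around.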

12 October 2023

The Democracy Laptop

Quasi-Normalcy names a dream I share.

You know what, fuck it, I don’t want some frivolous, artisanal, lighter-than-air computer with no customizability, no upgradeability, no reparability, no ports, and a lifetime of maybe 3 years if you’re lucky. I want a fucking great BEAST of a computer that’s designed to last a minimum of 50 years, with ports up the wazoo and optional drives for every kind of media! I want modular components that you can drop in a bog for a year, dry them off, and have them still work fine! I want them to make a noise like “ker-chunk!” when you slide them into place! I want a switch that you pull to turn it on! And I don’t want software that constantly forces you to get a pointless, cosmetic “upgrade” every few months either! I want durability! I want longevity! I want satisfying haptics! I want Silicon Valley to go fuck itself!

Damn straight. I want a computer designed and built for the ages, meant to deliver a lifetime of use. The Honda Cub, the Leatherman Multitool, the Fender Stratocaster of computers.

This is of a piece with my dream that tech platforms should be democratically accountable public utilities, because contrary to the assumptions of capitalism, both industrial manufacturing and software / electronics technologies incline toward robustness and democratic pervasiveness. So I have thought for a long time about an Office Of Reference Computing which releases a new Democracy Laptop once every ten years. For the rest of the decade, the design stays unchanged save for necessary bug fixes. As much as possible it employs open standards; when it cannot it sets an open standard. The ORC would be the opposite of Agile, engineering with the kind of flinty deliberateness and caution they use at NASA.

The hardware would be designed to be durable, reliable, inexpensive, and easy to maintain. There would be maybe a few variations — the smaller one, the bigger one, maybe an extra-nerdy one that is more hackable — but they would be interoperable, use as many of the same parts as possible. The ORC would manufacture an initial run of maybe thirty million of them, and sell them at cost … that is to say, projected per-unit cost to manufacture in year four, so they would initially sell at a loss, to seed the ecosystem. Of course every schoolkid would get one.

The operating system & core applications would be designed to be stable, secure, and wicked fast on the hardware rather than cutting-edge. They would be released open source, and use file formats legible as plaintext as much as possible. Encryption and other privacy measures would be baked in from top to bottom. The UX design would focus on accessibility, coherence, patterns which they hope other applications will steal, and a nice steady learning curve which makes them easy to pick up but also worth digging into.

The ORC would maintain an application agora which makes it easy to publish open-source software components, datasets, and complete applications. They would police these tools to make sure they were not engaged in shenanigans. A user would be able to download whatever applications they wish from the agora at no cost. One can make money on stuff released to the agora — a Democracy Laptop watches what software it runs for how long, and tallies up a hashed report to the ORC so they can say, “hey, Ms. Developer, 1000 people each spent 10 hours with your tool this month, so here’s 10,000 pennies for your trouble”. Of course to be eligible for that, someone at the ORC is going to look at that tool, to make sure it isn’t breaking things.
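The payout rule above is simple enough to sketch as a toy. Everything here is hypothetical (the function names, the report format, the penny-per-user-hour rate), just a way of making the "1000 people × 10 hours → 10,000 pennies" arithmetic concrete.

```python
from collections import Counter

def tally_payouts(usage_reports, pennies_per_user_hour=1):
    """usage_reports: iterable of (hashed_user_id, tool_id, hours) tuples,
    as a Democracy Laptop might submit them. Returns pennies owed per tool.
    User IDs are already hashed, so the ORC never sees who ran what."""
    pennies = Counter()
    for _user_hash, tool_id, hours in usage_reports:
        pennies[tool_id] += hours * pennies_per_user_hour
    return pennies

# 1000 users each spending 10 hours with one tool, per the example.
reports = [(f"hash-{i}", "ms-developer-tool", 10) for i in range(1000)]
print(tally_payouts(reports)["ms-developer-tool"])  # 10000 pennies
```

A real version would want rate limits and anomaly detection so nobody games the tally, which is part of why someone at the ORC reviews each tool before it becomes payout-eligible.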

Those pennies, plus the cost to design & develop and tool the construction of Democracy Laptops, come from the public purse. I like Georgist taxes, but maybe you have even better ideas?