08 June 2021

Esoteric cultural appropriation

Where I am coming from

Forgive me a lot of throat-clearing; the cultural politics make it necessary. You can skip ahead to the next section, Kabballah & qabalah, if you want to get right to the meat of what I am here to say.

I dread resting too much of the legitimacy of this kind of commentary on identity categories, ratification by authority, or scholarship pissing contests. I would rather have this comment read on the merits. But situating myself does inform what I say, so: I am an assimilated American Ashkenazi Jew. I am not practicing, though I have thrown a kickass ritually-correct seder every year for decades. I am also a practicing, though lazy, Hermeticist with modest formal investments from a lodge in the Golden Dawn current. I am a modern Pagan who counts Ha’Shem among the gods of my personal pantheon, as the “god of my people”. Ha’Shem has a place but not an icon on my personal altar, physically above the places of all of the other gods; I hold this to be consistent with Ha’Shem’s first mitzvah, לא יהיה־לך אלהים אחרים על־פני.

I consider myself unqualified to study Jewish kabballah as a practice, though I am an enthusiastic amateur scholar in an academic sense. I have some Hebrew & Torah scholarship under my belt, but not enough. And I am not living a life of Jewish practice.

Hermetic qabalah is integral to my spiritual practice and outlook. My practice is modest and I do not want to overstate my scholarship. But neither is trivial, and after decades of engagement they run deep into my bones.

I also need to articulate my cultural politics. I am committed to the pursuit of social justice. Advocacy is not one of my core personal projects, but I believe I have given it at least the attention which every person should. I admit to some reservations about the particular school of social justice praxis which dominates the culture of social justice advocacy at the moment ... and I count myself allied with it, because social justice advocacy is more important than my quibbles, and that school is much more right than wrong on the merits.

That said, among those reservations about common social justice praxis is a chunk of the discourse around cultural appropriation. Sometimes it is plain wrong about how culture works. Ownership language — “that does not belong to you” — serves us poorly. And I am mortified by the implications of some discourse about cultural appropriation. The implication that each ethnic people must hew only to the cultural forms of their ancestors courts the worst possible Blut und Boden Cultural Purity “traditionalist” bedfellows.

That does not mean that I dismiss concerns about cultural appropriation. They are vitally important. I believe that we see countless examples of appropriation which we have an obligation to combat.

White people need to stop wearing warbonnets, right?

Modern Pagan culture has a lot to answer for here. Western occultists have a lot to answer for here.

Kabballah & qabalah

This post was born as a pair of Twitter threads, the first inspired by a short conversation with another Jewish occultist unhappy with gentiles’ use of kabballah. I said to them:

I have complex ambivalence about all this because I am both Jewish and invested in Hermetic qabalah, but I have no ambivalence in saying that you are 100% right in finding antisemitism woven deep into the history and structure of those magical systems.

Is Pagan & occultist qabalah cultural appropriation of a closed Jewish tradition? This essay is long because the question is complicated. The history is appropriative and the practice easily can be appropriative, but I believe that there is a lot of space for thoughtful gentiles to engage with it responsibly.

First, one needs to introduce a distinction between kabballah, cabala, and qabalah; esotericists use these different spellings to reflect the distinctions between these related systems.

Kabballah is a body of Jewish practice & ideas crystallized in the 16th century, grounded in writings from the 13th century, drawing directly on ideas and practices at least a couple of centuries older, with many much earlier antecedents ... including a mythic lineage attributed to Moses. I lack the scholarship to judge arguments about stuff like neoplatonism and other gentile thought & practice influencing proto-kabbalist thought & practice, but it would be naïve to imagine that an esoteric school created by a diaspora people is entirely novel and unique.

Cabala is a little out of scope here, and my expertise on it is weak, but I must mention it to round out the picture. Renaissance-era Christian occultists built symbolism for their own use drawing directly on kabballah, modifying the source significantly to suit their own purposes.

Qabalah comes to us through a clearly-identifiable, narrow door: the Hermetic Order Of The Golden Dawn, a nominally Christian, late 19th century English quasi-Masonic organization of occultists, invented it. The HO G∴D∴ were like the Velvet Underground of esoteric groups: innovative, strongly informed by past practices, and hugely influential. As Brian Eno famously said of the Velvet Underground, “their first album only sold 10,000 copies, but everyone who bought it formed a band”.

The HO G∴D∴ had a “magical system” that was a stew of esoteric ideas from all over: Renaissance magic, alchemy, the classical proto-sciences (most significantly astrology), confused thirdhand accounts of Hinduism and Buddhism, neoplatonist theurgy, cabala, Kabballah, and more. Plus the HO G∴D∴ just made a bunch of stuff up, claiming legitimacy for it by attributing it to a mix of “secret traditions” and well-known cultural sources.

They held this mess together using symbolism they called “qabalah”, which drew directly on cabala and kabballah.

I am deliberately describing the HO G∴D∴ system of symbols, ideas, and practices a little flippantly here. It is bonkers. It is a mess. It is full of lies. But I love it. It is awesome. It has been massively influential for a reason. It works.

It should also be apparent from this thumbnail history of the HO G∴D∴ system that it is in many ways as culturally appropriative as anything can be. The HO G∴D∴ deracinated the culture of oppressed people, taking a lot of symbols with rich cultural context and then ignoring or crudely misinterpreting that context, using weighty symbols just because they looked and sounded cool. This is minstrelsy of the culture of oppressed people: not just twisting the source ideas but misrepresenting those alterations as an authentic presentation of the original. This includes when the HO G∴D∴ made stuff up, then claimed it had value and legitimacy because it was sourced from the culture of oppressed people whom they did not actually understand. The HO G∴D∴ constructing their qabalah from the bits and bobs of kabballah which suited them was both a reflection and an exercise of antisemitism.

The HO G∴D∴ in London were literally at the seat of a Christian supremacist white supremacist colonialist empire at its apex, playing with the cultures of religious & cultural minorities who had been crushed under that empire’s boot.

Any esotericist who engages with qabalah must reckon with the appropriativeness and bullshittiness of this history. And understand that the influence of the HO G∴D∴ system is everywhere in anglophone esotericism. Bits and bobs of it show up in Wicca and almost all other modern Pagan practices, in New Age culture, in the banal astrology column in your local newspaper.

But one cannot accuse the qabalah of the HO G∴D∴ of being nothing other than the closed tradition of Jewish kabballah. It is not; it is different stuff, precisely because it is such a mix of different sources and misunderstandings and misrepresentations and outright inventions.

As a Jew who does some of the practices from the HO G∴D∴, I have deep unease with elements of those practices. There is a core HO G∴D∴ ritual which involves pronouncing the divine name יהוה, which violates one of the few Jewish practices I am rigorous about! (I found myself a lodge which substitutes another name.)

So while I am not in the same place they are, it should be evident why I have boundless love & respect for my Jewish cousins who are disgusted by qabalah and gentile esotericists’ use of it. All esotericists who engage with qabalah need to grapple with the knowledge that there are Jews who have a legitimate disgust at these practices.

All of which is laying track for the opinion I came here to offer.

I personally am a Jew who is okay with qabalah. It does not bother me. If you are a gentile, it is cool with me if you work with qabalah.

Which is not to say that all qabalah is okay, or even okay with me. There sure are ways of working with qabalah which are offensively appropriative. If you are not Jewish and pitch your teaching as The True Secrets Of The Jewish Mystics, that is very bad. (It is also pretty bad if you are Jewish, though a different bad, with different cultural politics.)

This places me in respectful disagreement with Jews who say that gentiles need to step away from qabalah — emphasis on respectful. This is an ongoing conversation. I might well be wrong. They might be right. I am presenting a case to a candid world.

I disrespectfully disagree with anyone who claims that qabalah is nothing other than an appropriation of the closed tradition of Jewish kabballah. That is a crock of shit. Qabalah has a bunch of stuff cribbed from kabballah in there, but it is a different thing.

Blues begat jazz. Blues begat rock ’n’ roll. Rock ’n’ roll begat rock. Rock begat heavy metal. But that does not make Iron Maiden a blues band. Rock is full of cultural appropriation. White artists and listeners often need to do a lot better at recognizing and avoiding engaging in appropriative moves. But it would be absurd to say we should stop listening to Iron Maiden because of the distant echo of the Bo Diddley beat. In the same way, qabalah is part synthesis, part novel creation. It is living culture. Suggesting that it is just repackaged kabballah is like saying Star Wars is just repackaged Kurosawa. Yeah, there is a ton of Kurosawa in there, but it is also planetary romance and space opera and a western and a flying ace story and and and …

I cannot see an argument that the cultural appropriation in the history of qabalah makes qabalah itself illegitimate which does not ultimately make practically all of culture illegitimate. Nothing is created in a vacuum.

That line drawn, I hold that this space contains hard questions.

I ask my fraters, sorors, and nonbinary siblings in the body of Golden Dawn practice to face the aforementioned problem that practicing Jews cannot do the core ritual of the G∴D∴ tradition as written.

Though I am okay with gentiles engaging with qabalah, in my opinion gentiles should keep their goyish hands off of Jewish kabballah. “The rabbi said it was okay” is not okay with me, even though there are plenty of Jews who disagree and find it acceptable. Ten Jews, eleven opinions, which stacks on top of a whole intra-Jewish conversation to be had about who among us is qualified to work with kabballah (ahem, sexism, ahem), but that is yet another question.

I use the Hebrew names of the sephiroth of qabalah as useful terms of art. Those are the same words used in kabballah to mean things which are similar but not the same. I am cool with that. One word can mean different related things. But we must at least note the distinction.

I do not want people to recruit my words to say that the appropriativeness of qabalah is all in the past, nothing to worry about now. We still need to step carefully in talking about its relationship with Jewish ideas, tradition, and practice. I do not want people to shrug and say “Jonathan Korman said qabalah is A-OK”, or to point to anything I say here as the final word. I am one person with a considered opinion, no more, no less.


A while back, an internet acquaintance used some tarot imagery in a game they were working on, and was confronted by a critic who said that this was irresponsible because “tarot itself is a closed practice from Roma people”. I found that surprising and not credible.

I did not see a way to reconcile that with the history I know of the Colman-Smith deck, the most famous tarot images in the world, which were pictured in the illustration that inspired the comment. That deck is based on deliberate alterations to the secret deck designs which the white Brits of the HO G∴D∴ invented for their system, making it a creative obfuscation of a thing they largely made up rather than drew from other existing tarot decks. Those designs were indeed appropriative of a number of cultural sources, but not, so far as I know, of any real practices of the Romani people aside from the idea that they used tarot cards, with different designs, for a different method of fortune-telling.

Tarot cards themselves can be traced to card games played in Italy in the mid-15th century. The cards & designs from that era map only sloppily to the “canonical” 78 card deck defined by the HO G∴D∴ based on their numerological and other symbolism. Whether the use of tarot cards for fortune-telling originates from Roma practices at all remains hotly debated among scholars. Documentation of the practice is tangled up in romantic nonsense invented by non-Roma occultists in the 18th & 19th centuries.

Some time after that exchange about using tarot imagery, the Romani members of Seems Like Your Spirituality Is Just Cultural Appropriation: The Religion™ circulated an open letter: Your Tarot Card Practice is Romani Cultural Appropriation.

The argument offered by that letter that Roma people need justice and deserve reparations is ironclad. But the letter’s account of the relationship between Jewish kabballah, Hermetic qabalah, and tarot, on which its claim of tarot as a “closed practice” rests, is thoroughly confused about the history and mechanics of those, which I outlined above.

This presents a serious and tricky question. Even if we accept the letter’s uncertain claim that tarot fortune-telling was a Roma invention, I do not know how to weigh how irresponsibly appropriative it is to use different cards, with different symbolism, for different fortune-telling methods.

I mean, I have genuine uncertainty. It is not hard to see how a white storefront tarot reader in a “g*psy” costume is engaging in appropriative minstrelsy. But is it wrong to engage in sortilege divination with those particular cards because some Roma have a story about them having Roma origins? Much slipperier.

What obligation do we have to be deferential to oppressed peoples’ ahistorical claims of cultural ownership, as a gesture of respect toward them? I don’t ask that as a way to be dismissive, I consider it a serious question.

Consider how social justice advocates agree that it is a racist gesture for white people in the US to wear their hair in matted ’locs, because doing so is appropriative of Black culture. Historically, that does not hold water. Black Americans are hardly the only people to invent matted hair — indeed, ’locs did not emerge organically from Black American culture but evidently out of imitation of Black Jamaican Rastafarians, which raises hard questions about who is appropriating whom — and there are white historical examples ranging from Polish plaits to Shakespeare’s references to “elf-locks”. But whatever the history and cultural logic, it is surely true that a white person in ’locs knows that Black people will read them as engaging in a racist insult, so they are in an important sense choosing to insult. We have a broad cultural agreement that, given this, justifications white people make from cultural history are less important than the readings of Black people.

How far do we take that? One can easily make a much stronger case from cultural history that the history and structure of rock ’n’ roll makes it cultural appropriation. If some Black people tell white people to stop performing and listening to rock ’n’ roll, must we heed that? How many such critics are required before we would need to?

That last question is particularly pointed in the case of the Roma claim to tarot. I have already seen people reference this as the consensus among Romani people, while it is my understanding that in Europe Romani people overwhelmingly oppose this move. How is a gadjo outsider like me to evaluate that?

I have no satisfactory answers for those questions, only more questions and worries.

I feel queasy finding myself even examining the possibility that oppressed people of color are wrong about their own history or what constitutes a racist insult to them. That is not a good place for a white person to stand. But the nausea I get contemplating the Preserve Cultural Purity implications of this plea is even worse. Anthropologists, historians, sociologists, and folklorists debunked the whole concept of cultural purity ages ago; culture just does not work that way. Worse, advocates of social justice saying White People Need To Stick To White Things court very unwholesome bedfellows. The letter arguing for tarot as a closed practice suggests that white people should instead turn to divinatory methods “that fit with [our] interests and heritage” like ... Norse runes. This aligns with the ideology of “Neo-Volkish” white nationalists who will heartily agree that white people should stick to their noble white heritage of the runes. Surely we do not want to go there.

Figuring out a better praxis

These examples are not peripheral issues. As someone with a hand in the modern Pagan & esoteric movements, I see them as the tip of a large iceberg.

Again, we have a lot of irresponsible cultural politics and confused spiritual practice to answer for. I have seen rituals which invoke Aphrodite, Kali, and Ix Chel as “names of The Goddess”, erasing profound distinctions between these deities and deracinating the source cultures who named them. I have heard white people singing “Native American chants” which were probably fabrications ... not that singing authentic NA chants would be any better. There are countless other such examples of appropriative moves far more cringe-inducingly wrongheaded than qabalah or tarot.

Our tools for thinking about this are far too blunt. I do not have answers; I have worries.

In service of this question, I offer some commentaries to chew on.

Dr. Heidi Hart’s paper Everybody Wants to Be ‘Origines’: Nativism, Neo-pagan Appropriation, and Ecofascism looks at implied ideas of Cultural Purity in terms of their relationship with scary populist movements on the right.

Ultimately, any group that follows a purity mentality, seeking deep, unadulterated roots in nature, risks nativist thinking and exclusion of those without the privilege of imagining themselves doing heroic deeds in equally imaginary, old-growth woods.

Again, I think Team Social Justice should not want these bedfellows.

Leftist Pagan Rhyd Wildermuth offers the essay A Plague of Gods: Cultural Appropriation and the Resurgent Left Sacred as a skeptical response to common ways of framing these questions. He offers both an analysis grounded in left politics ...

While it may seem a bit harsh to compare social justice arguments for cultural exclusion to capitalist enclosure and private property, this cannot be ignored. Much of the discourse—especially from American activists—has inherited (or has been colonized by) the capitalist logic of property. In their framework (unacknowledged or not), cultural forms are property belonging to a specific group of people, and using those forms without express permission is theft or trespassing.

... and another drawing on an esoteric way of classifying understandings of sacredness.

The left sacred is a transgressive sacred, a sacred that seeks to spread and contaminate the rest of life with its power. The other hand of the sacred, seen in the purity codes of Leviticus, is the right sacred, the sacred that polices the borders between the sacred and the profane with an aim to stop the sacred from spreading to places where it cannot be controlled any longer.

He turns this analysis toward the same letter I referenced asserting that tarot is a closed Roma practice.

Consider again the above cited essay, which essentially claims that the Romani “own” Tarot. The authors argue that, because they believe the Romani are the authors or creators of Tarot, that Tarot is inherently Romani and has an intrinsic Romani property, they therefore have the right to assert an intellectual property right over its use. It is improper for Tarot to be used by others because such use takes it out of its proper place. And people who use it without their permission are therefore violating Romani ownership rights, a point that can be seen particularly in the essay’s repeated statements that, if someone wants a Tarot divination, they must pay a Romani person to perform it for them.

Asserting ownership and attempting to privatize something that has become common parallels the capitalist logic of property, especially during the birth of capitalism. The repeated use of the word “closed” in their essay about Tarot echoes the capitalist logic of Enclosure, the privatization of something that had been commonly used.

Wildermuth does not address the cultural politics of signaling respect for oppressed people, which as I say above I think needs an integral place in our practices, but I find his deep dive into the origins and meanings of the word “appropriation” illuminating. This is the kind of clarity we need.

Esotericist Phil Hine reflects on how the origin of chakras is as messy and strange as kabballah / cabala / qabalah.

Well the simplistic answer would be that chakras are Hindu and stolen by naughty colonial westerners then dropped into virtually every popular occult book written since 1910. That's not what happened of course.

He also observes how historical practitioners’ attitudes do not align with our cultural politics.

The problem for those people who insist that particular traditions can only be followed by those who are of particular cultures is that you can find, in the tantric literature, explicit statements against this view.

Conversations between modern Pagans and indigenous people can be very fraught. Ivo Dominguez, Jr. has a memorable story about how affinities can read as appropriation.

The last person to approach me, approached me with a stern face. I listened as he harangued me for a few minutes, as the room continued to empty, about how wrong I was for appropriating his cultural heritage. He said that as a Native American he was particularly troubled that this should occur at a conference that he expected to be more forward thinking. When he was finished, I told him that nothing in my opening had been borrowed from his culture. [...] I ran through a short list of places and peoples that had developed these sacred ideas on their own. I also said that the use of percussion is global. I stated my belief that there are some things that are perennial and universal and as such will appear again and again in many times and in many cultures. He was not completely convinced, nor did he soften his tone or demeanor.

This is painful to see, because like a lot of modern Pagans I have hopes for alliances between white modern Pagans and indigenous peoples. It is tempting for white modern Pagans to romanticize the natural affinities there, but I do think they are not simply a fantasy. If that is to happen, we will need to overcome how indigenous people have every reason to eye white modern Pagans with distrust.

Don Frew tells a story about this tension unfolding at a big interfaith conference.

In the midst of this, a number of Native American representatives, in the middle of one of their programs, read a “Declaration of War” against all those who would “steal” their spiritual practices. The Declaration named Neopagans among the thieves.

We immediately arranged a sit-down meeting with a number of the Native American Elders. We explained that their information about us was coming from the same news sources that so often misrepresented their spirituality. Why should they trust those sources to be any more accurate about us? We shared information about our practices, how people living on and in relationship with the Earth in different parts of the world will come up with similar practices, how ours are rooted in historic examples of the practices of our ancestors in Europe, and how we shared their disgust with those who falsely pass themselves off as Native American teachers to make money.

Some white modern Pagans have framed at least some contemporary pagan practices as “European indigenous religions”, like Andras Corban-Arthen, who offers A Declaration for European Indigenous Religious Traditions.

We are members of diverse European indigenous ethnic cultures who seek to revitalize and reclaim our ancestral religious and spiritual traditions. We honor those who went before us, who gave us our life and our heritage. We are bound to the lands of our ancestors, to the soil that holds their bones, to the waters from which they drank, to the roads that they once walked. And we seek to pass that heritage to those who come after us, whose ancestors we are in the process of becoming – our children, our grandchildren, and the many generations yet to be born. We send solidarity and support to those other indigenous nations, races and religions who are also engaged in the struggle to preserve their own ancestral heritages.

I have met Corban-Arthen; as a modern Pagan I cannot help admiring both his love for local pagan practices in Europe and the sincerity of his project to support them. But the cultural politics of this approach worry me on several levels.

The term “indigenous people” emerged from shared efforts by oppressed people, living in the consequences of European colonization & genocide, to ground their political claims and build solidarity among peoples facing similar predicaments. I have to imagine that many indigenous people would object to so many Europeans shouldering their way into that cohort and drawing on that hard-earned legitimacy. There are some peoples like the Saami in the Nordic countries and the Basques in Spain & France who are broadly recognized in the global movement for indigenous people, and there are numerous others fighting for greater recognition, but Corban-Arthen and his European Congress of Ethnic Religions cast a much wider net. One can easily see how that can be read as a form of unwholesome appropriation.

Further, this disrupts the hopes that many modern Pagans have for a Big Tent Paganism as a cultural movement. It distinguishes Pagan practices which are admittedly recent historical inventions from practices which have (or at least claim) a much older historical provenance ... implying that if a tradition has deep historical and geographical taproots, that gives it a claim on legitimacy which the newer religious movements lack. That seems like a bad place to stand, both politically and theologically.

Indeed, that takes us again to Blood And Soil rhetoric with its horrible implications and bedfellows. I will gladly swallow such worries when the Lakota, Aymara, and Maori peoples talk about their people and their relationship to the land and bake that into the choice of the word “indigenous”, because fergawdsake I am a white American and have an obligation to have their backs on anything I can, in the face of a history and a present of horrors and oppression. But white people do not merit the same latitude.

A start on a model of cultural appropriation

I have my own way of thinking about the base principles behind the movement to combat cultural appropriation.

One can understand it as emerging from an ambivalence about both modern and postmodern understandings of Authenticity and Identity. Postmodern sensibilities see these through a lens of multiplicity & fragmentation; there is no universal subject with an objective understanding of all culture to retreat to. Social justice advocates rightly reject modernist universalism with its colonialist / Western chauvinist / etc implications. But at the same time, oppressed people compelled to inhabit an identity which subjects them to injustices yearn to ground that identity in some kind of authenticity, pushing back against a vulgar postmodernism which rejects any stability there. “Culture, Authenticity, and Identity are not slippery mush!” they say, and so turn to the response, “This thing is real and belongs to these people who can be clearly identified!”

But as Wildermuth describes above, countering colonialist modernist universalism with claims like “the chakras belong to South Asian people” still accepts modernist conceptions of property rights and crisply distinct “peoples”. That can act as a convenient shorthand, but does not hold water on close examination.

This frustration with some of the sloppy thinking behind common manifestations of the movement against cultural appropriation does not mean that I reject the entire framework and its project of resistance. There are at least three distinct modes of cultural appropriation which I think can be clearly identified and combatted.

  • Colonialism: when people in a privileged position employ their power to deny oppressed people who have a cultural and historical link to a thing access to it, while exercising or exploiting that same thing from their base of privilege. So for example, when white people in a posh neighborhood open an “authentic” taqueria where Mexican immigrants could not.
  • Minstrelsy: when people in a position of power misrepresent an oppressed people and their culture for the benefit of the privileged, sometimes even claiming credibility for their twisted version by pointing to its supposedly authentic roots. As the name indicates, the minstrel show is the classic example, but there are plenty of esoteric examples of inventing bogus ancient foreign lineages and cribbing poorly understood elements of “exotic” traditions, as in the case of qabalah.
  • Deracination: when the privileged exercise some cultural elements of the oppressed, stripping them of their context and full meaning. For example, white people wearing arbitrary bindi marks because they look cool.

Because the objection to cultural appropriation emerges from and serves a social justice politics, we need to apply an awareness of privilege & oppression to understand the stakes. Avoiding at least these three forms of cultural appropriation is righteous in the name of the basic politeness which the privileged owe the oppressed in recognition of the power relationship in which they operate ... that politeness is only one face of a necessary bigger project of correcting injustices.

28 May 2021

Putting people together to create new products

This article originally appeared on the Cooper website in October 2001.

When companies plan out a new product (or service, or business process) they often think of the effort as the coordination of two teams solving different problems. Engineering addresses the question “what can you make?” Marketing addresses the question “what can you sell?”

You could engineer a combined toaster and cell phone, but you could never sell it. Marketing would tell you that you have a product no customer would buy. Likewise, you might successfully market a car that runs on tap water, but the impossibility of building one makes it a meaningless product idea. Smart organizations know that they need to combine the insights from both marketing and engineering to find products that they can both make and sell.

You might think that those two perspectives cover everything you need for a success. Certainly, many products have had modest successes this way. But to have a big success, you need more than just engineering and marketing.

If you want to sell a new spaghetti sauce, for example, engineering can set up how you will cook and jar the sauce, while marketing can come up with ways to advertise, distribute, and promote it. But if you want loyal, satisfied customers who tell their friends to try the sauce, you have to make it taste good. Design addresses the question “what will people like?” It makes sure that the sauce tastes good.

A separate design team

Many companies give their marketing group responsibility for determining what people will like, but marketing must focus on customers and their purchase decisions. This differs subtly from design’s concern with users and their satisfaction. To succeed with a spaghetti sauce for children, you need marketing that will motivate adults to buy it, but designers need to give it a flavor that appeals to children. Marketing and design apply different skills to different problems.

With interactive products like software, consumer electronics, and Web sites, design means determining how the product will behave when people use it. Since engineering creates the software which generates the behavior, many companies leave it to engineers to decide what behaviors the product should have. But leaving the design in the hands of engineers tempts them to create behaviors which they can build more easily, or which may make sense to them but not to users. Engineering and design also apply different skills to different problems.

Responsibility, authority, and resources

Any organization draws its shape from the responsibilities assigned to its members. As most people in business know, if no one has ownership of some area of responsibility you can expect that it will not get done. Your organization’s success comes from the sum of your employees’ successes, so you must measure your employees’ effectiveness against their responsibilities. Your organization will only get what you measure.

You also need to give people the resources and authority necessary to meet the responsibilities that you give them. This governs how the different groups in your organization work together.

Design should have responsibility for users’ satisfaction with the product. Look carefully at your company—many organizations do not really hold anyone responsible for this! In order to accept this responsibility, designers need to have the authority to decide how the product will behave. They also need to gather a lot of information: they must talk to potential users about their needs, to engineers about the technological opportunities and constraints that define what the product might do, to marketing about market opportunities and requirements, and to management about the kind of product to which the organization will commit. 

Marketing has responsibility for the product’s appeal to customers, so they need to have authority over all communications with the customer. In order to do this, they need a lot of information resources including the results of designers’ user research and customer research of their own. 

Engineering has authority over all of the system architecture that users do not see. For the design to deliver its full benefit, engineering must have responsibility for building the behaviors that the designers define, on budget and on schedule. Too often, engineers get handed a schedule and vague product requirements, leaving them to guess what will satisfy management. Engineers need to get better resources in order to fulfill their responsibility, in the form of a clear description of the product’s behaviors which guides what they build and drives their time and cost estimates. 

Management has responsibility for the profitability of the resulting product, and therefore has the authority to make decisions about what the other groups will work on. In order to make those decisions, they need to receive information from all of the other groups: design’s product description, marketing’s analysis of the volume of sales they project, and engineering’s projection of the time and cost to create the product.

Design and engineering

In order to make this work, the design and engineering groups need to have an effective working relationship. They have the greatest opportunity for friction, but also the greatest opportunity for mutual benefit if they work together well.

In many companies today no one has specific responsibility for the satisfaction of users, so, by default, in those companies this falls under engineers’ broad responsibility for the quality of the product. As a consequence, engineers tend to regard designers’ plans for product behaviors as suggestions rather than directives, and instead implement behaviors that make sense to their engineering sensibilities. Software engineers, in particular, tend not to respect mere authority, and may call a design “impossible” in order to take control when they distrust designers’ judgment. But if management makes it clear that the designers have accepted real responsibility for users’ satisfaction, designers will have the respect of the engineers who, in turn, will fulfill their responsibility to build what designers specify.

Furthermore, designers must serve the engineers well by writing very clear behavior specifications. Engineers find vague requirements frustrating, because they expect that vagueness will lead to requests for changes later in the process, wasting their time when schedules get tight. Engineers value designers who can give them a specific picture of what the product should do.

Engineers also may fear that unsophisticated designers will demand the impossible. Designers have an obligation to understand the technology involved well enough that engineers can really implement everything they design. However, designers should not concern themselves with ease of implementation, only possibility. Evaluation of the ease of implementation should only happen when engineering takes the design and creates a time and cost estimate to give to management.

The relationship between design and engineering shifts when the designers complete the behavior specification. Before that point, the designers draw on the wisdom of the engineers in order to understand the technological opportunities and constraints that control the vocabulary of behaviors which the product can use. This ensures that the designers deliver a design that the engineers can implement. 

After the designers have created the behavior specification, the situation reverses and the designers become resources for the engineers. No specification can anticipate every possible behavior and situation, so the designers must support the specification with explanations and elaborations as the engineers proceed with the creation of the product. This also acts as a check on the designers, ensuring that they deliver a clear specification.

The benefits of a good organization

Setting up the right responsibilities in your organization reduces costs and risks throughout the entire product development process. Engineering efforts become easier to manage because you can measure engineers’ work against the behavior specification, without mid-course changes. This makes it easier to keep costs and timelines from spiraling out of control. It also means that marketing has more lead time to prepare a campaign for the new product because they can get a good picture of it before engineering has finished their work.

The end result: great products. Your business benefits because a well-designed product spurs users’ enthusiasm, making it easier to sell, support, and market. Beloved products help to build your brand. And everyone in your organization can take pride in what they have produced.

Turning requirements into product definition

This article originally appeared on the Cooper website in August 2002.

In his newsletter article last month, Ryan Olshavsky outlined an overall process for defining new products and services, taking a look at the start of that process. But how do you get from understanding your users to a vision for an innovative product which will appeal to them?

Avoid roadblocks to innovation

For many companies, identifying what they should create in the first place is the hardest question in developing new products and services. They know how to build things, but they don’t have a good way to decide what to build.

Many technology companies simply follow the technical opportunities they see, hoping that the technology they create will find a market need. This strategy is high-stakes gambling. Many innovative products do come out of this strategy, which can result in huge profits—but many, many more “great new technologies” don’t go anywhere. This is why you want a user-centered process, not a technology-centered one. Start from an understanding of users, and find technology to serve them, rather than the other way around.

Other technology companies grow existing products by responding to the feature requests which their customers give them. In the short term, this guarantees that the product will serve a market. But you won’t get anything truly novel out of this process, just refinements of the product you started with, which may or may not be significant improvements. In the long term, you will have a product weighed down with feature creep, ripe for a competitor to come along and steal your market with an innovative new system that serves your customers in a better way. This is why you want a needs-driven process, not a customer request-driven process. Proactively figure out what your customers will want, rather than just wait for them to tell you.

Similarly, many companies have strong marketing groups who do quantitative surveys of their customer base. That kind of work is essential—it tells you where there are dollars to win—but it cannot give you true innovation because it only shows you how the market works now, not how it could work with the introduction of something new.

Innovation means delivering products and services that address needs that no-one else has seen. Development driven by technology delivers innovation, but inconsistently, because it just hopes to stumble across those hidden needs. Development driven by conventional market research doesn’t deliver innovation because it only identifies how many dollars are out there to pay for products that address the needs you already know about. Development driven by customers’ requests doesn’t deliver innovation because your customers tell you about the features they want, not the underlying needs that could be met with a truly innovative product.

To keep from leaving opportunities on the table, you have to target innovation directly by looking closely at what people need, and by giving planners the responsibility to invent new products that address those needs. Done right, this can produce not just a single product but a portfolio of products and services that address a range of needs for a range of users and customers.

Frame the question to find opportunity

This is why you want to marry traditional quantitative marketing research together with qualitative user research. In last month’s newsletter article, we discussed the importance of talking to users, and in an earlier article, Reconciling market segments and personas, we talked about applying different tools to thinking of people as users and as customers. An understanding of your potential user population provides the most powerful fuel for the definition of new products.

You also need to have a general picture of your capacities. What kind of organizational resources do you have to apply? Do you have some basic technologies you need to think about up front? Do you have a clear picture of how your products and services fit together? Planners need to have answers to these questions in order to make sure that the organization will get behind the product.

That said, some companies go overboard with talking about their capacities at this early point in product development. Big companies often want to get all of the organizational players lined up at the start, and technology-driven companies often dig deep into the technology from the very beginning of product development. Committing to a specific technology, working group, or budget before you have a product definition to talk about can mean missing opportunities that lie along a different path. Giving planners just a little background in these areas goes a long way.

Structure your company to include planning

Making use of a keen understanding of users is not just a new technique. It demands political change in your company, realigning the way that your company distributes the responsibility for developing new products. Because technology and marketing alone are not enough, you need to introduce a “planner” role that has responsibility for defining new products. I talked about how the responsibilities of these three groups fit together in my previous article.

Planners are a small but essential component in a company that creates innovative products and services. They are the ones who should be responsible for defining new products, and they are the ones who have to create the form and behavior specifications that will drive managers’ decision-making and engineers’ development work. That’s a heavy responsibility, but an important one: if no one in particular within your company is held directly responsible for coming up with innovative product concepts, then your company cannot do it consistently.

Maintain planning team continuity

At least part of the planning team involved in the creation of the product should stay involved from beginning to end, from the initial research to the final testing.

Continuity with research provides enormous benefits: nothing can substitute for the subtle benefits of direct exposure to users, especially during the definition of a form and behavior specification. Even when others have done more thorough or skillful research, planners will commonly benefit the most from research where they have the intimate familiarity of having participated in the process.

Similarly, after the creation of the form and behavior specification, its authors will have a facility with its contents which no one else can match. Keeping the planners involved as the work proceeds helps maintain the integrity of the product vision, and saves developers time and energy.

Keep the core planning team small

Once they’ve recognized the range of things that will go into making a product, many companies try to involve numerous players in the early planning process: engineering, marketing, managers, and so on. In practice, this weighs down the process with coordination and communication overhead. It slows the process and demands enormous organizational effort. It also leads to uninspired, compromised products that are plagued with feature creep, rather than distinctive products with a strong vision.

You want a small, fast-moving, decisive core planning team. Involve other people in the company, but have those supporting people respond to the core team’s requests for information and discussion, rather than the other way around. This will help the planning process go more smoothly. At Cooper, we generally assign a core team of just two planners to a project, supported by others as necessary, because a team of two can communicate closely and work quickly.

A small planning team also helps protect you from committing to ill-conceived products. If the product defined by the planning team doesn’t make sense to develop, it won’t yet have the momentum of many people’s involvement.

Think about structure first

The process of defining a product can easily get lost in details. Planners tend to start looking at details as soon as possible, but this pressure can also come from outside the planning group: In web and software projects, managers often like to see screen mock-ups as soon as possible, in order to have progress they can see. This creates problems: looking at one element of the product in detail, planners discover problems in some unexamined assumption about the product’s structure, which means making changes to that structure, which then requires revisiting the details of some other element which had been discussed before, which has implications somewhere else. The planning gets bogged down in interdependencies within the product itself.

To avoid this, planners should split the time they spend working on the form and behavior specification in two. In the first half, they work on structure: the major elements of the product, the basic scenarios in which people will use it. Inevitably, this will demand a little bit of dipping into details, but planners should stay disciplined about focusing on structure. Once they have resolved the structure and switch to looking at detail, they need the opposite discipline, resisting the temptation to revisit and change the overall structure. Again, planners cannot avoid a little bit of cheating, but they need to minimize the backtracking as much as possible.

Take a short vacation from feasibility

Many companies, especially technology-oriented companies, start their thinking about new products by looking at the technology available. This makes sense: a form and behavior specification that describes a product you cannot build, or a service you cannot deliver, does no good. But stepping away from thinking about those real constraints for a little while, at the beginning of the design phase, can lead to better products that sometimes even turn out easier to build.

Setting aside feasibility for a bit frees the planners to think entirely in terms of the initial requirements, clarifying their vision. This often prevents feature creep, because the resulting product vision does not include any elements that are there only because the technology permits it. Occasionally, you can produce real breakthrough ideas from unrealistic brainstorming. As they work toward the form and behavior specification, planners infuse more and more reality into their thinking, progressively scaling back the idealized product they initially conceive.

Articulate product ideas in coherent chunks

Everyone knows that product design benefits from an iterative process, where the team proposes, reviews, and refines ideas. Your planning team will have to brainstorm and reflect on a number of product possibilities, but do not try to expose the whole organization to these ideas. Your early ideas probably will not yet have reached a point of coherence where people can communicate and discuss them well.

Nor do you want to ask your planners to create heavy written documentation of their interim ideas, or bog them down with doing too many status reports. The whole point of checking in with the planners during this phase is to provide feedback to them as the product concept takes shape. Creating heavy documentation in the middle of the process tempts planners to defend interim ideas too strongly, and confuses the organization about what the true “blueprint” is. Interim discussions with the company stakeholders should take the form of small, informal working sessions, where planners can speak from simple sketches on a whiteboard, or in a PowerPoint file. At Cooper we call these “chalktalks.”

Ultimately, you want the company to work from a form and behavior specification that provides a final and coherent description of what the product is and how it should work. Next month, we will talk about this kind of document in greater detail.

Not all web sites are alike

This originally appeared on the Cooper website in April 2003.

With the Web now completely ubiquitous and familiar, and the frenzy of getting on the Web for novelty’s sake long past, companies routinely turn to the Web to address many types of challenges. A Web site can offer a simple brochure for communicating with customers, a way to disseminate information to people within a large organization, a tool for running complex business processes, and much more. Because different sites try to address different problems, creating them requires different kinds of planning and development.

Although it may sound like a truism, many people have a hard time talking about the distinctions between different kinds of Web development, which makes it difficult to decide how to proceed. This article offers a quick survey of various Web projects and of the techniques that address them.


All Web projects call for a measure of Web production: creating the HTML, images, et cetera that will manifest in the browser. Though basic, this work remains challenging, even with the many tools and skilled professionals available in the wake of the Web boom. Making appealing, readable pages that load quickly and work for readers with different browsers on different devices remains tricky and demands careful craftsmanship. This goes hand-in-hand with the planning of the site’s look, even if the site does not have ambitions toward flashy visual design.

Where Web projects differ is in the planning they require before production begins. What a Web site must accomplish determines the kind of planning effort it requires.

A company site facing customers or partners should present the company’s brand identity. Companies with solid brand guidelines must provide Web developers with these guidelines and give clear direction about how the site fits into them. New sites, new companies, or Web-oriented companies may call for Web-oriented efforts in developing branding, rather than just an extension of an existing brand identity. This may not require a major effort, but it will require clarity up front.

Any site with static content—in other words, any site—will demand not only the creation of that content but also the definition of the information architecture that defines the navigation for the site. Even the simplest site, with just a dozen static pages, demands a little thought about the information architecture before production begins. And almost any corporate site is more complex than a small online brochure.

Complex sites

Web sites grow complex for three broad reasons. A site may have a lot of content, which makes ensuring that people find what they want tricky. A site with dynamic content makes it difficult to ensure that changing information is current and that it goes to the right place in the site. Last, a site may enable users to take some action, which introduces the possibility that they will not succeed in taking the action they want.

Often, a site involves some combination of these three elements. A news site may have content that is both dynamic and plentiful. An online database may combine large amounts of content with the ability to take action by creating reports. An e-commerce site may combine all three, with a large amount of dynamic catalogue information and the ability to take action by placing orders. Each kind of complexity demands its own combination of planning processes, development skills, and supporting technologies.

Big sites and information architecture

Everyone has encountered sites where they cannot find the information they want, though they suspect that it must appear somewhere in the site’s many pages. Organizing large amounts of information presents challenges that did not start with the Web—authors, librarians, and file clerks have wrestled with these problems for centuries—but doing this on the Web calls for a unique set of skills in constructing “information architecture.” Pages must link to other pages in ways that make sense, allowing users to easily navigate the site and find what they want.

For a site with even just a few dozen pages, information architects perform extensive planning before their work goes into production. Choosing appropriate labels and categories for information can call for research into the information domain, analytical exercises applied to the content, and usability testing with potential users of the site.

Database technologies typically support the organization of a site; any major site today consists of content served through a database, rather than just a collection of fixed pages. While that database may simply make it possible to change the top banner of links on the site without changing pages individually, turning to databases creates the opportunity to do much more, including dynamic content.

Dynamic content and content management

Over time, a site’s content changes. This must happen in a way that doesn’t cause the information architecture to break down. In the case of a mostly static site, this may mean a periodic redesign to accommodate new content. But when a site’s content changes quickly—as on a news site, for example—the site needs to have a structure that accommodates that change in order to prevent the information architecture from degenerating into chaos. This requires its own kind of planning and supporting technology.

The database for a dynamic site must do much more than it does for a static site. For instance, it must know when content first appeared and when it will become “stale.” It may need to know rules for publishing bits of dynamic content to a homepage or other index pages, which requires planning for the way information will appear on the site, the decisions the site will make about what content to present where, and the supporting structure of data associated with content.
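The kind of metadata and publishing logic a dynamic site needs can be sketched with a toy example. This is a hypothetical illustration, not any particular content management system: the `ContentItem` record, its `is_live` check, and the `front_page` publishing rule are all names invented here to show how publish dates, staleness, and index-page rules fit together.

```python
from datetime import date

class ContentItem:
    """A minimal content record with the metadata a dynamic site must track."""
    def __init__(self, title, published, expires, section):
        self.title = title
        self.published = published   # when the content first appears
        self.expires = expires       # after this date the item is "stale"
        self.section = section       # where on the site it belongs

    def is_live(self, today):
        # Live only between its publish date (inclusive) and expiry (exclusive).
        return self.published <= today < self.expires

def front_page(items, today, limit=3):
    """A toy publishing rule: show the newest live items on the homepage."""
    live = [item for item in items if item.is_live(today)]
    live.sort(key=lambda item: item.published, reverse=True)
    return [item.title for item in live[:limit]]
```

Even this sketch shows why content management is a planning problem as much as a technology problem: someone must decide, up front, what dates and categories authors attach to each piece of content and what rules the site applies when choosing what to display.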

Planning for a dynamic site involves not only the basic visual look and information architecture but also logical structures for deciding what to publish where. Complexity arises from looking at how content changes over time or how it changes when personalized for different people, which makes for a different kind of information architecture work than what goes into a large static site. A dynamic site becomes an exercise in “content management”: the system’s ability to choose the right information to present depends in large part on how people put content into the system.

Many companies have learned that when a site grows big enough, it starts to face many of the same problems as a smaller dynamic site. Information grows stale, authors no longer work directly together, and different readers need to see different elements. Very large sites become less useful as they grow if they don’t have a foundation of good content management in place.

Taking action and interaction design

The Web began as a medium for presenting information, but many sites actually provide something very different: tools for taking some kind of action. Many people miss this very important distinction, even though it dramatically affects the planning that will shape the site and the technology that will support it.

Creating a Web tool means programming software, just like with a desktop application or a client-server system. The more complex the behaviors of the Web tool, the more sophistication the software development group will require and the more your “Web project” will become a software project.

Though most people find Web browsers fairly easy to use, this does not necessarily mean that a tool delivered through a browser will be easy to use. Making usable Web tools presents the same challenges as making usable software or electronics. In fact, delivering a usable tool on the Web can be harder than in a desktop application. If the best way to serve users involves a dynamic drag-and-drop system, you won’t be able to keep them happy with a click-and-wait Web site.

It is imperative to consider usability from the beginning of the requirements-definition process. Doing so requires thinking about some basic questions in your planning, for example: Will people use this tool just once, or will they use it again and again? (If just once, you will want to focus on usability testing to analyze first-timers’ experience. If repeatedly, you will want to research your users’ working context in order to understand their real-world needs.) Will this tool serve customers or people inside your organization? If the latter, do you need to think about how you will train users?

Considering users and usability in early requirements clearly supports defining the behaviors of a Web tool, which in turn provides the software development group with a good picture of what the organization expects them to build. The result is a software development project that is speedy and effective.

Where do product managers fit?

This article originally appeared on the Cooper website in September 2004.

People often ask how interaction designers should fit into their companies. The most brilliant interaction design in the world won’t help a company that cannot take good advantage of it as much as simple, workmanlike interaction design will help a company that uses that design well.

To talk about putting interaction designers into your organization, it helps to start by talking about some other people—product managers. In the past several years, many companies have started introducing a product management (PM) role into their organizations. The way PMs play out at different companies often varies dramatically in what PMs do day-to-day, in how PMs fit into the organization, in what skills and background the company expects a PM to have, and so on. But most organizations agree at the most fundamental level about the PM’s charter: the PM has responsibility for ensuring a product’s success.

Hard questions about product management

PMs face a range of challenges. A few organizations frame the role in such a way that it becomes obviously difficult for the PM to succeed. They may give a PM responsibility for too many disparate products, or too little support from above for their authority, or unclear divisions of responsibility with other people in other roles. More often, though, companies have not made mistakes so much as struggled with hard decisions about how to define the PM role.

For example, many companies have internal disagreements about whether to place the PM as part of the marketing group or part of the development group. Putting the PM into marketing makes sense, since the ability to win customers for a product very directly measures the product’s success. But putting a PM within marketing can cost the PM credibility and effectiveness in the development group that actually creates the product, compromising the PM’s ability to affect the shape of the product itself … which of course has a direct link with the product’s success or failure. On the other hand, putting the PM under development makes it hard to keep the PM connected to customers’ needs and responsible for the product’s sales success.

This conundrum suggests that the PM might best stand outside of either group, but this presents its own problems. Working relationships become hard to define. If you have a big, complex product, then the product has both a development lead and a marketing lead.

How does the PM fit together with those? As a peer? How, then, does the PM act to control the product, without authority over marketing, development, or anyone else? As their superior? How, then, does the PM interact with these leads’ superiors within the development and marketing groups?

Who does a product manager manage?

The interaction design connection

To answer that question, let’s bring interaction design into the picture. When I describe what I do to people who have not encountered the term “interaction design” before, I say first that “I look at users’ needs, figure out what kind of product best addresses them, and create a behavior specification for that product which the development team then uses as requirements to drive their work.” Often people say, “In my organization, we call that a ‘product manager.’”

Whoa! My description didn’t describe me managing anything. Why should my description of “interaction design” ever correspond to anyone’s notion of “product management?”

This connection surfaces because organizations see that the description of product requirements strongly affects whether the product will succeed … and they recognize that the development team won’t actually follow requirements that don’t have the backing of management authority. Thus the person who drives the decisions of the development group by setting requirements must become a “product manager,” placed in the organizational hierarchy at or above the level of the development lead. So even though the PM doesn’t manage the people in either marketing or the development team, we call that person a “manager” as a way of denoting their level of authority.

In practice, the term “product management” does work well as a descriptive title because PMs concern themselves with a product and in fact do management as their day-to-day work. They talk to people in the organization, individually and in meetings, offering information and making decisions.

What does this have to do with talking to users, figuring out behaviors, writing requirements—the stuff of interaction design? In an organization lacking an explicit interaction design role, many PMs recognize that as the owners of product requirements, they have to also author those requirements. The development team cannot author their own requirements, the marketing team typically cannot write requirements that serve the development team well, and executives above the PM will not set requirements at the necessary level of detail. So the PM must do it.

But a problem emerges: the work of interaction design—all of that talking to users, solving interface problems, writing detailed requirements documents—takes a lot of time and effort. Doing this work thoroughly takes more time than a PM can spare from the work of managing. Plus, few people have strong skills in both management and design. Either the PM’s management work suffers to spare time for interaction design or the PM’s interaction design work suffers to make time for management. This spells frustration for the PM and problems for the company.

The interaction design role

Thus the organization really needs a separate person with a distinct role as an interaction designer (IxD). The work of interaction design done well demands so much time, attention, and skill that it takes a person’s full attention as a full-time job. (Indeed, at Cooper we find that a team of two or three IxDs works much better than a lone IxD.)

Does this mean that you need IxDs instead of PMs? No. You still need management authority supporting any behavior specifications that IxDs create. Anyone with management authority will end up doing management work, which becomes incompatible with interaction design work.

This distinction also helps to keep interaction design from becoming too important. Good IxDs think in terms of users’ needs and advocate for them. But though addressing users’ needs plays a very important role in determining product success, many other concerns have at least equal importance. You need a product that you can make and sell at a profit. You may have a gatekeeper customer making purchase decisions, someone quite different from your end user, who needs to see some characteristic in the product that does not matter to users. You may have partner agreements to satisfy. And so on.

The PM must weigh and integrate the various elements of success. If that PM also has responsibility for the interaction design, then either their effectiveness in product management or their effectiveness in interaction design will suffer because of their divided priorities.

Working relationships

Without IxDs, you have PMs in the Dilbert situation of trying to arbitrate “he said, she said” disagreements between development and marketing. The addition of IxDs creates a completed system of groups with interlocking concerns and responsibilities, giving a PM the ability to make informed decisions about product strategy because she has people speaking for all of the components of product success.

My earlier article, Putting people together to create new products, included this diagram showing how the different groups have different domains.

The organization thus considers these three groups as peers, separate from one another, but all working at the direction of a PM. The PM manages these three groups, receiving information from all three, giving them direction about how to use their time, coordinating their efforts.

Product creation

Consider how product creation works in this kind of organization. 
The company may start with a market they want to serve, identified by the marketing group. IxDs talk to users in the space, identifying a few different classes of users and spotting opportunities to serve them with new products. Marketing turns around and takes those classes of users and connects them with customer demographics, determining how much potential revenue those users could represent.

Meanwhile, the IxDs take what they learned about users’ needs, market requirements, and technical constraints (from talking to users, marketing, and development, in turn) and put together a product definition, describing its behaviors in detail in a behavior specification. Development then takes that behavior specification and produces a technical specification and a development timetable.

Now the PM has the information available to make intelligent business decisions. She can ask marketing, with a well-defined product and intended audience for it, how much money can we expect to make with this product? She can ask development, what time, money, and resources will it cost us to make this product? She can ask executives, does this product match our vision for the company’s business?

The PM can now act to integrate the concerns of the different constituencies in the organization. For example, if developers express concern that an element of the designed product will take a lot of time and money to build, the PM can then ask if it makes sense to defer that component to a later release. Development can answer what the costs become, breaking the product into multiple releases. IxDs can answer whether the more limited, early release product will still satisfy users, and what a more limited product will look like. Marketing can answer what impact the multiple-release strategy will have on customers, revenue, and the marketing plan.

Each of the groups owes information to the PM, who then in turn makes decisions, and can hold the group accountable for executing on those decisions. IxD provides a clear picture of target users and product behaviors, and has accountability for user satisfaction. Marketing turns target users and product definition into a marketing plan, and has accountability for sales. Development turns a behavior specification into a development timeline and a finished product, and has accountability for robust engineering and meeting their promised timeline.

The PM, in this world, not only has a charter to deliver a successful product—she has the ability to actually deliver one.

The Web, Information Architecture, and Interaction Design

This article originally appeared on the Cooper website in September 2005

The impact of digital technology—from the Web to mobile phones to the silicon in your toaster—has meant a proliferation of terms for the work people do to define digital products and services. People talk about “customer experience,” “user-centered design,” and so forth. This talk can confuse even people who do that work for a living, as you often find different people using different terms to mean the same thing—or using the same term to mean very different things!

Many people say that this reflects a breakdown of disciplinary distinctions in designing for the new world of the digital. “It’s all just design.” I disagree. I see a few major types of problems in the digital world, and I believe that each of these has its own set of tools and methods that work well to solve that type of problem.

Two of these types of problems lurk behind the terms “information architecture” and “interaction design.” As I said, we live in times with slippery terms, so I want to admit that I’m using these expressions in a way that may differ from the way you or your colleagues do. Though I have some good reasons for thinking that I use these terms in the most useful and appropriate way, I want less to advocate for certain language than to make clear something about two different kinds of problems that emerge in designing for the Web.

To understand why the distinction between these problems can become obscured, it helps to look back over the history of Web design. Early in the growth of the Web, many graphic designers with backgrounds in print media migrated into Web design. They discovered that the specifics of good graphic design in print don’t always work so well on the Web, and creating good Web sites required some new techniques. But graphic design for the Web was still graphic design.

On the other hand, as Web sites grew larger, with more and more content, Web designers came to realize that creating effective Web sites requires solutions to some problems very different from those addressed by graphic design. Not only do individual pages need to look appealing and readable, but you need to organize and label links so that people can find the content in a large site. We now call those “information architecture” (IA) problems. The methods of graphic design don’t help with IA, and for a time Web developers struggled with IA problems, at times having trouble recognizing that these problems existed at all. But at this point Web development organizations understand the distinction between graphic design and IA, and recognize IA as its own distinct discipline with fairly well-understood methods and techniques.

Because the culture of Web design remains strongly rooted in graphic design, many conceive of IA as “all Web design that isn’t graphic design.” But that misses another important distinction.

Just as the growth of Web sites created problems in organizing content, giving rise to the discipline of IA, so too has the appearance of Web sites generated by complicated software, producing unique pages that people use as tools. These sites present a new kind of problem, calling for a discipline I prefer to call “interaction design” (IxD).

Let’s unpack that distinction.

IA means defining information structures to answer the question “how does a user find the information they want?” Thus navigation links for a big corporate Web site reflect IA: where can I find directions to the company’s main headquarters? When you talk about content, page hierarchy, and taxonomy, you probably have an IA problem.

On the other hand, IxD means defining system behaviors to answer the question “how does a user take the action they want?” Thus the pulldowns, buttons, and checkboxes in a Web email application reflect IxD: what must I do to reply to the sender of this email? When you talk about actions, controls, and dynamic elements, you probably have an IxD problem. Some problems include both components: consider how Amazon includes both large amounts of static content and some very complex dynamic behaviors.

It turns out that the techniques for doing these two things differ dramatically, at least as much as they differ from the techniques of graphic design. IA calls for exercises like card sorts, usability testing for category labels, hierarchical structure diagramming, and so forth. IxD calls for exercises like workflow analysis, usage scenarios, wireframed walkthroughs, and so on. The work done, and the skills needed to do it, differ considerably between the two. Just as few people can fully master the skills of both graphic design and IA, few people will master the skills of both IA and IxD. It serves both organizations and practitioners for people to specialize.

So when thinking about a new Web site, first ask what kind of problem you have, to make sure that you bring the right people—and the right tools—for the job.

Intuition, Pleasure, and Gestures

This was originally published on the Cooper website in October 2007; the site was lost after the acquisition by Designit:

“Intuitive.” We use the word a lot when talking about interactive systems, but it misleads us.

When people say they want a system to be “intuitive,” they typically think they mean that users should immediately understand how a system works when they encounter it. But you cannot really do that with many systems … not even with most systems people talk about when you ask them for an example of something “intuitive.” 

Consider the mouse-and-cursor. Most of us have forgotten the first time we encountered it, and thus forgotten how unintuitive we found it the first time we used it. A little box on a string with a button or three on top? If you have just arrived from the 23rd century, you might pick it up and try talking to it. But with ten seconds of demonstration you understand it completely and have some sophisticated applications of it immediately available to you, and even if you didn't see a mouse again for the next ten years you would still remember how it worked. 

There you have what people really mean by “intuitive:” easy to explain, powerful in its implications, impossible to forget. You get that through systems that possess a clear, coherent internal logic that feels natural and obvious. Of course, it can take hard work to figure out those “natural and obvious” behaviors; we interaction designers call that work “interaction design.”

From time to time the talk at Cooper turns to the deeper philosophical waters of our work. Some years ago, we discussed the ultimate objectives of our work, and the expressions “intuitive” and “easy-to-use” started coming up, but didn't quite hit the mark. Some systems have an inherent difficulty; having worked on tools for doctors and NMR spectroscopists and rocket scientists (yeah, really) we have seen how some systems can never really become “easy-to-use.”

Now in a lot of cases you can improve an interactive system by simplifying it, stripping things out, and thus making it easier to use and understand. Most interactive products and services out there could benefit from this treatment, which can paradoxically result in the simplified version delivering more functionality to users because they understand it better and take better advantage of it.

This has its limits. You can't make that rocket scientist's satellite orbit plotting system “easy.” But you can make it “intuitive.” With that comes a surprise benefit.

Remember when a lot of folks started taking to the Palm Pilot? It seemed like everywhere you went, people were tinkering with their Palms all of the time, obnoxiously wanting to share with you, “look at how you can do this!” Why?

Because the Palm has “intuitive” interaction design, which makes using it pleasurable.

I believe that this hits us at a deep, animal level. Just as we get pleasure from the form and tactility of good industrial design, we get pleasure from good interaction design, both as we learn it and as we work with it. Learning things that make sense, working with tools that work right; these things make us East African Plains Apes happy right down to our DNA. So instead of saying “intuitive” or “easy-to-use,” at Cooper we often talk about designing interactive products that deliver power and pleasure to the people who use them.

With so many folks talking about the iPhone right now, it surprises me how few people remark on the effect of its lovely interaction design. Notice the difference between the advertisements for the iPhone and every other phone on the market. Other phones show people talking on the phone, snapping pictures of each other, dancing around, getting into kung fu battles with phones as weapons, everything except what the iPhone ads show you: how you use the phone. Think about that. They sell the iPhone by showing you thirty seconds of someone using it, assuming that this will make people want it … which it does, because people recognize that using a product with good interaction design feels good. 

Part of that lies in the harmonious way that the iPhone transitions between its functions, the thing that folks often call “navigation,” because finding your way between functions in a system with bad interaction design can feel like navigating your way through a maze or a trackless sea. Another aspect that I find particularly interesting lies in the iPhone's gestural interface.

A few key functions rely not just on touching the phone, but dragging your fingers over it: flipping through pictures and song lists, resizing pictures, and the cunning mechanism for unlocking the phone. This really struck me because for the past few years, instead of a mouse I have been using an exotic trackpad made by a now-defunct company called FingerWorks. [I would later learn that they were acquired by Apple ....]

My trackpad looks a bit like an ordinary mouse pad, and like the iPhone's photo-resizing function it can recognize when I touch it with more than one finger, so it does more than just move my cursor. By pinching my fingers in a certain way, I can cut something to the clipboard; if I reverse the gesture, I can paste it back in somewhere else. A dozen other gestures enable me to do things like open new documents, close windows, scroll, resize things, and jump to my home page in my web browser. Colleagues say that sometimes when they see me work it looks like I have cast some magic spell: I wave my hand and things happen on the screen.

When I use an ordinary computer I miss my trackpad, and not only because it provides me with a little convenience. It also feels good.

I believe that part of why I enjoy it so much comes from something we don't get to do much when working with computers and other interactive tools: I do the gestures from muscle memory, rather than cognitive memory, just like I do with my typing on my computer keyboard. Most of the time tools that run on software tax our cognitive capacity but leave the intelligence that lives in our bodies relatively untapped, which makes us East African Plains Apes a little uncomfortable; using those gestures makes me a happier animal.

I think many of us know on some level that we suffer this imbalance, which explains why we ooh and aah over things like tablet computers, Jeff Han's big touchscreen, Microsoft's Surface table, and the computer from Minority Report. Even if these examples seem unrealistic in some ways—I have a hard time imagining what anyone would actually use the fantastically expensive MS Surface for, and I assume that Tom Cruise's character in Minority Report has such broad shoulders because the poor guy has to wave his arms around all day just to use his computer—they appeal to us because we yearn for the pleasures of using the intelligence of our bodies together with the intelligence that lies between our ears.

I wondered for a long time whether I was right to suspect that using gestural input like on my trackpad really would appeal to most people, or whether it just reflected an eccentricity of mine. The iPhone seems to have demonstrated that gestures really do have a broad appeal. I hope that companies other than Apple learn the lesson, if only because I want to use—and get to design—the products that will include them.

13 May 2021

Settler colonialism and Israel

Because of the current horrors undertaken by the State Of Israel, a friend recently said something characteristic of a lot of American lefties who have had to un-learn both pro-Israel propaganda and US propaganda about our own history:

Israel is exactly a parallel of the US. It’s a settler-colonial state that displaces and claims the territory of the people who were there before […] That’s also why the US supports Israeli actions so much.

I got long-winded in my frustrated reply, and that led to this short history of the Israel-Palestine conflict.

US support for Israel

First I must dispense with this obvious canard. No, the US does not support Israel out of some weird settler colonialist solidarity. The US supports Israel because there are two sets of cranks who hold power over the two dominant parties in our politics. Republicans support Israel’s policies because they are in the grip of Evangelicals who believe that Israel is an integral part of realizing Biblical prophecy. (I would think that one would want to devote our foreign policy to preventing Armageddon, but what do I know?) Democrats support Israel’s policies because they are in the grip of the hardline Israel lobby whom liberal Jews have been too feckless to purge from the coalition.

The USA is different

The other part — the question of Israel’s settler colonialism — requires unfolding a lot of history. At the bottom line, Israel is the result of a settler colonial project and that project is ongoing in the present crisis, but analogizing it to the history of the US is irresponsibly misleading.

Prior to the founding of the USA in the Revolutionary War and its aftermath, the British colonies in North America starting even at Plymouth Rock were engaged in settler colonialism, seizing land with the intention to make it their own for every following generation, with total disregard for the indigenous people of the continent. By the time of the Revolution what would become the US had more than a century of expansionist settler colonialism with an overt program of total genocide to establish British sovereignty over territory; the program of genocide continued through the closing of the frontier, a legacy which is alive in the present day.

Israel’s history is bloody and ugly but it is very different.

Zionism before Israel

The Zionist project can be seen as beginning with the First Zionist Congress in 1897, when what is now Israel was still part of the vast, weak Ottoman Empire. It is important to understand that the Zionism conceived then did not seek to establish the State Of Israel as we now have it. The Congress in 1897 defined their project thus:

Zionism aims at establishing for the Jewish people a publicly and legally assured home in Palestine.

In this we can see how Zionism is indeed an ethnic nationalism but not quite what one might naïvely imagine from the succinct description “Jewish nationalism”.

First, the 1897 statement explicitly rejects conceiving of Zionism as a religious project. “The Jewish people” are framed in secular terms, as an ethnic people. (The Magen-David ✡︎ emerges as a symbol of ethnic Jewish identity at this time, as distinct from the seven-branch menorah which was the predominant symbol of Judaism as a religion in Europe. The use of this symbol on the flag of Israel underlines that Israel is a secular state, not a theocracy.)

Second, the pointed use of the word “home” deliberately avoids identifying a sovereign Westphalian nation-state like the State Of Israel as the defining goal of Zionism. Yes, that was the dream of many Zionists, but not all, and it was not the plan. It is neither accidental nor incidental that this definition of Zionism identifies it as at least compatible with a pluralistic Palestine, because that was what many in the Zionist coalition most wanted.

Palestine before Israel

Again, in 1897 Palestine was a region, not a country. It was a small part of the Ottoman Empire, which was approaching collapse and would not survive the First World War. There were already significant populations of Jews in the region. Communities of diaspora Ashkenazi & Sephardic European Jews had origins in centuries of waves of immigration. Indigenous Mizrahi Jews were present all over the Middle East.

Between 1897 and WWI, Zionism was in no way the work of an imperial power; it was an independent ethnic nationalist movement. Many of these early Zionists were undertaking a project a bit like the Pennsylvania Dutch — buying and homesteading land with the intent of developing ethnic enclaves. But it would be absurd to imagine these Zionists as just like the Amish. By the 1920s there were Zionist settlements on land purchased fair and square this way, protected from hostile neighbors by Jewish militias … which were hard to distinguish from other settlement militias seizing territory by force of arms, even terrorism.

After WWI and the collapse of the Ottoman Empire, Palestine became a colonial holding of the British Empire. Zionism in the interval between WWI & WWII cannot be understood simply as European or British imperialism, though: Zionist settlements in Palestine were tolerated by the British Empire rather than sponsored by them. The Balfour Declaration of 1917 was issued by the British government but did not have any compelling legal power, and again we can see that it does not imply any plans for a sovereign Jewish state:

His Majesty’s government view with favour the establishment in Palestine of a national home for the Jewish people, and will use their best endeavours to facilitate the achievement of this object, it being clearly understood that nothing shall be done which may prejudice the civil and religious rights of existing non-Jewish communities in Palestine, or the rights and political status enjoyed by Jews in any other country.

In the immediate wake of the First World War, or before that, a map of “Palestine” would often include not just what is now Israel but also most of what is now Jordan. Shortly after WWI, the region was organized into the Emirate of Transjordan (nominally independent but a “British protectorate”, which would eventually become independent Jordan) and Mandatory Palestine (subject to direct British rule, which included a bit more than what is now Israel and its occupied territories).

Founding Israel

By 1948, there were an array of different Jewish populations in Palestine. There were Jews whose grandparents were born in houses which Zionists purchased fair and square … going on to live in other houses seized by force of arms before they were born. There were non-Zionist Mizrahi Jews who looked just like their Arab neighbors, living in the same houses their great-great-grandparents had been born in under the reign of Sultan Abdul Hamid II in the 19th century. There were non-Zionist Ashkenazi & Sephardic Jews whose ancestors had migrated to Palestine centuries before Zionism. And there were Eastern European refugees, survivors of genocide, who had found that the very same people who had helped the Nazis ship Jews to concentration camps were just as dangerous as administrators of Soviet domination, and who had thus emigrated to Palestine just a year before … joining Zionist militias, and living in houses they stole while serving in those militias.

Plus, of course, Arabs. Though there were Arab Palestinian nationalist movements, “Palestine” was a place on British maps, so in this era few Arabs in the area thought of themselves as “Palestinians”.

That at that moment it was in the hands of the US and Europeans to decide what the shape of the geopolitical order would become as their empires crumbled in the wake of WWII was, of course, reflective of the greater process of European colonialism in the Middle East and around the world. And that it would primarily be Britain among those powers who would adjudicate the competing claims of Mizrahim, European Zionists, and Arabs inside of Palestine and out, again reflects that same colonialism.

Given that there were both Jewish and Arab populations with legitimate claims at the end of the Second World War, one might ask why they would grant all of Palestine to Jews rather than attempt a partition as in India & Pakistan. But one would only ask that if one does not know the history. Israel apologists will remind you that if you look at a map from 1910 or 1920, most of “Palestine” on that map would likely be territory which was granted to Arabs in the form of Jordan. And in 1947 & 1948, there were a series of partition proposals for British Mandate Palestine. They suggested an Israel much like what Israel would come to actually hold over the decade to come, plus a sovereign Arab Palestine comprised of something close to what we now call the West Bank of the Jordan, the Gaza Strip, and the Golan Heights. But those would-be Palestinian territories would not have even a moment of Palestinian sovereignty, as they were claimed by Jordan, Egypt, and Syria in a tumultuous and bloody process between 1947 and 1949. Here as in much of the world, the chaos of the immediate post-WWII era meant conflict between various players trying to lay claim to territory so they would not be left without a chair when the music stopped. I don’t have the expertise to untangle or summarize well how that took place, not least because important elements of this history remain vigorously disputed by good faith scholars.

Salty Israel apologists will thus say that we already have a two-state solution to the Arab-Israeli conflict with Jordan & Israel, and salty critics of pro-Palestinian movements will ask why we hear so much about the oppression of Palestinian Arabs by Israel but not by Jordan, Egypt, and Syria. Such arguments are whataboutism … but not entirely bullshit.

Early Israel

The result of the founding of Israel does roughly resemble the partition of South Asia in some important ways: population transfers, local atrocities on all sides, and border disputes. Mizrahi Jews all over the Middle East and Sephardic & Ashkenazi Jews all over Europe migrated to Israel, often though not always escaping severe oppression and efforts to displace them. In 1948 the majority of Jews in Israel were Ashkenazim; within a few decades, the Israeli Jewish population would arrive at the Mizrahi majority we have today because of the migration of Mizrahim from all over the Middle East.

Arab Palestinians in what was now Israel became citizens of Israel theoretically with the same rights as Jewish Israelis, but in practice they suffered (and still suffer) various forms of oppression and disenfranchisement … including that many of them were displaced from their homes by Zionist militias at the dawn of the State Of Israel, militias which would develop into the foundation of the Israeli army.

During its first two decades, Israel fought a series of wars and border skirmishes with all of its neighbors. Israel hardliners will tell you that Israel was simply an innocent sovereign nation under siege from conqueror neighbors. Historians have demonstrated, though, that Israel was often spoiling for a fight, hoping to seize more territory. (They eventually succeeded. We’ll get to that in a moment.)

But the Israeli story of dread of their threatening neighbors is not entirely bullshit. The entire Arab world understandably read the creation of Israel as nothing other than one last imposition of overt European imperialism and colonialism, right at the moment when former colonial possessions were starting to lay claim to liberation and sovereignty. So they refused to recognize Israel’s legitimacy at all, declaring their plans to literally wipe the country from the map, as did the nationalist terrorists of the Palestine Liberation Organization, founded in 1964.

One can imagine what success for these antagonists of Israel would have meant for the Jews who had never known any place other than Israel as home — or who had migrated from neighboring Arab countries to Israel after 1948 — and had no place else to go.

These many decades later, Arabs generally still see Israel as nothing other than the most galling face of the legacy of European colonialism. In turn, many Israelis still see their conflict with Arab Palestinians as only the most proximate part of an ongoing larger conflict with hostile neighboring Arab nations.

The Six Day War, and occupation

In 1967, Israel fought the Six Day War with Jordan, Syria, and Egypt. (By this time it should be evident why it is hard to say Who Started It.)

Remember the West Bank of the Jordan River, the Gaza Strip, and the Golan Heights from British Mandate Palestine, which Jordan, Syria, and Egypt seized? Israeli military planners had always wanted to capture these territories for strategic reasons; they faced ongoing shelling and rocket attacks from them. At the end of the war, these were now all in Israel’s hands.

Israel also seized the Sinai Peninsula from Egypt, which was more directly a territorial seizure of land for settlers; it had more than twice the area of all of pre-1967 Israel.

In violation of international law and UN resolutions, Israel engaged in a military occupation of those territories and began building settlements for Jewish Israelis in all of them. Israel would have one last major war with its neighbors in 1973, leaving that status quo unchanged. After that Israel started denying having nuclear weapons … in a way that meant that they wanted everyone to know that they really had them. Border conflict never really stopped, but the stand-up wars ended.

The occupation of the Sinai would not last; under the Camp David Accords of 1978 and the Egypt–Israel peace treaty that followed in 1979, Israel returned the Sinai to Egypt.

Israel continues to lay claim to the Golan Heights to this day and is still building new settlements. The history of the fighting over the territory is complicated and entangled with the shifting situations in Syria and Lebanon.

The situation in Gaza and the West Bank is even more complicated. All of Gaza and a landlocked archipelago of territory in the West Bank are governed by the Palestinian Authority, formed in 1994 in consequence of the Oslo Accords. But the sovereignty of the PA is limited, with countless intrusions large and small by Israel. Gaza is subject to military interventions, including intermittent shelling, a blockade, and a “buffer zone” maintained by Israel which takes up a significant portion of its arable land. In the West Bank, Israel continues to build settlements in the major portion it controls, and it is worth noting that the folks who move there are generally right-wing hardliner assholes prone to confronting their Palestinian neighbors with harassment and worse.

Meanwhile, within the pre-1967 borders Palestinian Arabs who are citizens of the State Of Israel still have rights protections under the law, democratic representation in government, and all that … and still are unmistakably oppressed by countless systemic and institutional injustices, with things arguably getting worse for them in recent years.

And of course terrorist attacks & assassinations by anti-Israel radicals within Israel have waxed and waned all this time, but never stopped.


Millions of Arab Palestinians live under illegitimate governance in the occupied territories. Israel continues to displace Palestinians to build new settlements in territory under military occupation, settlements which it fills with Israeli Jews. There is an endless cycle of violence, in which Israel holds the unmistakable upper hand.

This is settler colonialism. This is military occupation. This is an apartheid state. But students of American history should remember that it is very different from our horrors.

And if you’ve read this far, and come to wonder what “Zionism” means in a post-1948 and post-1967 world, I have an open letter to an anti-Zionist which may interest you.