11 November 2022

Social media content warnings

Capturing a 2018 Twitter thread of mine reflecting on the user experience design of content warnings:

I write this in the hope that mental health pros, trauma survivor advocates, and UX designers have comments which will help refine these ideas.

I think that social media content warnings should be conceived in assistive technology terms, comparable to image descriptions:

  • easy for everyone to include on every post
  • unobtrusive for ordinary readers
  • powerful, flexible controls for readers who need them

Further, on many platforms one may need granularity in identifying what part of a post requires a warning: the post itself? an image within it? a link from it?
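To make that concrete, here is a rough sketch of one way a post could carry scoped warnings. It is purely illustrative; the type names are mine, not any platform’s API:

    // Hypothetical sketch: a content warning that can be scoped to the whole
    // post, to a specific attached image, or to an outbound link.
    type CwScope =
      | { kind: "post" }
      | { kind: "image"; attachmentIndex: number }
      | { kind: "link"; url: string };

    interface ContentWarning {
      tag: string;     // e.g. "violence", ideally drawn from a shared taxonomy
      scope: CwScope;  // which part of the post the warning covers
      note?: string;   // optional free-text detail from the author
    }

    interface Post {
      id: string;
      body: string;
      attachments: string[];       // image URLs, in order
      warnings: ContentWarning[];  // zero or more, each independently scoped
    }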

My original post was inspired by having used a content warning when I shared a story on Twitter and feeling that I had no choice but to do it awkwardly. The warning was a spoiler … and it seemed to me that delivering the warning was itself potentially triggering for people with trauma. I wished I had better tools.

Readers with trauma triggers need at least a few possible settings for how they see posts with a content warning:

  • Completely conceal all posts with this CW
  • Show only the CW, with an option to open the post if I wish
  • Show a short header for the post and the CW, with an option to open the post if I wish
  • Show the whole post but have the CW as a header, to eliminate surprise

These should not be global settings; readers may want different content to behave differently. People with trauma triggers only need flag-and-filter behaviors for the particular things which affect them, and they may want more or less protective behavior for different content. This keeps the handling of necessary triggers from becoming a burden of friction that makes everything a nuisance to read.
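Here is a minimal sketch of how such per-trigger preferences might be modeled, assuming the four display behaviors above; again, the names are mine, not any existing platform’s:

    // Hypothetical sketch: the four display behaviors, chosen per trigger tag
    // rather than as one global setting. Lower values are more protective.
    enum CwDisplayMode {
      ConcealEntirely,   // completely hide posts carrying this CW
      WarningOnly,       // show only the CW, with an option to open the post
      HeaderAndWarning,  // show a short post header plus the CW, openable
      InlineHeader,      // show the whole post with the CW as a header
    }

    // Tags the reader has not listed are simply shown as normal posts.
    type ReaderCwPreferences = Map<string, CwDisplayMode>;

    function displayModeFor(
      prefs: ReaderCwPreferences,
      tags: string[],
    ): CwDisplayMode | undefined {
      const modes = tags
        .map((tag) => prefs.get(tag))
        .filter((m): m is CwDisplayMode => m !== undefined);
      // If a post carries several tagged warnings, apply the most protective
      // behavior the reader has chosen for any of them.
      return modes.length ? modes.reduce((a, b) => (a < b ? a : b)) : undefined;
    }

The key design point is that protection only kicks in for tags the reader has actually listed; everything else reads as an ordinary post.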

In order to enable this, we would need a strong standardized taxonomy of triggers. There might be a few dozen major common triggers, plus an open-ended ability to add more specific triggers. This should be a shared resource across multiple social media platforms, so that readers would only need to identify their triggers once and have them inherited across every platform they use.

To support a structured taxonomy of content warnings, social media platforms would need both crowdsourcing tools and professional information architects. Crowdsourcing is necessary to cover the breadth of CW needs and to keep up with how various communities’ standards and understandings around trauma, and other issues which make CWs important, continue to evolve. But crowdsourcing tools can be abused by bad actors, making professional moderation necessary. And wholly crowdsourced taxonomies get hopelessly messy; professionals can keep the label system tidy and smart, doing stuff like linking CW-spousal abuse, CW-IPV, CW-DV, et cetera.
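To picture the “tidy and smart” part, imagine each curated taxonomy node carrying the crowd’s synonyms, so that CW-spousal abuse, CW-IPV, and CW-DV all resolve to the same entry. This is a hypothetical shape, not an existing standard:

    // Hypothetical sketch: a professionally curated taxonomy node with
    // community-contributed aliases mapped onto it.
    interface CwTaxonomyNode {
      id: string;        // stable canonical identifier, e.g. "intimate-partner-violence"
      label: string;     // preferred display name
      aliases: string[]; // crowd-sourced synonyms that resolve to this node
      broader?: string;  // optional parent node, e.g. "violence"
    }

    // Resolving a free-form label to its canonical node keeps reader
    // preferences working even when authors use different wording.
    function resolve(
      taxonomy: CwTaxonomyNode[],
      label: string,
    ): CwTaxonomyNode | undefined {
      const wanted = label.trim().toLowerCase();
      return taxonomy.find(
        (node) =>
          node.id === wanted ||
          node.label.toLowerCase() === wanted ||
          node.aliases.some((a) => a.toLowerCase() === wanted),
      );
    }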

Crowdsourcing should extend to letting readers tag social media posts with content warning tags when the authors have neglected to do so, but this is tricky: it invites malicious brigading, with bad actors abusing the tool to harass and silence people.

Reflecting on how crowdsourced content warnings can be abused (indeed, how even a vigorously, professionally managed system can be abused) leads to hard questions about trust-network architectures, which we need to think about more carefully in social media generally. Whose tags should be applied? Whose should be discounted? How do we ensure that the infrastructure is a democratically accountable public utility?

Which in turn takes us to hard questions about user verification.

All of this demonstrates the broader challenge of social media platforms neglecting their responsibility to police how they are used … while many critics are dangerously naïve in imagining that this would be simple for them to do.

More content warning lore from Twitter

DoesTheDogDie.com indexes a very specific content warning for a huge library of movies and other media.

Jen <@booksherpa> offers an analogy which is particularly thoughtful about the relation between different kinds and degrees of triggers.

Imagine that you have been invited to a cake potluck. You bake a beautiful vanilla cake with chocolate icing, and, following the instructions in the invitation, send a copy of the recipe to your host. The day comes and you arrive, cake in hand.

The host hands you a number printed on an index card and directs you to a large table where all the cakes are. You place your cake down, the number next to it. The co-host hands you several folded pieces of paper stapled together.

“Here’s a list of all the cakes and their recipes by number,” they say. “If you’re okay with being surprised, feel free to ignore the list. If you want or need to know what cake you’re eating, or what ingredients are in it, look at the list.”

You just had dental work yesterday, and know you can’t have anything with caramel, so you look at the list and cross out three cakes. Next time you’ll get to have caramel!

Your friend Vanessa walks up to you, and you see her list has one cake crossed out. She sees you looking and says “Oh yeah, it has raspberry. I hate raspberry!” Another friend, Jeremy, joins you. His list has half the cakes crossed out.

Jeremy says “I can’t eat any of the ones with tree nuts, I’m allergic. But I’m able to come because of how they set up the party. I get to take my slices first, so there’s no cross contamination. Otherwise I couldn’t have come at all.”

“Speaking of that,” you say, “where’s Cat?”

“Oh, her allergies are bad enough that she couldn’t come at all. She’s sad, but after she reviewed the recipes, that was safest for her.” Just then the host rings a bell. “That’s my cue,” says Jeremy. “Time for cake!”

Luckily this party had a gracious host who set things up so everyone could make the best choice for their own safety and preferences. Others’ choices don’t affect yours. Yours don’t affect others’.

Now, imagine the handout from the host is a list of content warnings for a tv show. Some people can ignore it entirely. Some need it today for certain content, but may be okay to encounter that content another time.

Some prefer to avoid certain content. Some have to avoid parts of the content to stay safe. Some can’t engage with the content at all. Everyone uses the content warnings as they choose to make the best choice for their own safety and preferences.

Seems pretty simple when you look at it that way.

Crap. Now I want cake.

Tatiana Mac <@TatianaTMac> has some ideas analogous to mine for watching streaming video.

Watching Squid Game, I imagined a UI that cued content warnings. The brief cues at the beginning of each episode are helpful, but could be supported with a subtle pop-up right before the event to Skip if you’re activated by it. Showrunners could indicate where these should happen and whether there are plot points that need to be summarised (via text or voiceover!)

(From my experience / by my count most triggers aren’t critical to plot and tend to be unnecessary devices, an issue for another time.)

CW: suicide

Extending this onto the platform, I could imagine setting a global choice on my streaming service that says, “for all references to suicide, please:

  • skip without warning
  • skip with warning
  • more things I’m not thinking of

I skip a lot of shows due to

  1. references to suicide, often graphic,
  2. and, on a different axis, strobes or motion, which make me sick.

Both are used abundantly and often without much care, making me wonder if sensitivity viewers are employed as much as I wish.

The thing that is disappointing about this is that adding this accessibility feature I suggested takes absolutely nothing away from the unaware majority. I’m sure they could use it to avoid content they find “annoying,” even, and it would be built for them.

The sad thing is perhaps that this isn’t seen as a money-making feature (well, except for companies who charge for accessibility features, which is shitty), so it would be hard pressed to get built. 🤷🏼‍♀️

And another gross sad thing is that they could track how many people skip activating/triggering topics and the results might be different than they thought!

(Yes I do hate myself for how effectively I can make a “business case” for things like this. 🥲)

CW: suicide

Anyway. This isn’t a critique of Squid Game (which I loved!) but media tech; it was what I was watching when I thought of this. I watched waiting for the suicide, then strobes, which kept me tense and scared for what I might see and how I might be impacted.
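To make Tatiana’s idea concrete, here is a rough sketch of what per-topic playback rules might look like on a streaming platform, applied to segments a showrunner has marked. The shape and every name in it are my own invention, purely for illustration:

    // Hypothetical sketch: a viewer-level rule per warning topic, applied to
    // showrunner-marked segments in an episode's timeline.
    type SegmentAction =
      | "skipSilently"    // jump past the segment without comment
      | "skipWithNotice"  // skip it and show a brief plot summary instead
      | "warnThenPlay"    // show a subtle prompt with a Skip button
      | "playNormally";

    interface MarkedSegment {
      topic: string;        // e.g. "suicide", "strobe"
      startSec: number;
      endSec: number;
      plotSummary?: string; // optional text or voiceover script used on skips
    }

    // Topics the viewer has not configured simply play as normal.
    type ViewerRules = Record<string, SegmentAction>;

    function actionFor(rules: ViewerRules, segment: MarkedSegment): SegmentAction {
      return rules[segment.topic] ?? "playNormally";
    }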

Payton Jones <@paytonjjones> offers some informed skepticism about whether content warnings as we do them now do what we want them to do.

We just released a meta-analysis on the efficacy of trigger warnings, content warnings, and content notes (preprint). Here’s a short thread explaining the results with graphs and figures.

There are a lot of fundamental disagreements about what it even means for a trigger warning to “work” properly. One common argument is that trigger warnings help prep individuals to brace themselves to face their triggers. e.g.,

New York Times: Why I Use Trigger Warnings
They don’t coddle. They help create a better environment for learning.

On this aspect, trigger warnings clearly fail. Studies consistently show a near-zero effect, with trigger warnings making no meaningful difference on “response affect” to potentially triggering material. Notice that even the most extreme point doesn’t reach a medium effect.

Another common argument is that warnings help individuals to avoid particularly triggering material altogether. Confusingly, both advocates and critics of warnings think that more avoidance supports their position.

Advocates: “Warnings help people avoid unnecessary emotional pain”

Critics: “Warnings prevent people from engaging with challenging material that would ultimately benefit them”

Well, too bad for all y’all. Trigger warnings do not seem to encourage avoidance. If anything, there are some hints they might attract people to stimuli in certain specific cases.

In studies offering a choice between Stimulus A (with a warning) and Stimulus B (without a warning), people seem drawn to Stimulus A. This effect seems more pronounced among those with more PTSD symptoms.

A third argument is that trigger warnings might be either beneficial or detrimental to educational goals. A handful of studies have looked at how well people absorb educational material with or without warnings. By now, you know the story. No effect of trigger warnings.

Finally there is just one aspect on which trigger warnings do have a consistent effect. They cause people to feel more anxious after receiving the warning, but before actually seeing the thing they were warned about.

(Of course, as mentioned before, once they see the actual stimulus, they respond just like they would without the warning. No difference.)

To conclude, I’ll give my own personal answer to the perennial question: should I use trigger warnings? Trigger warnings are not a calamity. They are not the end of the world. They have very little effect either way. So why not use them?

Imagine you had a doctor, and that doctor prescribed a medication. You ask him whether that medication is effective. He answers: “Oh no! It definitely won’t help. There's a small chance it could cause some minor harm, though.”

You’d probably question why the doctor prescribed the medication (you’d also probably switch doctors). Similarly, given the evidence, I question why anyone would want to use trigger warnings so badly.

I’m concerned about trigger warnings’ rapid spread, despite a complete lack of evidence that they help. I’m concerned that trigger warnings are taking up too much of the conversation, when we should be talking about things that matter much more.

Trigger Warnings Are Effective (at Generating Division, and Not Much Else)

We should be talking about how to encourage people to access evidence-based care for PTSD such as prolonged exposure therapy and cognitive processing therapy.

We should be talking about why PTSD is on the rise despite interpersonal violence (the primary cause of PTSD) consistently declining. And we should be talking about why people are more vulnerable to harm than ever before.

The Neurotic Treadmill: Decreasing Adversity, Increasing Vulnerability?

A Meta-Analysis of the Effects of Trigger Warnings, Content Warnings, and Content Notes
