Once upon a time, I had an epistemological crisis at my day job.
When I was working at a consulting studio I did a lot of presenting with PowerPoint decks, communicating research and design work. I devoted a lot of effort to my presentation materials and felt pride when clients admired how clearly they presented complex ideas. Occasionally this resulted in clients asking for my help in assembling PowerPoint decks of their own, which was usually fun, easy, good for the studio’s client relationships, and made the world a tiny bit better by spreading wisdom about making non-gawdawful presentations.
Several years ago, a client in a large organization asked me to take a day or three to help a colleague of theirs from another part of the company put together a presentation about some research they had done. This person came to my office with a huge stack of binders full of research reports. “We need to figure out how to explain to people in Department X how this research shows that we need to pursue Policy Y.”
“Great!” I said. “Just walk me through it. Tell me in your words how you came to your conclusions, we will talk until I have a good grasp of it, then we can figure out together how best to explain it clearly, succinctly, and convincingly.”
This surprised them. “I don’t understand.”
“What don’t you understand?”
“Why are you asking me how this research shows that we should do Policy Y?”
Huh? “Is this not your research?”
“Well, some of this research we did ourselves, some of it we paid for. But what does that have to do with anything?”
I shrugged. “I’m sorry, I did not want to get sidetracked on how the research was done. I was asking because I want to understand what in the research led you to choose Policy Y.”
“I don’t understand. Policy Y is my team’s plan.”
“Yes, how did you decide that?”
“What does that have to do with the presentation?”
They were visibly exasperated. I am sure I failed at concealing my irritation at how weird this conversation was.
Obviously I am abbreviating a longer, twistier conversation in which we were talking past each other. After a few rounds, I did the Shelf Sweep with my hands and tried to take a conceptual step back.
“Okay. You did the research. You looked at it, you thought about it. Then you realized that you needed to do Policy Y. What in the research pointed you there?”
“That’s what I want you to figure out for the presentation!”
I tried another step back. “I think the folks who set this up confused me about what you need. Let’s start over. Can you tell me what you need me to do?”
“I need you to find the data in this research which shows that we need to do Policy Y, so I can convince Department X to do it.”
Oh.
“Now I think I am following you. You have already decided to do Policy Y, for your own reasons. You want to persuade the people in Department X. So you want to sift through all this research to find things which support Policy Y.”
“Yes!”
“I apologize for the misunderstanding.”
“It’s okay. Let’s get to it!”
“Um. No. We have had a misunderstanding about what I do for a living.”
“Aren’t you a research consultant?”
“Yes, I am, among other things.”
“So what is the misunderstanding?”
“I do research, and analyze it, to help clients make good decisions about products and services.”
“Right. I need help analyzing this research.”
“You are neither asking me to analyze that research nor to communicate an analysis of it.”
“Huh? I am asking for an analysis of how this research supports Policy Y.”
“Analysis is looking at research and figuring out what policies and solutions it suggests. That is a different thing.”
“I don’t get the difference.”
“Yes, I see that. I am very sorry to have wasted your time. Obviously our studio will not bill you for this meeting, because of the misunderstanding.”
Again, this was a much longer and clumsier conversation than I could hope to reproduce. They were not just astonished that I considered myself professionally obligated to refuse this work; they could not see what my objection was. Wasn’t this what researchers did? Wasn’t this what consultants did? My presumption that one should examine evidence before reaching a conclusion, rather than using it to support a conclusion, was not even an idea they could understand well enough to reject.
I am not against trying to be persuasive. That is a necessary art. But I was shaken that they could not conceive of any use of information other than persuasion.
I have reluctantly come to believe that there are more people like them than there are like me.
This post was born as a Twitter thread, and over time I threaded related observations from others.
In tech / design / business
Maia Bittner sums up my story succinctly:
I have finally learned how “data-driven decision-making” works in business:
- First you make whatever decision you want to about your product or business.
- Then you go spelunking to cherry-pick data to back up it being a good decision! 🍒
Nils Gilman describes the kind of consultant that client of mine thought they were hiring:
Consultancies claim that what they’re providing is access to “best practices” — which is sometimes true. But much of the time what they’re actually providing is CYA for unpopular decisions and/or ammo for one side in an intra-firm political struggle.
The Economist: “Do McKinsey and other consultants do anything useful? Though hated, they often provide a valuable service to the economy”
Pavel A. Samsonov feels misunderstood like I did:
Was just introduced by a colleague as someone who will “validate business requirements” and a chill ran up my spine.
Samsonov describes the underlying politics well:
No amount of research will make stakeholders as confident as following all of their biases.
Sell stakeholders on the idea of doing research in the first place, or they will demand impossible bulletproof evidence for anything that doesn’t align with their existing assumptions.
In one article, Jared Spool discusses how you must always ask executives what they expect to happen before user testing. Otherwise they will pretend that they knew the outcome ahead of time, and it will not sway their thinking. This is an analogous concept.
If you produce results without creating the expectation that the stakeholder’s assumptions may have been wrong, they will find a way to reconcile the results with their existing mental model. Often that just means being dismissed: “you asked the wrong people/questions”.
As Alan Cooper says - user experience is a power struggle. In low-maturity product orgs, stakeholders gather political power through being Experts who are Always Right. Realigning attitudes away from assumptions is a political game, not a delivery one.
These orgs create a vicious cycle between assumption-driven design and brittle development. The cost of being wrong is massive (politically and financially). Rather than reducing the cost of being wrong, stakeholders find it safer to obfuscate how “right” is determined.
Introducing the idea of experimentation - controlled, rapid failing - is anathema to this environment. So you have to sell the whole thing at once: small, quick, cheap tests aimed at learning rather than “winning”. The only real failure is if we didn't learn anything new.
This is very scary because it requires stakeholders to commit to a real “definition of good” that’s not just “the team delivered the requirements.” They have to get buy-in for goals and hypotheses (you might recognize these as OKRs) and that takes work!
But more importantly, they give up control. They are no longer the Knower and Decider (instead, research determines the way forward). They no longer set the team's outputs (the team determines their own experiments). But they are still on the hook for results.
This is what you’re struggling against: convincing stakeholders to relinquish absolute power. If they do not, then regardless of the title on your business cards you are stuck as a feature designer or usability tester, not a UX designer or user researcher.
Samsonov also talks about the organizational dynamics these deceits produce in the kinds of product development I do:
Product teams fall apart when they start lying to themselves. This breaks the core mechanism:
1. Define problem
2. Prioritize problem
3. Define appropriately scaled solution for problem
Teams that lie to themselves can’t do 2, and therefore can’t do 3.
When teams lie to themselves, it usually looks something like this:
- 🧑🏻: It would be cool and innovative to build Widget
- 🧔🏻: I’ll create the user story! “As user, I want to have Widget”
- 🧑🏻: Great! I’ll add it to the roadmap.
Did you spot the lie? It’s when the team invents a problem simply to justify the solution. But because the problem is made-up, it can’t be scoped or prioritized. It’s impossible to determine the right amount of resources to dedicate to it, or to define success criteria.
Being honest with themselves doesn’t mean that the team can’t say “we want to build Widget.” But it does mean that the problem is defined as “we want to build Widget” and is prioritized accordingly (probably very low on the list). Due to the low prioritization, Widget receives the appropriate amount of resourcing (probably not much). This naturally creates scaled-down success criteria, from “we build Widget” to perhaps “we proved building Widget was possible” or “we proved Widget is desirable to users.”
Why do I say that teams who lie to themselves fall apart?
Because without being able to establish clear success criteria, the team lacks one of the two key aspects of what actually makes a team: shared goals (or in other words, the “definition of good”).
The issue is not that the outcome the team is working towards isn’t a user-facing outcome. It’s pretending that it is. Because real user-facing outcomes have real definitions of good: users being able to reach their goals better than before. Fake user stories don’t have goals. So trust begins to break down. PMs challenge designers on “what users would want.” Devs challenge PMs on what is doable. The guiding principle for each role shifts towards outputs - we will follow our process and produce an output. “Shipped” over “good.”
The team that lies to itself loses trust in itself and starts to focus more and more on project management, chasing deadlines, and treating responsibility for outcomes like a hot potato. “We followed the Scrum process, it’s not our fault.”
Teams that are honest with themselves can identify an appropriate definition of good, even if it’s not user-facing (which is not inherently bad, a lot of valuable work isn’t easy to trace to user outcomes). But crucially, everyone can align on what the desired team outcome is. A team that is honest about its problems, and therefore is able to align on desired outcomes, can work backwards from those outcomes to make decisions in a real Agile way. There is no need for appeal to authority or “I’ll know it when I see it” type feedback / “acceptance testing”.
A team that is aligned on desired outcomes can say “we learned that solution X will not work to achieve goal Y” and can try something else.
A team that lies to itself cannot admit this, because building solution X is their problem. Without solution X, the team has no mandate.
The members of the team that lies to itself are not stupid, however. Individually, eventually, they realize that “build solution X” is not a valuable problem to solve. But without public recognition of this fact, they feel lost & bitter: why can’t anyone else see what I see?
This is the point at which the team starts to fall apart. Enthusiasm evaporates, work that used to energize the team becomes draining. People start leaving. But the team promised solution X to leadership. Stopping would be a crushing blow to credibility.
So now you have a zombie, shambling forward without any objective beyond devouring as many brains as it can. All because the team was initially organized around a lie.
Samsonov comes directly at how yes, this is about epistemology:
Software orgs are shaped by their source of truth.
If the source of truth is users, Agile is great for helping the team continuously develop understanding of fixed user needs. If the source of truth is executives, needs are not fixed. Execs constantly change their mind.
To an outside observer, these may look the same. The team starts out with some understanding of what needs to be built, and that understanding changes over time. But understanding of users can only improve. Exec-driven development is always at risk of being flipped upside-down.
When the goal is to understand users, experimentation is the norm. We don’t build features - we make bets, manage the risks, and learn.
Exec-driven development is extremely hostile to experiments. The HiPPO told you what to build, just build that. Don't ask questions.
So we can’t simply say, “let’s switch to Agile” without effecting change outside the dev team. Agility under HiPPO rule doesn’t look anything like agility for user-facing development. Teams are only thinking about making it easiest to keep up with exec flip-flopping.
An “agile” team in a feature factory must first build expensive, highly flexible architecture, so that pivoting along with exec whims is easy. Story point estimates are punitive to slow down the rate of requests for changes.
But execs don’t want architecture, they want UI! So one or two Java developers draw the short straw, spin up a web app, and start making some kind of website. This is when the org expects designers to contribute. We don’t need testing or research since the exec decided that already. Here is the BRD, draw us some mockups.
You will never be able to justify the value of design in an environment where design is the rendering of executive fiat. The designer can’t change their role simply by making better outputs, or challenging the PM. After all, it's not the PM’s decision either - it’s the exec’s. Rocking that boat is a huge risk.
The only way forward is to take on that risk.
Just because your outputs change from wireframes to processes doesn’t mean that the “make a bet and reduce the risk” model stops applying. Your users: management. Their goals: make money. Unknown: their mental model around converting features into value into money.
There are different schools of thought on how to do that. But it all boils down to: how can you change management’s mental model that experiments are risks, failure, and wasted work? How do you convince them that “just do what I tell you” is actually more risky?
Roja tells a tale of this breakdown happening at Twitter:
Years ago when I was a new PhD grad working at Twitter, there was a director of product who had strong ideas about how to solve the abuse and harassment problem on Twitter.
I worked as a new data scientist, building a model to detect abuse, and was then responsible for A/B testing his product solution, which he was pretty sure was going to change abuser behavior.
His idea was that if victims did not see the abuse and did not respond to trolls, the trolls would stop the harassment (facepalm). I did the analysis and no change was detectable in abuser behavior.
I was told this guy had already drafted a press release ready for my numbers to be added, claiming that abuser behavior had changed. My results were not good news. There was real pressure on me to keep looking at different cuts of data. Something not super kosher in stats.
I kept getting the same results. And if anyone asked me, I’d have said his solution wasn’t going to change abuser behavior. Anyone who has ever been harassed online or offline knows that “not engaging” the trolls does not protect you from abuse.
I didn’t have a manager at the time (my new manager was on parental leave). There was no one to push back and be on my side. I remember a Google doc where I kept adding comments saying “no evidence of behavior change” wherever there was such a claim. It killed the press release.
But I felt so powerless, in a team of all men, who had a lot of confidence but very little understanding of the problem and even less empathy for the victims of abuse on Twitter (the Sr eng on the team made some disgusting comments about sexism and racism online).
The team’s director of engineering thought the abuse issue on Twitter was a matter of optics (he said this in a meeting I was in). That it wasn’t that bad and if we quickly shipped a few solutions for the supposed abuse problem the media would move on.
I also remember the director of product sitting in a meeting with people from rights groups and folks who were harassed on Twitter, having a very condescending conversation with them. The idea was that he knew what he was doing and these people needed to be handled.
It felt like I was a failure for not getting the “desired” results. What’s worse is that I was part of a wave of layoffs that happened right then, where 30% of employees were let go. I stayed strong but there was a voice inside me that told me I was bad at my job.
I’ve been managing teams of engineers and data scientists for a few years now. But the remnants of that first experience have stayed with me. The feeling that I am an outsider. That these arrogant men owned the place even when they did not understand the problem.
Those eng and product leaders stayed a while and then moved on to the next hot startup and accrued connections and wealth, stabilizing their position in our industry. They didn’t really face any consequences for getting the Twitter abuse problem wrong back then.
I had to get this story off my chest today. But there is a silver lining too. During that same period I met so many amazing people who worked at Twitter. Coworkers who made me feel connected, whose company I greatly enjoyed, and with whom I am friends to this day ♥️.
You all know who you are :) thank you for being great. I appreciate you.
(P.S. I think Twitter has come a long way in terms of cleaning up the timeline from the days of gamergate. Still would love to see it do better, especially in non-English languages)
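Roja’s aside about being pressed to keep looking at “different cuts of data” is the multiple-comparisons problem in miniature, and it is worth seeing why that pressure is “not super kosher in stats.” Here is a minimal sketch in Python, with made-up numbers rather than anything from Twitter: even when an intervention does nothing at all, slicing the results into enough subgroups will almost always surface a “significant” cut somewhere.

```python
import random

random.seed(42)

ALPHA = 0.05          # conventional significance threshold
N_SUBGROUPS = 40      # hypothetical cuts of the data: by country, device, follower count, ...
N_SIMULATIONS = 1000  # simulated experiments in which the true effect is zero

def null_p_value():
    # Under the null hypothesis, a p-value is uniformly distributed on [0, 1].
    return random.random()

false_positive_runs = sum(
    1
    for _ in range(N_SIMULATIONS)
    if min(null_p_value() for _ in range(N_SUBGROUPS)) < ALPHA
)

print(f"Experiments with at least one 'significant' cut: "
      f"{false_positive_runs / N_SIMULATIONS:.0%}")
# With 40 independent cuts, roughly 1 - 0.95**40 ≈ 87% of no-effect experiments
# still hand you something that looks good enough for a press release.
```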
Arvind Venkataramani has encountered a weird self-awareness about this:
A belief that exposure to human-centered design can overcome inherent power differences is a great way to be blind to the ways in which people are misusing power.
I’ve been told to my face (by someone solidly entrenched in the organisation) after presenting research insights and the raw data and analysis process that led me there, that I was making it all up.
At some point it is a useful heuristic to suspect people in power 🤷‍♂️
Cat Hicks offers Five Things I’ve Learned in Ten Years of being a Social Scientist in Tech:
1. We keep saying data when we mean measurement
2. Specificity is ok, qualitative is ok
3. Ships of Theseus & Zombie Luggage
4. Is Social Science any good?
5. Evidence Science
Several Twitter gems from Erika Just Enough Research Hall:
I also turn down every job which can be described as “help us do research so we can use the data to convince the person in charge of the value of research”. If that person doesn't already value a particular source of evidence, it's not going to happen.
Never say “validate” unless you are talking about parking. Evaluate!
I get asked “How do I use data to convince someone of a thing?” all the time. And my answer is always, “Go back in time and build a relationship with that person before trying to change their mind.” In the current moment, you can listen to their reasoning and ask questions.
Too often, the reason research findings are ignored by the business is that the business model itself is based on a combination of fiction and extraction.
Other domains
A comparable example comes from an article about picking a host for Jeopardy:
For example, much has been made of Sony’s use of analytics to identify a front-runner. “We want to go at this with real analytics and real testing and not just go, ‘Hey, how about this guy?’” Richards told a Wall Street Journal podcast in April. Indeed, the studio called out this element in its announcement of Richards and Bialik, who Sony TV chairman Ravi Ahuja said “were both at the top of our research and analysis.” On Saturday, The New York Times reported that Richards alone selected the episodes that were sent to focus groups for review; the show’s two supervising producers, Lisa Broffman and Rocky Schmidt, who are both in their fourth decade working on the show, were excluded from the process. When The Ringer asked about the Times focus group report, neither Sony nor Richards offered comment.
Dorian Taylor gets philosophical about decision-based evidence-making maybe being the root of evidence-based decision-making:
over the last two years i have thought about this paper more than ever: “Why do humans reason? Arguments for an argumentative theory”
and i still will make the effort to point out that they missed the most amazing opportunity for a title: an argument for argument as the reason for reason
tldr
- we take the positions that align with our agendas; if people agree or disagree with us then okay, but what about the people on the fence?
- that’s where reasoning comes in
- the fact that reasoning also yields logic and math and science and problem-solving is pure side effect
i love this idea. i think it is absolutely delicious
is it true? i don’t know. i don't care. i’m doing what they said i would
but holy crap it explains so much
the way i have come to treat theories in the social sciences is like a bet: they are often too spongy to be conclusively true but what if you behaved as if it were true? what kinds of outcomes would you get? i find that’s pretty easy to game out a lot of the time
also it exposes a lot of bad theories because you just ask “what evidence would it take for this theory to be false” and i mean that’s like elementary my dear popper
but in the same vein it doesn’t matter if a theory is “true”, just if it predicts outcomes reliably enough
and being true and predicting outcomes aren’t the exact same thing
like take physics: are quarks real? they are “real” in the sense that you can make calculations with them and predict real outcomes, but a physicist will tell you a quark is really just a prosthesis for thinking
is there a thing there For Real?™
at that scale what does that even mean
so, back to the paper’s hypothesis: people reason in order to persuade people on the fence to come over to their side first, and to solve problems a distant second
what would falsify that hypothesis? dunno man, it’s psychology. falsifying their hypotheses can actually get pretty tough, i’d have to think about it
okay, what if it were true? what bets could you make and could you attribute any payoff to the hypothesis being true?
The implications of decision-based evidence-making in politics are immense. Consider the question Why Did We Invade Iraq?:
But the intelligence analysts who were most expert in the region and in the technology for making and handling WMDs couldn’t find persuasive evidence to make the case that Saddam had any, and Tenet did what he could to suppress their skepticism. A holdover from the previous administration, he had been frustrated by Bill Clinton’s lack of interest in what the CIA had to offer. For any CIA director, the president is the “First Customer”—the sole source of the agency’s power—and under Clinton that power had dissipated. By contrast Bush, especially after September 11, was riveted by the agency’s reports; he had Tenet personally deliver its Presidential Daily Briefing at 8 AM, six days a week. At last, the CIA had a seat at the big table, and Tenet wasn’t going to blow it.
[⋯]
Senior officials throughout the national security bureaucracy—Tenet very much among them—inferred from these and other incidents that the decision to invade was a fait accompli and made sure to hop aboard, lest they lose their influence. This “fevered swamp of fear and genuine threat” particularly pervaded Cheney’s office, which Draper calls “the Bush administration’s think tank of the unthinkable.” Tenet went so far as to supply Team Cheney with a “Red Cell”—a group whose job was to invent the scariest scenarios and draw the most far-fetched connect-the-dot conspiracies imaginable in “punchy” three-page memos. (“Our goal,” one of its members said, “was plausibility, not anybody’s notion of truth.”) Cheney, Libby, and Wolfowitz loved its work. (The one question the Red Cell did not ask, Draper notes, was “What if Saddam Hussein did not possess WMD?”)
AI as a bullshit machine
One of my very favorite essays on the internet, Teresa Nielsen Hayden’s Common Fraud, concludes with the chilling conclusion about faux research that “deceiving us has become an industrial process”. Pookleblinky speculates on the political implications of it becoming fully automated:
In Dirk Gently’s Holistic Detective Agency, a programmer is working on an AI system that proves theorems. He fucks up, and makes it run backwards instead: given a statement, it constructs a proof for it. The Pentagon immediately falls in love with it.
At the time, this was considered a joke. The idea that you could ask a computer how to justify why you need to go to war, or make something illegal, or kill some group of people, and it’d pop out a whole argument for it.
Now consider DALLE and similar systems. The exact same generative model can generate text, code, data, etc. Not just images. With only a bit of tweaking, a DALLE-alike could generate persuasive arguments given a prompt. A wall of text, with accompanying generated stock footage, that you could text-to-speech (TTS) and upload to youtube.
Youtube has a lot of channels which simply read news articles and reddit posts aloud, accompanied by stock images. Boomers spend hours watching this like babies watching pregnant elsa foot torture on autoplay. There is a huge youtube genre that consists of a TTS voice, reading text, while stock footage is shown. When boomers say “I do my research” they mean they watch 5+ hours of these videos each day.
With a tiny bit of tweaking, you could make a DALLE-alike that effortlessly produces basically infinite such videos. Endless content. “Dalle, make a 5 minute video about how george soros and fauci are lizard people, stock footage of scary doctors”. The mean cost of producing a truly staggering amount of propaganda, drops to basically nothing.
“DALLE, give me a 3 hour long compilation of urban decay and a narrator arguing against immigration.”
“DALLE, give me 10 30-minute videos of angry amber heard caricatures with upbeat ragtime piano music”
“DALLE, give me a 20 minute video of ben shapiro's voice talking negatively about trans people, over stock footage of monty python wearing dresses”
Basically: infinite, endless propaganda, in a form effortlessly consumed.
You already know how autoplay traps vulnerable people into radicalization rabbit holes. How much more effective do you think the pipeline to blood libel will be, when the cost of saturating the infosphere drops to basically zero? Picture a boomer, watching 10 hours of virulent hate speech on youtube, not a second of which was made by a human.
“Kermit as tony soprano” and “kermit as comic book count dracula,” except it’s mutually contradictory versions of blood libel or false flag conspiracies. Picture layers of algorithms that target an audience, generate content to show to that audience that ensures high engagement metrics, and not one of those algorithms understands what hate speech is.
Imagine a boomer, trapped in an infinite conspiracy theory rabbit hole, made just for them.
This brings us back to tech & design by way of Pavel Samsonov again, who observes No, AI user research is not “better than nothing” — it’s much worse.
Another epistemic break
I cannot resist transcribing into this post a Twitter thread from Emmett Shear about a similarly astonishing dialogue revealing a different epistemic disjoint.
I attended Hereticon in 2022, and I had an experience there that made a truly deep impact on me. Getting to the conclusion was a trek for me, so you’re going to have to listen to the whole story.
The conference itself was effectively TED from the Nega-Earth dimension: only talks the NYT would not approve of, lots of interesting young people, low prep talks, great parties.
While it is true that, as they say, reversed stupidity is not Intelligence, I had a great time. Only a couple of the talks grabbed me but I had some incredible conversations. Did you know you can use nicotine gum as a reinforcement tool to condition yourself to exercise?
At the end of the weekend I was lounging by the pool with a few friends, talking about the future. The sky was clear and the moon was up, and I said something like, “it still fills me with wonder and joy that we managed to actually land someone on the moon”.
Another conference attendee pipes up from the next seat over. “You know that was faked, right?”
It’s Hereticon. Of course someone thinks the moon landing was fake. I’m barely surprised.
“I did not know that. I’ve seen a video … tell me, how do you know?”
“Oh. Well, that’s pretty straightforward. Space doesn’t exist.”
I have to admit, this caught me a little flat-footed. It is true, if space isn’t real then the moon landing was definitely faked.
Me: “That’s a new one for me! Ok, I’ll bite. What happens if I keep going up?”
Guy: “You hit the crystal sphere.”
Me: “Ok, cool. So I have a question about what it’s made of, but let’s just say I drill through the sphere and keep going. Then what?”
Guy: “Water. Lots of water.”
Me: “Ok, we bring a submarine. What happens if I keep going?”
Guy: “You know I’m not sure … no one’s ever gone to see.”
Me: “Ok fair enough. How does the crystal sphere stay up? Why doesn’t the massive weight of water crush it and fall in on us?”
Guy: “I don’t think it works that way, but I’m not an expert. I can send you some YouTube videos.”
Me: “That’s ok. Let’s talk about another part of it. How do satellites work?”
Guy: “High altitude weather balloons”
Me: “But they go really fast, wouldn’t they burn up?”
Guy: “I’m not sure but I think the atmosphere is really thin.”
Me: “No problem, how about the ISS? I’ve seen phone calls to there in zero gravity.”
Guy: “Sound stage, it’s all faked.”
Me: “I’m not sure how they fake the gravity, but you know you can literally see the ISS from earth right? Like we could go look up where it is and see it tonight w a telescope. And it’s moving really fast … if it was in the atmosphere it would burn up.”
Guy: “Well I should go check that out, I haven’t heard that before. But it’s probably faked somehow, I just don’t know how.”
At this point, I basically gave up on the space thing. I assumed he’d gotten brainwormed by flat-earther YouTube videos. Let’s move on. But it’s Jan 2022 … COVID was still very much on everyone’s mind … so I have to ask.
“So, I have to ask. The vaccines, what’s up with them?”
Guy: “They don’t work, obviously.”
Me: “obviously, I mean that’s what I expected. But how do you know?”
Guy: “that’s easy. Viruses aren’t real. I mean, infectious diseases in general aren’t real.”
At this point I have to understand.
He seems so reasonable and well spoken, but his epistemology is completely deranged. What the hell is going on???
I dig. I dig. I ask every question, I offer mental models, I give challenges he can’t answer but nothing works.
Finally, I hit on the right question. “What is the underlying nature of knowing truth to you? What’s the gold standard for evidence? Everyone has to take a lot on faith, there’s too much to figure it all out. But ultimately, what form of evidence trumps all others?”
Interregnum: I encourage you to actually consider this question. What is it that grounds out truth for you? This isn’t a trick question, go with the obvious and naive answer.
Srsly think about your answer (and what other people might answer) before you keep reading. It’s more fun that way and you learn more.
So eventually he comes to his answer. When someone presents an idea, he connects with them. He looks in their eyes. He considers their motivations. And if their argument is internally consistent with itself. And if all the signs are green, he trusts them and that’s truth.
This just blew me away. It’s a fundamentally social theory of truth. Truth is based on credible arguments presented by trustworthy people.
That is not what truth is to me, at all.
For me, the gold standard of truth is what I can see, hear, taste, smell, feel for myself. What is “true” is what my current conceptual model of the world can infer from that data.
(Actually, truth is whatever reflects the underlying nature of reality; we are talking here only about our current best guess at truth, but this is an unavoidable distinction for finite beings who cannot know Truth.)
I’d never heard someone actually say the quiet part out loud like that before. I think perhaps many people have his relationship to the truth, but with more normal choices for people to trust, and therefore more standard beliefs (and more correct ones IMO).
At this point I completely lost interest in discussing space and viruses and etc w him any further. Because I’d understood what had been driving me to ask and ask … I’d figured out how this seemingly smart man had come to conclusions that seemed so insane.
I’ve come up with names for these two sides (tho I’d be interested in better ones, if anyone has them): team words-mean-things, and team alliances.
The sense-data theory of truth leads to team words-mean-things. Because the goal of words, at least outside of art, is to convey sense data impressions and models inferred from them. And to do this well, one must care fanatically that your use of the word is the same as theirs.
The canonical sign you are talking to a words-mean-things person is the exasperated sigh followed by “wait but i thought you said X was Y, but now it’s not Y?”
The credible-talk-from-trusted-sources theory of truth by contrast leads to team alliances. Because if you get the words a little wrong, it’s fine, you can trust the ultimate conclusions regardless as long as you can trust them. As long as they’re on your team.
The canonical sign you’re talking to a team alliances person is when they seem more interested in figuring out whether you’re on their team than whether the details of your words make sense. “Is he a good guy?”
Alliances : Words-Mean-Things :: Mimesis : First Principles
Given my team, my take feels anti-alliances. But learning-via-trust has a lot to recommend it. I don’t think one is “better”. But know yourself, know who you’re talking to, and connect at their level.