Ontology, epistemology, positivism, interpretivism
Ontology – degrees of reality
Ontology is the discussion around what is real or not real – and also, if something is real, how do we classify it? We could do the Father Ted thing of having two lists on the wall, real and not real, and adding to them, but Richard Dawkins came up with a seven-point scale for placing ideas about how real things are. He meant it specifically for talking about god, because he seems to be particularly obsessed with that, but I think it helps to apply it to anything.
So on this scale, at 1 we have stuff that 100% absolutely exists, and at 7 stuff that 100% absolutely doesn't exist.
The problem is that we can't know with 100% confidence that anything exists. I don't know that you exist, or this table exists, or even that I exist. It could just be data that's being pumped into my senses, and my thoughts might actually just be thoughts that make me think I'm alive, like Cat says to Rimmer in Red Dwarf 13. And at the other end, we can't know for 100% that something doesn't exist. So we don't have any evidence for unicorns, god, the tooth fairy, or Star Wars existing. But absence of proof isn't proof of absence. There might actually be a god; He might even be exactly as one of the various religions describes Him. Or Her. Or Star Wars could really have happened a long time ago in a galaxy far, far away.
So although we have a seven point scale, really we’re just looking at a scale that runs from 6 to 2. Like a grading system, it’s out of 100 but in reality we only give marks between 20 and 90.
So when we say something is real, we’re really looking at stuff around the 2 mark. “True” is just a shorthand for “this is the explanation that best fits our observations for the time being”. Everything that we say is “true” is really just an operating assumption. So you, me, the Big Bang, dark matter, the standard model, they’re all around the 2 mark, some maybe slightly higher, some maybe slightly lower. But we can’t get through the day constantly bearing in mind things might not exist. I’m going to assume you exist and get on with things, although occasionally it’s worth remembering what we’re experiencing is only a 2 not a 1. Same at the other end. We don’t have to worry all the time about what god might think, or try and use the force to open doors. Chances are those things aren’t real, so it’d be wrong to rely on them.
Right in the middle at 4 we have the things that ontologically we’re totally unsure about. It’s completely 50/50. Then just above that, we have the stuff that’s around 3. So maybe we’re leaning towards it being true, but there’s still some doubt. So, superstring theory for example. Multiple universes. Then on the other side there are all the things at 5, so unlikely but the jury’s still out. Like, I don’t know, the Illuminati or something.
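The scale can be sketched as a tiny function. To be clear about what's what: the verbal categories are Dawkins's, but the probability cut-offs and the function itself are my own illustrative assumptions, just to show the shape of the idea – the unreachable ends, and the working range from 2 to 6.

```python
# A sketch of the seven-point scale as a function. The probability
# cut-offs below are illustrative assumptions, not part of the scale.

def scale_point(p_exists):
    """Map a subjective probability of existence (0.0-1.0) to the scale,
    where 1 = certainly exists and 7 = certainly doesn't."""
    if p_exists >= 1.0:
        return 1    # absolute certainty: unreachable in practice
    if p_exists <= 0.0:
        return 7    # absolute certainty of absence: also unreachable
    # Anything with 0 < p < 1 lands between 2 and 6.
    for point, cutoff in ((2, 0.99), (3, 0.7), (4, 0.3), (5, 0.01)):
        if p_exists >= cutoff:
            return point
    return 6

print(scale_point(0.999))   # well-evidenced science: 2
print(scale_point(0.5))     # completely unsure: 4
print(scale_point(0.001))   # no evidence, but can't rule it out: 6
```

The point of writing it this way is that the function can return 2 and 6, but never 1 or 7 for any probability strictly between 0 and 1.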
Ontology – categorising reality
If we're looking for an example of an ontological argument about how to categorise reality, a familiar one would be taxonomies of living things. When people first started categorising living things they went by what they looked like, so feathers make you one type of thing, scales another. It's a system based on morphology. As scientists have mapped more and more genomes though, they can see how closely related things are to each other, and work out at what point in evolution they diverged. An ancestor together with all of its descendants is called a clade. If you look at cladistics rather than morphology, birds and crocodiles are more closely related to each other than crocodiles are to lizards, so grouping the crocodiles and lizards together but excluding birds makes no sense. It's paraphyletic. So now birds are classified as a type of reptile. It's also why there's no such thing as a fish: you can't group them all together sensibly in a way that includes all "fish" but excludes all "non-fish". Cladistically, anyway. Obviously if you're adopting the old system of looking at what they look like, then you can.
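The cladistic test here is mechanical enough to sketch in a few lines: a group counts as a clade only if it exactly matches the full set of descendants of some ancestor. The toy tree below (node names and all) is my own heavily simplified assumption, not a real phylogeny.

```python
# A sketch of the monophyly test behind "there's no such thing as a fish".
# The tree is a heavily simplified, illustrative amniote phylogeny.

TREE = {
    "amniote":    ["reptile", "mammal"],
    "reptile":    ["archosaur", "lepidosaur"],
    "archosaur":  ["crocodile", "bird"],
    "lepidosaur": ["lizard"],
}

def descendants(node):
    """All tip species descended from (or equal to) a node."""
    children = TREE.get(node)
    if children is None:            # a tip species
        return {node}
    tips = set()
    for child in children:
        tips |= descendants(child)
    return tips

def is_clade(group):
    """A group is monophyletic iff some ancestor's tips are exactly the group."""
    return any(descendants(node) == set(group) for node in TREE)

print(is_clade({"crocodile", "bird"}))             # True: archosaurs
print(is_clade({"crocodile", "lizard"}))           # False: paraphyletic, birds left out
print(is_clade({"crocodile", "lizard", "bird"}))   # True: reptiles, once birds are in
```

Swap the genome-based tree for a looks-based one and `is_clade` would give you different answers for the same groups – which is exactly the ontological point: the categories depend on the tree you choose.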
Ontological questions about how to organise things then run throughout our perception of reality; they can actually alter how we view reality. "This is part of this, but not part of that" can sometimes be absolutely crucial. Linnaeus may have been really keen on labelling plants and opisthokonts (ie fungi and animals) and that might have helped us understand the natural world, but he was well shite when it came to categorising humans, for example. He also obliterated indigenous people's names for things when he did so, which may have changed how we perceive Western academia's relationship to them.
But perception is more the domain of the next bit.
Epistemology – positivism
What gets you closer to the truth (or not) is a question of epistemology. So ontology is what’s real or not, epistemology is the approach by which we determine what’s real or not. There’s basically three types of epistemology. Finding things out by measuring things, finding things out by interpreting things, and making things up. So that’s positivism, interpretivism and belief.
So first off, positivism. The positivist approach is to look only at things you can measure with instruments. The idea is that this is objectively getting at the truth by looking at numbers on dials, or scans, or whatever – what's sometimes called instrumental reality. Positivism is the cornerstone of the scientific method, which works like this:
- You have theories about how the world works.
- You test them with experiments.
- The results match your theory, so you think you've got to the truth.
- Then you carry on doing experiments until one of them doesn't match the theory, so you need a better theory.
- When you've come up with a few candidate theories, you do more experiments to confirm which one is best. That becomes the new truth.
- And then you start the whole cycle again.
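The cycle above can be caricatured in code. This is a minimal sketch under my own assumptions – a hidden "true" law of y = 3x, a bit of instrument noise, and a deliberately crude way of revising theories – the point is only the shape of the loop: keep a theory as "true" until an experiment contradicts it.

```python
import random

# A toy caricature of the conjecture-test-revise cycle. The hidden law,
# the noise level and the revision rule are all illustrative assumptions.

random.seed(42)

def experiment(x):
    """Measure the world: secretly y = 3x, plus instrument noise."""
    return 3 * x + random.gauss(0, 0.05)

def fits(theory_slope, trials=100, tolerance=0.5):
    """Does the theory survive repeated testing within tolerance?"""
    return all(
        abs(experiment(x) - theory_slope * x) < tolerance
        for x in (random.uniform(1, 10) for _ in range(trials))
    )

theory = 2                  # current best guess
while not fits(theory):     # a failed experiment forces a better theory
    theory += 1
print("operating assumption: slope =", theory)
```

Note the variable name: what the loop ends with is an operating assumption, not capital-T truth. Make the instruments more sensitive (tighten the tolerance below the noise) and even the "correct" theory would eventually fail a test, restarting the cycle.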
People are pretty bullish about positivism because it’s been really effective at working out what’s actually going on.
There are problems with the approach though. One is that people sometimes forget that nothing scores above a 2. They mistake their current best guess for what's actually happening. It's the best way to get closest to the truth, true, but you never quite get there – like Zeno's dichotomy paradox, forever halving the distance without arriving.
The other problem is that sometimes the experiments give the wrong results. So for instance you fire neutrinos through the Earth and find they're travelling faster than light, but later figure out that a loose cable has thrown off your timing. Or maybe it's your analysis that's wrong, like the dead fish experiment in neuroscience. If you do a brain scan you can see effects that look like a causal relationship between showing someone pictures and the reaction in the brain – but you also get a reaction if you plug a dead salmon in at the other end. You need to account for random fluctuations: run enough comparisons and some will look significant by pure chance.
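The dead-salmon effect is really the multiple-comparisons problem, and it's easy to simulate. The numbers below are illustrative assumptions (a thousand "voxels", the conventional 0.05 threshold), not the figures from the actual study: with pure noise, roughly 5% of tests clear an uncorrected threshold anyway, and a correction like Bonferroni's accounts for that.

```python
import random

# Scan a dead fish, test a thousand pure-noise "voxels", and some will
# clear an uncorrected significance threshold by chance alone. Voxel
# count and thresholds are illustrative, not from the actual study.

random.seed(0)

N_VOXELS = 1000
ALPHA = 0.05                 # conventional per-test threshold

# Pure-noise data: under the null hypothesis, p-values are uniform on [0, 1].
p_values = [random.random() for _ in range(N_VOXELS)]

uncorrected = sum(p < ALPHA for p in p_values)
bonferroni = sum(p < ALPHA / N_VOXELS for p in p_values)   # corrected threshold

print("voxels 'lighting up', uncorrected:", uncorrected)   # around 50, by chance
print("after Bonferroni correction:", bonferroni)          # almost always 0
```

That's the whole dead-salmon lesson in two numbers: dozens of false positives from a fish with no brain activity at all, and nearly none once you account for how many tests you ran.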
Then there’s a lot of cultural bias. So for example, if you’re testing a theory, the one that gets the most funding is the one propounded by the most eminent of scientists, and they’re often old white guys. If there’s other theories, they can get held back for a while. Usually until all of that generation of old white guys are dead. You can see the social effects on the progress of science.
The thing is though, that the process is self-correcting for social bias. If a theory doesn't work, you'll have lots of people doing experiments in all parts of the world, and coming up with theories and eventually one will look better than the rest to most people, and that's the one that generally gets adopted. You get a consensus irrespective of culture. At the boundaries there's contention, but in the main body of science there isn't – the main body is more or less everything that happens from the first 10⁻³⁵ seconds after the big bang up to now, everything bigger than a quark, anything smaller than the observable universe. This main core of science is the same for everyone, no matter where they are and has been contributed to and tested by cultures on every continent on the planet. The cultural bias doesn't change the overall direction, it just slows it down.
Epistemology – interpretivism
The other approach is interpretivism. Interpretivism is more subjective, in that it's interpreting what's going on. You might not have anything you can actually measure with an instrument, so you need to ask a lot of people a lot of questions. This is a bit more systematic than a bunch of anecdotes, in that the idea is that you ask a large representative sample of people, and aren't selective about which responses you look at. The criticism is that it's still just a collection of opinions and it's not reliable enough. As Roosta would say, you can't scratch a window with it. Interpretivists would argue that positivism is so culturally biased that everything is interpretivist, which is just fashionable nonsense. Obviously if thousands of people from all over the world do an experiment and get the same result, which confirms the generally accepted theory, that's not open to interpretation. To claim it is just seems like an inferiority complex on the part of the interpretivists. The real strength of interpretivism is that it produces something like a version of the truth where positivism couldn't get you anything. Anything to do with how people behave socially has to be interpretivist, because people are way, way more complicated than cosmology. You can't put them in a laboratory and see how they perform in the real world, because once they're in a lab they're not in the real world any more. So all you can get is a mass of opinions to interpret. But that's OK, because it's better than the alternative. Which is nothing.
And there's a huge number of interpretivist approaches – feminist, postcolonialist, Marxist, basically anything with an -ist on the end. They're all a valid way of approaching the world to some extent, as long as they can accommodate all the data observed and are precise about what their limits are. The mistake is calling them theories. That's a positivist word. There's nothing predictive about interpretivist approaches. You can't say "in this and this situation with people, this will happen". It's too complex. And vague. What you've actually got with interpretivist approaches are different narratives, or lenses, through which to describe what's going on. As Jitse said in a previous episode of Pedagodzilla, all models are wrong, some models are useful. The important thing is not "can we prove it?", but is it reproducible enough, and generalisable enough, and does it explain enough of the observations to be useful?
Epistemology – belief
Finally, we have making things up as an approach. There are a lot of in-built features of the way minds work that mean we tend to look for patterns that aren't there – which is called apophenia. We recognise simple messages more readily than complex ones. When we make connections in our heads that make particular sense to us we get a dopamine hit. That leads to aberrant salience: things get connected that shouldn't be. So for example, there's a lot of intricate stuff about crystal healing and resonance, which makes no sense physically, but sounds good as a story. There's no scientific rationale behind it at all, but it works as a placebo because it sounds plausible to people who skipped physics in school.
One thing positivism and interpretivism are bad at is creating the sort of stories that have emotional truth for people. You can't all get together and have a good time based on the standard model, or the general theory of relativity. The myths that we create hold communities together. They bring people comfort. So if you've moved to a new place and you're wondering what church to join, for example, someone coming along and saying "well, you have no evidence for your faith, so why bother?" is completely the wrong epistemology. We talked about Buffy as if the show was real in a previous episode. It would be completely out of place to continually remind everyone it's not real while we're doing that. I've used the phrase "science needs to stay up its own end" before, which I don't think people would get unless they grew up on a working-class housing estate in the 60s. Basically, those spaces could be very territorial. You learnt where your patch was, and if you strayed into someone else's you got told to stay up your own end. Too many epistemologies try and muscle in on someone else's patch. Lots of epistemologies are dying out through competition from other worldviews, thanks to just this sort of intrusion – it's called epistemicide. That seems like a bad idea, because we're losing other ways of perceiving the world. Colonialists need to stay up their own end.
But … the problem also works the other way when you start using your beliefs to make decisions about real things. So if you’re looking for a response to covid-19 you need to use a positivist approach and do clinical trials to find out what will work, and what won’t, you don’t just tell people you’re protected by the blood of Jesus. That’s a category error. Or you’re deciding whether gay people should be able to adopt. You can’t use a positivist epistemology (because there’s no instrument that can measure that) or a belief-based one (because it’s way too important to base it on something someone made up). You need to look in between at interpretivist approaches and gather data about what people’s experiences are about children of gay parents. And as it turns out, there’s no major difference. To insist on something being your way because you read it in a book somewhere is simply bizarre. I don’t need to do a routine on that because Patton Oswalt has already done that.
Critical realism and ontological hygiene
So what's the proper epistemological approach? Well, one of the things I learnt from physics is that where you've got a binary choice, the answer is nearly always that both are right. So is light a wave or a particle? It's both. The same's true here. I'm really suspicious of people who say "I'm a positivist" or "I'm an interpretivist". Neither is appropriate all the time. There's an epistemological approach called pragmatism, or realism, sometimes critical realism. It's about adopting the correct epistemology for the domain that you're looking at. So if you're doing a physical science, or chemistry, or medicine, you have to take a positivist approach: you measure things and look at the numbers, and that gives you something ontologically that scores a 2 or maybe a 3 (or is disproved down to a 6). Or you're looking at how people think or behave. You need interpretivism, because there are no laws that predict how people behave, and that's only going to be a 3 at best. That's not as good as a 2, but it doesn't have to be to be useful. Just let it go. At the other end you have all the stuff that has no evidence for it at all. But that's OK too, science can stay up its own end. And as anything you can make up is ontologically a 6 and never a 7, that gives you a lot of wriggle room. "You know, it's possible God, or Severus Snape, or the Dalai Lama does exist, and believing that makes me feel happy, so I'm going to believe it." The problem is when you start misapplying the made-up stuff to make decisions about real things. Even then, I guess as long as your actions don't harm someone else, feel free. But if someone else is going to be affected, you need enough evidence to score a 2 or a 3 on the ontological scale, or you're being a complete dick.
It’s all about being aware of where things are on the ontological spectrum and using them appropriately – what’s called ontological hygiene. Maintaining that ontological hygiene, and being able to switch between the different epistemologies, is where liminality comes in, but that’s another episode.
Edited 16.12.21 when writing the companion piece in the Pedagodzilla book I realised I’d switched Dawkins’s scale round.