On losing a pet

There’s a short story we were set to read during my Creative Writing MA. More a vignette really. It was about the happiest moment in someone’s life. A guy is on a train, his family are around him, his daughter rests her head on his shoulder. At the time he doesn’t realise it’s the happiest moment of his entire life, but bit by bit things fall apart. The story had a huge emotional impact on me, and when I chatted to the others in my tutorial group, it had the same effect on them. It’s about a happy moment, but the inevitability is that the rest of your life will always live in the shadow of that moment. As someone pointed out, if you are a parent, there will be a point in your life when you put your child down and, unless you’re a weight-lifter, or they remain much smaller than you, never pick them up again.

This morning the second of my two cats, Pash, was put to sleep. She had a severe heart episode on Sunday, which left fluid on her lungs. We hoped it would be possible to manage her medication so that the fluid could be removed without dehydrating her too much, but there was no way to balance the two demands on her body. This morning her breathing was laboured from the pressure of the fluid on her lungs, and she still wasn’t hydrated enough. So with the vet we decided it was the right thing. It was quick overall: from having the episode, to five days of care, to the end. She was 17. I’d had her since Easter 2008.

My elder cat, Sinta, had kidney disease. She was being treated and was on special food, and lasted seven months from initial diagnosis to a moment when she degenerated rapidly. That was Feb 4 2023. She was 16. I’d had her since 2006.

I initially had Sinta because my ex-girlfriend suggested it. We’d had an on-again, off-again relationship for six years. We’d row, split up, one or both of us would get lonely, and we’d get together again. The last split was amicable, but her suggestion that I get a cat was (fairly transparently) so that, with something to stop one of us feeling lonely, we’d break that cycle.

It worked. It sounds cheesy, but it really felt like I’d found a soulmate. She’d climb onto my shoulder and happily sit there. Be content to just sit on me while we stared at each other. Come when I called her. The relationship felt more like having one of Pullman’s daemons than a pet.

Then I got a full-time in-person job and I thought she’d need a companion, so I got Pash. Which, after an initial bit of conflict, also worked. There’s a painting my father did of the two of them, sitting next to each other on the windowsill looking out of the window. He’s caught their postures perfectly, their tails curling in mirror reflections of each other, Pash still not fully grown.

One of the things about autism: people are tiring. The constant extra effort in determining what’s going on, the wealth of information about emotions, expressions and tones you’re expected to parse. The constant self-management of masking. Being away from people is a relief. They don’t make effective constant companions. Even if you find the perfect match in a partner, they’ll probably be autistic too, so they’ll need to withdraw for long periods of time. Which I understand, but it can also feel lonely.

Cats (and, I suspect, dogs) don’t judge. You don’t need to mask in front of them. Yet they still come to us for comfort, they still need us, and are still comforting. Physically they have perfect fur, they purr (one up on dogs there), they’re warm, their weight is comforting as they lie on you. They vocalise enough but not too much. Both of mine had a wide range of sounds that could almost be a conversation. They’d play. But most importantly, they would always be there. When I went through a major depression after getting my PhD (no job, all my social contacts had withered, the one thing I’d focused five years on had ended, and there was nothing to take its place) the cats were there. During covid, they were there. They were my bedrock. For a lot of people, pets become part of the family, but for long periods of my life, those two were my family.

After Sinta died, Pash came into her own. Suddenly there was no competition for my affection. I’d get headbutts, she’d stretch her paws out and claw me in sheer happiness at being near me. Undisguised, unfeigned.

I knew my time with Sinta was limited, which meant much more focus on my time with them. There was one moment I remember, lying in bed, one each lying contented on a shoulder, purring in absolute bliss, and I felt utter contentment. And I know that is the moment that the rest of my life will exist in the shadow of.

I miss them, obviously, and that’s where a lot of the grief comes from. Knowing that they won’t be around to come up to me when I get home, or jump on the bed to be allowed in to sleep next to me, or sit on me and let me talk to them or sing to them (I’m sure Sinta loved my rendition of Klokleda partha menin klatch). But much of the grief comes from knowing that feeling of contentment, of just sheer happiness of having those balls of fur lying on me, trusting me, sharing their contentment with me, is gone forever.

I have a metaphor for grief. It’s the sort of thing Langdon Jones once described as antipoetry because it’s so banal. Grief is like a hydrofoil. While I’m busy skimming along, I’m above it all. I can work, write a blog post, watch TV, read a book, and it’s pushed away. But as soon as I stop, I sink into it. Waves of it come at me and every seventh one, or so, is so big it swamps me. Even after two years, I’d be physically hit by a wall of grief that Sinta was gone. Now they both are. All you can do is let it drive you down, and hold on until you come up for air.

This time is, I think, easier. I’m not sure why. I think because with one cat there was still something emotional going on inside. This time, with none, I don’t think that connection with the emotions is working so well. I’m just on autopilot now. Maybe I’m wrong. It’s still early days. I didn’t save any of Pash’s fur from the last time I groomed her, so I hunted around my room for traces of it. I gathered up a small handful, and yes, it’s soft and white, but there’s no emotional connection to her in it, no tactile memory of her. It’s just fur. I wanted to feel something from it though. Still, it’s good to have the scrappy bits of fur, because if I didn’t I’d regret not keeping some so that I could have that connection; this is evidence that it wouldn’t have worked. I haven’t been able to look at any photos or videos. Still can’t for Sinta. They might have a better connection to them, but that might be overwhelming. Like it would trigger this big ball that’s just inside me that I can’t let out, when I want to let it out in manageable amounts. Maybe it’s there to stay.

A friend has talked about a ceremony for emotional closure. I’m not sure I want that. I’m not sure what would be left if I got closure. Today’s card on my tarot app says “Keep going, knowing that the journey does not end.” So — good advice. We’ll see.

<edit> I’ve come back in to change the names of my cats – I don’t remember if I’ve ever used them as passwords but it’s possible at some point over the past 19 years that I have.

I’m realising that the effort (on day 1) of getting through is added to by a sort of muscle memory of them. Lying on the bed anticipating one jumping up onto it, to the extent that I think it’s happened. Like a phantom limb. Opening the curtains and looking down expecting one of them to be looking through the window, and to look up and give me an acknowledging meow.

All those moments were small little lifts that brightened each hour. Now they’re not there. I keep leaning on something that’s absent, and keep falling. And much as that habit is painful, the worry is that it’ll wear off and I’ll forget what it felt like. Just sharing my life that closely with another for so long that your behaviours are intertwined, not just your lives.

Also, why I’m writing this post. The main aim is that there’s probably someone out there feeling the same thing. It might help to see that someone else understands it.

Appetite and Appetition – the philosophy of Christmas cake

I have a FreeStyle Libre sensor in my arm. A needle is inserted into the tissue of my left biceps and records the glucose levels of the interstitial fluid there. An app on my wife Anna’s phone records the variation in glucose levels, which she correlates with the food we ate.

In describing this type of change in substance (the substance here being my interstitial fluid), the “Monadology” offers insights, as in the following:

10. I also take it for granted that every created being is subject to change … and even that this change is continual in each one. (Leibniz, 1867; 129)

My glucose levels are subject to constant change; Christmas cake may influence them, but the levels are in constant flux anyway. Further:

13. every natural change takes place by degrees, something changes and something remains; and consequently … there must be a plurality of affections and relations (Leibniz, 1867; 130)

In this example, there is both the underlying condition of prediabetes, and the elevated glucose levels as the passing change. These have a different set of relationships to me, to food and to each other, and

14. The passing state, which encompasses and represents multitude within unity … is nothing other than what is called perception (Leibniz, 1867; 130)

How much of this is due to the cake itself, how much to other contributory factors, and how this can be untangled from the underlying condition, is still a matter of interpretation. And beyond this, the information itself is not knowledge that can be used effectively without being parsed via a summary Anna has made, which approximates a healthy glucose range against which to interpret the numbers generated (see Figure 1).

Figure 1: A healthy glucose range

To distinguish between the data recorded by the sensor and what those data mean, Leibniz coined the term “apperception”. Thus, the sensor and the iPhone perceive (in Leibniz’s sense) the glucose level, but it is at the point at which Anna interprets the data that apperception occurs, which “is the consciousness of (or reflection upon) a perception” (Strickland, 2014; 67).

Leibniz next defines the process by which these perceptions are made as appetition.

15. The action of the internal principle which brings about the change or passage from one perception to another may be called appetition. It is true that the appetite cannot always completely reach the whole perception it aims for, but it always attains something of it, and reaches new perceptions (Leibniz, 1867; 130)

In distinguishing between perception and apperception, Leibniz sees perception as being exhibited by substances that are not conscious (1867; 130). However, here he describes appetition as a change in perceptions that is driven by an aim. Strickland is forgiving of Leibniz’s lapse into teleological thinking, stating that

this is not necessarily a conscious striving: ... in much the same way that a computer script can be said to strive, automatically and unconsciously, to complete each step of a subroutine. (Strickland, 2014; 68)

I argue that a phone app does not display an “aim” in its striving for a whole perception, as “aim” implies conscious intent, which transcends mere programming and requires desire. I’ll therefore distinguish in this discussion between programmed appetition and consciously driven appetition, modifying the term to “appappetition” when the aims are the result of programming (i.e. appetition by an app), and retaining “appetition” for consciously driven perceptions (and “appetite” for my desire for Christmas cake).
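To make the distinction concrete, here’s a minimal sketch in Python. It’s purely illustrative: the readings and the “healthy” band are invented for the example, and none of this is the actual Libre app’s code.

# A toy illustration of "appappetition": a script that moves from one
# perception (reading) to the next under programmed, not conscious, aims.
# The readings and the healthy band below are invented for the example.

HEALTHY_RANGE = (3.9, 7.8)  # an assumed glucose band in mmol/L, illustration only

readings = [5.2, 6.1, 8.4, 9.0, 7.2]  # made-up interstitial glucose values

def perceive(value: float) -> str:
    """The app's 'perception': it records and classifies, but does not reflect."""
    low, high = HEALTHY_RANGE
    if value < low:
        return "low"
    if value > high:
        return "high"
    return "in range"

# The loop strives from one perception to the next automatically and
# unconsciously (Strickland's computer script). Apperception only happens
# when a conscious reader interprets the output: "that spike was the cake".
for minute, value in enumerate(readings):
    print(f"t={minute}: {value} mmol/L -> {perceive(value)}")

The point of the sketch is that the loop “attains something” of each new perception, in Leibniz’s phrase, without anything in it being aware that it does so; the awareness only arrives with Anna.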

Leibniz immediately contradicts his definition of perception by stating

17. Moreover, we are obliged to admit that perception and that which depends on it cannot be explained mechanically, that is, by means of shapes and motions (Leibniz, 1867; 130). 

that is, material things cannot perceive, which would mean that the ability to perceive is lost once simple substances are combined into more complex ones, though Leibniz provides no explanation as to how. Strickland attempts to reconcile this by supposing there may be an intermediate logical step that Leibniz has omitted, or that this is simply a self-contradiction, before stating that Leibniz’s consequent arguments follow on from his M14 statement, not his M17 statement (Strickland, 2014; 72), thereby dismissing M17. A third possibility is that Leibniz has simply failed to apply his developing terminology consistently; if we read “perception” in M17 as “apperception” then Leibniz’s argument is coherent.

Leibniz, G. W. (1867) ‘Monadology’ (tr. Hedge, F. H.), The Journal of Speculative Philosophy, 1(3), pp. 129-137.

Strickland, L. (2014) Leibniz’s Monadology: A New Translation and Guide. Edinburgh: Edinburgh University Press. https://doi.org/10.1515/9780748693238

What’s it all for?

I was talking to a friend last week and he summarised my approach to life as “it doesn’t have any meaning, but that’s ok”. Which is spot on. I do actually envy people who can convince themselves there is ultimately a point – that there’s fate or a god, or something watching over them – and who can turn off the rational part of their mind like that at will. That’s not supposed to sound snide or anything; it’s a skill, and there’s a demonstrable link between being able to do that and positive mental health.

That sort of thing is easier in crowds – the “collusion in delusion” – which explains the popularity of churches, sporting events and cinemas, and of course most of us can do it for specific periods and locations; it’s the theoretical basis for much of playful learning – Huizinga’s magic circle. There’s contentment in those moments, and spaces that can help us reach that point are worth seeking out. A prime example for me was Seonbawi rock in Seoul.

There was something about the peacefulness, the immense solidity of those two rocks, the tolling of a bell at sunset from a nearby temple; everything just felt … OK. Only one person visited during the hour or so I was there, and it’s a touchpoint I can call up. I’ve not found the equivalent in the NE of the UK, except, I guess, looking down on the valley my home sits at the end of. Sheep, cows, rabbits, various birds; it connects me (well, anyone) to those metanarratives Serres discusses in The Natural Contract.

Of course, there’s not actually any pattern – thinking you can see one is a warning of incipient apophenia. Something to be indulged in briefly, but it can tip from rabbit-hole to tar-pit if you’re not watchful. Don’t believe in yourself, don’t deceive with belief. All that quicksand stuff.

But when you’re enacting practice, teaching, researching, doing your job, is it necessary to think that ultimately there’s a point, to motivate yourself to keep going? I was reading Lyotard last week, The Inhuman (specifically “Can Thought Go on without a Body?”), and in it he discusses post-solar humanity (I’m studying post-humanism and trans-humanism) and the ultimate fate of humanity: either destruction when the sun dies, or escaping that destruction by becoming something non-human. Lyotard’s point is to show the fundamental error in unlimited technological progress – either it’s not possible, because the sun will undergo a helium flash in 4.5 billion years, or it’s undesirable, because the only logical end point is for us to not be human any more.

To which I’d answer “generation starships”. Or “pantropy”. Or any of the known SF solutions. I don’t read Lyotard’s question as a hypothetical – I mean, what are we going to do? I’m reminded of a line from a Woody Allen routine where a woman turns him down with the line “not even if it would help the space programme”. Is all our endeavour actually reducible to this one goal? It could work for me – understanding virtual embodiment, how humanity is reflected in our avatars, how an extended body works via telepresence; all that could help us survive the ultimate fate of the solar system. How would what you do help anything long term? Except …

we’re just postponing the inevitable. The heat death of the Universe. There is no long term solution.

Maybe just getting a few extra billion years on humanity’s clock is point enough? But possibly it all seems a bit abstract for day-to-day life. I was chatting with another friend over the weekend and her answer was to have as much fun as possible without causing harm to anyone.

Not sure how that justifies me doing what I do. I suppose a lot of it is fun, and when it’s not fun I justify it in terms of it earning me enough of a living to spend money on things that are fun. I’m sure there’s an integral equation for that, so that you could work out how to maximise fun over time.
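Purely as a playful sketch (the symbols are my own invention, nothing rigorous), that equation might look something like:

\[ \max_{a(\cdot)} \int_{0}^{T} F\big(a(t)\big)\,\mathrm{d}t \quad \text{subject to} \quad H\big(a(t)\big) = 0 \text{ for all } t \]

where a(t) is whatever you’re doing at time t, F is the fun it generates, H is the harm it causes (my friend’s constraint), and T is however long you’ve got. But that, as a philosophy, has actually been captured succinctly by The Wyld Stallyns: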

Be Excellent to Each Other

Party On Dudes

Is that actually ultimately the point?

Letting the GenAI out of the bottle

I’ve had a couple of interesting conversations at work recently about the use of AI in education – prompted largely by sharing this poem

https://poets.org/poem/student-who-used-ai-write-paper

which asks the question “I know your days are precious on this earth. But what are you trying to be free of? The living? The miraculous task of it?”

It’s a good question, and I think a good one to raise with students, because it reframes the whole relationship between teacher, student, assessment and study. We’re not (or we shouldn’t be) trying to persuade students not to use AI because we don’t want them cheating, or because there’s a standard we want them to attain under some artificial constraints just to make assessment more challenging (which we shouldn’t), but because there are skills we think they should acquire: skills that will develop them and their interaction with the world, and let them feel the pleasure of enacting their abilities well.

AI has its place – in the words of someone I was talking to at a conference recently, it’s good for doing the boring stuff we already know how to do. There’s also the possibility you could get by through getting AI to do the work, but to progress past a certain level you need the skills that (if you’ve used AI) you’ve bypassed the acquisition of. For example, you could get AI to write an essay that synthesises different writers, but to create something novel you need to make associations that aren’t obvious. To do that you have to have the ability to summarise papers, follow citations, pull out key thoughts and abstract them.

Also, to stick with it, you’ve got to find where the fun is in it. In the degree I’m doing at the moment, I’m enjoying doing the assignments because I’m finding my own take. For example, I developed my essay on Leibniz by relating each aspect of his philosophy to a different cake metaphor. Because I like cake but I can’t eat it, basically.

Though having fun with something is really possible only when you’re not overly concerned with the mark you’re going to get, and that, as I said in a meeting last week, “is only possible when you’ve reached an age where … err … you’re confident enough that you don’t feel the need to prove yourself further”, to which my colleague responded “you mean run out of fucks to give”, which is exactly what I was going to say before I self-censored. 😀

The issue is that students are just scared: scared by the amount of assessment they have to do, scared by the amount of competition (some people still do normative grading – which is inexcusable) and scared of screwing up. Sitting back and smelling the roses – the pleasure in just learning – is rarely possible.

What we can do is at least make their engagement with AI authentic. People who insist on written testing simply so that they can be sure it’s the student’s own work need to think again. If AI can do the thing we’re testing them on, and will do it better, then – and I’m going to put this in capitals so that it stands out, because it’s key –

WHY THE HELL ARE WE STILL TEACHING THEM TO DO IT?

If this is a skill AI can perform perfectly, then it’s not something that’s worthy of a human doing. So maybe this will rule out a huge chunk of a maths syllabus, for example, or coding. Fair enough. Rethink your syllabus from the ground up. Maybe it’ll make it easier; well, deal with it, you’re now teaching an easy subject, and all the people who can’t do the tricky things will take yours as the easy option. But putting in artificial barriers simply to make the assessment harder (like in-person testing) is missing the point of what education is for (the subject of my next post). Find a way of assessing which actually challenges the student on something that has some value, like groupwork, or have an assessment that checks in on them frequently so you can observe their process.

Avoiding coming up with authentic assessments, which test the non-AI skills, is simply failing the students, yourselves, and the education system. In fact, that’s where the cheating is, not in the students using the AI.

Curriculum of kindness

I’m part of a critical pedagogies group at work. It’s often a reading group – someone picks a book, we pick a chapter from it and then talk about what we’ve read. It’s an excellent way to prompt reading and discussion, but it’s mainly a chance to just chat with colleagues once a month. So many of the meetings we have are around the basic transactional stuff of getting academic development and teaching done; it’s good to have something that’s more about why we’re doing it. We’re always advised to get to know the people around us because it helps get work done, but the vagaries of “just attend lunch” or “just come in to the office” don’t really support that. It has to be more structured than that, but not too structured, because then there’s no chance to actually chat about the things that interest us. A reading group hits the sweet spot on that spectrum.

The last book selected was Enacting a curriculum of kindness and I chose the chapter “Kindness in curriculum and course design”. The two authors work at Southern Cross University’s Centre for Teaching and Learning. They describe themselves as “lecturers” – the quotes are theirs; the point being that they don’t give lectures. They also don’t have tests. The rationale for the former is that lectures don’t work as effectively as other forms of learning – as indicated by this study:

Schmidt, H. G., Cohen-Schotanus, J., Van der Molen, H. T., et al. (2010) ‘Learning more by being taught less: A “time-for-self-study” theory explaining curricular effects on graduation rate and study duration’, Higher Education, 60(3), pp. 287-300.

There are rationales for lectures, which is why they still have a place, but they’re not about students’ learning – they’re more about allowing students to get to know their module leader and maybe identify them as a role model. And because lecturers like demonstrating their knowledge (why not allow teachers to do stuff just because it makes them happy? – there’s little enough around that does that).

The rationale for dropping testing, though, is that not only is it pointless (it tests skills – memorising a bunch of stuff and working under time pressure – that rarely have any value), it’s also unkind. It seems a dreadful way to treat other people, to put them under that kind of pressure.

Actually, on the lectures thing: it is also unkind to take fees from students and then fob them off with as poor an educational experience as lectures. Sure, a few are nice to have, but for them to predominate isn’t giving value for money.

So – Southern Cross has already achieved what seems like a far-off dream … no lecturing and no testing.

What was revelatory for me about the book chapter was that the authors recount the opposition to their learning design work from colleagues, who positioned that as unkind. Which sent them into a sort of spiral of self-doubt and questioning – am I being unkind by asking colleagues to do stuff which they’re uncomfortable with?

The example they start with is the backlash against online submission of assessments. Been there. When I was at Coventry, we introduced online marking, and to smooth things over I was tasked with 1) looking at devices where people could mark using a stylus like they used to and 2) looking at bulk printers so that academics could print stuff out. As an initial postdoc role, measuring print speeds of various machines was not (I felt) making best use of my PhD.

But it came from a good place – being kind to the academics. Then again, looking back, none of that resistance was justified, because we now just mark online and get on with it. And it’s fine.

Throughout the chapter, Mieke and Lachlan lay out how the curriculum reforms institutions are already enacting are also kinder. Active pedagogies, authentic assessment and constructive alignment would be the obvious ones to me, but they also mention moderated assessment and feedback, which makes sense, as it’s through this that students perceive fairness of treatment. It’s not a question we ask regularly enough when we encounter poor and recalcitrant pedagogies – “how is this kind?” As if kindness is somehow not a factor. When ultimately it should be the most important factor. If we’re not hardline about kindness, how are the students going to demand that of each other and themselves? If we’re not sending out a generation of kind people to be in the world, we’ve failed at our most important task.

What Mieke and Lachlan keep coming back to in their chapter, in amongst this, is how reform is unkind to their peers. It’s a difficult thread to navigate. Wait. A difficult course to thread. Some metaphor. Balance. How do you balance what needs to be done to be kind against the fact that doing it requires work, stress, re-evaluation?

I’d say demanding change in and of itself is not unkind. It’s the difference between what my colleague who chose the book calls brave spaces as opposed to safe spaces. Spaces should not be comfortable, we should be challenged to change practice, and this in itself should not be stressful. Learn to mark on a screen – it won’t take long to adapt. Where it becomes unsafe and unkind is when the time and the support for the change are not supplied – and that’s a structural thing. The authors’ solution is more support staff to help with the educational technology. Well, I’d say more learning designers too. But also time. To redesign. To get your head around the fact that lectures and testing are bad models for learning. To have those parts of the month allocated to sitting around and absorbing the ideas. But then do it.

The self-questioning is also revealing from the perspective of why kindness is perhaps more difficult to demand than anything else: the people who are working from a position of kindness are less prone to making demands. We’re self-doubting and always try to place ourselves in the position of the other person, which is partly self-defeating. Whereas those without any self-doubt get to take the lead, because they have no self-doubt, but are therefore usually wrong.

Anyway, read the chapter. I’d strongly recommend it.

Reflecting on reflection

As part of a project I’ve just started working on I spent two days last week at a castle in Yorkshire.

The project is fascinating – and fun – it’s about integrating playfulness within the curriculum and measuring its impact. You can read more about the project at https://research.northumbria.ac.uk/replay/. One of the things that came out of the workshop was being given the task, with three other participants, of coming up with the structure for one of the final steps, which is where we all reflect on the project. We decided that we’d actually plan the reflection for the whole project all the way through, as one is so dependent on the other. As the learning designers on the project (across six universities) have to keep research diaries, it makes sense that these should all be integrated.

As part of this “working group”, one of the people in the team shared this: https://eprints.staffs.ac.uk/9080/1/Lesley%20Raven_Thesis_2025.pdf – it’s really focused on reflection in design education / studio learning, that general domain, but there’s a lot that’s transferable to other disciplines. The thing that fed into our process on the day was the key phrase “Reflection is not admin”. If we have to do repeated reflection throughout the project, it shouldn’t feel like a chore, but actually be playful practice too.

I started skimming through it, but there’s such a lot of good stuff in there, I’m reading the whole thing from beginning to end.

What I particularly liked about the bit of it I’ve read so far (p. 34–) is the breakdown of reflection into these five themes. The design ed focus shows through, but most disciplines will have something that aligns to these.

1) Technical rationality – this is really the basic “did it achieve what it set out to do?” Most tasks will have some learning to acquire, even if it’s not a technique or skill. Bottom line: did it work?

2) Artistry – again, this is obvious for design ed (and art), not so much for other things. Though this book shows how artistry is a principle that can be applied anywhere. Chapter twelve is especially worth a read.

3) Constructivist assumptions – constructivism is the principle that we develop knowledge by building on what we already know – and so this reflection would address the extent to which these assumptions have been met. Did we build on any knowledge?

4) Tacit knowledge – tacit knowledge is the hidden bits of knowledge we have but aren’t aware we have. This part of the reflection aims to unpack that aspect. So the question here might be “has any knowledge or understanding emerged through the process of reflection?”

5) Mind and body dualism. This is a bit trickier to apply generally. The thesis isn’t really suggesting we adopt mind-body dualism, but that we be aware of it. We often ask ourselves how our minds have developed, but of course design ed is also about physical skills. The two are so intertwined that the idea of dualism is outdated. Embodied learning. Post-humanist post-dualism etc etc. Lots of post. It’s a bit difficult to say what your body has learnt from … say … a maths lecture … except that lecture seats do not provide adequate lumbar support for 62-year-olds. For me this makes sense in terms of Gibbs’s “feelings” stage of reflection. How did it make you feel? When you reflect, do you cringe or smile? There’s a quote by Polanyi (1974) which helped me get my head round this – “practical wisdom is more truly embodied in action than expressed in rules of action”. So basically, don’t overlook the embodied aspect. It’s the ontic before the ontologic (to get all Heideggery again).

Also, reading the thesis, I realise how much scope there is to reflection – it’s not a boring chore if you do it properly – it can be a creative act in itself. Maybe the most creative part of the cycle, if done in a fun and playful way, which is what we’re here for.

Obviously I blog, I do podcasting and I have a constant internal self-critical monologue so reflection is something I do a LOT of, but finding alternative mechanisms to reflect that are creative and invite people to want to reflect is, I’ve realised, going to be a fascinating parallel part of the project.

Polanyi, M. (1974) Personal Knowledge: Towards a Post-Critical Philosophy. Chicago: University of Chicago Press.

Privileging the corporeal

It’s still happening

This train of thought was triggered by this headline:

Couples who meet on dating apps are doomed, science says

https://www.dazeddigital.com/life-culture/article/68440/1/couples-who-meet-on-dating-apps-are-doomed-science-says

Well, OK. That does not surprise me. But then the first line of the copy says: “A new study has found that people who meet their romantic partners online are less happy in love compared to those who meet in person.” Now that is a very different statement from the headline. What about all the people who meet online through mechanisms other than dating apps? What about all those who meet through community groups? Gaming? People you know mutually through social media? Or (my own experience here) social virtual worlds? Those are very different dynamics, and they aren’t mentioned in the article.

I posted about this on Bluesky (follow me @markchilds.bluesky.social) a month ago, and not only is it still annoying me (hence the blog) but the news item is also still appearing on my browser home page:

https://www.newscientist.com/article/2492159-couples-who-meet-online-may-have-lower-relationship-satisfaction/ for example.

And this one is an earlier study that says the same thing; the news item is from 2023: https://www.psychologytoday.com/gb/blog/dating-in-the-digital-age/202310/unpacking-the-online-dating-effect (links to the Sharabi and Dorrance Hall paper)

which also got picked up by the media, for example: https://www.theguardian.com/commentisfree/2023/nov/18/relationships-online-mates

Now you expect this from reactionary rags like The Guardian, but New Scientist?

The original article doesn’t link to the research (shameful!) but does mention the lead researcher (Marta Kowal) so I tracked down the paper, assuming that the conflation occurred through sloppy reporting. But no – it’s in the original paper!

The literature review is even-handed – they reference studies that find no difference, or even stronger relationships if begun online (due to enhanced disclosure online). However, they then note that online dating behaviour has changed since those studies – there’s more of a swipe right / left culture – which leads to a more transactional mindset and gaming the algorithms. Also, if you’ve scanned through a thousand opportunities and picked one, it’s going to make you wonder more about the 999 you didn’t pick, and whether perhaps one of those would have been a better choice, so increasing the chances of dissatisfaction with the person you did pick. Fair enough.

They also acknowledge in their limitations that “our binary categorization of meeting context—online versus offline—did not account for nuanced digital contexts.” Well, true. But this caveat does not appear in their conclusions or their paper title. A simple addition of the phrase “online dating apps” would have made the distinction clearer.

And this conflation occurs all the way through the paper. They’re obviously talking specifically about the mechanics of dating apps, but throughout they describe this as meeting online. For example, the Discussion section starts: “The present study aimed to better understand the increasingly common phenomenon of meeting romantic partners online.” No it doesn’t – it better understands the phenomenon of meeting romantic partners through online dating apps. Meeting offline involves the various serendipitous, low-stakes, casual connections that can occur through the traditional venues: family, friends, work, school and Oldenburg’s (1997) list of third places – which don’t just get you through the day (to borrow his subtitle) – though this last group has declined. It’s the (I’m assuming) low-stakes, random connections, where you’re not meeting with a potential partner in mind but just doing stuff, then suddenly after a while thinking “hang on, I like this person a LOT” and taking it from there, that make those relationships so great in the long term (no evidence about the general case, but that’s how it worked for me). Doing research like this means either comparing like with like – so all the serendipitous, third-space-type places online with the equivalent offline ones – or being very cautious about how the claims are framed. Particularly when the key distinction is something other than what you’re claiming. This isn’t offline v online, it’s serendipity v algorithms.

And there’s another reason why, if I’d been a reviewer of this paper, I would have rejected it. There’s no qualitative data. They surveyed a huge number of people, BUT DIDN’T TALK TO ANYONE. Essentially they have no real clue about what the data mean, because they haven’t checked their interpretations with any of the people they surveyed. Even my undergraduate students do a better job than this – they understand the complementary roles of quantitative and qualitative data in a mixed-methods approach (in an interpretivist study) and why both are necessary for a fuller picture.

But this leads to a wider question – the glee with which the “journalists” pounced on the findings and spread them abroad. There is still (despite lockdowns) a widespread mistrust of online interactions – a sense, for many people, that there is an inherent inauthenticity to them. What Carl Mitcham (1994, p.298) calls “ancient scepticism”. It’s a distrust of technology. I see it at work, where people say they prefer to teach in person because they can judge the engagement of their students better.

I’m here to tell you. No. You. Can’t. There is no evidence (unless I’ve been very bad at tracking it down) that perceptions of engagement actually correspond to actual engagement. All those nods and eye contacts DO NOT MEAN anyone is paying attention. In fact, students report that all that performance around paying attention distracts them from actually paying attention. Admittedly anecdotal, but no-one has anything else to go on.

Sure, I agree the apps are pretty dodgy, I would hate them, but the relationships you can build up through online communities, through gaming, through social virtual worlds, are real relationships, and it’s disingenuous to criticise one through the guise of reporting on something completely unrelated, just because it happens in the same place.

References

Kowal, M., Sorokowski, P., Bode, A., Misiak, M., Malecki, W. P., Sorokowska, A. and Roberts, S. C. (2025) ‘Meeting partners online is related to lower relationship satisfaction and love: Data from 50 countries’, Telematics and Informatics, 101, 102309. https://doi.org/10.1016/j.tele.2025.102309

Sharabi, L. L. and Dorrance Hall, E. (2024) ‘The online dating effect: Where a couple meets predicts the quality of their marriage’, Computers in Human Behavior, 150, 107973. https://doi.org/10.1016/j.chb.2023.107973

Mitcham, C. (1994) Thinking Through Technology: the Path Between Engineering & Philosophy. Chicago: University of Chicago Press.

Oldenburg, R. (1997) The Great Good Place: Cafes, coffee shops, community centers, beauty parlors, general stores, bars, hangouts and how they get you through the day. New York: Marlowe and Company.

Adolescence or senescence?

I can see we’re at the start of another moral panic about a technology – this time prompted by a TV show raising concerns about the impact of social media on toxic masculinity. This post actually isn’t about that TV show; it’s about every time we’ve been here before. Because people tend to forget, or they tell themselves this time is different. But just a reminder.

The ancient Greeks said the same about books; then when the printing press took over it was about books being widely available, then it was about books being in English not Latin, then it was about them being so cheap your wife or your servant might get hold of them.

We had the same moral panic about newspapers, bicycles, automobiles, films, television.

In the 1950s it was communists, then comics; in the sixties it was rock music. I’ve lived through moral panics about TV again in the 70s, and videos and video games in the 80s. Satanism was big in the 1990s, and heavy metal music specifically (as opposed to all music) came under the spotlight. Rob Halford stood trial over backmasking. Genesis P-Orridge fled the UK. Moral panics about pornography cycle round every so often, usually linked with a new medium. The one about virtual worlds in the 2000s made my PhD particularly difficult. The 2020s one about transgender people shows no signs of abating.

Each time we’ve looked back on the previous panic with bewilderment, or ridicule (people had to wave a red flag in front of their car!) or anger.

The people who led the campaigns, the McCarthys, the Werthams, the Eysencks, the Whitehouses, turned out to be delusional, or power-hungry, or deceitful, or authoritarian. Nearly always all of those. But also, they are looked back on not just as dictators but as leaders of cultural destruction. What Wertham took away when he eviscerated EC in the 1950s, the comics medium has never really recovered from. Whitehouse is still loathed by my generation. Our view of all of them is summed up neatly in the quote by Joe Rosenthal: “Those who seek to ban books are never on the right side of history.” What’s also true, looking back on the history of moral panics, is “Those who seek to ban are never on the right side of history.”

So … this isn’t about the current one. Not really. It might actually be something to worry about this time round. But at some point we need to stop succumbing to the innate human distrust of anything new (which is why I think these things gain currency so quickly) and proceed with caution and a level of scrutiny of the claims that wasn’t applied in any of the previous times we’ve been here. Because wolf has been cried many, many, many times before.

On Ludicity, Bullshit and Lorraine

This is a re-post of an earlier blog entry, https://markchilds.org/2020/10/03/on-liminality-bullshit-and-lorraine/, which I’ve just tried updating, but can’t save – at least now you’ve got the original for comparison, I guess?

Bullshit

Bullshit is defined in the literature as unevidenced claims (Mackenzie and Bhatt, 2020). I would like to extend this definition to cover anything miscategorised ontologically.

Broadly there are four ontological categories:

  • “Proven”
  • Unproven
  • “Disproven”
  • Unproveable

So “proven” claims are those with sufficient evidence to convince the majority of people who have viewed the evidence. The scare quotes are because nothing is ever completely proven to be true; the best we can say is that the statement is the one, of all the possible statements, that best explains the observable evidence. Examples are evolution, general relativity, the standard model, climate change, and so on.

Unproven are those claims which have insufficient evidence to convince the majority of people who have viewed the evidence, but for which there is some, or where there are competing explanations. Examples are string theory, …. These are contested, and often there are social, hierarchical, cultural reasons why some lead over others. For example, those published in English are likely to be forerunners over those published in other languages.

“Disproven” are those where the overwhelming evidence is that the claims are false. Vaccines cause autism, creationism, etc.

Unproveable are those categories of statements for which evidence cannot be acquired. God, unicorns, afterlife, etc. The claims are that these things exist despite there being no evidence. Absence of proof is not proof of absence, is the argument.

So I’d argue any statement properly attributed to the correct category isn’t bullshit, but if it is misattributed it is. So for example, “I believe in God and that belief sustains me through my bad times”, is not bullshit because it makes no untrue claims. “God loves you all”, is – because it’s claiming that God actually exists, and we have no evidence for His existence.

“The Earth is flat” is bullshit, as is “vaccines cause autism”. Those are both claiming disproven things are proven. But so is “science is just a matter of perspective”, as it’s stating a “proven” thing is unproven. Yes, you could overthrow the current paradigm, and people have, but you would need a wealth of evidence to outweigh the current best “proven” explanation, and move it to a different category through presenting that argument. To state that theories agreed across all cultural perspectives are just a male, white, Western perspective, when science is being used by all countries to determine truth from fiction, is bullshit.

An addendum – I’m talking here about the positivist end of the spectrum – astrophysics, biology, etc, the things based on measuring stuff (see a previous blog post). My own bias, as I go there when I think about science rather than the more interpretivist stuff like anthropology, psychology, education. With those there is a strong argument that there’s a western domination which influences the field – have a read of this https://www.nasw.org/article/science-writers-urged-tell-stories-include-indigenous-perspectives

Within the “proven” category we also have the distinction between positivist and interpretivist perspectives. Positivist observations are more powerful, and indicate stronger causal links. There is instrumental reality to back them up (although instruments can be wrong). But interpretivist data is also useful. To state that a model needs to predict behaviour absolutely in order to have value is bullshit, because a model that’s useful most of the time can still inform decisions. But to say that a measurable phenomenon is of no more value than a collection of qualitative data is also bullshit.

So yes, things move from category to category, but only over time, and only with evidence and reasoned argument.  There are blurry lines between the categories, and opinion might vary on which side some things legitimately belong. Bullshit only applies outside of these blurred lines.

These distinctions weren’t always so evident. It’s only with the Enlightenment in the 18th century, and the development of the scientific method, that humanity developed a mechanism to fully determine the difference between finding stuff out and making stuff up. And to state that science is the dominance of a western male perspective is bullshit. Anyone who wants to tell the difference between making stuff up and finding stuff out uses the scientific method. Indian scientists put stuff into space using science, not Western science. Science. The only difference between a Chinese scientist investigating copper nanotubes and an American one is the abbreviation they use. It’s always been this way. Current science is an amalgam of Islamic scholars, Greek philosophers, Chinese inventors. The first university was in Timbuktu. Reality is a humanity-wide endeavour.

Part of the problem is that there is a perceived difference in value between stuff made up and stuff found out. The Enlightenment has led to the perception that only things that are true have value. Hence, we’ve had epistemicide, where whole systems of making stuff up have disappeared. But just because something’s made up doesn’t make it useless. However, in order to compete with real stuff, everyone feels they have to claim that their worldview is real. Hence people claiming that God is real, I mean really real in a literal sense in the same way that I, and probably you, are.

Before the division into real and not-real, people felt comfortable with mixing ideas they would create alongside stuff they saw. So we’d have theologians arguing about how many angels dance on the head of a pin, or people dancing so the sun came up. They didn’t really believe that those things were really real in a literal sense. The distinction didn’t matter. There are a lot of cultures around the world that still haven’t adopted this hierarchy. The idea of qi informs the design of buildings, but no-one tries investigating copper nanotubes using those principles. It’s not really real in that sense. The Dreamtime doesn’t actually literally exist in the same way the world does; it’s a signifying mythical system that exists alongside the real world.

But since the Enlightenment, people who like the made-up stuff feel they have to place it on the same footing as stuff that’s found out, which means claiming that made-up things are really real too, and then using made-up stuff to make decisions about real things. So they redact science books because they contradict the made-up stuff about creationism, or they use a line in the Bible or the teachings of an Imam to decide which real people should be allowed to fuck whom.

It’s a misassignment of ontological categories.

It’s bullshit.

On the update

<I’ve gone through updating the original post because that version conflated the ideas of liminal and ludic spaces. I was aware initially that the ideas were different, but was convinced through conversations around 2015, 2016 that the ideas had become conflated; recently (2022) I’ve had a few more chats with people and have realised that actually, many people make a distinction. I’ve changed “liminality” to “ludicity” where that’s what I actually meant, and any extra text I’ve added (19/9/22) is in <> parentheses.>

On liminality and ludicity

The idea of liminality started with Victor Turner, an anthropologist whose work underpins much performance theory. Liminality derives from the word “limen”, the threshold – in the theatre, the edge of the stage. <Turner’s idea was that in the cross-over between off-stage and on-stage, there is a moment where there are no rules, no roles; everything is held in abeyance until we enter the roles assigned for us on the stage. And this also applies to any other transition; the commute between home and work is a liminal space. I guess there’s the highway code to follow, but apart from that we are set adrift from all the other pressures of social interaction. I wrote this while driving back from a conference down the M6. Ideas would flow, I held onto them until I could get to the next service station, I’d write them down, and once that set of ideas was transcribed I set off again. This was only really possible because of the liminality of the space I was in. With that extended period to just think, it was possible to organise everything in my mind.

However, once the play starts, this is no longer liminal. There are rules to follow, parts to play, but they are different from those of regular life. So the events of a play exist within a ludic space, alternatively called a magic circle by Huizinga, or a membrane by Castronova, or a fourth place (though that was just me in one book and it didn’t catch on).> Within that space we suspend our disbelief – an actor becomes the character, the backdrop becomes an actual landscape or drawing room. But the same is true of a film, or a book; it’s called the diegetic effect. We can sustain that level of engagement, while also knowing that it’s not real. The state of knowing something is real and not-real at the same time is known as metaxis, or double-consciousness. We know deep down it’s not real, but while we’re in the ludic space we suppress that knowledge in order to fully immerse ourselves.

While we’re engaged in the film, the real world doesn’t intrude, according to this view. We know that it’s not real, but while we watch it, that doesn’t matter. We know aliens aren’t real, but we’re still scared by them invading; we know that’s just an animated drawing of a deer, but we still cry when Bambi’s mother dies. We can take part in ludic spaces too. A game space is a ludic space. We know it’s only play money, but when we land on Mayfair with a hotel on it, we’re really pissed off. Virtual worlds are ludic spaces too. As are ritual spaces. Within them, roles are changed; identities can be changed; rules are changed. The made-up is made real. Monopoly, for example, extends a ludic space around the players. Within that space, the play money matters, and there are specific rules that govern behaviour. We all become capitalists.

Ludic spaces aren’t just defined by space; they are also bounded by time. A stage outside of a performance isn’t a ludic space. It’s just a normal space. It’s transformed into a ludic space by ritual elements, <passing through a liminal moment during the transition>. The surroundings help here. There’s an interesting paper by Pierpoint (Childs et al, 2014; 121-124) in which the surrounding elements of a proscenium theatre are described as part of this ritual. There is the design of the front of house, the direction to the seat, sitting down, reading the programme; all those build up to the moment when the orchestra plays and the curtain goes up. All these liminal experiences are signifiers of the moment when the ludic space is created. Performances where actors drift on stage and there is no real start feel odd, because this ritual commencement hasn’t taken place. Site-specific theatre is more challenging partly because this ritual is absent, <we miss the liminal moment,> so we don’t know when or to where the ludicity extends.

Ludic spaces can also be returned to and invoked repeatedly. By having multiple texts, a series of movies, or a TV show, a consistent repeated diegesis is created. This can also be extended outside of those texts by others, through, for instance, fanfiction, or conferences like those the Sherlock Holmes society runs, where the canon is engaged with as if it were real.

The Pedagodzilla podcasts are ludic spaces. The Godzilla roar, the music, Mike’s intro: all set up the ludicity of the space. It’s important because it signifies that within that 40 mins, making stuff up is legitimised. Mike sets out the rules: there is a genuine piece of pedagogical theory, a description of a piece of pop culture, and then we will apply the real stuff to the made-up stuff as if it were real. We are deliberately misattributing the ontological nature of, for example, Yoda as a supply teacher, because we know it’s inappropriate, and therefore fun. We know that he doesn’t exist, and wasn’t created in order to be analysed in that way. And we know the audience knows that. And we hope the audience knows that we know that. It would spoil the lusory nature of the ludic space for someone to criticise the argument with “but he’s not real.” That’s not the point. Made-up stuff is legitimate within the ludic space.

Ditto church services. The organ music, the singing, the sermon: all of those add to the ludicity. Gee would also describe the space as a semiotic social space; if you can read the signs around you, in the vestments, the props, the stained glass windows, it all adds to the experience of it as a ludic space. Within that ludic, time-bounded space, misattributing the ontological status of God is fine. You can say He’s real within that space, and share fellowship and agape and all that feelgood stuff, because the normal rules of engagement with reality are suspended. Made-up stuff is permissible.

And ludic spaces can exist within other ludic spaces. For example, later in the same chapter as the Pierpoint reference, Ian Upton (Childs et al, 2014; 127-130) talks about ritual spaces within Second Life. We adopt one set of rules on entering the virtual world, and then within the virtual world cross another magic circle where rules and identities are transformed again. Ian argues that the change between the non-performance SL space and the performance SL space is a greater one than that between RL and SL.

Where it breaks down a bit

This idea that ludic spaces are separate, discrete places cut off from normal space doesn’t always hold, however. The membrane around that magic circle is permeable. Anyone who’s had to placate a child who’s got upset by landing on Mayfair, or fallen out with someone because they lifted money from the bank, will know that what happens within the Monopoly game space does have an impact on the rest of the world. More positively, the ludic space can excite us, or sustain us, in the rest of our lives, by us looking forward to the next movie in a series, or building a fan community around those spaces, or having faith in a divine being.

It works the other way too. In novels and films, often the exterior world will intrude, to remind you it is only a book. In Vanity Fair, Thackeray interjects to remind the reader that he’s writing the novel. The sense of immersion is undermined, the diegetic effect broken.

And sometimes the membrane extends way beyond the ludic space. A football ground is a ludic space. There is the ritual of the singing, the buying of the pie, the Chinese dragon dancing between halves (I’m guessing, because I’ve only ever been to one football match in my life, which was West Brom vs China). The crowd shares in the made-up thing that it matters whether one set of the 11 people on the pitch get the ball in the net more than the other 11. That’s what the game is. That’s what all games are. They’re enjoyable because we’ve invented a set of criteria that matter, not because they do intrinsically, but for the sense of camaraderie, of communitas, that occurs when the criteria are met. One woman jumps higher than the other woman, one robot flips the other robot out of the ring. We all know deep down that they don’t matter, but it’s fun to believe that they do and to share that with other people.

But that ludicity is breached when it extends to people’s entire lives. The ludic space, bounded by space and time, can come to dominate life outside of the match. At some level there is the awareness that actually, it’s a manufactured reality, but that realisation is permanently suppressed. Your team loses and you will be depressed all week. It’s the same with religion. The statement that God is real isn’t left at the church door, but is taken out into the real world and acted upon as if it were true all the time. It’s self-evidently not true, but the ontological status is misattributed.

Let’s remind ourselves where the bullshit lies. “I know I can’t prove God exists, but I choose to believe he does, because that belief gives me comfort, and ties me to my community” – not bullshit. “God exists and He says you’re going to Hell” – bullshit.

There’s an extra level of complexity with ludicity, and that is where it’s intricately woven into the external world. This ludicity isn’t obviously tied to a space or a time, but it’s ludicity nonetheless. This is where we come to the Dalai Lama, Tony Stark and Lorraine Kelly.

The Dalai Lama, Iron Man and Lorraine Kelly

What these three have in common is that they are all fictional characters; they have identical ontological status.

The Dalai Lama is the latest incarnation of Avalokitesvara, a Bodhisattva. This is obviously made up, as there is no evidence for reincarnation or the existence of Bodhisattvas. He is performed by a real person named Lhamo Dhondup. If people believe in that sort of thing, then when they meet Dhondup they might believe they have met the Dalai Lama. Where the reality ends and the fantasy begins is difficult to say. Maybe when he gets home Dhondup takes off the assumed identity and just becomes a normal guy. Maybe he performs that identity 24/7. Similarly with Tony Stark. The character was created more recently, and we know by whom (Stan Lee, Larry Lieber, Jack Kirby and Don Heck), whereas the name of whoever made up the Bodhisattva stuff is lost in the mists of time, but ontologically they are just as real, or unreal, as each other. In his most recent incarnation Tony Stark is performed by Robert Downey Jr. However, that performance isn’t restricted to the ludic space of the MCU, as Downey Jr. (like Dhondup appearing as the Dalai Lama) goes to hospital wards to meet sick kids who (like Buddhists) really believe he’s Tony Stark. Downey Jr. doesn’t do that all the time, he has an out-of-ludic-space life, but he carries that ludicity around with him, able to generate it when it’s required. And that’s OK, because that ludicity legitimises the made-up-ness. The child in the ward isn’t meeting an actor, he’s meeting a superhero. For the moment Downey Jr. is there, the fantasy is real. Ditto Dhondup.

Lorraine Kelly is slightly more complex, in that the fictional Lorraine Kelly is performed by a real person also named Lorraine Kelly. This was actually a point of law, proven in court by the real Kelly: the fictional nature of “Lorraine” means that she’s a performer when she’s doing her presenting; she’s not being herself. When she meets fans at events, she’s also “Lorraine”, but where the real Kelly ends and the fictional Lorraine begins is a blurred edge to the ludicity.

In the world of professional wrestling this is known as kayfabe. Although professional wrestling resembles a sport, its roots are actually in the strongman sideshows of carnivals. Since the innovations of the Gold Dust Trio in 1920s New York, the matches have been “worked”, i.e. fictions created as performances. The ring is a ludic space (as are all sports spaces) but the ludicity extends beyond the ring, as the worked narratives are played out in public outside of the ring, extending the narrative into mainstream space. The wrestlers abuse each other in publications, carry on feuds in public spaces, and the wrestling news describes these stories as if they were real news. As internet culture has formed, the ludicity has extended to websites, but this also makes maintaining the work constantly more difficult, as fans may spot supposed enemies together in restaurants and the like.

This is still ludicity, but again the wrestlers carry that ludic space with them. In dressing rooms and the like, if a “mark” (i.e. someone not part of the work) is spotted, the wrestlers will call out “kayfabe” and switch on their characters, in the same way that Kelly, Downey Jr. and (presumably) Dhondup do, generating that ludicity around them.

And what? It gets more complicated?

This blurring of ludicity is also deliberately played with in professional wrestling, at a level of complexity rarely developed in other media. A wrestler might be really hurt, or go AWOL, or fall out with his coach. Or a “face” and a “heel” might fall in love, and so on. This is called a shoot (as with most carny traditions, there is a huge terminology describing the differences between the ludic and external spaces). A shoot is when the ludicity is unintentionally dropped and reality inevitably intrudes. This could happen with the other examples too. Anything could happen to cause Downey Jr., Kelly or Dhondup to slip out of their roles, with varying consequences.

Where professional wrestling is more complex, however, is that there are also worked shoots. What may seem to be a falling out, a narrative in which the ludic space has been broken, can actually turn out to be part of a larger narrative: it’s all part of the work. Fans are constantly kept uncertain as to what’s real and what isn’t. But they work it out, or adapt in retrospect if they haven’t. Professional wrestling fans’ realities are constantly being retconned, and it’s all part of the fun. We could learn a lot from them.

So what’s got fucked up?

Believing in things is fun. Make-believe is reassuring; it brings respite from the harsh realities of life, and particularly death. We can console ourselves that there is a heaven, or whatever it’s called, and that gets us through. It’s more exciting to meet the Dalai Lama, or Tony Stark, or Lorraine, than it is to meet Dhondup, Downey Jr. or Kelly. It’s tedious to constantly have to follow up a statement about God or Yoda with “I know he’s not real, but just for the sake of discussion let’s pretend he is.”

The problem is that people feel the Enlightenment has forced on us this hierarchy between finding stuff out and making stuff up. People feel that stuff has to be true in order for them to be justified in believing in it. And worse. Deep down, people know that claiming unprovable things are true is bullshit (once you know how to tell the difference, you can’t unlearn it), but that just means they end up defending it even more vociferously. You could argue that there are other ways of knowing, that evidence is not the only way to find things out, but that’s bullshit about bullshit. That level of self-deception is going to wear you out.

The effect of all this bullshit (and metabullshit) is that we get people attacking soap stars because of something their character did in last week’s episode, we get climate change denial, antivaxxers, Holocaust denial, homoeopathy, we get statements like “it’s Adam and Eve, not Adam and Steve”, we get people forgetting that ultimately it’s just a game, etc. etc.

And on the other hand, where many epistemologies collide with scientific rationalism, scientific rationalism wins (because it’s the only one that works) and we lose all these alternative worldviews in a global epistemicide.

The answer to this either/or state, between accepting and rejecting reality, is ludicity. You can have your cake and eat it. You don’t have to pretend stuff that’s made up is real in order to feel it’s legitimate to carry on believing it. Have your ludic spaces, but acknowledge that they are ludic spaces. You just need to be able to see the crossover point – the limen. Within the delineated liminal spaces, you can call anything you like true. Go to your Mumsnet group and complain about your food having chemicals in it, have your YouTube channels about the Earth being flat, have your services where you talk about all the wonderful things your God has done for you. But see the limen.

From all the examples above, we can see how flexible ludicity is: it can be delineated within specific spaces, it’s permeable, it can be spontaneously generated once it’s been established, it can follow people around. The boundaries can be played with. So feel free to apply ludicity when and where you like, to gain your emotional sustenance from it, but when you come back out into the real world, acknowledge that it’s just football, or religion, or a movie, and use real things for making decisions about the real stuff.

Recognise that every damn thing has chemicals in it and act accordingly, don’t go down conspiracy-theory rabbit-holes to prove the Earth is flat, and acknowledge that God is no justification for stopping your son from marrying his fiancé, because God is something someone made up at some point. Acknowledge your inbuilt bullshit detector and end the self-denial. Accept reality into your lives.

Go to your ludic space. Have your fun. Have your life-affirming moments. Share your beliefs with your fellow worshippers as if they were real things. But see the limen, as you transition back out into the world you share with the rest of us.

See the limen and we’ll all get along just fine.

References

Childs, M., Chafer, J., Pierpoint, S., Stelarc, Upton, I. and Wright, G. (2014) “Moving towards the alien ‘other’”, in Kuksa, I. and Childs, M. (eds) Making Sense of Space: The Design and Experience of Virtual Spaces as a Tool for Communication. Oxford, UK: Chandos, pp. 121-138.

MacKenzie, A. and Bhatt, I. (2020) “Lies, Bullshit and Fake News: Some Epistemological Concerns”, Postdigital Science and Education, 2, pp. 9–13. https://doi.org/10.1007/s42438-018-0025-4

Failing to get irony isn’t the flex you think it is

In The Republic, Plato has his old teacher, Socrates, engage in a series of conversations about how to create a utopian society. The people he’s conversing with (I hesitate to call them friends because tbh he comes across as _really_ annoying) offer ways to construct this society: for example, having officials elected from amongst Olympic athletes, as they’d have commitment, and sport is an objective measure of who is better at something, e.g. the fastest gets to the finish line first.

Ah, says Socrates, so you’re saying that only the fittest and healthiest should make decisions about ruling. To which they answer yes, as they have sounder minds. Ah, says Socrates, so you’re also then saying that the infirm have nothing to offer, to which they make another response, and so on, each exchange leading them step by step to a more untenable position by using the logical consequences of their own positions against them.

This, then, is Socratic irony: showing people the egregious nature of their positions, even when those positions might not appear egregious at first, while all the while appearing to accept them.

It’s the basis of a lot of humour from the past few thousand years.

Though, not great humour, as it’s pretty annoying.

And as a recent example we have Jimmy Carr. The set-up is that when we look at the Holocaust, we decry (quite rightly) the death of six million Jewish people. We don’t decry the death of a million Roma and Sinti people. Ah, says Jimmy, that’s because we’re OK with that. The audience laughs.

The laugh – the “joke” – is the shock of recognition that, by not including those deaths in our teaching of the Holocaust, the implication of what we’re saying is that those deaths are OK. Of course, it’s not. The response isn’t one of enjoyment; it’s not really meant to be funny. It’s that instead of the expected declaration of how wrong those deaths are, someone is espousing the logical consequence of a prevailing opinion (that the Holocaust was the death of six million, not seven, or 14) which actually runs counter to that outrage. We’re being caught out in a double standard. It’s being suddenly faced with the recognition that something is wrong here.

That’s how Socratic irony works. The ironist says “you haven’t thought this through, your position is untenable” by stating the untenable.

There are some valid arguments that this still isn’t a great way to convey an antiracist message, though.

One is that there’s the danger it could be taken literally, and that could end up being counter-productive. Never underestimate the range of things you think untenable that other people think are all too tenable. It’s not really conceivable that a comedian and a TV channel would condone that level of racism, but the level of endemic anti-Roma sentiment around is horrendously high, and people are understandably unnerved by it. It’s also possible for people to not actually understand Socratic irony. I’m sure some of the people taking those comments literally genuinely believe that because someone says something, that’s what they mean. There are language issues, literacy issues, the potential to take things out of context. All of which could lead someone to seriously think a racist message is actually being conveyed.

And secondly, the Holocaust. I mean, even if you can tell Socratic irony when you hear it, that’s still too horrendous a subject to include in a routine. I follow the Auschwitz Memorial Twitter feed and sometimes that’s overwhelming, seeing that inhumanity on a daily basis. Hourly. I don’t think I’d laugh for the rest of the evening if it got mentioned, for thinking about it – even though I get the point that Carr is making.

And also, I don’t really want to go to a comedy gig to have society’s double standards on racism addressed. I kind of like stuff about people’s own lives, and their own perspectives. I already get the fact that the Roma, and their suffering in the camps, are overlooked. It’s personal observations on life I get a kick out of hearing about; I don’t need to be woken up to people’s inhumanity to each other when I go out for the evening.

So – suitable subject matter? Not really. Racist? Only if taken literally, obviously, but I do suspect the motivations of the people who are taking it literally. What is going on there?