What’s it all for?

I was talking to a friend last week and he summarised my approach to life as “it doesn’t have any meaning, but that’s ok”. Which is spot on. I do actually envy people who can convince themselves there is ultimately a point – that there’s fate or a god, or something watching over them – who can turn off the rational part of their mind like that at will. That’s not supposed to sound snide or anything; it’s a skill, and there’s a demonstrable link between being able to do that and positive mental health.

That sort of thing is easier in crowds – the “collusion in delusion” – which explains the popularity of church, or sporting events, or cinemas. And of course most of us can do it for specific periods and locations; it’s the theoretical basis for much of playful learning – Huizinga’s magic circle. There’s contentment in those moments, and spaces that can help us reach that point are worth seeking out. A prime example for me was Seonbawi rock in Seoul.

There was something about the peacefulness, the immense solidity of those two rocks, the tolling of a bell at sunset from a nearby temple – everything just felt … OK. Only one person visited during the hour or so I was there, and it’s a touchpoint I can call up. I’ve not found the equivalent in the NE of the UK, except, I guess, looking down the valley my home sits at the end of. Sheep, cows, rabbits, various birds – they connect me (well, anyone) to those metanarratives Serres discusses in The Natural Contract.

Of course, there’s not actually any pattern – thinking you can see one is a warning of incipient apophenia. Something to be indulged in briefly, but it can tip from rabbit-hole to tar-pit if you’re not watchful. Don’t believe in yourself, don’t deceive with belief. All that quicksand stuff.

But when you’re enacting practice, teaching, researching, doing your job, is it necessary to think that ultimately there’s a point, to motivate yourself to keep going? I was reading Lyotard last week – The Inhuman, specifically “Can Thought Go on without a Body?” – and in that he discusses post-solar humanity (I’m studying post-humanism and trans-humanism): the ultimate fate of humanity is either destruction when the sun dies, or escaping that destruction by becoming something non-human. Lyotard’s point is to show the fundamental error in unlimited technological progress – either it’s not possible, because the sun will undergo a helium flash in 4.5 billion years, or it’s undesirable, because the only logical end point is for us to not be human any more.

To which I’d answer “generation starships”. Or “pantropy”. Or any of the known SF solutions. I don’t read Lyotard’s question as a hypothetical – I mean, what are we going to do? I’m reminded of a line from a Woody Allen routine where a woman turns him down with “not even if it would help the space programme”. Is all our endeavour actually reducible to this one goal? It could work for me – understanding virtual embodiment, how humanity is reflected in our avatars, how an extended body works via telepresence; all of that could help us survive the ultimate fate of the solar system. How would what you do help anything long term? Except …

we’re just postponing the inevitable. The heat death of the Universe. There is no long term solution.

Maybe just getting a few extra billion years on humanity’s clock is point enough? But possibly it all seems a bit abstract for day-to-day life. I was chatting with another friend over the weekend and her answer was to have as much fun as possible without causing harm to anyone.

Not sure how that justifies me doing what I do. I suppose a lot of it is fun, and when it’s not fun I justify it in terms of earning me enough of a living to spend money on things that are fun. I’m sure there’s an integral equation for that, so you could work out how to maximise fun over time (there’s a daft sketch of one at the end of this section). But that, as a philosophy, has actually been captured succinctly by The Wyld Stallyns.

Be Excellent to Each Other

Party On Dudes

Is that actually ultimately the point?
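As for that fun-maximising integral, here’s a minimal, entirely unserious sketch – f(t) as an instantaneous “fun rate”, h(t) as a “harm rate” (my friend’s no-harm condition) and T as however long you’ve got are all my own inventions, not anything Lyotard or Wyld Stallyns ever wrote down:

\[
\max \int_{0}^{T} f(t)\,\mathrm{d}t \qquad \text{subject to } h(t) = 0 \ \text{for all } t \in [0, T]
\]

The hard part, obviously, is that nobody can actually write down f.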

Letting the GenAI out of the bottle

I’ve had a couple of interesting conversations at work recently about the use of AI in education – prompted largely by sharing this poem

https://poets.org/poem/student-who-used-ai-write-paper

which asks the question “I know your days are precious on this earth. But what are you trying to be free of? The living? The miraculous task of it?”

It’s a good question, and one worth raising with students, because it reframes the whole relationship between teacher, student, assessment and study. We’re not (or we shouldn’t be) trying to persuade students not to use AI because we don’t want them cheating, or because there’s a standard we want them to attain under some artificial constraints just to make assessment more challenging (which we shouldn’t do anyway). It should be because there are skills we think they should acquire – skills that will develop them and their interaction with the world, and let them feel the pleasure of enacting their abilities well.

AI has its place – in the words of someone I was talking to at a conference recently, it’s good for doing the boring stuff we already know how to do. There’s also the possibility you could get by through getting AI to do the work, but to progress past a certain level you need the skills whose acquisition (if you’ve used AI) you’ve bypassed. For example, you could get AI to write an essay that synthesises different writers, but to create something novel you need to make associations that aren’t really obvious, and to do that you have to be able to summarise papers, follow citations, pull out key thoughts and abstract them.

Also, to stick with it, you’ve got to find where the fun is in it. In the degree I’m doing at the moment, I’m enjoying doing the assignments, because I’m finding my own take. For example, my essay on Leibniz I developed by relating each of the aspects of his philosophy to different cake metaphors. Because I like cake but I can’t eat it, basically.

Though having fun with something is really only possible when you’re not overly concerned with the mark you’re going to get, and that, as I said in a meeting last week, “is only possible when you’ve reached an age where … err … you’re confident enough that you don’t feel the need to prove yourself further”, to which my colleague responded “you mean run out of fucks to give”, which is exactly what I was going to say before I self-censored. 😀

The issue is that students are just scared: scared by the amount of assessment they have to do, scared by the amount of competition (some people still do normative grading – which is inexcusable) and scared of screwing up. Sitting back and smelling the roses – or the pleasure in just learning – is rarely possible.

What we can do is at least make their engagement with AI authentic. People who insist on written testing simply so that they can be sure it’s the student’s own work need to think again. If AI can do the thing we’re testing them on, and will do it better, then – and I’m going to put this in capitals so that it stands out, because it’s key –

WHY THE HELL ARE WE STILL TEACHING THEM TO DO IT?

If this is a skill AI can perform perfectly, then it’s not something worthy of a human doing. So maybe this will rule out a huge chunk of a maths syllabus, for example, or of coding. Well, fair enough. Rethink your syllabus from the ground up. Maybe it’ll make it easier; well, deal with it – you’re now teaching an easy subject, and all the people who can’t do the tricky things will take yours as the easy option. But putting in artificial barriers simply to make the assessment harder (like in-person testing) is missing the point of what education is for (the subject of my next post). Find a way of assessing that actually challenges the student on something that has some value, like groupwork, or have an assessment that checks in on them frequently so you can observe their process.

Avoiding coming up with authentic assessments – ones which test the non-AI skills – is simply failing the students, yourselves, and the education system. In fact, that’s where the cheating is, not in the students using the AI.