There are generally three responses individuals and groups may have to a social phenomenon such as the proliferation of generative artificial intelligence (AI) tools: accept and conform, ignore and continue, or reject and rebel. With colleagues at We and AI, the UK’s leading AI literacy advocacy organisation, Tania Duarte and I have been working on articulating different courses of action we might take when “rejecting and rebelling.” Our analysis will hopefully be published in the coming months, but what I’m here to say is that I personally and very deeply believe in the power of comedy as an act of rebellion.

Comedy as an Act of Rebellion
There are countless studies and essays on the role of the comic in subverting the powers that be — and that might be what art achieves more generally.
But in the context of AI, for me, humour comes as a breath of fresh air. In my experience, conversations about AI often take the form of very serious people making claims very confidently, either because they have something to sell or because they are in a formal setting. The seriousness with which AI is discussed is consistent with the exclusionary nature of AI discourse: either you’re in because you work in big tech, policy or science; or you’re out because you are led to believe you don’t know what you’re talking about.
To make these conversations more inclusive, I find that humour, whether it’s a funny remark or a glint of cheek in one’s eye, plays an important role.
An old bit of mine is to ask what “trustworthy AI” means for how I might or might not trust my toaster. I’ll publish that full ramble some day, but I did bring it up during a podcast in December (see 28:58 - 29:46):
The story of the toaster basically brings an apparently serious and very daunting topic down to Earth: we can understand what a toaster is, we can discuss why we might or might not say we “trust” our toasters, and we can have a laugh at the countless very serious people and organisations speaking about “trustworthy AI” along the way. (N.b.: I will only use humour to punch up! Punching down is just cruel and unnecessary.)
In the process of bringing complex discussions about AI into our day-to-day worlds through metaphor (the same podcast episode introduces a few more helpful metaphors), we unveil the wizard, so to speak: AI narratives become easy to understand and, as such, easy to critically evaluate and to challenge!
In another podcast episode, I made a quip about AI being used to automate the things that give life its meaning. Natalie Meyers picked up this bit in a LinkedIn post:
I believe they really missed the mark when they said: "We can automate art!" Like, that's one of the very few things we have left? (Laughter) Come on! We're working 9-5 [nearly] every day from age 18 to 65 if we're lucky, and now you want to take away our art? Why don't you just automate the Olympics while we're at it? And automate music? (Exasperation)
Now, I’m definitely not the only person in responsible AI (far from it) drawing on humour to make AI accessible. One very good example of this is the podcast Dr. Emily M. Bender and Dr. Alex Hanna host: Mystery AI Hype Theatre 3000. Consider listening in on this episode, where they just go through AI-hype-filled article after article — it’s humorous but… tiring when your AI hype radar is so fine-tuned.
And I think that’s another reason humour helps tell real stories about AI: the material is not only often very dull and technical; it’s also often very distressing to see just how much wrongdoing is perpetuated for the sake of lining the pockets of a few tech leaders. And the moment you go online to make the case that a lot of bad things are happening because of tech leaders’ motivations, surprise! People who have fallen for the AI propaganda are ready to fight you, bringing to the fore the very divisive nature of AI.
In this sense, humour lets us take the edge off the constant thrashing involved in deep thinking and meaningful work in AI.
And now, this
It’s exciting to see humorous takes on AI move from small circles in the responsible AI movement —whether on podcasts or social media— to mainstream media. In the latest episode of Last Week Tonight, John Oliver presented viewers with the concept of “AI slop” in his usual way. The episode ends beautifully, with a wonderful wooden carving that recreates a pretty random AI-generated video, turning the tables on AI developers: artists can reclaim the stolen art on which your tools are built. I recommend watching the episode if you can. If not, here’s a pre-ChatGPT take on text-to-image tools from the same show. Enjoy!