Artificial intelligence (AI) is deeply divisive, and conversations surrounding the topic can easily become as heated as a political debate with that one uncle you only see during the holiday season.
Is AI sentient? What’s sentience? Is AI taking people’s jobs? Isn’t it actually people deciding to use AI instead of hiring staff? Why are you so scared of AI taking over the world? Why do you think it will bring a new age of joy and abundance? Sam Altman says it’s magic. If Sam Altman told you to jump off a bridge…
I digress.
But the divisive nature of the concept of “AI” is, as they say, a feature and not a bug. “Intelligence” is a philosophically rich and deeply contested term. What is “intelligence”? Is it something we only find in humans? Or can non-human animals be intelligent? Where do we draw the line to identify something or someone as “intelligent”? Is everybody in an “intelligent” species deemed equally “intelligent”? Is there “intelligence” in outer space? Can and should we measure “intelligence,” and, within human societies, what role would such a measurement play? What meaningful phenomenon does “intelligence” track?
In turn, “artificial” refers to the fact that the “intelligence” described by “AI” is human-made: “AI” is about artefacts we create. But this doesn’t help much either. We don’t usually flag the things we make as “artificial”: a hammer’s a hammer, a candle’s a candle, a frisbee’s a frisbee, and so on. The word “artificial” in “AI” simply causes confusion by making salient a trivial feature of everything people make: of course it is artificial. And since, in this instance, we are not actually making “intelligence,” the term “AI” is bogus in its entirety.
A Super Incomplete History of AI
It may be helpful to go back to the origins and evolution of “AI” to understand why it is such a divisive term today. I will keep this brief.
The term “artificial intelligence” was coined by John McCarthy for the 1956 “Dartmouth Summer Research Project on Artificial Intelligence,” partly to avoid “having either to accept Norbert Wiener [a pioneer in cybernetics] as a guru or having to argue with him” (McCarthy, 2000).
In 1955, McCarthy’s colleague Claude Shannon had deemed “artificial intelligence” too flashy a term, an anecdote I will draw on in a moment.
For the better part of half a century, “AI” was about hard-coding rules and knowledge into machines. We now call this “symbolic AI” (or “expert systems,” in its applied form): decision rules and the relationships between concepts are codified directly into a program by human hands.
“Machine learning” (ML) grew as a modelling practice from the 1960s onwards and eventually displaced symbolic AI as the preferred approach to emulating human decision-making: rather than hand-coding rules, ML applies algorithms to huge amounts of data, so the rules are in effect inferred from examples.
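To make that contrast concrete, here is a minimal, purely illustrative Python sketch; the loan-approval task, the function names and every number in it are hypothetical, not drawn from any real system:

```python
# A purely illustrative sketch: the same toy task (loan approval,
# with made-up names and numbers) done in the symbolic style and
# in the machine-learning style.

# Symbolic AI / expert-system style: a human writes the rule.
def approve_loan_symbolic(income: float, debt: float) -> bool:
    # The knowledge lives in the code itself, authored by an expert.
    return income > 50_000 and debt / income < 0.4

# ML style: the rule (here, a single income threshold) is estimated
# from labelled past examples rather than written by hand.
def fit_threshold(examples: list[tuple[float, bool]]) -> float:
    approved = [income for income, ok in examples if ok]
    rejected = [income for income, ok in examples if not ok]
    # Place the decision boundary midway between the two class means.
    return (sum(approved) / len(approved) + sum(rejected) / len(rejected)) / 2

past_decisions = [(30_000, False), (45_000, False), (60_000, True), (90_000, True)]
threshold = fit_threshold(past_decisions)  # 56_250.0 for this toy data

def approve_loan_learned(income: float) -> bool:
    return income > threshold

print(approve_loan_symbolic(70_000, 10_000))  # True: the hand-written rule fires
print(approve_loan_learned(70_000))           # True: the learned threshold fires
```

The difference is not the task but where the decision rule comes from: an expert’s head in the first case, labelled data in the second.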
The Melting Pot of AI
Historically, “AI” refers to many different things: a flashy term to get a conference funded whilst avoiding conflicts with difficult colleagues; a method for codifying human decision-making processes; engineering-style problem-solving; scientific practices involving experimentation; qualitative data-gathering; quantitative data analysis; statistics; mathematics; academia; government; and industry.
With such a complicated history, “AI” is naturally divisive; and we haven’t even touched on narratives about imbuing life into artefacts or seeking the divine through mathematical and computational methods. “AI” isn’t even flashy enough anymore. The common division between “narrow” and “general” AI has inspired companies such as OpenAI, Microsoft, Anthropic, Google and Amazon to market and strive for the creation of “Artificial General Intelligence” (“AGI,” I assume because someone was uncomfortable with the acronym “GAI”). As an aside, the most transparent definition of AGI comes from OpenAI and Microsoft:
AGI will be achieved once OpenAI has developed an AI system that can generate at least $100 billion in profits (Maxwell, 2024).
And other terms have kept emerging in recent years: generative AI, frontier AI, foundation models, agentic AI…
A Fork in the Road
So, here we are, fighting at the dinner table with that one uncle we only see once a year. In 2020, he became a self-taught epidemiologist, but he then transitioned into an AI expert in early 2023, when the “AI revolution” apparently became inevitable. Never mind millennia of research and thinking about intelligence, knowledge, science, technology, humanity… Never mind that algorithms have for years influenced what we see on social media, helped us navigate with digital maps, edited our photos and run our spell-checks (not always as we’d wanted). Never mind the dangerous ideologies that those leading the AI industry draw on, from transhumanism and effective altruism to accelerationism and neo-reactionism.
And this roughly describes how promoting responsible AI feels today: like a fight, which is precisely what we don’t want, what I don’t want.
I don’t want to debate anybody about whether an AI chatbot is or is not sentient; I don’t want to argue about whether an AI tool is the perfect solution for any one thing; I don’t want to fight about AI.
I want productive conversations that are founded on facts: about the history of AI; the people whose decisions constitute that history; the scientific and technological underpinnings of AI; and the interests of those funding, building and selling AI tools and research. The result of such conversations should be both a deeper understanding of AI and the ability to form nuanced, evidence-based views about it.
In my work, I help groups of people, whether they are work colleagues, research participants or members of the general public, form nuanced understandings of AI together. Responsible AI promotes the nuance that big tech companies don’t want the public to have.