Why AI Is Moving So Fast
It isn’t, though
The degree to which we are bombarded by information about artificial intelligence (AI) is overwhelming. I believe this to be the main reason people claim AI is advancing extremely quickly.
Of course, people don’t actually justify the claim that AI advances quickly by saying “I’m just overwhelmed by all the news around AI.” Rather, the reasons generally underpinning claims about the speed of AI advancements relate to the many iterations of ChatGPT, big market moves (like Meta investing $15 billion into Scale AI, maybe), political leaders’ statements about how “hugely transformational” AI is, the latest claim about what AI will do within five years (this month, it’s “colonise the galaxy”), and so on and so forth. In other words, claims about the rampant speed of AI seem to be evidence-based.
The view that AI is improving at great speed is informed by the news: new AI tools, use cases, incidents and policies are brought to our attention almost every day. Even the language about AI, from “generative” to “frontier” to “agentic”, is in constant flux.
However, it is unclear how these many different facts support the claim that AI is moving fast. If we look at how quickly AI tools are being developed, we have to confront the simple fact that most of the exciting tools appearing in newspaper headlines, blog posts and government policies are based on a neural network architecture proposed in 2017 called the “transformer.”
The great speed of AI developments might then be defended on the basis of the billions invested in AI in recent years, but that just means that the people and organisations with the big bucks believe that AI can give them and their shareholders the best return.
And what about the daily problems around AI, from AI-powered disinformation campaigns to billions being invested in AI solutions that are really just a modern-day Mechanical Turk? Surely this means AI is advancing so quickly that social institutions, including democracy, business and law, are ill-equipped to handle these emerging technologies!
Whilst there is some truth to the claim that AI falls through the cracks of certain laws and social practices, this is not evidence of any sort of “speed.” On the one hand, the law changes all the time to respond to or preempt social change. For example, large language models challenge copyright law in the US: their use of copyrighted materials may or may not be “transformative” and, therefore, may or may not qualify as “fair use.” On the other hand, the seemingly sudden release of AI into the wild, which bubbled up in 2021 with DALL-E and burst into the mainstream in late 2022 with ChatGPT, has meant that organisations have found themselves having to adapt quickly, and that quick adaptation has resulted in many poor business decisions.
It is in this last sense that there is some form of rampant speed in the world of AI: people and organisations feel that they need to respond quickly to the bombardment of information pertaining to AI. Who is pushing those stories, and why? I’ll leave that for another time, but they go by the name of ARSEs.