It had to happen, but no one quite expected how quickly the clarion call would come to pause the development of large-scale AI systems, or from where it would come.
Elon Musk, Steve Wozniak and more than 1,200 tech leaders have signed an open letter calling for a temporary pause on the development of AI models more powerful than GPT-4.
Many support their aims, some have scoffed at the hypocrisy, while others find it risible to imagine any government debating and ratifying legislation at anything like the pace at which AI has exploded.
Within the last six months, we’ve watched ChatGPT go from writing a poem or a whole thesis to coding entire websites, inciting hate speech (with enough prompting) and threatening to make 300 million jobs almost redundant overnight.
The holy grail has always been a product that takes the most manual, routine or onerous human tasks and automates them.
However, AI’s ability to produce entirely plausible and convincing text riddled with erroneous statements and outright falsehoods can easily be exploited by states or bad actors for malicious ends.
The concern is clear: technology should never surpass humanity; machines should never surpass humans.
TECH TURNS ANTI-TECH
No one quite expected where the first salvo would be fired from, but perhaps it’s not surprising that Elon Musk would be the one to lead it.
With the Twitter debacle rumbling in the background, and as a self-proclaimed tech visionary, the optics of this aren’t necessarily bad for him.
The open letter, published by the nonprofit Future of Life Institute, says AI labs are locked in an “out of control race” to develop and launch machine learning models that no one “can understand, predict, or reliably control.”
There’s no doubt the genie is out of the bottle, the horse has bolted, the stable door is almost off its hinges and the cat is most definitely out of the bag. At the same time, we’re well out of cliches to describe the whole situation.
It’s not just Musk who has signed the letter, but a whole host of other tech CEOs, researchers, developers, government ministers and, naturally, a few pranksters. OpenAI CEO Sam Altman has apparently signed the letter too, but we’re not so sure about that.
As we’ve written about before, AI models like ChatGPT have changed the game of content creation, but it doesn’t stop there. Like an endless rope thrown over a chasm of infinity, the ramifications of large language models (LLMs) remain unclear.
Perhaps ironically, the same coterie of tech CEOs who rub shoulders with the letter’s signatories have been desperately trying either to contain the competition or to build their own AI models with their usual “move fast and break things” zeal, and with little regard for such niceties as “ethics” or “safety”.
With this latest letter, and opposition steadily mounting over the significance (and potentially uncontrollable effects) of AI models, governments and legislators will have to move a lot faster to contain something that’s beyond their wildest imaginings.