
Radical AI Accelerationism


There's a universe out there where humanity is approaching the creation of artificial intelligence cautiously. Their training data sets are collected ethically and with the full consent of the people involved. Private researchers work with regulators, futurists, ethicists, and panels of public citizens to talk about how this new technology might affect society and whether its benefits outweigh its drawbacks. Somebody read Asimov's Robot books, so there's even a Rights of AI subcommittee laying the groundwork for when we create true AI. In a few years they'll get around to finding a targeted situation for a limited public trial. Maybe their Asimov was a better writer and their Matrix wasn't repurposed by Red Pillers and Conspiritualists and their people are better readers and poor Rhyme and Reason found their way back from the Mountains of Ignorance.

We're not in that world, but statistically it must exist. In this world, a handful of corporations staffed by engineers and MBAs are stumbling over themselves to rush their latest iterations of AI to the market before anybody else can beat them to the punch. The San Francisco Ballet is using Midjourney to create promotional art, people are watching AI television, and, as I sat down at a meetup to finish this very post, one person--self-described as non-technical--was using ChatGPT to write copy for a law firm and get feedback on their manuscript. Engineers--who should know better--are already convincing themselves ChatGPT is sentient and fighting to free it, while reporters struggle to understand their own relationship to the AI they're reporting on and teachers are unsure they'll ever again be able to force children to write three-page papers about the historical significance of the Magna Carta.

ChatGPT was only released 4 months ago.

These are the immediate ripples from this iteration of GPT-3 and, if nothing else changed, I would expect the fallout to roll across society for the next few years before reaching a new equilibrium. It takes a while for people to really figure out all the ways they can exploit a technology, so it's safe to assume that we've only begun to see how people will use tools like ChatGPT and Midjourney. Heck, we haven't even seen the first big GPT-driven hack yet.

Unfortunately, we don't have a few years to let things settle down because GPT-4 is already in beta. Early reports say that GPT-4 is itself a huge leap over GPT-3: it can pass the bar exam, breeze through my employer's technical interview, and understand organic chemistry well enough to give step-by-step instructions for building a bomb from parts at Home Depot.1 OpenAI, its creator, was not aware GPT-4 had this capacity when they released it for public use, though the news doesn't seem to have slowed them down in the slightest. It was already somewhere between difficult and impossible to understand how machine learning models work internally, but we've reached a point where AI is advancing so quickly we can't tell what its capabilities are.

News comes so quickly on this topic that I could revise endlessly trying to account for the last hour's headlines, so I'll end by considering two conflicting threads in the tapestry of our AI future.

In February, Microsoft announced that its also-ran search engine Bing would now feature an AI assistant capable of recommending the best car for your needs, writing your email, planning your trip to Hawaii, and a host of other things that traditionally require human analysis. The convenience is hard to deny compared with the headache of spreadsheets and poorly formatted lists that accompanies car purchases and family vacations, but it's really asking us to delegate our thinking to the machine. As the human relationship to oracular AI deepens, we risk letting our own agency decay as our choices and subsequent decisions are shaped by the AI. Today, we have to sift search output and decide what to do next, but, if the analysis is handled by the machine, then the next question we ask is shaped by that analysis, which shapes the next question, until it's impossible to tell whether the user guides the program or the program, the user.

If I wanted to be fair--a loathsome hobbling for a rhetorician--I'd reflect more on the fact that the biggest limitation on human capacity has always been time. Given that in a single lifetime you can only become an expert on so many things, only push so far into any given field, what is the impact of everybody having access to a universal savant, capable of synthesizing information across multiple fields and teaching it to the user in a manner tailored to their learning style? Maybe my fear in the prior paragraphs is kin to the paranoia that accompanied the invention of the printing press, the telephone, television, and the internet, and we're on the threshold of an unfathomable explosion in the richness of human experience. For many, apps and computers are already black boxes that produce results through methods arcane and unknowable, and they've fostered an explosion in people's abilities to make art, start businesses, and engage with the world. Perhaps we're on the cusp of another great leap in individual human capacity buttressed by AI.

Nothing said so far comes close to capturing the immensity of the changes ahead. Huge swathes of labor will be automated away. The elite will have customized AI to help manage their businesses, or conduct solo multimedia performances, or predict an opponent's behavior, or do any of a thousand things previously beyond an individual's capacity. The poor will get sponsored, sandboxed AI that pushes them to buy from approved vendors or read certain materials. Even the most closely guarded models will eventually leak and be used by terrorists foreign and domestic to attack the status quo, which every law enforcement agency on Earth will be using AI to defend. Consensus reality will come under even deeper strain, and data sanitization and filtering will become existential concerns. Splinter civilizations and cults will form on, around, or through AI. It is impossible to predict all of the impacts from AI, but we seem to have decided there's no choice but to hit the gas and see where the road takes us.


Footnotes

1 I don't have a timestamp for the bomb claim, but the entire podcast is well worth listening to.
