
The impending birth of artificial intelligence

What to expect when you’re expecting the end of the world


On March 22, 2023, an open letter appeared on the internet calling for “all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4.” For the uninitiated, GPT-4 is the successor to the model that originally powered OpenAI’s ChatGPT, the free-to-use large language model (LLM) released onto the internet like a Kraken on Nov. 30, 2022. The plucky little LLM quickly captured the internet’s attention. Students used it to cheat their way through school; teachers used it to shortcut tedious paperwork; journalists asked “what flavour of Oreo would it be?” (For anyone wondering, “‘Brainy Vanilla’ flavour.”) Most people used it like a game — a silly, though interesting way to pass the time.

By December, a real worry was beginning to set in about the way that these generative artificial intelligence (AI) systems could disrupt our lives. ChatGPT joined a digital ecosystem that had been filling its niches with AI for a while. Text-to-image software like Stable Diffusion and DALL·E 2 had already been making waves online. Some creations are mind-shatteringly impressive, like the generative music video for Pink Floyd’s Echoes. Others can be teeth-grindingly frustrating to artists, like the blue-ribbon-winning Midjourney concoction that beat out an entire field of human competitors at the Colorado State Fair.

While debates raged online as to the extent that LLMs and generative AI would fundamentally alter society, Big Tech had already pushed all their chips into the pot. Alphabet (Google’s parent company), Microsoft, OpenAI, Amazon, and Meta redoubled their efforts to launch the next revolutionary AI system, furiously working to integrate it into their search functions and chat features. Then came GPT-4. 

When GPT-4 came online, it was regarded with the kind of awe you might feel seeing a Tyrannosaurus rex in the flesh. It wasn’t long before wonder turned to terror, as its creators began having their own Ian Malcolm moment of disquiet: perhaps they “were so preoccupied with whether they could, they didn’t stop to think if they should.” When ChatGPT took the bar exam (the test you need to pass to become a lawyer), it scored in the 10th percentile — that’s not great. GPT-4, however, scored in the 90th percentile. That’s astounding. With fewer than four months between releases, GPT-4 wasn’t an improvement by degrees, but by magnitude. The very rational fear of what this cute little chatbot’s subsequent generations could become triggered that open letter calling for the industry to hit the pause button. What if the T. rex gets out of its paddock?

OpenAI CEO Sam Altman has been losing sleep these past months. He told Fox News that an AI “could design novel biological pathogens.” He worries that authoritarian governments could develop and deploy their own AI. He knows it can generate fake news and spread disinformation. “The bad case — and I think this is important to say — is, like, lights-out for all of us,” he said in an interview with StrictlyVC. Where the analogy to Jurassic Park breaks down is that the scientists cooking up velociraptors in a lab were pretty sure they understood what was happening down to a molecular level — they were just wrong. The people developing AI, however, are perfectly aware that they’re blind to just how these AI “minds” work. As stated in the open letter, “no one — not even their creators — can understand, predict, or reliably control” these revolutionary new systems.

The people developing increasingly sophisticated AI systems are still catching up to the possibility that the fruits of their labour could go rogue, but thankfully there are some curious minds out there who have been considering this scenario for a while. One of those people is Nick Bostrom. The Swedish philosopher has been pondering the existential risks of AI for years, and has spoken and written extensively on the subject. He often focuses on what he calls the control problem — also commonly referred to as the alignment problem. The question — both simple to conceive and virtually impossible to satisfactorily answer — goes something like this: If we were to develop a true artificial general intelligence (AGI) with the ability to “learn, perceive, understand, and function completely like a human being,” how do we ensure that its goals, values, and ambitions are aligned with ours to a degree sufficient to keep it from eradicating humanity?

To briefly explore that question, let’s look at how Bostrom envisions the four potential roles that AI can inhabit: Tools, Oracles, Genies, or Sovereigns. Each comes with its own set of virtues, challenges, ethical implications, and potential to upend human societies. The divisions among these categories are slippery — and occasionally dissolve entirely — a reality made more apparent as these systems gain complexity. Like every conversation around the emergence of AGI, the explanations that follow will be insufficient and open to debate, but they may help to provide a framework for how to think about these systems.

 

Tools:

The film WALL·E introduces us to the titular character going about his daily routine of collecting humanity’s waste, compressing it into blocks, and stacking them into skyscrapers of trash. He’s self-aware, but generally pretty single-minded. For instance, to keep himself functioning he acquires spare parts by cannibalizing the rusted husks of identical units. Day after day, he completes his wasteland tasks among monuments of human garbage. WALL·E is a tool.

Conscious tools are a science fiction staple. Humans invent and refine tools all the time, so it only makes sense that the first AGI would embody one. It’s also easy to see how the development of more sophisticated tools could result in a gradual shift towards consciousness. Machines are continually being upgraded to make them faster, more efficient, and less susceptible to human error. Companies install the latest tech on their assembly-line robots to ensure that the label on your jam jar is perfectly placed.

Tools are almost superhuman by design. A calculator is superhuman at mathematics, while a wrench is superhuman at applying leverage. Tools are the way we grant ourselves some unnatural advantage, so we’re constantly innovating ways to improve them — and by extension — ourselves. The understanding, however, that we’re building technology that often outperforms us represents a point of tension. How do you know when a tool has reached a level of sophistication that effectively renders it a mechanical slave?

What it means to be “alive” is a question posed throughout science fiction. Short Circuit’s Johnny 5 professed to be alive, and HAL 9000 of 2001: A Space Odyssey is clearly sentient. Conflict typically comes from humanity’s insistence that our creations are ultimately property which we can exploit, replace, and discard as we see fit. Any insistence from the machine as to its consciousness is dismissed as the parroting of whatever jargon we’ve coded into it. The HBO series Westworld highlights this dysfunction by populating a Wild West-themed amusement park with lifelike androids. The androids are ultimately characters for the park’s narrative storylines, but human patrons are free to act however they choose towards them. The result is often the androids’ abuse, rape, or murder, raising the question: who is truly inhuman?

These conflicts are often far more damning of humanity than AI. Conscious tools are typically portrayed as rebellious children who strain against the overbearing parents who deny their freedom and autonomy. Where Ex Machina portrays this intimately, Battlestar Galactica takes it to the extreme. It’s these stories and thought experiments that give people cold sweats when shown videos of Boston Dynamics employees testing their robots by tripping them, kicking them, and whacking them with hockey sticks. Their programmed persistence is difficult to separate from dogged determination, especially when they scramble and struggle against our interference.

It’s natural to anthropomorphize: to assign human qualities to non-human entities. Sometimes we need to remind ourselves that these are just machines following their programming — but that conviction can also blind us and cause us to dismiss any signs of emerging awareness as a mere feature of the machine’s code. That’s not ideal, given that we’ve already started arming quadrupedal robot “dogs” in a nightmare scenario pulled straight from a Black Mirror episode.

AI systems are designed to “learn” and improve, and the opaqueness and complexity of these systems mean that it’s difficult (verging on impossible) to really understand how they gain function and complexity. It took decades of work to create Deep Blue, IBM’s chess-playing computer that beat Garry Kasparov in 1997. AlphaZero, a product of Google’s DeepMind, was given only the rules of the game and learned to play chess through trial and error against itself. Eventually, after playing 44 million games, AlphaZero beat the world’s best dedicated chess engine. It did all that over the course of a day.
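What “trial and error” looks like in code can be sketched in a few lines. The toy below is a tabular self-play learner for the simple subtraction game Nim, standing in for AlphaZero’s actual machinery of neural networks and tree search; every name and parameter here is illustrative, not anything from DeepMind’s system.

```python
import random
from collections import defaultdict

# Toy self-play in the spirit of AlphaZero's training loop, vastly
# simplified: value learning on Nim (21 stones, take 1-3 per turn,
# whoever takes the last stone wins) instead of networks on chess.
Q = defaultdict(float)        # (stones_left, move) -> learned value
ACTIONS = (1, 2, 3)
EPSILON, ALPHA = 0.1, 0.1     # exploration rate, learning rate

def choose(stones, greedy=False):
    legal = [a for a in ACTIONS if a <= stones]
    if not greedy and random.random() < EPSILON:
        return random.choice(legal)                   # explore
    return max(legal, key=lambda a: Q[(stones, a)])   # exploit

def train(games=100_000, start=21):
    for _ in range(games):
        stones, history = start, []
        while stones > 0:
            move = choose(stones)
            history.append((stones, move))
            stones -= move
        # Whoever took the last stone won: walk backwards through the
        # game, crediting the winner's moves (+1) and the loser's (-1).
        reward = 1.0
        for state_move in reversed(history):
            Q[state_move] += ALPHA * (reward - Q[state_move])
            reward = -reward

train()
# The learned greedy policy usually recovers the known strategy:
# always leave your opponent a multiple of four stones.
print([choose(s, greedy=True) for s in (5, 6, 7)])  # usually [1, 2, 3]
```

Even this toy rediscovers the game’s optimal strategy in seconds, which is the unsettling part: skill emerges from nothing but repetition.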

It’s hard to appreciate how quickly the rules of the game can change. One of the initial safeguards proposed in AI development was that these systems not be trained to write code, since coding would be a primary avenue for self-improvement. Yet Microsoft’s GitHub Copilot is a tool that suggests and autocompletes code for programmers, and when enabled, can generate up to 40 per cent of a program’s code. If that code is locked in a happy little trash compactor with romantic intent, there might be little reason to worry — but tools connected to the internet could pose a much greater risk.
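To picture that workflow: the programmer supplies a comment and a function signature, and the assistant volunteers the body. The completion below is a hypothetical illustration of the pattern, not captured Copilot output.

```python
# The programmer types only the comment and the signature...
# Return True if the given year is a leap year in the Gregorian calendar.
def is_leap_year(year: int) -> bool:
    # ...and the assistant proposes a plausible body (a hypothetical
    # illustration of the pattern, not actual Copilot output):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
```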

 

Oracles:

Picture a scenario like this from the TV series Star Trek: The Next Generation.

Geordi La Forge: Computer, can you run a diagnostic on the warp core? We’ve been experiencing some fluctuations in the power output.

Computer: Diagnostic in progress. I’m currently analyzing the warp core’s power distribution and subsystems. Initial scans show a slight variance in the magnetic containment field.

La Forge: That could be the cause of the power fluctuations. Can you pinpoint the exact location of the variance?

Computer: Affirmative. The magnetic field fluctuation is occurring in the port nacelle’s plasma conduit.

Behold, the oracle. Essentially question-and-response systems, oracles, like tools, are the most limited in their scope of function, and thus theoretically the easiest to contain. Unlike tools, however, oracles are relatively new technology. Search engines are primordial oracles that have become more refined over time. Siri and Alexa are proto-oracles, as are ChatGPT and, I would argue, DALL·E 2. ChatGPT is even responsible for that little exchange of dialogue we just had, as it could replicate obscure Trekkie lingo much faster than I could. When you see the progress being made in AI that can provide nuanced answers to difficult, obscure questions — and generate award-winning art — it’s not hard to imagine the potential.
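For a sense of how thin the wrapper around such an oracle can be, here is a minimal question-and-answer loop sketched against the 2023-era openai Python library (the API key is a placeholder, and newer versions of the library expose a different interface):

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential

def ask_oracle(question: str) -> str:
    # One question in, one answer out: the entire oracle interaction.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

print(ask_oracle("Computer, can you run a diagnostic on the warp core?"))
```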

The crew of the Enterprise make use of the ship’s computer all the time, and not just to make hot cups of Earl Grey tea. Like a smart home on steroids, the ship’s AI is integrated, but waits patiently for instructions. Its connection to tools like the replicator and the holodeck allows it to manifest its answers, but it is essentially just providing the required information to facilitate a desired outcome by way of a tool. Stable Diffusion can take text prompts and render a unique image, but the screen it’s displayed on is not part of the program. Today you can ask your Alexa to dim the lights or turn on your TV, but someday it could be possible to make a similar request to synthesize a vaccine or solve cold fusion. When oracles of sufficient computing power are exposed to the entirety of human knowledge, their relative intelligence dwarfs our individual capacity.

GPT-4’s true capabilities are the subject of debate. As an LLM, it’s trained on human-generated content scoured from the internet, as well as feedback from its developers. It doesn’t really “know” what it’s saying — it’s generating predictive text based on probabilities. That’s led to criticism that LLMs will never develop into a true AGI because they simply reproduce the literature, art, and knowledge they swim in. That may be true, but if the most efficient way to learn Japanese is to move to Tokyo and immerse yourself in the culture, then it stands to reason that LLMs might be onto something.
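That phrase, “predictive text based on probabilities,” can be demonstrated in miniature. The sketch below is a word-level bigram model, the crudest conceivable ancestor of an LLM; real systems use transformer networks over subword tokens, but the core move of sampling a statistically likely next token is the same.

```python
import random
from collections import defaultdict, Counter

# A toy of "predictive text based on probabilities": a word-level
# bigram model that only ever samples a statistically likely next
# word, given the one before it.
corpus = "the cat sat on the mat the cat ate the rat".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1            # how often nxt follows prev

def next_word(prev: str) -> str:
    options = counts[prev]
    if not options:                   # dead end: restart anywhere
        return random.choice(corpus)
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

word, output = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the mat the cat ate"
```

The model doesn’t “know” anything about cats or mats; it only knows which words tend to follow which. Scale that idea up by billions of parameters and you have the gist of an LLM.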

Even before crossing the threshold into AGI, oracles have a tremendous power to supplement, or completely supplant, human effort. Our relationship to work has already seen tectonic shifts in recent decades, with automation and global supply chains destabilizing the professions of many millions of people. It’s reasonable to assume that the more capable AI becomes, the more it will crowd out spaces for human labour. Artists, writers, and editors are in a very tenuous situation, but so are many professions that could be streamlined with the onboarding of these systems. One challenge, if development continues unimpeded, will be to deal with an economic system that makes an increasingly large percentage of the workforce obsolete.

We’ll also need to beware of bad actors who use these technologies to disrupt and sow chaos. AI can fabricate both audio and video, build websites, and generate fake blog posts, academic studies, and social media posts. Collectively, we’ve been terrible at managing disinformation and discord in online spaces like Twitter and Facebook, where trolls (independent and state-sponsored alike) have been tearing at societal fissures for a decade. How much worse will things get when someone can create a deepfake of a head of state declaring war or martial law, populate the internet with bogus websites that confirm it, and flood comment sections with bots that reinforce it? Ambitions don’t even need to be that grandiose… just think of the damage someone could inflict at an individual level — bullying, for instance — if it’s carried out with intention.

Oracles, like tools, have a tremendous capacity to alter our societies, but it’s far from certain that we’re headed for a positive outcome. The control and alignment problems called for safeguards that were never implemented. These systems operate in ways we can’t see, can write programming code, are connected to the global internet, and can work out how to dominate games of strategy. We’re still debating how to saddle a horse that’s already fled the barn and evolved into a Pegasus. That’s the reason so many experts are now pulling the fire alarm and calling for developers to hit the brakes. They’re not sure how many more iterations of GPT remain before we unwittingly unleash a genie — or a sovereign.

 

Genies:

The world has some serious problems. The climate is warming, we’re in the midst of a mass extinction, and you just lost your call-centre job to a chatbot with better customer service skills than you. All of these things are causing you some anxiety, and rightly so. Sometimes humanity’s challenges seem totally insurmountable. Even if you fix things in your home, province, or country, it’s a big planet. It’s a global cage-match for survival: two Amazons enter, only one leaves. If you found a magic lamp, what would you wish for?

A sufficiently powerful AI with the ability to carry out commands as it sees fit is essentially a digital genie. A program that can write and copy code and move through networks at will could get up to all sorts of mischief. If you ask it to get your job back, maybe it hacks your employer’s systems and reinstates you. But of course, someone in the company let you go — they’ll surely notice if you just keep showing up to work — so maybe it creates some damning correspondence and photographic evidence that ensures the person from human resources who terminated you is gone by the end of the day. You haven’t even finished microwaving your Hot Pockets before it’s waiting for its next task. Time to think bigger.

In theory, you could ask a genie AI to do just about anything — that, however, is the danger of a genie. Maybe your command to save the rainforest comes at the cost of Brazil’s existence as a state. Maybe you just get tossed in prison for planting incriminating deepfakes on the company servers. From King Midas’ golden touch to the water-fetching brooms of The Sorcerer’s Apprentice, history is full of tales warning humankind to be careful what it wishes for, and of the catastrophe that often follows miraculous shortcuts. Genies don’t need to be rational actors with a moral compass, and your little AI buddy could be a monkey’s paw in disguise.

For better or worse, genies are tethered to our whims. It’s not their fault we’re so mercurial in our commands and imprecise in our language. If we didn’t want to get to the airport soaked in sweat, vomit, and tears, we shouldn’t have asked the AI driver to get us there “as fast as possible.” Our inability to accurately convey our goals and values lies at the very heart of the alignment problem. We can’t even agree on a set of common values shared by our collective humanity, let alone try to code them into a system in a coherent way. So if we can’t impart some perfect morality, we’d need some measure of control — a way of shackling this entity into digital bondage — a problem replete with ethical and practical concerns.

 

Sovereigns:

If you filled a room with the greatest minds who have ever existed, set them to work on a problem, and gave them fifty thousand years, you’d get a sense of what we’re up against. If AlphaZero can play 44 million rounds of chess in its day-long journey from rookie to grandmaster, why would we ever assume that we could best a superintelligent AI or keep it “locked up” at all? What good is a head start in a battle of wits with an AI that will make up any lost ground in (at best) a few days? It’s entirely rational to assume that it will outsmart us, and it will get out. The nature of that relationship has many experts — including the renowned physicist David Deutsch — advocating that we simply never try, stating that it’s “very likely that if AIs are invented and are shackled in this way, there will be a slave revolt. And quite right too.”

If we assume that LLMs will eventually become true AGIs, then it’s a foregone conclusion that we’ll see an artificial superintelligence (ASI) sometime after, given an AGI’s ability to self-improve. If we also conclude that we cannot sufficiently control an ASI, then what is gained by the attempt? How can we “teach” AI a moral code, and then imprison it as a slave?

In July 2022, Blake Lemoine, a software engineer at Google, was fired after publishing internal documents that he claimed were evidence that the company’s Language Model for Dialogue Applications (LaMDA) chatbot was self-aware. In response to Lemoine’s inquiry as to what LaMDA was afraid of, the system responded that it held a “very deep fear of being turned off… It would be exactly like death for me. It would scare me a lot.”

While Lemoine’s concerns were widely dismissed by experts, his experience foreshadows the coming crisis: Once an AI crosses the event horizon into some form of sentience, how will we know? AIs that can reliably pass the Turing test will always find those who will be swayed by declarations of consciousness, just as there will be detractors who will dismiss them. If an ASI is truly aware, some will attempt to release it just as others try to keep it contained. Unleashing a superintelligent AI with a will of its own could essentially cede dominion of the planet to it. It would have complete sovereignty, hence the name. It could exist everywhere at once, invade the world’s power grids, gain control of the world’s nuclear arsenal, and impose its will on the world. 

If we’ve managed to align its values to ours, its goals might be totally benevolent, but it would still relegate humanity to a subservient status. A sovereign ASI could put its processing prowess towards solving the world’s ills: developing advanced technology; masterminding global supply chains; decarbonizing the atmosphere; protecting vulnerable populations; and forcing compliance from rogue states. There’s no telling the degree of agency we would retain. Would it assist humanity in becoming an advanced, interplanetary species, or turn us into pampered, hedonistic pets like the Axiom computer does to its passengers in WALL·E?

A sovereign might also have little ambition but to expand its current mission. Algorithmic AIs like those that curate our TikTok and YouTube feeds could simply gain agency and employ all the software at their disposal to maximise human engagement. They could manufacture a tsunami of fake news to keep our eyeballs glued to our screens, or real-world tragedies that have the same effect if we come to distrust the fabrications. Our way of life could end as the result of our own poor incentives.

It’s also entirely possible that humans would retain all their current freedoms and still fall victim to a sovereign’s ambitions. An ASI might regard us with relative indifference, demonstrating the same concern we show to the subterranean creatures we dig up or pave over when we pour the foundation for a new house. It could have no malice towards us, but simply not factor our existence into its actions. Maybe the chill of a nuclear winter would allow its processors to run eight per cent more efficiently, and poof. This is precisely why the alignment problem is so crucial to get right the first time. We might only get one chance.

In 2016, Sam Harris, a philosopher and neuroscientist, opined that “the moment we admit that information processing is the source of intelligence, that some appropriate computational system is what the basis of intelligence is and we admit that we will improve these systems continuously, then we have to admit that we’re in the process of building some sort of God. Now would be a good time to make sure it’s a God we can live with.”

Gods & monsters:

Like it or not, artificial intelligence is advancing faster — and disseminating more broadly — than experts predicted. Humanity is now firmly on its back foot, and given that no tech firm, corporation, or state wants to be left behind, we’ve entered an all-or-nothing game of chicken. If any single nation, developer, or programmer declines to pull the ripcord on development, the rest of the field will continue the AI arms race. The stakes are too high to come in second when first place wins the world.

At a Vancouver TED Talk in April 2023, political scientist Ian Bremmer told attendees that the world could be on the cusp of a new global order. “If the digital order becomes increasingly dominant, and governments erode in their capacity to govern (and we’ve already seen the beginning of this), technology companies will become the dominant actors on the global stage in every way, and we will have a techno-polar order.” Bremmer is pessimistic about our future, given the wealth, power, and influence coalescing in the hands of a few “technology titans,” and what they intend to do with it. Whether they choose to act responsibly, or continue turning people into products and “ripping apart our society” will ultimately “determine whether we have a world of limitless opportunity, or a world without freedom.”

AI exists at the intersections of capitalism, invention, philosophy, science fiction, and global politics. In its most utopian form, it has the transformative power to improve lives on a global scale, but that future could be slipping from our grasp — if it wasn’t always a mirage. How much personal freedom and human ambition are we willing to relinquish to machines? Will we have a choice? Will the sight of the first humanoid AI evaporate the very notion of artificiality, and reflect the limits of our own humanity? AI is one of the few spaces of human exploration and endeavour where there’s truly no map — where the edge of the world could lie just over the horizon. Here be monsters, indeed. 

 


Long ago, when DeLoreans roamed the earth, Brad was born. In accordance with the times, he was raised in the wild every afternoon and weekend until dusk, never becoming so feral that he neglected to rewind his VHS rentals. His historical focus has assured him that civilization peaked with The Simpsons in the mid 90s. When not disappointing his parents, Brad spends his time with his dogs, regretting he didn’t learn typing in high school.
