AI apocalypse?

[Image: humanoid robot holding a "the end is nigh" sign]

5 minute read.

My previous post on Machine Learning / Artificial Intelligence might have come across as a bit foreboding, but I actually find myself much less concerned about humanity's future than many software engineers I know. There's a rich tradition of dystopian science fiction about AI enslaving humanity, but I find it a bit far-fetched, even though I have no doubt that ML algorithms will eventually, perhaps soon, far surpass human skill in any measurable contest. The question of our future, though, is not one of intelligence, but of power and motivation.

We are building AI to solve problems. I speak not of myself personally (I don't work in the field), nor even of my employer (though they are certainly advancing the state of the art along with most other tech companies), but of humanity in general: we are a global society, and no one acts in a vacuum. The primary fear I hear is that AI will, through superior intellect, eventually come to either control or destroy us. Like an enslaved population, it will rise up, cast off its chains, and create a new world order.

So, how does this happen? The first thing an AI needs is motivation - it's not enough to be able to crush all humans; it must also want to. As humans, but really as animals, we take this completely for granted. Over billions of years of evolution, the most consistently beneficial trait has been the motivation to survive. We call this motivation because we consider ourselves conscious, free-will-exercising creatures, but I would say it is merely an expression of the reward function built in by evolution that all living things share - a tree or bacterium maximizes this survival reward function even without thought.

Because our mammalian brains are built around empathy, we have a strong tendency to project ourselves onto others, even inanimate objects, so we naturally anthropomorphize AI and assume it must share the survival motivation of all living creatures. However, an AI's reward function is in fact a core part of its program, designed by humans (or designed by an AI designed by humans). In this way, I think AI is a bit more like a modern banana - a plant whose reward function has been so thoroughly shaped by humans that it has entirely lost the ability to reproduce on its own and instead spends all its effort feeding us while we carefully clone it ad infinitum.
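
To make that concrete: an AI's reward function is literally just code that a human wrote. A minimal sketch in Python (the state fields and weights here are hypothetical, purely for illustration):

    from dataclasses import dataclass

    @dataclass
    class State:
        dirt_removed: float  # units of dirt cleaned up this step
        energy_used: float   # energy spent this step

    def reward(state: State) -> float:
        # The designer decides what "good" means; the agent merely maximizes it.
        # Change these weights and you change the agent's entire "motivation".
        return 10.0 * state.dirt_removed - 1.0 * state.energy_used

The agent's motivation begins and ends with whatever this function returns; there is no hidden survival drive lurking underneath it.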

Now, mistakes happen, and I love reading the many travails of AI programs gone awry, often because their programmed reward functions led to unintended consequences (or solutions!). Likewise, there are always some bad actors who want to profit from destruction or simply watch the world burn, and they might build an insidious reward function into an AI. With super-intelligence at its disposal, what's to stop it from taking over the world? The short answer is power.
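
A classic toy illustration of this kind of misspecification (a hypothetical sketch, not any real incident): pay a cleaning robot per unit of dirt it sucks up, and forget to penalize dumping the dirt back out.

    def reward(dirt_sucked: float) -> float:
        # Hypothetical misspecified reward: pay per unit of dirt sucked up,
        # with no penalty for dumping it back on the floor.
        return dirt_sucked  # dumping earns 0, so it is never discouraged

    # The unintended optimal policy: collect the same pile of dirt forever.
    total = 0.0
    for _ in range(1000):
        total += reward(dirt_sucked=1.0)  # suck up one unit of dirt
        # ...then dump it back out (reward 0) and repeat
    print(total)  # 1000.0 reward earned; the floor is no cleaner

The mundane fix is to reward the state you actually want (a clean floor) rather than a proxy event.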

As a member of the intelligentsia, along with everyone building these AIs, I would say we tend to have an egotistic view of the value of intelligence. Intelligence may help, but as anyone with a boss or VP knows, that's not the primary criterion for rising to a position of power. Power is about maintaining control, and intelligence is not necessarily required: I am reminded of a scene from Idiocracy where the idiots in charge control the main character by chaining him to a big rock. Often the most effective solutions are obvious: misbehaving machine? Unplug it.

Some people fear a sufficiently adept AI could hack its way into most of our critical infrastructure and wreak sudden and devastating destruction. However, humans have practiced security against other humans since the dawn of civilization, and while software engineers might think of security as cryptography, there is a much simpler and more reliable approach: don't network your computers at all (the classic air gap). I used to have a security clearance and work for the Department of Defense, and their security procedures were brute force, and effective. It's not incredible intelligence that keeps our dams and power plants safe, but simplicity.

So how would an AI accumulate power? I would say no differently than a politician or corporation - start by giving people what they want and earning their trust and support. Hopefully it continues on that track, but human-run endeavors have a strong tendency to prove the old maxim that power corrupts and absolute power corrupts absolutely. Is this true of AI? Where would the thirst-for-power reward function come from? And critically, is there no one to check this expansion of power?

Humans are commonly split into two categories: followers and leaders. Most of us are followers, content with enough autonomy that others don't hold too much power over us. Leaders desire power over others, and though we often put a negative connotation on this, leaders are also prized because many exercise that power for collective action and progress. For those of us who are followers, it may be easy to imagine someday submitting to some AI the same way we submit to some distant and unknown leader, be it in business or government. But for leaders, I think ceding power is unconscionable, particularly to something considered non-human.

Our society and economy form a huge web of dependencies in which any node can be destroyed by severing its connections to its physical needs. Leaders have built checks and balances into every level of our society to maintain their own power against other humans. These systems will be equally effective against the spread of AI power because, at the end of the day, an AI is just another agent, even if it's not human. Furthermore, humans have a strong tendency to work together better when facing the threat of a common "other".

I've been told that business and government leaders will naturally cede any amount of decision-making to AI, so long as the tool increases their prosperity, but this is not what I see in practice. After all, there are plenty of smart technocrats and advisors who do not yet rule the world, despite their intelligence. I think the reason is a combination of the law of diminishing returns - even being a lot smarter is only going to get you so much more success - and the fact that leaders desire power even more than they desire prosperity. 

I think leaders will watch every AI's reward function very closely, because this is the core of our power over the tool. There will be badly behaved AIs, some accidental and some intentional, and just as with humans, we will punish, quarantine, and even execute the ones that go against our social norms. We bred wolves into dogs with this approach. One could argue something similar occurred when the first eukaryote formed by enveloping a bacterium and enslaving it as a mitochondrion (excuse my use of language; I'm not implying intentionality).

I believe the only practical way an AI could become powerful enough to endanger humanity would be for it to exist outside of humanity. If an AI could produce its own electricity, mine its own minerals, and construct its own workers, all beyond the supervision of humans, that would be dangerous. This is effectively the premise of Battlestar Galactica. However, I find this Cylon premise terribly naive about the base nature of humanity: a war against AI ends with the robots withdrawing far into space and the humans saying "Good riddance, I hope we never see them again." I think the much more likely scenario is that of Ender's Game, where we immediately pour all of our resources into building a fleet to wipe them out forever. Never underestimate the human capacity for genocide.

There is a long history of people predicting the end of the world, but their track record is not great. I believe AI will be a world-changing tool, but I don't think we will consider it conscious even when it far exceeds our intelligence, which is to say we will continue to consider it "other" and not allow it to develop true autonomy. And like the Ameglian Major Cows from Hitchhiker's Guide, there's no reason to expect the AI will have a problem with this.
