Beware the coming superintelligence of machines

Machine learning is still in its infancy, but it won't be long before humans are forced to confront some very real issues raised by artificial intelligence


By Brennen Schmidt
ALEUS Technology Group
and Allan Bonner
Troy Media columnist

This old joke is getting a new life: “Do you believe there’s intelligent life on other planets?” A witty respondent says, “I sure hope so; there’s so little of it here on Earth.” The joke can be repurposed for a discussion of artificial intelligence (AI), since there’s so little regular intelligence around.


AI is real. Military weapons systems that send bombs and drones hunting in packs for targets, without human oversight, either exist or are nearly off the drawing board.

Nobody’s sure how good an idea this is, so in 2012 the Pentagon issued a directive restricting the use of lethal autonomous systems. That restriction will probably be loosened once stronger AI comes into being.

These days, we have ‘weak AI,’ which performs limited tasks: stock trading, inventory control, smartphone voice commands and some automobile functions.

Machine learning, which we hear a great deal about, is still in its infancy. It lets machines and programs learn from experience without being explicitly programmed for every task. This may soon be a natural state of being for machines. When computers start teaching themselves, one result will be superintelligence.


The peace and security implications are discussed in the book Warnings: Finding Cassandras to Stop Catastrophes, by Richard A. Clarke and R.P. Eddy. The book profiles smart people who warned about the Gulf War, Hurricane Katrina, the rise of the Islamic State, the Fukushima nuclear disaster and other horrible events that could have been prevented. Artificial intelligence is among the dangers it examines.

The authors quote the physicist Stephen Hawking as saying that AI is “likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right.” Getting it right may involve ‘friendly AI,’ programmed to attack all unfriendly AI. One hopes the definition of ‘friendly’ doesn’t originate in North Korea.

There are even more definitions, and perhaps a few dead ends, out there. The U.S. Defense Advanced Research Projects Agency (DARPA) has a mandate to make sure the U.S. springs technological surprises rather than falling victim to them. It wants to find and use ‘explainable AI,’ which takes the important step of letting the human operator know what’s going on. That’s a good idea.

In some proposed weapons systems, there’s no human in the chain of command. These systems operate autonomously, seeing their mission through to the end no matter what. They’re intentionally created with no off or kill switch.

In the private sector, Elon Musk and Sam Altman have committed US$1 billion to AI research through OpenAI, the organization they co-founded. They believe some form of regulation is needed to control the technology. That seems logical, considering nobody in the U.S. government or in any international organization is responsible for protecting us from superintelligence.

The idea of computers running the world isn’t far off. More and more dollars are being invested in research on both systems and machines. If we thought the Terminator movies were pure fiction, we may wish to think again. Robotics company Boston Dynamics’ most recent YouTube video shows its humanoid robot, Atlas, performing gymnastics. The video has been viewed more than 13 million times.

Gymnastics is entertaining, but what are the broader implications?

Assuming Atlas eventually becomes autonomous, what kind of regulations are in place to limit its use? Should Atlas be used for law enforcement or military purposes? Would it be afforded the same rights and protections as a human?

What’s interesting is that many of these questions are being asked by scientists, Hollywood filmmakers and novelists, but rarely by the legislators who are tasked with keeping us safe.

AI is a complex field, fraught with technical terms and jargon. But we shouldn’t use that as an excuse to avoid some very important conversations.

Human existence may very well depend on them.

Recycled old jokes aside, AI is no laughing matter.

Dr. Allan Bonner, MSc, DBA, is a crisis manager based in Toronto. His forthcoming book is Cyber City Safe. Brennen Schmidt (BEd, Certified PR, CUA) is principal of the ALEUS Technology Group, a boutique digital communications firm in Regina.