The self-play, self-training concept of the AlphaGo Zero software provides a glimpse into the future of self-defining artificial intelligence (AI) systems. So how and when will AI exceed human capacity?
While some believe that the processing power already exists to achieve this, the software that would function at this level has not yet been invented.
However, AlphaGo Zero provides us with a real-world example of why super-intelligent AI might evolve quickly and soon.
In the AlphaGo Zero scenario, the AI plays the traditional Chinese board game Go against itself, starting from nothing. Initial games are almost useless because the decisions are random, so knowledge develops slowly. But each game provides the machine learning (ML) system with experience. And all the learning is self-generated; it doesn’t use human-compiled data as a base.
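The self-play loop described above can be sketched in miniature. The toy below is not AlphaGo Zero's algorithm; it is a minimal tabular learner for the simple game of Nim (take 1–3 stones; whoever takes the last stone wins), where both sides share one value table, play starts from random-quality estimates, and every update comes from the system's own games rather than human data. All names and parameters here are illustrative assumptions.

```python
import random

def train_self_play(stones=10, episodes=5000, eps=0.1, alpha=0.1, seed=0):
    """Toy self-play learner for Nim (take 1-3 stones; taking the last
    stone wins). One shared value table plays both sides, so all
    knowledge is self-generated -- no human-compiled data is used."""
    rng = random.Random(seed)
    # value[s] ~ estimated win probability for the player to move with s stones
    value = {s: 0.5 for s in range(stones + 1)}
    value[0] = 0.0  # no stones left: the player to move has already lost
    for _ in range(episodes):
        s = stones
        trajectory = []  # states faced by the player to move, in order
        while s > 0:
            moves = [m for m in (1, 2, 3) if m <= s]
            if rng.random() < eps:
                m = rng.choice(moves)  # occasional exploration
            else:
                # greedy: leave the opponent in the worst-valued state
                m = min(moves, key=lambda k: value[s - k])
            trajectory.append(s)
            s -= m
        # the player who made the last move won; back up alternating
        # win/loss outcomes through the states each player faced
        outcome = 1.0
        for state in reversed(trajectory):
            value[state] += alpha * (outcome - value[state])
            outcome = 1.0 - outcome
    return value
```

Early episodes are nearly random and teach the table little, but each game feeds back experience; after a few thousand self-play games the table reflects Nim's known structure (positions that are multiples of 4 are losing for the player to move).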
Subsequent learning builds on the knowledge the software has already accumulated. Eventually, the program reached a level where it could compete with a human novice. Soon after, AlphaGo Zero achieved super-human capability. In 40 days, AlphaGo Zero had rediscovered all the knowledge mankind had accumulated over the past 3,000 years of playing Go.
In comparison to what’s called general AI, AlphaGo is quite limited. AlphaGo combines a pre-programmed, automated reasoning mechanism that searches game trees for optimal move sequences with a learned (rather than programmed) game-state evaluation function.
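That division of labour, a hand-coded search mechanism paired with a pluggable evaluation function, can be shown in a few lines. The sketch below is a generic depth-limited negamax search, not AlphaGo's (which uses Monte Carlo tree search and a neural-network evaluator); the function names and the Nim example in the usage note are illustrative assumptions.

```python
def search(state, depth, moves_fn, apply_fn, evaluate):
    """Depth-limited negamax game-tree search. The search procedure is
    fixed and programmed; 'evaluate' scores leaf positions from the
    perspective of the player to move and could just as well be a
    learned function, mirroring the programmed-search / learned-
    evaluation split described above."""
    moves = moves_fn(state)
    if depth == 0 or not moves:
        return evaluate(state), None
    best_score, best_move = float("-inf"), None
    for m in moves:
        score, _ = search(apply_fn(state, m), depth - 1,
                          moves_fn, apply_fn, evaluate)
        score = -score  # the opponent's gain is our loss
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move
```

For example, plugging in the rules of Nim (take 1–3 stones, last taker wins) and a trivial terminal evaluator, a full-depth search from 10 stones finds the winning move of taking 2, leaving the opponent a losing multiple of 4.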
General AI must be able to design its own search algorithms and its own evaluation functions on the fly, and it must be able to weigh many streams of knowledge, not just game-state knowledge. Nevertheless, given the rapid pace of AI development, a general AI that automatically combines many knowledge streams looks more likely every day.
It seems inconceivable that an all-powerful general AI could be built by anyone today, given our limited knowledge of what intelligence actually is.
But human knowledge of intelligence isn’t going to be a limiting factor in the evolution of machine intelligence. One of the more powerful innovations in modern AI systems (like AlphaGo) is the ability to auto-tune their own structure.
Once a desired level of performance has been achieved, the auto-tune function can examine the evolved system and determine more optimized intelligence structures. With this technique, AI systems can grow without limit to achieve successively more advanced goals.
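The auto-tuning idea can be reduced to a caricature: propose a structural variant, score it, and keep it only if it performs better. The sketch below is a generic random search over candidate structures, not the mechanism any real system uses; `fitness`, the candidate list, and the "width" example are all hypothetical stand-ins.

```python
import random

def auto_tune(fitness, candidates, rounds=100, seed=0):
    """Toy structure search: repeatedly sample a candidate structure
    and keep the best-scoring one seen so far. 'fitness' is a
    placeholder for whatever performance measure the system optimizes."""
    rng = random.Random(seed)
    best = rng.choice(candidates)
    best_score = fitness(best)
    for _ in range(rounds):
        candidate = rng.choice(candidates)
        score = fitness(candidate)
        if score > best_score:  # keep only improvements
            best, best_score = candidate, score
    return best
```

For instance, if the candidates are layer widths and the (made-up) fitness function peaks at width 16, the loop converges on 16; real systems replace both pieces with far richer structure proposals and evaluations.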
With the right data and training mechanisms, general AI would seem achievable with today’s software and hardware.
Many people inside and outside the AI community believe general AI is decades away.
But that kind of linear thinking assumes AI technology will evolve at a regular pace. As Ray Kurzweil points out in his “singularity” thesis, AI development is non-linear because it is self-referencing. That means developments in AI accelerate the development of general AI and the pace of change is rapidly increasing.
We’re approaching the AI tipping point much more quickly than many realize.
Bruce Matichuk is an AI consultant currently working in Edmonton on Health Gauge, an AI-based hypertension management system. Bruce is also an Executive in Residence at GO Productivity.
The views, opinions and positions expressed by columnists and contributors are the author’s alone. They do not inherently or expressly reflect the views, opinions and/or positions of our publication.