Will Androids Dream Of Electric Sheep? An Overview of Recent Approaches To Artificial General Intelligence

Jessica M Cooper

ABSTRACT

The field of Artificial Intelligence was founded upon an ambitious goal - to create machines that can think like humans. Since 2012 much well-publicised progress has been made in narrow AI, but what of artificial general intelligence? This essay will cover approaches to AGI development from four contemporary research efforts - namely Numenta, DeepMind, CogPrime and NARS - contrasting their strategies and noting recent progress in each, before considering the future of AGI as a whole.

INTRODUCTION

The field of AI was founded in the mid-1950s with the aim of creating machines that possess human-level intelligence. However, due to the inherent difficulty of the problem and the technological limitations of the time, initial excitement about potential progress in the field quickly faded. In the intervening years the focus of research has shifted towards narrower forms of AI - useful expert systems that have many practical real-world applications and are now the subject of much research and investment. Most of the well-publicised AI breakthroughs of recent decades are of this kind - for example, IBM Watson, Deep Blue and AlphaGo are excellent at the tasks they were designed for, but none of them could make you a cup of tea, or act at all outside of their specific domain. These narrow AIs are a long way from the field's initial ambition to create a human-level general intelligence, but their impressive achievements have had the effect of rekindling interest in AI as a whole [1].

As interest in AI has grown there has been a resurgence of interest in AGI, with several research groups now dedicated specifically to this area. This essay will briefly explore recent work by four such groups, each aiming to build AGI via a different method, and then consider how likely AGI is to be achieved, and when.

WHAT DOES AGI MEAN?

What is AGI, and what must a system demonstrate in order to be considered generally intelligent? There is no widely accepted precise definition as yet, but the general consensus is that an AGI system must have the ability to work towards complex goals in uncertain environments, to learn and to apply that learning in unrelated domains, and to do all this with limited computational resources [2].

A number of operational definitions have been suggested to validate potential AGI systems, from Turing’s imitation game [3] to Nilsson’s employment test [4] and various others in between. What many of these measures have in common is that they attempt to test the system’s capability to do the things that a human can do - so, for the purpose of this essay, let AGI be simply an artificial system which is capable of demonstrating human-level intelligence across any given domain.

APPROACHES TO AGI

We already have a working example of general human-level intelligence - the human brain. It follows that imitating some or all aspects of the brain is one possible route to AGI. What are the alternatives?

There are several ways to design systems that can do what the brain can do but need not resemble the structure of the brain at all. These cognitive architectures can take many forms. Pei Wang identifies three main approaches: ‘Hybrid’, in which a range of individual modules (for example reasoning, vision and language processing) are developed using whichever techniques are best suited to each function, before being linked together in some way; ‘Integrated’, where the fundamental architecture is created first, before its modules (which may be implemented using various methods or algorithms) are designed and added; and ‘Unified’, in which a single technique or algorithm is the basis of the entire system [5].

It is these three approaches to cognitive architecture, plus the route of brain imitation, that this essay will consider (see Fig 1). The research groups covered here are those that have a large body of research, and whose work appears particularly interesting or promising as a possible route to AGI.

Fig 1.

Approach                               Group
Brain Imitation                        Numenta
Cognitive Architecture - Hybrid        DeepMind
Cognitive Architecture - Integrated    CogPrime
Cognitive Architecture - Unified       NARS

BRAIN IMITATION

In 2005 Kurzweil [6] wrote: “There are no inherent barriers to our being able to reverse engineer the operating principles of human intelligence and replicate these capabilities in the more powerful computational substrates that will become available in the decades ahead.” Unfortunately, this approach is not yet possible, and will remain so until scanning technology improves. That said, there is no reason to doubt that at some point in the future, sufficiently high-resolution brain imaging will enable a very detailed software model of the brain to be constructed. Such a model would be very useful, but would it constitute AGI, or just an empty framework - incredibly detailed, but with no spark of intelligence?

Human brains and computers are fundamentally different, and if intelligence is a product of the imprecise, fuzzy, chaotic aspects of the human brain, it is an open question whether it can be replicated in a machine. Nevertheless, simulating part or all of the brain in an attempt to create AGI does offer one significant advantage: it is not necessary to create a theory of intelligence, or to try to build a system to approximate one, or even to understand how intelligence works or what it fundamentally is. All that is required is an understanding of the structure of the brain, and the assumption that it is this structure that gives rise to intelligence. The Human Brain Project is working on this task [7], although its aim is to better understand the brain for the purposes of medical science and neurological experimentation rather than as a route to AGI, and as such it is somewhat beyond the scope of this essay.

NUMENTA

Whole brain simulation is not the only brain-inspired path to AGI. Numenta, a machine intelligence company, aims to understand the principles of intelligence by studying the brain and then employ those principles to engineer machine intelligence.

Their latest work is centred around a theoretical framework called Hierarchical Temporal Memory (HTM), which describes the function of neocortical neurons based on neuroscientific evidence. In Biological and Machine Intelligence [8], Hawkins explains how artificial HTM neurons are much more akin to the complex pyramidal neurons found in the neocortex than to the relatively simplistic artificial neurons seen in typical neural networks. He states that learning in these pyramidal neurons is due to the removal of unused synapses and the creation of new ones, and that artificial HTM neurons learn by modelling this synaptic growth and decay.
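To make the growth-and-decay idea concrete, the sketch below shows, in Python, the permanence-based synapse learning that HTM-style systems use: each potential synapse carries a scalar ‘permanence’, synapses above a threshold count as connected, and learning nudges permanences up or down so that synapses effectively form and decay. The constants and sizes here are illustrative choices, not Numenta’s actual parameters.

```python
import numpy as np

# Illustrative constants (not Numenta's real values): synapses whose
# permanence exceeds the threshold are treated as "connected".
CONNECTED_THRESHOLD = 0.5
PERMANENCE_INC = 0.05
PERMANENCE_DEC = 0.02

def segment_active(permanences, input_bits, activation_threshold=10):
    """A dendritic segment fires if enough connected synapses see active input."""
    connected = permanences >= CONNECTED_THRESHOLD
    overlap = np.sum(connected & input_bits)
    return overlap >= activation_threshold

def learn(permanences, input_bits):
    """Reinforce synapses aligned with active inputs, weaken the rest,
    so synapses grow towards useful inputs and decay away from unused ones."""
    permanences = np.where(input_bits,
                           permanences + PERMANENCE_INC,
                           permanences - PERMANENCE_DEC)
    return np.clip(permanences, 0.0, 1.0)

# Usage: 64 potential synapses with initial permanences near the threshold.
rng = np.random.default_rng(0)
perms = rng.uniform(0.4, 0.6, size=64)
inputs = rng.random(64) < 0.3        # boolean array of active input bits
for _ in range(20):                  # repeated exposure forms/prunes synapses
    perms = learn(perms, inputs)
print(segment_active(perms, inputs))
```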

Although Numenta does not claim to have a complete theory of the neocortex, it believes it has recently succeeded in discovering some of its core algorithms [9]. HTM has so far proven a successful approach in several commercial machine learning tasks such as anomaly detection and natural language processing; however, this is not Numenta’s main focus - rather, it aims to create “true machine intelligence” [10]. This is particularly exciting given the success of artificial neural networks in narrow AI. If HTM neurons truly model learning better than the simple units of the ANNs that have seen such success of late, it follows that we may expect great progress from Numenta as the technology matures.

Whether this is realistic or not remains to be seen. Goertzel argues that to focus on these low-level features may be misdirected, if what really matters is the software of intelligence - the actual activity of the mind - and not the hardware it happens to be running on [11]. Similarly, Russell and Norvig [12] write: “The quest for “artificial flight” succeeded when the Wright brothers and others stopped imitating birds and started using wind tunnels and learning about aerodynamics.” It may well be unnecessary or misguided to attempt to mimic the human brain if the goal is to produce intelligent behaviour - just as it’s unwise to try to fly by masquerading as a pigeon.

COGNITIVE ARCHITECTURE

DEEPMIND

Straddling the two approaches of brain-inspired models and cognitive architecture, some of DeepMind’s latest work concerns differentiable neural computers (DNCs), which are hybrids consisting of deep neural networks with read/write access to external memory matrices. This is particularly interesting because neural networks have historically struggled to represent data over longer timescales, a problem the DNC appears to solve. Graves et al. write: “Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read–write memory” [13].
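The core retrieval mechanism can be illustrated briefly: the controller network emits a key vector, and the read weighting over memory rows is a softmax over scaled cosine similarities. The numpy sketch below is schematic only - the published DNC also includes usage-based allocation, temporal link matrices and learned gating, all omitted here.

```python
import numpy as np

def cosine_similarity(memory, key):
    """Similarity between a key vector and each row of the memory matrix."""
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    return memory @ key / norms

def content_read(memory, key, beta):
    """Content-based addressing: a sharper beta gives a more focused read."""
    scores = beta * cosine_similarity(memory, key)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()          # softmax over memory rows
    return weights @ memory           # weighted sum of rows = read vector

# Usage: a 128-slot memory of 20-dimensional vectors.
rng = np.random.default_rng(1)
M = rng.standard_normal((128, 20))
k = M[42] + 0.1 * rng.standard_normal(20)  # noisy key resembling row 42
r = content_read(M, k, beta=10.0)          # r is approximately M[42]
```

Because every step is differentiable, the network can learn by gradient descent what to write and which keys to emit - this is what distinguishes a DNC from a conventional lookup table.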

Increasingly, DeepMind appears to have been adapting neural networks in ways which are more geared towards AGI than their usual narrow applications. Indeed, Demis Hassabis (DeepMind co-founder and research scientist) is described [14] as characterising DeepMind’s work as “a multi-decade Apollo-style project to crack artificial general intelligence... rather than teach the machine to understand language, or recognise faces, or respond to voice commands, he wants machine learning and systems neuroscience to teach the network to make decisions - as humans do - in any situation whatsoever.” This kind of transfer learning - the ability to apply knowledge learned from one task to another - is critical to successful AGI.

Earlier this year DeepMind also published a promising paper on adapting neural networks for transfer learning, introducing PathNet: a single large network composed of many neural network modules, through which pathways are evolved as tasks are learned [15]. When the system learns a task, the parameters along the optimal pathway are frozen to prevent them from being overwritten by the next task, allowing the knowledge gained to persist and thereby improve the learning rate on new tasks. This is important because it appears to circumvent the problem of catastrophic forgetting - the inability of neural networks to adapt to new tasks whilst retaining knowledge of prior ones, which enforces their narrow nature.
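The freezing mechanism at the heart of this idea can be sketched in a few lines of PyTorch. This is a sketch under stated assumptions, not the paper’s implementation - PathNet evolves pathways with a tournament genetic algorithm over a grid of modules, whereas here a winning pathway is simply given and its parameters frozen:

```python
import torch.nn as nn

# A PathNet-style grid: each layer offers several candidate modules, and a
# "pathway" selects one module per layer. (Schematic layer count and sizes.)
layers = [nn.ModuleList([nn.Linear(32, 32) for _ in range(4)])
          for _ in range(3)]

def freeze_pathway(pathway):
    """After a task is learned, fix the winning pathway's parameters so that
    training on the next task cannot overwrite them. Frozen modules still
    take part in forward passes; only gradient updates are blocked."""
    for layer, module_idx in zip(layers, pathway):
        for p in layer[module_idx].parameters():
            p.requires_grad = False

# Suppose task 1's evolved pathway used module 2, then 0, then 3:
freeze_pathway([2, 0, 3])
```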

These two papers seem to go some way towards overcoming the limitations of neural networks, and as DeepMind’s ambition and progress in recent years have been so impressive [16], it seems reasonable to expect further breakthroughs in broadening the scope of deep learning. However, unless some flexible way is found of integrating these disparate systems into a cohesive whole capable of functioning in many environments, we may end up with some very clever systems which are still fundamentally limited to a few specific domains.

COGPRIME

CogPrime is based on the idea that AGI will arise from a ‘cognitive synergy’ born of the integration of multiple algorithms and structures within a cognitive architecture, connected in such a way that individual components are able to assist each other. Goertzel et al. [17] claim that this is also how human brains achieve intelligence, leveraging a combination of distinct structures made from common components and “arranged according to a sensible cognitive architecture...Due to their close interoperation they give rise to the overall systemic behaviors that characterize human-like general intelligence.” However, at a low level the CogPrime architecture bears little resemblance to the human brain, instead attempting to cover the key aspects of human intelligence in a modular fashion. This reflects the assumption that intelligence arises from the interoperation of high-level structures across an entire system, and not as a consequence of the particular implementation of those structures.
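One way to picture this style of integration is a shared knowledge store that heterogeneous modules read from and write to, so that each module’s partial results can assist the others. The Python sketch below is deliberately toy-like; its class and method names are illustrative inventions, loosely inspired by OpenCog’s AtomSpace rather than its actual API:

```python
class KnowledgeStore:
    """A shared store that all modules read from and write to.
    (Names are illustrative, not OpenCog's real interfaces.)"""
    def __init__(self):
        self.facts = {}

    def write(self, key, value):
        self.facts[key] = value

    def read(self, key):
        return self.facts.get(key)

class VisionModule:
    def process(self, store, image):
        store.write("seen_object", "red ball")   # stubbed perception result

class ReasoningModule:
    def process(self, store):
        # Reasoning exploits perception's output: a small "cognitive synergy".
        obj = store.read("seen_object")
        if obj:
            store.write("goal", f"pick up {obj}")

store = KnowledgeStore()
VisionModule().process(store, image=None)
ReasoningModule().process(store)
print(store.read("goal"))                        # -> "pick up red ball"
```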

Goertzel [18] also argues that “Biological systems tend to be messy, complex and integrative; searching for a single “algorithm of general intelligence” is an inappropriate attempt to project the aesthetics of physics or theoretical computer science into a qualitatively different domain.” I disagree. The neocortex, the seat of human intelligence, is relatively homogeneous and essentially blank at birth, and only through experience do different regions begin to develop and interact [19] - thus it seems illogical to require distinct structures as a prerequisite for potential AGI. However, that does not imply that CogPrime will fail, only that it is not brain-like at a low level.

CogPrime’s approach is interesting as it offers the possibility for many of the useful and well developed narrow AI systems to be integrated within an AGI architecture. This would seem in some senses a simpler approach than attempting to develop a practicable unified model of intelligence - good narrow AI systems are increasingly capable and available, so focussing on a central architecture to combine them is an efficient strategy.

Most recent work with CogPrime appears to be done by Hanson Robotics, and also by the open source community using OpenCog (CogPrime’s open source counterpart). Some success has been achieved in controlling virtual agents within a virtual environment, and also across commercial applications in finance, analysis and natural language processing, although the architecture is not yet complete [20].

NARS

In contrast to CogPrime, Pei Wang’s NARS is a general-purpose reasoning system which takes a unified approach - that is, one fundamental mechanism underlies the entire architecture. The system is based on non-axiomatic reasoning: it assumes that there are no fixed truths and therefore reasons under uncertainty. This approach is particularly interesting, as the ability to function without complete knowledge of a given environment is key for any AGI system attempting to operate across many domains. The system works by taking ‘tasks’ as formal-language input - which may be goals to achieve, questions to answer or ideas to assimilate - and forming ‘beliefs’ based on those inputs. Tasks and beliefs containing similar ideas are clustered in nodes known as ‘concepts’. Crucially, tasks can be processed by using prior beliefs to make inferences, whilst forward inference and backward inference allow the system to create new knowledge and derive new questions respectively [21].
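To give a flavour of how such reasoning works: NAL, the Non-Axiomatic Logic underlying NARS, attaches a truth value of (frequency, confidence) to every statement, and each inference rule has a truth function for combining these values. Below is a minimal Python sketch of two standard truth functions, deduction and revision, simplified from Wang’s definitions; k is the evidential-horizon parameter.

```python
K = 1.0  # evidential horizon: how much future evidence is anticipated

def deduction(f1, c1, f2, c2):
    """NAL deduction: from <A --> B> and <B --> C>, derive <A --> C>.
    The conclusion is never more reliable than its premises."""
    f = f1 * f2
    c = f1 * f2 * c1 * c2
    return f, c

def revision(f1, c1, f2, c2):
    """NAL revision: merge two independent beliefs about the same
    statement by pooling the evidence behind each of them."""
    w1 = K * c1 / (1 - c1)        # total evidence behind belief 1
    w2 = K * c2 / (1 - c2)        # total evidence behind belief 2
    w_pos = f1 * w1 + f2 * w2     # pooled positive evidence
    w = w1 + w2                   # pooled total evidence
    return w_pos / w, w / (w + K)

# "Ravens are birds" (0.9, 0.9) and "birds fly" (0.8, 0.9) yield a
# weaker derived belief, "ravens fly":
print(deduction(0.9, 0.9, 0.8, 0.9))   # -> (0.72, ~0.58)
# Two independent observations of the same statement raise confidence:
print(revision(0.8, 0.5, 0.8, 0.5))    # -> (0.8, ~0.67)
```

This is why NARS can operate without fixed truths: every belief remains revisable, and accumulating evidence only pushes confidence towards, never to, certainty.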

This complex system has been under development for many years, and progress appears steady. Recent papers consider the implementation of emotion [22], and of self-awareness and self-control [23], in NARS, and the existence of a comprehensive roadmap makes it possible to quantify progress to some extent. However, the NARS approach assumes that there is one fundamental algorithm necessary for intelligence, and that it is the “logic of general intelligence” used in Wang’s system. This is not proven - it is possible that there is no single underlying algorithm or structure capable of describing human-level intelligence, and that intelligence instead arises from a sum of interacting features or parts, as in CogPrime.

WILL ANDROIDS DREAM OF ELECTRIC SHEEP? [28]  

It’s difficult to measure the progress of most AGI projects, because to some extent it is unclear what a successful generally intelligent system would look like. How can we tell if we’re 50% of the way towards an unknown destination? However, this is not the case with Wang’s NARS: “My project NARS has been going on according to my plan, though the progress is slower than I hoped, mainly due to the limit of resources. What I’m working on right now is: real-time temporal inference, emotion and feeling, self-monitoring and self-control. If it continues at the current pace, the project, as currently planned, can be finished within 10 years, though whether the result will have “human-level AGI” depends on what that phrase means - to me, it will have.” [24]

So, AGI within a decade? Perhaps, if Wang is correct regarding the potential of the NARS project. There are also several reasons to believe that AGI may not be too far off even if NARS does not succeed. For one, as more breakthroughs occur in the field of artificial intelligence, even if not explicitly in AGI, the field as a whole becomes more popular and more funding and resources become available, thereby increasing the rate of progress. The fact that major companies like Google and Facebook are pouring money into AI [25] also bodes well for continued progress in the field as a whole, which can only impact positively on AGI research. Likewise, as computing power continues to increase, new scanning technology becomes available, and our general understanding of intelligence as a concept improves, it could be argued that AGI via one approach or another is inevitable at some point.

Conversely, if for whatever reason AI falls out of favour and another AI winter occurs, progress might stall enough to push AGI a century or two into the future - yet this does not seem likely in the light of current public enthusiasm for the topic. Perhaps there is something unique about human intelligence that is fundamentally impossible to replicate artificially by any means - although I do not see why this would be the case. Of course, there is always the unfortunate possibility that some kind of catastrophic societal or environmental disruption to scientific research in general, or AGI research in particular, could stall progress indefinitely - unfortunately not something that can be discounted in today’s uncertain world.

On a more positive note, expert predictions seem to support the idea that AGI may be achieved in the not-too-distant future. A study of 352 machine learning researchers found that “‘High-level machine intelligence’ (HLMI) is achieved when unaided machines can accomplish every task better and more cheaply than human workers. Each individual respondent estimated the probability of HLMI arriving in future years. Taking the mean over each individual, the aggregate forecast gave a 50% chance of HLMI occurring within 45 years and a 10% chance of it occurring within 9 years.” [26]

Whilst there is evidently a large gap between our current capabilities and a truly general artificial intelligence, the likelihood of AGI emerging is increased by the variety of promising approaches currently being investigated, as this essay has outlined. Clearly there is a lot of uncertainty in this young and incredibly exciting field of research, but as Goertzel writes [27]: “These are still early days for AGI; and yet, given the reality of exponential technological advance, this doesn’t necessarily imply that dramatic success is a long way off.”

REFERENCES

  1. Wang, P. (2013, June). Artificial General Intelligence. Retrieved October 9, 2017, from https://sites.google.com/site/narswang/home/agi-introduction
  2. Muehlhauser, L. (n.d.). What is AGI? - Machine Intelligence Research Institute. Retrieved October 9, 2017, from https://intelligence.org/2013/08/11/what-is-agi/
  3. Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460. https://doi.org/10.1093/mind/LIX.236.433
  4. Nilsson, N. (2005). Human-level artificial intelligence? Be serious! AI Magazine, 26(4), 68–75. https://doi.org/10.1609/aimag.v26i4.1850
  5. Wang, P. (2013, June). Artificial General Intelligence. Retrieved October 9, 2017, from https://sites.google.com/site/narswang/home/agi-introduction
  6. Kurzweil, R. (2005). The Singularity Is Near: When Humans Transcend Biology. New York: Viking.
  7. Human Brain Project: Brain Simulation. (n.d.). Retrieved October 9, 2017, from http://www.humanbrainproject.eu/en/brain-simulation/
  8. Hawkins, J., Ahmad, S., Purdy, S., & Lavin, A. (2016). Biological and Machine Intelligence (BAMI). Numenta. Retrieved from https://numenta.com/resources/biological-and-machine-intelligence/
  9. Numenta.org • HTM School. (n.d.). Retrieved October 7, 2017, from https://numenta.org/htm-school/
  10. Numenta.com • Numenta in a Nutshell. (n.d.). Retrieved October 9, 2017, from https://numenta.com/numenta-in-a-nutshell/
  11. Goertzel, B. (2014). Artificial General Intelligence: Concept, State of the Art, and Future Prospects. Journal of Artificial General Intelligence, 5(1). https://doi.org/10.2478/jagi-2014-0001
  12. Russell, S., & Norvig, P. (2009). Artificial Intelligence: A Modern Approach (3rd ed.). Prentice Hall.
  13. Graves, A., Wayne, G., Reynolds, M., Harley, T., Danihelka, I., Grabska-Barwińska, A., … Hassabis, D. (2016). Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626), 471–476. https://doi.org/10.1038/nature20101
  14. DeepMind: inside Google’s groundbreaking Artificial Intelligence startup | WIRED UK. (n.d.). Retrieved October 6, 2017, from http://www.wired.co.uk/article/deepmind
  15. Fernando, C., Banarse, D., Blundell, C., Zwols, Y., Ha, D., Rusu, A. A., … Wierstra, D. (2017). PathNet: Evolution Channels Gradient Descent in Super Neural Networks. Retrieved from https://arxiv.org/pdf/1701.08734.pdf
  16. DeepMind’s Work in 2016: A Round-Up. DeepMind.com (2017, January). Retrieved October 6, 2017, from https://deepmind.com/blog/deepmind-round-up-2016/
  17. Goertzel, B., Pennachin, C., & Geisweiller, N. (2014). Engineering General Intelligence (pp. 1–18). Atlantis Press, Paris. https://doi.org/10.2991/978-94-6239-027-0_1
  18. Goertzel, B. (2014). Artificial General Intelligence: Concept, State of the Art, and Future Prospects. Journal of Artificial General Intelligence, 5(1). https://doi.org/10.2478/jagi-2014-0001
  19. What Intelligent Machines Need to Learn From the Neocortex - IEEE Spectrum. (n.d.). Retrieved October 9, 2017, from https://spectrum.ieee.org/computing/software/what-intelligent-machines-need-to-learn-from-the-neocortex
  20. CogPrime Overview - OpenCog. (n.d.). Retrieved October 9, 2017, from http://wiki.opencog.org/w/CogPrime_Overview
  21. Wang, P. (2013). Non-Axiomatic Logic: A Model of Intelligent Reasoning. World Scientific. https://doi.org/10.1142/8665
  22. Wang, P., Talanov, M., & Hammer, P. (2016). The Emotional Mechanisms in NARS (pp. 150–159). Springer, Cham. https://doi.org/10.1007/978-3-319-41649-6_15
  23. Wang, P., Li, X., & Hammer, P. (2017). Self-awareness and Self-control in NARS (pp. 33–43). Springer, Cham. https://doi.org/10.1007/978-3-319-63703-7_4
  24. Pei Wang on the Path to Artificial General Intelligence - h+ Media. (n.d.). Retrieved October 5, 2017, from http://hplusmagazine.com/2011/01/27/pei-wang-path-artificial-general-intelligence/
  25. The Race For AI: Google, Baidu, Intel, Apple In A Rush To Grab Artificial Intelligence Startups. (n.d.). Retrieved October 9, 2017, from https://www.cbinsights.com/research/top-acquirers-ai-startups-ma-timeline/
  26. Grace, K., Salvatier, J., Dafoe, A., Zhang, B., & Evans, O. (n.d.). When Will AI Exceed Human Performance? Evidence from AI Experts. Retrieved from https://arxiv.org/pdf/1705.08807.pdf
  27. Goertzel, B. (2014). Artificial General Intelligence: Concept, State of the Art, and Future Prospects. Journal of Artificial General Intelligence, 5(1). https://doi.org/10.2478/jagi-2014-0001
  28. A play on the title of Philip K Dick’s famous science fiction novel, Do Androids Dream of Electric Sheep?