17/08/11

TABLE OF CONTENTS

Chapter One: Introduction

Chapter Two: From Artificial Intelligence in General to Evolution, Embodiment, and Artificial Life

Chapter Three: Developing Differently from Animals / drawing from human evolution / forming social groups

Chapter Four: Architecture

Chapter Five: Experiment Plan and Results

Chapter Six: Discussion

Notes


Project

[ ] Write Summary A

[ ] Write Summary B


Summary of the Main Ideas I Went Through

The following is the progression of ideas that I followed since I started my PhD. I wrote down the twists and turns, dead ends and backtracking that I went through in order to help me with my thinking. In the final version I will leave these out.

Legend

Threads I will include.

Threads I will leave out.

I felt that there was something missing from AI.

Why can’t AI keep learning? Keep adapting? Be autonomous? Have human-like behaviour? Have human-like intelligence? How can we develop something that can develop human-like behaviour on its own?

I had an idea to ground symbols in needs, that our understanding depends on our needs. But I didn’t know what to do beyond this.

I looked at AI in general.

I looked at all sorts of approaches to AI, including case-based reasoning, expert systems, neural networks, fuzzy logic, genetic programming, hybrid systems, embodied AI, game AI, natural language processing, project planning, crowd simulation, robotics, and more.

I read What Computers Still Can’t Do by Hubert L. Dreyfus (1992), which had a big influence on my thinking. Finally, here was someone who saw similar problems in AI, who traced them to their roots, and who could clearly articulate them.

Dreyfus talked about the problems with cognitive simulation, semantic information processing, relevance, and context. He then discussed the false biological, psychological, epistemological, and ontological assumptions underlying persistent optimism in AI research, and suggested ways to overcome the problems by using the role of the body, situations, and human needs.

I had a bunch of theory and I could look at existing AI systems and see their limitations but I didn’t know what to do next.

I decided to look at language in order to put a finger on a hard-to-define characteristic, a human-like ability for context, relevance, and learning, that narrow AI finds challenging. I could also use this time to see whether the problems that Dreyfus discussed were still there. Finally, I thought I could make a small contribution using the following idea.

AI lacks common sense. There are commonsense knowledge databases, and these can help answer questions. Can AI use question answering to improve its commonsense knowledge database in a positive feedback loop?
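As a rough illustration of that loop (not any real system's design: the knowledge-base format, the `answer` function, and the toy facts below are all made-up assumptions), a minimal sketch might look like this:

```python
# Hypothetical sketch of the positive-feedback-loop idea: a question-answering
# system backed by a commonsense knowledge base, where answers inferred from
# existing facts are written back into the knowledge base so later questions
# can build on them. Illustrative only; not modelled on Cyc, WordNet, etc.

def answer(question, kb):
    """Answer a (subject, relation) question by direct lookup, or by a
    one-step inheritance inference through an 'is-a' link."""
    subject, relation = question
    if (subject, relation) in kb:
        return kb[(subject, relation)]
    parent = kb.get((subject, "is-a"))
    if parent and (parent, relation) in kb:
        return kb[(parent, relation)]
    return None

def qa_loop(questions, kb, rounds=3):
    """Each round, newly inferred answers are fed back into the knowledge
    base -- the 'positive feedback' -- enabling further inferences."""
    for _ in range(rounds):
        for q in questions:
            a = answer(q, kb)
            if a is not None and q not in kb:
                kb[q] = a  # write the answer back into the knowledge base
    return kb

kb = {
    ("sparrow", "is-a"): "bird",
    ("bird", "can"): "fly",
    ("bird", "is-a"): "animal",
    ("animal", "needs"): "food",
}
questions = [("sparrow", "can"), ("sparrow", "needs"), ("bird", "needs")]
kb = qa_loop(questions, kb)
```

Note how the question about what a sparrow needs is only answerable in a later round, after the loop has first written "birds need food" back into the knowledge base: the feedback is what lets the system climb beyond one-step inference.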

I looked at non-evolutionary approaches to AI. I looked at question answering, information extraction, information retrieval, document summarizing, systems that claimed to have a positive feedback loop, and more.

I looked closely at how those systems worked and I felt that there were things missing. I felt that there was a need for a cognitive architecture.

I looked closely at cognitive architectures and felt that there were things missing here as well.

I didn’t know how to go forward, so I went back to Dreyfus and his ideas about how to overcome some of the problems in AI.

I followed the ideas about bodies, needs, and situations, and read How the Body Shapes the Way We Think by Pfeifer and Bongard (2007). I came across the idea of embodied AI modifying their environment to help them achieve their goals, a process called scaffolding. Changing the environment includes evolving communication. Suddenly, my view of embodied AI changed from programming movement to using the body to shape behaviour.

Plotkin put forth the idea that life is a knowledge-gaining process and that this happens at three levels: genetic, personal, and cultural.

Language was a key behaviour to have in order for cultural evolution to occur.

I looked at embodied evolutionary approaches to developing language. The idea we had was to extend the ability of systems to acquire concepts, to keep building concepts upwards. More specifically the acquisition, use, communication, and re-organization of concepts both tangible and intangible.

I wanted language to evolve further, and I thought about different ways to make that happen. I could increase the number of types of food, for example by having varying levels of positive or negative nutrition.

However, something didn’t feel right. The limitations were still there. I was dissatisfied with artificial selection and situations. I wanted to do without artificial selection and situations. I realised that work in this area was actually narrow embodied evolutionary AI.

I looked at artificial life. I looked at non-narrow embodied evolutionary approaches to AI.

Very little has been done in non-narrow embodied evolutionary AI.

I considered getting non-narrow evolutionary AI to evolve language since I’ve been focusing on language.

I looked at how to create pressure for language to evolve. There were many things we could try. The harder we made things, the more likely the robots would have to cooperate to survive.

My focus was on the things that could increase the need for interaction and dependence. My thinking was that if we could create the conditions for cooperation to occur, then that could compensate for the removal of artificial selection and situations and allow communication to emerge. How do we get social groups to form in the absence of artificial selection and situations?

In a discussion about changing from NLP systems to EAI systems, I was asked about the difference between my agents and animals. What I had didn't seem to be enough.

My creatures had bodies, were engaged in a form of life, and were situated in an environment. So are animals. Animals face similar challenges, including having to find food and evade predators. They have similar physical needs and capabilities, including needing food and being able to move. However, animals have not developed the type of intelligence exhibited by humans.

Why then, would my system evolve any further?

I looked at what caused human behaviours to evolve differently to animal behaviours. I came to the idea that to get human-like behaviours to emerge, we need to create conditions that require them to emerge.

The following are some ideas that first come to mind to answer the question of why human behaviour is different from animal behaviour. Humans had to acquire the abilities described above because humans found it harder to get food and harder to avoid danger than animals. Humans could acquire abilities because they had bigger brains, better vocal control, and toes and thumbs, so there was more they could learn. However, there is more to it than that. Research shows that early human physical environments were not that challenging, and that pressure came mainly from the social world.

Human evolution research put forth that the key differences were bipedalism, hands, vocal systems, brain sizes, and social groups. There’s the idea of social groups again.

Bipedalism frees the hands.

Hands allow food to be brought back and shared with the social group. Hands allow experimentation with objects in the environment.

Social groups create pressure for learning. The need to learn creates pressure for social groups. Human brains got bigger faster than the evolutionary norm. This growth was likely driven by pressure from the social world.

Human vocal systems were more suited to language.

Language required a big evolutionary jump and was likely the byproduct of another adaptation.

We want to go beyond animal behaviour. At the same time we don’t want alien behaviour.

Social groups play a key role, arguably the biggest role, in driving the development of intelligence and communication.

What constraints contribute to social group formation?

Around here I came across a BBC documentary on the relationship between rates of aging, the role of elders, role specialisation and social groups.

Bipedalism, hands, vocal systems, and brain sizes do theoretically encourage social groups to form. However, there were experiments that had bipedalism, hands, vocal systems, and brain sizes, yet no social groups emerged. So I decided to select some constraints that more directly encourage social groups, namely maturation and gender. Following Humphrey (1976), I focused on embodiment constraints rather than physical constraints to encourage social groups.

My hope was that as the robots adapted to the constraints that create pressure for social groups to form then language could emerge.

I came across Boyd and Richerson after I submitted my paper. Again the idea of social groups came up. Communication likely did not emerge simply because it was useful, because the evolutionary gap for doing so is too large. Communication likely emerged as a byproduct of another adaptation. Humans have to live in social groups, and to do so they had to learn to predict the behaviour of others. To predict the behaviour of others they had to predict their own behaviour. This then created the foundation for observational learning. Plotkin and Humphrey also put forth these ideas.

Note that Boyd and Richerson focus on observational learning and cultural evolution rather than communication but it’s close enough.

This shows the importance of pressure from the social world on developing sophisticated behaviours and intelligence.

As I did the practical work I felt that there was a good chance that it would be a while yet before language would emerge.

I decided to scope my work back from achieving specific behaviours to instead seeing whether the robots could adapt to the constraints that cause human behaviours to evolve differently from animals.

Note that my original goal was always human-like behaviour and intelligence.

Why focus on human?

Note that I focused on the features that caused human behaviour to develop differently from animal behaviour because of the question: how could I expect my robots to evolve differently from animals when they had no differences to cause their behaviours to evolve differently?

My original goal was always to head towards human-like behaviour and intelligence.

We want to go beyond animal-like behaviours. We want to avoid alien behaviours.

Why no longer focus on language?

My original goal was always to head towards human-like behaviour and intelligence.

Language was just a way to explore ideas.

I’m not sure whether language will emerge so I am scoping back to focusing on whether robots can adapt to the constraints that play key roles in getting human behaviours to evolve differently from animals.

The big idea is that by adapting to human-like constraints, the robots will develop human-like behaviour. Another way of putting it is that to get human-like behaviours to emerge, we must create the conditions that require them to emerge. The second way of putting it is better.

Why these constraints?

These are among the key constraints that cause social groups to form.

Social groups play a key role, arguably the biggest role, in driving the development of intelligence and communication, and in human behaviours evolving differently from animals.

Bipedalism, hands, vocal systems, and brain sizes do theoretically encourage social groups to form. However, there were experiments that had bipedalism, hands, vocal systems, and brain sizes, yet no social groups emerged. So I decided to select some constraints that more directly encourage social groups, namely maturation and gender. Following Humphrey (1976), I focused on embodiment constraints rather than physical constraints to encourage social groups.

What does the work show?

It shows that the robots can cope with the first set of constraints, allowing us to move on.

The big idea is that by adapting to human-like constraints, the robots will develop human-like behaviour. Another way of putting it is that to get human-like behaviours to emerge, we must create the conditions that require them to emerge. The second way of putting it is better.

What can the literature support?

Work in non-narrow embodied evolutionary AI has only been with very simple embodiments and environments.

It is unclear how to integrate and extend existing work.

There exists in the field of human evolution, theory that can guide how to extend existing work. There are constraints that are key to human behaviours developing differently to animal behaviours.

The constraint of living in social groups is one of the biggest factors. Constraints that encourage social group formation include maturation and gender.

What is the question, hypothesis or argument?

Can we use natural selection and situations to evolve robots that can adapt to the constraints that led to human behaviours developing differently from animals?

We can use natural selection and situations to evolve robots that can adapt to the constraints that led to human behaviours developing differently from animals.

What is the contribution, novelty, or uniqueness?

Work in non-narrow embodied evolutionary AI has only been with very simple embodiments and environments. Anything beyond that is a contribution to show that it is possible.

What is the problem description?

To evolve human like behaviours, we need to create the conditions that require them.

We don’t have a clear process to evolve non-narrow embodied evolutionary AI further. We want it to acquire a characteristic rather than to do a task.

We don’t know what the conditions are. We don’t know whether robots can adapt to those conditions.

We need a good order in which to explore the conditions.

What is the aim?

To see whether the robots can adapt to the selected constraints.

What is the scope?

As many of the constraints as time allows.

How do the empirical results connect up with the concepts?

We are focused on whether the robots can adapt to the constraints we’re imposing. Each constraint that we can show the robots can adapt to opens the way to the next set of constraints.

We are focusing on whether the robots can adapt rather than whether robots can develop specific behaviours.


Summary of the Main Ideas I Want to Present

Premise and Work so far

Evolve differently

Importance of social

How social evolve

Can adapt to constraints?

Next set of constraints?


Chapter One: Introduction

Problem Statement

+

Aim

+

Scope

+

Overview

+

What's your PhD about?

It's about getting evolutionary and emergent embodied artificial intelligence in natural situations and under natural selection to develop behaviours, especially human-like behaviours, further by using ideas from human evolution and evolutionary psychology.

What's your main point or argument?

+ Given the premise, we want human-like rather than animal-like behaviour to evolve. How will they evolve differently? We need to focus on the things that help humans develop differently from animals.

+ Given the premise, we could do many things. How do we decide? What we should do is use ideas from human evolution and evolutionary psychology to help us select constraints.

+ Given the premise, we could do many things. The physical world provides only so much pressure. The social world was the one that provided the key pressures. The question is then how do we create pressure to form the social world?

How's this different from approaches that grow?

Embodiment, simulated robots rather than abstracted, simulated sensors rather than abstracted

Artificial life

Natural selection, also implies there are populations of robots

Natural situations

Neural networks

Intra and inter-generation learning

Focus on human-like rather than alien-like embodiments and environments

How can we take what's been done further?

We can do so by introducing key constraints. The question is what constraints should we introduce? A better question is how do we select constraints to introduce?

How can we integrate what's been done?

What researchers have done has been slices of the whole.

We can integrate by selecting embodiment and environment constraints.

We can integrate by considering the environment of evolutionary adaptation.

There is a spectrum of behaviours. On one end are behaviours such as walking, pursuing, and evading. On the other end is behaviour such as finding a circle and occupying it with one other robot. The first type of behaviours occurs in what I call natural situations. The second type of behaviour occurs in what I call artificial situations.

In a simple simulation, we can develop evolutionary reasons for the second type of behaviour without having to use artificial selection or artificial abstractions. We can structure the environment to produce food when robots occupy a circle with one other robot. An artificial abstraction would be picking up the robots who successfully occupy a circle with one other robot, putting them in a smaller space so that they reproduce, and then putting the child robot back into the world.
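To make the contrast concrete, here is a minimal toy sketch of the natural-situation version (all numbers, field names, and functions are illustrative assumptions, not from any actual experiment): the environment itself produces food when exactly two robots occupy the circle, so the evolutionary reason lives in the world rather than in a designer's selection step.

```python
# Toy sketch of a 'natural situation': food appears in the world when exactly
# two robots occupy a circle together. No experimenter plucks out successful
# robots to breed them; robots with energy survive and reproduce in the world.
import math
import random

CIRCLE_CENTRE = (5.0, 5.0)
CIRCLE_RADIUS = 2.0
FOOD_PER_SUCCESS = 1.0  # energy the environment yields per successful pairing

def in_circle(pos):
    """Is a position inside the target circle?"""
    return math.hypot(pos[0] - CIRCLE_CENTRE[0],
                      pos[1] - CIRCLE_CENTRE[1]) <= CIRCLE_RADIUS

def reward_pairs(robots):
    """The environment produces food only when exactly two robots occupy
    the circle together -- the structured-environment idea from the text."""
    occupants = [r for r in robots if in_circle(r["pos"])]
    if len(occupants) == 2:
        for r in occupants:
            r["energy"] += FOOD_PER_SUCCESS

def step(robots):
    """One world tick: robots wander, then the environment rewards pairs.
    Selection stays natural because survival and reproduction depend on
    energy gathered inside the world, not on a designer's fitness function."""
    for r in robots:
        r["pos"] = (r["pos"][0] + random.uniform(-1, 1),
                    r["pos"][1] + random.uniform(-1, 1))
    reward_pairs(robots)
```

The artificial-abstraction alternative would replace `reward_pairs` with code that lifts the successful occupants out of the world, breeds them in a separate space, and reinserts the child, which is exactly the designer intervention the natural version avoids.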

Why does it have to be embodied and situated?

It's because our focus is on developing embodied and situated agents further.


Chapter Two: From Artificial Intelligence in General to Evolution, Embodiment, and Artificial Life

This section is about what embodied artificial intelligence systems have been able to do. This section can also discuss the motivation for embodied artificial intelligence engaged in a form of artificial life. This is potentially a big section.

The main point is to get to our starting premise, which is how can we get evolved and emergent embodied artificial intelligence under natural selection and in natural situations to develop further.

Subsections

+ The Current State of Research

+ Humanoid and Non-Humanoid Embodiments

+ Natural and Artificial Selection

+ Natural and Artificial Situations

+ Artificial Life

MOTIVATION

I started research because I wanted to have better AI. I wanted AI that could keep adapting, that can use what it has learned to learn more.

ARTIFICIAL INTELLIGENCE IN GENERAL

I looked at all sorts of AI approaches and applications, including case-based reasoning, expert systems, chat programs, fuzzy logic, genetic programming, neural networks, hybrid systems, and more.

I saw problems such as the knowledge bottleneck, commonsense knowledge, background knowledge, context, symbol grounding, brittleness, narrowness, the frame problem, and limited adaptation potential appear over and over.

Note that this section will become larger as I discuss key aspects of the theory that I read.

THEORETICAL PROBLEMS FACING ARTIFICIAL INTELLIGENCE

I then looked at these problems in detail. I was trying to understand the problems in the way of AI approaches.

There were many things AI could not cope such as when you reach the bounds of the application which were not far. AI had made four assumptions, biological, neurological, epistemological, and ontological.

Some options to explore included looking at the role of the body, needs, and situations. The theory was informative but I did not know how to turn theory into practice.

One of the things I did keep in mind was giving symbols meaning. There is a whole theoretical discussion on whether you can give symbols meaning by using other symbols. I was exploring the idea of at least priming the pump, where you provide the AI with enough starting knowledge that it can keep going. The theoretical discussions I was reading were quite old, so I had to explore whether things had changed. In addition, it gave me something to do, because I didn’t know what to do with my theoretical ideas.

DO THE THEORETICAL PROBLEMS STILL APPLY?

There seemed to be two options from here: tailor AI to domains further, or improve AI technology. Tailoring AI to domains usually means focusing on aspects other than open-ended adaptation. So I chose the second approach.

Up to now I had not chosen an approach or application to focus on. I had been hesitant to choose an application because the ones I could think of either could be done in narrower ways or sounded too grandiose: walk, talk, think, and learn. I prefer to say that I’m focusing on the property of adaptability, and that applications depend on how far we can get AI to adapt.

My focus was on the specific property of open-ended adaptation. I then thought about applications where narrower approaches performed poorly and about applications that did not sound too grandiose.

An application that fit the requirements, that was easy to understand, and where performance limitations and technology limitations were easy to see, was natural language. I then looked at the problems facing natural language. One of the largest problems was the lack of commonsense knowledge in both structured systems and statistical systems. Statistical systems had benefited from access to large collections of natural language documents. The narrower the application in natural language, the more different the problems and requirements. Narrow applications include search, translation, question answering, document retrieval, document summarising, chatting, voice recognition, and more.

The thing that helped me choose among the narrow areas of natural language processing was their ability to adapt their knowledge base. Some approaches were algorithms with a lesser ability to adapt their knowledge base, and others were architectures with more ability to do so. My ideal was a virtuous loop, ideally with no human interaction, where the AI could use question answering to build its own knowledge base. A number of structured knowledge bases had recently become available, such as Cyc and LifeNet, in addition to ones that had been used for a long time, such as WordNet. Note that this covered commonsense knowledge but not necessarily background knowledge, which we take for granted even more than commonsense knowledge.

KNOWLEDGE BASES, AN ATTEMPT TO SOLVE THE THEORETICAL PROBLEMS

So I had a look at natural language approaches that used knowledge bases. I looked at them but even the ones that were architectures boiled down to simple algorithms. There was an innate limitation of adaptation potential.

I discussed the limitations of such systems in terms of association and context, but I didn’t feel that what I was saying was clear enough. So I started thinking about integrating some kind of thinking system.

Again, when you boiled down cognitive architectures, they were simple algorithms. There was an innate limitation of adaptation potential. Examples of cognitive architectures include SOAR, CLARION, and PSI.

I’m not knocking these architectures; it’s just that I’m trying to move towards a different goal and I’ve got different priorities.

COGNITIVE ARCHITECTURES, A FURTHER ATTEMPT TO SOLVE THEORETICAL PROBLEMS

At this point, after investigating where things had progressed since the time of the theoretical discussions, I can’t say that there had been much progress. There has been growth in statistical techniques for narrow applications, but not in the area of open-ended adaptation. So I went back to AI theory on limitations, causes, and assumptions. This was early 2008, which was the second year of my PhD.

So with natural language in the forefront of my mind, I looked at bodies, needs, and situations. I still had theory but not a way to turn it into practice.

Again I tried to express the limitations of disembodied, non-artificial-life, and non-situated approaches, but I still felt that I was not clear enough.

REEXAMINING THE THEORY FOR A DIFFERENT WAY FORWARD, THE ROLE OF THE BODY, NEEDS, AND SITUATIONS, EMBODIED LEARNING, EVOLUTIONARY LANGUAGE

So I started looking for references to support my hunch about the importance of the body. I came across ideas such as scaffolding, labelling, and inter-generation and intra-generation learning. It was around here that I changed my opinion about embodiment. My original understanding was that embodiment was just about developing an AI that can control a body. A lot of embodiment is about that, but there is also the idea, from the other direction, of using embodiment to guide association, context, and adaptation.

This looked like a way that AI could acquire concepts. I thought about this further and felt that we can be more specific and have tangible and intangible concepts.

I did consider the idea of using embodiment with knowledge bases and to use embodiment to prime the pump. I didn’t know how to use embodiment with a structured knowledge base, so I considered using embodiment to acquire concepts and to keep acquiring them.

I looked more and more into evolutionary language. I looked at both embodied and dis-embodied approaches. My bias was towards embodied approaches from all of my reading. It was interesting that more practical papers regarded embodied approaches as being more sophisticated.

A KEY IDEA, EVOLUTION AT THREE LEVELS, GENETIC, PERSONAL, AND CULTURAL

Around this period, my supervisor, Andrew, was reminded of the idea of cultural evolution. This was a powerful idea, essentially adaptation at a third level. I looked at work by Henry Plotkin for more detail on this idea. Critical to this level of adaptation was communication, or at least observational learning. This provided clues about how to work towards higher-level intelligence, beyond physical behaviour, and provided support for doing so.

BARRIERS TO EVOLUTION, ARTIFICIAL SELECTION AND SITUATIONS

As I looked more at evolutionary language, I was struck by the idea that artificial selection and situations limited more than they helped. I continued looking and saw research that did without artificial situations but still kept artificial selection. Even though that change was small, it came with costs, such as not being able to do intra-generation learning. I then came across work that used natural selection and natural situations, but there was no language.

There was a period where I thought about using more complex artificial selection and situations, but it didn’t feel right. I can include examples. It meant missing out on other behaviours that I also wanted.

Note that I now had a way to integrate the body and situations. By replacing artificial selection with natural selection or artificial life, I had a way to integrate needs.

Note that this section will become larger as I discuss the types of language evolution systems.

HOW DO WE DO WITHOUT ARTIFICIAL SELECTION AND SITUATIONS?

Around this period, my supervisor asked another interesting question. The embodied AI could sense, move, and act. Without artificial selection and situations, why would they evolve differently from animals? This led to thinking about human evolution and evolutionary theory, which is the subject of chapter three.

I could choose embodiment and environment features that are tough enough to create pressure for the AI to form language. However, there are two points to consider. The first is that I don’t want to head into alien embodiment and environmental conditions. That would open up a whole new can of worms. The second is that research (Humphrey) indicates that adaptation pressure didn’t come from the physical world, in other words just the embodiment and the environment, but rather from the social world.

So the question now changed from how do I get further language evolution to how do I get the AI to form social groups?

STARTING PREMISE

The main point is to get to our starting premise, which is how can we get evolved and emergent embodied artificial intelligence under natural selection and in natural situations to develop further.


Chapter Three

Possible Titles

Chapter two was about getting from artificial intelligence in general to non-narrow embodied evolutionary artificial intelligence. Chapter three is about getting from non-narrow embodied evolutionary artificial intelligence to whether robots can adapt to the first of a series of constraints that shaped human behaviour. Chapter four is about the architecture for the experiments and chapter five is about the experiments and the results. Finally, chapter six is a discussion about the results.

Chapter two begins by discussing artificial intelligence in general and then its limitations. It then moves to an analysis of the limitations, all the way to the roots of the problems, and then looks at possible next steps. One of the next steps put forth by the AI literature is the use of embodiment. Analysis of the embodied AI literature indicates strongly that, whilst important, embodiment is not enough: embodied AI requires a driving force to get it to act, and each type of driving force has its own implications to consider. After looking at the literature on driving forces, artificial life provides a way forward. This also fits with the ideas put forth during the discussion on the roots of the problems in AI. The gist is that embodied AI requires key starting dispositions and a non-artificial motivation to prevent the development of limited behaviour. Analysis of the artificial life literature reveals that there are various types, which we can broadly separate into those using artificial or natural selection. Artificial selection relies upon criteria put forth by a designer, whilst natural selection models the processes in the natural world. Artificial selection has key problems, whilst natural selection has the potential to overcome those problems. So chapter two ends by looking at embodied artificial life systems with natural selection. Looking at what has been achieved in this area reveals that, indeed, not very much has been done at all. This raises the question of what we should do next to raise the intelligence of the entities in these simulations.

Chapter three begins with the question of what’s next. The general idea that we will put forth is that to evolve robots with human-like behaviours we have to create conditions that require those behaviours to evolve, and we can do this by modifying the robot embodiment and the environment. Whilst this is intuitive for some people, others require stepping through the process of answering obvious questions, objections, and alternatives, dealing with these one by one to reach the idea of evolving human-like behaviours by creating conditions that require them.

How can we raise intelligence, especially human-like intelligence? There are many types of intelligence, including animal-like and alien types, but the focus of this thesis is human-like intelligence, in order to be able to do more and more of the tasks that humans have had to do whilst possessing an awareness of what’s important in the human world. In theory, we could recreate human evolution in a simulation, and that should result in steps towards the development of entities with human-like intelligence. However, whilst this could work in theory, it won’t in practice. We cannot simulate every detail of the evolution of the human species with the technology or the limited knowledge that we currently have.

How then can we raise intelligence in these organisms? The underlying intuitive idea is that challenges push intelligence (we shall later see that whilst intuitive, these ideas are not quite right, or rather only partially correct). If we follow this line of thought, it brings us to the question of which challenges. Should we make food more scarce, predators more bountiful, make nutritious and poisonous food harder to differentiate, add hair to the forearms of the creatures, or implement a series of mazes and abstract challenges? It is possible that with enough challenges an intelligent being could emerge. Possibly. We could look at theories of human cognitive development, such as those put forward by the developmental psychologist Piaget, and attempt to get the organisms to develop each of the stages, one by one. At the same time, theories in this area have woefully large gaps, are difficult to put into practice for the development of intelligent beings, and it is unclear whether an intelligent being would indeed emerge. The bottom line is that this approach is too vague.

Perhaps we can instead focus on the coarser capabilities that we wish the robots to develop, rather than going through endless possible changes we could introduce or trying to re-create the development of subtle cognitive stages. For example, should we strive to develop locomotion, navigation, tool use, or any number of other behaviours? Locomotion is probably not appropriate. It has largely been done, although admittedly with significantly different technologies and approaches. The link between locomotion and intelligence is also unclear, even though locomotion is one of the primary purposes of the brain: plants don’t have brains.

What if instead we focused on communication, cooperation, and culture? This feels closer and better. These behaviours are among the key elements of human behaviour and have enabled human beings to acquire and share behaviour at startling rates compared to what happens in the natural world. Indeed, culture itself has been put forth by evolutionary biologists such as Henry Plotkin as being another level of evolution on top of genetic evolution and personal learning.

So the question is how do we get these behaviours to evolve? What changes can we make to create pressures for these adaptations to occur? It seems straightforward that communication, the foundation for cooperation and culture, should emerge when conditions are tough enough to make the development of communication worthwhile. We could enable the entities to make and hear sounds or to display and perceive lights. We could introduce predators that make the development of communication for early warning worthwhile, or perhaps increase the scarcity of food so that group survival requires coordinated behaviour facilitated through communication.

This then raises a key question that will cause our discussion to take a radical turn. The question is as follows. How can we expect human-like behaviours to emerge from animal-like or indeed alien conditions? What is it that caused human-like behaviours to form? It is theoretically possible that some form of intelligence could emerge from non-human-like conditions, but that’s the rub: it would lead to behaviour that is non-human-like and instead animal-like or quite alien.

So instead we pose a different question: what are the conditions that shaped the development of human-like behaviour? We shall see as we investigate this avenue of thought that our earlier supposition, that communication will emerge when conditions are tough enough, is not quite correct according to our main models, what happened in the animal and human worlds. It turns out that the conditions that led to the development of human-like behaviour in particular also provide clues to the development of communication in general. In brief, the idea is that communication is a by-product of other adaptations that arose in order for entities to exist in increasingly larger and more complex groups. I’ve written more about this and I’ve detailed it in the ICONIP paper from eons ago. Key researchers in this field include Plotkin, Humphrey, Boyd, and Richerson.

Let us return to the conditions put forward in the fields that study the development of human behaviour. The key features most often put forth by prominent researchers such as Campbell are bipedalism, manipulators, large brains, flexible vocal systems, and having to exist in social groups. The first four features are relatively simple to implement as they are structural features that one can explore in physical experiments or computer simulations. They’ve also been explored, albeit in ways that are vitally different to what we plan (these vital differences include the use of Hebbian neural networks, natural selection, and the combination with the other constraints; for example, there have been experiments on bipedalism but not large brains, large brains but not manipulators, manipulators but not Hebbian neural networks, and so forth).

The change in our focus towards whether entities can adapt to the key constraints has a key benefit. The pursuit of intelligence is difficult because the concept of intelligence is nebulous. One might instead turn to the pursuit of human-like behaviours, but again, that too is a nebulous concept. Instead, our focus on whether robots can adapt to the conditions that shaped human behaviour provides us with a more straightforward way of approaching the intertwined and difficult-to-describe properties that we seek.

Indeed, we have a working line of reasoning for following the hunches of many researchers and speculators in the field: that embodiment, and in particular human-like embodiment, plays a key role in the development of intelligence. To put things in a more colloquial manner, we have a line of reasoning that connects walking and talking. We have now provided a line of reasoning that leads in steps from the problem we faced at the start of this chapter to the intuitive response, and we’ve endeavoured to take those who don’t find the intuitive response all that intuitive through the common questions, objections, and alternatives.

We now have a way forwards: we can work through the features that the human evolutionary sciences have put forth as the keys that shaped human behaviours. As we introduce these features, we can observe the suitability of Hebbian neural networks and increase their parameters, and especially their sizes, appropriately. No longer is it a case of designing a cognitive architecture based on our current understanding of cognitive psychology; it is a matter of evolving a cognitive architecture through the more predictable process of introducing features, both in entity embodiment and in the environment, that we already know of.
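As a sketch of the kind of intra-generation adaptation that Hebbian neural networks provide, the following is a minimal illustration of a Hebbian weight update. The learning rate and decay term here are assumptions for illustration, not the parameters used in the actual architecture.

```python
def hebbian_update(weights, pre, post, lr=0.01, decay=0.001):
    """One Hebbian step: strengthen the weight between co-active pre- and
    post-synaptic neurons, with a decay term to keep weights bounded."""
    for i, post_act in enumerate(post):
        for j, pre_act in enumerate(pre):
            weights[i][j] += lr * post_act * pre_act - decay * weights[i][j]
    return weights

# Example: 3 input neurons, 2 output neurons, weights start at zero.
w = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
w = hebbian_update(w, pre=[1.0, 0.0, 1.0], post=[0.5, 1.0])
```

The point of the sketch is simply that weights change as a function of the robot’s own activity during its lifetime, which is what makes turning learning on and off (discussed in Chapter Five) a meaningful manipulation.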

The goal is to implement a set of constraints with a size that is appropriate for the scope of this thesis, as a proof of concept that opens the door to subsequent constraints. The goal is also to discover and understand challenges in this novel approach if and when we encounter them.

Over the next few sections of this chapter we will also discuss the strengths and weaknesses of this approach and lay out the roadmap for further research.


HOW CAN WE GET SOCIAL GROUPS TO FORM?

This section and the following ones will go in the next chapter.

I then looked at the evolution of language and of human behaviour, with a special focus on why humans formed social groups. I looked at what embodiment and environmental features set humans apart. Apart from our physical features, including being bipedal, having manipulators, variable vocal systems, and larger brains, we live in social groups.

I originally thought that if we increased the challenge of the environment enough then language would naturally result. However, there is research that suggests that other factors need to be there first and that language is the by-product of another adaptation. Evolutionary psychological theory holds that the adaptation that had to occur before language was the ability to predict the behaviour of others. A powerful idea that links to this is that before the ability to predict the behaviour of others must come the ability to model the behaviour of oneself, which requires the ability to develop a concept of self.

I remember that I also wrote something here about empathy and consciousness but I don’t remember exactly what it was about and where it is. The ideas emerged from a discussion with the engineering machine consciousness group.

SOME IDEAS TO GET SOCIAL GROUPS TO FORM

Anyway, this meant that I needed to come up with ways to get the AI to form social groups without using artificial selection or situations.

I came across research on human development in comparison to ape development. A few key points came up. Different maturation and fertility rates led to different development and the emergence of different roles. Humans have one of the longest development periods of all animals, but humans have a shorter weaning period than apes, meaning that other members of the group can take over caring for the child earlier. In addition, the inclusion of older members and members who are no longer fertile results in the emergence of additional roles which benefit the group. All of this encourages cooperation.

Humphrey (1976) also poses the idea that social groups form to enable the transference of knowledge.

Maturation along with other features such as gender all play roles in encouraging social groups.

An important point is to differentiate between types of cooperation. The first type of cooperation is where the participants are interchangeable. Interchangeable cooperation includes willy-nilly mating: the mating occurs and it doesn’t really matter who it is with, so there’s no need for a longer bond. This removes much of the complexity of real social groups, where short-term and long-term behaviour matter, where trust needs to build, and where an understanding of others needs to occur. The second type of cooperation is long term.

CAN THEY SURVIVE THE FEATURES THAT WE’VE INTRODUCED TO ENCOURAGE SOCIAL GROUPS?

We know that the AI can adapt to humanoid embodiments. We don’t know how to encourage the particular type of social groups that we’re after. We’re experimenting with features that theoretically encourage social groups to form; however, we don’t know whether the AI can adapt to those features implemented in the way we have chosen. I estimate that the probability is high that the AI will be able to adapt, but we have to do the experiments first. Once we’ve done those experiments with reasonable parameters to satisfy the scrutiny of trusted peers, we can consider what we should change, what subsequent features we should implement, and whether we should continue this line of research at all.

NOTES

There are certain behaviours, such as communication, over others, such as walking, that we want to take further. Behaviours such as communication have only progressed a little. Note that communication has not emerged under the premise that we have chosen.

This all leads to the question of how we can expect our embodied artificial intelligences to evolve differently to animals that can also move, perceive, and act similarly. There are two related approaches.

The first approach begins with a theory from the field of human evolution. The theory holds that although the physical environment provided adaptation pressure, the social world provided a lot more. Therefore, we should explore the application of social pressure. This leads to the question of how we create social pressure. Again, human evolutionary theory provides ideas concerning this.

The second approach is the reverse of the first approach. It begins with the physical differences that distinguish humans from animals and that have played large roles in shaping human behaviour. The main differences are that we are bipedal, that we have manipulators, versatile vocal systems, and large brains, and that we live in complex social groups.

Therefore, if we are to take things further from our starting premise, we need to get social groups to form. We need to look at the consequences of implementing features such as gender and maturation to get social groups to form. The agents are likely to be able to adapt, but this has not been done before like this and we don't know for sure. Will it result in language? What should be the next action?


Chapter Four: Architecture

Subsections

Genomes

Neural Networks

Embodiment

Simulation Parameters

Graphics, Vision sensors

Physics

Communication system

Incremental introduction of gender and maturation

Testing for Communication

vocabulary frequency

situation association

[Can they adapt to the constraints that lead to social groups and language?]

Robot



Chapter Five: Experiment Plan and Results

PURPOSE - To explain how this chapter connects to the previous and subsequent chapters.

In Chapter One we looked at the problem description, aim, and scope.

In Chapter Two we looked at artificial intelligence in general and progressed to non-narrow embodied evolutionary AI. Not very much has been done in non-narrow embodied evolutionary AI.

In Chapter Three we looked at how we could evolve behaviour further. We identified constraints we could introduce to create the pressures that shaped the development of human behaviour. The question that came out of Chapter Three was whether a population of simulated robots could cope with those constraints.

In Chapter Four we looked at the architecture we will use for our experiments, the architecture into which we will introduce the constraints.

In this chapter, Chapter Five, we discuss how we implemented and tested the constraints and present the results.

In the next chapter, Chapter Six, we will discuss the results in more detail and conclude.

PURPOSE - To reiterate why whether robots can adapt to the constraints is important.

To do.

PURPOSE - To reiterate why these constraints are important.

Why these constraints?

Provide a brief recap.

Remember to distinguish between social and physical pressures.

PURPOSE - To explain what we are expecting to see because the one line explanation, “to test whether robots can adapt to the constraints,” is not clear enough.

What are we testing for?

The question we are trying to answer is whether robots can adapt to the constraints we have chosen. We chose these constraints because they played key roles in creating the pressures that shaped the development of human behaviour, according to the human evolution and evolutionary psychology literature.

Normally we would test for adaptation by introducing a constraint and then observing for changes in behaviour. For example ...

However, testing for adaptation by observing for changes in behaviour is beyond the scope of this project. Observing behaviours is difficult due to the architecture: behaviours can happen either too quickly or too slowly, and we lack the ability to review the behaviour of every robot.

The next option to test for adaptation is to introduce a constraint and observe whether the robot population survives. This option is valid but it depends on the type of constraint. For some constraints, it is clearer whether adaptation had to occur in order for the population to survive.

However, we could also introduce a constraint that either does not exert significant pressure on robot behaviour or where robot robustness in a given environment means that the robots do not have to significantly modify their behaviour in order to survive.

An example of a clearer constraint is replacing wheels with legs for locomotion. In this case the constraint is significant enough to threaten population survival. We can likely say that the robots had to have adapted to walk in order to get food, or else the population would have died out. There are of course artificial situations we could create where this is not the case, such as where food and potential mates keep falling down from the sky, making locomotion unnecessary, but this is unlikely.

An example of a less clear constraint is gender. Robot populations could theoretically persist by taking longer to find a mate. It depends on the robustness of the robot in a given environment.

However, we are not working with constraints whose effects are clear so we have to find a way to show that population survival occurred because of adaptation and not because of robot robustness.

To do that we have two choices. The first choice is that we can show that adaptation occurred because the adaptation mechanism, personal learning, was critical for population survival. The second choice is that we can show adaptation occurred because the constraint was significant enough to threaten population survival if adaptation did not occur.

In the following section we will explain three approaches and why we chose the third one. The first two approaches, turning learning on and off, and having a population with learning and a population without, test whether the adaptation mechanism, learning, was critical. The third approach, gradual and non-gradual constraint introduction, tests whether the introduced constraint was significant enough to threaten population survival regardless of population robustness in a specific environment.
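To make the contrast between sudden and gradual introduction concrete, the following is a minimal sketch of how a constraint's strength could be scheduled over simulation steps. The function name, the linear ramp, and the parameter values are assumptions for illustration, not the scheduling used in the actual experiments.

```python
def constraint_level(step, start, ramp_steps, gradual=True):
    """Strength of a constraint (0.0 = absent, 1.0 = fully applied) at a
    given simulation step. A sudden introduction jumps straight to 1.0;
    a gradual introduction ramps linearly over ramp_steps."""
    if step < start:
        return 0.0
    if not gradual:
        return 1.0
    return min(1.0, (step - start) / ramp_steps)

# Sudden: full strength immediately at the start step.
sudden = constraint_level(1000, start=1000, ramp_steps=5000, gradual=False)
# Gradual: halfway through the ramp, the constraint is at half strength.
halfway = constraint_level(3500, start=1000, ramp_steps=5000)
```

The design choice here is that everything else in the simulation stays fixed; only this single scalar changes between the two experimental conditions, which is what lets us attribute differences in population survival to the manner of introduction.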

PURPOSE - To explain approach one (turning learning on and off) and why we are not using it. To show that we considered other approaches and why we chose the one we did.

Approach One

Get a stable population.

Introduce a constraint.

See how the population adapts.

Does the population adapt?

Repeat the experiment with learning off.

Populations with learning should adapt more often and die out less often.

Populations with learning should adapt more quickly.

Problem One. What if changing weights constantly is part of their behaviour? Does it show that the weight change parameters are too big? If changing weights constantly is part of behaviour, then we would be altering behaviour if we turned learning off.

Test One for Problem One. I still have to check whether changing weights is a necessary part of behaviour; this is the unknown I need to check for Problem One. See whether a population that is stable and has had learning can survive having learning turned off. I don’t know whether a population with learning will remain stable if I turn learning off. If it does, then we can go with the more sophisticated approach of training a young robot to survive and then turning learning off.

Problem Two. Young agents need to have learning on to develop their behaviour. Turning learning off does not make sense because children are born without the memory of their parents. It would only make sense if children were born with memories from their parents, as in some narrow embodied evolutionary AI approaches.

PURPOSE - To explain approach two (having a population with learning and a population without learning) and why we are not using it.

Approach Two

Start a population without learning.

Start a population with learning.

I expect that the population without learning will not become stable.

I expect that the population with learning will become stable.

I can compare a population that starts with learning and a population that starts without learning, and see whether the population without learning survives from the start.

Problem One. I expect that the population without learning will not become stable, so the hard part will be getting a stable population without learning. It may be possible if I keep the simulation running long enough.

Problem Two. Child robots don’t inherit parent robot knowledge that the parent robot gained through personal learning.

PURPOSE - To explain approach three (introducing a constraint normally and introducing a constraint gradually) and why we are using it.

Approach Three

Will the gradual approach be enough?

Introduce a constraint gradually.

The population should decrease and then recover.

Introduce a constraint suddenly.

The population should die out.

What if the population does not die out?

What does the first, second, and third option show?

Personal learning has to occur because …

How do we prevent cheating by tweaking parameters? How do we show that the robots are adapting to the constraints? The easier way might be to observe behaviours, but observing behaviours is hard because things happen either too quickly or too slowly, you don't know which robots to look at, the robots don't live long, and you can't record and play back. Observing behaviours is beyond our scope.

Let's focus on gender first.

What if the population does not die out?

The gradual approach was a way to introduce constraints that would otherwise kill a population. Now I’m also using it to show that adaptation is occurring.

PURPOSE - To explain how we will deal with outcomes not always being the same.

To Deal With Messy Results

Repeat the experiment 10 times with non-gradual introduction of constraints.

Repeat the experiment 10 times with gradual introduction of constraints.

PURPOSE - To explain how we will implement the constraints.

Constraint Implementation

Write about how the constraints are being implemented.

Copy from ICONIP paper.

Initial Constraints

  1. Gender
  2. Maturation
  3. Trade, the need for more than one type of nutrient.
  4. Mystery 1
  5. Mystery 2

Is there a better way to implement maturation?

Older robots could accidentally feed younger agents.

You can't cheat with legs and arms. You can kind of cheat with gender. For example, there just have to be enough robots and they just have to live long enough for the population to keep persisting.

You can cheat with maturation. With maturation, parameters matter.

What is it that I'm looking for? With bipedalism and manipulators it's easy. With gender and maturation it's harder. I need to acknowledge the difficulties with gender and maturation and how I'm countering them.

Will introducing gender lead to an imbalance?

What if robots can persist with gender by randomly moving around?

What if robots can persist with maturation by randomly moving around?

Subsequent Constraints

  • Sleep.
  • Neural network size
  • Vision resolution
  • More vocal flexibility.
  • Knowledge that is easier to learn from others than to discover directly. Body temperature.
  • Fatigue.
  • Increase world size.
  • Refer to early presentations.

Is there a brain size where robots find it hard to recover? Lower and upper limit? Similarly, what about vision resolution?

I remember the mention of hunters and foragers working together. That could inspire ideas for constraints.

PURPOSE - To explain how we will do the gradual and non-gradual constraint introduction.

Constraint Introduction

Write about introducing the constraints gradually.

Copy from ICONIP paper.

PURPOSE - To explain how we expect the population size to react.

Write about expected graphs.

Copy from ICONIP paper.

NOTES

  • I'd love to make the simulation more user friendly. For example, a way to add additional robots.
  • I can write this up and show the unknowns and the options.
  • Do I need to distinguish between genetic, personal, and cultural learning? No. I just have to show that the robots are able to adapt.
  • Bipedalism. Manipulators: need to pick food up and put it in the mouth.

Write implementation

Write introduction

Draw diagrams

Approach Zero

Turn learning off. Demonstrate that learning is actually occurring before introducing any constraints.

What if the change in behaviour is minor and takes a long time to manifest?

Then have to work with a different constraint.

The purpose of each experiment is to show that the robots are able to adapt to the constraints.

The important part is to show that the robots are changing their behaviours.

It is important to show that the robots are adapting because it is easy to design the simulation so that the robots can survive the constraints without changing their behaviour. Elaborate with an example of robots moving around to find food and mates.

We’re testing to see whether the robots can adapt to the constraints. Robot populations can theoretically persist by randomly moving around, eating, and reproducing, so we want to make sure that the robots are not just randomly moving around and surviving. It’s easy to set things up so that the robots can survive without having to change their behaviour; we could then introduce a whole bunch of features and claim that the robots adapted even though the features had no effect.

An example of a criticism we are preparing for.

If a population dips and recovers, is that a clear sign that adaptation is occurring?

Depends on how long it takes. Speed of recovery is important.

A population could shrink, food could become more abundant, the population could expand.

Introduce a constraint.

The population drops.

The population recovers.

What does that show?

I'm not so sure that introducing constraints and looking at population reactions is such a good idea.
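If we do use population reactions, one assumed way to sharpen the dip-and-recovery question raised above is to quantify it: measure the depth of the dip and the number of steps until the population returns to its pre-constraint level, since speed of recovery is the important part. Both the function and the example series are illustrative, not taken from actual experiment data.

```python
def dip_and_recovery(populations, constraint_step):
    """Given a list of population sizes per step and the step at which a
    constraint was introduced, return (dip_depth, recovery_steps).
    recovery_steps is None if the population never recovers."""
    baseline = populations[constraint_step]
    after = populations[constraint_step:]
    dip_depth = baseline - min(after)
    for offset, size in enumerate(after):
        if size < baseline:
            # Population has dipped; now look for the recovery point.
            for rec_offset in range(offset, len(after)):
                if after[rec_offset] >= baseline:
                    return dip_depth, rec_offset
            return dip_depth, None
    return 0, 0  # No dip occurred.

# Example series: stable at 100, dips to 60, recovers six steps later.
series = [100, 100, 100, 90, 70, 60, 75, 90, 100, 105]
depth, steps = dip_and_recovery(series, constraint_step=2)
```

A slow recovery, or no recovery, would then be distinguishable from the shrink-then-rebound pattern that abundant food alone could produce.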

Brain Storm 1

  • Before constraint. Remove robots. Watch recovery.
  • New constraint. Remove robots. Watch recovery.
  • Old constraint. Remove robots. Watch recovery.

Brain Storm 2

  • What if I introduce a constraint, remove some robots and look at the speed at which robots recover?
  • What if I repeat the process of removing robots and looking at the speed at which robots recover?

Introduce gender constraints. Robots now have to take longer to find a mate. If the robots are long lived then they will eventually find a mate to reproduce and the populations will persist. No meaningful change in behaviour.

That’s why we need to establish … We need to show that a change in behaviour has occurred. We don’t want to analyse behaviour because that’s beyond our scope. Instead we want to show that a change in behaviour has occurred because it is the only thing that could have changed and it had to change in order for the population to persist or at least grow. I’m thinking of a situation that requires a change in behaviour to access new resources, which can lead to a population boom.

We can show that a constraint leads to population death. Then we can change the way we introduce the constraint, introducing it slowly and keeping everything else the same, so that the only thing left that can change is behaviour.


Work thus far in the area of non-narrow embodied evolutionary AI has focused on animal-like or alien-like conditions. Consequently, the behaviours that emerged from these conditions have been animal-like or alien-like (low level).

How can we move from lower, animal like behaviours, towards higher, human-like behaviours? It is unlikely that human like behaviours will emerge from animal or alien like conditions. What then are the conditions that we require? Will the new conditions produce the desired behaviours? Fields including anthropology and evolutionary psychology have studied the conditions that shaped human behaviour and we can draw upon these fields.

There are a few problems, however: visible behaviours are the result of the interplay between a large number of conditions. This means that even though a robot can adapt to new conditions, the changes in behaviour are likely to be hard to observe until there are large enough numbers of new conditions. At the same time, we have to show that adaptation is actually occurring, and because we cannot yet use observation, we must endeavour to structure conditions and experiments so that there are other means by which we can show adaptation is occurring.

Therefore, while the long-term goal is to test the HYPOTHESIS that we can use human-like conditions to evolve robots with human-like behaviours, the short-term goal is to show that robots can adapt to these conditions, which have not been explored before, conditions identified by the evolutionary sciences as being key in shaping human behaviour. The CONSEQUENCE of this work is that it opens the door to exploring the next set of conditions identified by the evolutionary sciences.

experiment 1 - larger world

population from scratch, hebbian learning on, sudden introduction, x 5

population from scratch, hebbian learning on, gradual introduction, x 5

ongoing population, hebbian learning on, sudden introduction, x 5

ongoing population, hebbian learning on, gradual introduction, x 5

experiment 2 - gender

ongoing population, hebbian learning on, sudden introduction, x 5

ongoing population, hebbian learning on, gradual introduction, x 5

experiment 3 - maturation

ongoing population, hebbian learning on, sudden introduction, x 5

ongoing population, hebbian learning on, gradual introduction, x 5
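The run matrix above can be enumerated programmatically, which helps keep the bookkeeping honest across the 40 runs. The configuration keys and values below are hypothetical names for illustration, not the simulator's actual parameters.

```python
from itertools import product

# Which population types each experiment uses, following the plan above.
experiments = {
    "larger_world": ["from_scratch", "ongoing"],
    "gender":       ["ongoing"],
    "maturation":   ["ongoing"],
}

# Every (population, introduction style) pairing is repeated 5 times.
runs = [
    {"experiment": exp, "population": pop, "introduction": intro, "repeat": r}
    for exp, pops in experiments.items()
    for pop, intro, r in product(pops, ["sudden", "gradual"], range(5))
]

# larger_world: 2 populations x 2 introductions x 5 repeats = 20 runs;
# gender and maturation: 1 x 2 x 5 = 10 runs each, 40 in total.
```

Generating the list rather than writing it out by hand also makes it cheap to change the repeat count later if the results turn out to be noisier than expected.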



Chapter Six: Discussion

Subsections

    Results analysis

Further research


Experiment Log

Changed minimum food energy from 2048 to 1024 (09/08/11)

Changed maximum food energy from 2048 to 1024 (09/08/11)

Changed rate of adding food to 1 block per 10 seconds simulation time (09/08/11)


Notes

How do I write chapter two? The point of chapter two is to get to my chosen premise, of how we can get evolved and emergent embodied artificial intelligence, under natural selection and natural situations, to develop further.

We need to discuss what researchers have been able to do. That means discussing project after project, and how those projects work.

I also want to discuss the philosophy behind the approach. That means referencing Dreyfus, Brooks, Pfeifer, Nolfi, and more. Is the starting point artificial intelligence in general? It feels too broad. I don't know. At the same time, if I start at my premise, I don't have a whole lot to say. Then the idea should be to work my way to the premise. It feels weird to do things this way, as if I'm doing extra justification. Perhaps I can focus on evolutionary embodied systems, but what then should be the narrative? The more useful question is what was the narrative that I followed to get where I am?

My journey began with my dissatisfaction with AI and my desire for AI to be more. I looked at a whole bunch of approaches to AI, AI applications, and AI problems. One of my challenges was putting my finger on where I felt AI was lacking. This was a more difficult task than I thought. The standard approach researchers took was to solve narrow problems, rather than my approach of following my intuition and going in a more nebulous direction.

Eventually, I came towards an area where AI capability was lacking, which was natural language. I could not figure out how to use knowledge databases to improve AI capability. I did think about using knowledge databases to improve natural language processing and natural language processing to improve knowledge databases.

I looked at how other researchers were using knowledge databases to improve natural language processing and the approaches did not feel right. The approaches used limited algorithms and I felt that there was an important piece missing. The algorithms were too simple, more like a recipe than a cognitive architecture.

That's roughly when I got into the cognitive architecture stuff. Again, when I looked at the cognitive architectures more closely, I felt that there were still important pieces missing. The cognitive architectures seemed to boil down to simple algorithms and although some cognitive architectures were at least a decade old, they could not do very much. I have to add that similar to how I did not know how to extend knowledge database approaches, I did not know how to extend cognitive architecture approaches and I'm not sure that I would want to.

One of my difficulties has been justifying or picking an application or capability without sounding grandiose. That's why I eventually got to the point of picking a premise and working on taking that premise further. Anyway, I've gone too far ahead in my story.

Around this time, I started thinking about evolution, emergence, adaptation, and learning. I looked at all the things researchers had been able to do, and my focus was on embodied systems. The reason for embodiment was to provide a shaping force as AI systems adapted. It seemed obvious to me that having a body solved a number of difficult problems, such as having to figure out ways to shape evolution. This reminds me of something Brooks said, the thrust being that the world is its own best model.

I saw a number of amazing things that evolutionary embodied systems could do, from walking to simple talking. At the same time, I saw the limitations in those systems. You can evolve a system using inter-generation adaptation, but how then do you evolve it to do something else and how do you introduce intra-generation adaptation? You could evolve a walking behaviour and a talking behaviour and then bolt them together but that doesn't feel right. Also, how do you evolve higher intelligence?

Next, I began thinking about which behaviours I wanted to extend. My inclination naturally came back to things that are difficult to program, namely communication. I saw that there had been work on evolutionary language by people in fields including computer science and anthropology. Again, there were things I was dissatisfied with: non-embodiment, non-natural selection, and non-natural situations. Researchers had abstracted their simulations to reduce their unwieldiness, but that meant their simulations were lacking something.

Eventually that led me to researchers in artificial life. Here one could consider less artificial selection and less artificial situations. At the same time, there was still a lot of room for going in the direction of my intuition: evolving embodied systems to acquire advanced behaviours by introducing constraints into the robot embodiments and environments.

The next question seemed to be how to pick constraints from the dizzying number of options to move towards higher intelligence, but that is where this chapter ends and the next chapter begins.

I can sense the structure a little. The tricky part will be moving from one idea to another. Another tricky part will be describing what it was that I initially wanted. I wanted something that could adapt with minimal manual code surgery. The transition from knowledge databases and cognitive architectures to embodied systems was all in keeping with the question of how to use evolution. How can we get systems to adapt and keep adapting?

I can talk about what researchers have done. I can then talk about the difficulties of doing without artificial selection and artificial situations.

I'm trying to figure out what got me down the embodiment route. There was a long period where I felt that embodiment was unnecessary. The change occurred somewhere in my second year. I had spent months looking at AI problems, then identifying where AI struggles, then natural language, then knowledge databases, then cognitive architectures.

Did evolutionary language come first, or did embodiment? I think I moved over to embodiment and human needs to deal with the context problem. This was early 2008. I got frustrated with natural language processing, knowledge databases, and cognitive architectures. I went back and read Dreyfus again. Then I read Pfeifer and Bongard, which had similar ideas.

On page 92, I mentioned my difficulty convincing Andrew about embodiment.

On page 99, I thought about how situated AI can narrow down possibilities for consideration.

On page 101, I mentioned the idea of using knowledge databases to prime the pump. I wondered whether embodiment could prime the pump.

On page 102, I had trouble using knowledge databases with embodied AI.

Refer to PhD Journal, volume 2 page 107. I came across work on embodied systems developing communication.

Later I thought about embodiment being the shaping mechanism.


Chapter 3 Notes

maturity / gender > social groups > language