Evolutionary Robotics

Home and Overview

Jump to a Section:

  • What is Evolutionary Robotics?
  • Artificial Evolution: Survival of the Fittest
  • What is Created During Evolution?
  • Evolving Robot Brains
  • What do the Robots Actually Do?
  • How and Where are the Robots Actually Evolved?
  • Embodied Evolution
  • Evolution in Simulation with Transfer to Reality

What is Evolutionary Robotics?

Evolutionary robotics (ER) is an emerging area of research within the much larger field of fully autonomous robots. One of the primary goals of evolutionary robotics is to develop automatic methods for creating intelligent autonomous robot controllers, and to do so in a way that does not require direct programming by humans. The primary advantage of robot design methods that require neither hand coding nor in-depth human knowledge is that they might one day be used to produce controllers, or even whole robots, that are capable of functioning in environments that humans do not understand well.

Artificial Evolution: Survival of the Fittest

Evolutionary robotics uses population-based artificial evolution (fogel-1966, holland-1975) to evolve autonomous robot controllers (i.e. robot brains) and sometimes robot morphologies (i.e. robot bodies) (lipson-n-2000). Generally, the robots are evolved to perform tasks requiring some level of intelligence, for example moving around in an environment without running into things. The process of controller evolution consists of repeated cycles of controller fitness testing and selection that are roughly analogous to generations in natural evolution.

Evolution is initialized by creating a population of randomly configured robots (or robot controllers). During each subsequent cycle, or generation, each robot controller competes in an environment to perform the task for which the robots are being evolved. Each controller is placed into a robot, and the robot is allowed to interact with its environment for a period of time. Each controller's performance is then evaluated using a fitness function (objective function) that measures how well the task was performed. The controllers of the better-performing robots are selected, altered, and propagated in a repeating process that mimics natural evolution. The alteration process is also inspired by natural evolution and may include mutation and the trading of genetic material. Cycles are repeated for many generations to train populations of robot controllers to perform a given task.
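The generational cycle described above can be sketched in a few lines of code. This is a minimal illustration, not the method of any particular ER experiment: the genome length, population size, mutation rate, and the `evaluate_fitness` callable (which stands in for placing a controller in a robot and scoring its behavior) are all illustrative assumptions.

```python
import random

GENOME_LEN = 20      # e.g. number of connection weights (illustrative)
POP_SIZE = 30
MUTATION_RATE = 0.1

def random_controller():
    # Initialization: a randomly configured controller genome.
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]

def mutate(genome):
    # Perturb a few genes at random, mimicking mutation.
    return [g + random.gauss(0, 0.2) if random.random() < MUTATION_RATE else g
            for g in genome]

def crossover(a, b):
    # Trade genetic material between two parent genomes.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def evolve(evaluate_fitness, generations=50):
    population = [random_controller() for _ in range(POP_SIZE)]
    for _ in range(generations):
        # Fitness testing: score every controller on the task.
        scored = sorted(population, key=evaluate_fitness, reverse=True)
        # Selection: keep the better-performing half as parents.
        parents = scored[:POP_SIZE // 2]
        # Alteration and propagation: fill the population with offspring.
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children
    return max(population, key=evaluate_fitness)
```

Because the parents survive unchanged into the next generation, the best fitness in the population never decreases from one cycle to the next.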

Figure 1. An overview of a typical evolutionary robotics training cycle

What is Created During Evolution?

In the majority of evolutionary robotics work, only the control programs are created and configured by the evolutionary process. These controllers come in a variety of forms, including neural networks, genetic programming structures (koza-ecal-1992), fuzzy logic controllers (hoffmann-ipmu-1996), and simple look-up and parameter tables that relate sensor inputs to motor outputs (augustsson-gecco-2002). There have also been several examples of evolvable hardware circuits used for robot control.
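The simplest controller form mentioned above, a look-up table relating sensor inputs to motor outputs, can be sketched as follows. Everything here is hypothetical: the two proximity sensors, the binning scheme, and the hand-filled table entries (which, in an actual ER experiment, would be the values the evolutionary process configures).

```python
def make_table_controller(table, n_bins=4):
    """table: flat list of (left_speed, right_speed) motor commands,
    one per combination of binned left/right proximity readings."""
    def controller(left_sensor, right_sensor):
        # Discretize each sensor reading in [0, 1] into n_bins bins.
        l = min(int(left_sensor * n_bins), n_bins - 1)
        r = min(int(right_sensor * n_bins), n_bins - 1)
        return table[l * n_bins + r]
    return controller

# Hand-filled entries for illustration; evolution would fill these instead.
table = [(1.0, 1.0)] * 16          # default: drive straight ahead
table[3 * 4 + 0] = (-0.5, 1.0)     # obstacle close on the left: turn right
table[0 * 4 + 3] = (1.0, -0.5)     # obstacle close on the right: turn left
drive = make_table_controller(table)
```

The genome being evolved in such a scheme is just the list of table entries, which makes this representation easy to mutate and recombine.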

Evolving Robot Brains

Neural networks are by far the most common type of controller used in evolutionary robotics. They can be encoded for the process of evolution in a variety of ways. For instance, a neural controller can be represented as a set of connection weights; in this case it is the weights of the network that are actually evolved. The majority of neural networks used in evolutionary robotics are small, accommodating fewer than 10 sensor inputs (nolfi-iwal-1994, quinn-iwbir-2002). These networks usually have fewer than ten neurons and between ten and fifty weighted connections, so just the set of weights, represented by ten to fifty numbers, would be evolved. The largest networks in ER have about 150 inputs and about 5000 connections (nelson-kimas-2003). For these large networks, the set of weights and the neuron configuration are evolved together in the form of a variable-sized matrix of numbers.

Figure 2. Robot Brains: Example neural network robot controllers

Not only controllers can be evolved: it is also possible to encode the physical structure of a robot and evolve that as well. Although there were attempts to do this in the early years of ER research, it is only in the past five or six years that such methods have led to robots able to function in the real world. These recent results were accomplished by formulating a set of modular building units that could be easily simulated and fabricated, but that could also be configured and combined into an almost infinite variety of non-trivial robot bodies (lipson-n-2000, hornby-icra-2001, macinnes-al-2004).
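The weight-vector encoding of a small neural controller can be made concrete with a sketch. The network sizes here (3 sensor inputs, 4 hidden neurons, 2 motor outputs, giving a 20-number genome) are illustrative assumptions in the small-network range described above, not taken from any cited experiment.

```python
import math

N_IN, N_HID, N_OUT = 3, 4, 2   # illustrative small-network sizes

def n_weights():
    # Input-to-hidden plus hidden-to-output connections (no bias terms).
    return N_IN * N_HID + N_HID * N_OUT   # 3*4 + 4*2 = 20

def neural_controller(weights, sensors):
    """Run one feedforward pass: sensor readings in, motor commands out.
    The flat list `weights` is the genome that evolution would configure."""
    assert len(weights) == n_weights()
    w_ih = weights[:N_IN * N_HID]          # input-to-hidden weights
    w_ho = weights[N_IN * N_HID:]          # hidden-to-output weights
    hidden = [math.tanh(sum(sensors[i] * w_ih[i * N_HID + h]
                            for i in range(N_IN)))
              for h in range(N_HID)]
    motors = [math.tanh(sum(hidden[h] * w_ho[h * N_OUT + o]
                            for h in range(N_HID)))
              for o in range(N_OUT)]
    return motors
```

With this representation, evolving the controller means evolving nothing more than the 20 numbers in `weights`; the tanh squashing keeps motor commands bounded in (-1, 1).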

What do the Robots Actually Do?

Almost all of the research done to date has evolved robots capable of only very simple behaviors. Common benchmark tasks that have been studied include simple locomotion, locomotion with object avoidance (cliff-spie-1993, grefenstette-mlwrl-1994), phototaxis (moving toward light sources), and, in the case of legged robots, learning how to walk (beer-ab-1992, jakobi-1998). Only a handful of experiments have investigated tasks of any significant degree of difficulty. In one example, robots learned to visit three goal locations in a specific order (capi-ab-2005). In another example of a relatively difficult task, teams of robots were evolved to compete against one another to find goal objects in very large, complicated environments (nelson-kimas-2003). To perform these tasks the robots had to learn to see, and then to discriminate and react to several different types of objects in their environment. Several tasks that required robots to perform sequential movements have also been studied (floreano-nn-2000). In these tasks robots typically must move to an initial goal position before traveling to a final home position. In another sequential task, robots were evolved to search for and pick up objects in an arena and then to drop the objects outside the border of the arena (nolfi-jras-1997).

How and Where are the Robots Actually Evolved?

The robots and their controllers can be evolved in a variety of ways. Early work dating from the 1990s generally employed either embodied evolution or evolution in simulation, with transfer to real robots after the evolutionary process was complete. More recent research has made use of more complex methods that may use simulation for one portion of the evolution and real robots for another. In addition, work done in the last five years has co-evolved controllers and morphologies in simulation in a way that allowed physical robots to be fabricated after evolution.

Embodied Evolution

In the case of embodied evolution, physical robots are used during the evolutionary process (nolfi-iwal-1994, mondada-jras-1995, watson-cec-1999). In the simplest cases, controllers are loaded into robots, the robots are tested, and each controller's fitness is evaluated based on the performance of the real robot. Although this procedure ensures that the controllers can function in real robots (as opposed to simulated ones), the process is slow because it runs in real time. A more serious problem is that even the worst controllers cannot be allowed to damage the real robots during testing, because this would halt the evolutionary process, at least until the robots could be repaired or new ones built. In practice, this means that embodied evolution cannot use fitness measures based on the true survivability of robots. Designers must instead decide what behaviors a robot is likely to need to perform the task at hand without causing damage to the robot. To do this, the designers must have a reasonably good idea of how to perform the given task, and of how to constrain the robot's training environment so that the robots won't be damaged. This is a problem when the goal is to get the robots to learn how to do something that the designers themselves do not know how to do.

Evolution in Simulation with Transfer to Reality

An alternative to embodied evolution is to evolve the controllers in simulated robots living in simulated environments. Simulated robots can be destroyed during testing, so fitness can be based more directly on actual survival. In the long term this is quite important, although at the current state of evolutionary robotics research, robots generally simply succeed or fail at their given tasks and do not face mortal challenges. For instance, in an object avoidance and navigation task, poorly performing robots will likely just bump into objects and become immobilized rather than actually being damaged. Evolution in simulation can proceed much faster than evolution using only real robots, but care must be taken in designing the simulation environments so that the controllers evolved in simulation can function in real robots. A large proportion of current ER research uses evolution in simulation with transfer to reality. One of the most sophisticated simulation environments allowed robots that rely on video to see their environment to be evolved in simulation and then transferred to real robots (nelson-iros-2003).

This page is maintained by Andrew Nelson. All artwork © 1990-2006 A.L. Nelson, all rights reserved. Site administrator contact: [email protected]. © 2006 A.L. Nelson
