Experiences with Minirobot Platforms in Robotics and AI Laboratory
Enric Cervera, Pedro J. Sanz
Robotics and AI Lab, Jaume-I University, Campus Riu Sec, 12071 Castelló (SPAIN)
E-mail: {ecervera,sanzp}@icc.uji.es
Abstract. This paper presents some laboratory experiences carried out in our Robotics and AI courses. Using a configurable autonomous mobile robot platform, a wide variety of problems can be studied: from robot kinematics to learning architectures. A brief description of the robot platform is provided, together with some theoretical and practical aspects of each laboratory activity.
1 INTRODUCTION
During the last two years, we have carried out several laboratory experiences involving autonomous minirobots in our Robotics and AI courses, ranging from kinematic studies to learning strategies, using a single robot platform. This paper presents the off-the-shelf components of the robot, and briefly describes the laboratory experiences. The rest of the paper is organized as follows: Section 2 describes the robotic platform (mechanics, hardware and software) used in the experiments. In Section 3, different learning experiments are presented. Finally, Section 4 presents some conclusions and future plans.
2 EXPERIMENTAL PLATFORM

The autonomous mobile minirobot platform is presented. The overall robot is rather inexpensive, highly configurable, and it has proven to be very robust. Moreover, it provides enough computing power for some rather complex AI algorithms. Figure 1 depicts the platform as used in our fire fighting robot contest. The robot is controlled by a 68HC11-based board, which also reads signals from IR sensors. The fire extinguisher is merely a standard CPU cooler.

Figure 1. Minirobot platform.

2.1 Electromechanics

The robot structure is made entirely of Lego parts. It has diametrically opposed drive wheels and two freewheeling casters. Each wheel is driven by its own motor in a differential-drive configuration [3], thus allowing the robot to go forward, backward, or turn in place. When possible, Lego gear motors (Fig. 2) are used in our designs. These are 9 V DC motors with internal gear reduction, turning at 350 rpm with no load. They are more efficient than standard DC motors with external reduction.

Figure 2. Lego gear motor.

Additionally, standard servos (Fig. 3) are used in special mechanisms, like rotating bases for sensors, in order to scan a broad area looking for obstacles or a flame.

Figure 3. Servo kit.
Besides the Lego parts, only miscellaneous electronic components were used, e.g. bumpers, infrared proximity sensors, infrared rangers, photocells, etc. While bumpers and photocells are common and inexpensive sensors, infrared proximity and range sensors deserve a little more comment.
Infrared reflective sensors (Fig. 4) consist of an infrared LED and a phototransistor in a compact package. Both elements are tuned to the same frequency, thus filtering out ambient light. The output of the sensor is related to the amount of radiation reflected by a surface, which depends on the proximity of the surface and on its reflectiveness.

Figure 4. Infrared reflectance sensors: on the left the QRB1114, on the right the QRD1114; at the bottom, sensor schematics.

We have used these sensors both for detecting obstacles without contact and for detecting white lines on a black ground. Depending on the environment, there are sensors focused for sensing specular reflection (QRB1114) or unfocused for sensing diffuse surfaces (QRD1114).

Infrared range sensors (Fig. 5) work in a different manner: they use triangulation and a small linear CCD array to compute the distance and/or presence of objects in the field of view. The basic idea is this: a pulse of IR light is emitted by the emitter. This light travels out into the field of view and either hits an object or just keeps on going. In the case of no object, the light is never reflected and the reading shows no object. If the light reflects off an object, it returns to the detector and creates a triangle (Fig. 6) between the point of reflection, the emitter, and the detector.

Figure 5. Sharp GP2D02 infrared range sensor.

The angles in this triangle vary with the distance to the object. The receiver portion of these detectors is actually a precision lens that projects the reflected light onto different portions of the enclosed linear CCD array, depending on the angle of the triangle described above. The CCD array can then determine the angle at which the reflected light came back and, therefore, the distance to the object.

Figure 6. Different angles with different distances.

This method of ranging is almost immune to interference from ambient light and offers remarkable indifference to the color of the object being detected: detecting a black wall in full sunlight is possible. We have used these sensors to detect the opponent robot in a sumo combat, or to detect walls and doorways in the fire fighting environment.

Finally, the Lego rotation sensor (Fig. 7) has been used in experiences where a velocity measurement was needed. The rotation sensor tracks how much an axle inserted in the sensor turns. The sensor measures in increments of 16 counts per rotation, meaning that as the axle completes one rotation the sensor counts to 16. If the axle turns 90 degrees, the sensor counts to 4. The sensor can track counter-clockwise rotation by counting backwards (down to -32,768).
Figure 7. Lego rotation sensor.

2.2 Hardware

Autonomous robot control is achieved by a Handy Board [4]: a 68HC11-based controller board designed for experimental mobile robotics work. MIT has licensed the Handy Board at no charge for educational, research, and industrial use (a typical kit is depicted in Fig. 8). Besides the Motorola MC68HC11 processor, the Handy Board includes 32 K of battery-backed static RAM, four outputs for DC motors, a connector system that allows active sensors to be individually plugged into the board, an LCD screen, and an integrated, rechargeable battery pack.

Figure 8. Handy Board kit.

Figure 8 depicts a complete Handy Board kit, consisting of the main board, the serial interface / charger board, the AC adapter, a serial cable to connect to the PC, and a telephone cable to connect the board to the interface / charger. The board can be upgraded with an Expansion Board (Fig. 9) which plugs on top of it, providing additional features, like additional analog sensor inputs, active Lego sensor inputs, digital outputs, servo motor control signals, and a connector mount for the Polaroid 6500 ultrasonic ranging system.

Figure 9. Handy Board expansion kit.

2.3 Software

A wide range of options is available for developing software on the Handy Board, including free assembly language tools provided by Motorola and commercial C compilers. Additionally, the Handy Board is compatible with Interactive C, the programming environment created for the MIT Lego Robot Design project. Interactive C (IC) is a multi-tasking, C-based compiler that includes a user command line for dynamic expression compilation and evaluation.

Though not used yet in our experiences, the Handy Board can also be programmed in Java using simpleRTJ, a clean-room implementation of the Java language. It differs from other Java VM implementations not only in its small memory footprint (18–23 KB on most microcontrollers) but also in the way class loading is performed. On most embedded systems the Java applications will not be frequently updated or reloaded from the host computer, but may require frequent re-starts (turning power on). To minimize application start-up times and to speed up bytecode execution, simpleRTJ executes pre-linked Java applications: the application class files are linked on the host computer, and the generated application file is then uploaded to the target device for direct execution.
3 LABORATORY EXPERIENCES

The presented experiences have been carried out in the MSc Program in Computer Science at Jaume-I University. First, two experiences from the Robotics course are presented: kinematics and sensor-based control. Next, three experiences belong to the AI course: reinforcement learning, subsumption, and agents.
3.1 Mobile Robot Kinematics

A rotation sensor was attached to each wheel. The goal was to construct the direct and inverse kinematics of a differential-drive mobile robot (see Fig. 10). The model itself is rather simple [5], but the low precision of the sensors, together with the poor control of the motors, makes the problem rather difficult.

Figure 10. Mobile robot kinematics.

The proposed direct kinematic model is:

$$
\begin{pmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{pmatrix} =
\begin{pmatrix} \cos\theta & 0 \\ \sin\theta & 0 \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} \pi r / n & \pi r / n \\ -2\pi r / (nd) & 2\pi r / (nd) \end{pmatrix}
\begin{pmatrix} \dot{q}_1 \\ \dot{q}_2 \end{pmatrix}
\qquad (1)
$$

where
- q1 and q2 are the angles of each wheel,
- r is the radius of the wheels,
- n is the sensor count for a full turn,
- d is the distance between both wheels.

Nonetheless, several interesting results were achieved; e.g., trajectory control made it possible to program the robot to go straight. As depicted in Fig. 11, a simple proportional-integral control loop can be implemented in software, using the rotation sensor attached to each wheel, to control the speed of the robot and to synchronize its two wheels so that the robot will travel in a straight line [3].

Figure 11. Control loop.

3.2 Sensor-driven Control

The goal of this experience was to program the robot for a sumo tournament (Fig. 12) [1]. The robot had to be controlled mainly from its sensor inputs, so a finite state automaton was designed, where transitions were triggered by sensor conditions.

Figure 12. Minirobot sumo competition.

Robots had line sensors for detecting the ring boundary, bumpers, and an IR range finder for detecting the opponent.

3.3 Reinforcement Learning

Reinforcement learning [7] is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives while interacting with the environment. Robots had to learn a line-following behavior. A rather simple task was chosen in order to observe the learning progress in a small amount of time. Instead of hard-coding line-following algorithms, students programmed a reinforcement learning algorithm which actually learnt to control the robot from its own experience, following different lines marked on the floor.
3.4 Subsumption Architecture

Brooks' subsumption architecture [2] consists of behaviors, i.e. layers of control systems that all run in parallel, each firing whenever its sensors trigger. Parallel behaviors can be easily implemented in IC on the Handy Board, using its parallel processing primitives. Different sensors (bumpers, proximity, light) were mounted on the robot, and students had to design and implement layers of control systems. The resulting robot had an emergent behavior: it wandered around the laboratory, avoiding obstacles and either following or escaping from the light.
3.5 Agent-based Architecture
According to the intelligent agent view [6], the problem of AI is to describe and build agents that receive percepts from the environment and perform actions. Robotics is not treated as an independently defined problem: this approach emphasizes the characteristics of the task environment in determining the appropriate agent design. The goal task was inspired by the Trinity College Fire Fighting Contest: robots had to find and extinguish a fire in an office-like environment (Figs. 13 and 14) [1].
Robot perception consisted of two IR reflective sensors for detecting white lines at doorways, three IR distance sensors for detecting walls (front, left, and right), and a light sensor mounted on a servo for detecting the flame. Driving the robot was a hard challenge in itself, since the differential-drive mechanism was not accurate enough for driving straight, nor did it have any dead-reckoning mechanism. Instead, the robot tried to keep a constant distance to the side walls in order to travel along the corridors. Though this was our first edition, several robots succeeded in finding and extinguishing the flame, and one team even programmed a more difficult task: the robot started at an unknown position in the environment, instead of the fixed one. It was able to recognise its location, explore the environment, extinguish the flame, and return to its original location.
4 SUMMARY
This paper has presented the autonomous mobile robot platform used by our students in robotics and AI, and some laboratory experiences ranging from robot kinematics to learning and agent architectures. The most important outcome is the high degree of motivation shown by the students in all of the experiences. We believe that, besides the challenging goals, an important factor was the easy-to-use hardware and software platform: assuming basic skills in C, students were able to write and execute their first robot programs in less than an hour.
Figure 13. Fire fighting robot in action.
In the future, we plan to develop new experiences allowing robot communication, in order to build multi-robot systems and study collective behavior.

Acknowledgements. Support from Jaume-I University (Educative Support Department) and BP Oil Refineria de Castelló SA is gratefully acknowledged.
References
[1] Minirobot Competitions at UJI. http://robot.act.uji.es/compet/.
[2] Rodney A. Brooks. A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation, RA-2:14–23, 1986.
[3] Joseph L. Jones, Anita M. Flynn, and Bruce A. Seiger. Mobile Robots: Inspiration to Implementation. A K Peters, Ltd., 2nd edition, 1999.
[4] Fred G. Martin. Robotic Explorations: A Hands-on Introduction to Engineering. Prentice Hall, 2001.
[5] Philip J. McKerrow. Introduction to Robotics. Addison-Wesley, 1993.
Figure 14. Fire fighting competition.

[6] Stuart Russell and Peter Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall, 1995.
[7] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. The MIT Press, 1998.