A Robotic Wheelchair Based on the Integration of Human and Environmental Observations
Look Where You're Going

BY YOSHINORI KUNO, NOBUTAKA SHIMADA, AND YOSHIAKI SHIRAI
With the increase in the number of senior citizens, there is a growing demand for human-friendly wheelchairs as mobility aids. Recently, robotic/intelligent wheelchairs have been proposed to meet this need [1]-[5]. The most important evaluation factors for such wheelchairs are safety and ease of operation, and providing autonomy is one way to improve both. The avoidance of obstacles using infrared, ultrasonic, vision, and other sensors has been investigated, and the ultimate goal, a wheelchair that automatically takes users to the destinations they specify, has also been studied. However, in addition to going to designated places, we often want to move about as freely as we wish. In this case, a good human interface becomes the key factor. Instead of a joystick, as on conventional power wheelchairs, voice can be used to issue commands [3], [5]. In the Wheelesly robotic wheelchair system [4], the user controls the system by selecting a command icon on the CRT screen by gaze; eye movements are measured through five electrodes placed on the user's head. Although autonomous capabilities are important, the techniques they require are largely shared with those developed for autonomous mobile robots. Thus, we concentrate on the human-interface issue in our research and implement conventional autonomous capabilities, with the modifications necessary to realize an actual working system.
We need environmental information to realize autonomous capabilities. Static information can be provided by maps, while vision, ultrasonic, and other sensors are used to obtain current information. On the other hand, we need information about the user for the human interface. Although this information is obtained through joystick operation in conventional power wheelchairs, we would like to use much simpler actions to realize a user-friendly interface. This requires the wheelchair to observe the user's actions in order to understand his/her intentions. However, simple actions tend to be noisy: even when users do not intend to issue commands, they may make movements that look like command actions to the wheelchair. To solve this problem, we propose to integrate the observation of the user with the environmental observations that are usually used to realize autonomy. In our robotic wheelchair, we use face direction to convey the user's intentions to the system. When users want to move in a certain direction, it is natural for them to look in that direction. Using face direction to control motion has another merit: when the user wants to turn to the left or right, he/she needs to look in that direction by turning the head intentionally. As the turn nears completion, the user turns his/her head back to the frontal position to adjust the wheelchair's direction, but this behavior is so natural that it happens almost unconsciously.
If the user instead uses a steering lever or wheel to control the vehicle's motion, he/she must move it consciously all the time. The interface based on face direction can therefore be more user-friendly than such conventional methods. As mentioned before, however, the problem is that such natural motions are noisy: users may move their heads even when they do not intend to make turns. At first, we tried to solve the problem simply by ignoring quick movements, since head movements made to command a turn can be expected to be intentional and therefore slow and steady [6]. However, this simple method is not enough. For example, although it is quite natural for humans to look at obstacles when they come close, the previous system would turn toward the obstacles.

This article presents our new, modified robotic wheelchair. The key point of the research is the integration of the face-direction interface with autonomous capabilities; in other words, the system combines the information obtained by observing the user with that obtained by observing the environment. This integration means more than simply having two kinds of separate functions: the system uses the sensor information obtained for autonomous navigation to solve the problem with face-direction control mentioned above, and when it can infer the user's intentions from observing the face, it chooses an appropriate autonomous navigation function to reduce the user's burden of operation. In addition, we introduce another function, not considered in conventional systems, that is realized by observing the user when he/she is off the wheelchair: the wheelchair can find the user by face recognition and then move according to the user's commands, indicated by hand gestures. This function is useful because people who need a wheelchair may find it difficult to walk to it. This article describes the design concept of the robotic wheelchair and presents our experimental system together with operational experiments.
System Design
Figure 1 shows an overview of our robotic wheelchair system. As a computing resource, the system has a PC (AMD Athlon, 400 MHz) with a real-time image-processing board consisting of 256 processors, developed by NEC Corporation [7]. The system has 16 ultrasonic sensors to sense the environment and two video cameras: one camera is set up to observe the environment, while the other is used to watch the user's face. The system controls its motion based on the sensor data. Figure 2 illustrates how the sensor data are used in the wheelchair.
The ultrasonic sensors detect obstacles around the wheelchair. If any obstacles are detected, the wheelchair is controlled to avoid them. To ensure safety, this obstacle-avoidance behavior overrides all other behaviors except those performed manually with the joystick. The system computes the face direction from images taken by the user-observing camera. Users can convey their intentions to the system by turning their faces.
Figure 1. System overview.
Figure 2. System configuration (inputs: environment-observing camera, user-observing camera, and ultrasonic sensors; processing blocks: face recognition, gesture recognition, face-direction computation, target tracking, user's intention recognition, and environment recognition; behaviors: go straight, turn, follow walls, and avoid obstacles).
The problem, however, is that users move their heads for various reasons other than controlling the wheelchair's direction. The system needs to distinguish wheelchair-control behaviors from other head movements; this is the intention-understanding problem in the current system. Our basic assumption is that users move their heads slowly and steadily when they know that the wheelchair moves in the direction their face is pointing. Thus, the system ignores quick head movements and responds only to slow, steady ones. There are cases, however, where users look in a certain direction steadily without any intention of controlling the wheelchair. For example, when users notice a poster on a corridor wall, they may look at it by turning their faces in its direction while moving straight. In general, if there are objects close to the user, he/she may tend to look at them for safety and other reasons. In these cases, users usually do not want to turn in those directions, but we cannot exclude the possibility that they really do want to turn toward those objects. To adapt to these situations, the system changes its sensitivity to face turning depending on the environmental data obtained from the ultrasonic sensors: if the ultrasonic sensors detect objects close to the wheelchair in certain directions, the system reduces its sensitivity to face turns toward those directions. Thus, the wheelchair will not turn in these directions unless the user looks there steadily.

The environment-observing camera is used for moving straight autonomously. With conventional power wheelchairs, users need to hold the joystick to control the motion all the time, even when they just want to go straight. In addition to the obstacle-avoidance behavior, the current system has an autonomous behavior that reduces the user's burden in such cases. This behavior is initiated by the face-direction observation: if the face looks straight ahead for a while, the system considers that the user wants to go straight and starts the behavior. If it can find a feature that can be tracked around the image center, it controls the wheelchair motion to keep that feature at the image center. When moving in a corridor, it extracts the side edges of the corridor and controls the wheelchair motion to keep the intersection of the edges (the vanishing point) at the same image position. The ultrasonic sensors are also used to help the wheelchair move straight: we have implemented the wall-following behavior developed for behavior-based robots [8], so the wheelchair can follow a corridor by keeping its distance from the wall using the ultrasonic sensor data.

In addition, the environment-observing camera is used to watch the user when he/she is off the wheelchair. People who need a wheelchair may find it difficult to walk to it, so our wheelchair has the capability to find the user by face recognition and then recognize his/her hand gestures; the user can call the wheelchair or send it away by hand gestures.
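The resulting control flow amounts to a simple fixed-priority arbitration among the behaviors described above. The following Python sketch illustrates one way to express it; the function names, the 0.5-m safety margin, and the sonar indexing are illustrative assumptions, not details taken from the article.

```python
from dataclasses import dataclass

@dataclass
class MotionCommand:
    linear: float   # forward speed [m/s]
    angular: float  # turn rate [rad/s]

SAFETY_MARGIN = 0.5  # assumed distance [m] at which obstacle avoidance takes over

def arbitrate(joystick_cmd, sonar_ranges, face_cmd, straight_cmd):
    """Return the command of the highest-priority active behavior."""
    if joystick_cmd is not None:            # 1. manual joystick input overrides everything
        return joystick_cmd
    if min(sonar_ranges) < SAFETY_MARGIN:   # 2. obstacle avoidance overrides the rest
        return avoid_obstacles(sonar_ranges)
    if face_cmd is not None:                # 3. turn toward the (filtered) face direction
        return face_cmd
    return straight_cmd                     # 4. autonomous go-straight / wall following

def avoid_obstacles(sonar_ranges):
    # Slow down and steer away from the side with the nearest reading
    # (assuming the first half of the sonar ring covers the left side).
    nearest = min(range(len(sonar_ranges)), key=lambda i: sonar_ranges[i])
    turn = -0.6 if nearest < len(sonar_ranges) // 2 else 0.6
    return MotionCommand(linear=0.1, angular=turn)
```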
Motion Control by Face Direction

Face Direction Computation
Our previous system used a face-direction computation method based on tracking face features [6]. However, the tracking process sometimes loses the features, so we have devised a new, simple, and robust method, described below.
Figure 3. Face-direction computation.
Table 1. Experimental evaluation for response time by six subjects (A-F). Scores: 1 = not good, 2 = moderate, 3 = good.

Frames (n)   A   B   C   D   E   F   Total
 1           1   2   3   3   2   3    14
 3           2   2   3   3   3   2    15
 5           3   3   2   3   3   3    17
 6           1   3   2   2   2   2    12
 7           1   3   2   2   2   2    12
 8           1   3   3   2   1   2    12
 9           1   2   2   1   2   1     9
10           1   2   2   1   2   2    10
12           1   1   2   1   2   2     9
14           1   1   1   1   1   2     7
16           1   1   1   1   1   2     7
18           1   1   1   1   1   1     6
20           1   1   1   1   1   1     6
Table 2. Experimental evaluation for unintentional head movements. For each smoothing-window length n (5, 10, 15, 20, and 30 frames), the subject made five head movements at each of the three levels, and each trial was marked ❍ if the wheelchair motion was not affected and ■ if it was affected.
Figure 3 shows each step of the process. Figure 3(a) is an original image. First, we extract the bright regions in the image [Figure 3(b)] and choose the largest region as the face region [Figure 3(c)]. Then, we extract the dark regions inside the face region, which contain face features such as the eyes, eyebrows, and mouth [Figure 3(d)]. We compare the centroid of the face region with that of all the face features combined. In Figure 3(c) and (d), the vertical line passing through the centroid of the face region is drawn as a thick red line, and, in Figure 3(d), the vertical line passing through the centroid of the face features is drawn as a thin green line. If the latter lies to the right/left of the former (to the left/right in the image), the face can be considered to be turning right/left. The output of the process is the horizontal distance between the two centroids, which roughly indicates the face direction. We use vertical movements of the face features as a start-stop switch: if the user nods, the wheelchair starts moving, and if the user nods again, it stops.
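As a rough illustration of this centroid-based measurement, the following OpenCV sketch thresholds a grayscale frame for bright and dark regions and returns the horizontal offset between the face-region centroid and the face-feature centroid. The threshold values and the use of plain intensity thresholding are assumptions made for the sketch; the article does not specify them.

```python
import cv2
import numpy as np

def face_direction_offset(gray, bright_thresh=120, dark_thresh=60):
    """Return the horizontal offset [pixels] between the centroid of the face
    region and the centroid of the dark face features; its sign roughly tells
    which way the face is turned."""
    # 1. Bright regions; keep the largest connected component as the face region.
    _, bright = cv2.threshold(gray, bright_thresh, 255, cv2.THRESH_BINARY)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(bright)
    if n < 2:
        return None
    face_label = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    face_mask = (labels == face_label).astype(np.uint8)

    # 2. Dark regions inside the face region (eyes, eyebrows, mouth).
    _, dark = cv2.threshold(gray, dark_thresh, 255, cv2.THRESH_BINARY_INV)
    features = cv2.bitwise_and(dark, dark, mask=face_mask)

    # 3. Horizontal distance between the two centroids.
    m_face = cv2.moments(face_mask, binaryImage=True)
    m_feat = cv2.moments(features, binaryImage=True)
    if m_face["m00"] == 0 or m_feat["m00"] == 0:
        return None
    cx_face = m_face["m10"] / m_face["m00"]
    cx_feat = m_feat["m10"] / m_feat["m00"]
    # The sign-to-direction mapping depends on whether the camera image mirrors the user.
    return cx_feat - cx_face
```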
Preliminary Experiments
The system computes the face direction at 30 frames per second. We apply a filter to the direction data to separate wheelchair-control behaviors from other head movements. The filter is a simple smoothing filter that averages the values over a certain number of frames. If this number is large, the system is not affected by quick, unintentional head movements, but the user may feel uneasy about the slow response of the system. We performed actual running experiments, changing the number of frames n used in the filter, to obtain basic data.

First, we examined the degree of uneasiness caused by the slow response. We used six subjects, male students at Osaka University who were not regular wheelchair users. We told them that the wheelchair would move in the direction of their face, and they drove the wheelchair without any problem. This confirms that face direction can be used for wheelchair operation. They were asked to give a subjective evaluation score for each smoothing-filter condition from the viewpoint of the system's response to their head movements: 1 for not good, 2 for moderate, and 3 for good. Table 1 shows the result. When n is small, the wheelchair responds sensitively to any head movement, so the scores are somewhat low. When n is large, the wheelchair does not respond quickly even when the user turns his head intentionally, and the scores in these cases are considerably lower. The highest score was obtained when n was five.

Second, we examined whether the system would be affected by quick, unintentional head movements. We considered three levels of movement: quick movements with a duration of less than 0.5 s (Level 1), moderate-speed movements with a duration of 0.5-1 s (Level 2), and slow movements with a duration of 1-1.5 s (Level 3). At Level 3, users turn their heads and can read characters in the scene around them. We asked a subject to move his head five times at each level while the wheelchair was moving straight and then examined whether or not the wheelchair motion was affected. Table 2 shows the result. It indicates that n should be equal to or greater than 15 if we want the system to be unaffected by movements at Levels 1 and 2.

Use of Environmental Information
The experimental results for the two evaluation factors show a tradeoff. We chose a constant n = 10 in our previous wheelchair, satisfying both to some degree [6]. To improve both factors, we need a method of recognizing the user's intentions more reliably. Rather than obtaining more information from observing the user, we considered using the environmental information, based on the assumption that our behaviors may be constrained by environmental conditions. We have modified the wheelchair so that it can change the value of n depending on the environmental information obtained by the ultrasonic sensors.

Figure 4. Number of frames versus distance.
If there are objects close to the user, he/she may tend to look at them for safety and other reasons; however, the user usually does not want to turn in those directions. When users move in a narrow space, such as a corridor, they usually want to move along it. Moreover, it could be dangerous if the wheelchair turned unexpectedly because of a failure of the user's intention recognition. Thus, it is better to use a large n in such cases. On the other hand, users may prefer a quick response so that they can move freely when they are in an open space with no obstacles around them.
Even if errors occur in the user's intention recognition, it is safe in such cases, so it is appropriate to use a small n. Based on these considerations and the results of the preliminary experiments, we choose n using the relation shown in Figure 4, whose horizontal axis is the distance to objects measured by the ultrasonic sensors. The value of n is determined for each turn direction using the sensor measurements in that direction; in the current implementation, we consider only the forward-right and forward-left directions. In addition, we use n = 8 whenever the face turns back from the right or left to the center. This is based on comments by the subjects after the preliminary experiments, which can be summarized as follows. When they turned the wheelchair to the left or right, they did not mind the slow response, but when the turn was nearly completed and they turned their faces back to the frontal position, they felt uneasy if the response was slow. In the former case, they turned their heads intentionally, so they did not mind the slow response because their movements themselves were slow and steady. In the latter case, however, their behaviors were almost unconscious and quick, and the slow response caused uneasiness. This consideration has led us to use a small n for center-oriented face movements.
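Putting the pieces together, the filter is a moving average whose window length n is selected from the ultrasonic reading in the intended turn direction, with the fixed short window n = 8 when the face returns toward the center. In the Python sketch below, the linear mapping and its endpoints (n = 30 near 1 m, n = 10 beyond 3 m) are assumptions read loosely off the axes of Figure 4, not values stated in the article.

```python
from collections import deque

N_CENTER = 8              # fixed window for center-oriented face movements (from the article)
N_MIN, N_MAX = 10, 30     # assumed endpoints of the n-versus-distance relation (Figure 4)
D_NEAR, D_FAR = 1.0, 3.0  # distance range [m] on the horizontal axis of Figure 4

def window_length(distance_m, returning_to_center=False):
    """Choose the smoothing-window length n from the ultrasonic distance
    measured in the intended turn direction."""
    if returning_to_center:
        return N_CENTER
    if distance_m <= D_NEAR:
        return N_MAX          # cluttered surroundings: respond slowly
    if distance_m >= D_FAR:
        return N_MIN          # open space: respond quickly
    t = (distance_m - D_NEAR) / (D_FAR - D_NEAR)
    return round(N_MAX + t * (N_MIN - N_MAX))   # assumed linear interpolation

class FaceDirectionFilter:
    """Moving average over the last n face-direction samples (30 samples/s)."""
    def __init__(self, max_window=30):
        self.samples = deque(maxlen=max_window)

    def update(self, offset, n):
        self.samples.append(offset)
        recent = list(self.samples)[-n:]
        return sum(recent) / len(recent)
```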
Going Straight Using Vision
With conventional power wheelchairs, users need to hold the joystick to control the motion all the time, even when they just want to go straight. The system has an autonomous behavior that reduces the user's burden in such cases. This behavior is initiated by the face-direction observation: if the face looks straight ahead for a while, the system considers that the user wants to go straight and starts the behavior.

The system has two vision processes for this behavior. The first is simple template matching based on the sum of absolute differences (SAD). When the system determines that the user would like to go straight, it examines the center region of the image. If the intensity variance in this region is large, the system selects the region as a template. Then, in successive images, it calculates the best matching position for the template and controls the wheelchair to keep the matching position at the center. The template is updated as the wheelchair moves. If the method fails and the wheelchair moves in a wrong direction, the user will turn his/her face in the intended direction; this means the system can detect its own failure from the face-direction computation result. When this happens, it waits until the user is again looking forward steadily and then resumes the process with a new template.

The second process is based on a vanishing point and is used in places such as corridors, where a distinctive target to track may not be available. The system extracts both side edges of the corridor and calculates their intersection, controlling the wheelchair motion to keep this intersection (the vanishing point) at the same position in the images. Figure 5(a) shows an example of the tracking method, where the tracking region is indicated by the white square. Figure 5(b) shows an example of the vanishing-point method, where the extracted corridor edges are drawn in white.

Figure 5. Vision processes for straight motion.
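The template-matching branch can be sketched with OpenCV as follows: accept the center patch as a template only if it has enough texture, then track it and steer so that the match stays centered. The patch size, variance threshold, and steering gain are illustrative assumptions, and cv2.matchTemplate with the squared-difference score stands in for the SAD measure described in the article.

```python
import cv2

PATCH = 64        # template size in pixels (assumed)
VAR_MIN = 200.0   # minimum intensity variance to accept a template (assumed)
GAIN = 0.005      # steering gain: rad/s per pixel of horizontal error (assumed)

def pick_template(gray):
    """Take the center patch as the template if it has enough texture."""
    h, w = gray.shape
    y0, x0 = h // 2 - PATCH // 2, w // 2 - PATCH // 2
    patch = gray[y0:y0 + PATCH, x0:x0 + PATCH]
    return patch.copy() if patch.var() > VAR_MIN else None

def steer_to_keep_straight(gray, template):
    """Return (angular velocity, updated template) that keeps the match centered."""
    # Squared differences play the same role here as the SAD score in the article.
    result = cv2.matchTemplate(gray, template, cv2.TM_SQDIFF)
    _, _, min_loc, _ = cv2.minMaxLoc(result)
    bx, by = min_loc                                   # best match (top-left corner)
    err_x = (bx + PATCH // 2) - gray.shape[1] // 2     # horizontal offset from image center
    new_template = gray[by:by + PATCH, bx:bx + PATCH].copy()  # update as the chair moves
    return -GAIN * err_x, new_template
```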
Figure 7. Distance to obstacles and the number of frames.
Total System Experiments

Function Check
First, we performed experiments to examine whether the proposed functions worked properly and whether the wheelchair could navigate effectively in a narrow, crowded space. The subjects were not wheelchair users. Figure 6 shows the experimental environment map, in which the rectangles and ellipses represent desks and chairs, respectively; the curve shows an example path of the wheelchair. Figure 7 shows the distance from the wheelchair to the obstacles on its right side along the path from A to B in Figure 6, together with the number of frames n used in smoothing the face-direction data. Since several objects were located around A, the system set n large. When the wheelchair came close to B, the forward-right side became open, so the value of n became small. The upper part of Figure 8 shows the face direction during the travel, and the lower part shows the actual wheelchair motion. Although the user moved his head three times to look around the scene before reaching B, the wheelchair moved straight.

Figure 6. Experimental environment map.
Figure 8. Face direction and wheelchair motion.
Then, the wheelchair turned right when the user moved his head to the right to show his intention of turning. Figure 9 shows the wheelchair during the experimental run. Although we have not evaluated the system quantitatively, the six users who tried the previous version of the wheelchair gave favorable comments on this new version.
Figure 9. Experimental run.
Experiments by Wheelchair Users
Next, we took the wheelchair to a hospital that provides rehabilitation programs for people who have difficulty moving around. Five patients, three women and two men, joined the experiments. All were wheelchair users: three mainly used manual wheelchairs, and the other two daily used power wheelchairs that happened to be of the same type we adopted in our system. Before riding, they were told only, "If you nod, the wheelchair will start moving, and if you nod again, it will stop. It will move in the direction where you look." With this instruction alone, all of them were able to move the wheelchair without any problem.

After confirming that they could control the wheelchair, we performed experiments to check the effectiveness of integrating the user and environmental observations. We put boxes on the path, as shown in Figure 10, and asked the subjects to avoid them. Even though they tended to look at the boxes, the wheelchair successfully avoided the obstacles. Then, we switched off the integration function and asked them again to avoid the boxes. Although they were still able to avoid the obstacles, the turning radii around the obstacles were larger than before. This can be explained as follows: without the integration, when the subjects looked at the obstacles, the wheelchair started turning toward them, so to avoid a collision they tried to steer the wheelchair farther away from them.

One subject had a problem with her right elbow and had difficulty steering her own power wheelchair. She told us that she preferred our robotic wheelchair, so our wheelchair proved useful for people like her. The others evaluated the wheelchair highly because it can move safely with simple operations. However, since the current wheelchair is heavy and cannot turn as quickly as the base machine, they were not sure whether the proposed interface was definitely better than a joystick. This aspect is the focus of our future work: to make the wheelchair move as quickly as the base machine by solving implementation issues, and to perform usability tests comparing our wheelchair with ordinary power wheelchairs.
Figure 10. Experimental scene at a hospital.
Figure 11. Face recognition to detect the user (Person A).

Figure 12. Tracking the face and the hands.
Remote Control Using Face and Gesture Recognition
In this section, we describe a new optional function of our wheelchair. Since this is an ongoing project, we briefly describe here what we have done so far. Details can be found in [9].
On various occasions, people using wheelchairs have to get off them. They need to move their wheelchairs where they will not bother other people and, when they leave, they want to call their wheelchairs back to them. It is convenient if they can do these operations by hand gestures, since hand gestures can be used in noisy conditions. However, computer recognition of hand gestures is difficult in complex scenes [10]: typically, many people are moving about, with their hands moving as well, so it is difficult to distinguish a "come here" or other command gesture made by the user from the other movements in the scene. We propose to solve this problem by combining face recognition and gesture recognition. Our system first extracts face regions and detects the user's face. It then tracks the user's face and hands and recognizes hand gestures. Because the user's face has been located, a simple method can recognize the hand gestures without being distracted by other movements in the scene.

The system extracts skin-color regions by color segmentation and extracts moving regions by subtraction between consecutive frames. Regions around those extracted in both processes are considered face candidates. The system zooms in on each face candidate one by one and checks whether it is really a face by examining the existence of face features. The selected face-region data are then fed into the face-recognition process. We use the face-recognition method proposed by Moghaddam and Pentland [11]: images of the user taken from various angles are compressed into an eigenspace in advance, an observed image is projected onto the eigenspace, and the distance-from-feature-space (DFFS) and the distance-in-feature-space (DIFS) are computed. Using both measures, the system decides whether or not the current face is the user's face. Figure 11 shows an example of face recognition.

After the user's face is detected, it is tracked with the simple SAD method. Moving skin-color regions around and below the face are considered to be the hands and are also tracked. Figure 12 shows a tracking result. The hand positions relative to the face are computed, and the spotting-recognition method [12], based on continuous dynamic programming, carries out segmentation and recognition simultaneously using the position data.

Gesture recognition in complex environments cannot be perfect, so the system improves its capability through interaction with the user. If the matching score for a particular registered gesture exceeds a predetermined threshold, the wheelchair moves according to the command indicated by that gesture. Otherwise, the gesture with the highest matching score, even though it is below the threshold, is chosen, and the wheelchair moves a little according to this gesture command. If the user continues the same gesture after seeing this small motion of the wheelchair, the system considers the recognition result correct and carries out the order. If the user changes his/her gesture, the system begins to recognize the new gesture, iterating the above process.
In the former case, the gesture pattern is registered so that the system can learn it as a new variation of the gesture command. Experiments in our laboratory environment, where several people were walking around, have confirmed that we can move the wheelchair with the same hand gestures that we use between humans.
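The first stage of this pipeline, extracting face/hand candidates as regions that are both skin-colored and moving, can be sketched with OpenCV as follows. The HSV skin-color bounds, the motion threshold, and the minimum region area are illustrative assumptions; the article does not give the actual segmentation parameters.

```python
import cv2
import numpy as np

def face_hand_candidates(prev_bgr, curr_bgr, motion_thresh=25, min_area=400):
    """Return bounding boxes (x, y, w, h) of regions that are both skin-colored
    and moving; these are the candidates passed on to face verification,
    recognition, and tracking."""
    # Skin color in HSV (bounds are assumptions for the sketch).
    hsv = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))

    # Moving regions from differencing consecutive frames.
    g0 = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    moving = cv2.threshold(cv2.absdiff(g0, g1), motion_thresh, 255, cv2.THRESH_BINARY)[1]

    # Keep regions that satisfy both cues; dilate so the two cues can overlap loosely.
    kernel = np.ones((15, 15), np.uint8)
    both = cv2.bitwise_and(cv2.dilate(skin, kernel), cv2.dilate(moving, kernel))
    n, _, stats, _ = cv2.connectedComponentsWithStats(both)
    return [tuple(stats[i, :4]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]
```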
Conclusion
We have proposed a robotic wheelchair that observes both the user and the environment. It can understand the user's intentions from his/her behaviors together with the environmental information, and it also observes the user when he/she is off the wheelchair, recognizing commands indicated by hand gestures. The experimental results show that our approach is promising. Although the current system uses face direction, for people who find it difficult to move their faces it can be modified to use movements of the mouth, eyes, or any other body parts that they can move. Since such movements are generally noisy, integrating observations of the user and the environment will be effective in determining the user's real intentions and will be a useful technique for better human interfaces.
Acknowledgments
The authors would like to thank Tadashi Takase and Satoshi Kobatake at Kyowakai Hospital for their cooperation in the experiments at the hospital. They would also like to thank Yoshihisa Adachi, Satoru Nakanishi, Teruhisa Murashima, Yoshifumi Murakami, Takeshi Yoshida, and Toshimitsu Fueda, who participated in the wheelchair project and developed part of the hardware and software. This work was supported in part by the Ministry of Education, Culture, Sports, Science and Technology under the Grant-in-Aid for Scientific Research (KAKENHI 09555080, 12650249, 13224011).
References
[1] D.P. Miller and M.G. Slack, "Design and testing of a low-cost robotic wheelchair prototype," Autonomous Robots, vol. 2, pp. 77-88, 1995.
[2] T. Gomi and A. Griffith, "Developing intelligent wheelchairs for the handicapped," Lecture Notes in AI: Assistive Technology and Artificial Intelligence, vol. 1458, pp. 150-178, 1998.
[3] R.C. Simpson and S.P. Levine, "Adaptive shared control of a smart wheelchair operated by voice control," in Proc. 1997 IEEE/RSJ Int. Conf. Intelligent Robots and Systems, 1997, vol. 2, pp. 622-626.
[4] H.A. Yanco and J. Gips, "Preliminary investigation of a semi-autonomous robotic wheelchair directed through electrodes," in Proc. Rehabilitation Engineering Society of North America 1997 Annual Conf., 1997, pp. 414-416.
[5] N.I. Katevas, N.M. Sgouros, S.G. Tzafestas, G. Papakonstantinou, P. Beattie, J.M. Bishop, P. Tsanakas, and D. Koutsouris, "The autonomous mobile robot SENARIO: A sensor-aided intelligent navigation system for powered wheelchair," IEEE Robot. Automat. Mag., vol. 4, pp. 60-70, 1997.
[6] Y. Adachi, Y. Kuno, N. Shimada, and Y. Shirai, "Intelligent wheelchair using visual information on human faces," in Proc. 1998 IEEE/RSJ Int. Conf. Intelligent Robots and Systems, 1998, vol. 1, pp. 354-359.
[7] S. Okazaki, Y. Fujita, and N. Yamashita, "A compact real-time vision system using integrated memory array processor architecture," IEEE Trans. Circuits Syst. Video Technol., vol. 5, pp. 446-452, 1995.
[8] I. Kweon, Y. Kuno, M. Watanabe, and K. Onoguchi, "Behavior-based mobile robot using active sensor fusion," in Proc. 1992 IEEE Int. Conf. Robotics and Automation, 1992, pp. 1675-1682.
[9] Y. Kuno, T. Murashima, N. Shimada, and Y. Shirai, "Understanding and learning of gestures through human-machine interaction," in Proc. 2000 IEEE/RSJ Int. Conf. Intelligent Robots and Systems, 2000, vol. 3, pp. 2133-2138.
[10] V.I. Pavlovic, R. Sharma, and T.S. Huang, "Visual interpretation of hand gestures for human-computer interaction: A review," IEEE Trans. Pattern Anal. Machine Intell., vol. 19, pp. 677-695, 1997.
[11] B. Moghaddam and A. Pentland, "Probabilistic visual learning for object representation," IEEE Trans. Pattern Anal. Machine Intell., vol. 19, pp. 696-710, 1997.
[12] T. Nishimura, T. Mukai, and R. Oka, "Spotting recognition of gestures performed by people from a single time-varying image," in Proc. 1997 IEEE/RSJ Int. Conf. Intelligent Robots and Systems, 1997, vol. 2, pp. 967-972.
Yoshinori Kuno received the B.S., M.S., and Ph.D. degrees in electrical and electronics engineering from the University of Tokyo in 1977, 1979, and 1982, respectively. He joined Toshiba Corporation in 1982. From 1987 to 1988, he was a visiting scientist at Carnegie Mellon University. In 1993, he joined Osaka University as an associate professor in the Department of Computer-Controlled Mechanical Systems. Since 2000, he has been a professor in the Department of Information and Computer Sciences at Saitama University. His research interests include computer vision, intelligent robots, and human-computer interaction.

Nobutaka Shimada received the B.Eng., M.Eng., and Ph.D. (Eng.) degrees in computer-controlled mechanical engineering in 1992, 1994, and 1997, respectively, all from Osaka University, Osaka, Japan. In 1997, he joined the Department of Computer-Controlled Mechanical Systems, Osaka University, Suita, Japan, where he is currently a research associate. His research interests include computer vision, vision-based human interfaces, and robotics.

Yoshiaki Shirai received the B.E. degree from Nagoya University in 1964 and the M.E. and Ph.D. degrees from the University of Tokyo in 1966 and 1969, respectively. He joined the Electrotechnical Laboratory in 1969. From 1971 to 1972, he was a visiting researcher at the MIT AI Lab. Since 1988, he has been a professor in the Department of Computer-Controlled Mechanical Systems, Graduate School of Engineering, Osaka University. His research areas are computer vision, robotics, and artificial intelligence.

Address for Correspondence: Yoshinori Kuno, Department of Information and Computer Sciences, Saitama University, 255 Shimo-okubo, Saitama, Saitama 338-8570, Japan. Tel.: +81 48 858 9238. Fax: +81 48 858 3716. E-mail:
[email protected]. saitama-u.ac.jp,
[email protected].