1 Introduction

The field of mobile robotics is expanding. Computing systems for robotics have decreased in size and sensor technology has improved in recent years, giving researchers the tools to create autonomous agents in the form of mobile robots. Intelligent mobile systems have the potential to change our society, yet successful applications have been few and far between. Reasonably robust and rugged hardware has appeared, yet the software systems are still in their infancy. What may initially seem a great accomplishment in robot design may actually be a domain-specific, targeted application; this is due to the inherent complexity of integrating components from various disciplines. Mobile robots are a class of autonomous agents, which generally are composed of hardware and software. Most autonomous agent designs also include a model of the agent's environment, sometimes simplified to include only doorways and hallways. If the robot or the environmental model is changed to meet new specifications, the addition or removal of the affected components should not disturb the functionality of the remaining components. To meet these goals, a modular agent design is proposed.

We propose that an effective autonomous system can be developed by delegating high-level control to various symbolic and numerical processors and requesting real-time reactive behavior from a low-level, on-board system. The low-level system will communicate with the high-level systems through a well-defined message-passing scheme. It will be modular, in both hardware and software, allowing incremental improvements without sacrificing performance or functionality. These improvements will come about through an expandable programming language, part of the low-level system, that resides on the mobile robot platform.
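
For illustration only, the kind of message structure such a scheme might use can be sketched in C, the low-level implementation language; the type name, field names, and sizes below are assumptions made for this sketch, not the interface defined in later chapters.

    /* Hypothetical sketch of one token-based message passed between the
       off-board high-level controllers and the on-board low-level system.
       All names and field sizes are illustrative assumptions. */
    #include <stdint.h>

    typedef struct {
        uint8_t  token;      /* command or macro identifier                  */
        uint8_t  task_id;    /* low-level task the message is directed at    */
        int16_t  args[4];    /* small fixed argument list (speed, turn, ...) */
        uint16_t checksum;   /* integrity check over the serial/radio link   */
    } llama_msg_sketch;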

There exists a fundamental dichotomy in mobile robotics research: to be considered a success, a mobile robot must act as an autonomous agent and accept no human help in resolving difficulties; however, developing a mobile robot system demands human manipulation of the hardware and software. In order to improve the hardware and software, the developer must understand how the autonomous agent responds to changing conditions; the developer needs access to the "thinking" processes of the agent. In general, the more complex a task the agent is to perform, the harder it is to understand how the task evolves. If the robot system is created using a modular approach, new modules can be tested and brought on-line once their functionality is assured, and during operation, module communication can be monitored for any irregularities.

Brooks has presented some rather impressive accomplishments with his subsumption mobile robot architecture, yet he points out that in larger systems productive behavior is not assured and needs to be explored (Brooks 1990, 14). We do not intend to present a superior mobile robot system, but rather to show how traditional and nouvelle (in Brooks’ parlance) approaches to mobile robot design can be applied to creating an effective and functional autonomous system.

The traditional expert system with its rule-based methodology has fallen out of favor in many application fields, mobile robotics included. Exclusively rule-based systems become unmanageable when applied to complex problems: adding rules to define a new situation can have side effects which are difficult to trace. A system that couples traditional procedures with limited use of rules in a modular framework is a recent solution to the problem; it is also the solution applied in the current high-level control system for this mobile robot project. By choosing a fast, flexible implementation language such as C for the low-level system, any reactive architecture expressible in the language’s semantics can be implemented. The low-level system is considered the "on-board" system since it resides on the mobile platform, the physically mobile portion of the robot system.

A modular approach to developing autonomous agents has the advantage of using the most appropriate computing tool for each task. An off-board expert system can be used for reasoning, while a fast on-board compiled program can be used to implement reactive behavior. Certain elements, such as navigational databases, can be stored and accessed off-board, while an obstacle avoidance system should be placed on the robot so that response time is minimized. Such a distributed computing and control system can be referred to as a virtual autonomous mobile robot, in the same sense that modern computers can access virtual memory: a computer’s memory need not physically exist within the computer to be part of the computer’s address space. A mobile robot is a mobile platform with some kind of controller regulating its behavior. In a virtual system, an effector and sensor controller with a down-link interface must be present on the platform, while certain computationally intensive functions can be implemented on a large stationary computer that communicates with the platform through some link, such as a radio modem. Thus the physical components of a mobile robot system may be spread over a large area. In fact, such a system could conceivably control many mobile platforms, each with its own reactive behavior.

The fundamental advantage of the modular approach is in its implementation. Existing algorithms can be applied to path-search problems while a subsumption architecture could be placed on the robot. Using known methods to solve sub-problems relieves the researcher from reinventing the wheel. If problems were to develop in the field, a data collector and simulator, part of the autonomous system, could track the system’s behavior: the simulator would generate a time history of the desired-to-actual robot behavior, while the data collector would monitor inter-module communication. An errant module could then be replaced with an upgraded one. A new field would arise: the Robot Doctor, whose diagnostic apparatus would be the laptop computer, not the stethoscope.

Distributed computing is used at present for mundane tasks such as large mathematical computations, but could serve just as well in controlling robots. Modularizing an autonomous agent has the potential for (1) improving the power and capabilities of such systems, (2) reducing development time since many researchers could apply their skills together, and (3) developing a better understanding of how such a system functions by observing how the modules interact. Research at the University of Washington’s Mobile Robotics Laboratory has built a base that meets the first two objectives. The third objective, of which this project is a part, is currently under study.

This project explores a new low-level robot control system, called LLAMA, that interfaces with a high-level expert-system controller. An efficient message-passing interface, based on token passing, has been developed between LLAMA and the expert system. LLAMA contains an on-board programming language which can be used to combine common functions into a single program, called a macro. A macro, which is invoked by a single token, further improves the efficiency of the interface. Complex behaviors can be implemented as parallel processes through multitasking instructions, and macros can execute concurrently in the multitasking environment. LLAMA serves as a base from which any conceivable low-level architecture can be launched, while being a robot control system in its own right.
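
To illustrate why a macro token improves the efficiency of the interface, consider the following C sketch; the token names and the send_token() routine are hypothetical, invented only to show how several per-cycle messages could collapse into a single one.

    #include <stdio.h>

    /* Hypothetical token values; the actual LLAMA token assignments are
       defined elsewhere. */
    enum { CMD_READ_SONAR = 1, CMD_SET_SPEED, CMD_SET_TURN,
           MACRO_FOLLOW_WALL = 128 };

    /* Stand-in for the radio-modem down-link; here it only logs. */
    static void send_token(int token) { printf("send token %d\n", token); }

    int main(void) {
        /* Without a macro: several messages per control cycle. */
        send_token(CMD_READ_SONAR);
        send_token(CMD_SET_SPEED);
        send_token(CMD_SET_TURN);

        /* With an on-board macro: one token, expanded on the platform. */
        send_token(MACRO_FOLLOW_WALL);
        return 0;
    }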

The LLAMA project was created for three purposes: (1) to provide an efficient low-level actuator and sensor controller; (2) to provide an "open" architecture that allows for future improvements; and (3), as noted earlier, to provide a means of access to the inner workings of an autonomous system for study. The LLAMA low-level controller contains a post-fix interpreter and compiler that resembles the threaded interpretive programming language FORTH, but with added multitasking capabilities. Chapter 6 explores the use of finite state machines to approximate the robot’s behavior. Chapter 8 introduces the trajectory emulator, which approximates the motion of the robot based on given commands. An action and sensor coordinator, the expert computer, provides the needed interface between the interpreter and the hardware controllers and is covered in Chapter 7.
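
To suggest the flavor of a post-fix, stack-based command language, the fragment below evaluates a small post-fix program in C; it is a generic sketch of the technique, not LLAMA's interpreter, and the example program is arbitrary.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define STACK_MAX 32

    int main(void) {
        /* Post-fix notation: operands first, operator after.
           "3 4 + 2 *" computes (3 + 4) * 2. */
        const char *program[] = { "3", "4", "+", "2", "*" };
        int stack[STACK_MAX];
        int sp = 0;
        size_t i;

        for (i = 0; i < sizeof program / sizeof *program; i++) {
            const char *tok = program[i];
            if (strcmp(tok, "+") == 0 && sp >= 2) {
                stack[sp - 2] += stack[sp - 1];   /* pop two, push sum */
                sp--;
            } else if (strcmp(tok, "*") == 0 && sp >= 2) {
                stack[sp - 2] *= stack[sp - 1];   /* pop two, push product */
                sp--;
            } else if (sp < STACK_MAX) {
                stack[sp++] = atoi(tok);          /* push a numeric literal */
            }
        }
        printf("result: %d\n", sp > 0 ? stack[sp - 1] : 0);   /* prints 14 */
        return 0;
    }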

