An arbitrarily complex system may have many of the features of a robot and yet not be a robot: a desk-top computer is not a robot, whereas a computer that controls a mobile platform is part of a robot. The desk-top computer can be referred to as a static tool, whereas a robot can be classified as an active tool. A candidate system must have the capability to move through its environment before it can be considered a robot. An ordinary automobile, although mobile, neither senses nor reacts to its environment on its own, so it remains a static tool; if the automobile has anti-lock brakes, which sense and affect the environment, albeit slightly, an argument could be made that it is a robot. In our society it is difficult to view an automobile as a robot: it really is a mass of metal and plastic hurtling a human occupant toward some destination. An automated guided vehicle, like the Sea-Tac automated subway system, can be considered a robot. Note that if a mobile platform is controlled by a computer physically removed from the platform, the platform can still be considered a robot and part of a robot system. This is an important observation, since it very nearly describes the LLAMA and G2 mobile robot system.
Mobile robots are a class of robots that have the capability to transport themselves through three-dimensional space. As was just observed, the controller need not be physically present on the mobile platform. To describe this difference in configuration, we will define an Open Autonomous System as a mobile robot that is controlled by an external system, but not by a person, and a Closed Autonomous System as a mobile robot that is entirely self-contained. Using this nomenclature and extending it appropriately, we can describe the LLAMA and G2 project as an Open Near-Autonomous System with capabilities for Closed Autonomy. That is, LLAMA could be programmed to perform some behavior with no external control, although under most circumstances it would be controlled by the G2 expert system. The ultimate goal in robotics is an Autonomous System, either open or closed.
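The nomenclature above can be sketched as a small classification scheme. The following is a hypothetical illustration only; the class and field names are ours and are not part of the LLAMA or G2 software:

```python
from dataclasses import dataclass

@dataclass
class MobileRobot:
    """Classify a mobile robot by where its controller resides.

    controller_onboard: True if the controlling computer rides on the platform.
    controlled_by_person: True if a human, rather than a program, issues commands.
    """
    controller_onboard: bool
    controlled_by_person: bool

    def classification(self) -> str:
        if self.controlled_by_person:
            return "teleoperated (not autonomous)"
        if self.controller_onboard:
            return "Closed Autonomous System"
        return "Open Autonomous System"

# LLAMA under G2 control: external controller, no human in the loop.
llama = MobileRobot(controller_onboard=False, controlled_by_person=False)
print(llama.classification())  # Open Autonomous System
```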
An autonomous agent is an abstraction of an autonomous system. Determining whether a system is truly autonomous requires some formal metrics, which is beyond the scope of this discussion. Many models that represent autonomous (robot) agents separate the robot from the environment. Kaelbling presents one possible model of an autonomous system that has the capability for learning by noting that an "agent can be in many different states of information about the environment, and it must map each of these information states, or situations, to a particular action that it can perform in the world" (Kaelbling 1991, 132). The system she suggests is reactive, which appears to be an essential quality for an autonomous agent. Implementing reactive behavior is trivial; implementing learning is a very difficult problem and will not be covered in this discussion except to note that the LLAMA and G2 systems contain resources that might be configured for learning.
A mobile robot system that can analyze and react to its environment through the robot platform is called robo-centric. A robot interacts with the environment by changing its physical presence over time: it may change the environment when it enables an effector, or it may sense the environment through its sensors. If the robot system senses in a robo-centric manner, its perception of the environment is constantly changing. For example, even though the robot may not move a great distance and may not change its environment significantly, it is very likely that all sensor readings will have changed their values. This large-scale sensor update can be a great burden on sensor processing and interpreting systems. G2 attempts to dampen this potential speed bottleneck by introducing an environmental modeler as part of its knowledge base structure. In essence the LLAMA and G2 system is robo-centric, but contains an environmental modeler to reduce the complexity of sensor processing. This modeler treats certain environmental qualities as absolute, such as level floors, ever-present (read: detectable) walls, recognizable landmarks, etc. These environmental restrictions and the modeler are warranted: a person who has memorized how to walk to the bathroom in the dark, relying only on feel, uses a memorized model of the environment to reach his or her goal. A mobile robot system is much like this person: it relies on a memorized model to compensate for its limited sensing capabilities, which appear as sensor-processing deficiencies.
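One simple way a modeler can dampen the sensor-update bottleneck is to suppress readings that have not changed appreciably since the last report. The sketch below is our own illustration of that idea, assuming a single per-sensor change threshold; it is not taken from the G2 knowledge base:

```python
class SensorDamper:
    """Forward only sensor readings that changed by more than `threshold`."""

    def __init__(self, threshold: float):
        self.threshold = threshold
        self.last = {}  # sensor name -> last forwarded value

    def update(self, readings: dict) -> dict:
        """Return the subset of readings worth passing to the interpreter."""
        significant = {}
        for name, value in readings.items():
            prev = self.last.get(name)
            if prev is None or abs(value - prev) > self.threshold:
                significant[name] = value
                self.last[name] = value
        return significant

damper = SensorDamper(threshold=0.05)
damper.update({"sonar_0": 1.20, "sonar_1": 2.40})   # first pass: both forwarded
changed = damper.update({"sonar_0": 1.21, "sonar_1": 2.80})
print(changed)  # {'sonar_1': 2.8} -- sonar_0 moved only 0.01 m
```

The interpreter then processes a handful of changed readings per cycle instead of the entire sensor suite.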
A direct effector (commonly referred to simply as an effector) in a mobile robot can be a movable appendage (manipulator) or a locomotive device (wheels). Certain sensors can be considered active since they emit waves or particles into the environment as part of the sensing process. An example of such a sensor is the ultrasonic range finder, which sends a high-frequency pulse into the surrounding environment to stimulate a reflection and then detects the reflected wave.
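The range computation behind such a sensor is a time-of-flight calculation: the pulse travels out to the obstacle and back, so the distance is half the round-trip time multiplied by the speed of sound. A minimal sketch (the 343 m/s figure assumes air at roughly 20 °C):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

def ultrasonic_range(round_trip_seconds: float) -> float:
    """Distance to the reflecting surface, in metres.

    The pulse travels to the obstacle and back, hence the division by two.
    """
    return SPEED_OF_SOUND * round_trip_seconds / 2.0

# An echo arriving after 10 ms implies an obstacle about 1.7 m away.
print(round(ultrasonic_range(0.010), 3))  # 1.715
```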
An indirect effector does not change the platform's physical configuration but does affect the environment. For example, an infra-red elevator interface would include a transmitter on the platform and a detector on the elevator control panel; the robot could then request elevator service without making contact with the up/down buttons. If a building containing a robot were "wired" to the robot's supervisor computer, the supervisor could pass a request to the elevator system directly when the robot and elevator come into close proximity. (Note that the building system-supervisor computer-mobile platform combination does not conform to our definition of a mobile robot system, since the building's elements cannot be isolated without impairing the intent of the design.) The effector that would have been used to control the elevator is no longer needed: in the latter variation, the presence of the robot near the elevator and its intent to ride the elevator cause a virtual effector event. There are currently no such events implemented in the LLAMA and G2 system, since it is robo-centric.
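A virtual effector event of this kind could be triggered in the supervisor roughly as follows. This is a hypothetical sketch; the building interface, the proximity radius, and all names are our assumptions, not part of LLAMA or G2:

```python
def near(robot_pos, elevator_pos, radius=2.0):
    """True if the platform is within `radius` metres of the elevator door."""
    dx = robot_pos[0] - elevator_pos[0]
    dy = robot_pos[1] - elevator_pos[1]
    return (dx * dx + dy * dy) ** 0.5 <= radius

class Building:
    """Stand-in for a building elevator system wired to the supervisor."""
    def __init__(self):
        self.requests = 0
    def request_service(self):
        self.requests += 1

def maybe_request_elevator(robot_pos, intends_to_ride, elevator_pos, building):
    """Fire a virtual effector event: no physical effector touches a button."""
    if intends_to_ride and near(robot_pos, elevator_pos):
        building.request_service()
        return True
    return False

b = Building()
maybe_request_elevator((1.0, 1.0), True, (2.0, 1.5), b)
print(b.requests)  # 1
```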
As robots become more complex, the need for self-diagnosis arises. From the perspective of the robot's hardware and software, there is no fundamental difference between the robot itself and its environment: both can be sensed. A robot is composed of different hardware and software modules that attempt to function as a seamless whole. If a module fails or falters, it is imperative that the supervisor be made aware of the fact. Each module may contain sensors that determine the condition of the module. Such sensors are no different from the sensors used to sense the robot's environment; we can refer to them as system sensors. These sensors could be further sub-classed, but current robot technology does not include such variations, so the one term will suffice. Similarly, system effectors are plausible devices, but are rarely found in a robot. A consistent definition for a system effector is any device which causes a change in another device using a means other than a physically guided interface. An I/O port or optical fiber interface would not qualify under this definition; if these were included, virtually any software-to-hardware interface would qualify as an effector! Changes to system parameters or behavior are part of a class called system interface control.
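Treating module health checks as system sensors suggests an implementation in which each module exposes a status probe polled by the supervisor. The module names and the probe interface below are illustrative assumptions, not LLAMA's actual module set:

```python
from typing import Callable, Dict, List

def check_modules(probes: Dict[str, Callable[[], bool]]) -> List[str]:
    """Poll each module's system sensor; return the names of failing modules.

    A probe returns True when its module is healthy. A probe that raises is
    also treated as a failure: a module that cannot report on itself is suspect.
    """
    failed = []
    for name, probe in probes.items():
        try:
            if not probe():
                failed.append(name)
        except Exception:
            failed.append(name)
    return failed

probes = {
    "drive_motors": lambda: True,
    "sonar_ring": lambda: False,  # simulated fault
}
print(check_modules(probes))  # ['sonar_ring']
```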
Future mobile robots may contain modules, perhaps miniature repair robots, that monitor and adjust system performance. LLAMA can be configured to produce either solicited or unsolicited system sensor reports. Additionally, a rudimentary "self-repair" can be implemented: if some event causes an emergency-stop condition, such as running into an object, a macro can be designed that detects the stopped condition and resets the emergency stop automatically.
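Such a self-repair macro amounts to a small detect-and-reset routine. A sketch under the assumption of `is_emergency_stopped`, `reset_emergency_stop`, and `move_backward` primitives (hypothetical names, not LLAMA's actual command set):

```python
def estop_recovery_macro(platform, retreat_cm=10):
    """If the platform is in an emergency-stop state, clear it and back away.

    Backing off before resuming avoids immediately re-triggering the stop
    against the same obstacle.
    """
    if platform.is_emergency_stopped():
        platform.reset_emergency_stop()
        platform.move_backward(retreat_cm)
        return True
    return False

class FakePlatform:
    """Stand-in platform used to exercise the macro."""
    def __init__(self):
        self.stopped = True
        self.moves = []
    def is_emergency_stopped(self):
        return self.stopped
    def reset_emergency_stop(self):
        self.stopped = False
    def move_backward(self, cm):
        self.moves.append(-cm)

p = FakePlatform()
print(estop_recovery_macro(p), p.stopped, p.moves)  # True False [-10]
```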
Since a robo-centric system can rely only on its own sensors (except as noted above), in an open autonomous system sensor data must flow from the implementor back to the supervisor. This is the basis for reports. Similarly, supervisor-to-implementor control is achieved through the use of commands. One advantageous side effect of a robo-centric open autonomous system is that the platform can be modeled as a black box: it has one input from and one output to the stationary system, which lends itself to such an interpretation. A greater difficulty is how to model the environment. Either the environment must be modeled as part of the black box, or the environmental model must somehow interface to the model of the robot's sensors and actuators.
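The black-box view reduces the platform interface to a command stream in and a report stream out. A minimal sketch of that interpretation (the message fields are assumptions for illustration, not the actual LLAMA protocol):

```python
import queue

class PlatformBlackBox:
    """Model the platform as one command input and one report output."""

    def __init__(self):
        self.commands = queue.Queue()  # supervisor -> implementor
        self.reports = queue.Queue()   # implementor -> supervisor

    def step(self):
        """Consume one command and emit the corresponding report."""
        cmd = self.commands.get()
        # A real platform would actuate and sense here; we only acknowledge.
        self.reports.put({"ack": cmd["name"], "status": "done"})

box = PlatformBlackBox()
box.commands.put({"name": "move_forward", "args": [50]})
box.step()
print(box.reports.get())  # {'ack': 'move_forward', 'status': 'done'}
```

The unresolved question raised above is whether the environment lives inside this box or as a separate model coupled to the sensor and actuator interfaces.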
Fig. 4.1. Agent Classes. The mobile robot agent classes are presented in a hierarchical format. Due to space limitations, the Functional class has been placed near the bottom of the diagram.