3 The System Interface

Selection of G2 as a High-Level Working System

The expert system used, G2 from Gensym, is a LISP-based, object-oriented system that supports a broad range of operational techniques (rules, procedures, objects, relations, interface objects, etc.). The user interface uses iconic representations of objects in a graphical workspace environment to create knowledge bases. G2 contains many features that tend to give it a shallow learning curve. G2 also has features that qualify it as a real-time system, including a scheduler that ensures that high-priority tasks execute when needed. G2 is typically used in process control applications.

Optimal Interfaces

One goal of the LLAMA project is to create an efficient interface between G2, the high-level system, and LLAMA, the low-level system. To achieve this goal, commands and reports must be compact yet unambiguous. If the G2-to-LLAMA packet radio interface is configured to echo commands for verification purposes, communications efficiency suffers. Echoing is usually unnecessary: packet radio systems have built-in error checking, and the wire connections between them and the computing system are generally very reliable. Echoing can therefore be disabled in LLAMA, ensuring that commands and reports travel in only one direction.

Command Efficiency

A command is represented by a set of characters giving the name of the command. Command execution efficiency can be improved if certain conventions are followed. Each command sent to LLAMA from G2 must exist in LLAMA's command dictionary; macros and primitives are referenced in the dictionary by name. The system interpreter receives commands and executes them by searching for them in the dictionary. The end of the dictionary, where the most recent definitions are located, is always searched first, speeding execution of recently defined commands. To improve radio communications efficiency, each macro name should be composed of the minimum number of characters needed to describe it uniquely; in general, the length of a macro name should be inversely proportional to its frequency of use. The system interpreter should not be used to execute a complex sequence of instructions, since each instruction must be searched for in the dictionary; such a sequence should be defined as a macro if either (1) it will be used more than once or (2) it must execute with the greatest speed. For example, the "all-stop" command, "s", is composed of only one character and is therefore transmitted with the least possible delay.
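The dictionary lookup described above can be sketched as a linear search that begins at the most recent definition. The sketch below is illustrative only; the class and method names are hypothetical and do not reflect LLAMA's actual interpreter internals.

```python
# Illustrative sketch of a FORTH-style command dictionary searched
# from the most recent definition backward. Names are hypothetical;
# LLAMA's real interpreter internals may differ.

class Dictionary:
    def __init__(self):
        self._entries = []  # oldest first; new definitions are appended

    def define(self, name, action):
        # A redefinition simply appends; the newer entry shadows the
        # older one because lookup scans from the end.
        self._entries.append((name, action))

    def lookup(self, name):
        # Search the end of the dictionary first, so recently defined
        # commands are found fastest.
        for entry_name, action in reversed(self._entries):
            if entry_name == name:
                return action
        raise KeyError(f"undefined command: {name}")

d = Dictionary()
d.define("s", lambda: "all-stop")
d.define("STUACACC", lambda: "steer until angle with correction")
d.define("s", lambda: "all-stop (redefined)")
print(d.lookup("s")())  # the most recent definition wins
```

Because redefinitions shadow older entries, G2 can also redefine a command at run time without disturbing the rest of the dictionary.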

Many of the macros developed for LLAMA will require parameters or arguments; a complex macro might require thirty parameters. Since each parameter might be two or more characters long (including its delimiter), a transmission speed penalty is incurred. If certain parameters are known to be constant over the use period of the macro, they should be encapsulated into the macro. For example, the "steer until angle with correction" command, "STUACACC", requires six parameters. Since the default drive and steer acceleration and correction angle are usually used with this command, these three parameters can be encapsulated as constants into a macro (called "STUAC") with three parameters that performs nearly the same function.
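The encapsulation can be sketched as follows. The parameter names, ordering, and default values below are hypothetical, chosen only to show how baking three constants into "STUAC" shortens the transmitted string (parameters precede the command name, following FORTH's postfix convention):

```python
# Sketch of parameter encapsulation: a three-parameter STUAC macro
# wrapping the six-parameter STUACACC command. Parameter names and
# default values are hypothetical.

DEFAULT_DRIVE_ACCEL = 10       # assumed default drive acceleration
DEFAULT_STEER_ACCEL = 10       # assumed default steer acceleration
DEFAULT_CORRECTION_ANGLE = 5   # assumed default correction angle

def stuacacc(angle, drive_vel, steer_vel, drive_acc, steer_acc, corr):
    # Full six-parameter command string sent over the radio link.
    return f"{angle} {drive_vel} {steer_vel} {drive_acc} {steer_acc} {corr} STUACACC"

def stuac(angle, drive_vel, steer_vel):
    # Three-parameter macro with the usual defaults baked in.
    return f"{angle} {drive_vel} {steer_vel} STUAC"

long_form = stuacacc(90, 20, 15, DEFAULT_DRIVE_ACCEL,
                     DEFAULT_STEER_ACCEL, DEFAULT_CORRECTION_ANGLE)
short_form = stuac(90, 20, 15)
print(len(short_form), "<", len(long_form))  # fewer characters transmitted
```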

If it is known beforehand that certain groupings of commands will be used repeatedly in various performance orders, those groupings can be converted into macros and substituted into the performance order, improving its download time. A performance order will usually use the STUAC version of the "steer until angle with correction" function rather than the six-parameter version. If many of these functions are concatenated, the performance order would be nearly half the size of the equivalent six-parameter performance order.

In some cases, a macro can be rewritten to improve its execution speed. Many of the more advanced but common macros may be written in completely different ways but still perform the same function. The following observation from the PostScript Program Design Manual is also appropriate for FORTH and LLAMA:

"The difference in execution speed between a poorly written C program and well written one may be a factor of two or three at best, if the code is improved dramatically. More often, improvements of 10 percent are considered quite good. . . .

Simply improving the style of a poorly written PostScript language program can yield substantial execution speed improvements, frequently by a factor of ten or more" (Adobe Systems Incorporated 1988, 3).

Such improvements usually require a thorough knowledge of programming in these languages; however, certain general observations on improving program execution can be made:

1. Stack and stack operators should be used whenever possible, as post-fix systems are optimized for their use. Generally, the stack is used to manipulate temporary values.

2. In-line code within loops should be used since nested macro calls exact a significant performance penalty, especially if repeated frequently.

3. Variables may be used in lieu of temporary stack values if the stack manipulations become excessive.

4. Macros should store final results on the stack if these values will be used by a following macro (as opposed to storing values into variables).

5. As noted above, use constants within macros instead of parameters whenever possible.

Report Efficiency

Report efficiency follows from the command efficiency argument. Reports should eliminate redundant data while minimizing character count. An equivalent macro system could be developed for the report process, in which repeated or redundant data is encapsulated into a token, reducing the number of characters in the report. Rather than developing a reverse macro facility, LLAMA macros can be defined to help encapsulate data, at the expense of processing overhead.

For example, LLAMA may return the range readings of all 24 ultrasonic sensors. If only four sensors have been activated, the other twenty readings will be zero and thus are not needed in the report; returning data in this manner is inefficient. All data return formats in LLAMA are constructed from the macro programming language, so if the macro used to create a report is inadequate, a new macro can be defined from G2 at run time and executed by LLAMA. LLAMA thus provides a flexible method of designing reports.

Continuing with the previous example, note that there are trade-offs in designing an efficient report passing system. An ultrasonic reading in the LLAMA system returns all twenty-four range readings at once. The format is as follows:

	ug XX XX XX ... XX
where XX represents a two-character hexadecimal value between 0 and 255. This message is composed of 48 data characters, 2 identifier characters, and 25 delimiters, for a total of 75 characters. There are 24 XX values, or readings, to display; if a reading is zero then XX = 00, so two characters are always sent for each value. Now assume that our report generator (a macro) is allowed to send a single character, "0", if the reading is zero, and two characters otherwise. If every value were zero, the message would shrink to 51 characters. Choosing whether to return one or two characters requires a conditional branch in the report generator macro. Assuming the generator uses a loop to remove the twenty-four values from the stack and assemble them into an output report, the conditional branch must execute 24 times, reducing the efficiency of the display macro. A statistical analysis would show whether the reduced transmission time of the (possibly) shorter packet outweighs the computational penalty of the display macro.
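The two report formats and their character counts can be checked with a short sketch. This is an illustration of the counting argument, not LLAMA's report generator; the assumption that a trailing carriage return is the 25th delimiter is mine:

```python
# Sketch of the two ultrasonic report formats discussed above:
# fixed-width (two hex characters per reading) versus variable-width
# (a single "0" for a zero reading). The trailing carriage return is
# assumed to be the 25th delimiter.

def fixed_report(readings):
    # "ug" identifier + 24 two-character hex values.
    assert len(readings) == 24
    return "ug " + " ".join(f"{r:02X}" for r in readings) + "\r"

def variable_report(readings):
    # Same, but a zero reading is sent as the single character "0".
    assert len(readings) == 24
    return "ug " + " ".join("0" if r == 0 else f"{r:02X}" for r in readings) + "\r"

all_zero = [0] * 24
print(len(fixed_report(all_zero)))     # 75 characters, as computed above
print(len(variable_report(all_zero)))  # 51 characters
```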

Token Objects and Skills

What is remarkable is that nearly every token in LLAMA can be described as an object in G2. Although FORTH is not an object-oriented language, it can be represented using object-oriented basics. A macro in LLAMA is simply a list object containing an ordered sequence of token objects. Since a list is an object, a list can contain a list, much as LLAMA allows nested macros. This observation is fundamental to integrating LLAMA knowledge into G2, and it will be used to describe a consistency checker in chapter 6.
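The list-of-tokens view can be sketched directly. The macro names below are illustrative; only "s", "ug", and the "begin plst until" tokens appear elsewhere in this chapter:

```python
# Sketch of the object view: a macro is a list of tokens, and a
# nested list stands for a nested macro call. Macro names here are
# illustrative.

stop_and_report = ["s", "ug"]                # a simple two-token macro
wait_for_steer = ["begin", "plst", "until"]  # loop until steer motor stops

# A macro containing another macro is just a list containing a list.
turn_then_report = [wait_for_steer, stop_and_report]

def flatten(macro):
    # Expand nested macros into the flat token stream the
    # interpreter would actually execute.
    tokens = []
    for item in macro:
        if isinstance(item, list):
            tokens.extend(flatten(item))
        else:
            tokens.append(item)
    return tokens

print(flatten(turn_then_report))
# ['begin', 'plst', 'until', 's', 'ug']
```

A consistency checker of the kind described in chapter 6 could walk such a structure recursively, verifying each token against the dictionary.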

Token objects could be arranged to form skill objects. A skill is a group of actions that can be listed under a common name, and a skill object contains the means of generating the appropriate macro and parameters for an action grouped under that skill. A skill object might be a "rectangle"; from this skill object a desired skill such as "move the robot in a square path" could be obtained. Such a skill could be created in a number of ways depending on how the skill object is specified in G2 (with or without parameters, continuous or abrupt motion, etc.). A macro to perform "execute a square two feet on a side in ten seconds" would very likely differ from a macro to perform "execute a square two feet on a side". A square is a skill; however, the specification of the square affects how the skill is composed into a macro.
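A skill object can be sketched as a small generator class. The command mnemonics ("MOVE", "TURN") and the speed calculation below are hypothetical, invented only to show how two specifications of the same square skill compose into different macros:

```python
# Sketch of a skill object: a named generator that composes a macro
# for a requested variant of the skill. Command mnemonics and
# parameters are hypothetical, not LLAMA's actual command set.

class SquareSkill:
    """Skill object: move the robot in a square path."""

    def generate(self, side_feet, duration_secs=None):
        # One straight leg followed by a 90-degree turn, repeated four
        # times. The speed depends on whether a duration was given.
        if duration_secs:
            speed = 4 * side_feet / duration_secs  # timed square
        else:
            speed = 1.0                            # default speed
        leg = f"{side_feet} {speed:.2f} MOVE 90 TURN"
        return " ".join([leg] * 4)

skill = SquareSkill()
print(skill.generate(2))                     # square, default speed
print(skill.generate(2, duration_secs=10))   # timed square: different macro
```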

Performances and Performance Orders

Complex robot behavior can be specified if the component behaviors are understood. In most cases, component behaviors are skills. A performance is a coordinated grouping of parameterized skills that defines a specific robot operation. A performance order is a composition of performances in which each performance executes under certain time, environmental, and/or system constraints. Such a definition does not address LLAMA specifically, so we append the following: a performance order executes as a parallel process in LLAMA.

In LLAMA, coordinated robot behavior occurs when multiple performance orders interact, either directly or indirectly (as parallel processes). Performance orders may interact directly by setting and reading global variables, or indirectly by changing and observing the robot's configuration. Chapter 6 presents two performance orders that interact indirectly. Why should performance orders interact? Isn't a single performance order adequate to specify a particular robot operation? These questions can be answered by understanding that a performance order is a logical grouping of linear or looped instructions, in the sense that a logical grouping is a sequence of behaviors. Ideally, as behaviors become more complex, they should occur not in sequence but in parallel, because many behaviors can be considered either (1) adjuncts, needed only under certain conditions, or (2) concurrent, such as the human trait of walking and talking simultaneously.

Hans Moravec, a respected researcher in mobile robotics, has the following observation on parallel behaviors in mobile robots:

"Our work at Carnegie Mellon with integrated tasks for a mobile robot suggests that basic processes should be organized into modules that run concurrently. A navigation program driving the robot to a desired location might, for example, coexist with the ones that watch out for surprises and dangers. If a stairwell-detecting module concludes that a hazard is near, it will take over control of the robot until the danger was past" (Moravec 1988, 37).

It should be noted that Moravec helped Denning design their three-wheeled mobile robot; ours is a recent version of the original.

The parallel approach suggested by Moravec is very similar to the coordinated robot behavior achieved by executing multiple performance orders in parallel. This layering of modules is more "horizontal" than Brooks' hierarchical subsumption architecture. Spreading processing capability and control among modules of near-equal weight reduces the reliance on any one part of the architecture. The disadvantage is that more hardware and software are needed for such a distributed architecture. LLAMA does not function well when more than about three such modules run in parallel, due to the low speed of the single microprocessor running the entire system through the OS-9 process-switching kernel.

Concept Representation

G2 is fundamentally a symbolic processing system. According to Wah, "Symbolic processing is defined as those computations that are performed at the same or more abstract level than the word level" (Wah, et al. 1989, 511). Furthermore, by Wah's definition, G2 is an open system and is non-deterministic in nature in that it changes itself over time (Wah, et al. 1989, 511). For example, G2 can create new functions for LLAMA at run time based on the current need; thus the system is non-deterministic. In addition, G2 belongs to the class of "production systems" in that it uses a rule-based inference engine that operates on antecedents and consequents. The production system in G2 is a variation of traditional production systems: G2 uses a scheduler to achieve real-time operation. The scheduler will always try to activate high-priority rules and delay low-priority rules if the inference engine gives the scheduler too many rules to fire (Gensym Corporation 1990, 47).
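The priority-based scheduling idea can be sketched with a priority queue. This is an illustration of the general mechanism, not Gensym's implementation; the rule names and the per-cycle firing budget are invented:

```python
# Sketch of priority-based rule scheduling: when more rules are ready
# than can fire in one cycle, higher-priority rules fire first and
# lower-priority rules are deferred. Rule names and the notion of a
# per-cycle "budget" are hypothetical, not Gensym's implementation.

import heapq

def schedule(ready_rules, budget):
    # ready_rules: list of (priority, name); lower number = higher priority.
    # Fire at most `budget` rules this cycle; defer the rest.
    queue = list(ready_rules)
    heapq.heapify(queue)
    fired = [heapq.heappop(queue)[1] for _ in range(min(budget, len(queue)))]
    deferred = [name for _, name in queue]
    return fired, deferred

rules = [(3, "log-telemetry"), (1, "obstacle-stop"), (2, "replan-path")]
fired, deferred = schedule(rules, budget=2)
print(fired)     # ['obstacle-stop', 'replan-path']
print(deferred)  # ['log-telemetry']
```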

G2 has the ability to represent relationships between objects, either through connections or relations. Arbitrarily connected objects are meaningless; if we define the permissible connections of an object, it becomes easier to analyze the structure of the connected objects. For example, if the following sequence of commands is sent to LLAMA, it will loop until the robot's steer motor has stopped turning:

	{ begin plst until }
G2 can define a loop as a "begin-until" combination. This loop has the temporal quality of capturing control forever or until a condition is met. The plst token returns the motion status of the steer motor. The concept "wait until motor has stopped turning" is generated as "wait until [condition]", with the condition being "motor inactive". Thus a concept has emerged from a sequence of instructions. A mapping could be created that would allow generation of descriptions of any macro. Such a system would be useful in understanding the strings of commands generated by an automated performance order generator.
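The mapping from token sequence to concept can be sketched as a small pattern recognizer. Only "begin", "plst", and "until" come from this chapter; the phrasing table and the fallback behavior are assumptions:

```python
# Sketch of the concept mapping described above: recognize the
# "begin ... until" loop form and generate an English description.
# The meaning table and fallback behavior are illustrative.

TOKEN_MEANINGS = {
    "plst": "steer motor is inactive",
}

def describe(tokens):
    # A begin-until loop maps to "wait until [condition]"; anything
    # else falls back to the raw token string.
    if tokens and tokens[0] == "begin" and tokens[-1] == "until":
        condition = " ".join(TOKEN_MEANINGS.get(t, t) for t in tokens[1:-1])
        return f"wait until {condition}"
    return " ".join(tokens)

print(describe(["begin", "plst", "until"]))
# wait until steer motor is inactive
```

A fuller mapping would recurse into nested macros, describing each body before composing the loop description.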