7 The LLAMA Robot Control System

This chapter examines the inner workings of LLAMA and describes the modules that make it up. The aim is to show how the modules work and to describe their capabilities and restrictions.

Importance of Robust Software

Since LLAMA resides and executes not in a desktop computer but in a robot, the potential for physical harm and damage caused by software problems is ever present. Of somewhat less consequence, error-prone or inconsistent operation can cause users to limit their use to known system behaviors, which would under-utilize the software. At worst, users would choose alternative software if the burden of operation became too great. Such observations are not astounding or revolutionary; these concerns are ever-present in commercial software and apply equally in the educational environment.

The single greatest source of potential problems is the drive motor control organization. The drive motor translates the robot, and as such this function could be responsible for damage. Without severely restricting LLAMA's capabilities, certain controls and limits help reduce the possibility of erroneous operation. LLAMA incorporates a drive speed governor, which initially limits the drive speed of the robot to one foot per second. The governor can be overridden to achieve greater speeds. LLAMA's multitasking capabilities provide additional insurance against loss of control. If motion sequences are sent to a spawned interpreter, the system interpreter remains available to override the motion sequence at any time. The "h," or halt, command has the highest motive override priority in the LLAMA system. The "halt" command stops the robot but not the motion sequences. The "exit" command, which takes the process id of the errant spawned interpreter as a parameter, will terminate the motion sequence.

Confirmation prompts are used in many applications to verify that the user wishes to execute a certain action. They are used if the action may have significant consequences; they give the user an opportunity to reconsider the intended action. Such niceties do not exist in LLAMA primarily because these checks could be encoded into the G2 knowledge base. Any development work that involves motion requires the equivalent of a "driver's permit" for the developer. To qualify, one must (1) read the LLAMA User's Manual, (2) pass a written exam on the use of the LLAMA system, and (3) successfully operate the robot in a controlled environment. This procedure was established at the University of Washington to instill the importance of safety in new users, as well as an understanding of the limitations inherent in the system. Eventually, other checks could be incorporated into the system to minimize behavior problems.

OS-9 provides a robust framework on which LLAMA resides. In most cases, if an unexpected condition causes LLAMA to crash, OS-9 will kill the faulty process. Usually control will be returned to the system manager, which will stop the motors through the expert computer. Although it is not guaranteed that the robot will stop under all error conditions, there is some system crash control built into LLAMA and OS-9.

Report Interface Formats

Since LLAMA is part of a robot control system, report formats are optimized for interfacing to G2. LLAMA system error and message reports occur in three distinct formats: G2 receives a compact encoded message, user messages arrive in readable text format, and user messages can be spoken by the robot's speech synthesizer. The user may select which combination of these three is used. User text messages are preceded by a caret symbol "^" so the G2 interface may direct such messages to its appropriate message objects. Currently such messages are directed to G2's message board, a workspace containing time-stamped messages from LLAMA and G2.

All control and sensor reports are parsed by the GSI interface and the contents of the reports, if recognized, are placed into the appropriate G2 interface objects. The LLAMA User's Manual lists these reports and their range of values.

LLAMA does not have a command prompt. FORTH prints the lowercase "ok" prompt when commands are entered correctly; LLAMA simply executes such commands and produces a message only if an error occurs or a status message is warranted. The lack of a prompt improves the message passing efficiency of the interface. The command "stat," which can be used to ensure a reliable connection between G2 and LLAMA, will return an acknowledgment if a connection exists.

The Interpreter

The LLAMA interpreter performs many duties. It must parse input, compile tokens, spawn tokens, manage spawned processes, execute threaded macros, interface to the speech computer, and send commands to the expert module driver, to name the most important functions. These activities have been grouped into a functional graph, figure 7.1. The connections between boxes indicate the calling hierarchy of processes. For user convenience, LLAMA is started through a system manager as given in figure 7.2. The system manager initializes pipes for inter-process communication and creates events for semaphore multitasking operations while starting needed concurrent processes.

Fig. 7.1. LLAMA System Overview. The master system graph for the "interprt" module. The functional interconnection of the LLAMA interpreter system is shown. The block in the dashed lines indicates "interprt" executing as a "spawned interpreter" using the reentrant capabilities of OS-9, thus this diagram represents one physical module called "interprt" in the OS-9 system.

Fig. 7.2 The LLAMA Starting Sequence. The top-most system diagram. It shows that the concurrent support modules "exdvr" and "spkr" are started in an orderly manner. The module "sysmgr" waits for the module "interprt" to complete before killing the support modules "exdvr" and "spkr".

Configurations

The system manager, "sysmgr," starts three concurrent processes in the following order: the expert module driver, "exdvr"; the speech computer, "spkr"; and finally the interpreter, "interprt". Once the desired modules have been started, the system manager becomes idle and enters a sleep mode, consuming very little processor time. To start LLAMA, the command "sysmgr" is given to OS-9 followed by any appropriate command line options. By specifying certain command line options, LLAMA can be configured differently. If the system manager is started without any options, the interpreter enters work mode with built-in macros. If a "-i" (dash-eye) option is added, interpretive mode is entered. If "-m" is added, the built-in macros are not compiled; this speeds the boot-up and leaves macro definitions to G2 or the user. Finally, if "-s" is added, the speech computer is not used, which can be useful if a speech board is not installed. When the speech computer is disabled, all information messages and errors appear in their native format, a numerically coded text string. The corresponding messages are available in the appendix of the LLAMA User's Manual.

Initialization

When OS-9 starts a new process, it reserves memory for that process's variables. The interpreter will initialize these variables based on information contained in the process module. Control is passed to the "main" function in the "INTERPRT.C" module, which initializes LLAMA further and then performs a bootstrap-like operation. The "PARSER.C" module, which is compiled into the OS-9 "interprt" module, contains text strings that represent LLAMA's built-in macros. First, a list of LLAMA variable and constant definitions that may be used by the built-in macros is initialized and executed. Second, the built-in macros are compiled using LLAMA's macro compiler, and third, any final primitive or macro commands, such as clearing the emergency stop, are executed before control is passed to the input parser (the command, or system, interpreter). This final stage could contain a macro that performs a function, such as autonomous behavior. Control would not be passed to the user, thus providing auto power-up execution of complex macros. Alternatively, the final stage could call a C primitive that joins (passes control) to a user-defined interpreter or controller module, such as an obstacle avoider. This controller could replace the FORTH interpreter with an improved version. Normally, however, control will pass to the input parser.

Reentrant Modules

OS-9 has a clever method of running many processes in a small memory space. In a multi-user environment, one program module may be accessed by multiple users at the same time. Some multitasking systems place a new copy of the program module in memory for each request. OS-9 does not add new modules into memory; instead, it duplicates the module's variable space and assigns a different pointer to the program for each user. The same program code is used by each user, hence the term reentrant module.

LLAMA takes advantage of this capability by allowing interpreters to execute concurrently. Ideally we would like to execute commands as concurrent processes to improve system efficiency. By using a reentrant parser, commands can be executed in parallel. Using this technique we save memory and gain a concurrent version of FORTH.

The Dictionary

The dictionary is a linear linked list of pointers to primitives and macros in the system. The dictionary is placed into user memory, what is commonly referred to as the "heap," during initialization. Only the first, or system interpreter, creates and can modify the dictionary. When the system interpreter spawns a new interpreter, it passes dictionary pointers to the new interpreter. Whereas the system interpreter builds a dictionary, the spawned interpreter does not. The spawned interpreter can access the same macros and primitives as the system interpreter, with the exception of system primitives, and thus LLAMA does not need to duplicate the dictionary list for each interpreter.

The practical implications of creating a single dictionary are evident when only 512 kilobytes of memory are available and many spawned interpreters are desirable. If the dictionary were duplicated with each spawned interpreter created, only about two spawned interpreters would fit into memory; in the current scheme, at least eight are possible, and more could be spawned if the speech computer were not used.
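The dictionary organization described above can be sketched in C. The structure and names below are illustrative assumptions, not LLAMA's actual identifiers; the key points are the linked-list layout and the newest-to-oldest search order, which lets a redefinition shadow an older entry without removing it.

```c
#include <stddef.h>
#include <string.h>

/* Illustrative dictionary entry: a token name plus either a pointer to a
   primitive's C function or a pointer to a macro execution list. */
typedef struct dict_entry {
    const char        *name;       /* token text, e.g. "drstop" */
    void             (*primitive)(void);  /* C function, or NULL for a macro */
    void             **exec_list;  /* macro execution list, or NULL for a primitive */
    struct dict_entry *next;       /* link to the previously defined entry */
} dict_entry;

/* Search newest-to-oldest so a redefined name shadows the older entry. */
dict_entry *dict_find(dict_entry *head, const char *token)
{
    dict_entry *e;
    for (e = head; e != NULL; e = e->next)
        if (strcmp(e->name, token) == 0)
            return e;
    return NULL;
}
```

A spawned interpreter would receive the same head pointer and call the same search; because entries are never removed at run-time, the list it sees is always valid.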

The Input Parser

The input parser performs two functions: it filters incoming characters and refers to the dictionary for execution parameters; the latter function is illustrated in figure 7.3. Delimiters in the input stream, such as control characters, are converted into spaces, while multiple spaces are removed. The resulting line of commands is then parsed. Each token is tested to see if it is a number; if it is, it is placed on the stack. If a token is not a number, the general dictionary is searched and the command is executed. If the command is not in the general dictionary, the system dictionary is checked. If it can't be found, an "unknown token" error results. This sequence is repeated until a "bye" command is received; only then is LLAMA shut down. The input stream is interrupt driven, which means that if there are no characters in the input stream, the system interpreter enters a sleep state, reducing the processing burden on the operating system.

Fig. 7.3. The System Interpreter. The token selection diagram that operates on the input tokens. It is somewhat simplified in that constant and variable definitions are not shown.
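The filtering and number-test steps of the parser can be sketched in C. These helpers are illustrative, not LLAMA's actual code: one converts control characters to spaces and collapses runs of spaces in place, the other decides whether a token goes to the stack.

```c
#include <ctype.h>
#include <stdlib.h>

/* Convert control characters to spaces and collapse space runs in place;
   leading and trailing spaces are also stripped. */
void filter_line(char *s)
{
    char *src = s, *dst = s;
    int last_space = 1;               /* start true to strip leading spaces */
    for (; *src; src++) {
        char c = iscntrl((unsigned char)*src) ? ' ' : *src;
        if (c == ' ') {
            if (!last_space) *dst++ = ' ';
            last_space = 1;
        } else {
            *dst++ = c;
            last_space = 0;
        }
    }
    if (dst > s && dst[-1] == ' ')    /* drop a trailing space */
        dst--;
    *dst = '\0';
}

/* Return 1 and store the value if the token is a number, else 0;
   a non-number token would instead go to the dictionary search. */
int parse_number(const char *tok, long *out)
{
    char *end;
    long v = strtol(tok, &end, 10);
    if (end == tok || *end != '\0')
        return 0;
    *out = v;
    return 1;
}
```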

System Interpreter Execution Modes

There are four primary modes of execution handled by the system interpreter: primitive, macro, definition, and spawn. Primitive execution is straightforward: the dictionary entry contains a pointer to the primitive's C function; the C function is executed and control is returned to the parser. Macro execution obtains a pointer to a macro execution list, or simply execution list, and calls a C function, called "dolist," that executes the instructions in the list. Definition execution calls a compiler to compile a sequence of instructions into a macro. Spawn execution calls a compiler to compile a sequence of instructions into a temporary list and spawns an interpreter to execute those instructions.
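The primitive and macro cases can be sketched as a simple dispatch; the entry layout and the stub "dolist" below are assumptions for illustration only, since the real entry also carries a name and the real "dolist" walks the execution list.

```c
#include <stddef.h>

typedef enum { PRIMITIVE, MACRO } entry_kind;
typedef void (*prim_fn)(void);

typedef struct {
    entry_kind kind;
    prim_fn    prim;   /* C function when kind == PRIMITIVE */
    int       *list;   /* execution list when kind == MACRO (opaque here) */
} entry;

/* Counters so the dispatch can be observed. */
static int prim_calls = 0;
static void demo_prim(void) { prim_calls++; }

static int dolist_calls = 0;
static void dolist(int *list) { (void)list; dolist_calls++; }

/* A primitive entry calls its C function directly; a macro entry
   hands its execution list to "dolist". */
void execute(const entry *e)
{
    if (e->kind == PRIMITIVE)
        e->prim();
    else
        dolist(e->list);
}
```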

Macros

The FORTH term for macros is secondaries. They are called secondaries because they are composed of an ordered collection of primaries, or primitives, other secondaries, and possibly numbers. FORTH implements an execution list by generating a sequence of jump instructions to the appropriate functions. Such an approach is very efficient and makes secondary execution quite rapid. LLAMA uses a less efficient method for executing macros due to the limitations of the C implementation, but the concepts are the same. Every macro execution, in FORTH and LLAMA, must be executed through a "dolist" function. The "dolist" function initializes a pointer to the start of the execution list and, in LLAMA, provides a means of controlling flow through an execution list.

Figure 7.4 shows the typical macro execution sequence. Since macros may call other macros, a pointer that identifies which instruction is currently being executed must be saved upon entry into the next macro. A dictionary stack is used to save these pointers, allowing macros to call other macros up to the depth of this stack. A macro's pointer is retrieved from the dictionary stack upon completion of the inner macro. An execution list is completed when either the jump address is zero (set to zero by the compiler to mark the end of the list) or the "quit" variable has been set to true. Whenever a macro calls another macro, the "dolist" function is called again with only changes in pointers taking place; in other words, the "dolist" function is recursive. Control is returned to the originating macro by obtaining the return address of the calling function, in this case "dolist". Such control is performed automatically by OS-9.

Fig. 7.4. Macro Execution. A macro is executed as a potentially recursive function. Note that the macro list can call this exact routine, which explains the need for a dictionary stack.
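The recursive behavior of "dolist" can be sketched in C. The cell format is invented for illustration, and in this simplified sketch the C call stack stands in for LLAMA's explicit dictionary stack; the zero end marker and the "quit" variable behave as described above.

```c
#include <stddef.h>

/* Each cell is either a primitive function pointer or a pointer to an
   inner macro's list; a cell with both fields NULL marks the end. */
typedef struct cell {
    void (*prim)(void);       /* primitive to call, or NULL */
    struct cell *inner;       /* inner list when prim is NULL */
} cell;

int quit = 0;   /* set true by a "quit" signal to unwind every level */

void dolist(const cell *list)
{
    const cell *ip;
    for (ip = list; !quit && (ip->prim || ip->inner); ip++) {
        if (ip->prim)
            ip->prim();
        else
            dolist(ip->inner);    /* recursion; ip is saved on the C stack */
    }
}

/* Demonstration primitive. */
static int count = 0;
static void inc(void) { count++; }
```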

Compiling Macros

The compiler will compile a command sequence that is delimited by the commands ":", or colon, and ";", or semi-colon. The compiler must first verify that the sequence of tokens to be compiled is valid and then create an execution list of pointers to the corresponding C functions. When the system is not compiling, the parser takes each primitive or macro in the incoming stream and calls the corresponding C function. If the ":" character is received, the parser is disconnected from the input process and all successive tokens are captured by the compiler until the ";" character is received. This process is shown graphically in figure 7.5.

Fig. 7.5. Macro Compilation. The graph for the macro compiler. Note that error checking is performed after the entire macro is entered and not during entry.

Since a definition may span many lines and it is not known how large the definition will be, the sequence of tokens entered must be stored in a temporary character list. It would seem that compilation could occur directly on the incoming stream, and there exist algorithms that do just that; however, LLAMA's compiler uses a simpler algorithm at the expense of being less flexible. The compiler generates a linear linked list where each list element contains a line of input, called a "block". Once a ";" token is received, the block linking process terminates and the list is concatenated into one large list of tokens. This list is still in ASCII character format. Although the concatenation reduces the compiler's efficiency slightly, the underlying program construction is easier to understand.

One possible method of executing a macro would involve reading a character-format token list; however, such a method would require dictionary look-up at runtime. By encoding the jump address of primitives and macros into the execution list, efficiency is improved dramatically; the dictionary does not need to be accessed at runtime. To create an execution list, LLAMA uses a two-stage compilation process. First, it sequences through the character token list, calculating the amount of space needed for the macro list and checking for errors. Since most errors are trapped in the first stage, the compiler can use a simple method of building the execution list. Space for the execution list is reserved based on a token count and the space needed for each token. Second, each token's execution address is placed into the execution list. The execution list also contains jump pointers for conditional primitives, allowing efficient looping within a list. If no errors are found, the macro is added to the dictionary.
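The two-stage process can be sketched as follows. The toy two-word dictionary and the space-delimited token string are assumptions for illustration, and the jump pointers for conditionals are omitted; what remains is the essential shape: pass one counts and validates, pass two reserves space and emits execution addresses, with a NULL slot as the end marker.

```c
#include <stdlib.h>
#include <string.h>

typedef void (*prim_fn)(void);

/* Toy dictionary reduced to two known primitives. */
static void prim_a(void) {}
static void prim_b(void) {}

static prim_fn lookup(const char *tok)
{
    if (strcmp(tok, "a") == 0) return prim_a;
    if (strcmp(tok, "b") == 0) return prim_b;
    return NULL;
}

/* Pass 1: count tokens, failing on the first unknown one.
   Returns -1 on error, else the token count. */
static int count_and_check(const char *src)
{
    char buf[256], *tok;
    int n = 0;
    strncpy(buf, src, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';
    for (tok = strtok(buf, " "); tok; tok = strtok(NULL, " ")) {
        if (!lookup(tok)) return -1;
        n++;
    }
    return n;
}

/* Pass 2: reserve the list and emit execution addresses; the final
   NULL slot marks the end of the execution list. */
prim_fn *compile(const char *src)
{
    char buf[256], *tok;
    prim_fn *list;
    int i = 0;
    int n = count_and_check(src);
    if (n < 0) return NULL;                  /* error trapped in pass 1 */
    list = malloc((n + 1) * sizeof *list);
    if (!list) return NULL;
    strncpy(buf, src, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';
    for (tok = strtok(buf, " "); tok; tok = strtok(NULL, " "))
        list[i++] = lookup(tok);
    list[n] = NULL;
    return list;
}
```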

The compiler supports recursive macros, where a jump address in an execution list points to its own "dolist" function. Notice that this is not the same as jumping to the beginning of the list, as might occur when using a looped conditional expression. Usually, recursive functions are used to arrive at some inner result while operating on intermediate results. The entire recursion is complete when the inner result is achieved. Recursion demonstrates the flexibility of macro lists; however, it has yet to be applied in a robot control problem.

The compiler will allow duplicate macro names, a process called redefinition. The old macro is not removed from the dictionary since other macros may be dependent on the old macro. This feature can be used to "hide" macros and primitives from further use, although redefinition should be used with caution since macro dependencies become harder to track. The assumption that redefining sub-macros will automatically track their parent macros is false. If a macro is redefined with the same name and is currently included within another macro definition, then the parent macro will continue to reference the original sub-macro, not the new one. This implies that the parent macro will have to be redefined to include the new sub-macro definition. Figure 7.6 shows how definitions may become hidden.

Fig. 7.6. Redefining Macros. The above input stream contains definitions to move one foot and three feet. The second "foot" definition was added since a "drstop" was forgotten. The second "foot" will not become attached to "3feet," which will stay the same. The "3feet" macro must be redefined in the same way as before to include the new "foot" definition.

Spawning a New Interpreter

Concurrent interpreter execution is a feature that FORTH lacks. FORTH was designed around sequential processing and does not support, in its basic form, concurrent execution of macros and primitives. LLAMA has capabilities for executing concurrent macros and primitives to improve system performance. Figure 7.7 illustrates how a spawned definition is received and compiled. In many cases, the procedure for executing a sequence of tokens concurrently is similar to that used by the compiler. The "{", open curly bracket, starts a spawn definition, and a "}", closed curly bracket ends the definition. The spawn process creates a linked list of token "blocks" and concatenates them into a long token character list, much like the compile process.

Fig. 7.7. Preparing a Performance Order for Execution. Description of what happens after an open curly bracket is received on the input stream. Note that a large list of tokens is generated until a closed curly bracket is received.

The system interpreter in LLAMA contains a list of currently active spawned interpreters. Each entry in this list contains pertinent facts that relate to the interpreter, such as its process id number. The list manipulation method is given in figure 7.8. Initially, an entry in the list is considered "empty," since it is not associated with a spawned interpreter. If a spawned interpreter is needed and the node is empty, then a new interpreter is created. The system interpreter passes the new interpreter all relevant dictionary information and a pointer to the token character list. The system interpreter waits until the spawned interpreter compiles the token list, at which point the system interpreter becomes free to perform other tasks. A spawned interpreter proceeds through the steps outlined in figure 7.9. After receiving the token list, the spawned interpreter compiles it into the equivalent of a macro. If the token list has an error, the list is deleted and the spawned interpreter waits for a new token list, otherwise it proceeds to execute the macro created by its compiler. Note that a spawn definition is really a single-use macro, in that after it is executed, it is no longer needed.

Fig. 7.8. Obtaining a Spawned Interpreter. Before a new spawned interpreter is created, there may already exist an idle interpreter that can be used. This diagram illustrates the creation and/or selection of spawned interpreters.

Fig 7.9. Performance Order Execution in a Spawned Interpreter. The execution flow of a spawned interpreter. When an interpreter has finished execution it remains active for further use.
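The slot-selection logic of figure 7.8 can be sketched as follows. The slot statuses and the stand-in for OS-9 process creation are illustrative assumptions; the point is the preference order: reuse an idle interpreter, otherwise spawn into an empty slot, otherwise report that all interpreters are busy.

```c
#define MAX_SPAWNED 8    /* about eight fit into 512K, per the text */

typedef enum { EMPTY, IDLE, WORKING } slot_status;

typedef struct {
    slot_status status;
    int pid;             /* OS-9 process id of the spawned interpreter */
} slot;

static slot slots[MAX_SPAWNED];   /* all EMPTY initially */
static int next_pid = 100;        /* stand-in for a real spawn call */

/* Return the index of a usable slot: prefer waking an idle interpreter,
   otherwise create one in an empty slot; -1 if all are working. */
int acquire_interpreter(void)
{
    int i, empty = -1;
    for (i = 0; i < MAX_SPAWNED; i++) {
        if (slots[i].status == IDLE) {       /* wake and reuse */
            slots[i].status = WORKING;
            return i;
        }
        if (slots[i].status == EMPTY && empty < 0)
            empty = i;
    }
    if (empty >= 0) {                        /* spawn a new interpreter */
        slots[empty].pid = next_pid++;
        slots[empty].status = WORKING;
        return empty;
    }
    return -1;                               /* all busy */
}
```

Reuse in preference to creation is what keeps heap fragmentation down, as discussed below.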

Macros created by the spawned interpreter are not placed into the dictionary since this can cause side effects. Assume that a spawn definition could be compiled by the system interpreter. This definition would be placed into the dictionary. The spawned interpreter would be given the name of the macro to execute. It would also be given the start and end locations of the dictionary so that it could find the macro's execution list. Now assume that another spawn definition is generated and sent to another interpreter while the first interpreter is executing its macro. Assume that the first interpreter completes execution of its macro and informs the system interpreter that the macro is no longer needed. The system interpreter proceeds to remove the macro from the dictionary. The second interpreter, however, contains a dictionary start and end location which encompasses the now-missing macro. If this second interpreter searches through the dictionary, it will attempt to access an invalid memory region. This condition would very likely cause a major LLAMA failure. Alternatively, if a definition is placed into the dictionary that is removed prior to this interpreter accessing it, the interpreter will not find it. Placing a macro definition into the dictionary during spawned interpreter execution does not cause problems, since new entries are out of range for a search. In LLAMA, by not allowing the removal of any items during run-time, the invalid memory access problem disappears. It was determined through experimentation that the inability to remove items from the dictionary does not affect the memory resource significantly; however, it may cause some confusion when redefining macros with the same names. Separating the spawned interpreter's macro from the dictionary ensures that the dictionary cannot be corrupted through entry deletions.

As soon as the new definition (containing the performance order) is compiled, a signal is sent to the system interpreter and the system interpreter returns to the command input loop. Once the performance order has completed, the system interpreter receives a signal that the spawned interpreter is done. The spawned interpreter deletes the performance order from OS-9's memory space. Once complete, the spawned interpreter pauses, becomes inactive, and uses very little processor time. A spawned interpreter may be needed while another is currently executing a performance order: such a situation is illustrated in the timing diagram of figure 7.10. If the system interpreter needs to execute another spawned definition, it can wake up this interpreter without starting another, as illustrated in figure 7.11. Reusing interpreters reduces system overhead and results in less memory fragmentation. Since the system heap is being used frequently under normal operating conditions, too much activity, such as creating and deleting space for spawned interpreter variables, could result in an out-of-memory condition due to heap memory fragmentation. Fragmentation occurs when memory space is de-allocated, where the space de-allocated is too small to use in a memory allocation request. The memory remains unused until either LLAMA is exited, or adjacent blocks are de-allocated, increasing the memory block size.

Fig. 7.10. Spawned Interpreter Timing-Overlap. A performance order may continue executing when a new performance order arrives. In this case, a new spawned interpreter is started, illustrated by "Spawn 2" in the lower portion of the diagram.

Fig. 7.11. Spawned Interpreter Timing-No Overlap. If execution of the first performance order completes before the second is ready to transmit, the spawned interpreter is reused for the next performance order.

Exiting LLAMA

When a "bye" is detected, LLAMA will quit. Figure 7.12 shows the procedure involved. The system interpreter will check its spawned interpreter list and see if any interpreters are working. If some interpreters are working, a "quit" signal is sent to these interpreters. When these interpreters receive this signal, they set their macro "quit" variable to "true". A macro with a true condition for the "quit" variable will terminate (including all calling, or parent, macros, if any) after the current instruction.

Fig. 7.12. Terminating LLAMA-The Exit Sequence. The "Exit Interpreter" routine illustrating the attempt to stop execution of spawned interpreters in a controlled manner. The bottom loop guarantees that the spawned interpreter is killed.

Some instructions may require a few milliseconds or more to complete since an instruction is usually a small C function. The system interpreter waits a predetermined, arbitrary amount of time for the working interpreters to quit their tasks. It then checks the interpreter list (which contains each interpreter's execution status) to see if the spawned interpreters have in fact stopped working. If a spawned interpreter completes the instruction and exits the macro, it places a "done" into the interpreter list. If the spawned interpreter is done, the system interpreter signals the interpreter to terminate and waits for it to do so. The system interpreter then sets these entries to "empty" since the corresponding spawned interpreters no longer exist.

The system interpreter checks the list a third time. If any entries are not "empty" then the interpreters must be destroyed by sending a "kill" signal. Generally, interpreters that show a "working" status in this final stage are locked-up and must be terminated. This last step ensures that no spawned processes remain after LLAMA quits. (G2 will be informed of any working interpreters that have to be killed.) Once the system interpreter terminates, control is returned to the system manager. The system manager proceeds to shut down the expert and speech processes and then quits itself, returning the system to the OS-9 prompt.

Expert Computer

The expert computer manages the flow of commands to the expert modules. It also provides an efficient base for low-level motion control and manages a sensor blackboard. These three features are accessed as needed: the expert module computer checks each of the three in sequence to see if they need servicing. It would be possible to separate these functions into individual concurrent modules; however, elaborate algorithms would be required to ensure efficient operation, both in speed and memory usage. Together, these three functions are executed in a sequence called the expert loop.

Command and Reply Packets and the Command Queue

When an interpreter generates a command packet for an expert module, it sends the expert computer a pointer for the command and a reply packet. The pointer is sent through an OS-9 pipe to the command queue of the expert computer. The expert computer eventually sends the command packet to the expert module through the SDLC bus. Immediately after sending the packet, the interpreter goes to sleep. Once a reply is received by the expert computer, it is placed in the reply packet region and the expert computer wakes the interpreter. The interpreter has the option of acting upon the reply packet, which could contain ultrasonic readings for example, or it may ignore it.

The command queue accepts command requests from any concurrent module, such as a spawned interpreter. A linked list of pending commands is generated from the command queue input pipe. This queue will typically have fewer than ten elements since LLAMA allows each spawned interpreter to place only one command into the queue. It would be possible for an interpreter to send a group of commands to the queue at one time, but the performance improvement is not significant. Along with the packet pointer, each command request arrives with a priority of execution. The queue is sorted based on the priority of the command. Priority zero is the highest priority available. The priority attribute would be useful if a number of commands are pending and an emergency stop arrives, in which case the emergency stop would be executed first.
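The priority-sorted insertion can be sketched as a simple linked-list operation; the structure is illustrative and omits the packet pointer and other per-command fields. Inserting with "less than or equal" keeps arrival order among commands of equal priority, so an emergency stop at priority zero always moves to the head.

```c
#include <stddef.h>

typedef struct cmd {
    int priority;          /* 0 = highest, e.g. an emergency stop */
    struct cmd *next;
} cmd;

/* Insert keeping the queue sorted by ascending priority value;
   commands of equal priority keep their arrival order. */
cmd *enqueue(cmd *head, cmd *c)
{
    cmd **p = &head;
    while (*p && (*p)->priority <= c->priority)
        p = &(*p)->next;
    c->next = *p;
    *p = c;
    return head;
}
```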

At most one command is executed per expert loop iteration. The purpose is to distribute computing resources evenly among the other two functions of the expert loop. One of the greatest users of the command queue is the poll motion command. This command could be placed into the sensor blackboard, but the command queue is relatively fast for such commands. LLAMA contains a macro that repeatedly polls the motors until they have stopped moving. This macro accesses the command queue very rapidly. As mentioned earlier, interpreters are paused when accessing the command queue: by pausing the calling interpreter until a response arrives from the expert module, more time slices become available for other active processes, improving system efficiency. The expert loop is only as fast as the slowest responding expert module.

The Sensor Blackboard

Another process that is part of the expert loop is the sensor blackboard. The sensor blackboard provides a unified and synchronized source of sensor information and is implemented in global memory, much like the system interpreter's dictionary. Any interpreter may request sensor information from the blackboard. Since the data contained in the blackboard is not atomic, some form of semaphore control is essential. In parallel systems, if data can be written or read by a process such that the operating system will not switch to another process during that operation, then that operation is considered atomic. If access to the blackboard were not coordinated in the atomic sense, it would be possible for one process to read data that was in the process of being written, which would likely result in only partially valid data. LLAMA manages the blackboard with a mutually exclusive readers-writer algorithm using a readers array described by Axford (Axford 1989, 114). The writer obtains sensor data outside the protected region and copies it in from a holding buffer while inside the region; the reverse approach is used by the readers. This ensures that a process can be locked out of the protected region only for as long as it takes to copy the data into a processing area. A double-buffered approach, as given in Axford (Axford 1989, 114), is unlikely to be suitable for the sensor blackboard, since the writer already uses a separate buffer for obtaining sensor data. All sensor blackboard updating and reading are performed in one protected region to simplify the interprocess protocol.

Before it can request data from the blackboard, an interpreter must issue an initialization request for a sensor item. It must do so even though another interpreter may be accessing data from the blackboard. The motivation behind this apparently convoluted approach is derived from the concept of a "link count". Initially, the link count of each sensor item is zero. An initialization request increments this count; a de-initialization request, similarly, decrements it. Whenever the count for an item is zero, the blackboard does not need to obtain that sensor data from the expert module, improving efficiency. The link count has an interesting property: a sensor that must be activated before it can be accessed needs to be activated only when the count increments from zero to one. A similar rule applies to deactivation. Thus sensor control becomes transparent to the calling interpreter.
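The link-count rule fits in a few lines of C; the function names are illustrative, with the activation stubs standing in for the real expert module commands.

```c
/* Link-count sketch: the sensor hardware is activated only on the
   0 -> 1 transition and deactivated on the 1 -> 0 transition. */
static int link_count = 0;
static int activations = 0, deactivations = 0;

static void activate_sensor(void)   { activations++; }   /* stand-in */
static void deactivate_sensor(void) { deactivations++; } /* stand-in */

/* Called for each interpreter's initialization request (e.g. "ulon"). */
void sensor_init(void)
{
    if (link_count++ == 0)
        activate_sensor();
}

/* Called for each de-initialization request (e.g. "uloff"). */
void sensor_deinit(void)
{
    if (--link_count == 0)
        deactivate_sensor();
}
```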

Once an interpreter initializes its connection to the blackboard, it may begin accessing data. A primitive is used to request data from the blackboard, and eventually that data is placed on the stack. A typical sequence of instructions to initialize the ultrasonic blackboard, obtain five sets of data, and de-initialize the blackboard is given as follows:

	ulon 5 0 do ulsave loop uloff
Since it is possible to access the blackboard more rapidly than it can obtain new sensor data, LLAMA is configured to wait until new data arrives. This prevents old data from being used in calculations. If all 24 ultrasonic sensors are active, each set of data is obtained every 0.3 seconds, thus the above operation will take approximately 1.5 seconds to complete. The "ulsave" function is temporally bound.
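
The wait-for-new-data behavior of a read such as "ulsave" can be sketched as follows, assuming the blackboard keeps an update counter that the writer bumps with each new data set (a counter of this kind is how a reader detects freshness in LLAMA). The function and callback names here are illustrative, not LLAMA's actual API.

```python
def read_fresh(get_counter, get_data, last_counter, wait=lambda: None):
    """Poll the blackboard until its update counter changes, so that old
    data is never used in calculations; then take a copy of the data."""
    while get_counter() == last_counter:
        wait()                       # data unchanged: let the writer run
    return get_data(), get_counter()
```

Because the reader blocks until the counter advances, obtaining five fresh ultrasonic sets at one set per 0.3 seconds takes the roughly 1.5 seconds noted above.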

There can be many readers (interpreters) reading the blackboard concurrently. When any reader is reading, the writer (the sensor blackboard controller) must wait until all readers have completed. To ensure that the writer eventually gains access to the blackboard, the writer asserts its writing flag to indicate that it wishes to write. Any new reader will note this flag and wait for the writer. This approach helps prevent, in parallel processing terms, "starvation of the writer". Starvation occurs when a competing process is shut out of access to a resource. A similar approach is used to prevent starvation of the readers. Semaphores, parallel process control functions built into the operating system, serve to control the flow of readers and the writer. The use of semaphores ensures that flags and control structures are manipulated by the appropriate processes in an orderly fashion. The efficiency of the mutual readers/writer algorithm is related to the design of OS-9's process-switching kernel. Tests indicate that the readers/writer implementation does not suffer from kernel-induced starvation problems, so the mutual method outlined above appears to be an appropriate choice for a readers/writer algorithm.
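
The flag-and-wait discipline above can be sketched with condition variables standing in for OS-9 semaphores. This is a loose illustration of the scheme, not LLAMA's actual code: the class and member names are hypothetical, and Python's threading primitives replace the OS-9 calls.

```python
import threading

class Blackboard:
    """Readers/writer coordination with a writer-waiting flag: new readers
    defer to a waiting writer, preventing starvation of the writer."""

    def __init__(self):
        self.data = {}
        self.readers = 0
        self.writer_waiting = False
        self.lock = threading.Lock()
        self.can_read = threading.Condition(self.lock)
        self.can_write = threading.Condition(self.lock)

    def read(self, key):
        with self.lock:
            while self.writer_waiting:      # note the flag; wait for the writer
                self.can_read.wait()
            self.readers += 1               # enter as a reader
        try:
            return self.data.get(key)       # copy data to a processing area
        finally:
            with self.lock:
                self.readers -= 1
                if self.readers == 0:
                    self.can_write.notify() # last reader out: wake the writer
    def write(self, key, value):
        with self.lock:
            self.writer_waiting = True      # assert the writing flag
            while self.readers > 0:         # wait until all readers complete
                self.can_write.wait()
            self.data[key] = value          # protected update
            self.writer_waiting = False
            self.can_read.notify_all()      # release any blocked readers
```

Readers that arrived before the flag was raised finish normally; only new arrivals are held back, which is what keeps both sides from starving.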

Designing a sensor blackboard is not trivial. The current scheme can be considered fail-safe, but not highly efficient. A reader that needs new data refers to a counter in the blackboard to see if the data is new. In LLAMA, this data may not be new for ten accesses of the blackboard. This "wasteful" reading of the blackboard, which retrieves no useful sensor data, slows down the writing process, and the problem worsens as more readers begin accessing the blackboard. Also, waiting for new data consumes processing time. A solution to these problems might include a flag in the sensor blackboard indicating that a reader is waiting for new data; the reader would put itself to sleep until the writer wakes it up with new data. This is not a direct solution, since the OS-9 documentation does not explain what will happen if a process goes to sleep while blocking other processes. One possibility is that upon exiting the protected region, the sleeping reader would be awakened due to some quirk in the OS-9 operating system! Such improvements need to be tested under very controlled conditions unless better documentation from Microware becomes available.

The Virtual Line-arc Functions

The expert computer is designed to interface with motors and sensors as part of the command queue; this makes it an ideal platform on which to create complex motion functions. The expert computer contains five line-arc functions (also referred to as "until" functions). Line-arc functions are motion primitives that execute paths (and trajectories) composed of lines and arcs. LLAMA implements the line-arc functions as virtual functions. The interpreter executes the line-arc functions as if they were motion primitives; however, in the expert computer these functions are composed of various control and sensor interfaces coupled with computational elements, hence the term "virtual functions." From the interpreter, the "STUA," or "steer until angle," command executes an arc trajectory. The "drud," or "drive until distance," command performs straight-line motion to a specified distance. The STUA-type functions have an advantage over the similar LLAMA primitive DSV: by stacking a number of STUAs, STUAEs, STUACs, and druds together in a macro, uninterrupted complex motion composed of arcs and straight-line segments is possible.

Moves, turns, and vectors are primitives that are directly executed by Denning's motor controller expert module; STUA and drud are not. The expert computer contains code that simulates the behavior of such functions. STUA uses Denning's primitive drive and steer commands. It also manipulates Denning's get- and set-position-estimator functions to determine whether the angle or distance of the robot equals the desired amount. Refer to chapter 8 for a description of the algorithm that controls the line-arc functions.
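
The simulation behind a line-arc function can be sketched as a completion-test loop. The callbacks here are hypothetical stand-ins for Denning's steer, stop, and position-estimator primitives, and the busy-wait is a simplification: the real expert computer performs one completion check per expert loop rather than spinning.

```python
def run_stua(target_angle, estimator, steer, stop, tolerance=1.0):
    """Steer-until-angle sketch: begin an arc trajectory, then poll the
    position estimator until the robot's heading reaches the target."""
    steer()                                        # issue the arc trajectory
    while abs(estimator() - target_angle) > tolerance:
        pass                                       # wait for the next estimate
    stop()                                         # target reached: end the arc

# Simulated run: headings advance 5 degrees per estimate toward 45 degrees.
headings = iter(range(0, 91, 5))
log = []
run_stua(45, lambda: next(headings),
         lambda: log.append("steer"), lambda: log.append("stop"))
```

Because the whole sequence lives in the expert computer, a macro of stacked line-arc calls can chain arcs and straight segments without returning control between them.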

An additional feature of the line-arc functions is that they suspend the calling interpreter until the function has completed, hence they are temporally bound. A "move" command would need to be followed by a repeated test of motion to achieve the same effect. The "move" command is sent through a serial interface, the SDLC bus, to the motion expert module. This serial interface is shared with other expert modules. If the "move" command did not respond until it completed, other commands would have to wait until the move was done. This explains why "move" is not temporally bound. To illustrate the advantage of using line-arc functions, consider the following scenario:

A STUA function is executed by interpreter number 1. It is tested for completion once per expert loop. Also taking place once per expert loop is the accessing of sensor information for the sensor blackboard. Interpreter number 2 is reading the ultrasonic readings from the blackboard and testing them to see if the robot is within one foot of any object. Soon, the STUA function has moved the robot near a wall. Interpreter 2 determines that the robot is near an object and that motion should stop. Interpreter 2 sends a "stop" command to the expert computer. Since the command queue and pipe are being accessed once per expert loop, the command is received and executed. The stop flag is set, indicating that a stop command was received. The robot stops. The STUA function is entered and it checks the stop flag. STUA wakes up interpreter 1, alerts G2 of the STUA stop condition, and terminates. Interpreter 1 attempts to send another STUA that is part of its performance order, but the command is refused, as are all others that are received. G2 stops interpreter 1 using an "exit" command, then it sends a "runtil" to reset the line-arc commands and "contemplates" the next move...

The previous example illustrates the multitasking capabilities of LLAMA and the inherent process resource sharing that takes place in the expert computer.

The SDLC Interface

The SDLC interface is a serial multi-drop bus with one master and many hosts. Each expert module is a host on the SDLC bus. No expert module may communicate with another expert module; each may communicate only with the expert computer or a similar master. Another restriction is that an application, such as the expert computer, communicating through OS-9 to the expert modules must wait for a reply after a command is sent. Only after a reply is received may another command be sent. The implication is that only one process in OS-9 should talk to the expert modules at a time. The easiest way to accomplish this is to use the expert computer, which is provided with LLAMA. Alternatively, a semaphore system could be devised for all modules that wish to communicate with the expert modules; a global work area would define which process sends the next command. Note that such a scheme would require duplicate SDLC control code (for reading and writing packets) in each module that accesses the SDLC bus. Such an approach is left for future work.
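
The send-then-wait-for-reply discipline can be sketched as follows. The class and transport callable are hypothetical illustrations of the constraint, not LLAMA's code; in LLAMA the expert computer itself plays this serializing role.

```python
import threading

class SDLCMaster:
    """One command in flight at a time: a sender must hold the bus until
    its reply arrives, so concurrent processes are serialized by a lock
    (standing in for the semaphore scheme suggested above)."""

    def __init__(self, transport):
        self.transport = transport          # fn(packet) -> reply bytes
        self.bus_lock = threading.Lock()    # only one process on the bus

    def send_command(self, packet):
        with self.bus_lock:                 # wait for the bus to be free
            reply = self.transport(packet)  # blocks until the reply arrives
        return reply
```

A process that funnels all traffic through one such object needs no duplicate packet-handling code in each module.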

Luckily, most command packets sent over the SDLC bus are fewer than fourteen bytes long. The SDLC serial bus is configured for 38400 baud, so the bus has good throughput. No attempt has been made to ascertain how system performance is affected by continuous use of this line. It seems that in most situations a performance bottleneck is not experienced, primarily as a result of sharing work in the expert loop. Thus, while the ultrasonics computer is accessed once per loop through the sensor blackboard, a motion command, such as "stop," can still be given once per loop. Command "starvation" does not occur.
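
A rough per-packet cost supports the throughput claim. The figure below assumes ten bits on the wire per byte (eight data bits plus framing overhead); actual SDLC framing differs somewhat, so treat this as an order-of-magnitude estimate only.

```python
# Back-of-the-envelope SDLC packet time at the configured baud rate.
baud = 38400                 # bus speed from the text
bits_per_byte = 10           # assumed framing overhead, not SDLC-exact
packet_bytes = 14            # typical maximum command packet size

seconds_per_packet = packet_bytes * bits_per_byte / baud
print(round(seconds_per_packet * 1000, 2))   # prints 3.65 (milliseconds)
```

At under four milliseconds per packet, even one packet per expert loop leaves the bus mostly idle.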

Speech Computer

Robot speech for the Denning Mobile Robot was developed as part of this thesis at the University of Washington. This work occurred in three stages: first, the SDLC protocol was partially reverse-engineered; second, a speech synthesizer module with an SDLC controller was built from engineering principles, not from a kit; and third, a LLAMA module was developed to use the capabilities of the speech synthesizer. The first two stages are described in chapter 8, while the third follows here.

The speech synthesizer uses a digital filter to simulate the human vocal tract. By sending white noise into the filter and dynamically adjusting it, a crude replica of the sounds of speech is generated. The manufacturer of the speech synthesizer, GI Electronics, decided that 61 "allophones" strung together in various sequences were sufficient to model human speech. Such a string is transmitted using Denning's SDLC protocol.

The speech computer serves a supporting role in the LLAMA system. LLAMA can operate well without the speech computer, but the capabilities of the system are greatly enhanced when it is used. Normally, the speech computer waits for input on the speaker pipe. This pipe input is interrupt driven, so the speech computer spends most of its time sleeping and uses very little processor time. The speech computer accepts four types of input: error messages, informational messages, control inputs, and speech string pointers. The formats for these inputs are very similar; each input is fourteen bytes or fewer and is composed of readable ASCII text, a format referred to as compact text. If the speech computer is removed from the system, the compact text is diverted to G2, where it can be translated into appropriate messages.

Each input type, with the exception of the speech string pointer, results in any combination of the following three output formats: compact text, identical to what is received on the input pipe; expanded text, which consists of a text sentence that describes the message; and verbal report, a translation of the message into allophones that are "spoken" by the speech synthesizer. Control inputs do not have allophone support.

Part of the initialization sequence of the speech computer involves detection of the speech synthesizer. If it is not present, then the allophone generator is disabled; otherwise, the speech processor is reset. Currently, the interpreter is configured to send a "hello" to the speech computer after initialization, thus when LLAMA is started a spoken "hello" indicates that LLAMA is ready to be used.

Generating Arbitrary Speech Strings

A facility has been developed by other students (Hutchinson and Smit; see Acknowledgments) to convert text to numbers that represent allophones within G2. The interpreter receives these numbers and places them onto the stack. When a "ts" is received, the appropriate numbers are popped from the stack and formed into a command packet. Rather than sending the packet to the expert computer directly, a pointer to the packet is first routed to the speech computer. The speech computer can then send the pointer on to the expert computer when it is ready. The motivation behind this approach is to ensure that every allophone sent to the speech synthesizer is truly spoken.

Here is the problem: the speech synthesizer module has a buffer of 128 bytes. The buffer empties at approximately two bytes (or allophones) per second. If the buffer fills too rapidly, it will overflow and overwrite the start of the buffer, resulting in garbled speech. To prevent this, the speech computer places a request through the command queue for the number of bytes in the speech synthesizer module's buffer. If there is enough room, it sends the speech string pointer to the expert module; otherwise it waits for the buffer to empty. Any process that sends speech through the speech computer is thus guaranteed that its sequence of allophones will eventually be spoken. If each process had direct access to the speech module's buffer size, there would be no way to establish, in a multitasking environment, the minimum number of bytes free in the speech synthesizer at the moment the command packet is received. A solution to such a problem would require probability analysis and some form of randomizing behavior. The complexity of the problem is dramatically reduced if all speech is diverted to the speech computer.
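
The buffer guard can be sketched as follows. The callbacks are hypothetical stand-ins for the command-queue free-space request and the SDLC transmission, and the polling interval is an assumption; only the 128-byte buffer size and the drain rate come from the text.

```python
import time

BUFFER_SIZE = 128          # speech synthesizer module buffer, in bytes

def send_speech(packet, bytes_free, send, wait=lambda: time.sleep(0.5)):
    """Hold a speech packet until the synthesizer buffer has room for it,
    so the buffer can never overflow and garble speech."""
    while bytes_free() < len(packet):
        wait()             # let the buffer drain (~2 allophones per second)
    send(packet)           # guaranteed to fit: every allophone is spoken
```

Since only the speech computer runs this loop, the free-space figure it reads cannot be invalidated by another sender before the packet goes out.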

If an interpreter is executing a performance order with many sentences, eventually the performance order will stall while the speech synthesizer's buffer empties. This implies that any commands interspersed between speech commands in the performance order will execute only as fast as the speech synthesizer buffer empties. Under heavy speech use, then, short speech phrases interspersed with action commands will tend to lag a few seconds behind the actions. Since it cannot be known when a particular phrase has completed (unless it is the only phrase being spoken), it is difficult to synchronize speech with actions. An allophone queue could be implemented in the speech computer to remedy this problem; this is left as future work.

Error and Information Messages

The LLAMA User's Guide lists every run-time error message that can occur in the LLAMA system (major errors are sent as text directly to G2 and are not noted in the manual); there are approximately 230 generic error messages, and over 3000 individualized messages formed from the generic messages. Each error message is stored in a compact 14-byte representation in the interpreter and error computer. The speech computer translates these error codes into understandable messages. In addition, the speech computer has corresponding allophone representations of these messages. For example, the error:

	err 09 36 05
is represented in text string format as:

	"out of memory for name"
This same representation is stored as text allophones:

	"aw pa5 ax vv pa5 mm eh eh mm or iy pa5 ff ff or pa5 nn1 ey mm pa5"
The "pa5" is a space, or quiet delay. This text string is parsed, and the code for each allophone element is placed into a command packet. The speech computer also performs additional duty: it notes that a code in the thirties implies a spawned interpreter, and that the first value, nine, is the interpreter's process id. From these additional attributes it generates the following individualized message:

	"spawned interpreter 9 out of memory for name"
This additional information could be useful in identifying which interpreter caused the error. Most error messages do not signal a fatal event. This implies that a command stream will continue executing, possibly resulting in further errors. A possible partial solution is to build performances in stages, testing new macros for functionality before incorporating them into larger performances.
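
The expansion from compact code to individualized message can be sketched as follows. The table contents and parsing details are hypothetical; only the example code, the resulting text, and the rule that codes in the thirties denote a spawned interpreter come from the text above.

```python
# Hypothetical fragment of the generic message table; the real system
# stores roughly 230 such messages in the interpreter and error computer.
GENERIC_MESSAGES = {(36, 5): "out of memory for name"}

def expand_error(compact):
    """Expand a compact error string such as "err 09 36 05" into an
    individualized text message, prefixing the spawned interpreter's
    process id when the code falls in the thirties."""
    _, pid, code, sub = compact.split()
    pid, code, sub = int(pid), int(code), int(sub)
    text = GENERIC_MESSAGES[(code, sub)]
    if 30 <= code < 40:          # thirties: error from a spawned interpreter
        return "spawned interpreter %d %s" % (pid, text)
    return text

print(expand_error("err 09 36 05"))
# prints: spawned interpreter 9 out of memory for name
```

The same lookup could feed the allophone table instead of the text table to produce the spoken form of the message.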

LLAMA also needs to express messages that are not errors, such as "I am moving in a circle." Such a message is an informational message. It uses a format similar to that of the error messages, but does not include the interpreter number or type.

Control Inputs

The behavior of the speech controller can be altered by specifying special control codes. For example, while developing a macro, a user may repeatedly recompile a macro definition under the same name. Each recompilation causes a non-fatal "macro redefinition" error. This error is generated by the system interpreter, so a control code can be sent to the speech computer that disables speech messages originating from the system interpreter. The instructions are given below:

	2 8 tc 2 13 tc : <name> <macro body> ; 2 7 tc 2 12 tc
The first two sets of codes disable both speech and text output, while the last two sets restore speech and text. Refer to the LLAMA User's Manual for more details on using the control inputs.

Radio Controller

An auxiliary module developed for LLAMA, the radio controller, "radio," functions like a wire tap on radio communications during robot boot-up. When LLAMA is connected to the radio system, the input and output stream passes through a TNC, or Terminal Node Controller, to a radio communicating with G2's base station. The TNC is a packet radio controller with a microprocessor, RAM buffer, and modem, and is connected to a UHF half-duplex radio. The radio controller determines whether the TNC-radio is connected to OS-9. If it is, the radio controller can be configured to automatically reprogram the TNC into a desired state; otherwise it assumes the system was started in "wireline" mode (which uses an RS-232 cable). The reprogramming ensures that G2 will be able to communicate with the robot and that the TNC is in the proper OS-9 converse mode. Once reprogramming is complete, the radio program terminates.

Expert Module Downloader

LLAMA contains a module called "dlex" that downloads expert modules. Each expert module, with the exception of the speech module, stores its execution code in RAM. Before an expert module can be used, the appropriate code must be downloaded. This code is contained on the boot floppy and needs to be executed only when the robot is first powered up. Future expert modules designed at the University of Washington may not follow Denning's start-up protocol, so this module can be modified to accept any desired protocol. Refer to chapter 8 for more details.
