The single greatest source of potential problems is the drive motor control organization. The drive motor translates the robot, and as such this function could be responsible for damage. Without severely restricting LLAMA's capabilities, certain controls and limits reduce the possibility of erroneous operation. LLAMA incorporates a drive speed governor, which initially limits the drive speed of the robot to one foot per second; the governor can be overridden to achieve greater speeds. LLAMA's multitasking capabilities provide additional insurance against loss of control. If motion sequences are sent to a spawned interpreter, the system interpreter remains available to override the motion sequence at any time. The "h," or halt, command has the highest motive override priority in the LLAMA system. The "halt" command stops the robot but not the motion sequences; the "exit" command, which takes the process id of the errant spawned interpreter as a parameter, terminates the motion sequence.
Confirmation prompts are used in many applications to verify that the user wishes to execute a certain action. They are used when the action may have significant consequences; they give the user an opportunity to reconsider the intended action. Such niceties do not exist in LLAMA, primarily because these checks can be encoded into the G2 knowledge base. Any development work that involves motion requires the equivalent of a "driver's permit" for the developer. To qualify, one must (1) read the LLAMA User's Manual, (2) pass a written exam on the use of the LLAMA system, and (3) successfully operate the robot in a controlled environment. This procedure was established at the University of Washington to instill the importance of safety in new users, as well as an understanding of the limitations inherent in the system. Eventually, other checks could be incorporated into the system to minimize behavior problems.
OS-9 provides a robust framework on which LLAMA resides. In most cases, if an unexpected condition causes LLAMA to crash, OS-9 will kill the faulty process. Usually control will be returned to the system manager, which will stop the motors through the expert computer. Although it is not guaranteed that the robot will stop under all error conditions, there is some system crash control built into LLAMA and OS-9.
All control and sensor reports are parsed by the GSI interface and the contents of the reports, if recognized, are placed into the appropriate G2 interface objects. The LLAMA User's Manual lists these reports and their range of values.
LLAMA does not have a command prompt. FORTH prints a lowercase "ok" prompt when commands are entered correctly; LLAMA simply executes such commands and produces a message only if an error occurs or a status message is warranted. The lack of a prompt improves the message-passing efficiency of the interface. The command "stat," which can be used to ensure a reliable connection between G2 and LLAMA, returns an acknowledgment if a connection exists.
Fig. 7.1. LLAMA System Overview. The master system graph for the "interprt" module. The functional interconnection of the LLAMA interpreter system is shown. The block in dashed lines indicates "interprt" executing as a "spawned interpreter" using the reentrant capabilities of OS-9; thus this diagram represents one physical module, called "interprt," in the OS-9 system.
Fig. 7.2 The LLAMA Starting Sequence. The top-most system diagram. It shows that the concurrent support modules "exdvr" and "spkr" are started in an orderly manner. The module "sysmgr" waits for the module "interprt" to complete before killing the support modules "exdvr" and "spkr".
LLAMA takes advantage of this capability by allowing interpreters to execute concurrently. Ideally we would like to execute commands as concurrent processes to improve system efficiency. By using a reentrant parser, commands can be executed in parallel. Using this technique we save memory and gain a concurrent version of FORTH.
The practical implications of keeping a single dictionary are evident when only 512 kilobytes of memory are available and many spawned interpreters are desirable. If the dictionary were duplicated with each spawned interpreter created, only about two spawned interpreters would fit into memory; in the current scheme, at least eight are possible, and more could be spawned if the speaker computer were not used.
Fig. 7.3. The System Interpreter. The token selection diagram that operates on the input tokens. It is somewhat simplified in that constant and variable definitions are not shown.
Figure 7.4 shows the typical macro execution sequence. Since macros may call other macros, a pointer that identifies the instruction currently being executed must be saved upon entry into the next macro. A dictionary stack is used to save these pointers, allowing macros to call other macros up to the depth of this stack. A macro's pointer is retrieved from the dictionary stack upon completion of the inner macro. An execution list is complete when either the jump address is zero (set to zero by the compiler to mark the end of the list) or the "quit" variable has been set to true. Whenever a macro calls another macro, the "dolist" function is called again with only the pointers changing; in other words, the "dolist" function is recursive. Control is returned to the originating macro by obtaining the return address of the calling function, in this case "dolist"; such control is performed automatically by OS-9.
Fig. 7.4. Macro Execution. A macro is executed as a potentially recursive function. Note that the macro list can call this exact routine, which explains the need for a dictionary stack.
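The recursive "dolist" walk described above can be sketched as follows. This is a Python illustration with invented names, not LLAMA's actual C code; the dictionary stack is modeled here by the interpreter's own call stack, and "spoken," "hello," and "greet" are hypothetical macros.

```python
# Sketch of macro execution via a recursive "dolist" walk.
# Real LLAMA lists hold jump addresses rather than Python objects.

quit = False  # would be set True by a primitive such as "exit"

def dolist(execution_list):
    """Execute a compiled macro: a list of primitives and sub-macros,
    terminated by a zero jump address (None here) set by the compiler."""
    for entry in execution_list:
        if entry is None or quit:    # end-of-list marker, or "quit" set
            return
        if callable(entry):          # primitive: a small C function in LLAMA
            entry()
        else:                        # sub-macro: recurse; the saved
            dolist(entry)            # instruction pointer plays the role
                                     # of a dictionary-stack entry

spoken = []
hello = [lambda: spoken.append("hello"), None]
greet = [lambda: spoken.append("start"), hello,
         lambda: spoken.append("done"), None]
dolist(greet)                        # nested macro runs between primitives
print(spoken)
```

Note that the inner macro's completion returns control to the outer list automatically, exactly as the return address mechanism does in the C implementation.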
Fig. 7.5. Macro Compilation. The graph for the macro compiler. Note that error checking is performed after the entire macro is entered and not during entry.
Since a definition may span many lines and it is not known in advance how large the definition will be, the sequence of tokens entered must be stored in a temporary character list. It would seem that compilation could occur directly on the incoming stream, and algorithms exist that do just that; however, LLAMA's compiler uses a simpler algorithm at the expense of some flexibility. The compiler generates a linear linked list in which each element contains a line of input, called a "block." Once a ";" token is received, the block linking process terminates and the list is concatenated into one large list of tokens, still in ASCII character format. Although the concatenation reduces the compiler's efficiency slightly, the underlying program construction is easier to understand.
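A minimal sketch of this block-collection step, in Python with hypothetical names (the words "drgo," "drmove," and the definition body are illustrative input only):

```python
# Sketch of the compiler's "block" collection: each input line becomes a
# node in a linked list; on the ";" token the blocks are concatenated
# into one flat token list (still ASCII text in real LLAMA).

class Block:
    def __init__(self, line):
        self.tokens = line.split()
        self.next = None

def collect_definition(lines):
    """Link one Block per input line until a block contains ';', then
    concatenate the chain into a single token list."""
    head = tail = None
    for line in lines:
        node = Block(line)
        if head is None:
            head = tail = node
        else:
            tail.next = node
            tail = node
        if ";" in node.tokens:       # end of the definition
            break
    tokens, node = [], head
    while node is not None:          # the concatenation step
        tokens.extend(node.tokens)
        node = node.next
    return tokens

toks = collect_definition([": foot", "drgo 12 drmove", "drstop ;"])
print(toks)
```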
One possible method of executing a macro would be to read a character-format token list; however, such a method would require dictionary look-up at runtime. By encoding the jump addresses of primitives and macros into the execution list, efficiency is improved dramatically: the dictionary does not need to be accessed at runtime. To create an execution list, LLAMA uses a two-stage compilation process. First, it sequences through the character token list, calculating the amount of space needed for the macro list and checking for errors. Since most errors are trapped in this first stage, the compiler can use a simple method of building the execution list: space is reserved based on a token count and the space needed for each token. Second, each token's execution address is placed into the execution list. The execution list also contains jump pointers for conditional primitives, allowing efficient looping within a list. If no errors are found, the macro is added to the dictionary.
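The two-stage process might be sketched as follows. This is an illustrative Python reduction: real execution addresses are replaced by placeholder strings, and jump pointers for conditionals are omitted.

```python
# Sketch of two-stage compilation: stage 1 validates every token against
# the dictionary (so stage 2 cannot fail); stage 2 emits the execution
# list of addresses, ended by a zero (None) jump address.

def compile_macro(tokens, dictionary):
    """tokens is e.g. [':', 'foot', '12', 'drmove', ';'].
    Returns (name, execution_list) or None on error."""
    body = tokens[2:-1]              # strip ': name' and the closing ';'
    # Stage 1: error check and size calculation.
    for tok in body:
        if tok not in dictionary and not tok.lstrip("-").isdigit():
            return None              # unknown word: macro is rejected
    # Stage 2: size is now known; emit the execution addresses.
    execution = [dictionary[t] if t in dictionary else int(t)
                 for t in body]
    execution.append(None)           # zero jump address ends the list
    return tokens[1], execution

dictionary = {"drmove": "addr_drmove", "drstop": "addr_drstop"}
name, elist = compile_macro(": foot 12 drmove drstop ;".split(), dictionary)
print(name, elist)
```

Because every word was checked in stage 1, the emission loop needs no error handling, which is the simplification the text describes.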
The compiler supports recursive macros, where a jump address in an execution list points to its own "dolist" function. Notice that this is not the same as jumping to the beginning of the list, as might occur when using a looped conditional expression. Usually, recursive functions are used to arrive at some inner result while operating on intermediate results; the entire recursion is complete when the inner result is achieved. Recursion demonstrates the flexibility of macro lists; however, it has yet to be applied in a robot control problem.
The compiler allows duplicate macro names, a process called redefinition. The old macro is not removed from the dictionary, since other macros may depend on it. This feature can be used to "hide" macros and primitives from further use, although redefinition should be used with caution because macro dependencies become harder to track. It is false to assume that redefining a sub-macro automatically updates its parent macros: if a macro is redefined with the same name while it is included within another macro definition, the parent macro will continue to reference the original sub-macro, not the new one. The parent macro must therefore be redefined to include the new sub-macro definition. Figure 7.6 shows how definitions may become hidden.
Fig. 7.6. Redefining Macros. The above input stream contains definitions to move one foot and three feet. The second "foot" definition was added since a "drstop" was forgotten. The second "foot" will not become attached to "3feet," which will stay the same. The "3feet" macro must be redefined in the same way as before to include the new "foot" definition.
Fig. 7.7. Preparing a Performance Order for Execution. Description of what happens after an open curly bracket is received on the input stream. Note that a large list of tokens is generated until a closed curly bracket is received.
The system interpreter in LLAMA contains a list of currently active spawned interpreters. Each entry in this list contains pertinent facts that relate to the interpreter, such as its process id number. The list manipulation method is given in figure 7.8. Initially, an entry in the list is considered "empty," since it is not associated with a spawned interpreter. If a spawned interpreter is needed and the node is empty, then a new interpreter is created. The system interpreter passes the new interpreter all relevant dictionary information and a pointer to the token character list. The system interpreter waits until the spawned interpreter compiles the token list, at which point the system interpreter becomes free to perform other tasks. A spawned interpreter proceeds through the steps outlined in figure 7.9. After receiving the token list, the spawned interpreter compiles it into the equivalent of a macro. If the token list has an error, the list is deleted and the spawned interpreter waits for a new token list, otherwise it proceeds to execute the macro created by its compiler. Note that a spawn definition is really a single-use macro, in that after it is executed, it is no longer needed.
Fig. 7.8. Obtaining a Spawned Interpreter. Before a new spawned interpreter is created, there may already exist an idle interpreter that can be used. This diagram illustrates the creation and/or selection of spawned interpreters.
Fig 7.9. Performance Order Execution in a Spawned Interpreter. The execution flow of a spawned interpreter. When an interpreter has finished execution it remains active for further use.
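The reuse-before-spawn policy of figures 7.8 and 7.11 can be sketched as follows. This is a Python illustration with invented field names; spawning a real OS-9 process is modeled simply as assigning a process id.

```python
# Sketch of spawned-interpreter bookkeeping: an idle ("done")
# interpreter is woken and reused before a new process is created.

EMPTY, WORKING, DONE = "empty", "working", "done"

interp_list = [{"status": EMPTY, "pid": None} for _ in range(8)]
next_pid = [100]                     # stand-in for OS-9 pid allocation

def obtain_interpreter():
    """Prefer an existing idle interpreter; otherwise fill an EMPTY
    slot by spawning a new one. Returns None if all are busy."""
    for entry in interp_list:
        if entry["status"] == DONE:          # idle: wake and reuse it
            entry["status"] = WORKING
            return entry
    for entry in interp_list:
        if entry["status"] == EMPTY:         # none idle: spawn anew
            next_pid[0] += 1
            entry.update(status=WORKING, pid=next_pid[0])
            return entry
    return None

a = obtain_interpreter()      # spawns a new interpreter
a["status"] = DONE            # its performance order completes
b = obtain_interpreter()      # the same interpreter is reused
print(b["pid"])
```

Reuse avoids repeated allocation and de-allocation of interpreter variables, which is exactly the heap-fragmentation concern discussed below.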
Macros created by the spawned interpreter are not placed into the dictionary, since this can cause side effects. Assume that a spawn definition could be compiled by the system interpreter. This definition would be placed into the dictionary, and the spawned interpreter would be given the name of the macro to execute, along with the start and end locations of the dictionary so that it could find the macro's execution list. Now assume that another spawn definition is generated and sent to another interpreter while the first interpreter is executing its macro. When the first interpreter completes execution of its macro, it informs the system interpreter that the macro is no longer needed, and the system interpreter removes the macro from the dictionary. The second interpreter, however, holds dictionary start and end locations that encompass the now-missing macro. If this second interpreter searches through the dictionary, it will attempt to access an invalid memory region, a condition that would very likely cause a major LLAMA failure. Alternatively, if a definition placed into the dictionary is removed before this interpreter accesses it, the interpreter will simply not find it. Placing a macro definition into the dictionary during spawned interpreter execution does not cause problems, since new entries are out of range for a search. In LLAMA, by not allowing the removal of any items during run-time, the invalid memory access problem disappears. Experimentation determined that the inability to remove items from the dictionary does not significantly affect the memory resource, though it may cause some confusion when redefining macros with the same names. Separating the spawned interpreter's macro from the dictionary ensures that the dictionary cannot be corrupted through entry deletions.
As soon as the new definition (containing the performance order) is compiled, a signal is sent to the system interpreter, and the system interpreter returns to the command input loop. Once the performance order has completed, the system interpreter receives a signal that the spawned interpreter is done. The spawned interpreter deletes the performance order from OS-9's memory space. Once complete, the spawned interpreter pauses, becomes inactive, and uses very little processor time. A spawned interpreter may be needed while another is executing a performance order; such a situation is illustrated in the timing diagram of figure 7.10. If the system interpreter needs to execute another spawned definition, it can wake up an idle interpreter without starting another, as illustrated in figure 7.11. Reusing interpreters reduces system overhead and results in less memory fragmentation. Since the system heap is used frequently under normal operating conditions, too much activity, such as creating and deleting space for spawned interpreter variables, could result in an out-of-memory condition due to heap fragmentation. Fragmentation occurs when a de-allocated block is too small to satisfy a later allocation request; the memory remains unused until either LLAMA is exited or adjacent blocks are de-allocated, increasing the free block size.
Fig. 7.10. Spawned Interpreter Timing-Overlap. A performance order may continue executing when a new performance order arrives. In this case, a new spawned interpreter is started, illustrated by "Spawn 2" in the lower portion of the diagram.
Fig. 7.11. Spawned Interpreter Timing-No Overlap. If execution of the first performance order completes before the second is ready to transmit, the spawned interpreter is reused for the next performance order.
Fig. 7.12. Terminating LLAMA-The Exit Sequence. The "Exit Interpreter" routine illustrating the attempt to stop execution of spawned interpreters in a controlled manner. The bottom loop guarantees that the spawned interpreter is killed.
Some instructions may require a few milliseconds or more to complete, since an instruction is usually a small C function. The system interpreter waits a predetermined, arbitrary amount of time for the working interpreters to quit their tasks. It then checks the interpreter list (which contains each interpreter's execution status) to see whether the spawned interpreters have in fact stopped working. When a spawned interpreter completes its instruction and exits the macro, it places a "done" into the interpreter list. If a spawned interpreter is done, the system interpreter signals it to terminate and waits for it to do so. The system interpreter then sets these entries to "empty," since the corresponding spawned interpreters no longer exist.
The system interpreter checks the list a third time. If any entries are not "empty" then the interpreters must be destroyed by sending a "kill" signal. Generally, interpreters that show a "working" status in this final stage are locked-up and must be terminated. This last step ensures that no spawned processes remain after LLAMA quits. (G2 will be informed of any working interpreters that have to be killed.) Once the system interpreter terminates, control is returned to the system manager. The system manager proceeds to shut down the expert and speech processes and then quits itself, returning the system to the OS-9 prompt.
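The final passes of the exit sequence can be sketched as follows. This is a Python illustration with invented field names; the OS-9 terminate and "kill" signals are simulated by status changes.

```python
# Sketch of the exit sequence's last passes: "done" interpreters are
# terminated cooperatively; anything still "working" after the grace
# period is assumed locked up and is force-killed (and G2 is told).

def shutdown(interp_list, killed_log):
    for entry in interp_list:
        if entry["status"] == "done":
            entry["status"] = "empty"        # terminated cooperatively
    for entry in interp_list:                # final pass: force removal
        if entry["status"] == "working":     # locked up: send "kill"
            killed_log.append(entry["pid"])  # reported to G2
            entry["status"] = "empty"

interps = [{"pid": 101, "status": "done"},
           {"pid": 102, "status": "working"},
           {"pid": 103, "status": "empty"}]
killed = []
shutdown(interps, killed)
print(killed)                                # only the locked-up one
```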
The command queue accepts command requests from any concurrent module, such as a spawned interpreter. A linked list of pending commands is generated from the command queue input pipe. This queue will typically have fewer than ten elements since LLAMA allows each spawned interpreter to place only one command into the queue. It would be possible for an interpreter to send a group of commands to the queue at one time, but the performance improvement is not significant. Along with the packet pointer, each command request arrives with a priority of execution. The queue is sorted based on the priority of the command. Priority zero is the highest priority available. The priority attribute would be useful if a number of commands are pending and an emergency stop arrives, in which case the emergency stop would be executed first.
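The priority-sorted queue can be sketched as follows (Python; the names and commands are illustrative, and the real queue is built from a pipe and linked list rather than a Python list):

```python
# Sketch of the command queue: kept sorted by priority, with priority 0
# (e.g. an emergency stop) dequeued ahead of all pending commands.

import bisect

queue = []      # entries are (priority, seq, command)
seq = [0]       # sequence number preserves order within a priority

def enqueue(priority, command):
    seq[0] += 1
    bisect.insort(queue, (priority, seq[0], command))

def dequeue():
    """One command is executed per expert-loop iteration."""
    return queue.pop(0)[2] if queue else None

enqueue(5, "poll motion")
enqueue(5, "speak status")
enqueue(0, "emergency stop")     # jumps ahead of the pending commands
print(dequeue())
```

The sequence number keeps equal-priority commands in arrival order, so sorting by priority never reorders a single interpreter's requests.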
If available, only one command is executed per expert loop iteration. The purpose is to distribute computing resources evenly to the other two functions of the expert loop. One of the greatest users of the command queue is the poll motion command. This command could be placed into the sensor blackboard, but the command queue is relatively fast for such commands. LLAMA contains a macro that repeatedly polls the motors until they have stopped moving; this macro accesses the command queue very rapidly. As mentioned earlier, interpreters are paused when accessing the command queue: by pausing the calling interpreter until a response arrives from the expert module, more time slices become available for other active processes, improving system efficiency. The expert loop is only as fast as the slowest responding expert module.
Before it can request data from the blackboard, an interpreter must submit an initialization request for a sensor item. It must do so even if another interpreter is already accessing data from the blackboard. The motivation behind this apparently convoluted approach derives from the concept of a "link count." Initially, the link count of each sensor item is zero. An initialization request increments this count; a de-initialization request decrements it. Whenever the count for an item is zero, the blackboard does not need to obtain that sensor's data from the expert module, improving efficiency. The link count has an interesting property: a sensor that must be activated before it can be accessed needs to be activated only when the count increments from zero to one, and a similar rule applies to deactivation. Thus sensor control becomes transparent to the calling interpreter.
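The link-count rule can be sketched as follows. This is a Python illustration with invented names; "activate" and "deactivate" stand in for the commands actually sent to the expert module.

```python
# Sketch of the blackboard link count: hardware is activated only on
# the 0 -> 1 transition and deactivated only on the 1 -> 0 transition,
# so intermediate readers cost nothing.

events = []                          # records simulated hardware commands

class SensorItem:
    def __init__(self, name):
        self.name, self.links = name, 0

    def init_request(self):
        self.links += 1
        if self.links == 1:                  # first user: turn sensor on
            events.append(("activate", self.name))

    def deinit_request(self):
        self.links -= 1
        if self.links == 0:                  # last user: turn sensor off
            events.append(("deactivate", self.name))

ul = SensorItem("ultrasonic")
ul.init_request()      # interpreter 1: hardware activated
ul.init_request()      # interpreter 2: count only, no activation
ul.deinit_request()    # interpreter 2 leaves: sensor stays on
ul.deinit_request()    # interpreter 1 leaves: hardware deactivated
print(events)
```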
Once an interpreter initializes its connection to the blackboard, it may begin access. A primitive is used to request data from the blackboard, and eventually that data is placed on the stack. A typical sequence of instructions to initialize the ultrasonic blackboard, obtain five sets of data and de-initialize the blackboard is given as follows:
    ulon 5 0 do ulsave loop uloff

Since it is possible to access the blackboard more rapidly than it can obtain new sensor data, LLAMA is configured to wait until new data arrives. This prevents old data from being used in calculations. If all 24 ultrasonic sensors are active, each set of data is obtained every 0.3 seconds, so the above operation will take approximately 1.5 seconds to complete. The "ulsave" function is temporally bound.
There can be many readers (interpreters) reading the blackboard concurrently. When any reader is reading, the writer, the sensor blackboard controller, must wait until they have completed. To ensure that the writer eventually gets access to the blackboard, the writer asserts its writing flag to indicate that it wishes to write; any new reader will note this flag and wait for the writer. This approach helps prevent, in parallel-processing terms, "starvation" of the writer. Starvation occurs when a competing process is shut out of accessing a resource. A similar approach is used to prevent starvation of the readers. Semaphores, parallel process control functions built into the operating system, control the flow of readers and the writer, ensuring that flags and control structures are manipulated by the appropriate processes in an orderly fashion. The efficiency of the mutual readers/writer algorithm is related to the design of OS-9's process-switching kernel. Tests seem to indicate that the readers/writer implementation does not suffer from starvation under the kernel's scheduling, so the mutual method outlined above appears to be an appropriate choice for a readers/writer algorithm.
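The writer-preference scheme can be sketched with counting semaphores. This is a generic Python illustration of the classic readers/writer solution described above, not LLAMA's actual OS-9 implementation; the "readtry" semaphore plays the role of the writer's flag that new readers must check.

```python
# Sketch of a writer-preference readers/writer lock built from
# semaphores, analogous to the sensor blackboard scheme.

import threading

class ReadersWriter:
    def __init__(self):
        self.resource = threading.Semaphore(1)  # guards the blackboard
        self.rmutex = threading.Semaphore(1)    # guards read_count
        self.wmutex = threading.Semaphore(1)    # guards write_count
        self.readtry = threading.Semaphore(1)   # the writer's "wish to write" flag
        self.read_count = 0
        self.write_count = 0

    def start_read(self):
        self.readtry.acquire()       # new readers wait if a writer is queued
        self.rmutex.acquire()
        self.read_count += 1
        if self.read_count == 1:
            self.resource.acquire()  # first reader locks out the writer
        self.rmutex.release()
        self.readtry.release()

    def end_read(self):
        self.rmutex.acquire()
        self.read_count -= 1
        if self.read_count == 0:
            self.resource.release()  # last reader admits the writer
        self.rmutex.release()

    def start_write(self):
        self.wmutex.acquire()
        self.write_count += 1
        if self.write_count == 1:
            self.readtry.acquire()   # assert the flag: block new readers
        self.wmutex.release()
        self.resource.acquire()

    def end_write(self):
        self.resource.release()
        self.wmutex.acquire()
        self.write_count -= 1
        if self.write_count == 0:
            self.readtry.release()   # drop the flag: readers may proceed
        self.wmutex.release()

rw = ReadersWriter()
rw.start_read(); rw.start_read()     # two concurrent readers are allowed
rw.end_read(); rw.end_read()
rw.start_write(); rw.end_write()     # the writer then gets exclusive access
print("ok")
```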
Designing a sensor blackboard is not trivial. The current scheme can be considered fail-safe, but not highly efficient. A reader that needs new data refers to a counter in the blackboard to see whether the data is new; in LLAMA, the data may not be new for ten accesses of the blackboard. This "wasteful" reading of the blackboard, in which no useful sensor data is retrieved, slows down the writing process and becomes worse as more readers access the blackboard. Waiting for new data also consumes processing time. A solution to these problems might include a flag in the sensor blackboard indicating that a reader is waiting for new data; the reader would put itself to sleep until the writer wakes it with new data. This is not a direct solution, since the OS-9 documentation does not explain what happens if a process goes to sleep while blocking other processes. One possibility is that upon exiting the protected region, the sleeping reader would be awakened due to some quirk of the OS-9 operating system. Such improvements need to be tested under very controlled conditions unless better documentation from Microware becomes available.
Moves, turns, and vectors are primitives that are executed directly by Denning's motor controller expert module; STUA and drud are not. The expert computer contains code that simulates the behavior of such functions. STUA uses Denning's primitive drive and steer commands. It also manipulates Denning's get and set position estimator functions to determine whether the angle or distance of the robot equals the desired amounts. Refer to chapter 8 for a description of the algorithm that controls the line-arc functions.
An additional feature of the line-arc functions is that they suspend the calling interpreter until the function has completed, hence they are temporally bound. A "move" command would need to be followed by a repeated test of motion to achieve the same effect. The "move" command is sent through a serial interface, the SDLC bus, to the motion expert module. This serial interface is shared with other expert modules. If the "move" command did not respond until it completed, other commands would have to wait until the move was done. This explains why "move" is not temporally bound. To illustrate the advantage of using line-arc functions, consider the following scenario:
A STUA function is executed by interpreter number 1. It is tested for completion once per expert loop. Also taking place once per expert loop is the accessing of sensor information for the sensor blackboard. Interpreter number 2 is reading the ultrasonic readings from the blackboard and testing them to see whether the robot is within one foot of any object. Soon, the STUA function has moved the robot near a wall. Interpreter 2 determines that the robot is near an object and that motion should stop. Interpreter 2 sends a "stop" command to the expert computer. Since the command queue and pipe are accessed once per expert loop, the command is received and executed. The stop flag is set, indicating that a stop command was received. The robot stops. The STUA function is entered and checks the stop flag. STUA wakes up interpreter 1, alerts G2 of the STUA stop condition, and terminates. Interpreter 1 attempts to send another STUA that is part of its performance order, but the command is refused, as are all others that follow. G2 stops interpreter 1 using an "exit" command, then sends a "runtil" to reset the line-arc commands and "contemplates" the next move...
The previous example should have illustrated the multitasking capabilities of LLAMA and the inherent process resource sharing that takes place in the expert computer.
Luckily, most command packets sent over the SDLC bus are fewer than fourteen bytes long. The SDLC serial bus is configured for 38400 baud, so the bus has good throughput. No attempt has been made to ascertain how system performance is affected by continuous use of this line. It seems that in most situations a performance bottleneck is not experienced, primarily as a result of sharing work in the expert loop. Thus, although the ultrasonics computer may be accessed once per loop through the sensor blackboard, a motion command, such as "stop," can still be given once per loop. Command "starvation" does not occur.
The speech synthesizer uses a digital filter to simulate the human vocal tract. By sending white noise into the filters and dynamically adjusting them, a crude replica of parts of speech is generated. The manufacturer of the speech synthesizer, GI Electronics, decided that 61 "allophones" strung together in various sequences were sufficient to model human speech. The transmission method of such a string uses Denning's SDLC protocol.
The speech computer serves a supporting role in the LLAMA system. LLAMA can operate well without the speech computer, but the capabilities of the system are greatly enhanced when it is used. Normally, the speech computer waits for input on the speaker pipe. This pipe input is interrupt driven, so the speech computer spends most of its time sleeping and uses very little processor time. The speech computer accepts four types of input: error messages, informational messages, control inputs, and speech string pointers. The formats for these inputs are very similar; each input is fourteen bytes or fewer of readable ASCII text, a format referred to as compact text. If the speech computer is removed from the system, the compact text is diverted to G2, where it can be translated into appropriate messages.
Each input type, with the exception of the speech string pointer, results in any combination of the following three output formats: compact text, identical to what is received on the input pipe; expanded text, which consists of a text sentence that describes the message; and verbal report, a translation of the message into allophones that are "spoken" by the speech synthesizer. Control inputs do not have allophone support.
Part of the initialization sequence of the speech computer involves detection of the speech synthesizer. If it is not present, then the allophone generator is disabled; otherwise, the speech processor is reset. Currently, the interpreter is configured to send a "hello" to the speech computer after initialization, thus when LLAMA is started a spoken "hello" indicates that LLAMA is ready to be used.
Here is the problem: the speech synthesizer module has a buffer of 128 bytes, which empties at approximately two bytes (or allophones) per second. If the buffer fills too rapidly, it will overflow and overwrite the start of the buffer, resulting in garbled speech. To prevent this, the speech computer places a request through the command queue for the number of bytes in the speech synthesizer module's buffer. If there is enough room, it sends the speech string pointer to the expert module; otherwise it waits for the buffer to empty. Any process that sends speech to the speech processor is thus guaranteed that its sequence of allophones will eventually be spoken. If each process had direct access to the speech module's buffer size, there would be no way to establish, in a multitasking environment, the minimum number of bytes free in the speech synthesizer at the moment a command packet is received. A solution to such a problem would require probability analysis and some form of randomizing behavior; the complexity is dramatically reduced if all speech is diverted to the speech computer.
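This flow control can be sketched as follows. In this Python illustration the buffer query and the drain rate are simulated locally rather than obtained through the command queue, and all names are invented.

```python
# Sketch of the speech computer's flow control: before forwarding a
# string of allophones it checks how full the 128-byte synthesizer
# buffer is, and waits while there is not enough room.

BUFFER_SIZE = 128
buffer_used = [0]                     # simulated synthesizer buffer level

def buffer_free():
    return BUFFER_SIZE - buffer_used[0]

def drain(nbytes=2):                  # synthesizer speaks ~2 bytes/second
    buffer_used[0] = max(0, buffer_used[0] - nbytes)

def send_speech(allophones):
    """Wait (here: drain in a loop, standing in for a timed wait) until
    the string fits, so queued phrases are eventually spoken without
    ever overflowing the buffer."""
    while buffer_free() < len(allophones):
        drain()
    buffer_used[0] += len(allophones)

send_speech([0] * 100)
send_speech([0] * 100)                # forces a wait while bytes drain
print(buffer_used[0])                 # never exceeds BUFFER_SIZE
```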
If an interpreter is executing a performance order with many sentences, eventually the performance order will stall while the speech synthesizer's buffer empties. Any commands in the performance order interspersed between speech commands will therefore execute only as fast as the speech synthesizer buffer empties, so, under heavy speech use, short speech phrases interleaved with action commands will tend to lag a few seconds behind the actions. Since it cannot be known when a particular phrase has completed (unless it is the only phrase being spoken), it is difficult to synchronize speech with actions. An allophone queue could be implemented in the speech computer to remedy this problem; it is left as future work.
    err 09 36 05

is represented in text string format as:
"out of memory for name"This same representation is stored as text allophones:
"aw pa5 ax vv pa5 mm eh eh mm or iy pa5 ff ff or pa5 nn1 ey mm pa5"The "pa5" is a space, or quiet delay. This text string is parsed and the code for each allophone element is placed into a command packet. However, the speech computer does additional duty. It notes that a code in the thirties implies a spawned interpreter. The first value, nine, is the interpreterís process id. From these additional qualities it generates the following individualized message:
"spawned interpreter 9 out of memory for name"This additional information could be useful in identifying which interpreter caused the error. Most error messages do not signal a fatal event. This implies that a command stream will continue executing, possibly resulting in further errors. A possible partial solution is to build performances in stages, testing new macros for functionality before incorporating them into larger performances.
LLAMA needs to express messages that are not errors, such as "I am moving in a circle." Such a message is an informational message. It uses a format similar to that of the error messages, but does not include the interpreter number or type.
    2 8 tc 2 13 tc : <name> <macro body> ; 2 7 tc 2 12 tc

The first two sets of codes disable both speech and text output, while the last two sets restore speech and text. Refer to the LLAMA User's Manual for more details on using the control inputs.