- Reactive architectures which emphasize reacting to the immediate environment without maintaining an internal world model,
- Deliberative architectures which focus on maintaining a detailed internal world model, carefully planning actions to interact with the environment, and using a layered system to disseminate plans to actuators,
- Interactive architectures which focus on communication and cooperation within multi-agent systems, and
- Hybrid architectures which provide a balance of reaction, deliberation, and/or multi-agent coordination.
- Beliefs expressing an agent's expectations about the current state of the world;
- Desires expressing preferences over future world states or courses of action;
- Goals describing subsets of desires that an agent might pursue (including reactive, local, and social goals);
- Intentions expressing the specific goal (or set of goals) the agent commits to; and
- Plans representing sets of intentions (each representing a partial plan) to be executed to achieve goals.
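To make these categories concrete, here is a minimal sketch of them as Python data structures, with a trivial deliberation step that commits to a goal. All class names, fields, and the selection policy are my own illustrative assumptions, not part of any BDI or InteRRaP implementation.

```python
from dataclasses import dataclass

# Illustrative sketch of the BDI mental categories listed above;
# names and fields are invented for this example.

@dataclass(frozen=True)
class Desire:
    description: str     # a preference over future world states

@dataclass(frozen=True)
class Goal:
    desire: Desire       # the desire this goal pursues
    kind: str            # "reactive", "local", or "social"

@dataclass
class Intention:
    goal: Goal           # the goal the agent has committed to

@dataclass
class Plan:
    intentions: list     # partial plans executed to achieve the goals

def deliberate(goals):
    """Commit to the first candidate goal -- a deliberately trivial policy."""
    return Intention(goal=goals[0]) if goals else None

g = Goal(Desire("reach charging station"), kind="local")
i = deliberate([g])
print(i.goal.kind)  # -> local
```

A real agent would, of course, use a far richer selection policy than "take the first goal"; the point here is only the relationship between the categories.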
It should be noted that InteRRaP is not a traditional BDI architecture; it attempts to leverage the advantages of a BDI architecture as a component of its hybrid approach to structuring multi-agent systems, distributing the mental categories over three layers. For example, beliefs are split into three layered models:
- A world model containing beliefs about the environment,
- A mental model holding meta-level beliefs the agent has about itself, and
- A social model describing meta-level beliefs about other agents.
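One way to picture this split is a knowledge base partitioned into the three models, where each control layer may only read the models it is entitled to. The following is a minimal sketch under my own assumptions (class name, access table, and method are invented), not InteRRaP's actual implementation:

```python
# Hypothetical partitioned knowledge base: each control layer is
# restricted to a subset of the world/mental/social models.

class KnowledgeBase:
    # which models each control layer may access (illustrative assumption)
    ACCESS = {
        "behavior":    {"world"},
        "local":       {"world", "mental"},
        "cooperative": {"world", "mental", "social"},
    }

    def __init__(self):
        self.models = {"world": {}, "mental": {}, "social": {}}

    def read(self, layer, model, key):
        if model not in self.ACCESS[layer]:
            raise PermissionError(f"{layer} layer may not read the {model} model")
        return self.models[model].get(key)

kb = KnowledgeBase()
kb.models["social"]["agent_B"] = "cooperative"
print(kb.read("cooperative", "social", "agent_B"))  # -> cooperative
# kb.read("behavior", "social", "agent_B") would raise PermissionError
```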
For action deliberation and execution, InteRRaP incorporates three hierarchical control layers described as:
- A behavior-based layer incorporating reactivity and procedural knowledge for routine tasks.
- A local-planning layer that provides means-ends reasoning about local tasks and produces goal-directed behavior.
- A cooperative-planning layer enabling agent reasoning about other agents and supporting coordinated action among agents.
- Behavioral situations which are reactive situations derived purely from the world model,
- Local planning situations which are derived from information in both the world and mental models, and
- Cooperative planning situations which are derived from information from the world, mental and social models.
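Since the class of a situation follows directly from which belief models contributed information to it, recognition can be sketched as a simple classification. This is my own toy rendering of the rule above, not InteRRaP code:

```python
# Hypothetical classifier: a situation's class is determined by the
# belief models that informed it (per the three classes listed above).

def classify_situation(models_used):
    models = set(models_used)
    if "social" in models:
        return "cooperative planning"
    if "mental" in models:
        return "local planning"
    return "behavioral"

print(classify_situation({"world"}))                      # behavioral
print(classify_situation({"world", "mental"}))            # local planning
print(classify_situation({"world", "mental", "social"}))  # cooperative planning
```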
- Layered control, with each layer operating at a different level of abstraction,
- Layered knowledge base allowing restriction of the amount of information available to lower control layers,
- Bottom-up activation wherein the next layer up only gains control if the layer below is unable to deal with the recognized situation, and
- Top-down execution wherein each layer uses operational primitives (or Patterns of Behavior (PoB)) defined in the next lower layer to achieve its goals.
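Bottom-up activation and top-down execution can be illustrated together in a small control loop: control climbs the layer stack until a competent layer is found, and that layer's plan is ultimately carried out through the behavior-based layer's primitives. The layer ordering and the competence table below are illustrative assumptions:

```python
# Sketch of bottom-up activation / top-down execution (illustrative,
# not InteRRaP's actual control loop).

LAYERS = ["behavior", "local", "cooperative"]  # lowest to highest

# which situation classes each layer is competent to handle (assumed)
COMPETENCE = {
    "behavior":    {"behavioral"},
    "local":       {"behavioral", "local planning"},
    "cooperative": {"behavioral", "local planning", "cooperative planning"},
}

def handle(situation):
    # bottom-up activation: escalate until a layer can deal with it
    for layer in LAYERS:
        if situation in COMPETENCE[layer]:
            # top-down execution: the chosen layer's plan is carried out
            # via the behavior-based layer's patterns of behavior (PoB)
            return f"{layer} layer plans; behavior layer executes"
    raise ValueError(f"no layer competent for {situation!r}")

print(handle("behavioral"))            # behavior layer plans; behavior layer executes
print(handle("cooperative planning"))  # cooperative layer plans; behavior layer executes
```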
The diagram above illustrates how these underlying principles shape InteRRaP's control architecture. There are three primary modules: a world interface providing the agent's perception, communication, and action interfaces with its environment; a knowledge base partitioned into three layers, consisting of the world, mental, and social models described previously; and a control unit organized into the three control layers described previously (behavior-based, local-planning, and cooperative-planning). Furthermore, each control layer contains two processes: a situation recognition and goal activation (SG) process and a planning, scheduling, and execution (PS) process. Control moves up from the behavior-based layer until a layer competent to handle the situation is found; action is then directed back down to the behavior-based layer, which is the only layer with direct access to sensors and actuators.
To help limit the scope of responsibility of each layer, each is restricted to a respective portion of the knowledge base. For example, the behavior-based layer only has access to the world model and can only recognize situations warranting a purely reactive response. In contrast, the cooperative-planning layer has access to the social, mental, and world models, allowing it to recognize more complex situations and to plan and pass down execution commands accordingly.
Implications for O-MaSE
As described previously, O-MaSE is a flexible methodology for defining and designing multi-agent systems. While choosing InteRRaP as the preferred architecture does not preclude the use of any O-MaSE task, it does imply the introduction of a new one: a Model Situations Task, which would define the situations the agent may recognize and act upon. Going a step further along these lines, a supporting O-MaSE task may be introduced – a Refine Situations Task – to assign which control layers should be responsible for recognizing and responding to each situation.
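To give a flavor of what these two tasks might produce, here is a toy sketch: the Model Situations Task yields a catalog of situations (each annotated with the belief models that inform it), and the Refine Situations Task assigns each to the lowest competent layer. All situation names and the assignment rule are invented for illustration; neither task exists in O-MaSE today.

```python
# Hypothetical output of a "Model Situations Task": a situation catalog.
situations = {
    "obstacle_ahead":      {"models": {"world"}},
    "battery_low":         {"models": {"world", "mental"}},
    "task_needs_teammate": {"models": {"world", "mental", "social"}},
}

def refine(catalog):
    """Sketch of a "Refine Situations Task": assign each situation to the
    lowest control layer whose knowledge-base access covers its models."""
    for info in catalog.values():
        if info["models"] <= {"world"}:
            info["layer"] = "behavior-based"
        elif info["models"] <= {"world", "mental"}:
            info["layer"] = "local-planning"
        else:
            info["layer"] = "cooperative-planning"
    return catalog

refined = refine(situations)
print(refined["battery_low"]["layer"])  # -> local-planning
```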
This introductory post to InteRRaP only touches upon the major components of this architectural approach in an effort to concisely describe its intent and organization. The interested reader is strongly encouraged to read the references found at the bottom of this post for more detailed information. In the next post, we’ll look at some examples of how O-MaSE was used to define requirements in alignment with the selected InteRRaP architecture.
It should be noted that it is not my intention to follow InteRRaP “to a tee”; rather, I find its overall organization very logical and will use it as inspiration for structuring current project work. For example, trex-autonomy could be a suitable approach for implementing the behavior-based and local-planning layers without negating the underlying principles of InteRRaP or its implied organization. Time (and a lot of trial-and-error) will tell.