In business software development, the more experienced a developer becomes, the more he realizes how little he knows.  The only developers I know, for the most part, who believe they’re truly gurus in their profession have around three years’ experience…after that come a few years of realizing they’ve only seen the tip of the iceberg on the way to becoming a craftsman in the art of software development.  But developing software for robotics takes this humbling lesson to staggering new heights.

After writing a line-follower way back when, I (thought I) had my “Ah ha, I’ve got it!” moment.  I’ve had a long series of those moments, only to be dramatically humbled again and again when trying to take on increasing complexity.  The most recent of those humbling moments came when I began working on the planning layer of a project of mine.  This post aims to provide research guidance to others who may be sinking their teeth into this realm.

There are three primary approaches to planning [1]:

  • Programming-based approach which involves anticipating every planning decision and hard-coding the planning mechanism.  This approach, while terrific on smaller problems, does not scale well, is tedious to develop and maintain, and becomes brittle as complexity increases.
  • Learning-based approach which leverages learning algorithms to “teach” the control layer how to approach particular problems.  (See Ethem Alpaydin’s Introduction to Machine Learning, 2nd Ed. for a foray down this path.)
  • Model-based approach, which deduces the planning solution from a model of the planning problem.  This approach in turn encompasses a number of general techniques, including:
    • Classical planning, which uses a basic model expressed in a planning language such as STRIPS, PDDL (the subject of a future post), ADL, or NDDL,
    • Markov Decision Process (MDP), where the model is represented as state transition probabilities and the state is assumed to be fully observable (discoverable with 100% confidence; see the sketch just after this list), and
    • Partially Observable MDP (POMDP), where the state is not assumed to be fully observable.
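
To make the MDP formulation a bit more concrete, here is a minimal value-iteration sketch over a toy three-state model.  The states, actions, probabilities, and rewards are invented purely for illustration; they don’t correspond to any particular planner or domain.

```python
# Minimal value iteration on a toy MDP.
# transitions[state][action] -> list of (probability, next_state, reward)
GAMMA = 0.9      # discount factor
EPSILON = 1e-6   # convergence threshold

transitions = {
    "s0": {
        "left":  [(1.0, "s0", 0.0)],
        "right": [(0.8, "s1", 0.0), (0.2, "s0", 0.0)],
    },
    "s1": {
        "left":  [(1.0, "s0", 0.0)],
        "right": [(0.9, "goal", 1.0), (0.1, "s1", 0.0)],
    },
    "goal": {
        "left":  [(1.0, "goal", 0.0)],
        "right": [(1.0, "goal", 0.0)],
    },
}

def value_iteration(transitions, gamma=GAMMA, epsilon=EPSILON):
    """Compute optimal state values, then extract a greedy policy."""
    values = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, actions in transitions.items():
            # Best expected value over all actions from state s.
            best = max(
                sum(p * (r + gamma * values[s2]) for p, s2, r in outcomes)
                for outcomes in actions.values()
            )
            delta = max(delta, abs(best - values[s]))
            values[s] = best
        if delta < epsilon:
            break
    policy = {
        s: max(actions, key=lambda a: sum(
            p * (r + gamma * values[s2]) for p, s2, r in actions[a]))
        for s, actions in transitions.items()
    }
    return values, policy

if __name__ == "__main__":
    values, policy = value_iteration(transitions)
    print(values)   # converged state values
    print(policy)   # greedy action per state
```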

My current research focus is on emulating probabilistic planning (MDP) using classical planning techniques.  Accordingly, I’d like to share a few key papers and resources that have been of great assistance in my efforts to better understand the planning domain and to prepare for applying it to real-world project work.  (Listed in suggested reading order.)
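
To give a flavor of what “emulating MDPs with classical planning” can look like, one technique from the literature is determinization, in the spirit of FF-Replan’s single-outcome variant: each probabilistic action is collapsed to its most likely outcome so an off-the-shelf classical planner can be applied, with replanning whenever execution diverges from the predicted state.  The sketch below uses hypothetical data structures and is only meant to convey the transformation, not the internals of any particular planner.

```python
# Most-likely-outcome determinization sketch (FF-Replan style).
# The ProbabilisticAction/Outcome structures here are hypothetical.
from dataclasses import dataclass

@dataclass
class Outcome:
    probability: float
    effects: frozenset        # predicates added by this outcome (kept abstract)

@dataclass
class ProbabilisticAction:
    name: str
    preconditions: frozenset
    outcomes: list            # list of Outcome, probabilities sum to 1.0

@dataclass
class DeterministicAction:
    name: str
    preconditions: frozenset
    effects: frozenset

def determinize(action: ProbabilisticAction) -> DeterministicAction:
    """Keep only the single most likely outcome of a probabilistic action."""
    best = max(action.outcomes, key=lambda o: o.probability)
    return DeterministicAction(
        name=action.name,
        preconditions=action.preconditions,
        effects=best.effects,
    )

# Example: a "grasp" action that succeeds 80% of the time.
grasp = ProbabilisticAction(
    name="grasp",
    preconditions=frozenset({"reachable(obj)"}),
    outcomes=[
        Outcome(0.8, frozenset({"holding(obj)"})),
        Outcome(0.2, frozenset({"dropped(obj)"})),
    ],
)

print(determinize(grasp))  # deterministic "grasp" whose effect is holding(obj)
```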

Other papers that may be of interest:

As for planning tools, you’ll certainly want to check out the following as well:

  • Teleo-Reactive Executive (T-REX) hybrid executive for autonomous robotics (uses NDDL as the planning language).  It should be duly noted that T-REX has been successfully used on a number of real-world projects.
  • Fast Downward planning system (uses PDDL as the planning language)
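
If you’d like to drive Fast Downward from your own code, the sketch below shows one way to call its driver script from Python.  The file names and paths are placeholders for your own domain and problem files; astar(lmcut()) is one of the search configurations documented by the Fast Downward project.

```python
# Calling the Fast Downward driver script from Python.
# Paths/file names below are placeholders for your own setup.
import subprocess

result = subprocess.run(
    [
        "./fast-downward.py",         # path to the Fast Downward driver script
        "domain.pddl",                # your PDDL domain file
        "problem.pddl",               # your PDDL problem instance
        "--search", "astar(lmcut())", # A* search with the LM-cut heuristic
    ],
    capture_output=True,
    text=True,
)

print(result.stdout)   # planner output; the plan itself is written to ./sas_plan
```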

This should get you going in the right direction for reading up on background materials and emerging research areas in the realm of planning.  Certainly let me know if you have other tips or references; I’m particularly interested in other planning tools being used in real-world scenarios.

Enjoy!
Billy McCafferty