Robots, mazes, and subsumption architecture

Programming virtual robots in the Java language

Robot simulators can be both serious research tools and, as IBM computer programmer Paul Reiners shows in this article, a route to some serious fun with Java™ programming. Find out how to create light-seeking and maze-navigating virtual robots in the Java language using Simbad — an open source robot simulator based on Java 3D technology — to realize the robotics-design concept of subsumption architecture.


Paul D. Reiners (paul.reiners@gmail.com), Computer Programmer, EMC

Paul Reiners is a Sun Certified Java Programmer and Developer. He is the developer of several open source programs, including Automatous Monk, Twisted Life, and Leipzig. Reiners received his M.S. in Applied Mathematics (Theory of Computation) from the University of Illinois at Urbana-Champaign in May 1991, about eight months before HAL 9000 was first switched on at that same campus (January 12, 1992, to be precise). He lives in Minnesota and in his spare time practices the electric bass and plays in an employee jazz band.



04 December 2007


Introduction

Robotics is a field that long ago left the realm of science fiction to drive advances in industrial automation, health care, space exploration, and other applications. Software robot simulators not only simplify development work for robotics engineers, but they also give researchers a tool for studying artificial intelligence (AI) algorithms and machine learning. One such research-focused simulator is the open source Simbad project, built on Java 3D technology (see Resources). This article shows how to program virtual robots using the Simbad toolkit to realize the robotics-design philosophy of subsumption architecture.

This article starts with a brief overview of robotics and explains the subsumption architecture concept. It then introduces the Simbad toolkit and shows how to implement a subsumption architecture within it. Next, you'll program a simple robot using this architecture. Finally, you'll look at the fascinating world of mazes and program a second robot that, unlike Homer Simpson (see Resources), can find its way out of one. Your robots won't be physical but will "live" in the Simbad virtual world.


Robotics programming

The word robot has no universally accepted definition. For this article's purposes, consider a robot to have three parts:

  • A collection of sensors
  • A program that defines the robot's behavior
  • A collection of actuators and effectors

Traditional robotics

In traditional robotics (that is, robotics before 1986), a robot has a central "brain" that builds and maintains a "map" of the world and makes plans based on that map. First, the robot's sensors — for example, touch sensors, light sensors, and ultrasonic sensors — take in information from its environment. The robot's brain fuses all the information gathered by its sensors and updates the map of its world. The robot then decides on a course of action. It acts through its actuators and effectors. Actuators are basically motors and are attached to effectors, which interact with the robot's environment. Examples of effectors are wheels and arms. (The term actuator is often used loosely to mean either an actuator or an effector.)

In short, a traditional robot takes input from possibly multiple sensors, fuses this sensor information, updates its map of the world, makes a plan based on its current view of the world, and then acts. This method is problematic, though. For one thing, it's highly computation-intensive. Also, keeping a world map up to date is hard because the external world is always changing. Further, many organisms, such as insects, thrive without a map of their external world or even memory; might it be better to try to emulate them? These problems led to a new style of robotics, called behavior-based robotics (BBR). BBR is perhaps the dominant philosophy in robotics labs today.

Subsumption architecture

BBR can be implemented using a subsumption architecture. Its inventor, MIT roboticist Rodney A. Brooks, introduced the approach in his seminal 1986 paper, "A Robust Layered Control System for a Mobile Robot," and made the broader case for behavior-based robotics in his 1990 essay "Elephants Don't Play Chess" (see Resources). Behavior-based robots are built from a set of independent, simple behaviors. A behavior is defined by what triggers it (usually a sensor reading) and the action it takes (which usually involves an effector). Behaviors are layered on top of one another, and when two behaviors conflict, an arbitrator decides which takes precedence. The robot's overall behavior is emergent and, according to BBR proponents, can be greater than the sum of its parts: the higher-level behaviors subsume the lower-level ones. Rather than engineer the robot's overall behavior top-down, you add behaviors and see what emerges.


Simbad: A robotic simulation environment

LEGO Mindstorms

This article focuses on building softbots, but if you want to try building physical robots, LEGO Mindstorms is an excellent robotics kit.

A sign at LEGO Mindstorms headquarters reads: "We will do for robotics what iPod did for music." LEGO introduced the first version of its Mindstorms robotics kit in 1998. The kit was immediately a big seller, bigger than LEGO expected. Although it might seem a little pricey at $250, keep in mind that that's the price of an iPod Classic, and everyone has an iPod.

However, the iPod isn't nearly as hackable as Mindstorms. Shortly after Mindstorms was first released, hardware hackers started pulling apart the Mindstorms RCX brick — the "brains" of Mindstorms robots — and reverse-engineering it. LEGO hadn't anticipated this and wasn't sure whether to allow it or to send out cease-and-desist letters. To the credit of LEGO management, they decided to allow Mindstorms hackers to do what they liked.

The result is a flourishing Mindstorms community (see Resources). Although Mindstorms ships with only a drag-and-drop graphical programming language called NXT-G, software hackers were soon porting other languages, such as C and the Java language, to Mindstorms. As a result, it's estimated that 50 percent of Mindstorms kits are being used by adults.

Simbad allows you to simulate robots in software. According to the project Web site, it "enables programmers to write their own robot controller, modify the environment, and use the available sensors. It is mainly dedicated to researchers/programmers who want a simple basis for studying Situated Artificial Intelligence, Machine Learning, and more generally AI algorithms, in the context of Autonomous Robotics and Autonomous Agents."

Simbad is written in the Java language by Louis Hugues and Nicolas Bredeche. The project, hosted at SourceForge.net, is free for you to use and modify under the conditions of the GNU General Public License.

Technical details

A Simbad world can contain Agents (robots) and inanimate objects (boxes, walls, lights, and so on). Time in the Simbad world is divided into discrete ticks. Simbad schedules time-sharing among Agents. Like physical robots, Simbad Agents have both sensors (distance, touch, light, and so on) and actuators (usually wheels). At each tick, a robot can act.

Agents override the performBehavior() method to define their behavior. In performBehavior(), a robot can take note of sensor readings and set its translational and rotational velocities. performBehavior() executes within a single tick, so it can't issue a command such as "go forward one meter." To get around this limitation, you usually keep track of your robot's state, often with a timer variable that counts how many clock ticks the robot has been in its current state.
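For example, here is a minimal sketch of a counter-based Agent that alternates between driving forward and turning. The class name, constants, and tick counts are illustrative; setTranslationalVelocity() and setRotationalVelocity() are the same Agent methods used in the listings later in this article, and the Agent constructor follows the Simbad demo programs.

// A minimal sketch (not part of the Simbad distribution): an Agent that
// alternates between "driving forward" and "turning" states by counting ticks.
import javax.vecmath.Vector3d;
import simbad.sim.Agent;

public class TickCountingRobot extends Agent {
   private static final int FORWARD_TICKS = 100; // illustrative values
   private static final int TURN_TICKS = 20;
   private int ticksLeftInState = FORWARD_TICKS;
   private boolean turning = false;

   public TickCountingRobot(Vector3d position, String name) {
      super(position, name);
   }

   protected void performBehavior() {
      if (ticksLeftInState-- <= 0) {
         // Time's up for the current state: switch states and reset the timer.
         turning = !turning;
         ticksLeftInState = turning ? TURN_TICKS : FORWARD_TICKS;
      }
      if (turning) {
         setTranslationalVelocity(0.0);
         setRotationalVelocity(Math.PI / 4);
      } else {
         setTranslationalVelocity(0.5);
         setRotationalVelocity(0.0);
      }
   }
}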

The Simbad API

For this article's exercises, you'll mainly be concerned with two Simbad API packages (a minimal example of wiring them together follows the list):

  • simbad.sim: This package's classes represent both your robot and the world it lives in. They include (among others):
    • Agent: Agents are robots.
    • Arch: An arch that your robot can drive around or under.
    • Box: Can be used as obstacles in the robot's world.
    • CameraSensor: Lets you view the robot's world from its point of view.
    • EnvironmentDescription: Represents the "world" to which you add robots and objects such as walls or boxes.
    • LampActuator: A lamp that you can add to your robot.
    • LightSensor: Senses the intensity of light.
    • RangeSensorBelt: Contains a set of range sensors around the robot.
    • RobotFactory: Use this to add sensors to your robot.
    • Wall: Another type of obstacle for your robot.
  • simbad.gui: This package's classes display your robot world and let you control it. They include (among others):
    • Simbad: The frame showing the robot world, sensor input, and controls.
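Here is a minimal, hedged sketch of how these classes fit together: an EnvironmentDescription subclass that adds the tick-counting robot sketched earlier and then opens the Simbad frame. The constructor signatures follow the Simbad demo programs as I recall them; treat them as assumptions and check the toolkit's Javadoc.

import javax.vecmath.Vector3d;
import simbad.gui.Simbad;
import simbad.sim.EnvironmentDescription;

// A minimal world: one robot and nothing else. Walls and boxes could be
// added here with the Wall and Box classes listed above.
public class MyEnvironment extends EnvironmentDescription {
   public MyEnvironment() {
      add(new TickCountingRobot(new Vector3d(0, 0, 0), "my robot"));
   }

   public static void main(String[] args) {
      // The second argument selects background mode; false shows the GUI.
      new Simbad(new MyEnvironment(), false);
   }
}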

Implementing a subsumption architecture in Simbad

To start implementing a subsumption architecture in Simbad, you define a subclass of Agent, called BehaviorBasedAgent. BehaviorBasedAgent contains an array of Behaviors and a boolean matrix specifying which Behaviors suppress which other Behaviors:

private Behavior[] behaviors;
private boolean suppresses[][];

BehaviorBasedAgent acts as a scheduler of Behaviors. Listing 1 shows the code that cycles over the behaviors (using the currentBehaviorIndex field to keep track of which behavior should get its turn next) and arbitrates among them:

Listing 1. Code for cycling over and arbitrating behaviors
protected void performBehavior() {
   // Ask every behavior whether it wants to run this tick.
   boolean isActive[] = new boolean[behaviors.length];
   for (int i = 0; i < isActive.length; i++) {
      isActive[i] = behaviors[i].isActive();
   }

   boolean ranABehavior = false;
   while (!ranABehavior) {
      boolean runCurrentBehavior = isActive[currentBehaviorIndex];
      if (runCurrentBehavior) {
         // An active behavior loses its turn if any other active
         // behavior suppresses it.
         for (int i = 0; i < suppresses.length; i++) {
            if (isActive[i] && suppresses[i][currentBehaviorIndex]) {
               runCurrentBehavior = false;
               break;
            }
         }
      }

      if (runCurrentBehavior) {
         // The winning behavior sets the robot's velocities for this tick.
         if (currentBehaviorIndex < behaviors.length) {
            Velocities newVelocities = behaviors[currentBehaviorIndex].act();
            this.setTranslationalVelocity(newVelocities
                  .getTranslationalVelocity());
            this.setRotationalVelocity(newVelocities
                  .getRotationalVelocity());
         }
         ranABehavior = true;
      }

      // Move on to the next behavior in the cycle.
      if (behaviors.length > 0) {
         currentBehaviorIndex = (currentBehaviorIndex + 1)
               % behaviors.length;
      }
   }
}

Note that performBehavior() overrides simbad.sim.Agent.performBehavior().
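The rest of BehaviorBasedAgent is bookkeeping. The downloadable source is the real reference; a minimal version consistent with how the class is used in this article might look like this (the constructor signature and the Sensors wiring are assumptions):

// Sketch of BehaviorBasedAgent's fields and setup methods, reconstructed
// from how the class is used in the listings; the downloadable source may
// differ in detail.
public class BehaviorBasedAgent extends Agent {
   private Behavior[] behaviors = new Behavior[0];
   private boolean suppresses[][] = new boolean[0][0];
   private int currentBehaviorIndex = 0;
   // Wraps the bumpers, sonars, and light sensors; in the real code the
   // sensors would be attached with RobotFactory when the agent is set up.
   private Sensors sensors;

   public BehaviorBasedAgent(Vector3d position, String name) {
      super(position, name);
   }

   public Sensors getSensors() {
      return sensors;
   }

   public void initBehaviors(Behavior[] behaviors, boolean[][] suppresses) {
      this.behaviors = behaviors;
      this.suppresses = suppresses;
   }

   // performBehavior(), shown in Listing 1, goes here.
}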

Behavior has two abstract methods (a sketch of the base class follows the list):

  • isActive() returns a boolean value indicating whether the behavior should be active, given the current state of the sensors. (All Behaviors share a set of sensors.)
  • act() returns a pair of velocities, translational and rotational (in that order), specifying the action the behavior wants the motors to take.
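The downloadable source defines Behavior and the small Velocities holder it returns. A sketch consistent with how the listings use them might look like this:

// Reconstructed sketch of the Behavior base class and the Velocities value
// object; the downloadable source is authoritative. (In a real project each
// public class would live in its own file.)
public abstract class Behavior {
   private final Sensors sensors;

   public Behavior(Sensors sensors) {
      this.sensors = sensors;
   }

   protected Sensors getSensors() {
      return sensors;
   }

   /** Should this behavior fire, given the current sensor readings? */
   public abstract boolean isActive();

   /** The velocities this behavior wants the motors to adopt. */
   public abstract Velocities act();
}

public class Velocities {
   private final double translationalVelocity;
   private final double rotationalVelocity;

   public Velocities(double translationalVelocity, double rotationalVelocity) {
      this.translationalVelocity = translationalVelocity;
      this.rotationalVelocity = rotationalVelocity;
   }

   public double getTranslationalVelocity() {
      return translationalVelocity;
   }

   public double getRotationalVelocity() {
      return rotationalVelocity;
   }
}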

Example: A light-seeking, wandering bot

Now you'll create an example bot that has four Behaviors (in order of decreasing precedence), shown in Listings 2 through 5. (Download the source code this article uses.)

  • Avoidance: Change direction after collision or to avoid collision.
  • LightSeeking: Move toward light.
  • Wandering: Occasionally change direction randomly.
  • StraightLine: Move in a straight line.
Listing 2. The Avoidance class (based on the Simbad SingleAvoiderDemo.java demo code)
public boolean isActive() {
   return getSensors().getBumpers().oneHasHit()
         || getSensors().getSonars().oneHasHit();
}

public Velocities act() {
   double translationalVelocity = 0.8;
   double rotationalVelocity = 0;
   RangeSensorBelt sonars = getSensors().getSonars();
   double rotationalVelocityFactor = Math.PI / 32;
   if (getSensors().getBumpers().oneHasHit()) {
      // if ran into something
      translationalVelocity = -0.1;
      rotationalVelocity = Math.PI / 8
            - (rotationalVelocityFactor * Math.random());
   } else if (sonars.oneHasHit()) {
      // reads the three front quadrants
      double left = sonars.getFrontLeftQuadrantMeasurement();
      double right = sonars.getFrontRightQuadrantMeasurement();
      double front = sonars.getFrontQuadrantMeasurement();
      // if obstacle near
      if ((front < 0.7) || (left < 0.7) || (right < 0.7)) {
         double maxRotationalVelocity = Math.PI / 4;
         if (left < right)
            rotationalVelocity = -maxRotationalVelocity
                  - (rotationalVelocityFactor * Math.random());
         else
            rotationalVelocity = maxRotationalVelocity
                  - (rotationalVelocityFactor * Math.random());
         translationalVelocity = 0;
      } else {
         rotationalVelocity = 0;
         translationalVelocity = 0.6;
      }
   }

   return new Velocities(translationalVelocity, rotationalVelocity);
}
Listing 3. The LightSeeking class (based on the Simbad LightSearchDemo.java demo code)
public boolean isActive() {
   float llum = getSensors().getLightSensorLeft().getAverageLuminance();
   float rlum = getSensors().getLightSensorRight().getAverageLuminance();
   double luminance = (llum + rlum) / 2.0;

   // Seek light if it's near.
   return luminance > LUMINANCE_SEEKING_MIN;
}

public Velocities act() {
   // turn towards light
   float llum = getSensors().getLightSensorLeft().getAverageLuminance();
   float rlum = getSensors().getLightSensorRight().getAverageLuminance();
   double translationalVelocity = 0.5 / (llum + rlum);
   double rotationalVelocity = (llum - rlum) * Math.PI / 4;

   return new Velocities(translationalVelocity, rotationalVelocity);
}
Listing 4. The Wandering class
public boolean isActive() {
   return random.nextDouble() < WANDERING_PROBABILITY;
}

public Velocities act() {
   return new Velocities(0.8, random.nextDouble() * 2 * Math.PI);
}
Listing 5. The StraightLine class
public boolean isActive() {
   return true;
}

public Velocities act() {
   return new Velocities(0.8, 0.0);
}

Listing 6 specifies which behaviors suppress which other behaviors:

Listing 6. Specifying overall behavior suppression
private void initBehaviorBasedAgent(BehaviorBasedAgent behaviorBasedAgent) {
   Sensors sensors = behaviorBasedAgent.getSensors();
   Behavior[] behaviors = { new Avoidance(sensors),
         new LightSeeking(sensors), new Wandering(sensors),
         new StraightLine(sensors), };
   boolean subsumes[][] = { { false, true, true, true },
         { false, false, true, true }, { false, false, false, true },
         { false, false, false, false } };
   behaviorBasedAgent.initBehaviors(behaviors, subsumes);
}

Note that although the Behaviors in this example form a total order of precedence (each behavior suppresses every behavior below it), the suppression relation needn't be a total order.
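For example, two behaviors can coexist without either suppressing the other. A hypothetical matrix of that kind, using the same row-suppresses-column convention as Listing 6, might look like this:

// Hypothetical partial order: Avoidance suppresses everything, LightSeeking
// and Wandering leave each other alone, and both suppress StraightLine.
boolean subsumes[][] = { { false, true,  true,  true  },
      { false, false, false, true  },
      { false, false, false, true  },
      { false, false, false, false } };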

As an exercise, you might like to try the following:

  • Add a socialization behavior: move toward friends and away from enemies.
  • Add a light-avoidance behavior.
  • Add lights to some of the robots, so they're attracted to each other.

Mazes

"Finally! I knew we could solve that maze using Tremaux's algorithm!" —Lisa Simpson

Of the several existing maze-solving algorithms, two commonly used ones consume a constant amount of memory regardless of the maze's size: wall-following and the Pledge algorithm (invented by Jon Pledge of Exeter, England, when he was 12 years old). Tremaux's algorithm (Lisa Simpson's algorithm of choice) is also excellent, but for simplicity this article concentrates on wall-following and the Pledge algorithm.

Maze-generation algorithms

Not only do maze-solving algorithms abound, but so do algorithms for generating mazes. The mazes this article is concerned with are called perfect mazes. Perfect mazes have exactly one path from any point in the maze to any other point (this rules out mazes with loops, "islands," or closed-off sections). Most perfect-maze-generation algorithms work by starting with a maze that has only its exterior walls and "growing" walls inward segment by segment. To ensure that the maze is perfect, every time you add a new segment, you make sure you're not creating a loop or a closed-off section.
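The wall-adding method just described is a bit involved to sketch briefly, so here is a different but common technique that also produces a perfect maze: the depth-first "recursive backtracker," which carves passages instead of adding walls. Because each cell is visited exactly once and every carved passage connects a new cell to the already-visited region, the passages form a spanning tree, which is exactly the perfect-maze property. The grid representation (a walls array indexed by cell and compass direction) is illustrative and is reused by the sketches later in this section.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Depth-first ("recursive backtracker") maze generation: walk to a random
// unvisited neighbor, knocking down the wall between, and backtrack when stuck.
public class MazeGenerator {
   private static final Random RANDOM = new Random();

   // walls[row][col][direction]: true means the wall is still standing.
   // Directions: 0 = north, 1 = east, 2 = south, 3 = west.
   public static boolean[][][] generate(int rows, int cols) {
      boolean[][][] walls = new boolean[rows][cols][4];
      for (boolean[][] row : walls)
         for (boolean[] cell : row)
            Arrays.fill(cell, true);
      carve(0, 0, walls, new boolean[rows][cols]);
      return walls;
   }

   private static void carve(int r, int c, boolean[][][] walls,
         boolean[][] visited) {
      visited[r][c] = true;
      // Each move: { row delta, column delta, wall of this cell, wall of neighbor }.
      int[][] moves = { { -1, 0, 0, 2 }, { 0, 1, 1, 3 }, { 1, 0, 2, 0 },
            { 0, -1, 3, 1 } };
      List<int[]> order = new ArrayList<int[]>(Arrays.asList(moves));
      Collections.shuffle(order, RANDOM);
      for (int[] m : order) {
         int nr = r + m[0];
         int nc = c + m[1];
         if (nr >= 0 && nr < walls.length && nc >= 0 && nc < walls[0].length
               && !visited[nr][nc]) {
            walls[r][c][m[2]] = false;   // knock down the shared wall,
            walls[nr][nc][m[3]] = false; // recorded from both sides
            carve(nr, nc, walls, visited);
         }
      }
   }
}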

Wall-following

Wall-following is a simple maze algorithm you might have learned as a child. To solve a maze with it, you keep your left hand on the left wall (or your right hand on the right wall) and follow the wall until you exit the maze. It's easy to see that this always works if the maze's entrance and exit are both on its outer border. However, if the goal is inside an island (a group of walls that isn't connected to the maze's outer boundary), wall-following can't reach it: the follower never leaves the wall it starts on, so it can't "hop" over to the island.
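To see the left-hand rule in isolation, independent of Simbad, here is a small sketch that walks a grid maze in the representation used by the generator sketch above. As a simplification it heads for a goal cell rather than an exterior exit; in a perfect maze, wall-following eventually visits every cell, so it will reach any goal, but as noted above that guarantee disappears once islands are involved.

// Left-hand wall-following on a grid maze: at each step prefer turning left,
// then going straight, then turning right, then turning back.
// Directions: 0 = north, 1 = east, 2 = south, 3 = west (as in MazeGenerator).
public class WallFollower {
   private static final int[] DR = { -1, 0, 1, 0 };
   private static final int[] DC = { 0, 1, 0, -1 };

   /** Returns the steps taken to reach the goal, or -1 if the step limit is hit. */
   public static int solve(boolean[][][] walls, int r, int c, int goalR, int goalC) {
      int heading = 1; // start facing east
      int maxSteps = 4 * walls.length * walls[0].length; // generous safety bound
      for (int steps = 0; steps < maxSteps; steps++) {
         if (r == goalR && c == goalC) {
            return steps;
         }
         // Candidate headings, most preferred first: left, straight, right, back.
         int[] candidates = { (heading + 3) % 4, heading, (heading + 1) % 4,
               (heading + 2) % 4 };
         for (int candidate : candidates) {
            if (!walls[r][c][candidate]) { // no wall in that direction
               heading = candidate;
               r += DR[heading];
               c += DC[heading];
               break;
            }
         }
      }
      return -1;
   }
}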

The Pledge algorithm

The Pledge algorithm is more sophisticated than wall-following and solves a larger class of mazes, because it can leave the walls of one island and move on to another. The basic idea is that you pick an absolute direction (such as North, South, East, or West) and always try to go that way; I'll call this your favored direction. When you run into a wall, you turn right and do left-hand wall-following until you're facing your favored direction and the sum total of your turns is zero (where clockwise turns are counted as negative and counterclockwise turns as positive). At that point you leave the wall, continue straight in your favored direction, and so on. The requirement that your turns sum to exactly zero, not merely to a multiple of 360 degrees, is what keeps you from getting caught in certain traps, such as one shaped like a capital G (try it out on paper to see what I mean).
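The heart of the Pledge algorithm is the turn counter. Here is a hedged sketch of it on the same illustrative grid representation used above (counting a dead-end U-turn as two left turns is an assumption of this sketch, as is the generous step limit):

// Sketch of the Pledge algorithm on a grid maze. favoredHeading is the
// absolute direction the robot always tries to resume; turnCount sums
// quarter turns (left = +1, right = -1) while wall-following.
public class PledgeSolver {
   private static final int[] DR = { -1, 0, 1, 0 };
   private static final int[] DC = { 0, 1, 0, -1 };

   public static int solve(boolean[][][] walls, int r, int c, int goalR,
         int goalC, int favoredHeading) {
      int heading = favoredHeading;
      int turnCount = 0;
      boolean followingWall = false;
      int maxSteps = 8 * walls.length * walls[0].length; // safety limit only
      for (int steps = 0; steps < maxSteps; steps++) {
         if (r == goalR && c == goalC) {
            return steps;
         }
         if (!followingWall) {
            if (!walls[r][c][heading]) {
               // Favored direction is clear: keep going straight.
               r += DR[heading];
               c += DC[heading];
            } else {
               // Hit a wall: turn right once and start left-hand wall-following.
               heading = (heading + 1) % 4;
               turnCount -= 1;
               followingWall = true;
            }
         } else {
            // Left-hand wall-following, counting quarter turns as we go.
            int[] candidates = { (heading + 3) % 4, heading, (heading + 1) % 4,
                  (heading + 2) % 4 };
            int[] turnDelta = { +1, 0, -1, +2 };
            for (int i = 0; i < candidates.length; i++) {
               if (!walls[r][c][candidates[i]]) {
                  heading = candidates[i];
                  turnCount += turnDelta[i];
                  r += DR[heading];
                  c += DC[heading];
                  break;
               }
            }
            // Leave the wall once we face the favored direction with a net
            // turn count of exactly zero.
            if (heading == favoredHeading && turnCount == 0) {
               followingWall = false;
            }
         }
      }
      return -1;
   }
}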


Algernon: A maze-solving robot

Now it's time to amaze (forgive the pun) your friends and build a maze-solving robot named Algernon.

Designing the robot

To implement either wall-following or the Pledge algorithm, you need to know when the robot reaches an intersection and, when it does, in which directions the passages branch off.

There's probably more than one way to accomplish this, but you'll do it by mounting a sonar sensor on the robot's left side. This sensor will alert you when the maze has passages going off to the left. To know when the passage the robot is traveling down ends (that is, when it "runs into" a wall), you'll add a touch sensor to the front of the robot.

Programming wall-following

You program Algernon using the algernon.subsumption package (download the source code). Algernon is pretty simple as far as robots go, and you could program him in a straightforward "procedural" manner. However, using subsumption programming makes the code a lot cleaner, more modular, and easier to understand even in a robot as simple as this.

To simplify the algorithm implementation, I'll assume that walls are laid out at right angles. Thus, all the robot's turns will be 90-degree left or 90-degree right turns.

If you think about the (left-hand) wall-following algorithm, you'll see that you can decompose it into four behaviors:

  • Go straight.
  • When you run into a wall, turn right.
  • If you see a passageway to your left, turn left.
  • Stop when you reach the goal.

You need to decide the priority of these four behaviors; the previous list gives them in decreasing order of priority. You'll end up with four classes, each of which extends Behavior:

  • GoStraight
  • TurnRight
  • TurnLeft
  • ReachGoal

Listing 7 is the GoStraight code. TRANSLATIONAL_VELOCITY is a constant set to 0.4:

Listing 7. Behavior code for going straight
public boolean isActive() {
   return true;
}
  
public Velocities act() {
   double rotationalVelocity = 0.0;

   return new Velocities(TRANSLATIONAL_VELOCITY, rotationalVelocity);
}

Listing 8 is the TurnRight code. getRotationCount() returns the number of clock ticks it will take to rotate 90 degrees at the rotational velocity you're using (a sketch of how such a count might be computed follows the listing).

Listing 8. Behavior code for turning right
public boolean isActive() {
   if (turningRightCount > 0) {
      return true;
   }

   RangeSensorBelt bumpers = getSensors().getBumpers();
   // Check the front bumper.
   if (bumpers.hasHit(0)) {
      backingUpCount = 10;
      turningRightCount = getRotationCount();

      return true;
   } else {
      return false;
   }
}
        
public Velocities act() {
   if (backingUpCount > 0) {
      // We back up a bit (we just ran into a wall) before turning right.
      backingUpCount--;

      return new Velocities(-TRANSLATIONAL_VELOCITY, 0.0);
   } else {
      turningRightCount--;

      return new Velocities(0.0, -Math.PI / 2);
   }
}
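getRotationCount() itself isn't shown in the listings. Conceptually it converts a 90-degree turn into a tick count, which requires knowing how much virtual time one Simbad tick represents. The constant below is an assumption for illustration, not a value read from the Simbad API; the downloadable source is the real reference.

// Illustrative only: SECONDS_PER_TICK is an assumed constant. At a rotational
// velocity of PI/2 radians per second (the value used in Listings 8 and 9),
// a 90-degree turn is PI/2 radians, so the tick count is
// (PI/2) / (TURN_RATE * SECONDS_PER_TICK).
private static final double SECONDS_PER_TICK = 0.05; // assumption
private static final double TURN_RATE = Math.PI / 2;

private int getRotationCount() {
   double radiansPerTick = TURN_RATE * SECONDS_PER_TICK;
   return (int) Math.round((Math.PI / 2) / radiansPerTick);
}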

When Algernon turns left, he first goes forward a bit so that his back is clear of the wall ending to his left. Then he rotates left. Finally, he needs to go forward a bit more so a wall will be on his left again. Listing 9 is the TurnLeft code:

Listing 9. Behavior code for turning left
public boolean isActive() {
   if (postGoingForwardCount > 0) {
      return true;
   }

   RangeSensorBelt sonars = getSensors().getSonars();
   // Check the sonar on the left.
   if (sonars.getMeasurement(1) > 1.0) {
      // There is a passageway to the left.
      preGoingForwardCount = 20;
      postGoingForwardCount = 40;
      turnLeftCount = getRotationCount();

      return true;
   } else {
      return false;
   }
}
        
public Velocities act() {
   if (preGoingForwardCount > 0) {
      preGoingForwardCount--;

      return new Velocities(TRANSLATIONAL_VELOCITY, 0.0);
   } else if (turnLeftCount > 0) {
      turnLeftCount--;

      return new Velocities(0.0, Math.PI / 2);
   } else {
      postGoingForwardCount--;

      return new Velocities(TRANSLATIONAL_VELOCITY, 0.0);
   }
}

Listing 10 is the ReachGoal code:

Listing 10. Behavior code for reaching the goal
public boolean isActive() {
   RangeSensorBelt sonars = getSensors().getSonars();

   // Is there open space all around us? That is, are we out of the maze?
   double clearDistance = 1.2;
   return sonars.getMeasurement(0) > clearDistance
         && sonars.getMeasurement(1) > clearDistance
         && sonars.getMeasurement(3) > clearDistance
         && sonars.getMeasurement(2) > clearDistance;
}

public Velocities act() {
   // Stop
   return new Velocities(0.0, 0.0);
}

Listing 11 is the main behavior code for Algernon:

Listing 11. Algernon's behavior control code
private void initBehaviorBasedAgent(
      algernon.subsumption.BehaviorBasedAgent behaviorBasedAgent) {
   algernon.subsumption.Sensors sensors = behaviorBasedAgent.getSensors();
   algernon.subsumption.Behavior[] behaviors = { new ReachGoal(sensors),
         new TurnLeft(sensors), new TurnRight(sensors),
         new GoStraightAlways(sensors) };
   boolean subsumes[][] = { { false, true, true, true },
         { false, false, true, true }, { false, false, false, true },
         { false, false, false, false } };
   behaviorBasedAgent.initBehaviors(behaviors, subsumes);
}

Figure 1 shows Algernon navigating a maze:

Figure 1. Algernon running a maze
Algernon running a maze

Note that your robot can solve a maze even though its component parts know nothing about mazes (or even walls). Algernon has no central brain that calculates a way out of the maze. This is what you get out of a subsumption architecture: complex, seemingly purposeful behavior out of a set of simple, layered behaviors.


Conclusion

The Roomba

As I type this, a Roomba is vacuuming the rug under my feet (while unknowingly being stalked by a kitten). The Roomba is made by iRobot, a company founded by MIT roboticist Rodney Brooks and two of his former students, Colin Angle and Helen Greiner. The Roomba is built using a subsumption architecture and has an open interface through which you can create all sorts of interesting hacks. Tod E. Kurt's book Hacking Roomba shares many of them (see Resources).

In this article, you programmed simulated robots. Programming real physical robots is a lot harder. The physical world intrudes in all sorts of messy ways. For example, it was quite easy to make your wall-following robot move parallel to the wall on its left. In the real world with its imperfect surfaces, getting your robot not to veer into the wall on its left or veer too far away from that wall is quite a challenge in itself. If you enjoy programming, that doesn't mean you'll necessarily enjoy making robots. Making interesting robots probably requires more mechanical skills than it does programming skills.

If you are interested in building and programming your own robots, LEGO Mindstorms is an excellent robotics kit. A low-cost alternative to working with Mindstorms is building Biology, Electronics, Aesthetics, and Mechanics (BEAM) robots. BEAM takes the idea of behavior-based robotics one step further and eliminates programming altogether: the overall behavior comes from hardwired, analog reflex-response behaviors. For $30 or less, you could build your first robot from a BEAM kit or from plans you can find in books such as Gareth Branwyn's Absolute Beginner's Guide to Building Robots (see Resources). Or you could buy a Roomba and hack it.

One of the striking things I noticed after programming robots and looking at others' robot code for only a short time is that you can get a robot to do quite a lot with only a small amount of code. (However, it might take a lot of tinkering and experimenting with constants to get that small amount of code exactly right.) With the LEGO Mindstorms kit, you can build your first simple robot from instructions in an afternoon.

A whole thriving amateur-robotics subculture is out there to explore: robot books, robot contests, robot videos, and probably a robotics club in your area.

Resources

Learn

  • Simbad: Visit the project Web site for the Simbad robot simulator.
  • Absolute Beginner's Guide to Building Robots (Gareth Branwyn, Que, 2003): An excellent overview of robot building as a hobby.
  • Mobile Robots (Joseph L. Jones and Anita M. Flynn, A K Peters, Ltd., 1998): This book shows how to design and build small, inexpensive robots and clearly explains subsumption architecture. The book is especially excellent on ideas and concepts. Flynn and Jones were students of Rodney Brooks at the MIT AI Lab.
  • "Open source robotics toolkits" (M. Tim Jones, developerWorks, September 2006): Information on Simbad and other software for testing your robotics algorithms.
  • "Introduction to robotics technology" (Darrick Addison, developerWorks, September 2001): A general introduction to robotics and open source robot-control software.
  • Flesh and Machines (Rodney A. Brooks, Pantheon Books, 2002): Brooks describes the development of his behavior-based robotics philosophy in this nontechnical and highly enjoyable book.
  • The Simpsons: Meet the iconic U.S. television family. The "Stop, or My Dog Will Shoot!" episode features Lisa Simpson helping the family escape from a maize maze using Tremaux's algorithm.
  • Building Robots with LEGO Mindstorms NXT (Dave Astolfo, Mario Ferrari, and Giulio Ferrari, Syngress Publishing, 2007): This book — one of the best two of the many on LEGO Mindstorms — has a chapter on building maze-solving robots.
  • "Geeks in Toyland" (Brendan I. Koerner, Wired, February 2006): This article describes the development of LEGO Mindstorms NXT and how customer hackers were recruited to help with the design and development.
  • Hacking Roomba (Tod E. Kurt, Wiley, 2007): This book has a lot of interesting Roomba hacks: making your Roomba sing and draw, installing Linux® on it, and using it as a mouse (why not?) are a few examples.
  • Turtle Geometry (Harold Abelson and Andrea diSessa, MIT Press, 1986): This fascinating book includes information on mazes.
  • Fast, Cheap & Out of Control: This documentary by Errol Morris features an interview with Rodney Brooks (and interviews with a lion tamer, a topiary gardener, and the world's leading authority on hairless mole-rats).
  • Think Labyrinth: Maze Algorithms: Maze expert Walter D. Pullen has written an excellent overview of mazes and maze algorithms.
  • Maze: Wikipedia's informative entry on mazes.
  • Browse the technology bookstore for books on these and other technical topics.
  • developerWorks Java technology zone: Hundreds of articles about every aspect of Java programming.

Get products and technologies

  • Algernon is an ongoing open source project. Download the source code this article uses.
  • Simbad: Download the Simbad toolkit.
