Opportunities for automation are not always simple
Recently, Mark Handelsman, marketing manager at FANUC Robotics America Inc., described some of the reasons why robots may be getting "smarter." Topping his list is sensory input.
Since the inception of robotic machinery, the mechanical "men" have carried some level of intelligence: deciding whether a part was available, recognizing whether certain conditions existed, or detecting errors. Usually this was accomplished by a specific sensor detecting a specific condition; a simple solution to a simple problem in automation.
The reality, though, is that "opportunities for automation are not always this simple." Take parts location, for example: what if the part is in a bin, mixed with other parts?
The answer to making robots smarter may lie with upgrading their "senses."
I can see clearly now
The biggest impact of this way of thinking has come from the development of two-dimensional vision systems. A 2D vision system usually consists of a standard industrial camera mounted over the parts bin, high enough that the robot can move beneath it. If more precision is required, the camera can be mounted on the robot instead. Either way, the camera moves over the layer of parts, snaps a photo of their positions, and the robot "decides" where parts are and where they aren't, chooses one, then executes the moves to those coordinates and retrieves the part. Once the layer is empty, the robot determines that the separator sheet dividing the layers should be removed.
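The pick cycle described above can be sketched as a short program. This is a toy model, not FANUC's implementation: the "photo" is a binary occupancy grid (1 = part pixel), parts are found as connected blobs, and the pick target is each blob's centroid. All function names are illustrative.

```python
def find_parts(image):
    """Return centroids (row, col) of 4-connected blobs of 1s in a binary grid."""
    rows, cols = len(image), len(image[0])
    seen = set()
    parts = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] == 1 and (r, c) not in seen:
                # Flood-fill one blob of touching part pixels.
                stack, blob = [(r, c)], []
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] == 1 and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                cy = sum(p[0] for p in blob) / len(blob)
                cx = sum(p[1] for p in blob) / len(blob)
                parts.append((cy, cx))
    return parts

def pick_cycle(image):
    """One cycle: locate every part, pick each, then pull the separator sheet."""
    actions = [("pick", centroid) for centroid in find_parts(image)]
    actions.append(("remove_separator", None))  # layer is empty after the picks
    return actions
```

A real system would map pixel centroids to robot coordinates through a calibration, but the decide-pick-then-remove-separator flow is the same.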
In some traditional methods of using robots to find and retrieve parts, the dunnage (the separator material) was specially designed to "help" the robot identify the parts. Installing a 2D vision system has now become the cheaper alternative to custom dunnage, and a far more reliable one than it used to be. An added benefit is that a vision system can quickly adapt to handle different parts on the same line, and can accommodate changes to existing parts, often with just a small rewrite of the image-processing code.
Vision systems have improved so much that cameras not only let robots select specific parts from among many, but also let them identify and pick a part off a standard conveyor belt, reducing the costs of purchasing custom-designed, fixtured pallet conveyors.
But what about 3D obstacles?
2D vision systems are great for parts that lie flat, but parts don't always do that in a 3D world. How do you handle the ones that don't? One proven, simple technique is to use laser light stripes in conjunction with a 2D camera:
An overhead 2D camera provides a rough location of the parts in a bin and identifies the next part to be selected. A second camera mounted on the robot works in conjunction with a laser. The robot moves the laser and camera over the next part, and the laser places a crosshair over a target on the part. This target could be an edge, a circle, or another distinct feature. Through simple triangulation, the camera can then locate the position and orientation of the part in 3D.
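The triangulation step can be worked through with an idealized pinhole-camera model. Everything here is an assumption for illustration, not the article's actual setup: the camera sits at the origin looking along +Z with focal length `f` (in pixels), and the laser emitter is offset by a baseline `b` along X, tilted toward the optical axis by angle `theta`. The laser spot appears at pixel `(x_pix, y_pix)`.

```python
import math

def locate_point(x_pix, y_pix, f, b, theta):
    """Recover the 3D position (X, Y, Z) of a laser spot on the part.

    Intersecting the camera ray with the laser ray gives the depth
        Z = b / (tan(theta) + x_pix / f)
    after which X = Z * x_pix / f and Y = Z * y_pix / f.
    """
    z = b / (math.tan(theta) + x_pix / f)
    return (z * x_pix / f, z * y_pix / f, z)
```

As a sanity check, with `theta = 0` the laser ray stays at X = b, so the formula reduces to the familiar disparity form Z = f * b / x_pix. Locating two or three such targets on a part gives its orientation as well as its position.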
Feeling around for the parts
There are also six-degree-of-freedom force sensors, commonly used to give robots tactile feedback. For high-precision assembly (such as inserting shafts into holes or gears into housings), force sensors give robots another advantage: the robot can be programmed to rock a gear back and forth until it engages with each stage, just as a human would.
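That back-and-forth engagement strategy can be illustrated with a toy peg-in-hole search. The world model below is entirely made up: the hole sits at a lateral offset the controller does not know, and the "sensor" reports a jam force whenever the peg presses on the surface instead of the hole.

```python
HOLE_X = 0.6       # mm, true lateral offset of the hole (hidden from the search)
TOLERANCE = 0.05   # mm, how well-aligned the peg must be to drop in
JAM_FORCE = 20.0   # N, sensor reading while jammed against the surface

def sensed_force(x):
    """Force the 6-DOF sensor would report with the peg at lateral offset x."""
    return 0.0 if abs(x - HOLE_X) < TOLERANCE else JAM_FORCE

def wiggle_insert(step=0.1, max_amplitude=2.0):
    """Alternate left and right with growing amplitude until the force drops."""
    offsets = [0.0]
    k = 1
    while k * step <= max_amplitude:
        offsets += [k * step, -k * step]  # rock to each side in turn
        k += 1
    for x in offsets:
        if sensed_force(x) < 1.0:  # low force means the peg found the hole
            return True, x
    return False, None
```

Real controllers use richer search patterns (spirals, compliant moves along all six axes), but the core idea is the same: let the force reading, not position alone, decide when the part has engaged.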
Polishing and grinding jobs are other applications for force sensors. With them, the robot can maintain a constant force on a complex-shaped object as its orientation varies, and can even compensate for gravitational effects.
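A minimal sketch of that force regulation, assuming a linear contact model: a proportional controller trims the press depth to hold a force setpoint, subtracting the tool's own weight from the sensor reading (the gravity compensation mentioned above). All constants and names are illustrative, not real robot parameters.

```python
TOOL_WEIGHT = 2.0   # N, tool weight seen by the sensor in this orientation (assumed)
STIFFNESS = 100.0   # N/mm, assumed contact stiffness of the workpiece
GAIN = 0.005        # mm of depth correction per N of force error

def contact_force(depth_mm):
    """Idealized contact: force grows linearly with press depth."""
    return STIFFNESS * max(depth_mm, 0.0)

def hold_force(setpoint_n, steps=200):
    """Servo the press depth until the compensated force meets the setpoint."""
    depth = 0.0
    for _ in range(steps):
        raw = contact_force(depth) + TOOL_WEIGHT  # sensor also feels the tool
        measured = raw - TOOL_WEIGHT              # gravity compensation
        depth += GAIN * (setpoint_n - measured)   # proportional correction
    return depth, contact_force(depth)
```

As the part's orientation changes, a real controller recomputes the tool-weight term from the robot's pose, which is exactly the gravitational compensation the sensor vendors advertise.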
What about tomorrow?
Even with a soft automotive market (the biggest user of industrial robots), the industry in North America grew at an average annual rate of 20 percent between 2003 and 2005, mostly due to reductions in cost and increases in performance. The performance gains are partly due to advances in intelligent sensor technology. It will be interesting to see the effect of ever-smarter sensors on robots' ability to make wiser "decisions."
Machine Vision Online offers resources, forums, whitepapers, and market data on automated vision sensors.
At FANUC, you can learn how blank alignment with vision can make robots more flexible (they can identify parts and change programs accordingly) and how it can help them make adjustments based on measurements taken of the parts.
Thanks to Mark Handelsman, marketing manager at FANUC Robotics America Inc., for making this monograph possible.