The total number of robots deployed globally is increasing (see Resources). The automation of manufacturing (growth in process control applications) has driven this increase, along with the desire to spare human workers from risking their health in dangerous situations (such as rescue robotics). It is clear that as the robot population grows, the ability to manage robots remotely, and more specifically over the Web, will have increasing value. Recognizing this trend and the increased use of personal robotics (for example, the iRobot Roomba and Scooba), Google and Willow Garage teamed up in 2011 to announce a "cloud robotics" initiative, with software and hardware support. The initiative includes integration of Willow Garage OpenCV with Robot Operating System (ROS) code and the development of PR2, a cloud-enabled personal robotic system. The belief is that interfacing robotics with the cloud will enable faster development of robotics technology and help lower the cost of sophisticated sensors and robotic platforms.
The key advantages and features of cloud robotics pointed out by Google are:
- Lowering cost and barriers to entry by moving processing into the cloud to remove limits on memory and storage and to enhance perception, with less computing on board the robotic platform and more in the cloud:
- Hosting a server on a robotic platform is expensive in terms of mobile battery power, cooling, and embedding.
- Robots can use existing cloud software services like Google "Goggles" to recognize objects rather than implementing object recognition on each robotic platform.
- The Android Open Accessory application programming interface makes interaction with robotics through mobile portals simpler compared to developing new mobile control interfaces.
- Google's software as a service features like mapping, navigation, voice recognition, text to speech, and translation can be leveraged.
- Simplifying the challenges of robotics by supporting standard cloud robotics platforms (for example, Willow Garage PR2 and TurtleBot) and open source cloud robotics development software like ROS:
- The PR2 carries a price tag of US$400,000 ($280,000 discounted for open source developers and researchers), but TurtleBot and iRobot research and education platforms are much more affordable.
- ROS, which includes open source C++, Python, Lisp, and Java™ code, provides functionality needed for computer vision (using OpenCV) for perception, navigation, manipulation, and mobility.
- As shown in the section on the challenge of robotics, robotics tasking is not simple, so an open source operating system like ROS allows educators and students to focus on applications and learn the details of the math in the process.
Google's cloud robotics initiative may not share all of the same goals that educators have, but it will allow educators to leverage a great resource. Google has a goal to support robotics innovation, industry, and education. The collaboration possible among education, industry, and innovators has great potential.
Cloud robotics application to education: a University of Colorado case study
Applying cloud robotics to education is easy in concept, but the challenge lies in the details of an integrated solution that enhances the student experience. Since starting a robotics lab in 2000, I have considered this and worked on related concepts. Even with falling costs, robotics can still be expensive, take up space, be dangerous (requiring keep-out operation areas), and, given their popularity, be in short supply. As such, in 2008 I suggested that an initiative be started at the University of Colorado (CU) Boulder to determine how to make ECEN 5623, an existing Real-Time Embedded Systems class that includes robotics and computer vision applications, available to distance students. Clearly, using the World Wide Web and Google Android mobile portals to robotics labs will make them more widely accessible. At the time, the open source software from Google, Microsoft®, and the open source community that makes this much more tractable was less developed. The announcement of the Google cloud robotics initiative at Google I/O will help enhance this potential. The more students, hobbyists, and developers who have access to quality robotics platforms, the more this open source initiative will grow.
The main goals for ECEN 5623 and robotics courses at CU are access related:
- ECEN 5623 has limited enrollment because of lab space and access constraints as well as cost:
- Today, ECEN 5623 (see Resources) has eight low-torque robotics and computer vision workstations and can host 24 students but is limited to this enrollment by the space and cost of hosting these systems.
- Today, ECEN 5623 does not support distance learners, because it requires students to directly interact with the 5-degrees of freedom (DOF) and 6-DOF arms and computer vision embedded systems used in labs.
- ECEN 5623 could be expanded using cloud robotics methods:
- Remote access to robotics over the web or mobile Android would allow distance students to participate. The challenge is how to administer this if something goes wrong, but the portal technology found in ROS will help.
- In addition to remote access to robotics, ROS.org's rviz and simulation tools like MATLAB, Mathematica, and the Virtual Reality Modeling Language (VRML) can be used to visualize robotics so that students can debug robot-tasking algorithms before interfacing with real hardware.
- Remote robotics has higher value than simulation, because there is no substitute for dealing with real sensors and actuators. No doubt teaching assistants, the instructor, and some IT support are needed on site for courses in robotics, but the ability to access platforms over the Web can enable scaling. For ECEN 5623, one thought is to rack-mount arms in keep-out spaces and allow on-site physical interaction as well as remote interaction—in essence, some level of virtual presence by students. As noted in the Google presentation on ROS, a publish/subscribe camera stream from the 3D Kinect vision system allows one to "be inside the head of the robot" remotely with a simple Android tablet.
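The publish/subscribe camera stream mentioned above is the core pattern behind ROS topics. The following is a minimal, illustrative sketch of the idea in plain Python; the `Topic` class and its methods are my own illustration, not the actual ROS API:

```python
class Topic:
    """A minimal publish/subscribe channel, in the spirit of a ROS topic.

    Illustrative only -- this is NOT the ROS API. In ROS, publishers and
    subscribers run in separate nodes and communicate over the network.
    """

    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        # Each subscriber registers a callback to receive messages.
        self._subscribers.append(callback)

    def publish(self, message):
        # Every published message is delivered to all subscribers.
        for callback in self._subscribers:
            callback(message)

# A robot publishes camera frames; a remote viewer (for example, an
# Android tablet) subscribes and receives each frame as it arrives.
camera = Topic()
received = []
camera.subscribe(received.append)
camera.publish("frame-0001")
camera.publish("frame-0002")
```

Because the publisher does not know or care who is subscribed, the same frame stream can feed an on-site operator console and any number of remote students simultaneously.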
Leveraging community clouds for robotics would enhance the effort of cloud-enabling an existing robotics course. For example, we can link other robotics labs at CU, like the Correll Lab (see Resources), and share resources, processing, and algorithms more easily. The Correll Lab focuses on mobile robotics, self-assembling robotics, and many robotics topics that go beyond the more introductory topics in ECEN 5623.
Designing a cloud-based robotics course
Very often, robotics classes focus on building robots rather than programming, tasking, optimizing, and operating them. Design and construction are clearly important aspects of robotics education, and classes on the design and implementation of robotics are critical. In addition, however, the software and operations technology for robotics is key to greater success and wider deployment. Just as many people thought that software was of little consequence to early PCs, many ignored the software aspect of robotics until recently. One reason is that there has been no single leading hardware platform in robotics, although this is starting to change: ubiquitous hardware vendors like Willow Garage and iRobot are emerging. Companies opening such platforms to education provide the key to long-term success and market growth. Education will benefit.
Major considerations for designing a new cloud robotics course are as follows:
- Will the course focus on the construction or on the tasking and operation of robotics? Much of the Google cloud robotics initiative relieves the educator and student from building the platform and software, but if those fundamentals are the goal of the course, the course designer will want to carefully pick and choose what is reused versus built:
- Electrical and mechanical engineering courses may want to leverage resources at a component level for actuation, sensing, vision systems, and embedded control, providing the opportunity for students to then test these components and subsystems in a cloud robotics lab.
- Software re-use is a huge time savings and, as Google points out, allows the user to focus on higher-level applications, but the value of full source is that students can also implement and integrate low-level fundamental algorithms on their own to learn about them.
- For a focus on high-level robotics theory, cloud robotics can make the higher levels of robotics tasking more immediately accessible:
- The ability to jump in at the highest levels of robotics theory, tasking mobile robots, and working at the perception level will enable classes not only for electrical and mechanical engineers but a wider audience, much like the PC eventually became accessible to all disciplines.
- Courses at the tasking level will help drive interest in the component and subsystem level in electronics, mechanics, and software that otherwise might be difficult for educators to motivate.
Because education is concerned with teaching fundamentals and empowering students to explore, the course designer should carefully consider education goals for cloud robotics. Subsystems like Kinect, software like ROS, and environments like Cloud Robotics will empower and help motivate students, but it is up to educators to bring this back to focus on fundamentals. The worst outcome would be courses that are only "integration" or "user" courses, where students leave the experience unaware of the challenges or how things work inside the black boxes. The ideal is to expose them to the internals and allow them to explore putting larger systems together that enable the construction of motivating robotics. The next section looks at some of the details that make robotics a challenge. The goal is not to remove the challenge but to remove it as a barrier to entry for both educators and students, so they can peel this challenge apart layer by layer to the depth that is appropriate for the institution, the students, and the educator, from high school to universities and in a wide range of disciplines.
The challenge of robotics
Robotics involves the engineering of robots, a term introduced by Czech writer Karel Čapek, whose play featured artificial humans indistinguishable from real humans; the term has more broadly come to include a wide variety of systems with human-like abilities to manipulate, sense, and interact with an environment. Whether a system is a robot is somewhat subjective, but at a minimum, a robot must have human-like senses (vision, tactile, auditory) and the ability to move and manipulate objects as humans do, with actuation similar to human limbs. Often, we design robots with specialized sensors and actuators that give them an advantage in operating in dangerous, dirty, tedious, exacting, or other environments that humans find challenging. Robotics and automation are expanding both in controlled industrial settings and in less controlled field scenarios. As the operational environments become more challenging, robots must include mobility, manipulation, and sensing systems that are more sophisticated.
Robotic kinematics and manipulation
Robot kinematics involves the rotational (revolute) and translational (prismatic) movement of joints in robotic manipulators, which, as in humans, can include arms, hands, legs, and torsos. For mobile robots, a simple system with independently driven wheels and a caster is often used for so-called "holonomic" translation on an XY plane (better known as a floor). Holonomic in robotics simply means that the system's controllable DOF equal its total DOF. In this simple example, the robot with the independently driven wheels and caster has two controllable inputs (the left and right motors) and the ability to move in any direction on the XY plane (2-DOF). Arms and other manipulators in robotics may have many more DOF, and often an arm is 5-DOF or 6-DOF. For example, an arm may have a rotating base; a rotating shoulder, elbow, or wrist; and a grasping end effector. Each joint in turn may have yaw, pitch, and roll (3-DOF potential per joint), but most often, joints have only 1- or 2-DOF.
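The differential-drive example above can be sketched as a pose update: the two wheel speeds determine a forward speed and a turn rate. This is a simplified "unicycle" model, and the function name and parameters are my own:

```python
import math

def diff_drive_step(x, y, theta, v_left, v_right, wheelbase, dt):
    """Advance the pose (x, y, theta) of a differential-drive robot by
    one time step dt, given left/right wheel speeds and the distance
    between the wheels (wheelbase). Angles are in radians."""
    v = (v_left + v_right) / 2.0             # forward speed of the chassis
    omega = (v_right - v_left) / wheelbase   # turn rate about the center
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Equal wheel speeds: straight-line motion along the current heading.
x, y, th = diff_drive_step(0.0, 0.0, 0.0, 0.5, 0.5, wheelbase=0.3, dt=1.0)

# Opposite wheel speeds: rotation in place, which is what gives this
# simple platform its full 2-DOF mobility on the floor plane.
x2, y2, th2 = diff_drive_step(0.0, 0.0, 0.0, -0.5, 0.5, wheelbase=0.3, dt=1.0)
```

Repeatedly commanding the two wheel speeds and integrating this update is all a tasking layer needs to steer such a robot anywhere on the plane.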
Industrial robotic arms can be classified based on the combination of revolute and prismatic joints they include. For example, the SCARA robotic arm includes only base rotation over an XY plane, a displaced second rotational DOF above the base, and a prismatic actuator that lowers or raises a tool onto or off the surface the arm sits upon and can rotate the tool (as shown in Figure 1) for a total of 4-DOF. By comparison, a Cartesian robot has prismatic joints that allow it to translate on each axis—X, Y, and Z—as shown in Figure 1.
Figure 1. SCARA and Cartesian arm kinematics
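Forward kinematics, computing where the end effector lands for given joint angles, is straightforward to express in code. This sketch uses a planar two-link revolute arm, a simpler case than the SCARA and Cartesian arms of Figure 1; the function and parameter names are my own:

```python
import math

def forward_2link(theta1, theta2, l1, l2):
    """End-effector (x, y) position of a planar two-link revolute arm.

    theta1: shoulder angle measured from the X axis (radians)
    theta2: elbow angle measured relative to the first link (radians)
    l1, l2: link lengths
    """
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Both angles zero: the arm is fully extended along X to (l1 + l2, 0).
print(forward_2link(0.0, 0.0, 1.0, 1.0))  # → (2.0, 0.0)
```

Each additional revolute joint just adds another rotation term, which is why tools like Mathematica or MATLAB are handy for visualizing arms with more DOF before coding.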
Inverse kinematics is more difficult to solve than forward kinematics for a number of reasons, but foremost is that even for simple cases, there is most often more than one solution that positions an effector at the same location in space. As shown in Figure 2, rotating the shoulder down and the elbow up reaches the same location as the shoulder up and the elbow down.
Figure 2. Inverse kinematics often has multiple solutions
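The two-solution ambiguity of Figure 2 falls directly out of the law-of-cosines solution for a planar two-link arm. The sketch below (my own function names, assuming link lengths l1 and l2) returns both the elbow-down and elbow-up configurations:

```python
import math

def inverse_2link(x, y, l1, l2):
    """Both joint-angle solutions (theta1, theta2) that place the end
    effector of a planar two-link arm at target (x, y)."""
    # Law of cosines gives the elbow angle up to a sign.
    d = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(d) > 1:
        raise ValueError("target out of reach")
    solutions = []
    for elbow in (+1, -1):  # elbow-down and elbow-up branches
        theta2 = elbow * math.acos(d)
        theta1 = math.atan2(y, x) - math.atan2(
            l2 * math.sin(theta2), l1 + l2 * math.cos(theta2))
        solutions.append((theta1, theta2))
    return solutions
```

An arm controller (or a reach analysis in Mathematica or MATLAB) must then pick the solution that best avoids joint limits and obstacles, which is part of what makes inverse kinematics harder than forward kinematics.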
Sensing and computer vision
Sensing and computer vision in robotics often include tactile (limit switch), servo positioning, and embedded or overhead camera sensing. When camera sensing is overhead (not embedded on the arm), the cameras are often used to observe the arm joint or end-effector position relative to some target object. For example, if I simply want the arm to pick up an object, I can drive the centroid (center) of the gripper to coincide with the center of the object to grip as seen from a top view. I still have to compute a Z height off the reachable surface, but the surface Z height might be known in advance. Having camera feedback allows what would be a challenging inverse kinematics problem to be solved by a simpler error-driven guidance function. The ability to find edges, segment, and identify objects in a scene from an acquired image in real time is a basic machine vision capability that is often required. A simple raster-based edge-finding and centroid computation algorithm is included in Downloads as an example. The scenario described for a gripper and a ring object is shown in Figure 3.
Figure 3. Demonstration of machine vision tasking with object recognition and centroid calculation
The arm and machine vision system shown use recognition of multiple markers (on the arm) and objects such that tasking the arm to stack rings on the post is simply a matter of finding the centroids of each marker type (recognized by color) and placing ring centroids over the stacker pole blue marker centroid. This implementation was based upon the simple example provided by a student group in ECEN 5623, Real-Time Embedded Systems, in fall 2011 (Group #2).
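The raster-based centroid computation described above reduces to averaging the coordinates of the pixels that pass a threshold. A minimal sketch follows (the function name is my own; the full edge-finding version is in Downloads):

```python
def centroid(image, threshold):
    """Centroid (row, col) of all pixels at or above threshold.

    image is a 2-D grayscale image given as a list of rows. Returns
    None when no pixel passes the threshold.
    """
    count = sum_r = sum_c = 0
    for r, row in enumerate(image):          # raster scan, row by row
        for c, pixel in enumerate(row):
            if pixel >= threshold:
                count += 1
                sum_r += r
                sum_c += c
    if count == 0:
        return None
    return sum_r / count, sum_c / count

# A bright marker occupying two pixels of a tiny test image: the
# centroid lands between them.
print(centroid([[0, 0, 0],
                [0, 9, 9],
                [0, 0, 0]], threshold=5))  # → (1.0, 1.5)
```

An error-driven guidance loop then simply commands the arm to shrink the difference between the gripper-marker centroid and the object centroid in each new frame.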
Transformation of education with Cloud Robotics
It is quite possible that robotics is reaching the tipping point predicted by many where personal robots become much like the PC became in the late 1980s, with explosive growth as the cost is driven down while the value to individuals goes up. Organizations like Amazon, the National Aeronautics and Space Administration (NASA), education, hospitals using tele-medicine and tele-surgery, and home-owners making use of personal robotics for simple tasks like vacuuming and mopping are driving the deployment numbers up. Interest by hobby roboticists continues to rise and has led to the success of component and hobby system vendors like RobotShop, SparkFun, CrustCrawler, and Acroname, just to name a few. Likewise, along with open source software like ROS, Microsoft has released robotics software and a Kinect software development kit (SDK). The Kinect, used for gesture recognition in video gaming, has provided a 3D depth perception camera system that can be integrated into robotics—a use that is now blessed and supported by Microsoft.
The real limitation right now for personal robotics is education. Much like the PC, if cost and access to robotics platforms are brought down by economies of scale and by Cloud Robotics providing wider access, then a much larger workforce and better-educated consumer will be able to participate in the personal robotics industry. Open source software like ROS, affordable platforms like TurtleBot and CrustCrawler, and accessible components are all making robotics much more feasible in education for broader student populations.
The future of cloud-based robotics
This article makes an argument for cloud-based robotics, something that I thought was unique when I started my research but found to be a shared vision of many roboticists and robotics systems organizations. Although remote robotics has been practiced by organizations like NASA Jet Propulsion Laboratory with special requirements for deep-space exploration, it is clear that it has become a matter of convenience and a potential solution for significant social problems such as the availability of surgeons with specific skills in the same locations as the patients who need their services. The future of cloud robotics looks bright not only for education but for the business of robotics and as another driving function behind cloud computing in general.
Downloads
- Simple Image Centroid Example: centroid-example.zip (144KB)
- Simple Mathematica Reach Analysis: mathematica-input.zip (1KB)
Resources
- According to the Institute of Electrical and Electronics Engineers (IEEE), the total number of robots deployed globally is increasing.
- Read Cloud computing fundamentals (Grace Walker, developerWorks, December 2010) to learn more about cloud computing.
- The IEEE Robotics and Automation Society (RAS) is an excellent resource for educators and researchers alike. IEEE and RAS publish many in-depth, motivational articles on topics like "Cloud Robotics: Connected to the Cloud, Robots Get Smarter" and on what is going on at Google Cloud Robotics.
- Check out the cloud robotics presentation given by Damon Kohler, Ryan Hickman, Ken Conley, and Brian Gerkey in this video. Several useful software tools such as ROS are discussed, along with a standard robotic platform called PR2.
- I am considering how to make projects like this chess-playing robot built in ECEN 5623/4623 Real-Time Embedded Systems at CU Boulder in the Department of Electrical, Computer, and Energy Engineering certificate program in Embedded Systems available to more students as distance education. CU Boulder has other robotics labs like the Correll Lab opening up the potential for a community cloud.
- I outlined remote robotics lab concepts in "Building a continuing education program for embedded systems with labs and distance support," written for the International Association for Continuing Engineering Education 11th World Conference on Continuing Engineering Education held in 2008. Cloud robotics tools like rosjava, software infrastructure, and standard robotics platforms may enable personal robotics to become much like the PC is today.
- Many simple tactile and other low-bandwidth sensors can be used on robotics, but the use of computer vision provides powerful situational awareness. Image processing for computer vision can be simplified using OpenCV, which has been integrated with ROS tools along with methods for visualizing and interacting with robotics using rviz.
- For a more basic machine vision system, check out simple National Television System Committee (NTSC)/closed-circuit TV cameras that provide 30Hz 640x480 resolution and Linux® drivers like Video for Linux or VxWorks drivers along with uncompressed image processing with PBMPLUS. These tools allow students to write vision applications from the ground up.
- Another simple option is USB web cameras and the Linux UVC driver.
- Numerous organizations and resources exist to assist with Cloud Robotics for all age and experience levels, including FIRST, BEST, and iRobot Corporation's education initiative known as Starter Programs for the Advancement of Robotics Knowledge (SPARK).
- A possible practical application of cloud robotics is tele-surgery, which allows an expert surgeon to operate over a network wherever a patient may be.
- Remote robotics has been in practice at places like NASA for Mars Exploration missions, described in detail in "Roving Mars," by Steve Squyres. The speed-of-light communication delays of deep space force remote, rather than real-time, interaction with robotics.
- Likewise, NASA is investing in the cloud with programs like Nebula to ensure that data collected from robotic space exploration is widely available; cloud integration is also noted in this Amazon Web Services article.
- Robot kinematics and dynamics, the math for describing robotic motion, is non-trivial. Software tools like OpenRAVE can help, along with analysis tools like Mathematica and MATLAB. Before coding in C, it's useful to visualize forward and inverse kinematics in tools like Mathematica.
- For more information on kinematics, dynamics, computer vision, and real-time embedded systems, the educator working at this low level will find the following of interest: Robot Modeling and Control (Mark W. Spong, Seth Hutchinson, and M. Vidyasagar; Wiley, 2005); Real-Time Embedded Components and Systems (Sam Siewert; Delmar Cengage Learning, 2007); Machine Vision: Theory, Algorithms, Practicalities (E. R. Davies; Elsevier, 2004); and Learning OpenCV (Gary Bradski and Adrian Kaehler; O'Reilly, 2008).
- For software resources and documentation on how to integrate your sensor network and robotics with Android smart phones, see the Android Open Accessory Development Kit.
Get products and technologies
- For high-definition video and image capture, many new options have become available, including the BeagleBoard-xM platform booting Linux with Leopard iMaging HD cameras. Likewise, for both standard- and high-definition frame grabbers, tuners, and encoders/decoders, Hauppauge carries a wide variety of PCI cards and USB devices that often work with Linux drivers.
- Many options exist for off-the-shelf robotic platforms like the TurtleBot personal robot; simple 5-DOF and 6-DOF CrustCrawler robotic arms that I use in my class along with NTSC-based computer vision systems my students build; and great options for modifying or building your own robotics from the ground up with sensors, actuators, motor drivers, servos, and microcontrollers and other components from sources like SparkFun, Acroname Robotics, and the RobotShop.
- Much like a PC, personal robotics has emerged and includes options from Microsoft Robotics and the Kinect Vision System, which now includes a Kinect SDK, allowing hobby and personal robotics developers to do some interesting tricks.
- Robots indistinguishable from humans, built by Kokoro Company Ltd., fit Karel Čapek's original envisioning. Likewise, avatars (human likenesses captured in digital movies like James Cameron's "Avatar") fit the original definition, but subsequently, much less human-like and more practical robotic systems have emerged, allowing us to avoid repetitive, dangerous, and hazardous tasks or to extend our vision and reach.