Cloud-based education, Part 3: Cloud-based robotics for education

Leveraging sensor, actuator, and computational networks

The previous articles in this series reviewed strategies for using the cloud in general and explored high-performance computing. This article explores robotics and how the cloud makes robotics more economical and available to educators and students, with a broad range of sensors, actuators, computational resources, and applications. Remote access to physical systems that can be time-shared and location-shifted, in addition to on-site simulation and robot building, will vastly expand access to robotics. Today, with the cloud, hands-on interaction with remote robotics is far more feasible. This article provides a starting point for a cloud-based robotics educational strategy.


Sam Siewert (Sam.Siewert@colorado.edu), Senior Instructional Faculty, University of Colorado

Dr. Sam Siewert is an embedded system design engineer who has worked in the aerospace, telecommunications, and storage industries since 1988. He presently teaches at the University of Colorado Boulder as Senior Instructional Faculty in the Embedded Systems Certification Program, which he co-founded, and serves as an advisor to the Electrical Engineering Capstone Design course. In addition, he has recently founded Trellis-Logic LLC, an embedded systems consulting firm that specializes in digital media, robotics, and cyber-physical applications.



14 February 2012

Cloud robotics

The total number of robots deployed globally is increasing (see Resources). This increase has been driven by the automation of manufacturing and the growth of process control applications, along with the desire to spare human workers from risking their health in dangerous situations (for example, rescue robotics). It is clear that as the robot population grows, the ability to manage robotics remotely, and more specifically over the Web, will have increasing value. Recognizing this trend and the increased use of personal robotics (for example, the iRobot Roomba and Scooba), Google and Willow Garage teamed up in 2011 to announce a "cloud robotics" initiative with software and hardware support. The initiative includes integration of Willow Garage OpenCV with Robot Operating System (ROS) code and the development of PR2, a cloud-enabled personal robotic system. The belief is that interfacing robotics with the cloud will enable faster development of robotics technology and help lower the cost of sophisticated sensors and robotic platforms.

The key advantages and features of cloud robotics pointed out by Google are:

  • Lowering cost and barriers to entry by moving processing into the cloud, which removes limits on memory and storage and enhances perception by putting less computing on board the robotic platforms and more in the cloud:
    • Hosting a server on a robotic platform is expensive in terms of mobile battery power, cooling, and embedding.
    • Robots can use existing cloud software services like Google "Goggles" to recognize objects rather than implementing object recognition on each robotic platform.
    • The Android Open Accessory application programming interface makes interaction with robotics through mobile portals simpler compared to developing new mobile control interfaces.
    • Google's software-as-a-service features, like mapping, navigation, voice recognition, text-to-speech, and translation, can be leveraged.
  • Simplifying the challenges of robotics by supporting standard cloud robotics platforms (for example, Willow Garage PR2 and TurtleBot) and open source cloud robotics development software like ROS:
    • The PR2 carries a price tag of US$400,000 ($280,000 discounted for open source developers and researchers), but TurtleBot and iRobot research and education platforms are much more affordable.
    • ROS, which includes open source C++, Python, Lisp, and Java™ code, provides functionality needed for perception (computer vision using OpenCV), navigation, manipulation, and mobility.
    • As shown in the section on the challenge of robotics, robotics tasking is not simple, so an open source operating system like ROS allows educators and students to focus on applications and learn the details of the math in the process.

Google's cloud robotics initiative may not share all of the same goals that educators have, but it will allow educators to leverage a great resource. Google has a goal to support robotics innovation, industry, and education. The collaboration possible among education, industry, and innovators has great potential.


Cloud robotics application to education: a University of Colorado case study

Applying cloud robotics to education is easy in concept, but the challenge lies in the details of an integrated solution that enhances the student experience. Since starting a robotics lab in 2000, I have considered this and worked on related concepts. Even with falling costs, robotics can still be expensive, take up space, be dangerous (requiring keep-out operation areas), and, given their popularity, be in short supply. As such, in 2008 I suggested that an initiative be started at the University of Colorado (CU) Boulder to determine how to make ECEN 5623, an existing Real-Time Embedded Systems class that includes robotics and computer vision applications, available to distance students. Clearly, using the World Wide Web and Google Android mobile portals to robotics labs will make them more widely accessible. At the time, the open source software from Google, Microsoft®, and the broader community that makes this much more tractable was less developed. The announcement of the Google cloud robotics initiative at Google I/O will help enhance this potential. The more students, hobbyists, and developers who have access to quality robotics platforms, the more this open source initiative will grow.

The main goals for ECEN 5623 and robotics courses at CU are access related:

  • ECEN 5623 has limited enrollment because of lab space and access constraints as well as cost:
    • Today, ECEN 5623 (see Resources) has eight low-torque robotics and computer vision workstations and can host 24 students but is limited to this enrollment by the space and cost of hosting these systems.
    • Today, ECEN 5623 does not support distance learners, because it requires students to directly interact with the 5-degrees of freedom (DOF) and 6-DOF arms and computer vision embedded systems used in labs.
  • ECEN 5623 could be expanded using cloud robotics methods:
    • Remote access to robotics over the web or mobile Android would allow distance students to participate. The challenge is how to administer this if something goes wrong, but the portal technology found in ROS will help.
    • In addition to remote access to robotics, ROS.org's rviz and simulations tools like MATLAB, Mathematica, and the Virtual Reality Modeling Language (VRML) can be used to visualize robotics, so that students can debug robot-tasking algorithms before interfacing with real hardware.
    • Remote robotics has higher value than simulation, because there is no substitute for dealing with real sensors and actuators. No doubt teaching assistants, the instructor, and some IT support are needed on site for courses in robotics, but the ability to access platforms over the Web can enable scaling. For ECEN 5623, one thought is to rack-mount arms in keep-out spaces and allow on-site physical interaction as well as remote interaction: in essence, some level of virtual presence by students. As noted in the Google presentation on ROS, a publish/subscribe camera stream from the 3D Kinect vision system allows one to "be inside the head of the robot" remotely with a simple Android tablet (see the subscriber sketch after this list).
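
To make the publish/subscribe camera-stream idea concrete, below is a minimal sketch of a remote "robot's-eye" viewer using the ROS Python client. It assumes a cv_bridge version that provides imgmsg_to_cv2, and the topic name /camera/rgb/image_color is a hypothetical placeholder; substitute whatever image topic your platform actually publishes.

Listing 1. Minimal remote camera-stream subscriber (rospy sketch)

import rospy
import cv2
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()  # converts ROS Image messages to OpenCV arrays

def on_image(msg):
    # Convert the ROS Image message to a BGR array and display it.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    cv2.imshow('robot camera', frame)
    cv2.waitKey(1)  # give the OpenCV window a chance to refresh

if __name__ == '__main__':
    rospy.init_node('remote_viewer')
    # Topic name below is a placeholder; check the topics your robot publishes.
    rospy.Subscriber('/camera/rgb/image_color', Image, on_image)
    rospy.spin()  # process callbacks until shutdown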

Leveraging community clouds for robotics would enhance the effort of cloud-enabling an existing robotics course. For example, we can link other robotics labs at CU, like the Correll Lab (see Resources), and share resources, processing, and algorithms more easily. The Correll Lab focuses on mobile robotics, self-assembling robotics, and many robotics topics that go beyond the more introductory topics in ECEN 5623.


Designing a cloud-based robotics course

Very often, robotics classes focus on building robots rather than on programming, tasking, optimizing, and operating them. Construction is clearly an important aspect of robotics education, and classes on the design and implementation of robotics are critical. In addition, however, the software and operations technology for robotics is key to greater success and wider deployment. Just as many people thought software was of little consequence to early PCs, many ignored the software aspect of robotics until recently. One reason is that there has been no single leading hardware platform in robotics, although this is starting to change as vendors like Willow Garage and iRobot become ubiquitous. Companies opening such platforms to education provide the key to long-term success and market growth. Education will benefit.

Major considerations for designing a new cloud robotics course are as follows:

  • Will the course focus on the construction or tasking and operation of robotics? Much of the Google Cloud Robotics initiative relieves the educator and student from building the platform and software, but if those fundamentals are the goal of the course, the course designer will want to carefully pick and choose what is re-used versus built:
    • Electrical and mechanical engineering courses may want to leverage resources at a component level for actuation, sensing, vision systems, and embedded control, providing the opportunity for students to then test these components and subsystems in a cloud robotics lab.
    • Software re-use is a huge time saver and, as Google points out, allows the user to focus on higher-level applications, but the value of full source is that students can also implement and integrate low-level fundamental algorithms on their own to learn about them.
  • For focus on high-level robotics theory, cloud robotics can make the higher levels of robotics tasking more immediately accessible:
    • The ability to jump in at the highest levels of robotics theory, tasking mobile robots, and working at the perception level will enable classes not only for electrical and mechanical engineers but also for a wider audience, much as the PC eventually became accessible to all disciplines.
    • Courses at the tasking level will help drive interest in the component and subsystem level in electronics, mechanics, and software that otherwise might be difficult for educators to motivate.

Because education is concerned with teaching fundamentals and empowering students to explore, the course designer should carefully consider education goals for cloud robotics. Subsystems like Kinect, software like ROS, and environments like Cloud Robotics will empower and help motivate students, but it is up to educators to bring this back to focus on fundamentals. The worst outcome would be courses that are only "integration" or "user" courses, where students leave the experience unaware of the challenges or how things work inside the black boxes. The ideal is to expose them to the internals and allow them to explore putting larger systems together that enable the construction of motivating robotics. The next section looks at some of the details that make robotics a challenge. The goal is not to remove the challenge but to remove it as a barrier to entry for both educators and students, so they can peel this challenge apart layer by layer to the depth that is appropriate for the institution, the students, and the educator, from high school to universities and in a wide range of disciplines.


The challenge of robotics

Robotics involves the engineering of robots, a term inspired by Czech writer Karel Čapek, whose play R.U.R. featured artificial humans indistinguishable from real ones; the term has more broadly come to include a wide variety of systems with human-like abilities to manipulate, sense, and interact with an environment. Whether a system is a robot is somewhat subjective, but at a minimum, a robot must have human-like senses (vision, tactile, auditory) and the ability to move and manipulate objects as humans do, with actuation similar to human limbs. Often, we design robots with specialized sensors and actuators that give them an advantage operating in dangerous, dirty, tedious, exacting, or other environments that humans find challenging. Robotics and automation are expanding both in controlled industrial settings and in less controlled field scenarios. As the operational environments become more challenging, robots must include more sophisticated mobility, manipulation, and sensing systems.


Robotic kinematics and manipulation

Robot kinematics involves the rotational (revolute) and translational (prismatic) movement of joints in robotic manipulators, which, as in humans, can include arms, hands, legs, and torsos. For mobile robots, a simple system with independently driven wheels and a caster is often used for so-called "holonomic" translation on an XY plane (better known as a floor). Holonomic in robotics simply means that the system's controllable DOF equals its total DOF. In this simple example, the robot with the independently driven wheels and caster has two controllable inputs (the left and right motors) and the ability to move in any direction on the XY plane (2-DOF). Arms and other manipulators may have many more DOF; often an arm is 5-DOF or 6-DOF. For example, an arm may have a rotating base; a rotating shoulder, elbow, or wrist; and a grasping end effector. Each joint in turn may have yaw, pitch, and roll (a potential 3-DOF per joint), but most often, joints have only 1- or 2-DOF.
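
As a concrete illustration of the two-input wheeled base just described, the sketch below integrates the standard differential-drive kinematic equations over small time steps. It is a simplified model rather than code from any course platform, and the wheel-base value is an assumed example.

Listing 2. Differential-drive kinematics sketch for a two-wheel base

import math

WHEEL_BASE = 0.3  # meters between the two driven wheels (assumed value)

def step(x, y, theta, v_l, v_r, dt):
    """Advance the robot pose by dt seconds given wheel speeds in m/s."""
    v = (v_r + v_l) / 2.0             # forward speed of the base center
    omega = (v_r - v_l) / WHEEL_BASE  # rotation rate about the center
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Example: equal wheel speeds drive straight; unequal speeds turn.
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    pose = step(*pose, v_l=0.2, v_r=0.25, dt=0.05)  # gentle left arc for 5 s
print("pose after 5 s: x=%.2f m, y=%.2f m, heading=%.1f deg"
      % (pose[0], pose[1], math.degrees(pose[2])))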

Industrial robotic arms can be classified by the combination of revolute and prismatic joints they include. For example, the SCARA robotic arm includes only base rotation over an XY plane, a displaced second rotational DOF above the base, and a prismatic actuator that lowers or raises a tool onto or off the surface the arm sits upon and can rotate the tool (as shown in Figure 1), for a total of 4-DOF. By comparison, a Cartesian robot has prismatic joints that allow it to translate along each axis (X, Y, and Z), as shown in Figure 1.

Figure 1. SCARA and Cartesian arm kinematics
Image showing simple kinematics
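
The SCARA geometry in Figure 1 has compact forward kinematics: the two in-plane revolute joints place the tool with simple trigonometry, the prismatic joint sets the Z height, and the tool-roll angle accumulates in the plane. The sketch below illustrates this geometry with assumed link lengths; it models the general SCARA layout, not any particular commercial arm.

Listing 3. Forward kinematics sketch for the 4-DOF SCARA geometry

import math

L1, L2 = 0.25, 0.20  # upper-arm and forearm lengths in meters (assumed)

def scara_fk(theta1, theta2, d3, theta4):
    """Return (x, y, z, tool_roll) for the given joint values (radians, meters)."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    z = -d3                          # prismatic joint lowers the tool (sign is a modeling choice)
    roll = theta1 + theta2 + theta4  # in-plane tool rotation accumulates
    return x, y, z, roll

print(scara_fk(math.radians(30), math.radians(45), 0.05, 0.0))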

Inverse kinematics can be difficult to solve

Inverse kinematics is the general problem of finding a series of revolute joint rotations that results in the end effector reaching a specific position in Cartesian space relative to the base of the manipulator. If you set up a system of simultaneous equations and attempt to solve it, you come up with more than one answer for the shoulder and elbow rotations. Which solution is correct? Luckily, software packages like OpenRAVE provide forward and inverse kinematics for arbitrary kinematic chains (see Resources for a link to more information).

Inverse kinematics is more difficult to solve than forward kinematics for a number of reasons, but foremost is that even in simple cases, there is usually more than one solution that positions the end effector at the same location in space. As shown in Figure 2, rotating the shoulder down and the elbow up reaches the same location as the shoulder up and the elbow down.

Figure 2. Inverse kinematics often has multiple solutions
Image showing some of the solutions for inverse kinematics
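
For the two-link planar case in Figure 2, this ambiguity can be shown in a few lines: the law of cosines yields two elbow angles (elbow-down and elbow-up), each with a matching shoulder angle. The sketch below uses assumed link lengths and prints both solutions for a reachable target.

Listing 4. Two-link planar inverse kinematics showing both solutions

import math

L1, L2 = 0.25, 0.20  # link lengths in meters (assumed)

def two_link_ik(x, y):
    """Return the (shoulder, elbow) angle pairs in radians that reach (x, y)."""
    r2 = x * x + y * y
    # Law of cosines gives the elbow angle.
    c2 = (r2 - L1 * L1 - L2 * L2) / (2.0 * L1 * L2)
    if abs(c2) > 1.0:
        return []  # target is outside the reachable workspace
    solutions = []
    for elbow in (math.acos(c2), -math.acos(c2)):  # elbow-down / elbow-up
        k1 = L1 + L2 * math.cos(elbow)
        k2 = L2 * math.sin(elbow)
        shoulder = math.atan2(y, x) - math.atan2(k2, k1)
        solutions.append((shoulder, elbow))
    return solutions

for s, e in two_link_ik(0.3, 0.1):
    print("shoulder %.1f deg, elbow %.1f deg"
          % (math.degrees(s), math.degrees(e)))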

Sensing and computer vision

Sensing and computer vision in robotics often include tactile sensing (limit switches), servo position feedback, and embedded or overhead camera sensing. In the case of camera sensing that is overhead (not embedded on the arm), the cameras are often used to observe the arm joint or end effector position relative to some target object. For example, if I simply want the arm to pick up an object, I can drive the centroid (center) of the gripper to coincide with the center of the object to grip, as seen from a top view. I still have to compute a Z height off the reachable surface, but the surface Z height might be known in advance. Camera feedback allows what would be a challenging inverse kinematics problem to be solved by a simpler error-driven guidance function. The ability to find edges, segment, and identify objects in a scene from an acquired image in real time is a basic machine vision capability that is often required. A simple raster-based edge-finding and centroid computation algorithm is included in Downloads as an example. The scenario described for a gripper and a ring object is shown in Figure 3.
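
The core of such an algorithm is small. Below is a simplified sketch of raster-based centroid computation over a thresholded image; it illustrates the idea only and is not the code in the Downloads package.

Listing 5. Raster-scan centroid computation sketch

def centroid(image, threshold=128):
    """Return the (row, col) centroid of above-threshold pixels, or None."""
    total = sum_r = sum_c = 0
    for r, row in enumerate(image):      # raster scan, top to bottom
        for c, pixel in enumerate(row):  # left to right within each row
            if pixel > threshold:
                total += 1
                sum_r += r
                sum_c += c
    if total == 0:
        return None  # no object pixels found
    return (sum_r / float(total), sum_c / float(total))

# Tiny synthetic frame: a bright 2x2 blob in a dark 6x6 image.
frame = [[0] * 6 for _ in range(6)]
for r in (2, 3):
    for c in (3, 4):
        frame[r][c] = 255
print(centroid(frame))  # -> (2.5, 3.5)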

What's the difference between computer and machine vision?

Machine vision is generally considered to be image processing of scenes from a digital camera that are acquired in a controlled environment—known lighting conditions; field of view; camera angles; and well-tested, easy-to-recognize and segment objects of interest. By comparison, when a digital camera is used to observe arbitrary scenes in uncontrolled or less controlled conditions, this is most often considered a computer vision problem, which requires much more adaptive and intelligent image processing. In short, machine vision is specifically applied to the extraction of system state from image data in process control and/or robotics, whereas computer vision involves extraction of information from arbitrary scenes observed in the real world (see Resources).

Figure 3. Demonstration of machine vision tasking with object recognition and centroid calculation
Demonstration of machine vision tasking with object recognition and centroid calculation

The arm and machine vision system shown recognize multiple markers (on the arm) and objects, so that tasking the arm to stack rings on the post is simply a matter of finding the centroid of each marker type (recognized by color) and placing the ring centroids over the blue marker centroid on the stacker pole. This implementation was based upon the simple example provided by a student group in ECEN 5623, Real-Time Embedded Systems, in fall 2011 (Group #2).
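
A color-keyed version of the error-driven guidance described earlier might look like the sketch below, using OpenCV's inRange and moments functions. The BGR color ranges and the proportional gain are assumed, illustrative values; this is a sketch of the technique, not the students' implementation.

Listing 6. Color-marker centroid guidance sketch (OpenCV)

import cv2
import numpy as np

def color_centroid(bgr, lower, upper):
    """Centroid (x, y) of pixels inside a BGR color range, or None."""
    mask = cv2.inRange(bgr,
                       np.array(lower, dtype=np.uint8),
                       np.array(upper, dtype=np.uint8))
    m = cv2.moments(mask)
    if m['m00'] == 0:
        return None  # no pixels matched the color range
    return (m['m10'] / m['m00'], m['m01'] / m['m00'])

def guidance_step(frame, gain=0.01):
    """One proportional-guidance step: returns a small (dx, dy) arm command."""
    # Assumed marker colors: a red-ish gripper marker, a blue-ish pole marker.
    gripper = color_centroid(frame, (0, 0, 100), (80, 80, 255))
    pole = color_centroid(frame, (100, 0, 0), (255, 80, 80))
    if gripper is None or pole is None:
        return (0.0, 0.0)  # hold position until both markers are visible
    ex, ey = pole[0] - gripper[0], pole[1] - gripper[1]  # pixel-space error
    return (gain * ex, gain * ey)  # nudge the arm toward the pole each frame

Driving the arm with small pixel-space corrections each frame converges the gripper onto the target without ever solving the inverse kinematics explicitly, which is exactly the simplification camera feedback buys.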


Transformation of education with Cloud Robotics

It is quite possible that robotics is reaching the tipping point predicted by many, where personal robots become much like the PC in the late 1980s, with explosive growth as cost is driven down while the value to individuals goes up. Organizations like Amazon, the National Aeronautics and Space Administration (NASA), educational institutions, hospitals using telemedicine and telesurgery, and homeowners making use of personal robotics for simple tasks like vacuuming and mopping are driving the deployment numbers up. Interest from hobby roboticists continues to rise and has led to the success of component and hobby system vendors like RobotShop, SparkFun, CrustCrawler, and Acroname, to name a few. Likewise, along with open source software like ROS, Microsoft has released robotics software and a Kinect software development kit (SDK). The Kinect, used for gesture recognition in video gaming, has provided a 3D depth-perception camera system that can be integrated into robotics, a use that is now blessed and supported by Microsoft.

The real limitation right now for personal robotics is education. Much like the PC, if cost and access to robotics platforms are brought down by economies of scale and by Cloud Robotics providing wider access, then a much larger workforce and better-educated consumer will be able to participate in the personal robotics industry. Open source software like ROS, affordable platforms like TurtleBot and CrustCrawler, and accessible components are all making robotics much more feasible in education for broader student populations.


The future of cloud-based robotics

This article makes an argument for cloud-based robotics, an idea I thought was unique when I started my research but found to be a vision shared by many roboticists and robotics systems organizations. Although remote robotics has been practiced by organizations like the NASA Jet Propulsion Laboratory, with special requirements for deep-space exploration, remote operation has clearly become a matter of convenience and a potential solution for significant social problems, such as the availability of surgeons with specific skills in the same locations as the patients who need their services. The future of cloud robotics looks bright, not only for education but for the business of robotics and as another driving function behind cloud computing in general.


Downloads

  • Simple Image Centroid Example: centroid-example.zip (144KB)
  • Simple Mathematica Reach Analysis: mathematica-input.zip (1KB)

Resources

Get products and technologies

  • For high-definition video and image capture, many new options have become available, including the BeagleBoard-xM platform booting Linux with Leopard Imaging HD cameras. Likewise, for both standard- and high-definition frame grabbers, tuners, and encoders/decoders, Hauppauge carries a wide variety of PCI cards and USB devices that often work with Linux drivers.
  • Many options exist for off-the-shelf robotic platforms like the TurtleBot personal robot; simple 5-DOF and 6-DOF CrustCrawler robotic arms that I use in my class along with NTSC-based computer vision systems my students build; and great options for modifying or building your own robotics from the ground up with sensors, actuators, motor drivers, servos, and microcontrollers and other components from sources like SparkFun, Acroname Robotics, and the RobotShop.
  • Much as the PC once did, personal robotics has emerged, with options from Microsoft Robotics and the Kinect vision system, which now includes a Kinect SDK, allowing hobby and personal robotics developers to do some interesting tricks.
  • Robots indistinguishable from humans, built by Kokoro Company Ltd., fit the original envisioning by Karel Čapek. Likewise, avatars (human likenesses captured in digital movies like James Cameron's "Avatar") fit the original definition, but subsequently, much less human-like and more practical robotic systems have emerged, allowing us to avoid repetitive, dangerous, and hazardous tasks or to extend our vision and reach.
