How cloud will bring robots down to Earth

Gajamohan Mohanarajah (Photo: TEDx)

The term “cloud robotics” is relatively new, having been coined in 2010 by James Kuffner while at Carnegie Mellon. Five years later, cloud robots exist mainly in the lab. But it’s the combination of cloud and robotics that will eventually enable self-driving cars. And industries as varied as healthcare, transportation, and emergency response are looking to cloud robots to eventually provide faster and safer basic medical procedures, mass transit services, and search and rescue missions. In fact, a recent study from researchers at Frost & Sullivan declared cloud robotics “will lead to the development of smart robots that have higher computing efficiency and consume less power. These attributes will drive down the cost of manufacturing as there is less hardware and also result in lower emissions.”

Thoughts on Cloud wanted to get an update on the state of robotics and the role the cloud plays in making them a reality, so we spoke with Gajamohan Mohanarajah, CEO of Rapyuta Robotics, the company behind a cloud-based, open-source, platform-as-a-service framework for robots. Dedicated to furthering this technology, Gajan earned a doctoral degree from ETH Zurich while developing RoboEarth, an initiative funded by the European Union to build an Internet for robots. In January 2015, just six months after its founding, Rapyuta Robotics raised 351 million Japanese yen (roughly 2.9 million US dollars) in its first round of funding, showing that investors are setting their sights on cloud robotics as an emerging growth area.

ThoughtsOnCloud: Where are we in the evolution of robots?

Mohanarajah: We have a lot to do. There is a huge market and there are a lot of robots today, but they are mostly in factories where the task is very well defined and there are no uncertainties. These robots are doing the same things over and over again. The reason you don’t see robots outdoors is that outside factors are unpredictable, so robots cannot handle all of these situations. We, as humans, learn over years to handle these things, whereas robots aren’t learning yet, so they find it hard to move around.

The first challenge is the ability to robustly move from point A to point B in a normal environment, such as a home or office, which is still not a solved problem. Once robots start moving, they have to understand what they are doing. They have to perceive and then associate, semantically linking these things, so a lot of artificial intelligence knowledge has to come in. For example, they can understand concepts of color and shape, but they have to connect them.

TOC: What are some of the biggest barriers to ongoing development?

Mohanarajah: Addressing these challenges is a problem of both hardware and software. If you have a small team, it is hard to tackle all of them.

The other is the availability of sensors. A robot is a collection of sensors and actuators, but the robotics market is not large enough to justify building sensors just for robots. We are leveraging sensors created for existing technologies like gaming and smartphones.

TOC: What are some of the biggest opportunities or applications for robots?

Mohanarajah: As a company, initially we want to do security and inspection. If you have a big building or bank, you typically have four to five security guards doing regular patrols. We want to reduce the number of humans who have to do this and use robots instead. The robots will perform regular patrols and work with existing sensors, so that if a sensor sends an alert to the cloud, this common medium [for computation, storage, and sharing of knowledge] will send a robot to that place to gather evidence if there is something wrong. The cloud [helps to] allocate resources dynamically.
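That patrol scenario boils down to a simple dispatch loop in the cloud. The sketch below is a minimal illustration of the pattern, not Rapyuta’s actual API: all class and function names are hypothetical, and a real system would publish a navigation goal to the robot rather than print a message.

```python
# Minimal sketch of cloud-side dispatch: a sensor alert arrives and the
# service tasks the nearest idle patrol robot. Names are hypothetical.
from dataclasses import dataclass
from math import hypot
from typing import List, Optional


@dataclass
class Robot:
    robot_id: str
    x: float
    y: float
    idle: bool = True


@dataclass
class SensorAlert:
    sensor_id: str
    x: float
    y: float


def dispatch(alert: SensorAlert, fleet: List[Robot]) -> Optional[Robot]:
    """Pick the closest idle robot and task it with gathering evidence."""
    idle_robots = [r for r in fleet if r.idle]
    if not idle_robots:
        return None  # fall back to notifying a human guard
    chosen = min(idle_robots, key=lambda r: hypot(r.x - alert.x, r.y - alert.y))
    chosen.idle = False
    # A real deployment would publish a goal to the robot (e.g. over a
    # message queue); here we just log the decision.
    print(f"Sending {chosen.robot_id} to sensor {alert.sensor_id}")
    return chosen


if __name__ == "__main__":
    fleet = [Robot("patrol-1", 0.0, 0.0), Robot("patrol-2", 40.0, 10.0)]
    dispatch(SensorAlert("door-3", 35.0, 12.0), fleet)
```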

Inspection is quite similar. Here the robot watches for changes. For example, companies want to send robots to inspect wind turbines. The robots would go around a turbine, look for cracks, and send a notification. These kinds of robots are very lightweight and low cost, but we still want them to be very intelligent. This is where the cloud comes in.

What I am describing is different from the Internet of Things (IoT). We see the IoT as a collection of sensors that are sending data to the cloud. We see cloud robotics as not just sending data but then trying to do something based on that data; robotics is an extension of the IoT, bringing in intelligence and motion. If something goes wrong, say there’s a fire, the sensor can only notify, but the robot may be able to do something about it and put the fire out.

TOC: Why do we need a framework that is cloud based?

Mohanarajah: The cloud nowadays is a scalable resource of computation, storage, and networks. Why this is interesting for robotics is that robots need a lot of computation and storage. You may know about the IoT, but what is different here from a sensor is that robots are typically moving around in an environment, not only a factory environment but also a very dynamic one. So sometimes you may require a lot of computation, and sometimes you may not require it at all, for example when the robot is idle. Because of these dynamic changes in computation and storage, the modern cloud provides the perfect platform for robots to exploit. That is the biggest reason why you need a framework that is cloud based.
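To make that elasticity argument concrete, here is a rough sketch of the offloading pattern: a heavy step (a stand-in “map building” call) goes to a cloud endpoint when the network is reachable, and the robot degrades to a cheaper on-board fallback when it is not. The endpoint URL and payload format are assumptions for illustration, not part of any real Rapyuta service.

```python
# Offload heavy computation to the cloud when possible; fall back locally.
import json
import urllib.request

CLOUD_ENDPOINT = "https://example-robot-cloud.invalid/api/build_map"  # hypothetical


def build_map_in_cloud(scan_points, timeout=2.0):
    """Send raw scan data to an elastic cloud service and return its result."""
    payload = json.dumps({"scan": scan_points}).encode("utf-8")
    request = urllib.request.Request(
        CLOUD_ENDPOINT, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request, timeout=timeout) as response:
        return json.load(response)


def build_map_locally(scan_points):
    """Cheap on-board fallback: keep only a coarse, downsampled map."""
    return {"map": scan_points[::10], "resolution": "coarse"}


def build_map(scan_points):
    try:
        return build_map_in_cloud(scan_points)
    except OSError:
        # No network or the service is unreachable: degrade gracefully.
        return build_map_locally(scan_points)
```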

The other reason is the sharing of knowledge. For example, when you buy a robot vacuum cleaner, it is programmed so that when it hits something it turns a certain number of degrees and keeps going. Every time you turn on the vacuum cleaner it hasn’t learned anything and will still bump into things. We wanted to help robots learn and even share that information with other robots in the next city. So a common medium is important not just for storage (because robots are going to collect a lot of data); you also need some sort of algorithm that extracts knowledge from this data, and it must live in a common medium in order to be shared.
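A toy illustration of that “common medium”: robots publish what they learn (here, obstacle locations) to a shared store that any other robot can query before planning. The interface below is invented for the example; RoboEarth and Rapyuta’s platform are considerably richer than this in-memory stand-in.

```python
# Toy shared knowledge store: one robot reports an obstacle, another reuses it.
from collections import defaultdict


class SharedKnowledgeStore:
    """In-memory stand-in for a cloud-hosted knowledge base."""

    def __init__(self):
        self._obstacles = defaultdict(set)  # environment id -> set of (x, y)

    def report_obstacle(self, environment, x, y):
        self._obstacles[environment].add((x, y))

    def known_obstacles(self, environment):
        return set(self._obstacles[environment])


store = SharedKnowledgeStore()

# Robot A bumps into something and shares what it learned.
store.report_obstacle("office-floor-2", 3.5, 1.2)

# Robot B, working the same floor later, plans around it instead of re-learning.
for x, y in store.known_obstacles("office-floor-2"):
    print(f"Planning around obstacle at ({x}, {y})")
```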

TOC: What are the key elements required?

Mohanarajah: In addition to the scalability of computation and storage, you also need availability of networks. You can’t go into a desert or tropical rain forest where there is no network connection; you need proper access to data centers. In a normal urban environment you always have these things, so it is becoming easier to use these kinds of technologies.
