Dockercon14 Keynote - Borderless Clouds
gcuomo
At Dockercon14, Andrew Spyker and I teamed up to talk about and demo what we are doing, in IBM and across the industry, around a topic that I call the "borderless cloud". During the talk, I covered the concept of the borderless cloud and how it relates to IBM's strategy. I also talked about how Docker, through its openness, is one of the erasers of the lines between the various clouds. We discussed how, regardless of vendor, deployment option, and location, we need to focus on the following things:
Fast - Especially in the age of devops and continuous delivery, lack of speed is a killer. Even worse, and frankly unforgivable, are manual steps that introduce errors; they are no longer acceptable. Docker helps here with its layered file systems, which allow just the updated layers to be pushed and loaded. With its process model, containers start as fast as you'd expect your applications to start. Finally, Docker's transparent (all the way to source) description model for images guarantees you run what you coded, not some mismatch between dev and ops.
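As a minimal sketch of how that layering works, consider a Dockerfile like the one below (the base image, application name, and file layout are assumptions for illustration). Each instruction produces a cached layer, so an ordinary code change rebuilds and pushes only the final layers:

```dockerfile
# Hypothetical web app image; each instruction below creates a cached layer.
FROM ubuntu:14.04                      # base layer, rarely changes
RUN apt-get update && apt-get install -y nodejs npm   # runtime layer, changes rarely
COPY package.json /app/package.json
RUN cd /app && npm install             # dependency layer, cached until package.json changes
COPY . /app                            # source layer: only this is rebuilt/pushed on a code change
CMD ["node", "/app/server.js"]
```

Because the Dockerfile is itself source, it also serves as the transparent description of the image: what runs in production is exactly what is written here.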
Optimized - Optimized means not only price/performance but also optimizing the location of workloads. In the price/performance area, IBM technologies (like IBM Java's read-only memory class sharing) can provide much faster application startup and lower memory usage when similar applications run on a single node. Also, getting the hypervisor out of the way can improve I/O performance significantly (still a large challenge in VM-based approaches), which helps data-oriented applications like Hadoop and databases.
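As a hedged sketch of the class-sharing idea: -Xshareclasses is the IBM Java option that enables a read-only shared class cache; the image names, cache name, and mount path below are hypothetical. Two containers on the same node point at one cache, so classes are loaded once and shared:

```shell
# Hypothetical: two containers on one host sharing an IBM Java class cache.
# The volume path, cache name, and image names are assumptions for illustration.
docker run -v /opt/shareclasses:/cache myorg/app-a \
    java -Xshareclasses:name=appcache,cacheDir=/cache -jar app-a.jar
docker run -v /opt/shareclasses:/cache myorg/app-b \
    java -Xshareclasses:name=appcache,cacheDir=/cache -jar app-b.jar
```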
Open - Openness of cloud is very important to IBM, just as it was for Java and Unix/Linux. Docker can provide the same write once, run anywhere experience for cloud workloads. It is interesting how this openness, combined with being fast and small, also allows for advances in devops not possible before with VMs. Because the overhead is so much lower than running a full virtual machine, it is now possible to run production-like workload configurations on premise (and on developers' laptops) in almost exactly the same way as they are deployed in production.
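The write once, run anywhere flow above might be sketched as follows (the image name and registry are hypothetical); the same image and the same run command work on a laptop, on premise, or on any cloud host with a Docker daemon:

```shell
# Build once on a developer laptop, push to a shared registry,
# then run the identical image anywhere. Names are assumptions.
docker build -t myorg/catalog:1.0 .
docker push myorg/catalog:1.0
docker run -d -p 8080:8080 myorg/catalog:1.0
```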
Responsible - Moving fast isn't enough; you have to move fast with responsibility. Specifically, you must not ignore security, high availability, and operational visibility while moving at that speed. With the automated, repeatable deployment that Docker (and related scheduling systems) makes possible, combined with micro-service application design, high availability and automatic recovery become easier. Enterprise deployments of Docker will also start to add security and operational visibility capabilities.
Here is the full video...