March 14, 2017 | Written by: Rob Hirschfeld, Founder and CEO, RackN
Portability is the core promise of interoperability. When systems are interoperable, we can move code and processes between different infrastructures and platforms with minimal concern about the layers below. In the past, I’ve described this as a “black box” approach where users only care about the APIs and are blind to the implementation details. Ideally, APIs provide a perfect abstraction so that different implementations of the API are completely equal.
Sadly, even small implementation differences can break API interoperability.
That means that when users of open software install it, configuration choices for their environment or technology stack may cause the software to behave slightly differently, breaking interoperability. In another common case, the pace of innovation creates problems: newly introduced features can change behaviors in ways that make it difficult to interoperate between versions. While these issues may not affect an individual user’s experience, they have profound impacts across the community.
Without interoperability, it’s difficult to build ecosystems and share best practices.
Ecosystems and shared practices are a significant part of the user value for large, complex open platforms like OpenStack, Cloud Foundry and Kubernetes. The ecosystem ensures that people build products on top of the platform, furthering the platform’s relevance and utility. Shared practices help control the cost of operating the platforms by letting the community benefit from communal operational experience.
We can drive interoperability with architectural work that drives consistent behaviors, or add APIs to discover useful variations. Communities need to be reasonably opinionated to reduce variation. When variation is required—such as when supporting different SDN layers or container runtime engines—projects should maintain clear APIs to abstract the implementations.
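As a minimal sketch of what "clear APIs to abstract implementations" can look like in practice: platform code depends only on an abstract interface, and each engine plugs in behind it. The `ContainerRuntime` class and the engine names here are hypothetical illustrations, not real project APIs.

```python
from abc import ABC, abstractmethod

class ContainerRuntime(ABC):
    """Hypothetical abstraction over container runtime engines."""

    @abstractmethod
    def start(self, image: str) -> str:
        """Start a container from an image; return a container ID."""

class DockerRuntime(ContainerRuntime):
    def start(self, image: str) -> str:
        return f"docker-{image}"  # stand-in for a real Docker API call

class RktRuntime(ContainerRuntime):
    def start(self, image: str) -> str:
        return f"rkt-{image}"  # stand-in for a real rkt invocation

def deploy(runtime: ContainerRuntime, image: str) -> str:
    # Platform code depends only on the abstract API, never on the engine,
    # so swapping runtimes does not break anything built on top.
    return runtime.start(image)
```

Because `deploy` never touches engine-specific details, any conforming runtime is interchangeable — which is exactly the interoperability property we want the platform to preserve.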
There are also interoperability tasks within the work to maintain a project. This work takes the form of maintaining and applying compliance tests to running systems such as OpenStack DefCore/Refstack work championed by IBM and others. It also means enforcing parity between development, test and production environments. Interoperability breaks down quickly when developer and continuous integration environments are very different from production deployments.
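The compliance-testing idea above can be sketched very simply: a conforming deployment must advertise a required set of core capabilities, and the test reports anything missing. The capability names and the `check_compliance` helper are illustrative assumptions, not the actual DefCore/RefStack schema.

```python
# Hypothetical required capability set; real interop programs such as
# OpenStack DefCore/RefStack define theirs in versioned guidelines.
REQUIRED_CAPABILITIES = {
    "compute.servers.create",
    "compute.servers.list",
}

def check_compliance(advertised: set) -> list:
    """Return the required capabilities a deployment fails to advertise.

    An empty result means the deployment passes this (toy) check.
    """
    return sorted(REQUIRED_CAPABILITIES - advertised)
```

Running a check like this against every environment — development, test and production — is one way to catch the drift between them before it breaks interoperability.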
But the primary driver for interoperability is users demanding it.
Users and operators can put significant pressure on project leaders and vendors to ensure that the platforms are interoperable. That means rewarding vendors who take the time to do the kind of interoperability work I described, in addition to adding features. It also means rewarding vendors who help drive operational improvements in a shared way. Those actions encourage shared best practices.
If you like these ideas, please subscribe to my blog, RobHirschfeld.com, where I explore site reliability engineering, DevOps and open hybrid infrastructure challenges. And join me at my session at InterConnect 2017: Open cloud architecture: Think you can out-innovate the best of the rest?