November 20, 2018 | Written by: Spyros Tzovairis
Categorized: Compute Services | Customer Stories
DTWISE is a Greek software company that helps enterprises understand and optimize their energy consumption by providing real-time insights. We gather and analyze data from Internet of Things (IoT) devices, and our primary route to market is through utility providers who white-label our service and provide it to their customers. ELPEDISON, one of the three largest energy utilities in Greece, is our customer.
Rapid move to microservices
As DTWISE grew, it became clear that we needed to change the architecture of our existing monolithic software to make it more scalable and easier to support. We also wanted better control over deployment and release management, and we decided that a microservices-based approach would be best. Based on past experience, we believed that Kubernetes container orchestration software would help us better manage the configuration dependencies inherent in a microservices architecture.
Assess your readiness for cloud-native dev
We had a tight four-month deadline to deliver the new version of our software to ELPEDISON, and we didn’t want anything to distract from the core development work. Therefore, we looked for a managed service that offered vanilla Kubernetes because we didn’t have time to learn anything new or vendor-specific. We chose the IBM Cloud Kubernetes Service because it met this criterion and because we felt confident that IBM Cloud’s scale and reputation would help us meet the contracted SLAs for our major new client. In addition, we were able to benefit from credits towards our monthly usage during the first year thanks to the IBM Cloud startup program.
Scalability and uptime
We managed the deployment of our new containerized application to Kubernetes without external assistance, and we were impressed by the quality and ease of use of the IBM Cloud Kubernetes Service. Today, we are running on two dedicated instances, each with four cores and 16 GB of RAM, supporting around 20 distinct microservices. Typically, we have a separate Kubernetes pod for each service, which makes it easier to perform rolling updates without taking down the whole application. We also create a different set of pods for each utility and for our direct B2C sales. Kubernetes automatically handles the scaling of the pods, ensuring that at least one pod of each service is running. This was one of the primary reasons for choosing this approach.
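A per-service, per-tenant Deployment of this kind might look like the following sketch. The service name, labels, and image are hypothetical and only illustrate the pattern: one Deployment per service, labeled per tenant, with Kubernetes keeping at least one replica alive and rolling updates replacing pods gradually.

```yaml
# Hypothetical example; names, labels, and image are illustrative, not DTWISE's actual config.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: alerts-service-elpedison
  labels:
    app: alerts-service
    tenant: elpedison          # a separate set of pods per utility (and for B2C)
spec:
  replicas: 1                  # Kubernetes keeps at least one pod of the service running
  selector:
    matchLabels:
      app: alerts-service
      tenant: elpedison
  strategy:
    type: RollingUpdate        # updates replace pods gradually, so the service stays up
  template:
    metadata:
      labels:
        app: alerts-service
        tenant: elpedison
    spec:
      containers:
        - name: alerts-service
          image: registry.example.com/alerts-service:1.0.0
```

Because each service and each tenant gets its own Deployment, taking one of them down for a patch leaves all the others untouched.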
A microservices architecture has helped us make significant improvements to uptime, which is a critical consideration for our clients. For example, if we need to take the user interface down to patch it, our real-time alerts still run, and we have an uninterrupted data feed from our distributed grid of data loggers. And since everything is in different pods, a single outage does not impact the service as a whole.
The microservices approach does add some complexity because you have to manage many dependencies among the services. This is one of the reasons we wanted Kubernetes: we can select the right configuration once and rely on Kubernetes to make sure that each service points to the correct containers. We’re continuing to optimize in this area; we plan to start using Helm charts to better manage the resource definitions in our configurations.
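Helm charts would let us template those resource definitions once and fill in per-tenant values. A minimal, hypothetical values file for such a chart might look like this (the keys and names are illustrative assumptions, not our actual chart):

```yaml
# Hypothetical values-elpedison.yaml for a per-tenant Helm release.
tenant: elpedison
replicaCount: 1
image:
  repository: registry.example.com/alerts-service
  tag: "1.0.0"
```

Installing one release per tenant (for example, `helm install elpedison ./our-chart -f values-elpedison.yaml`, with all names hypothetical) would then stamp out the same set of resource definitions with tenant-specific settings instead of maintaining duplicated manifests by hand.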
One of the nice surprises we got from the IBM Cloud Kubernetes Service was the built-in Vulnerability Advisor. This has helped us improve our images, pointing us in the right direction for things such as the need to update a particular library or perform a security improvement.
Saving time and effort
With IBM Cloud Kubernetes Service, we benefit from significantly faster deployment. This was a pain point in the previous architecture, where we had to wait much longer for builds and configuration updates, increasing our operational overhead. We’re now saving around 20 percent in operational time, and we were able to deploy our infrastructure within a single day, which was a really big benefit. With the old monolithic set-up, that would probably have taken a week.
Saving time in deployment is very helpful because we can focus our efforts on other matters. Combining the IBM Cloud infrastructure with a containerized approach also improves performance: queries run around 30 percent faster than in the previous version of the application.
Infrastructure scalability is the other big advantage: we can create new worker nodes for Kubernetes at the click of a button. We just apply the relevant Kubernetes configuration files with different replica counts, the system distributes the pods to the new workers, and they take on workload as required. Thanks to our new architecture, we are confident that we can grow as rapidly as our clients need us to and extend our future capabilities by making use of the Big Data Analytics and Cognitive Services available on the IBM Cloud.
In the future, we’ll be expanding our use of IBM Cloud. We currently store our backups on another vendor’s public cloud, but we plan to rewrite our scripts to use IBM Cloud Object Storage. Having the backups on the same cloud will obviously enable faster recovery, and we’ll ensure resilience by copying the data to other IBM data centers. We already use IBM Cloud file storage for persistent data from specific containers that require a file system.
As needed, we will also expand our use of IBM Cloud Virtual Servers to run a Couchbase database cluster and to support our own Elasticsearch service. We have developed our own query language, resembling the Search Processing Language (SPL), that sits on top of Elasticsearch and provides an intuitive way of diving into the details of the data. Users can access standard reports from drop-down menus, but for deeper analysis, our language provides a user-friendly way to query Elasticsearch, chaining downstream queries without limit. For example, you can search, aggregate the results, make further calculations, join the results with another query, and visualize the final results. None of this requires a programmer; it can all be done from the command prompt by an energy engineer or another technical professional.
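The idea behind such a pipeline language can be sketched as a small translator that turns pipe-separated stages into an Elasticsearch request body. This is only a toy illustration under assumed syntax: the stage names, field names, and the `to_es_body` function are hypothetical, not our actual implementation.

```python
# Toy sketch of an SPL-like pipeline compiled to an Elasticsearch DSL body.
# Syntax and names are hypothetical; the real DTWISE language is proprietary.

def to_es_body(pipeline: str) -> dict:
    """Translate 'search k=v ... | stats avg(field)' into an ES request body."""
    stages = [s.strip() for s in pipeline.split("|")]
    body: dict = {"query": {"match_all": {}}}
    for stage in stages:
        op, _, rest = stage.partition(" ")
        if op == "search":
            # each k=v pair becomes an exact-match term filter
            terms = [{"term": {k: v}} for k, v in
                     (pair.split("=", 1) for pair in rest.split())]
            body["query"] = {"bool": {"must": terms}}
        elif op == "stats" and rest.startswith("avg(") and rest.endswith(")"):
            field = rest[4:-1]
            body["size"] = 0  # aggregation only, no raw hits
            body["aggs"] = {f"avg_{field}": {"avg": {"field": field}}}
        else:
            raise ValueError(f"unsupported stage: {stage!r}")
    return body

# A user-facing query: filter readings, then average the consumption field.
body = to_es_body("search site=athens meter=m42 | stats avg(kwh)")
```

Each later stage consumes the output of the earlier one, which is what makes the downstream chaining in the prose description possible.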