May 3, 2017 | Written by: Leho Nigul
Categorized: eCommerce & Merchandising
We’re in the process of rolling out an exciting transformation of our eCommerce portfolio. This is the first in a series of posts that will formally introduce these advancements and review the architecture of the services from a technical point of view. In this piece, we outline the vision and the key principles guiding this evolution. In subsequent posts, we’ll dive deeper into the details.
The purpose of this article is to provide an “under the hood” peek into the architectural transformation happening within IBM eCommerce solutions.
First, our objective. We asked ourselves this question at the outset of the project: how can we leverage the many advantages of the latest architectural patterns, while still providing users with the richness and reliability of the WebSphere Commerce capabilities that they know and love?
Here are the underlying architectural principles that have been our guiding star as we set out to enable robust, eCommerce SaaS capabilities:
- Rapid scalability of components and services
- Everything is driven by APIs
- Ability to roll out features frequently, with no adverse effects on client customizations
- Self-service enablement for developers to tweak and customize the solution
There were two ways to approach this challenge.
One was to rewrite every single function and feature of the product from scratch, with new code offering functional capabilities similar to WebSphere Commerce. This approach wasn’t feasible. First, consider the effort it would take: even after years of trying, many competitors in the eCommerce arena have failed to develop products with capabilities similar to those of WebSphere Commerce. Second, a release built on lots of brand-new code would put at risk the customers who entrust their business to our battle-proven IBM eCommerce product.
The second approach was to use the “Strangler pattern,” in which the existing system is gradually replaced with smaller, independent components. There is clear evidence that this approach works well in the eCommerce domain. (See example.) The obvious benefits of this approach are mitigating the “big bang” risk to code and functional robustness, and ensuring that customers continue to enjoy rich commerce functionality.
So, the paradigm we defined for our componentization and microservice enablement journey in eCommerce is “Boulder to Rocks to Pebbles.” We start with a monolith, WebSphere Commerce, which we break into rocks, which in turn we break into pebbles.
While we are big fans of the microservices architecture paradigm, we also wanted to ensure that we provide the right level of granularity for our services.
We started with the traditional WebSphere Commerce as the base, and evolved its architecture with the intention of incrementally dissecting it into smaller components and microservices.
As you can see from the diagrams above, we separated our solution into multiple components and services. For example, the Store component can scale independently of any other component in the system, and communicates with the rest of the services exclusively through well-defined REST APIs. This, of course, also means that customers can create fully custom storefronts and use our platform in a headless (API-driven) fashion. We continue to provide a Starter Store (something many of our competitors are unable to do), which demonstrates how the available APIs can be combined to deliver a robust end-to-end shopping experience, as a starting point for store development.
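To make the headless, API-driven usage concrete, here is a minimal sketch of a storefront client composing a REST call with the JDK's built-in HTTP client. The host name and resource path are hypothetical placeholders, not the product's actual API surface.

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Sketch of a headless storefront call: a custom storefront talks to the
// commerce platform exclusively through REST. The base URL and path shape
// here are illustrative assumptions.
public class StorefrontClient {

    // Builds a GET request for a product lookup against a REST endpoint.
    public static HttpRequest productRequest(String baseUrl, String storeId, String productId) {
        URI uri = URI.create(baseUrl + "/store/" + storeId + "/product/" + productId);
        return HttpRequest.newBuilder(uri)
                .header("Accept", "application/json")
                .GET()
                .build();
    }
}
```

A custom storefront in any framework could issue requests of this shape, which is the essence of the headless model: the UI owns the experience, the platform owns the commerce logic behind the API.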
You can also see that, in some cases, we decided to leverage existing and proven services from our broader IBM portfolio as integral parts of our eCommerce solution. A few examples of these services are:
- IBM Watson Campaign Automation service for sending transactional emails (instead of hosting custom managed SMTP servers, or forcing customers to buy third party solutions)
- IBM Watson Content Hub services, which enhance digital asset management within product catalog management
- IBM ID service for authenticating and federating client business users, enabling out-of-the-box Single-Sign-On with other IBM services and offerings
Another thing to highlight in the diagrams above is the concept of eXternal Customizations (xC). One of the key principles of our SaaS solution is the ability to quickly and efficiently update the platform to the latest version of the code. The traditional Java inheritance-based customization model makes such update scenarios quite complicated, since customizations run in the same JVM as the platform itself and use Java class inheritance to customize out-of-the-box Java code. While this customization model is extremely powerful, and still very much relevant for customers who require extreme levels of customization, it also creates a significant “joined at the hip” effect between specific versions of the platform and the customization code.
To address this issue, we developed the xC programming model, in which client customizations are deployed to separate JVMs and communicate with the platform through well-defined REST APIs. (I will write a blog post specifically on this topic soon, but for now you can reference the existing documentation for the xC concept in the recent release.)
This approach allows us to separate customizations from the platform itself, while avoiding the performance issues that come with hosting custom code in separate data centers, as some of our competitors suggest and do. Such a REST/JSON extensibility architecture also opens future opportunities to create customization logic not only in Java, but in other languages of the developer’s choice.
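To give a feel for the xC idea, here is a hedged sketch of a customization running in its own JVM and reachable over REST, using only the JDK's built-in HTTP server. The endpoint path, payload shape, and business rule are illustrative assumptions, not the documented xC contract.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Sketch of an xC-style extension: the custom logic lives in a separate
// JVM from the platform and is invoked over REST, so the platform can be
// updated without touching this process.
public class OrderValidationExtension {

    // The custom business rule, kept as a pure function: reject orders
    // whose total exceeds a configured limit. (Hypothetical example rule.)
    public static String validate(double orderTotal, double limit) {
        return orderTotal <= limit ? "{\"approved\":true}" : "{\"approved\":false}";
    }

    // Exposes the rule at a REST endpoint; a real extension would parse the
    // order total from the request body instead of hard-coding values.
    public static HttpServer startServer(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/xc/order/validate", exchange -> {
            byte[] body = validate(100.0, 500.0).getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        return server;
    }
}
```

Because the platform calls this process only through the REST contract, upgrading the platform does not require recompiling or redeploying the customization, which is the point of breaking the "joined at the hip" effect.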
Another key architectural decision to highlight is our use of containers to enable rapid scalability and environment consistency. One of the main driving forces behind our adoption of container technology was to ensure absolute consistency of software deployment across all environments, including developer environments. Containers also allow extremely fast scale-up and scale-down. Previously, horizontal scaling required steps such as federating nodes into a WebSphere Application Server (WAS) Network Deployment cluster, updating the Web Server plugins, and so on. In the new architecture, all we need to do to scale a service horizontally is start more containers.
Another benefit that flows from using containers is the overall simplification of our delivery pipeline while maintaining robustness. Since the outcome of our build system is a container, it doesn’t matter if we are creating a new environment or updating an already existing environment with a fix; for both scenarios, we just deploy a new version of the container(s) onto the platform.
As we moved into the containerization world, we had to solve several interesting technical challenges. Again, I will cover these in more detail in the upcoming blog posts, but here is just a teaser:
- How to efficiently pass environment-specific parameters to containers (since we want to keep containers as immutable as possible)
  - We use a Vault/Consul pair as a secure, centralized configuration management tool. Docker containers fetch their environment-specific properties from Vault on start-up.
- How to effectively enable secure communications and use customer-specific SSL certificates
  - Across the entire solution, data in motion and data at rest are encrypted (DIME/DARE), with secrets safely managed by Vault.
- How to deal with log files, since containers can be very short-lived
  - We use Graylog as an effective log management solution, into which log files from all containers are collected.
- … and many other interesting learnings
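The start-up fetch from Vault mentioned in the first teaser can be sketched as follows. The request shape (the `/v1/` prefix and the `X-Vault-Token` header) follows Vault's HTTP API; the address, token, and secret path are placeholder assumptions.

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Sketch of the container start-up step: fetch environment-specific
// properties from Vault before the service boots, so the container image
// itself can stay immutable across environments.
public class VaultConfigLoader {

    // Builds an authenticated GET against Vault's HTTP API for the given
    // secret path (e.g. "secret/data/commerce/prod" for the KV v2 engine).
    public static HttpRequest secretRequest(String vaultAddr, String token, String secretPath) {
        return HttpRequest.newBuilder(URI.create(vaultAddr + "/v1/" + secretPath))
                .header("X-Vault-Token", token)
                .GET()
                .build();
    }
}
```

On start-up, the container's entrypoint would send this request, parse the JSON response, and export the values as the process's configuration, leaving nothing environment-specific baked into the image.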
Finally, as with any solution consisting of multiple independent services, we had to ensure that we enable such capabilities as:
- A robust circuit breaker mechanism to prevent cascading failures (we use the Hystrix framework from Netflix’s stack)
- A high-performance publish/subscribe solution for event sharing among services (we use Kafka)
- A battle-tested Docker orchestration framework (we went with Mesos/Marathon)
- A reliable monitoring framework (we use New Relic and log-based monitoring hooked up to PagerDuty and Slack)
- And many other services required for building a robust SaaS Commerce platform
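Hystrix implements the circuit breaker pattern named in the first bullet. As a rough illustration of the idea (not Hystrix's actual API; the threshold and names are illustrative), here is a minimal, dependency-free sketch: after a configured number of consecutive failures the circuit opens, and subsequent calls short-circuit to a fallback instead of hammering the failing service.

```java
import java.util.function.Supplier;

// Minimal circuit breaker sketch. A production breaker (such as Hystrix)
// also adds timeouts, half-open probing, and metrics; this shows only the
// core trip-to-fallback behavior.
public class CircuitBreaker {
    private final int failureThreshold;
    private int consecutiveFailures = 0;
    private boolean open = false;

    public CircuitBreaker(int failureThreshold) {
        this.failureThreshold = failureThreshold;
    }

    // Runs the primary call if the circuit is closed; on failure (or while
    // open) returns the fallback so failures do not cascade downstream.
    public <T> T call(Supplier<T> primary, Supplier<T> fallback) {
        if (open) {
            return fallback.get();
        }
        try {
            T result = primary.get();
            consecutiveFailures = 0; // success resets the failure count
            return result;
        } catch (RuntimeException e) {
            if (++consecutiveFailures >= failureThreshold) {
                open = true; // trip the circuit; a real breaker would later half-open
            }
            return fallback.get();
        }
    }

    public boolean isOpen() { return open; }
}
```

The fallback is the key to preventing cascades: callers always get a fast, degraded answer instead of queueing up on a dying dependency.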
Looking toward the future…
Our architectural transformation journey is not quite finished.
As we go forward, we will introduce more microservices into our solution. They won’t be “nanoservices,” as in some competing offerings; we are thoughtfully seeking the right balance between service granularity and performance constraints.
We will continue to tune and enhance our REST APIs to provide implementers with the optimal API set and granularity for implementing their customizations.
We will continue to expand the coverage of our xC customization model.
And, of course, we will continue to utilize best-of-breed services from within the broader IBM portfolio for seamless consumption by our customers.