Caching as a decoupling technique
When you use caching as your decoupling technique, the consuming application (for example, a web store) gets information such as item attributes, inventory balance, or item availability from a local data cache rather than by synchronously querying Sterling™ Order Management System Software. The decoupling mechanism is the local cache itself, which sits between the consuming application and the order management system.
The local information cache can be updated by a variety of algorithms that offer different degrees of sophistication, performance, and accuracy. This technique not only decouples two areas of the solution, but also provides significant performance, response-time, and scalability advantages that are especially useful in end-user or website scenarios.
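The sketch that follows shows one of the simpler update policies: a read-through cache with a time-to-live, where the loader function (for example, a synchronous call into the order management system) runs only when an entry is missing or stale. The class and method names are illustrative and are not part of any Sterling Order Management System Software API.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

/**
 * Minimal read-through cache with a time-to-live refresh policy.
 * The loader (for example, a synchronous availability lookup) is
 * invoked only when the cached entry is missing or stale.
 */
public class TtlCache<K, V> {

    private record Entry<T>(T value, Instant loadedAt) { }

    private final Map<K, Entry<V>> entries = new ConcurrentHashMap<>();
    private final Duration ttl;
    private final Function<K, V> loader;

    public TtlCache(Duration ttl, Function<K, V> loader) {
        this.ttl = ttl;
        this.loader = loader;
    }

    public V get(K key) {
        Entry<V> entry = entries.get(key);
        if (entry == null || entry.loadedAt().plus(ttl).isBefore(Instant.now())) {
            // Cache miss or stale entry: refresh from the backing system.
            V value = loader.apply(key);
            entries.put(key, new Entry<>(value, Instant.now()));
            return value;
        }
        return entry.value();
    }
}
```

Richer policies, such as event-driven pushes from the order management system, trade this simplicity for better accuracy; the next example illustrates that style.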
For example, one area where this technique is typically recommended is caching ATP (Available to Promise) data on the web storefront. In customer environments where shopping cart abandonment rates are very high, for example 100 item lookups for every item ordered, it is better to have Sterling Order Management System Software push item availability to the web storefront by using the Sterling Order Management System Software Real-time Inventory Monitor. With this approach, most inventory lookups that are part of the customer's browsing and ordering experience can be served from the web storefront without any synchronous calls to Sterling Order Management System Software. Depending on business requirements, the web storefront can sell an item whenever the cached inventory level is sufficiently high, and revert to a synchronous inventory availability check when the level falls below a certain threshold. More importantly, Sterling Order Management System Software can be down without affecting the web storefront.
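As a concrete illustration, the following sketch shows how a storefront-side availability check might combine pushed cache updates with a threshold-based fallback. The names (AvailabilityService, SynchronousOmsClient, safetyThreshold, and so on) are hypothetical stand-ins for the storefront's integration layer; they are not Sterling Order Management System Software APIs.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Illustrative storefront availability check, assuming that a monitor
 * process pushes availability figures into a local cache and that a
 * synchronous availability call is available as a fallback.
 */
public class AvailabilityService {

    private final Map<String, Integer> cachedAvailability = new ConcurrentHashMap<>();
    private final int safetyThreshold;
    private final SynchronousOmsClient omsClient;

    public AvailabilityService(int safetyThreshold, SynchronousOmsClient omsClient) {
        this.safetyThreshold = safetyThreshold;
        this.omsClient = omsClient;
    }

    /** Called whenever the monitor pushes an availability update. */
    public void onAvailabilityUpdate(String itemId, int availableQuantity) {
        cachedAvailability.put(itemId, availableQuantity);
    }

    public boolean isSellable(String itemId) {
        int cached = cachedAvailability.getOrDefault(itemId, 0);
        if (cached > safetyThreshold) {
            // Inventory is comfortably above the threshold:
            // sell from the cache with no synchronous call.
            return true;
        }
        try {
            // Low or unknown inventory: confirm with a synchronous check.
            return omsClient.checkAvailability(itemId) > 0;
        } catch (Exception orderManagementUnavailable) {
            // The order management system is down: fall back to the last
            // pushed figure so the storefront keeps functioning.
            return cached > 0;
        }
    }

    /** Hypothetical interface for the synchronous availability call. */
    public interface SynchronousOmsClient {
        int checkAvailability(String itemId);
    }
}
```

The key design choice is that the synchronous call is only an exception path: above the threshold no call is made, and if the call fails the storefront falls back to the last pushed figure, which is what allows it to keep selling while the order management system is down.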
While this cookie-cutter approach to inventory caching might not work for all scenarios, similar techniques can be applied to almost all critical interfaces to provide simple but safe algorithms that counter planned or unplanned downtime without affecting end users or disabling critical functional areas altogether.