Authored by: Raghuveer P Nagar, Senior Architect for IBM Order Management
By going through this article, readers can expect to learn what IBM UrbanCode Deploy (UCD) is, why one needs it for IBM Order Management deployments, how one can access it, how it is useful for day-to-day IBM Order Management deployments, and a few of our learnings as an implementation team.
IBM UrbanCode Deploy
IBM UrbanCode Deploy is a tool used for automating application deployments (in both on-premise and on-cloud scenarios). It simplifies and standardizes the application deployment process across environments, throughout the development and maintenance of the application. Additionally, it offers clear visibility (into what is running where, differences across environments, who changed what, etc.) and rollbacks of deployed applications.
IBM Order Management
IBM currently offers its multi-channel commerce Order Management System (OMS) as both on-premise and on-cloud software. The on-premise software is commonly known as Sterling Order Management. IBM Order Management is a software-as-a-service (SaaS) offering, that is, a cloud-based OMS service (on the IBM SoftLayer infrastructure). For managing IBM Order Management deployments, UCD is used as the deployment automation and self-servicing tool.
On-boarding and Access
The IBM Order Management environment cannot be accessed directly. To access the environment, that is, to access UCD and manage the OMS application, you first need to connect to a jump host. The jump host connection provides the ability to pass through firewalls that otherwise prohibit direct internet access. Refer here for the detailed process of connecting to the IBM Order Management environment using the jump host.
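As a hedged illustration (the host names, user, and key file below are hypothetical; the actual values come from your on-boarding details), a jump host connection with OpenSSH typically looks like this:
# connect to the target environment through the jump host (OpenSSH 7.3+)
ssh -i ~/.ssh/oms_key.pem -J svcuser@jumphost.example.com svcuser@oms-env.example.com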
Once you are connected to the IBM Order Management environment, you can access UCD. In addition to deploying the application, UCD is used throughout the implementation process, such as for deploying changes and fixes into the environment. Refer to this link for more information on UCD access.
After accessing and logging in to UCD, you need to download the developer toolkit. The developer toolkit is a replica of the IBM Order Management environment, which enables you to make the required customizations in your development environment. Go here for the step-by-step process of downloading the developer toolkit from UCD.
Key OMS Deployment Features in UCD
UCD provides insights into IBM Order Management deployment audits, status and history. It helps in deploying the IBM Order Management application quickly and also provides a wide variety of process management options for administering the deployment. The following are a few of the key OMS administration processes which can be executed through UCD:
- Deploying custom code, like deploying custom jar files (say, the jar files having custom User Exit implementation and classes for Web Services)
- Exporting Configurations/CDT, which includes exporting CDT from one environment (say, the master configuration environment) and comparing it with another environment (say, the development environment)
- Importing Configurations, that is, applying CDT exported from another environment
- Starting and stopping application server
- Managing Agents, including starting, stopping and triggering agents
- Managing Queues, such as configuring a JMS queue and clearing contents of a Queue
- Managing OMS Application Server Logs, such as exporting logs and configuration-driven archiving of the logs
- Installing 3rd Party Certificates, for various integrations and implementation features
- Importing Customer Overrides (in the customer_overrides.properties file) to the database
The processes mentioned above remain the same for OMS-related product offerings, such as IBM Call Center for Commerce.
While working on UCD for IBM Order Management, we have experienced many common application deployment and management issues. The following is a list of a few of those, with tips based on our experience:
- Automatic Process Monitoring - UCD has an email configuration to report the status of a particular process through email. For example, it can send emails on agent startup failures and build failures. One of the first things to do is ensure that this configuration is complete.
- Application/Code Build Failures – While building custom code in the development environment, you may get errors like “Not able to access the file” and “Not enough memory” (in the build logs), say when the build involves compilation of the custom Java classes. It is important to remember that resolving these issues needs access to the installation directory, which rests with the cloud support team. So, one should raise a cloud support ticket on facing such issues.
- Infrastructure Related Errors – While running a process, an infrastructure issue may be reported, say “Database connection parameters incorrect". This happens during a time window when the password has been reset on the database server but an implementation script is run before the same password is updated in the deployed application. One should raise a cloud support ticket in such scenarios as well.
- UCD Options - For administrators who are new to UCD, it is a good idea to spend some time getting familiar with the available OMS deployment options by exploring the various user interface (UI) options of UCD. This may sound unusual, but think of the productivity loss you may incur by reaching out to cloud support or going through the documentation just to learn that the option is already available on the UCD UI!
This is what we have today on the IBM UCD tool for OMS deployments on cloud. We will share more experiential information in our upcoming blogs in the OMS-on-Cloud stream. Stay tuned for our next article in the series, which will be on Developer Toolkit.
Referring to our second article on Developer Toolkits, this is our third article from the OMS-on-Cloud stream in the WCE Practitioners Lounge.
By going through the article, readers can expect to know how to install and deploy the customizations along with configurations for IBM Order Management on cloud.
Building and Deploying Custom jar Files
After the developers complete the coding, they check the code in to the project code repository. Project-specific automated scripts are used to build the custom jar files using the checked-in code. To deploy the built jar file, as the first step, the file should be placed in the required drop box, as suggested by the devops team. For accessing the drop box using SFTP tools, the devops team shares the URL and port along with the key file.
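For illustration only (the host, port, and key file below are hypothetical; use the values shared by the devops team), uploading the built jar with a standard SFTP client could look like this:
# -P sets the SFTP port, -i points to the key file shared by the devops team
sftp -P 2222 -i ~/.ssh/dropbox_key.pem deployuser@dropbox.example.com
# at the sftp prompt, upload the custom jar to the drop box
put /opt/custom/Custom17.1.jar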
To deploy the custom jar file:
- Log in to the IBM UrbanCode Deploy (UCD) tool
- As shown below, go to Components and click the component that has been created for your project to deploy custom jars
- Say you have built the /opt/custom/Custom17.1.jar file. Go to the Versions tab and check whether Custom17.1.jar has been updated in the list of jars.
- If not, as shown below, click ‘Import New Versions’ and update the list with the latest version that was built.
- Go to the Applications tab and click your order management system (OMS) application environment (where you want to deploy this jar). Click the play button. This opens the Run Process popup of the UCD tool. As shown below, select Build Customized Runtime as the process to run.
- Once the process is submitted, you should see the success messages shown below.
- After building the customized runtime, the deployed OMS application needs to be refreshed. To do that, open the Run Process popup.
- As shown below, select the 'Update OMS Application' process from the Process dropdown, click 'Choose Versions' to choose the version which has been built using the Build Customized Runtime process, and then click Submit.
Exporting CDT and Viewing Differences
To deploy OMS configurations, you need to export the CDT first, from the master configuration (MC) environment. To export the CDT, open the Run Process popup and run the 'Export CDT XML' process. The CDT export will fail if ydkprefs.xml is not defined correctly, so, before running the process, make sure that ydkprefs.xml is up-to-date. If required, you can check the details of the ‘Export CDT XML’ process using the ‘Download All logs’ option (the downloaded zip file will have stdout.txt, which will contain the failure details if the CDT export fails), or the download log options on each executed step.
After exporting the CDT successfully, the next step is to compare the exported CDT with its previous versions. The comparison ensures that only intended changes are included in the latest CDT. To do that,
- Open the Run Process popup from your development OMS application.
- As shown below, click the Components tab, select your OMS config component, and click the Versions tab. Select the latest CDT version and then click the Compare link.
- This will open the following popup, where you select the version with which the comparison should be made:
- Once you click the Submit button on Compare Versions, you should see the differences, as shown below.
- Click Compare to see the details of the differences, as shown below.
After exporting the CDT XMLs with the correct and complete changes, the next step is to import them into your development environment. For that, open the Run Process popup from your development OMS application. As shown below, select 'Import CDT XML' from the Process dropdown:
Click Choose Versions. This will open a popup, as shown below. The popup lists the available CDT versions. Current Environment Inventory shows the last CDT applied on the environment. Click the Add button for the MC version you want to deploy.
Restart the application server after the CDT process. For more information on restarting the application server, you can refer to this link.
Starting and Stopping Agent and Integration servers
IBM UCD provides processes for starting and stopping Agent and integration servers. For more information on the same, please refer to the information here.
After starting a server, one can verify whether the server has started successfully. This can be done by checking the logs (through the export logs option in IBM UCD) or by checking the System Management Console, which is accessible from the OMS application console.
Exporting and Archiving logs
Using UCD, you can export Application, Agent and Integration server logs. Please refer to this page for more information.
The logs collected might span a few days, so the exported log file may be large. If required, you can request the devops team to reduce the period of time for which the logs are collected.
UCD also provides an option to archive the logs on the Run Process popup, as shown below.
This is what we have today on installing and deploying customizations and configurations for OMS on cloud. We will share more experiential information in our upcoming blogs in the OMS-on-Cloud stream. Stay tuned for our next article in the series, which will be on Integration with External Systems!
Referring to our first article on IBM UrbanCode Deploy, this is our second article from the OMS-on-Cloud stream in the WCE Practitioners Lounge.
By going through the article, readers can expect to know what the IBM Order Management Developer Toolkits are, how one can access them, and a few of our learnings on the new Integrated Development Toolkit.
While implementing IBM Order Management, developers make both code and configuration changes in the development environment before promoting the changes to higher environments like QA and UAT. To help developers, a developer toolkit is provided. The toolkit can be downloaded from the UrbanCode Deploy (UCD) dashboard and installed on a local machine or VM to provide an exclusive environment for development.
Starting with IBM Order Management version 18.2, there are two types of developer toolkits:
- Developer Toolkit
- Integrated Development Toolkit
Before 18.2, only IBM OMS Developer Toolkit was available.
The OMS Developer Toolkit has IBM DB2, IBM WebSphere Application Server and IBM MQ as prerequisites. After following the toolkit installation steps, an EAR file needs to be built and deployed before the environment can be accessed for development.
The OMS Integrated Development Toolkit is Docker based, with Docker and Docker Compose as the prerequisites. There is no need to install a database, application server or messaging server; all of these come bundled with the toolkit. Once the Docker-based installation completes, the development environment is available. There is no need to create and deploy the EAR either, as that is part of the steps executed by Docker!
Downloading Developer Toolkits
Perform the following steps to download the IBM Order Management Developer Toolkits from UCD. For more information on UCD access, please refer to the On-boarding and Access section here.
- Select OMS-Application from the list of available applications in the Application tab of UCD
- Go to the Environment tab, and click the play button against the OMS DEV environment
- On the popup screen, as shown in the screenshots below, select 'Extract IBM OMS Developer Toolkit' from the Process dropdown for the Developer Toolkit, or 'Extract IBM OMS Integrated Development Toolkit' for the Integrated Development Toolkit
- If you want to include existing customizations, check the "Include Customization" checkbox
When you click Submit, execution of the ‘Extract IBM OMS Developer Toolkit’ or ‘Extract IBM OMS Integrated Development Toolkit’ process starts. You will be able to see the progress in UCD, as shown below.
After the process completes, the toolkit zip file will be available at the FTP location for download.
Installing Developer Toolkit
This is the non-integrated developer toolkit. After meeting the prerequisites mentioned above and unzipping the downloaded toolkit (devtoolkit.zip), the following steps need to be executed:
- Install JDK, and set environment variables accordingly
- Rename devtoolkit_setup.properties.sample to devtoolkit_setup.properties in the extracted devtoolkit folder, and set mandatory properties
- Run devtoolkit_setup script to install the toolkit
- Build EAR file and deploy it on application server
You can find detailed information on the steps above at the Set up the IBM Order Management developer toolkit environment section.
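As a rough sketch, the steps above map to commands like the following (the .sh extension of the setup script is an assumption; refer to the toolkit documentation for the exact script name on your platform):
# unzip the downloaded toolkit and switch to the extracted folder
unzip devtoolkit.zip && cd devtoolkit
# rename the sample properties file, then edit the mandatory properties in it
mv devtoolkit_setup.properties.sample devtoolkit_setup.properties
# run the setup script to install the toolkit
./devtoolkit_setup.sh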
Installing Integrated Development Toolkit
This is the Docker-based toolkit. The version 18.2 Integrated Development Toolkit is supported on Linux (CentOS, RHEL). The following commands can be used on a CentOS 7 VM (recommended) to install the prerequisites.
i. yum install docker
ii. systemctl start docker
iii. systemctl status docker
iv. systemctl enable docker
Docker Compose Installation
i. sudo curl -L https://github.com/docker/compose/releases/download/1.21.2/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
ii. sudo chmod +x /usr/local/bin/docker-compose
If the unzip utility is not available, you can install it by running the “yum install unzip” command.
After the prerequisites are installed, the following preparation steps need to be executed:
- Upload the devtoolkit_docker.zip file to the CentOS VM.
- Run the following command to extract the toolkit setup files: unzip devtoolkit_docker.zip && cd compose && chmod +x *.sh
- In the compose folder, the om-compose.properties file has the default values of the properties used during installation; change the properties based on your needs.
The following options, available in the executable file (om-compose.sh), will be useful for the IBM Order Management Integrated Development Toolkit installation; a short usage sketch follows the list.
- setup <optional:cust-jar>: Setup a fresh docker based integrated OM environment
- setup-upg <optional:cust-jar>: Upgrade existing environment to new images
- update-extn <cust_jar>: Update OM environment with the latest customization jar
- extract-rt <extract_dir>: Extract a copy of runtime directory on your host machine
- start|stop|restart: Start/stop/restart your docker environment
- wipe-clean: Wipes clean all your containers, including volume data
- license: Shows the various license information
- update-mq-bindings <queue_name>: Update your MQ bindings with the queue
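For example, a typical sequence after changing custom code might look like the following (a sketch only; the jar path is hypothetical):
# apply the latest customization jar to the environment
./om-compose.sh update-extn /tmp/Custom17.1.jar
# restart the environment if required
./om-compose.sh restart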
The following steps need to be executed for installing the Integrated Development Toolkit:
- Create the folder mentioned in the MQ_JNDI_DIR property. The default value is /var/oms/jndi; if you are not changing the default, create the /var/oms/jndi folder.
- Go to the folder where the toolkit zip file was extracted and then to the compose folder
- Run the following command to start the installation: ./om-compose.sh setup
On installation, four Docker containers are created: OMS runtime, DB2, Liberty and MQ. There is no need to create and deploy smcfs.ear (the installation steps take care of this). You can check the logs of the Liberty container to verify the deployment status of smcfs.ear, as sketched below. Once deployment is complete, you can access the OMS environment at the following URL: http://<IP>:<Port>/smcfs/console/login.jsp
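A quick, hedged way to verify this from the VM (container names vary by setup; use docker ps to find the actual name of the Liberty container):
# list the four containers created by the setup
docker ps
# follow the Liberty container logs and watch for the smcfs.ear deployment messages
docker logs -f <liberty-container-name>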
The 'Set up a new docker-based developer toolkit environment' section here has detailed information on the installation steps mentioned above.
Post Developer Toolkit Installation
After installing the developer toolkit, the development environment is available. However, development should be performed on top of the latest customizations and configurations. For this, you will have to generate CDT XMLs from UCD and then import them into your development environment. Also, export the existing customization from the IBM Order Management environment to the development environment if you downloaded the IBM OMS Integrated Development Toolkit, or if you did not check ‘Include Customization’ while downloading the IBM OMS Developer Toolkit. Once the development environment is in sync with the existing customization and configuration, you are all set for development!
This is what we have today on the developer toolkit for OMS on cloud. We will share more experiential information in our upcoming blogs in the OMS-on-Cloud stream. Stay tuned for our next article in the series, which will be on Deploying Customizations!
Shoppers have come to love the search bar on shopping sites and expect it to be their first friend, taking them close to the product they intend to buy on the site. This is akin to the quintessential ‘Shopping Assistants’ in marts, supermarkets, and our big or small fashion outlets. The shop may be earmarked and categorized; however, if we are not familiar with a particular store, we tend to be awed by the array of aisles and take an instant liking to the store assistant who can either guide us to the aisle or corner of the store with our desired product or, even better, escort us to the product we are seeking. You would agree that some of these assistants have influenced how many bags you carried back home from your shopping expedition.
The current state of eCommerce has come a long way, and with most search engines offering a set of standard technical capabilities, one would expect our shopping search experiences to be almost similar. However, we are actually far from this experience. While I was looking for studies on the subject, I stumbled upon a 2014 report on smashingmagazine.com, where they benchmarked the search experience of the 50 top-grossing US e-commerce websites, rating each website across a set of 60 search usability parameters, which you would agree is rather fine grained. Of course, a study from 2014 would require a revisit in 2016, at the pace one would expect our eCommerce sites to launch capabilities in an agile way to stay competitive.
What does this mean for you as an eCommerce Implementation Project Manager?
I have observed that even though search is an important functional feature on eCommerce sites, search relevance testing continues to be missed out of the plan. Traditional functional testing includes a search and browse feature test, where the test team is responsible for a set of use cases checking that the search functionality works as expected. However, this approach largely misses out on the desired shopper experience for the search results and/or the desired search results from the business point of view. The User Acceptance Testing (UAT) phase would typically have the business users look at search relevancy; however, it remains more of an organic exercise under the overall umbrella of UAT, not getting the due attention, time, effort and, most importantly, understanding of the approach, inputs and outcomes.
What does this mean for you as the eCommerce Product Leadership for your business?
This is definitely a continual exercise for you, just as it is to deliver on the business requirements via your eCommerce and mCommerce channels. You will be doing analysis of the ‘search hits’ and ‘search misses’, hopefully on a weekly basis, and working towards tuning your results based on the data your shoppers are feeding you. The strong point is that, because you have invested in WebSphere Commerce, you will find that the Business Tools allow you a lot of flexibility and ease in achieving your results.
Let us explore an approach towards search relevance, look at a proposal for the test strategy and guidelines for your WebSphere Commerce Search powered commerce site.
Outlining the search relevance testing proposal and approach
The project management of eCommerce sites must include a sprint (or more, depending upon the catalog size) which focuses on search relevance. I have had discussions with several test managers on the topic and found them wanting guidance around ‘search relevance testing’. This prompted me to create an approach document that can be used as a starting guideline for your eCommerce site.
- Your list of search terms: Work with the business to identify a list of ‘n’ search terms which you would like to focus on in your initial iterations, where n could be 50, 100, 200 and so on, based on the width and depth of the catalog and product items.
- Variety in search terms: The search terms should include a variety, where you have single words, multi-word terms, words with units like weight (think grocery), phrases with attributes like color (think garments), variance in how shoppers search for certain products, brands which your catalog does not support, misspelled products …
- Get into the shopper’s shoes, sandals: Leverage analytics data from different sources to identify the changing trends and shopper behavior. Go out there to your favorite, and not so favorite, retailers with similar catalogs, and learn what you like and what you wish they did better.
- Product title/description recommendations: The quality of the search results highly depends upon the way the catalog data is created and maintained. The natural search results from the search engine are based on the query relevance as defined in the configuration. One of the important recommendations that would also come out of the relevance activity is on enriching the product descriptions and other searchable attributes.
- Tester training: Your army of testers may not be trained to think on the topic of search relevance. Plan for a training and get them to think about and experience shopping around the domain. This may be simpler for certain industries like grocery and fashion; however, it may be slightly trickier in case your site is selling spare parts or heavy industrial tools.
- Search Developers: The search developer needs to be part of the iteration to support the relevance tuning. This will require looking at the relevancy scores and providing analysis of the results. The developer will also be responsible for making any changes which come through search configuration changes.
- Inputs to the search relevance activity: At the start of the search activity, you will have the list of search terms, expected results, synonyms, and search replacement terms.
- Outputs of the search relevance activity: At the end of the search relevance activity, you will have the list of search terms, the actual result set, whether it meets the expected results and, if not, what the delta is. You will also have a more expanded/modified list of synonyms, search replacement terms and search rules.
- Iterative: This exercise should be iterative, based on your development life cycle. If the catalog upload, products and brands are staggered across different releases, plan the search relevance accordingly.
- Input to the performance cycle: The outcome from this exercise should also be fed into the performance cycle, as the search performance should factor in the recommendations which will be applied in production.
The search relevance is a very subjective criterion which varies by industry, business, catalog managers, merchandizers, the current promotions, and most of all your product catalog.
The search relevance activity needs to get under the skin of the shopper to be able to identify the patterns and results. The business needs to be engaged very closely in the process.
I hope that the approach outlined in this article helps you work towards the planning exercise and also gives you a starting point. Do share your experience on search relevance, as I look forward to the different ways in which we are out to achieve the business outcome from our commerce sites – that is, converting searches into baskets and orders.
I am attaching a basic version of the search template. If you think this is useful, do let me know so that I can share other versions of the same.
Search Relevancy Template - 2016-FEB-SearchRelevance.xlsx
Previous related blog: Customizing search for shoppers and retaining them as their attention time dwindles
1 - Changing the relevancy of search index fields
2 - Tuning multiple-word search result relevancy by using minimum match and phrase slop in WCS Version 7
3 - Search Rules
A requirement that we sometimes come across is to have region-based product assortment and pricing. FMCG and grocery retailers typically have different pricing and product assortment in their physical stores, and the same needs to be replicated in the online channel. Running local promotions and offers is also quite common, for example, for a local festival or event.
There are multiple ways to achieve this requirement and the approach you choose to implement depends on the level of differentiation required among regions. Let’s explore some of these approaches.
The definition of region is kept intentionally loose here. Multiple countries could constitute a single region. Each country could form a region or one country could be made up of multiple regions.
The region to which a user belongs could be identified using many different approaches. It could either be through the user provided zip code or by using a geo-location service. This discussion is out of scope of this article. All we really care about is that before a user starts browsing the catalog, her region has been identified and saved somewhere, be it in a cookie or in the database.
Approach 1 - Multiple extended sites
This is probably the first approach that comes to one’s mind. The structure and concept of extended sites lends itself well to this requirement in general.
Each region can be represented using a separate e-site. Each e-site can have its own sales catalog that offers a subset of the products available in the master catalog in the catalog asset store, thus offering a different product assortment.
Each e-site could have its own price rule which could override the prices from the master catalog as needed.
The e-sites can even have their own UI differences and region specific marketing and promotions.
If the number of regions is very large then similar regions could be combined into a single e-site with the minor differences within those regions being implemented using other approaches discussed here.
Approach 2 - Multiple catalog filters in B2C store model
In the B2C store model, the store’s default contract is used to define the default entitlement of the store. All customers shop under this one contract which by default provides the same catalog and pricing to all users. There are ways to work within this default contract and offer differentiated catalog and pricing.
You can create multiple catalog filters and add them to the default contract using the CatalogFilterTC term. Each TC should have a TC level participant for a member group (region specific) which means that the TERMCOND_ID & MEMBER_ID columns in PARTICIPNT table will be populated while the TRADING_ID will be NULL. See the definition of PARTICIPNT table for details of these columns: http://www-01.ibm.com/support/knowledgecenter/SSZLC2_7.0.0/com.ibm.commerce.database.doc/database/participnt.htm?lang=en.
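As a sanity check (a sketch only, assuming the DB2 command line; the predicate may need adjusting to your data), you can verify that the TC-level participants were created as described, with TRADING_ID left NULL:
# list TC-level participants (member-group entitlements) that are not tied to a contract
db2 "SELECT TERMCOND_ID, MEMBER_ID, TRADING_ID FROM PARTICIPNT WHERE TRADING_ID IS NULL"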
The default contract cannot be edited through WebSphere Commerce Accelerator, so to add the catalog filters to it you either need to use the contract XML import command (ContractImportApprovedVersion) or write custom data load configurations using TableObjectMediator to load the data into the required tables (TERMCOND, PARTICIPNT etc).
The default contract always has one price rule associated with it. The price rule can include different branches of pricing for different member groups thereby allowing for different pricing. Alternatively, you could create a custom price rule condition that checks for the user’s region (assuming it is saved somewhere – cookie or database) and use that to decide which path to follow.
Refer to this topic in info center on how to create custom price rule conditions:
Adding a new custom condition or action for a price rule
For this approach to work, the customers need to be added to region specific member groups. Only static member groups, i.e. member groups that have users added to them explicitly, can be used as TC level participants. Users can be added or removed from member groups dynamically, but you need to write custom code to map the shoppers with the appropriate member groups. This can become a performance bottleneck, particularly if the retailer has a very large user base.
Approach 3 - Multiple buyer contracts in B2B store model
If you are using the Enterprise edition of WebSphere Commerce, you can use the B2B store model in a B2C scenario with some adaptations.
Starting from Feature Pack 8, a single Aurora based storefront for both B2B and B2C capabilities is offered. When publishing the Aurora store archive you can choose to publish it as a B2B storefront and then disable the B2B features that you don’t require, for example, requisition lists, saved orders etc. Some of these features can be disabled using the Store Management tool in Management Center, while some might require JSP changes.
The benefit of basing your B2C store on the B2B store model is that you get the B2B business user functions; for example, you have the ability to manage the buyer contracts using WebSphere Commerce Accelerator.
In Feature Pack 7 and earlier, you can use the Elite starter store as your store’s base and again adapt it for B2C, i.e. enable guest user shopping and disable the features you don’t need.
If you don’t need the B2B business user functions, you can base your store on the Aurora B2C model too. In this case, you can manage the contracts either through contract XML import command (ContractImportApprovedVersion) or write custom data load configurations using TableObjectMediator to load the data into the required tables. Data load is particularly useful when managing a large number of contracts.
Whatever store model you choose to go with, the rest of the approach described here will remain the same.
Product assortment and pricing
Each region can be mapped to a buyer contract with each contract being associated with a unique catalog filter and price rule. A catalog filter can be assigned to a contract through a TC of type CatalogFilterTC and a price rule through a TC of type PriceRuleTC.
For a given contract, only a single price rule can be in effect at one time. Therefore, the price rule you assign must generate prices for every customer and for every catalog entry available to customers under the contract.
The contracts need to be applicable to all users. This is handled through contract participation: for each regional contract, one row is created in the PARTICIPNT table with a NULL MEMBER_ID and a ‘buyer’ participant role. This allows any user to shop under the contract.
Once the customer’s region is identified, the corresponding contract can be set in the session using the ContractSetInSession URL command. Once this is done, the customer will see and be entitled to the region-specific product assortment and pricing.
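For illustration (the store host, path and IDs below are hypothetical), the call could look like this:
# set the regional contract in the shopper's session and land on the home page
curl -k "https://store.example.com/webapp/wcs/stores/servlet/ContractSetInSession?contractId=10001&URL=TopCategoriesDisplay&storeId=10101"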
For contracts and catalog filters to be used at runtime, search entitlement needs to be enabled. This is required because, from Feature Pack 7 onwards, browsing and searching were offloaded to the search server as non-transactional requests. As a result, the WebSphere Commerce server acts as a transaction server, and the search server acts as a non-transactional server that can be separately deployed and scaled. Entitlement was moved to the search server to apply entitlement checks before and after search results are returned in the storefront. If you use the Aurora storefront from Feature Pack 7, you will need to update the product and category REST calls in the JSPs to pass in the contractId that is set in the session, so that it is used by the search server to enforce the entitlement.
By default, search entitlement is disabled for B2C stores and enabled for B2B stores. To change this, you can insert or update an entry with the name ‘wc.search.entitlement’ and a value of 0 (disable) or 1 (enable) in the STORECONF table, as sketched below.
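A minimal sketch, assuming the DB2 command line and a hypothetical store ID of 10101 (if the row already exists, use UPDATE instead of INSERT):
# enable search entitlement for the store ('1' = enable, '0' = disable)
db2 "INSERT INTO STORECONF (STOREENT_ID, NAME, VALUE) VALUES (10101, 'wc.search.entitlement', '1')"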
If the regions have simple UI differences, then those could be managed through contract specific style sheets where you can follow a naming convention for the files so that the appropriate one is picked up. For example, the name of the CSS file could be same as the contract name. Alternatives could be designed using the same concept (map the UI differences to the contract) if the UI differences are significant, but this would require customization.
Marketing and promotions
If region specific marketing and promotions are a requirement, then region specific customer segments can be created to target the customers. The customer segments can be created as static member groups, i.e. member groups that have users added to them explicitly. Users can be added or removed from member groups dynamically, but you need to write custom code to map the shoppers with the appropriate member groups. And as discussed in the previous approach this can become a performance bottleneck.
An alternative is to not include any explicit members or implicit rules of membership when creating the customer segments. So, all customer segments are actually identical copies of each other with only the names being different. To identify a specific customer segment as belonging to a particular region, the relationship between a region’s customer segment and contract can be stored in a custom table. You can customize the member group checking logic to return true when evaluating the customer segment of the region that the customer is currently browsing. Essentially, every user will be treated as a member of the regional customer segment only when they are browsing the contract of that region.
You can then use the customer segment while defining a promotion or a marketing activity. Refer to the screenshot below for an example of a region specific product recommendation.
This approach of using multiple buyer contracts works well when the number of regions is not too high. If you have a large number of regions, then that would mean a large number of eligible contracts for each user since each user is eligible to shop under any contract. WebSphere Commerce server and search server scan through all eligible contracts of a customer at multiple points. Therefore, having a very large number of eligible contracts can impact the performance of the storefront.
WebSphere Commerce provides a flexible and powerful architecture and runtime framework. As a result, there can often be multiple ways to implement a given requirement. In this blog post, we discussed the approaches that can be implemented to offer region specific products, pricing, UI, marketing and promotions. You should be able to evaluate the approaches given here and map them to your business requirements to select the best approach that works for you.
Performance metrics available for WebSphere Commerce
Need for Agile performance preview
The solution architecture is done and the iterations are on their way, churning out designs and code. In projects executed following agile methodologies, features are designed and added in increments. While it is not too hard to get the whole picture on the functional capabilities of the solution, it is relatively difficult to get a handle on how all this incremental design and code will eventually perform until the last of the iterations are done and dusted. That often means projects shift from iterative mode to waterfall mode, with a performance test phase planned right at the end of the development cycle, leaving little leeway for development teams to correct any major performance bottlenecks identified during performance testing.
Can performance testing be made iterative? I can imagine my performance specialist rolling her eyes. The problem is that we cannot certify the system for performance unless a test can be run in a controlled environment. And that takes time - which is at a premium in the short iterations demanded by agile projects. So, is there hope? If we compromise our scope to not "certify" the system for performance and instead go for "flushing out major performance problems upfront", we may have a way around this conundrum.
Performance Measurement tool and framework
From Fix Pack 7 onwards, the performance logger feature supports a trace string "com.ibm.commerce.performance=fine" to trace the response time of key external calls from WC to OMS. Through this, a development team can understand the SLA when checking availability, reserving and cancelling inventory, and getting order details from OMS. With Fix Pack 9, the performance measurement logger feature of WebSphere Commerce allows teams to understand the response time, the impact of caching, and the size of the result of each call at various application layers. It replaces the old ServiceLogger tracing with a package of logging that can be analyzed through the performance measurement tool.
How to enable and use the performance measurement capabilities is well documented in the Knowledge Center. But here are the essentials of how a development team can use these capabilities to get a preview of the performance characteristics of the solution before the official performance test is due:
1. Plan a day during your test runs in your iterations for collecting performance metrics.
2. Work with your WCS/WAS administrator to enable the performance traces (for example, the com.ibm.commerce.performance=fine trace string mentioned above).
3. Have your testers execute a full suite of tests that represents typical browsing behavior of the live site - include search, promotional messages/pricing, navigation, registered/guest and checkout flows.
4. Collect the files in the WAS_profiledir/logs/ folder of your server.
5. Use your favorite tool to project the analysis -- for example, basic performance reports are in CSV format and can simply be charted in spreadsheets.
Once a baseline is formed, further iterations can focus on deviations from the previous measurement.
What is measured
The development team gets a preview of where the heaviness is and where either a caching strategy or a re-design is required. The reports are detailed enough to point out the following:
- Average call duration and execution time (performance reports)
- Size of results (performance reports)
- Cache hits/misses (performance reports)
- Stack of operations, with the time taken by each child in the stack (stack reports)
- Information on a full execution cycle (execution report)
- Source of calls (caller report)
A Note of Caution
While this test is not a replacement for a proper performance load test, it can point out where contention is likely to occur so that corrective design decisions can be taken early in the development cycle. If this were so easy, why isn't everyone doing it? Simply because there are likely to be false positives and a few false negatives in the findings. After all, the test environment is unlikely to be modeled to represent the scale of the production or performance environment. The environment itself may have some variability due to multiple testers accessing it and development teams delivering fixes. And test execution is subject to manual variations - think time cannot be controlled, sequences may vary, and so on. So, teams should take the findings of such testing with a grain of salt, and the approach requires the performance architect to weigh probabilities against the findings to determine which of them represent real problems. But in the hands of an expert, this mechanism can help create a solution that is functionally devoid of performance bottlenecks.
WebSphere Commerce search provides enhanced search functionality in starter stores by enabling enriched search engine capabilities such as automatic search term suggestions and spelling correction, while influencing store search results by using search term associations, and search-based merchandising rules.
Starting from Feature Pack 7, the search server is a new WebSphere Application Server runtime instance with an embedded Solr runtime, which includes a published RESTful API to interact with the WebSphere Commerce runtime instance. It decouples the search solution into a presentation layer (the store pages) and a business logic layer. With this approach, the search and browse-only traffic from the storefront can be offloaded away from the WebSphere Commerce server (transactional server) to the search server. The B2C store component pages use the RESTful API to retrieve the necessary data from the search server instance and present it to an online store visitor. With this architecture, the search servers can be scaled separately, and therefore the search and browse traffic can be handled independently, creating a flexible and scalable deployment model that can adjust to various storefront browsing traffic patterns at different times or shopping seasons.
A well-designed cache is an important aspect of a well-performing search solution in WebSphere Commerce. In this blog post, we will discuss the different types of caching that can be implemented pertaining to search.
The following diagram gives an overview of the caching configurations, triggers and invalidations with respect to the search solution of FEP7+.
There are multiple ways caching can be implemented in the WC and search servers. Typically, in Commerce server, we have:
1. JSP cache
2. Servlet cache (REST)
3. Command cache
4. Data cache
In the search server, we have:
1. Servlet cache (REST)
2. Data cache
3. Solr native cache
Let us discuss the cache configurations that are relevant for the search solution in FEP7+.
Search is used in the storefront in multiple places, mainly for category navigation, faceted navigation, keyword search and auto-suggest.
In keyword search and faceted navigation, there is a lower possibility of reuse of cache entries since there can be a large number of possible keywords and facet selections. Hence, the general recommendation is to not cache full page responses when keyword search or faceted navigation is used. In this case, caching at a lower level can provide a better cache hit ratio. For example, instead of caching the entire search results page, it makes sense to cache the individual product thumbnails which can be reused across different facet selections and search results.
On the other hand, the default version of category navigation pages (that is, without facet selections) can be full page cached. As mentioned above, for facet selections, it is beneficial to cache the individual product thumbnails as fragments. This would have the thumbnails cached for reuse across transactions but when the category page is repeatedly hit, the page would render without having to fetch/read all the individual product entries again.
As discussed earlier, with FEP7+, there is a separation between WC server and search server. The catalog pages, for example, category and product pages have REST calls which hit the search server directly. In this scenario, there can be two possible options for caching category navigation pages:
1) If WC JSPs are being used for rendering the UI then full page and fragment caching can be used as discussed above.
2) If a different technology is used as the front-end (for example, a content management system) and WC is used only as the back-end, then it is a common practice to use the REST API provided by WC for the business logic. In this case, the response of the REST calls can be cached in the search server (discussed in next point). This keeps the separation of browse and transactional traffic intact and as a result one can fully benefit from the ability to scale up search/browse traffic separately.
When using JSP caching, there are well documented invalidation techniques which can be implemented. Details can be found in IBM Knowledge Center:
Setting up cache invalidation for WebSphere Commerce search
Caching and invalidation in WebSphere Commerce search
REST services use both server-side caching and client-side caching. Server-side caching is achieved using dynacache, and client-side caching is achieved using cache directives in the response header.
The topic of REST caching is well documented, so I will not go into detail and instead just provide resources that can be useful for understanding how REST caching works. This topic in IBM Knowledge Center provides information about REST caching strategies.
To see an example of how to cache a REST request, you can refer to this blog post by Marco Fabbri.
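As a quick way to observe the client-side cache directives on a response (the host, port, store ID and search term below are hypothetical; adjust them to your search server), you can dump the response headers of a search REST call:
# print the response headers and filter for caching directives
curl -s -D - -o /dev/null "http://searchhost:3737/search/resources/store/10101/productview/bySearchTerm/shirt" | grep -iE "cache-control|expires"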
From FEP7+, the search infrastructure can be deployed in a separated WAS cluster and then the dynacache service can be enabled on every cluster member the same way you do on Commerce servers. The response of the search REST calls can be cached using dynacache on the search server itself. This is an excellent blog post by Marco Fabbri that describes how to do so.
The default invalidation technique used on the search server is different from the WC server. Since there is no scheduler running on the search server, there is no DynaCacheInvalidation job that runs in the background to perform invalidations. The invalidation on the search server is managed by the cache manager instead. The search server periodically checks for pending invalidations (pending invalidation events in CACHEIVL table) before executing each request. What this means is that each search server instance is responsible for invalidation of its own (local) cache instances.
WebSphere Commerce contains code that uses the WebSphere Dynamic Cache facility to perform additional caching of database query results. This additional caching code is referred to as the WC data cache. Similarly, there exists a data cache on the search server that is used for storing internal runtime properties that are used by certain search features to improve overall performance. These data object caches are configured to use DistributedMap object caches.
This Knowledge Center link provides a list of search data caches.
As with the base cache, the invalidations in the search data cache are managed by the cache manager on the search server. This blog entry by Andres Voldman can be referred to for more details about the search data cache configuration.
Solr native cache
To obtain maximum query performance, Solr stores several different pieces of information using in-memory caches. Result sets, filters and document fields are all cached so that subsequent, similar searches can be handled quickly. It is important to understand what they do and to tune them to suit the actual environment and search traffic.
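One hedged way to inspect these caches at runtime (the port and core name below are assumptions; adjust them to your search server) is through the Solr mbeans endpoint, which reports hit ratios and evictions for the query result, filter and document caches:
# dump Solr cache statistics for a catalog entry core
curl "http://searchhost:3737/solr/MC_10351_CatalogEntry_en_US/admin/mbeans?stats=true&cat=CACHE&wt=json"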
You can refer to Apache Solr documentation for more information about Solr native cache.
For additional guidance, you can refer to the "Optimizing Solr native caching performance" section in this article and the "Search caching" section in this Knowledge Center article.
There are multiple ways caching can be implemented in the WC and search servers. In this blog post, we discussed the cache configurations that are relevant for the search solution that uses the new search architecture introduced in Feature Pack 7.
As an architect/practitioner growing up in a world of enterprise architecture where data is rarely seen outside the ambit of relational schemas, NoSQL brings with it a breath of fresh air - simultaneously challenging previous beliefs, promising exciting opportunities and bringing its load of inevitable baggage. In this blog post, I shall walk down the discovery path of how, as an architect, I see NoSQL complementing, supplementing or replacing traditional solutions. The perspective is that of an architect trained in a world of stack architecture and transactions, so this may not resonate well with advocates of either technology. However, it highlights opportunities and challenges that translate into solutions that everyone can relate to. This will be a two-part series - part 1 (this blog) will set the context and part 2 will detail one solution possibility.
Defining NoSQL in comparison to SQL databases is no longer straightforward. Lines are blurring. For this blog, NoSQL databases are those that have flexible schema capabilities, store their data in one of a few non-relational formats (such as key-value pairs, documents, column families or graphs), and primarily support a non-SQL query interface. They may or may not support some or all of ACID (Atomicity, Consistency, Isolation, Durability). The internet is filled with information on NoSQL databases, their features and extensions, so I will try not to replicate this information. Instead, let us focus on what it means for ecommerce - starting with WebSphere Commerce.
Let us first set some background here. The first order of business is to view the information managed in an ecommerce solution from a perspective unconstrained by traditional views. For example, one may view the data handled by WebSphere Commerce from the following perspectives:
Catalog perspective -- Attributes, structure and administrative information related to products. Here the term attributes is used in the traditional sense of the word and is not restricted to WebSphere Commerce's definition of attributes. Hence, this set includes information that describes products for external consumption, information about products for the retailer's own management, information about the structure of and relationships between products, and information that is needed to administer or manage it all. The entire catalog data model of WebSphere Commerce falls under this bucket. Importantly, this does not include price and inventory.
Order perspective -- Attributes, structure and administrative information about shopping carts and orders. The same definition applies here too - it includes information from the shopper, retailer and administrative perspectives. This, in most cases, constitutes the primary transactional data for a retailer. Information about orders will need information from other sets of data such as catalog, payments, pricing, marketing, etc. It may be tempting to already think relational (foreign keys) here. If we hold off the temptation to bucket perspectives of data into relational concepts, we may find it easier to see alternative views.
Customer perspective -- Information about shoppers in our retail solution. It includes login and password for registered shoppers. It includes addresses and payment information for all shoppers. Again, it will have overlaps with orders and catalog perspectives.
Marketing perspective -- Information needed to market or promote one or more products. It includes information that defines the rules, information that should be displayed to shoppers and business users and information needed for business logic to evaluate and apply them.
The above is just a short list that this blog will use to illustrate a point of view; it is not an exhaustive or definitive list.
For each perspective, the following are immediately apparent:
- some data needs to be displayed to the external world, while some information is for internal use
- some data is static, other data not so static
- some data needs a high degree of ACID support, while other data can live with cursory checks
With this backdrop, let us consider the following use cases:
UC1: A shopper must be allowed to define and manage any number of pieces of personal information about themselves.
UC2: A shopper must be shown personalized messages based on personal information they have shared
A traditional solution requires some innovation to allow users to create their own free-form information - so we may end up with a lot of duplicates or near-duplicates. This information can soon grow out of hand - imagine if you anticipate a million customers and a dozen attributes on average for each. So the natural inclination is to push back on such a requirement and instead box the attributes into retailer-defined values such as hobbies, favorite XYZ (sports, color, place...) and so on.
Then, in WC, one could define a service for adding information to the member attributes part of the Member data model in WebSphere Commerce, customize the promotions/e-spots logic to allow targeting based on member attributes, and then, during customer interactions, check whether member-attribute-driven e-spots and/or promotions can be shown. This is one way (and one of our future blogs will detail it). The other way would be to define a custom table and store these as key-value pairs (in fact, storing in MBRATTR has a key-value sense to it).
NoSQL gives us an opportunity to think beyond the constraints of the data model here. After all, if the information naturally fits the key-value pair model or can be modeled as a "Personalization Document" for a customer, why not think NoSQL here?
The solution must be capable of creating this information, allowing shoppers to maintain it, and allowing internal commerce logic (such as the promotion/pricing components) to use it in transactions, specifically as part of existing commerce transactions. And it need not require remodelling the entire WebSphere Commerce database and business logic to use NoSQL. I will be advocating a layered approach where each type of data model is used to take advantage of its core strengths while minimizing the custom code needed as much as we can.
We will explore that solution in my next blog.
There are many order capture scenarios prevalent in the eCommerce space today. Some of the commonly used ones are pre-orders, recurring orders and subscriptions.
These order capture mechanisms are appropriate for different situations and different kinds of products. For example, pre-orders are great for allowing shoppers to reserve their own personal copy of a highly anticipated product.
In this two-part blog post, we will explore the above-mentioned order capture mechanisms and also cover the various points that need to be kept in mind while implementing them using the IBM Commerce portfolio of products. Part 1 (aka this one) will cover subscriptions and recurring orders, while part 2 (coming next week) will cover pre-orders.
Subscriptions & Recurring Orders
A subscription is a business model where a customer must pay a subscription price to have access to a product/service. The model was pioneered by magazines and newspapers, but is now used by many businesses and websites. Recurring orders are similar to subscriptions in that they also allow shoppers to buy the same items on a periodic basis without having to place orders repeatedly.
In the case of subscriptions, the time period and frequency are determined by the retailer. For example, a publisher decides that a magazine will be offered on a monthly basis with 1- or 2-year subscriptions. In the case of recurring orders, the time period and frequency are usually decided by the shopper. For example, a shopper decides that she wants 1 litre of milk to be delivered to her every day for the next 3 months.
There are multiple benefits of subscriptions and recurring orders, some of which are mentioned below:
Retailers benefit as they are assured of a predictable and constant revenue stream from the subscribed shoppers.
It increases customer loyalty by allowing customers to become greatly attached to using the product or service and, therefore, be more likely to renew when the time comes.
It reduces customer acquisition costs and allows for personalized marketing.
Subscriptions and recurring orders are supported out of the box by IBM Commerce solutions, namely, WebSphere Commerce and Sterling Order Management. However, there are certain aspects that a retailer must think through before implementing this feature as it can have an impact across almost all of the eCommerce subsystems.
Merchandising & Marketing
Before a product can be bought as a subscription or be part of a recurring order, it needs to be marked as such. This can be done using IBM Management Center for WebSphere Commerce or via the catalog data load process. The retailer needs to prepare this data and ensure that the additional properties, for example, subscription frequency and time period, are included in the catalog data.
Another aspect to consider is product life-cycle management. If a product is withdrawn from sale, what happens to the subscriptions & recurring orders that it is part of? One option could be to proactively cancel such orders and email the customers giving them a choice to order similar products (an opportunity for cross-selling and up-selling).
The payment for a subscription is usually captured upfront for the entire duration. On the other hand, for a recurring order the payment may be captured when each instance of the recurring order is processed. This introduces many complexities in payment management. For example, how do you handle the case where the credit card provided at the time of placing the original recurring order expires? In WebSphere Commerce, the recurring order will be cancelled if the payment for one of its instances fails during order processing. The customer can then be informed through an email or SMS and be asked to place a new recurring order. Additionally, there are 3rd-party recurring billing management systems available that handle the many subtle complexities of subscription-based payments, such as automatically emailing customers about expiring credit cards. Such systems could theoretically be integrated with WebSphere Commerce to handle credit card expiry detection and notification proactively.
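To illustrate the kind of proactive check such a billing system would perform, here is a minimal sketch in plain Java (this is not a WebSphere Commerce API; it only assumes the stored payment record exposes the card's expiry month and year):

import java.time.LocalDate;
import java.time.YearMonth;

public class CardExpiryCheck {
    // True if the stored card will already have expired by the date the next
    // instance of the recurring order is due to be processed.
    static boolean cardExpiredBy(int expiryMonth, int expiryYear, LocalDate nextProcessingDate) {
        YearMonth expiry = YearMonth.of(expiryYear, expiryMonth);
        // A card is usable through the last day of its expiry month.
        return expiry.atEndOfMonth().isBefore(nextProcessingDate);
    }

    public static void main(String[] args) {
        LocalDate nextRun = LocalDate.of(2016, 3, 1);
        if (cardExpiredBy(2, 2016, nextRun)) {
            // In a real flow this would trigger the email/SMS notification
            // described above, instead of letting the payment fail.
            System.out.println("Notify customer: card expires before the next order instance.");
        }
    }
}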
Pricing & shipping
As mentioned in the previous point, payment for a subscription is usually captured upfront. So the total price being charged to the customer includes the price of all the items for the entire duration of the subscription, and any future price changes will not affect the customer during the period of subscription. For recurring orders, on the other hand, it is common to capture the payment as and when each instance is processed. In this case, it is up to the business to decide whether the original price applicable at the time of placing the recurring order should be honored or whether the current price should be taken into account.
When a customer places a recurring order, the retailer is in effect promising the customer that the items will be available for the entire time period chosen by the customer. However, it may happen that the items are out of stock at the time of fulfillment of an instance of a recurring order. What should be the business process in this case? In WebSphere Commerce, the out-of-the-box behavior is for the recurring order to be automatically cancelled and the customer to be notified of the fact via email or SMS. An alternative approach could be to delay the fulfillment of the particular instance of the recurring order until inventory becomes available. This may be a good option if inventory is expected to be available soon and the customer can accept a delay. The customer could also be given a choice to accept similar products that are in stock, providing an opportunity for cross-selling and up-selling. These alternative approaches do not come out of the box but could possibly be implemented with some customization.
For normal orders, different retailers have different policies on whether they allow an order to be modified by the customer once it has been submitted. A common practice is to provide a short window of time (maybe a couple of hours), before an order is sent for fulfillment, in which the customer can request changes to an order via the retailer’s call center. Once the order has been sent to the backend system for processing, only cancellations may be allowed. Recurring orders need further thought, since a customer may want to edit one of the instances of the recurring order, or the remaining unfulfilled instances, at some point in the future. In this case, it may be simpler to just cancel the recurring order and allow the customer to place a new one.
Returns, refunds & cancellations
The return process for a normal order can itself be complex depending on the process and business policies followed by a retailer, but there can be additional intricacies in the case of subscriptions and recurring orders. Consider a scenario where a customer has subscribed to a monthly magazine for a year and has paid the total subscription cost upfront. A few months into the subscription, the customer decides to cancel it. How much refund is the customer entitled to? You would need to work out the refund by calculating the proportionate cost of each magazine issue and then deducting the cost of the issues that the customer has already received.
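As a worked example, here is a minimal sketch of that proration logic in plain Java (the two-decimal rounding rule is an assumption; your business policy would dictate the actual rule):

import java.math.BigDecimal;
import java.math.RoundingMode;

public class SubscriptionRefund {
    // Prorated refund = amount paid upfront minus the proportionate cost
    // of the issues the customer has already received.
    static BigDecimal refund(BigDecimal totalPaid, int totalIssues, int issuesDelivered) {
        BigDecimal perIssue = totalPaid.divide(
                BigDecimal.valueOf(totalIssues), 2, RoundingMode.HALF_UP);
        BigDecimal consumed = perIssue.multiply(BigDecimal.valueOf(issuesDelivered));
        return totalPaid.subtract(consumed);
    }

    public static void main(String[] args) {
        // A 12-issue subscription paid upfront at 120.00, cancelled after 4 issues.
        System.out.println(refund(new BigDecimal("120.00"), 12, 4)); // prints 80.00
    }
}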
Providing customer service via the phone and email can be expensive. It is always a good idea to empower customers to manage their account and order information through the online store. Features like order history are pretty much standard these days. But in case of recurring orders and subscriptions, some additional functionality may be required to make self-service truly effective. Customers should be allowed to track and manage their subscriptions. They should be able to view the history of not just the order originally placed but also the recurring instances of the order. They should be allowed to cancel or edit the orders (within the boundaries of the business policies governing it of course). They should receive appropriate notifications of actions to be taken, for example, if a subscription is up for renewal.
In this blog post, we discussed the main business processes involved in subscriptions and recurring orders. In next week’s post, we will explore pre-orders and how they can be leveraged by retailers.
eBay does it. Amazon does it. Alibaba does it. Walmart, Best Buy, Flipkart... the list keeps growing every quarter. Quite a few clients I have talked to think a marketplace is a "low hanging fruit" and are compelled to try it. This is especially true in what is traditionally referred to as "growth markets". On paper, the allure is irresistible: after all, who wouldn't love to make a commission for providing the real estate for sellers and buyers to meet and transact? In an era of cheap computing resources, this sounds almost like a free lunch. Only it isn't. I will explore some aspects of a marketplace strategy that are worth considering before going live with a marketplace solution.
First of all, millennial retailers often don't realize that the marketplace model has been around for ages. The focus has always been to bring sellers and buyers together on an online portal. For example, manufacturing units would set up portals to bring suppliers together and bid for contracts. Metaljunction is a successful auction-based marketplace for the coal industry in India. Even a stock exchange is a marketplace, with stock brokers playing the role of the portal that allows sellers and buyers to transact. Why, then, do we see a renewed interest in marketplaces? Why are retailers queuing up to try this model now?
Reason for adoption
Most consultants and retailers agree that the decision to go for a marketplace is based on the following reasons:
- Survival or expansion strategy, depending on whether the retailer does it to counter erosion of its customer base to large online retailers or to reach out to a new customer base/market
- Expanding the catalog that a retailer offers through its online channel without investing in larger warehouses and logistics
- Only the sale transaction needs to be hosted by the marketplace provider; procurement and fulfillment are executed by the seller
- No risk of stale inventory or need for markdowns, since the inventory risk is with the seller
- Sometimes the legal, political, social and cultural restrictions make it easier for companies to launch marketplaces to circumvent those restrictions
- Fees/commissions earned; though relatively small, these are almost all profit when compared with the cost of hosting transactions
- Shoppers may feel they get more choice in a single portal (a psychological angle), hence the popular expectation that it leads to customer stickiness
I am sure there are more reasons, but no major surprises here.
But retailers overlook, or don't give enough importance to, some important aspects. I'll talk about four such aspects in this blog.
Four aspects retailers mustn't overlook
Chief among them is that shoppers often don't differentiate between the seller and the marketplace retailer. A poor experience with a seller often translates into a poor experience with the marketplace and leads to lost footfall. In other words, established retailers are putting their reputation on the line on behalf of sellers, so they had better have a way to enforce discipline among sellers. One way could be to do random spot checks; I have heard of employees posing as shoppers to get proof of malpractice against sellers with a questionable track record. But that is a reactive approach. The marketplace provider needs to take up more responsibility. How about a tie-up with a logistics provider, with random quality checks at pick-up points? Companies often dismiss this as a risk that needs to be written off, but overlook the intangible value of brand reputation. Differentiating on trust often wins long-lasting clientele beyond finicky deal-hunters.
The second is standardization of catalog information: the quality of information from many sellers is below par. They are probably managing all their information in spreadsheets and have systemic problems of duplication, inconsistency or a general lack of information. Unless the marketplace strategy has a solution to address this problem, shoppers are going to find anemic product details pages.
Third, sellers have a mix of mature and amateur back-end systems. So the marketplace solution must be able to expose robust APIs for mature sellers to integrate with their back-end systems, while providing gateways for upstart sellers to manage their business directly on the marketplace itself.
Fourth is differentiation. When marketplaces are a dime a dozen, what makes a shopper select a particular one? This is where opinions are as diverse as people: some say a cool user interface, others say cool features, yet others rich content, SEO, social integration and so on. The most impactful ecommerce solutions have a compelling reason for shoppers and sellers to stay with you. Companies that target specific market segments have better success than generic retailers; a marketplace targeting office supplies, home furnishings or industrial tools is more likely to succeed than a low-margin, hyper-competitive general retail play.
There are many more; a quick search on the Internet should reveal a more comprehensive list. But the above four are what I found to be the prerequisites a retailer must think through before opening up a marketplace.
The tool for success
The IBM Commerce portfolio has a rich set of products that enables an end-to-end ecommerce solution to be built to order. It comes with a rich set of off-the-shelf features; however, its real value is in building on top of the vast and deep foundation it provides. At face value, people often dismiss it as not suitable for a marketplace solution. However, its data model is rich in catalog, contract and organizational capabilities, almost all of its services are available as REST APIs, and the WebSphere Commerce and Order Management tandem provides deep catalog and order management capabilities that are second to none. IBM's deep analytics portfolio is well known and is available as real-time as-a-service solutions. What I would like is a unified business tool that can bring together and expose all these deep capabilities to enable execution of a marketplace strategy. With all essential backend services available as web services and REST APIs, it is just a matter of exposing them over a relevant UI.
WebSphere Commerce Toolkit – Virtual Machine, Virtual Hard Disk and Docker
The WebSphere Commerce toolkit is an essential development tool for customizing a WebSphere Commerce application – to extend or override business logic, customize store flows, extend tool capabilities, etc. It is built on top of Rational Application Developer, which itself uses Eclipse at its core. It provides a Derby database with the WebSphere Commerce data model bootstrapped with WSC data, and an integrated WebSphere Application Server runtime. It additionally provides the plugin extensions needed to automate some of the commerce-specific development tasks (JET comes to mind).
While the official system requirements are found <here>, 8 to 12 GB of HDD (SSD would be nice), a fairly powerful modern CPU and 8 GB of RAM are what I have come to expect – more if you want your operational activities to be zippy.
One of the common questions asked by IT teams is: how do I ensure all developers are able to set up and use toolkits that have the same configuration, bootstrap data, APARs, etc.? Everyone is looking for quick, error-free and repeatable steps for enabling developers across a team to set up their toolkit and stay up to date. In this blog I present some options that I have seen or discussed, what it takes to pursue those options, and the pros and cons of each. I would like to hear about other options readers may have seen, or any trouble they encountered when pursuing these options.
Option 1: Old style trial by fire – no pain no gain
Ask every developer to install the stack on their own machine. This typically involves creating a fairly detailed document with screenshots and step-by-step instructions, starting from where to download the product packages, how to run the installer, what options to choose, where to click, etc. Let us face it, this is error prone. I have seen even seasoned developers miss a step and end up with a corrupt toolkit. There are always people who seem destined to encounter exceptions and errors more frequently than others. It is also time consuming. And if for any reason the toolkit gets corrupted, it could reset the development clock.
Pros: Simple on paper – just plan toolkit setup time for each developer in your project plan. The toolkit runs on the native OS, so you get the best performance. Also, developers will need to gain expertise on the setup – they will be forced to understand how things work under the hood.
Cons: Time consuming, inefficient, error prone, a lot of duplicated effort, backing up is tedious, troubleshooting is tedious, and it requires some under-the-hood knowledge.
Tips: If this is your chosen path, you can mitigate some of the risk by creating data and configuration bootstrap packages that bring up a basic environment through automated scripts instead of manual steps. For example, create a single zip file for all the configuration changes needed (ACP, search configurations, etc.) and a single script that can copy files around, run acpload, run dataload, build the search index, etc. Standardize the folder paths and versions of packages. In fact, such a bootstrap script is recommended no matter which option you choose. There is also a "central repository" setup that saves individuals from having to download software from the IBM Passport Advantage account: download it once and host it on your intranet.
Option 2: Traditional Virtual Machines – share with overhead burden
A VM promises a set-up-once, deploy-anywhere model – and to a large extent delivers on that promise, with a few caveats. You can have a seasoned developer or WSC administrator create the VM package and share it with others in the team. The team can just boot up the VM and become productive. This works for the most part, but the virtualization overhead needs extra processing power and memory, there may be network setup required, moving files between the VM, the host and other developers requires jumping through additional hoops, and there may be licensing costs for your virtualization software and the guest operating system in addition to your employees’ host OS licenses.
Pros: A create-once, deploy-any-number-of-times model; standardization is simple – just publish a new VM image. Also, it allows folks running an OS that does not support the toolkit (Mac or Linux) to run it.
Cons: Licensing overhead; execution overhead due to full virtualization and hence slow performance (it is slow for me even with 4 GB of RAM dedicated – 6 GB is when it starts breathing easy); too big (~40 GB). Extra steps are needed to transfer files in and out of the virtual machine.
Tips: Even here, to roll out updates it may be simpler to publish data or configuration load scripts, and reserve publishing a new VM image for product upgrades only. Beware of your colleagues’ saved passwords – you may inadvertently update repositories under their identities and cause confusion in the project. You need 4 GB of dedicated RAM for a VM to be functional – with less, it starts showing. Interestingly, SSD drives do not help as much in speeding up VMs as they do with regular native installations – my guess is this is mainly due to VMs reserving and managing an entire chunk of disk space by themselves (relatively speaking – it was still faster than spinning drives).
Option 3: Lightweight virtualization – pseudo will do
The Virtual Hard Disk is probably the most popular choice I have seen in my projects over the past 2 years. VHD is a Microsoft virtual hard disk format that behaves like a mountable drive with fixed or dynamic space, typically configured with a maximum capacity. It appears as a single file until it is mounted and is supported in practically all flavors of the Windows OS. For the WSC toolkit, one can install all the product packages needed for the toolkit into the mounted drive (taking care to ensure Installation Manager’s repository and license data directories are made to point into the mounted drive) and then just distribute the .vhd file. This can be as lean as 12 GB in size and can be easily shared. As long as everyone mounts it with the same drive letter, this works. It gives the benefits of running directly on the native OS along with the standardized installation and bootstrap configuration of a virtual machine.
Pros: Lightweight, easy to create, standardize and distribute, no extra virtualization costs or overheads
Cons: Works for Windows only; tied to the drive letter till death do them part. Not everything works within a self-contained VHD – DB2/Oracle databases and web servers try to keep some important references on the C: drive or in registry entries – so this works best for an "embedded" web server with a Cloudscape (Derby) database.
Tips: Drive letter management needs some foresight if you are a partner expecting to execute multiple projects, since there are only 20-odd letters to play around with (assuming A, B, C, D, E and F are already needed for various drives). Installation Manager keeps some information in a data configuration folder that defaults to C: even if you install IM on another drive. This requires some tweaking of configuration to keep Installation Manager-specific settings within the VHD-mounted drive – else you may need to install some of the packages on your local computer before you can run things from the mounted VHD.
Note: I have been asked before – why not Docker? Sure, Docker works – we now have Windows Server containers available on Docker. So theoretically one can install the toolkit and all the necessary packages in such a container and share it. But then we are not strictly adhering to “supported” versions, as the WSC developer edition is supported only on the Professional and Enterprise editions of Windows 7 – no mention of Windows 8.1 or 10, let alone the server editions on which Docker containers are available. The other option is to have Docker images for your DB and web server – the very pieces that need work-arounds with a VHD.
Option 4: Go solo – path for the brave
For people who understand what they want to do with a development toolkit and don’t mind going solo, several options do exist. First a disclaimer – IBM does not recommend any of these. If you get into trouble while doing this, IBM may provide best-efforts (kind) help, but there is no guarantee of being seen out of any quagmire you work yourself into. Also, you may still want to use RAD if you do BOD, plugin or tool development.
Now we have Docker containers for various pieces of the stack – Eclipse can run in a container, and so can DB2 and the Liberty profile of WAS. Imagine this: a Docker container running the same DB as your QA server installation of WSC. That would make it easy to extract and load all the test data for local developer use. While doing that between *nix and Windows is “fun” (tongue in cheek), with Docker you could potentially distribute the same data model and loaded data to both developers and testers! A change made in the development environment might actually work the first time in the server environment! Alright, that solves the DB. How about source control and building projects? These are folders that your favorite editor can be mapped to, with source control scripts or plugins that keep them in sync with a repository. If you primarily work in the Stores project, then no sweat. OK, now you can develop and you have data – how about doing some unit testing? Well, how about a Docker container with WebSphere Application Server running in it, configured to connect to your database container! Then the ubiquitous compile and publish tasks of Rational Application Developer need to be replaced by a build-and-deploy process, similar to how it happens on a server. It is cumbersome for Java classes, but it can be simple file copies for JSPs and JS files. You may also set up web server and search server containers while you are at it. All this makes it easy for you, the super developer, to manage exactly where you want to spend your resources and how you want to test. It is fun, it can be fast, and it can be daunting for the not so savvy.
Pros: The fun that mechanics get from taking apart a machine and piecing it together in new ways; the database can be shared and be exactly like the server’s; the topology can be server-like; changes can be tested more accurately locally, and defects are more likely to be reproduced locally.
Cons: You need to be an expert and highly motivated; it can be daunting for less savvy developers; and standardization is piecemeal – it needs a solid plan and extreme discipline, though that is similar to how configuration management and continuous integration are approached in server environments.
Tips: This works for innovation-minded folks – network with some IBM folks. Your nous and your professional network are your safety nets here.
There you go folks – some options and musings around them. Hopefully got em brain cells thinking in the right direction.
Edit 1: Embedded links to installation information contextual to the blog.
What is High Availability?
Wikipedia describes High Availability as a characteristic of a system that describes the duration (length of time) for which the system is operational. For the curious, the wiki page also provides the downtime corresponding to various availability goals.
IDC published in a survey report earlier in 2015 that ‘unplanned application downtime costs the Fortune 1000 from $1.25 billion to $2.5 billion every year’.
This article discusses the high availability considerations for the WebSphere Commerce solution components, with a focus on DB2 HADR as the key database-layer high availability option.
High Availability of the WebSphere Commerce system components
A typical WebSphere Commerce system spans various tiers: Web server, application servers, search servers and database server. It also talks to several downstream systems, like order fulfillment, payment and a multitude of others, for different purposes.
In general, High Availability is achieved by making systems redundant. Each tier of the stack has its own specific solution to achieve High Availability. For example, at the Web server tier, a load balancer is commonly used to distribute traffic across a Web server cluster. At the application server tier, WebSphere Application Server federation and clustering are managed by the Network Deployment Manager. You will need to ensure that you know and understand the high availability capability of all the downstream systems to be able to determine the high availability of your commerce system.
The commerce system will most likely have one active database, which can therefore be a single point of failure if it is not set up for High Availability Disaster Recovery (HADR) in the case of DB2, or a similar high availability setup for an Oracle database. Without HADR, a partial site failure requires restarting the database management server that contains the database. The length of time it takes to restart the database and its server is unpredictable; it can take several minutes before the database is brought back to a consistent state and made available. With DB2 HADR, a standby database can take over in seconds. Further, you can redirect the clients that used the original primary database to the new primary database by using automatic client reroute or retry logic in the application. The DB2 HADR feature provides a high availability solution for both partial and complete site failures. HADR protects against data loss by replicating data changes from a source, the primary server, to one or more standby servers. DB2 HADR supports up to three remote standby servers and is available in all DB2 editions.
DB2 High Availability is about ensuring that a database system or other critical server functions remain operational during both planned and unplanned outages, such as maintenance operations or hardware or network failures. Reduced database downtime enables you to meet strict SLAs with no loss of data during infrastructure failures. DB2 provides database clustering as well as high availability and disaster recovery capabilities designed to maximize data availability during both planned and unplanned events. It also allows you to quickly and easily adapt to changing workloads with minimal involvement from database administrators, and frees application developers from the underlying complexities of database design and architecture. Mobile, online and enterprise applications need continuously available data to keep transactional workflows and analytics operating at maximum efficiency; any downtime can leave mission-critical databases inaccessible and applications unresponsive. IBM DB2 pureScale helps change the economics of continuous data availability. DB2 pureScale is designed for organizations that require high availability, reliability and scalability for online transaction processing (OLTP) to meet stringent SLAs.
Key considerations - Planning for High Availability and Disaster Recovery
- Site location: Is there a single site, two sites, or more, and what are the network bandwidth and connectivity between the sites? This will help lay out the strategy for choosing whether you want a single site to serve the main traffic with the other used in case of site failure, or both sites handling the traffic in parallel. If the sites are geographically co-located or connected with a super-fast network, the sites can share traffic with ease.
- Active-Active or Active-Passive: The Web tier and the application tiers (commerce and search) can be configured to be active-active or active-passive; however, the commerce database is likely to be set up active-passive. If the sites do not have the required high-speed connectivity, there is a challenge, as the application servers have to talk to a database which is remote to them.
- Single-cell or dual-cell: The commerce and search application servers can be set up in a single-cell or dual-cell topology. The dual-cell topology enables updates, rollouts and application deployments to be applied with reduced downtime. With version 8 of WebSphere Commerce, there is going to be more and more focus on zero-downtime deployments.
- Failover/Disaster Recovery capacity: The capacity of the system that will be available to serve in case of a site outage needs to be planned. The IBM techline sizing document will make a recommendation on the failover capacity, and this data can be used to work out a starting configuration for the percentage. The failover planning can be at 25%, 50%, 75%, 100% or somewhere in between.
- Search application server Managed Configuration: WebSphere Commerce v8 has introduced an advanced configuration topology that allows the search server to be in a Managed Configuration, which enables the search master, subordinate and repeater servers to be provisioned from templates managed by the deployment manager.
- Database High Availability topology: The database backup strategy needs to be considered based on the database's native capabilities and the site locations. Take the example of DB2 as the database, and a two-site setup where the sites are physically remote from each other and not connected with the required bandwidth. DB2 would have an onsite hot standby at its primary site using NEARSYNC, and an offsite disaster recovery standby using SUPERASYNC. The hot standby would be on automatic takeover, whereas the offsite standby would be under manual control. The DB2 high availability setup has two options, AIX/Linux/OS clustering or native DB2 HADR, and the choice must be made based on the governing factors. The bandwidth calculations for HADR depend on the peak incremental data per hour.
Reference: High Availability and Disaster Recovery Options for DB2 for Linux, UNIX, and Windows
- DB2’s geographically dispersed DB2® pureScale™ cluster (GDPC) capability: If you are looking for an active-active high availability system, consider the pureScale capability, where GDPC provides the scalability and application transparency of a regular single-site DB2 pureScale cluster, cross-site. As described in the developerWorks article, ‘this is the active-active system, where the pureScale members at both sites are sharing the workload between them as usual, with workload balancing (WLB) maintaining an optimal level of activity on all members, both within and between sites. This means that the second site is not a standby site, waiting for something to go wrong. Instead, the second site is pulling its weight, returning value for investment even during day-to-day operation.’
- Automatic Client Reroute options for your application: Once your database high availability topology is agreed upon, you will be leveraging automatic client reroute, popularly referred to as ACR. ACR can be configured in multiple ways, and lets you set parameters like maxRetriesForClientReroute and retryIntervalForClientReroute (see the sketch after this list). With DB2 HADR you can configure the ACR facility so that client connections are automatically rerouted to an alternate server when the primary server fails; it is the preferred reroute method. ACR can be used between any two servers, not just an HADR primary and standby; in that case it is up to the administrator to set up replication to ensure that the two servers have the same data content. Other replication methods, such as CDC or Q-rep, can also be used to keep the two servers in sync. The ACR functionality is entirely separate from HADR.
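As an illustration of the ACR parameters mentioned above, here is a minimal JDBC sketch using the DB2 JCC driver's client reroute properties (host names, ports and credentials are placeholders; verify the property names against your DB2 driver level):

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class AcrConnectionExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "wcsuser");      // placeholder credentials
        props.setProperty("password", "secret");
        // Alternate (standby) server the client reroutes to on failure.
        props.setProperty("clientRerouteAlternateServerName", "standby-db.example.com");
        props.setProperty("clientRerouteAlternatePortNumber", "50000");
        // The retry knobs discussed above.
        props.setProperty("maxRetriesForClientReroute", "5");
        props.setProperty("retryIntervalForClientReroute", "3"); // seconds

        // The primary server goes in the URL; the driver handles the reroute.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:db2://primary-db.example.com:50000/WCSDB", props)) {
            System.out.println("Connected via " + conn.getMetaData().getURL());
        }
    }
}

In a WebSphere Application Server deployment you would typically set the same reroute properties as custom properties on the data source rather than in application code.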
The high availability of a commerce production system depends upon several factors: availability requirements, planning for high availability and disaster recovery, site planning, integrated systems, network bandwidth and database topology, to name a few of those under consideration.
The goal of this article is to help you think of your key considerations as you plan, prepare, design, implement and test your different tiers for high availability.
Credits: Anbu Ponniah is our most regular reviewer and provides feedback to ensure that we cover points which will be of interest to our readers. Thank You Anbu for valuable insights, always.
About the authors:
Pravin Kedia is an Analytics Solution Architect with IBM, helping customers with Data Warehouse and Database solutions. He is passionate about IBM technologies and shares his insights through blogs on developerWorks.
Shweta Gupta is a WebSphere Commerce consultant with IBM, helping customers with their ecommerce journey. She is passionate about system performance and shares her insights through blogs on developerWorks. Read about her other publications on LinkedIn.
You may have been part of commerce implementation cycles where functional stability pushes out the timelines for the performance iterations. In such scenarios, the project teams may decide to start some level of application diagnosis while waiting for the isolated performance system with a stable functional build.
The lower testing environments, like SIT (System Integration Testing) and UAT (User Acceptance Testing), can make good platforms for the solution implementation and development teams to analyze application performance. Here are some of the factors you should consider when using them for diagnosis, and account for in your analysis:
a) What is their deployment topology with respect to production?
b) What is their level of cumulative fixes and APARs with respect to production?
c) What are the data volumes with respect to production?
d) What Management Center business rules, activities and search rules are present?
e) What eSpots and promotions are available?
f) Can you get an isolated time slot to conduct your single-user and some trivial load tests?
g) What is the state of the integration systems, like order management, payment and others, and how do you isolate your diagnosis?
There are several monitoring tools, covered in my other blogs, that can aid in your analysis. You may not have an APM (Application Performance Management) tool installed on these environments; however, you do have access to the Health Center tool, which is packaged with WebSphere Application Server (WAS). I will use this article to focus on one of the IBM Monitoring and Diagnostic Tools, Health Center, which is an agent-based diagnostic tool for monitoring the status of a running Java application.
The Health Center agent libraries are included with the WAS server Java SDK (Health Center 3.0 or later is required for WebSphere Commerce). This makes it an easy-to-use tool, available on these environments, that can be used by developers, architects and performance analysts. The Health Center tool can just as easily be used in the toolkit development environment to help developers code for performance; it will show them the time spent and the thread stacks. It is prescribed that all developers profile their code during the implementation phase and work on the 80-20 rule: if you are able to optimize the highest consumers, you will gain performance. When the code is in SIT and UAT, the focus should be more on the integration perspective. However, I must admit, we often end up doing application-level code tuning based on analysis until much later in the cycle.
Configuration, Setup and Running Health Center
1 – Check the version
# /opt/IBM/WebSphere/AppServer/java/bin/java -Xhealthcenter -version
[Thu 27 Aug 2015 11:55:33 AM IST] com.ibm.diagnostics.healthcenter.java INFO: Health Center 3.0.5.20141209
Aug 27, 2015 11:55:33 AM com.ibm.java.diagnostics.healthcenter.agent.mbean.HCLaunchMBean <init>
INFO: Agent version "3.0.5.20141209"
INFO: Health Center agent started on port 1972.
java version "1.6.0"
2 - Enabling the Health Center agent in profile or headless mode
In the headless mode, the agent starts collecting data immediately, and stores the data in files called healthcenterpid.hcd, where pid is the process ID of the agent. You can load this file into the client at a later date, to view the data. Headless mode is useful for systems where you cannot connect a client to the agent.
I am going to use the Health Center tool in full mode for this article. The agent starts collecting data immediately, and you then launch the client and connect to the server process to view the data.
3 – Health Center Settings
The Health Center agent can be configured through JVM properties. Please refer to the Knowledge Center documentation for the configuration.
Knowledge Center Reference
The main settings (specified with -D as JVM arguments, or via the -Xhealthcenter option of the java command when you start the agent and the application at the same time) are:
- com.ibm.java.diagnostics.healthcenter.agent.port: Sets the JMX port number. For JMX client-agent connections (the default), the client uses port 1972 by default for communicating with the agent. If the client cannot use port 1972, it increments the port number and tries again, for up to 100 attempts. You can override the first port number that the agent tries to use.
- com.ibm.diagnostics.healthcenter.jmx: Set to on to enable the client to communicate with the agent by using a JMX connection. This property is set to on by default.
- Headless mode: The agent starts collecting data immediately, and stores the data in files rather than sending it to the client. When the application ends, the agent creates a file called healthcenterpid.hcd, where pid is the process ID of the agent. You can load this file into the client at a later date, to view the data. If headless mode is used, you can specify multiple other settings.
I am starting the WebSphere Commerce server using the following parameters for the purposes of this illustration:
-Xhealthcenter, -Dcom.ibm.java.diagnostics.healthcenter.agent.port=1980, -Dcom.ibm.diagnostics.healthcenter.jmx
These settings get saved to server.xml under:
<profile directory>/config/cells/<cell>/nodes/<node>/servers/<servername>/server.xml and are picked up by startServer.sh
4 – Restart the server
Restart the WebSphere Commerce server you are profiling. Confirm that the Health Center agent shows up on the server; you will find a <healthcenter> directory created under the application server logs.
Installing the Health Center Client to View Data
You may follow any one of the options to install the IBM Support Assistant.
Using a Health Center Client to View Data
1 – Connect the Health Center client to the port specified while starting the application server.
The connection will start and you will be able to see the following data using the left navigation.
Example of a Profiling View for com.ibm.commerce.rest* filter:
You can use filters to view the classes and methods that you are watching. You can then turn on the trace for the method you want to drill into. In Health Center, you have to enable method tracing when you start your application, as it cannot be enabled at run time. Enabling method tracing is intensive and negatively impacts application performance, so you want to do it selectively, based on what you are drilling into.
Using the Health Center profiler for your debugging
I am showing a few screenshots to help you see the traces for some of the commerce use-cases.
If your storefront has created an extension to the Wish List and written custom code around it, and you are looking to profile the user actions around adding to a wish list, sharing the list and viewing it, you can choose a filter like com.ibm.commerce.giftcenter.* and then look at the invocation paths.
You may use the “Reset Data” button on the toolbar (the second button from the right) to get data for the current flow only.
Search based operations:
IBM Health Center allows commerce developers to monitor and profile their application in development and testing environments with ease; no additional skills or resources are required to install and use monitoring tools. The tool comes pre-installed with WAS and has a simple configuration for both profiling mode and headless mode.
The tool provides data like CPU usage, GC, I/O and network; however, the focus of this article is profiling a live application using method traces. This enables a developer to review the invocations, get into the details of the hot methods, and drill further to locate and then work on them. It allows the analyst/architect/developer to analyze the path taken for a cold, not-yet-warmed page versus a cached page. The developer can extract nuggets of information from the flows, which helps with troubleshooting both functional issues and performance issues.
When you exercise load on the environment, Health Center will allow you to view the system metrics and JVM metrics, like thread information. The Web Container threads are most valuable in understanding the behavior of the application under load. I will come back to the thread analysis topic in a future post. If you are interested in a specific monitoring methodology or tool, feel free to bring it up in the comments section.
Musings on WebSphere Commerce and stack options
Most of us know this: WebSphere Commerce is an enterprise Java application that runs on top of WebSphere Application Server, which runs on a Unix OS and uses DB2 or Oracle as its persistence layer. The technology space is constantly evolving, and new choices are available for each layer that WebSphere Commerce depends on. There are often doubts in the minds of retail IT departments about whether they can safely adopt an alternative technology in any of these layers. This blog is a collection of my musings on that topic.
By default, everyone assumes that WebSphere Commerce (WCS) runs on an all-IBM stack! But a quick look at the detailed requirements page reveals some non-IBM components in the stack: Windows and Linux on Intel-based processor systems are two key differences at the operating system layer (bringing flavors of Windows Server, Red Hat Enterprise Linux and SUSE into the mix), while Oracle is a key difference at the database layer. Otherwise, the list looks like a pure IBM list. However, I have seen customers experiment with more choices than the above. Let us look at some variations.
1. Web Server Options
The IBM HTTP Server version in the stack corresponds to Apache web server 2.2.29. Some customers choose Microsoft Internet Information Services (IIS) because they are using that solution in other parts of their IT enterprise. Similarly, other choices such as NGINX or the plain Apache web server are not unknown. Of course, the default choice of IBM HTTP Server comes with the plugin configuration, predefined configurations for SSL communication between the WAS and web server layers, and support for both unmanaged and managed WAS nodes; using a different web server solution would require the IT team to perform some manual configuration. The reason for choosing an alternative web server is usually that the team already uses a different web server for all its enterprise needs, or is after specific capabilities of a web server (for example, Apache web server version 2.4's bandwidth limits for clients).
2. Hosting and OS options
WebSphere Commerce is supported on popular enterprise *nix options such as AIX and Linux on x86, AMD and PPC architectures, and on virtualization options such as PowerVM. However, customers often wonder if they can deploy WCS solutions on their choice of cloud provider. The short answer is that WCS is functionally compatible with most cloud providers. Cloud providers differ in the virtualization technology they use, so the product will provide its full set of functional features, but the sizing needed to run the solution may differ between providers. As long as the IT team works with that provider to adjust their techline sizing, this should work. Of course, going with IBM SoftLayer means the techline sizing for WCS is straightforward and support is in one place. IBM Commerce on Cloud DevOps services has hardened reference architectures and devops patterns used in multiple client engagements. We also have an IBM UrbanCode and Chef based setup, configuration and deployment management solution that automates many of the mundane tasks, in addition to supporting common solution architectures. And as per http://www-01.ibm.com/support/knowledgecenter/SSZLC2_7.0.0/com.ibm.commerce.install.doc/refs/rigsupportSCCI.htm: "Depending on your virtualization configuration and use case scenarios, the cost of operating in virtualized environments could be more than 30% of your overall native, non-CCI IBM WebSphere Commerce available capacity".
On operating systems: if the database and WebSphere Application Server run on an OS, theoretically WCS also runs on it. However, from the WCS product literature it is clear that only a handful of operating systems have been certified (tested) for production use. So while WCS "should" work (and probably would), and, knowing IBM, it will probably provide best-efforts help for solving any problems reported on those operating systems, companies adopting unsupported flavors of Linux should have a plan in place to handle such situations. For example, having at least a small farm of machines on a supported OS flavor helps in differentiating a problem as specific to the OS or configuration versus the WCS product. I haven't personally seen clients experiment much here.
3. DB options
DB2 and Oracle are very popular amongst WCS customers. What if someone wants to experiment with NoSQL or in-memory DB options? Again, typically with IBM, these will not be supported, but if you open a PMR you will find that IBM helps you on a "best efforts" basis - note that this is the same level of support provided for your custom code. I have seen customers with good IT teams experiment with such options. One of the earlier posts in this blog discusses a pattern where some of the information is moved to a NoSQL DB. Similar explorations exist for using alternatives for parts of transaction processing - for example, keeping all registered shopper information in an Elasticsearch solution.
4. Queue, Transformation, ESB etc
WCS integrates with a multitude of IBM and third-party products - IBM OMS (Sterling), CPQ and other OMS and ERP systems are common. Point-to-point queue-based or ESB-based integration patterns are popular. IBM MQ and IIB are supported. IBM has a long history of supporting integration with SAP-based systems, in fact some say better than hybris. What if you want to explore something like Apache Kafka? Like any solution built on loose coupling, by implementing a few Java adaptors one can integrate over such technologies too.
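For instance, a thin outbound adaptor that publishes an order event to a Kafka topic could be as small as the sketch below (standard Apache Kafka producer API; the broker address, topic name and payload are hypothetical):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OrderEventKafkaAdaptor {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka.example.com:9092"); // placeholder broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // In a real adaptor the payload would come from the WCS message
        // composition layer; a literal stands in for it here.
        String orderXml = "<Order orderId=\"12345\" status=\"Submitted\"/>";

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Keyed by order ID so events for one order stay on one partition.
            producer.send(new ProducerRecord<>("wcs.order.events", "12345", orderXml));
        }
    }
}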
5. Monitoring and managing utilities
Any OS-level monitoring and management utilities are of course supported - that is a no-brainer. The WCS tracing and logging framework is built on WAS's JRas framework. While any log monitoring or log analytics tool that reads those logs is easy to support, if you need to replace the complete logging solution with a custom logging framework, you would need to switch out some foundation jars from WCS and WAS, which will impact your support statement. One option for staying "in support" is to extend the needed classes and override the loggers at all critical places; WCS provides many extension points which can come in handy here.
6. Search
Another layer that often pops up during solution discussions is search. As we know, WebSphere Commerce Search has Solr 4.7 (Lucene) as its engine, but there are many add-ons that integrate search into mainstream e-commerce: not just the usual search and catalog navigation use Solr, but price and inventory are available there too, rules to modify search results are now bubbled up to the business tools, and search is an integral part of precision marketing. However, some customers want to improve the core search capability itself, and that is possible. For example, customers may want to upgrade to a later version of Solr to take advantage of a specific filter, or add custom extensions to support a specific language or capability (such as phonetic search). Since the architecture is loosely coupled, and since the Solr configuration (kept in solrconfig.xml) is visible to IT teams, modifying that layer is straightforward.
7. Utility libraries
Lastly, I would like to mention the multiple third-party utility or special-purpose libraries that become part of the solution - these are typically located in the WC_TOOLKIT/workspace/WC/lib directory in our toolkit. I know of instances where projects needed string processing capability available in a later version of a third-party library. That is also the place for adding solution-specific additional third-party libraries.
I have just shared my musings about some of the common alternatives that come up during discussions with IT experts involved in e-commerce solutions. There are numerous other layers. Note that none of the above should be interpreted as a support statement from IBM - these were just examples of options explored to various degrees in various contexts. As always, check with your IBM support personnel before venturing into something different.
Referring to our third article, on Deploying Customizations, this is our fourth and last article in the OMS-on-Cloud stream in the WCE Practitioners Lounge.
By going through the article, readers can expect to learn the integration scenarios for IBM Order Management (which is OMS on cloud) and how it can be integrated with other systems present in the enterprise architecture.
System Integration and Considerations
A commerce solution for an enterprise is developed using multiple systems, with each system having a specific role in the solution. For example, in an enterprise solution, in addition to IBM Order Management (which primarily has the multi-channel order management and fulfillment role), there will be systems like CRM and payment gateways, with customer management and payment management (including refunds) responsibilities respectively. To make all these systems work together across the various business scenarios, the systems are integrated with each other. Moreover, there are a number of considerations during system integration, like deployment of the external system (on cloud or on premise), security, exception handling and flexibility (integration through middleware vs. point-to-point integration).
Types of Integration
We can group the integrations into the following categories:
- Inbound Synchronous Integration
- Outbound Synchronous Integration
- Inbound Asynchronous Integration
- Outbound Asynchronous Integration
Synchronous integration between two systems is when data (request and response) is exchanged in real time, whereas asynchronous integration is when the data is not exchanged in real time (that is, data is first stored and then processed for exchange). From the IBM Order Management perspective, in outbound integrations IBM Order Management synchronously invokes a service in an external system, whereas in inbound integrations an external system invokes an IBM Order Management service.
Integrating IBM Order Management
In a typical asynchronous integration, data is put into a component like a JMS queue, a file (CSV) or a database; later, the data is picked from the asynchronous component and processed. Data can be put into the asynchronous component by OMS for consumption by an external system (outbound flow), or by an external system for consumption by OMS (inbound flow). To implement an asynchronous integration in OMS, a service is configured using the OMS Service Definition Framework (SDF). This service is run through an integration agent which reads or posts the data.
IBM Order Management, which is OMS on cloud, supports asynchronous integration through both REST services and WebSphere MQ. With IBM Order Management access, you get access to the WebSphere MQ service as well. To know more about the supported queue operations for IBM Order Management, please visit adding queues and managing queues. Though WebSphere MQ is the natural choice for asynchronous integration, you may have to use a custom REST service in certain cases; for example, a client's security policy may prefer REST-based integration over queue-based integration. In such cases, for inbound integrations you can develop and expose a REST service which internally uses the out-of-the-box WebSphere MQ service for the required asynchronous processing, whereas for outbound integrations it is the responsibility of the service invoked by IBM Order Management to process the request asynchronously. Please visit invoking IBM Order Management REST services and setting required properties for details on these topics.
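For the inbound asynchronous flow, the external system (or a thin REST wrapper in front of it) ends up putting a message on an OMS queue. Here is a minimal JMS sketch of that put using the WebSphere MQ JMS classes (the host, queue manager, channel, queue name and credentials are placeholders for whatever your IBM Order Management environment provisions):

import javax.jms.QueueConnection;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.jms.TextMessage;
import com.ibm.mq.jms.MQQueueConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class OmsQueuePut {
    public static void main(String[] args) throws Exception {
        MQQueueConnectionFactory cf = new MQQueueConnectionFactory();
        cf.setHostName("oms-mq.example.com");     // placeholder host
        cf.setPort(1414);
        cf.setQueueManager("OMSQM");              // placeholder queue manager
        cf.setChannel("SYSTEM.DEF.SVRCONN");      // placeholder channel
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);

        QueueConnection conn = cf.createQueueConnection("mquser", "mqpassword");
        try {
            QueueSession session = conn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            QueueSender sender = session.createSender(
                    session.createQueue("OMS.ITEM.FEED.IN")); // placeholder queue
            // The message body would be the OMS service's expected input XML.
            TextMessage msg = session.createTextMessage("<Item ItemID=\"SKU-1\"/>");
            sender.send(msg);
        } finally {
            conn.close();
        }
    }
}

The SDF service configured on that queue would then pick the message up through its integration agent, as described above.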
As far as synchronous integration is concerned, IBM Order Management supports integration with external systems through REST APIs/services for inbound flows. For outbound flows, the external system's services can be called through custom code. Depending on the type of external service (REST, SOAP web service, EJB web service, etc.), you can develop a custom utility to be used while invoking external synchronous services. Further, to make the custom utility reusable, you can parameterize it (with parameters like the URL and request method), as in the sketch below.
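Here is a minimal sketch of such a parameterized utility, using only java.net.HttpURLConnection (the endpoint URL and payload are hypothetical; authentication, retries and error mapping would be layered on per your design):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ExternalServiceInvoker {
    // Reusable, parameterized invoker: the URL, HTTP method and payload are inputs.
    static String invoke(String serviceUrl, String method, String payload) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(serviceUrl).openConnection();
        conn.setRequestMethod(method);
        conn.setConnectTimeout(5000);   // an appropriate timeout, as noted earlier
        conn.setReadTimeout(10000);
        if (payload != null) {
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/json");
            try (OutputStream out = conn.getOutputStream()) {
                out.write(payload.getBytes(StandardCharsets.UTF_8));
            }
        }
        StringBuilder response = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            for (String line; (line = in.readLine()) != null; ) {
                response.append(line);
            }
        }
        return response.toString();
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical external payment service endpoint.
        System.out.println(invoke("https://payments.example.com/api/refunds", "POST",
                "{\"orderId\":\"12345\",\"amount\":80.00}"));
    }
}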
The following figure depicts the pictorial view of a sample IBM Order Management integration with an external system:
The sample integration depicted in the figure works as described below:
- OMS Asynchronous Outbound – On an appropriate event in IBM Order Management, the data to be sent to the external system is posted to an internal (to OMS) queue. An OMS integration agent reads the data from the queue and processes it (through an “External Service Request”) by either invoking an external service (which can be a SOAP web service or a REST service) or directly putting the message into an external queue. If there is any error, the error is reprocessed later to send the request again.
- OMS Asynchronous Inbound – On an appropriate event, the external system puts a message (with the required data) into an IBM Order Management queue. An OMS integration agent reads the data from the queue and processes it. If there is any error, the error is reprocessible.
- OMS Synchronous Outbound – IBM Order Management invokes the external system's service synchronously, with an appropriate timeout. The external service can be a SOAP service, REST service or EJB web service. OMS receives the response from the external service and handles both success and failure scenarios.
- OMS Synchronous Inbound – The external system invokes an IBM Order Management REST service, receives the response from OMS and handles both success and failure scenarios.
You need to perform a whitelisting activity to configure mutual trust between the systems being integrated. That is, the IP addresses of the systems which will be sending data to IBM OMS need to be whitelisted (because OMS will only process data sent from whitelisted IP addresses). Similarly, as required, the OMS IP addresses need to be whitelisted at the other end.
Considering all the systems with which IBM Order Management is being integrated, you also need to analyze the various types of access required by the external systems. Based on this analysis, you should design and configure appropriate “integration user groups” along with the “integration users”. Finally, you need to configure API security for secure access to the APIs and services.
Exception handling in IBM Order Management can be implemented as required; we have not experienced any fundamental difference in the way integration-related exceptions are handled in IBM Order Management.
OMS asynchronous integration allows reprocessing errors through both the exception console user interface (UI) and an API. To enable reprocessible exceptions, in the asynchronous SDF service you should configure the asynchronous components as reprocessible in case of exceptions (by selecting the Is Reprocessible checkbox). With this, when an error occurs during processing, the error is visible in the exception console; if required, you can review the error in the console and resend it for reprocessing after making the necessary fixes. For example, if an item feed to OMS fails for a few items due to locking on an OMS table, the failed inputs can be reprocessed from the exception console UI or through custom code. There can be a number of reasons for integration errors, like JMS being down, network errors, timeouts, etc. Depending on the scenarios you want to handle, you can reprocess the exceptions through custom code.
For synchronous inbound integrations, you need to model appropriate errors/exceptions (code as well as description) and share them with the external system. For the outbound ones, you need to handle the possible errors from the external system.
The external system with which IBM Order Management is being integrated can be a cloud system or an on-premise one. While implementing the integration, how the external system has been deployed does not matter from the OMS point of view; the tasks on the OMS side (like whitelisting the external system's IP addresses) remain the same.
As far as the integration approach is concerned, considering the relatively large number of systems in commerce enterprise architectures, it is preferable to integrate IBM Order Management with external systems through an ESB and API gateways.
For more information about integrating IBM Order Management with external systems, you can visit this link.
This is what we have today on integrating IBM Order Management with external systems. With this, we end our blogs in the OMS-on-Cloud stream. We are also planning another series around specific OMS integration scenarios, like integrating with ERP for Order-to-Cash and common marketplace integrations. Stay tuned for our next blog!