Following our third article, on Deploying Customizations, this is the fourth and final article in the OMS-on-Cloud stream of the WCE Practitioners Lounge.
By going through this article, readers can expect to learn the integration scenarios for IBM Order Management (OMS on cloud) and how it can be integrated with the other systems in an enterprise architecture.
System Integration and Considerations
An enterprise commerce solution is built from multiple systems, each with a specific role in the solution. For example, in addition to IBM Order Management (whose primary role is multi-channel order management and fulfillment), an enterprise solution will have systems like a CRM and payment gateways for customer management and payment management (including refunds) respectively. To make all these systems work together across business scenarios, the systems are integrated with each other. Moreover, there are a number of considerations during system integration, like the deployment of the external system (on cloud or on premise), security, exception handling, and flexibility (integration through middleware vs. point-to-point integration).
Types of Integration
We can categorize the integrations into the following categories:
- Inbound Synchronous Integration
- Outbound Synchronous Integration
- Inbound Asynchronous Integration
- Outbound Asynchronous Integration
Synchronous integration between two systems is when data (request and response) is exchanged in real time, whereas asynchronous integration is when data is not exchanged in real time (that is, data is first stored and then processed for exchange). From the IBM Order Management perspective, in outbound integrations IBM Order Management invokes a service in an external system, whereas in inbound integrations an external system invokes an IBM Order Management service.
Integrating IBM Order Management
In a typical asynchronous integration, data is put into a component like a JMS queue, file (CSV), or database. Later, the data is picked from the asynchronous component and processed. Data can be put into the asynchronous component by OMS for consumption by an external system (outbound flow) or by an external system for consumption by OMS (inbound flow). To implement asynchronous integration in OMS, a service is configured using the OMS Service Definition Framework (SDF). This service is run through an integration agent, which reads or posts the data.
IBM Order Management, which is OMS on cloud, supports asynchronous integration through both REST services and WebSphere MQ. With IBM Order Management access, you get access to the WebSphere MQ service as well. To know more about the supported queue operations for IBM Order Management, please visit adding queues and managing queues. Though WebSphere MQ is the natural choice for asynchronous integration, you may have to use a custom REST service in certain cases. For example, a client's security policy may prefer REST-based integration over queue-based integration. In such cases, for inbound integrations, you can develop and expose a REST service that internally uses the out-of-the-box WebSphere MQ service for the required asynchronous processing, whereas for outbound integrations, the service invoked by IBM Order Management is responsible for processing the request asynchronously. Please visit invoking IBM Order Management REST services and setting required properties for details on these topics.
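To make the inbound pattern concrete, the following is a minimal sketch (not an out-of-the-box OMS artifact) of a custom JAX-RS endpoint that accepts a payload and hands it off to a WebSphere MQ queue over JMS. The resource path and JNDI names are assumptions for illustration; use the ones configured for your instance.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;
import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

@Path("/inbound/orders")
public class InboundOrderResource {

    @POST
    @Consumes("application/xml")
    public Response enqueue(String payload) {
        try {
            InitialContext ctx = new InitialContext();
            // JNDI names below are placeholders for your configured MQ resources
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/OMSConnectionFactory");
            Queue queue = (Queue) ctx.lookup("jms/OrderInboundQueue");
            Connection conn = cf.createConnection();
            try {
                Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(queue);
                producer.send(session.createTextMessage(payload)); // store now, process later
            } finally {
                conn.close();
            }
            // 202 Accepted tells the caller the request is queued, not yet processed
            return Response.accepted().build();
        } catch (Exception e) {
            return Response.serverError().entity(e.getMessage()).build();
        }
    }
}
```

The integration agent configured against the queue then picks the message up and processes it, so the REST contract with the external system is preserved while the actual processing stays asynchronous.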
As far as synchronous integration is concerned, IBM Order Management supports integration with external systems through REST APIs/services for inbound flows. For outbound flows, the external system's services can be called through custom code. Depending on the type of external service (REST, SOAP web service, EJB web service, etc.), you can develop a custom utility to be used while invoking external synchronous services. Further, to make the custom utility reusable, you can parameterize it (with parameters like URL and request method).
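As a sketch of such a parameterized utility for the REST case (the class and method names are ours, not an OMS API; SOAP and EJB services would need their own variants):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ExternalServiceInvoker {

    /** Invokes an external REST service synchronously and returns the response body. */
    public static String invoke(String serviceUrl, String method, String body,
                                int timeoutMillis) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(serviceUrl).openConnection();
        conn.setRequestMethod(method);          // e.g. "POST" or "GET"
        conn.setConnectTimeout(timeoutMillis);  // fail fast on connection problems
        conn.setReadTimeout(timeoutMillis);     // bound the wait for a response
        conn.setRequestProperty("Content-Type", "application/json");
        if (body != null) {
            conn.setDoOutput(true);
            try (OutputStream out = conn.getOutputStream()) {
                out.write(body.getBytes(StandardCharsets.UTF_8));
            }
        }
        int status = conn.getResponseCode();
        InputStream in = status < 400 ? conn.getInputStream() : conn.getErrorStream();
        StringBuilder response = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(in, StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                response.append(line);
            }
        }
        if (status >= 400) { // surface failures so the caller can handle or reprocess
            throw new IOException("External service returned HTTP " + status + ": " + response);
        }
        return response.toString();
    }
}
```

A single utility like this keeps timeout and error handling consistent across all outbound synchronous calls; only the URL, method, and payload change per integration.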
The following figure depicts a sample IBM Order Management integration with an external system:
The sample integration depicted in the figure works as described below:
- OMS Asynchronous Outbound – On an appropriate event in IBM Order Management, the data to be sent to the external system is posted to a queue internal to OMS. An OMS integration agent reads the data from the queue and processes it (through an "External Service Request") by either invoking an external service (a SOAP web service or REST service) or directly putting the message into an external queue. If there is any error, the error is reprocessed later to send the request again.
- OMS Asynchronous Inbound – On an appropriate event, the external system puts a message (with the required data) in an IBM Order Management queue. An OMS integration agent reads the data from the queue and processes it. If there is any error, the error is reprocessible.
- OMS Synchronous Outbound – IBM Order Management invokes the external system's service synchronously, with an appropriate timeout. The external service can be a SOAP service, REST service, or EJB web service. OMS receives the response from the external service and handles both success and failure scenarios.
- OMS Synchronous Inbound – The external system invokes an IBM Order Management REST service, receives the response from OMS, and handles both success and failure scenarios.
You need to perform a whitelisting activity to configure mutual trust between the systems being integrated. That is, the IP addresses of the systems that will send data to IBM OMS need to be whitelisted, because OMS will process only data sent from whitelisted IP addresses. Similarly, as required, OMS IP addresses need to be whitelisted at the other end.
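For illustration only, this is roughly what IP allow-listing looks like on the external system's side. The OMS-side whitelisting itself is configured by the cloud team, not written as custom code, and the addresses and class name below are placeholders.

```java
import java.io.IOException;
import java.util.Set;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class IpAllowListFilter implements Filter {

    // Placeholder addresses: replace with the published OMS egress IPs for your environment
    private static final Set<String> ALLOWED = Set.of("198.51.100.10", "198.51.100.11");

    @Override
    public void init(FilterConfig config) { }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        if (ALLOWED.contains(req.getRemoteAddr())) {
            chain.doFilter(req, res); // trusted caller, continue processing
        } else {
            ((HttpServletResponse) res).sendError(HttpServletResponse.SC_FORBIDDEN);
        }
    }

    @Override
    public void destroy() { }
}
```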
Considering all the systems with which IBM Order Management is being integrated, you also need to analyze the types of access each external system requires. Based on the analysis, you should design and configure appropriate "integration user groups" along with the "integration users". Finally, you need to configure API security to secure access to APIs and services.
Exception handling in IBM Order Management can be implemented as required; we have not experienced any fundamental difference in the way integration-related exceptions are handled in IBM Order Management compared to on-premise deployments.
OMS asynchronous integration allows reprocessing errors through both the Exception Console user interface (UI) and the API. To enable reprocessible exceptions, configure the asynchronous components in the asynchronous SDF service as reprocessible in case of exceptions (by selecting the Is Reprocessible checkbox). With this, when an error occurs during processing, the error is visible in the Exception Console; if required, you can review the error there and resend it for reprocessing after fixing the cause. For example, if an item feed to OMS fails for a few items due to locking on an OMS table, the failed inputs can be reprocessed from the Exception Console UI or through custom code. There can be a number of reasons for integration errors – JMS is down, a network error, a timeout, etc. Depending on the scenarios you want to handle, you can reprocess the exceptions through custom code.
For synchronous inbound integrations, you need to model appropriate errors/exceptions (code as well as description) and share them with the external system. For outbound integrations, you need to handle the possible errors from the external system.
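The modeled error contract can be as simple as a stable code plus a human-readable description; the class and field names below are illustrative, not a prescribed OMS structure:

```java
/** A minimal error contract shared with external systems. */
public class IntegrationError {

    private final String errorCode;        // stable, agreed with the consumer, e.g. "ORD-4001"
    private final String errorDescription; // human-readable explanation for logs and support

    public IntegrationError(String errorCode, String errorDescription) {
        this.errorCode = errorCode;
        this.errorDescription = errorDescription;
    }

    public String getErrorCode() { return errorCode; }
    public String getErrorDescription() { return errorDescription; }
}
```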
The external system with which IBM Order Management is being integrated can be on cloud or on premise. While implementing the integration, the external system's deployment model does not matter from the OMS point of view; the tasks on the OMS side (like whitelisting the external system's IP addresses) remain the same.
As far as the integration approach is concerned, considering the relatively large number of systems in commerce enterprise architectures, it is preferable to integrate IBM Order Management with external systems through an ESB and API gateways.
For more information about integrating IBM Order Management with external systems, you can visit this link.
This is what we have today on integrating IBM Order Management with external systems. With this, we end our blogs in the OMS-on-Cloud stream. Also, we are planning another series around specific OMS integration scenarios, like Integrating with ERP for Order to Cash and Common Marketplace Integrations. Stay tuned for our next blog!
Following our second article, on Developer Toolkits, this is the third article in the OMS-on-Cloud stream of the WCE Practitioners Lounge.
By going through this article, readers can expect to learn how to install and deploy customizations and configurations for IBM Order Management on cloud.
Building and Deploying Custom jar Files
After the developers complete the coding, they check the code in to the project code repository. Project-specific automated scripts build the custom jar files from the checked-in code. To deploy the built jar file, as the first step, place the file in the drop box indicated by the devops team. For accessing the drop box using SFTP tools, the devops team shares the URL and port along with the key file.
To deploy the custom jar file:
- Log in to the IBM UrbanCode Deploy (UCD) tool
- As shown below, go to Components and click the component that has been created for your project to deploy custom jars
- Say you built the file /opt/custom/Custom17.1.jar. Go to the Versions tab and check whether Custom17.1.jar appears in the list of jars.
- If not, as shown below, click 'Import New Versions' and update the list with the latest version that was built.
- Go to the Applications tab and click your order management system (OMS) application environment (where you want to deploy this jar). Click the play button. This opens the Run Process popup of the UCD tool. As shown below, select Build Customized Runtime as the process to run.
- Once the process is submitted, you should see the success messages shown below
- After building the customized runtime, the deployed OMS application needs to be refreshed. To do that, open the Run Process popup.
- As shown below, select the 'Update OMS Application' process from the Process dropdown, click 'Choose Versions' to choose the version that was built by the Build Customized Runtime process, and then click Submit.
Exporting CDT and Viewing Differences
To deploy OMS configurations, you first need to export the CDT from the master configuration (MC) environment. To export the CDT, open the Run Process popup and run the 'Export CDT XML' process. The CDT export will fail if ydkprefs.xml is not defined correctly, so before running the process, make sure that ydkprefs.xml is up to date. If required, you can check the details of the 'Export CDT XML' process using the 'Download All Logs' option (the downloaded zip file contains stdout.txt, which has the failure details if the CDT export fails), or the download log options on each executed step.
After exporting the CDT successfully, the next step is to compare the exported CDT with its previous versions. The comparison ensures that only intended changes are included in the latest CDT. To do that,
- Open the Run Process popup from your development OMS application.
- As shown below, click the Components tab, select your OMS config component, and click the Versions tab. Select the latest CDT version and then click the Compare link.
- This opens the following popup, where you select the version with which the comparison should be made:
- Once you click the Submit button on Compare Versions, you should see the differences, as shown below.
- Click Compare to see the details of the differences, as shown below.
After exporting the CDT XMLs with the correct and complete changes, the next step is to import them into your development environment. For that, open the Run Process popup from your development OMS application. As shown below, select 'Import CDT XML' from the Process dropdown:
Click 'Choose Versions'. This opens a popup, as shown below, listing the available CDT versions. Current Environment Inventory shows the last CDT applied on the environment. Click the Add button against the MC version you want to deploy.
Restart the application server after the CDT process. For more information on restarting the application server, you can refer to this link.
Starting and Stopping Agent and Integration servers
IBM UCD provides processes for starting and stopping Agent and integration servers. For more information on the same, please refer to the information here.
After starting a server, you can verify whether the server has started successfully. This can be done by checking the logs (through the export logs option in IBM UCD) or by checking the System Management Console, which is accessible from the OMS application console.
Exporting and Archiving logs
Using UCD, you can export application, agent, and integration server logs. Please refer to this page for more information.
Collected logs might span several days, so the log file may be large. If required, you can ask the devops team to reduce the period of time for which logs are collected.
UCD also provides an option to archive the logs on the Run Process popup, as shown below.
This is what we have today on installing and deploying customizations and configurations for OMS on cloud. We will share more experiential information in our upcoming blogs in the OMS-on-Cloud stream. Stay tuned for our next article in the series, which will be on Integration with External Systems!
Following our first article, on IBM UrbanCode Deploy, this is the second article in the OMS-on-Cloud stream of the WCE Practitioners Lounge.
By going through this article, readers can expect to learn what the IBM Order Management developer toolkits are, how to access them, and a few of our learnings about the new Integrated Development Toolkit.
While implementing IBM Order Management, developers make both code and configuration changes in the development environment before promoting the changes to higher environments like QA and UAT. To help developers, a developer toolkit is provided. The toolkit can be downloaded from the UrbanCode Deploy (UCD) dashboard and installed on a local machine or VM to have an exclusive environment for development.
Starting with IBM Order Management version 18.2, there are two types of developer toolkits:
- Developer Toolkit
- Integrated Development Toolkit
Before 18.2, only IBM OMS Developer Toolkit was available.
The OMS Developer Toolkit has IBM DB2, IBM WebSphere Application Server, and IBM MQ as prerequisites. After following the toolkit installation steps, an EAR file must be built and deployed to make the development environment accessible.
The OMS Integrated Development Toolkit is Docker based, with Docker and Docker Compose as the prerequisites. There is no need to install a database, application server, or messaging server; all of these come bundled with the toolkit. After completing the installation through Docker, the development environment is available. There is no need to build and deploy the EAR either, as that is part of the steps Docker executes!
Downloading Developer Toolkits
Perform the following steps to download IBM Order Management Developer Toolkits from UCD. For more information on UCD access, please refer to On-boarding and Access section here.
- Select OMS-Application from the list of available applications in the Application tab of UCD
- Go to the Environment tab, and click the play button against the OMS DEV environment
- On the popup screen, as shown in the screenshots below, select 'Extract IBM OMS Developer Toolkit' from the Process dropdown for the Developer Toolkit, or 'Extract IBM OMS Integrated Development Toolkit' for the Integrated Development Toolkit
- If you want to include existing customizations, select the 'Include Customization' checkbox
On clicking Submit, the 'Extract IBM OMS Developer Toolkit' or 'Extract IBM OMS Integrated Development Toolkit' process starts. You can see the progress in UCD, as shown below.
After the process completes, the toolkit zip file is available at the FTP location for download.
Installing Developer Toolkit
This is the non-integrated developer toolkit. After meeting the prerequisites mentioned above and unzipping the downloaded toolkit (devtoolkit.zip), execute the following steps:
- Install the JDK, and set the environment variables accordingly
- Rename devtoolkit_setup.properties.sample to devtoolkit_setup.properties in the extracted devtoolkit folder, and set the mandatory properties
- Run the devtoolkit_setup script to install the toolkit
- Build the EAR file and deploy it on the application server
You can find detailed information on the steps above at the Set up the IBM Order Management developer toolkit environment section.
Installing Integrated Development Toolkit
This is the Docker-based toolkit. The version 18.2 Integrated Development Toolkit is supported on Linux (CentOS, RHEL). The following commands can be used on a CentOS 7 VM (recommended) to install the prerequisites.
- yum install docker
- systemctl start docker
- systemctl status docker
- systemctl enable docker
Docker Compose Installation
- sudo curl -L https://github.com/docker/compose/releases/download/1.21.2/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
- sudo chmod +x /usr/local/bin/docker-compose
To be able to unzip the toolkit package, install the unzip utility by running the "yum install unzip" command.
After installing the prerequisites, execute the following preparation steps:
- Upload the devtoolkit_docker.zip file to the CentOS VM.
- Run the following command to extract the toolkit setup files: unzip devtoolkit_docker.zip && cd compose && chmod +x *.sh
- In the compose folder, the om-compose.properties file holds the default values of the properties used during installation; change the properties based on your needs.
The following options, available in the executable file (om-compose.sh), are useful for installing the IBM Order Management Integrated Development Toolkit:
- setup <optional:cust-jar>: Set up a fresh Docker-based integrated OM environment
- setup-upg <optional:cust-jar>: Upgrade an existing environment to new images
- update-extn <cust_jar>: Update the OM environment with the latest customization jar
- extract-rt <extract_dir>: Extract a copy of the runtime directory to your host machine
- start|stop|restart: Start/stop/restart your Docker environment
- wipe-clean: Wipes clean all your containers, including volume data
- license: Shows the license information
- update-mq-bindings <queue_name>: Update your MQ bindings with the queue
Execute the following steps to install the Integrated Development Toolkit:
- Create the folder specified in the MQ_JNDI_DIR property. The default value is /var/oms/jndi; if you are not changing the default, create the /var/oms/jndi folder.
- Go to the folder where the toolkit zip file was extracted, and then go to the compose folder
- Run the following command to start the installation: ./om-compose.sh setup
On installation, four Docker containers are created: OMS Runtime, DB2, Liberty, and MQ. There is no need to build smcfs.ear and deploy it (the installation steps take care of this). You can check the logs of the Liberty container to verify the deployment status of smcfs.ear. Once the deployment is complete, you can access the OMS environment at the following URL: http://<IP>:<Port>/smcfs/console/login.jsp
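If you want to script the post-installation check, a minimal probe of the login URL above is enough. The host and port below are placeholders for your VM's values.

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class DeploymentProbe {
    public static void main(String[] args) throws Exception {
        // Replace host and port with your VM's IP and the mapped Liberty port
        URL url = new URL("http://localhost:9080/smcfs/console/login.jsp");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(5000);
        conn.setReadTimeout(5000);
        // HTTP 200 once smcfs.ear has finished deploying; connection errors mean "not yet"
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}
```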
The 'Set up a new docker-based developer toolkit environment' section here has detailed information on the installation steps mentioned above.
Post Developer Toolkit Installation
After installing the developer toolkit, the development environment is available. However, development activities should be performed on top of the latest customizations and configurations. For this, you will have to generate CDT XMLs from UCD and then import these XMLs into your development environment. Also, export the existing customizations from the IBM Order Management environment to the development environment if you downloaded the IBM OMS Integrated Development Toolkit, or if you did not select 'Include Customization' while downloading the IBM OMS Developer Toolkit. Once the development environment is in sync with the existing customizations and configuration, you are all set for development!
This is what we have today on the developer toolkit for OMS on cloud. We will share more experiential information in our upcoming blogs in the OMS-on-Cloud stream. Stay tuned for our next article in the series, which will be on Deploying Customizations!
Authored by: Raghuveer P Nagar, Senior Architect for IBM Order Management
By going through this article, readers can expect to learn what IBM UrbanCode Deploy (UCD) is, why it is needed for IBM Order Management deployments, how to access it, how it helps with day-to-day IBM Order Management deployments, and a few of our learnings as an implementation team.
IBM UrbanCode Deploy
IBM UrbanCode Deploy is a tool for automating application deployments (in both on-premise and on-cloud scenarios). It simplifies and standardizes the application deployment process across environments, throughout the development and maintenance of the application. Additionally, it offers clear visibility (into what is running where, differences across environments, who changed what, etc.) and rollbacks of deployed applications.
IBM Order Management
IBM currently offers its multi-channel commerce Order Management System (OMS) as both on-premise and on-cloud software. The on-premise software is commonly known as Sterling Order Management. IBM Order Management is the software-as-a-service (SaaS) offering: a cloud-based OMS running on the IBM SoftLayer infrastructure. For managing IBM Order Management deployments, UCD is used as the deployment automation and self-service tool.
On-boarding and Access
The IBM Order Management environment cannot be accessed directly. To access the environment, that is, to access UCD and manage the OMS application, you first need to connect to a jump host. The jump host connection provides the ability to get through firewalls that would otherwise block the services. Refer here for the detailed process of connecting to the IBM Order Management environment using the jump host.
Once you are connected to the IBM Order Management environment, you can access UCD. In addition to deploying the application, UCD is used throughout the implementation process, such as for deploying changes and fixes into the environment. Refer to this link for more information on UCD access.
After accessing and logging in to UCD, you need to download the developer toolkit. The developer toolkit is a replica of the IBM Order Management environment, which enables you to make the required customizations in your development environment. Go here for the step-by-step process of downloading the developer toolkit from UCD.
Key OMS Deployment Features in UCD
UCD provides insights into IBM Order Management deployment audits, status, and history. It helps deploy the IBM Order Management application quickly and provides a wide variety of process management options for administering deployments. The following are a few of the key OMS administration processes that can be executed through UCD:
- Deploying custom code, like deploying custom jar files (say, the jar files having custom User Exit implementation and classes for Web Services)
- Exporting Configurations/CDT, which includes exporting CDT from one environment (say, the master configuration environment) and comparing it with another environment (say, the development environment)
- Importing Configurations, that is, applying CDT exported from another environment
- Starting and stopping application server
- Managing Agents, including starting, stopping and triggering agents
- Managing Queues, such as configuring a JMS queue and clearing contents of a Queue
- Managing OMS Application Server Logs, like exporting and configuration driven archiving of the logs
- Installing 3rd Party Certificates, for various integrations and implementation features
- Importing Customer Overrides (in customer_override.properties file) to the database
The processes mentioned above remain the same for OMS-related product offerings, like IBM Call Center for Commerce, as well.
While working on UCD for IBM Order Management, we have experienced many common application deployment and management issues. The following is a list of a few of those, with tips based on our experience:
- Automatic Process Monitoring – UCD has an email configuration to report the status of a particular process. For example, it can send emails on agent startup failures and build failures. As one of the first things, ensure that this configuration is complete.
- Application/Code Build Failures – While building custom code in the development environment, you may get errors like "Not able to access the file" and "Not enough memory" (in the build logs), say when the build involves compiling custom Java classes. It is important to remember that resolving these issues needs access to the installation directory, which only the cloud support team has. So, raise a cloud support ticket when facing such issues.
- Infrastructure Related Errors – While running a process, an infrastructure issue may be reported, say "Database connection parameters incorrect". This happens during the time window when the password has been reset on the database server but an implementation script runs before the same password is updated in the deployed application. A cloud support ticket should be raised in such scenarios as well.
- UCD Options – For administrators who are new to UCD, it is a good idea to spend some time becoming aware of the available OMS deployment options by exploring the various user interface (UI) options of UCD. This may sound unusual, but think of the productivity lost when you reach out to cloud support or comb through the documentation only to learn that the option was already available on the UCD UI!
This is what we have today on the IBM UCD tool for OMS deployments on cloud. We will share more experiential information in our upcoming blogs in the OMS-on-Cloud stream. Stay tuned for our next article in the series, which will be on Developer Toolkit.
Welcome to the WCE Practitioners Lounge
I am excited to announce the launch of the WCE Practitioners Lounge. This is a closed group of highly skilled senior consultants and architects with decades of experience configuring, implementing, performance testing, and assuring complex commerce, marketing, and supply chain solutions for customers across the world.
What can you expect?
This community will share best practices, recipes, troubleshooting guides and sample code useful for solutions implemented across various Watson Customer Engagement product lines – Watson Commerce, Watson Marketing and Watson Supply Chain. You can expect blogs and forum discussions across a variety of topics covering not just IBM products, but also associated 3rd party offerings, underlying technologies and stack offerings.
What is the take away for readers?
Readers will get to know how the latest features are used in real projects, how requirements are addressed by various products/offerings, how to deal with devops, how to tune performance, and much more. So, if you are a developer, technical lead, or an architect, this will empower you with knowledge that you can use in your very next engagement. Of course, many readers will relate to the things we share in the blog and can reflect back on their own projects. Note that if you are more interested in the strategic "why" part of what we are talking about, you should head to our sister blog at the Thinkers Lounge group on LinkedIn.
How can you participate?
To begin with, please share your comments, include our blogs in your social circle, suggest further topics, and contact authors directly for discussions. We plan to open up forums for deep-dive discussions. At this time, authors are limited to a hand-picked set of practitioners from within the IBM WCE BU. However, we will re-evaluate our charter every quarter to ensure the best knowledge is shared through this channel.
Who are the current authors?
Lead editors and authors are (in alphabetical order): N Krishnan, Raghuveer Nagar, Sudhanshu Shekar Sar, and Siddharth R Rao. They have a team of authors working with them. Each blog will call out contributions and link to the contributors' profiles.
Look forward to the first blog, on UrbanCode-based deployments, on 25th June. Further blogs should appear every other Monday thereafter.
(Executive Architect and Distinguished IT Specialist, Watson Customer Engagement, IBM India)
In Part 1 of this series, I introduced the topic of security around eCommerce from the perspective of being in a hot air balloon (which would earn you a 2,000 lifetime floors badge if you wear a Fitbit). You will want to read Part 1 to get a glimpse of the CISO's role in your organization's security and an introduction to IBM's security services. However, if you want to dive directly into securing your WebSphere Commerce application, read on.
The WebSphere Commerce security model consists of setting up access control through authentication, authorization, and policies. It provides support for some of the main security standards applicable to the eCommerce industry. It also covers managing against common attack types. However, as outlined in Part 1, malicious attacks continuously innovate, and therefore the business needs to evaluate its advanced threat management capability on an ongoing basis.
In this Part 2 of the series, I am going to cover four areas from the application perspective:
- Access control
- Hardening against common attack types
- Data security
- Security Standards
Authentication: WebSphere Commerce issues an authentication token. This token is associated with the user on every subsequent request after the authentication process. Account policies for passwords, account lockout, and timeout govern authentication.
Authorization: Access control policies are leveraged for enforcing authorization in Commerce.
Let us look at some common access control areas that are prominent from an eCommerce perspective.
- Logon - Enforcing too many failed attempts: Set up an account lockout policy for different user roles in WebSphere Commerce. This enables enforcement of a lockout in case there is a malicious attempt to break a password. You can also add deterrents by setting up delays between consecutive unsuccessful attempts.
- Logon - Prevent privileged users from logging in externally: It is desirable to disallow external access for privileged users of the WebSphere Commerce application, like the site administrator or a customer service representative, primarily so that a hacker cannot gain privileged access and act with malicious intent. WebSphere Commerce V8 Mod Pack 1 provides a security configuration in wc-server.xml that can be enabled, after which custom headers can be used to define the roles for which logon is disallowed. Further customization is possible to address specific logon security needs. For versions earlier than this feature (v7, and v8 before Mod Pack 1), the article provides sample code to achieve the functionality.
- Securing the WebSphere Commerce search server: This can be achieved at two levels: WebSphere Application Server (WAS) administrative security or WAS application security. WAS administrative security is recommended, as application security is more expensive in terms of performance.
Protecting custom enterprise beans, data beans, controller commands, and views: Primary resources should be protected. If a user is allowed to access a primary resource, the user is allowed to access its dependent resources.
Access Control and Data Beans - The following article discusses the best practices for access control with WebSphere Commerce and how to use Data Beans securely
Hardening against common attack types - A view of the common attack types and guidance on handling them
- Changing URL params to break the system: WhiteList data validation on URLs is disabled by default. When enabled, it restricts URL execution so that only URLs conforming to the configured regular expressions are processed. This means you can block malicious attempts to tamper with URL params such as store-id, catalog-id, and so on (a minimal sketch of the idea follows this list).
- Cross-site scripting protection is enabled by default for the Stores web module. It is important that all parameters are encoded by developers during the development phase and tested thoroughly. The prohibitedChars rule is provided by default; however, it cannot be comprehensive. The blacklist is used as a fallback for cases where the store does not encode output properly (c:out). The problem with blacklists is that hackers keep finding ways to bypass them by using different encodings and escape characters; it is not possible to come up with a completely robust blacklist solution (OWASP calls blacklists "fragile"). This is also more complex to apply to REST bodies. It is recommended to review the OWASP cross-site scripting material during project implementation. The project management and security architect should provide guidelines on preventing, protecting against, and testing for cross-site scripting.
- Mitigating the threat of denial-of-service attacks: It is recommended to set boundaries for product search results by specifying a maximum page size and result size. Similarly, you can set allowable ranges for other business objects, like the shopping cart.
- Enable clickjacking protection – Clickjacking is when an attacker tricks you into believing that you are on a particular site/page while redirecting you to do something else. This can be achieved via several techniques; however, we will focus on the mechanism where iFrames are exploited to overlay the desired site. The article talks about the X-Frame-Options header and Content Security Policy.
- Strengthen WebSphere Commerce database encryption to reduce the chances of a successful brute-force attack by migrating from Triple DES to AES-128 encryption
- Data security practices for integrating the eCommerce system with an Order Management System
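As a minimal sketch of the WhiteList validation idea from the first item above (this only illustrates the concept; the actual feature is configured in WebSphere Commerce rather than hand-coded, and the pattern below is an assumption):

```java
import java.util.regex.Pattern;

public class ParamValidator {

    // Assumed rule: store IDs are purely numeric, up to 10 digits
    private static final Pattern STORE_ID = Pattern.compile("\\d{1,10}");

    public static boolean isValidStoreId(String candidate) {
        return candidate != null && STORE_ID.matcher(candidate).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidStoreId("10001"));             // true: conforms, processed
        System.out.println(isValidStoreId("10001;DROP TABLE"));  // false: rejected outright
    }
}
```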
1 - Use of Cryptographic Algorithms and Key Lengths: NIST announced its Special Publication (SP) 800-131A in 2011, which recommends transitioning the use of cryptographic algorithms and key lengths.
The standard requires adherence to the following:
- Digital signatures must use at least a SHA-2 hashing algorithm; WebSphere Commerce v8 uses SHA-2
- Cryptographic keys must adhere to a minimum key strength of 112 bits. WebSphere Commerce provides a detailed procedure for migrating encrypted data in the database to AES 128-bit encryption (see the sketch after these standards).
- Enable TLS 1.2 for SSL and disable protocols lower than TLS 1.2 for the web server, any integrations with LDAP, and outbound email. WebSphere Application Server must adhere to TLS 1.2
- All certificates with RSA or DSA keys must be 2048 bits or longer. Certificates with elliptic curve keys shorter than 160 bits must be replaced with longer keys. All certificates must be signed by an allowed signature algorithm, for example SHA-256, SHA-384, or SHA-512
2 - Federal Information Processing Standards publication 140-2 (FIPS 140-2) covers the security standards required for cryptographic modules. The WebSphere Commerce documentation describes the steps needed for Commerce to run on a WebSphere Application Server and on HTTP servers that are in FIPS 140-2 mode.
3 - The Payment Card Industry (PCI) Data Security Standard (DSS) - The PCI DSS version 3.0 standard lists 12 requirements that retailers, online merchants, credit data processors, and other payment-related businesses must implement to help protect cardholders and their data. The requirements include technology controls (such as data encryption, user access control, and activity monitoring) and required procedures. This must be implemented when cardholder data resides with the business. The WebSphere Commerce Knowledge Center documents the procedure ONLY for making the WebSphere Commerce application PCI compliant; there are many other aspects that the business needs to take care of as part of compliance.
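To make the SHA-2 and minimum-key-strength requirements above concrete, here is a hedged sketch using the standard java.security and javax.crypto primitives; it only demonstrates the algorithms named by the standard, not the WebSphere Commerce data migration procedure itself:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class CryptoBaseline {
    public static void main(String[] args) throws Exception {
        // SHA-2 family digest (SHA-256), as required for signatures and hashing
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        byte[] hash = sha256.digest("order-payload".getBytes(StandardCharsets.UTF_8));
        System.out.println("Digest length: " + (hash.length * 8) + " bits"); // 256

        // AES-128 key: comfortably above the 112-bit minimum key strength
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);
        SecretKey key = keyGen.generateKey();
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key); // a random IV is generated automatically
        byte[] encrypted = cipher.doFinal("sensitive-value".getBytes(StandardCharsets.UTF_8));
        System.out.println("Encrypted bytes: " + encrypted.length);
    }
}
```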
The security of eCommerce is a broad topic that requires the Chief Information Security Officer (CISO) of the organization to work at various levels. This was covered in Part 1.
This article is an attempt to help the reader get a glimpse of the application security aspects of a commerce site. I have tried to categorize security from the viewpoints of access, vulnerabilities, and data protection, and to map them to WebSphere Commerce capabilities.
I have also created a template spreadsheet that you can use in your project as a starting point to track the various areas, their current status, and the desired status. The template should be enhanced/tailored to suit your security posture.
2016-July-WCS-Security-Template-v1.xlsx – for project managers to capture actions. Do provide your feedback if you use it, so it can be enhanced further.
Who owns and understands the security posture for the organization?
I will start with an assumption: if you are a WebSphere Commerce architect, specialist, or developer, you will agree with me that it would be foolish to attempt to answer the loaded question – 'Is our eCommerce site safe?'
That question falls under the purview of a Chief Information Security Officer (CISO). If you are interested in a sample of the job responsibilities of a CISO in a commerce organization, have a look at this LinkedIn entry.
The infographics on cyber-attacks from the Threat Intelligence Report
The IBM X-Force Threat Intelligence Report, published by IBM, shows the 2015 security incidents; retail was the second most attacked industry.
The components of a commerce shop that would typically need a CISO’s attention
There are several areas of a commerce shop that need to reach a desired level on the security maturity model.
- To start with, there is a need to have a governance structure in place. This includes the processes established in order to manage and report on compliance
- Focus on the people aspect of the business, which covers identity and access management
- Infrastructure security covers several pieces: networks, servers, and endpoints
- Protection of data is key, and this may entail industry standards and compliance
- Application security includes all the different applications; the security levels will differ based on their exposure to the internet
How does IBM help companies and businesses be secure (and this applies to eCommerce)?
IBM has a strong portfolio of security business and product offerings. There is also an online set of quiz questions to help an organization evaluate its understanding of security risk.
The following areas of focus will help security stakeholders engage with IBM:
Your organization's security program - Understanding risks, establishing the right policies and programs, and having a strong, cohesive team to implement changes are critical to building a next-generation security environment. IBM® Security can help you design an integrated framework to simplify the challenge of securely protecting the enterprise.
Tackle Advanced Threats – You are uncertain about advanced threats and how to be prepared to manage them. IBM® Security can help you view the security landscape with a wide-angle lens to thoroughly understand the origins and distinctive features of attackers. Fraud, endpoint, and data protection; security intelligence and analytics; incident response – they are all part of a comprehensive approach that uses extensive research and detective work to pinpoint, outsmart, and stop attackers.
Protect critical assets – You have an internet-facing business, with potentially millions of customers, partners, and vendors accessing the systems. IBM® Security can help you leverage analytics and insight for rapid detection and response to protect critical systems and records.
Further, IBM provides services in which security experts assess all aspects of an organization against a security maturity model. This allows the CISO to understand the organization's security posture and the desired posture for each of the components mentioned above, like people, data, and infrastructure. This is an important step in securing the organization and the business against cyber security threats.
Security of the WebSphere Commerce empowered storefront
The CISO’s governance and strategy will form the backbone of the eCommerce security. This will ensure that each of the applications follow the guidelines and demonstrate adherence to the security framework.
The eCommerce program manager will own the security aspects of the WebSphere Commerce implementation. It is therefore the individual responsibility of every member involved in the implementation to understand the application guidelines for WebSphere Commerce and consider them during the design, development, testing, and deployment phases.
The product Knowledge Center has a dedicated topic on 'securing' that details authentication, authorization, and session management. There is also a section covering the National Institute of Standards and Technology (NIST) security standards, which provide guidance on the use of stronger cryptographic keys.
The Payment Card Industry (PCI) Data Security Standard (DSS) is applicable if your eCommerce system captures and holds credit card and payment-related data. The product documentation summarizes the specific configurations required in a WebSphere Commerce implementation in order to comply with PCI DSS.
Having pointed readers to the Knowledge Center for much of the product documentation on the topic, I am also writing a Part 2 of this series to cover a simplified view of the WebSphere Commerce product's security considerations. Look out for the second part for quick and clear insight into the areas of access control, hardening against common attack types, and data security.
Credit: Thank You Sreekanth for your inputs and review on the topic.
Sreekanth Iyer is an Executive Architect with the IBM Cloud (CTO Office) team and works on defining the technical strategy and development of IBM Cloud Security. He has over 20 years of industry experience and has led several client solutions for Telco, Electronics, E&U, Govt, BPO & Banking industries. He is an Open Group Certified Distinguished Architect, IBM Master Inventor, Certified Ethical Hacker and Member of IBM Academy of Technology.
WebSphere Commerce Development Environment
The WebSphere Commerce developer environment comes with Rational Application Developer, a WebSphere Application Server runtime with an embedded web server, and an Apache Derby database. While options are available to switch your database to DB2 or another RDBMS, if you go with the default as a developer, you will likely want to access the database using a client with some GUI capabilities. So what are the options?
- Go with what is available in the product – either have your server running and access the DB using the built-in web interface, or use the interactive SQL scripting tool "ij" for offline access; neither of which is particularly attractive for all uses. For instance, browsing the available tables and their schema information, accessing the history of previously executed commands, and exporting query results are all cumbersome. On top of this, the current setup of Apache Derby allows only one active connection at a time – so if you are connected through ij, your server cannot even start, and if your server is running, ij will not be able to connect.
- Switch to a full-featured DBMS that has proprietary or open source tools available. For example, DB2 comes with a control center that allows you to connect to and browse multiple local and remote database schemas. You may also use RAD's "Database Development" perspective to create and manage connections to local and remote databases.
- Setup Rational Application Developer to connect and browse your derby database
- Setup a custom tool (such as SQuirreL) to connect and browse your derby database.
In this blog I shall cover the last two options – setting up Rational Application Developer to connect to and browse the included Apache Derby database, and then the same using SQuirreL.
Steps for connecting to Apache Derby using Rational Application Developer.
Launch your WebSphere Commerce toolkit
Click on Window > Open Perspective > Other. In the resulting pop-up, select “Database Development” and click OK
This opens the database development perspective with a data source explorer on the left side.
Right click on Database Connections and choose New.
In the Connection Parameters window, choose Derby in the "Select a database manager" section and type the following information in the "Properties" section:
For example, C:\WCDE80\db\mall and jdbc:derby:C:\WCDE80\db\mall respectively.
(Note: A username is not needed for the default database packaged with the WSC toolkit, so this string can be anything.)
Next, in the JDBC driver section, choose "Derby 10.0 – Other Driver default" from the drop-down. Then click the button with three dots on the right to open the "Specify Jar List" window. Here, if the list is empty, click "Add jar/zip"; if a jar is already present, choose it and click the "Edit jar/zip" button. Then browse to and choose "derby.jar" from your <TOOLKIT>\lib\ folder. Tip: You may choose any of the embedded driver options available, since it is really the JAR configured against that driver that matters. Choosing the right version of the jar is important, as the tool will not allow connecting to databases created with a newer version of Derby using older drivers – hence using the driver jar present within the toolkit keeps the versions the same.
Click OK to come back to the Connection Parameters window. Here, use "Test Connection" to test the connection – this should be successful. If not, check the URL and make sure the database is connectable from the command line using the ij interface.
Click Finish. Now you should see the connection in the "Data Source Explorer" section on the left. You can right-click it and choose Connect to connect and browse the schema. You can also right-click the connection and choose "New SQL Script" to type in your own SQLs, then select and execute them. You can keep these handy script files around for when you need them. You can also extract and load data from one database to another (works best if done table by table) – so if you need to get data for specific tables, this is handy to have. And, if at all you need to develop new access beans, they are just a right-click away too.
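The same connectivity check can also be done programmatically – a small sketch with derby.jar on the classpath, using the database path from the example above (adjust for your toolkit location):

```java
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;

public class DerbyConnectionCheck {
    public static void main(String[] args) throws Exception {
        // The embedded driver allows a single active connection: stop the WC server first
        try (Connection conn =
                 DriverManager.getConnection("jdbc:derby:C:\\WCDE80\\db\\mall")) {
            DatabaseMetaData meta = conn.getMetaData();
            System.out.println("Connected to " + meta.getDatabaseProductName()
                    + " " + meta.getDatabaseProductVersion());
        }
    }
}
```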
Steps for connecting to Apache Derby using SQuirreL.
This one mostly follows the steps documented in Using SQuirreL SQL Client with Derby.
The only thing we need to make sure of is choosing the right version of derby.jar in the Extra Class Path of the "Apache Embedded Driver" configuration:
A note here: Do just the embedded driver setup; that works with about 10 minutes of effort. Setting up the client driver requires a Derby Network Server and some involved changes to the JDBC connection setup in our toolkit (that is also the prerequisite for allowing multiple connections to the database). It is not covered in this blog.
There you go folks – a basic "how to" blog for a change. This was triggered because a new member of my team asked "how do I connect to our project's DB locally?" and was not convinced by the options available out of the box.
WebSphere Commerce Toolkit – Virtual Machine, Virtual Hard Disk and Docker
The WebSphere Commerce toolkit is an essential development tool for customizing the WebSphere Commerce application – to extend or override business logic, customize store flow, extend tool capabilities, etc. It is built on top of Rational Application Developer, which itself uses Eclipse at its core. It provides a Derby database with the WebSphere Commerce data model bootstrapped with WSC data, and an integrated WebSphere Application Server runtime. It additionally provides the plugin extensions needed to automate some commerce-specific development tasks (JET comes to mind).
While the official system requirements are found <here>, 8 to 12 GB of HDD (SSD will be nice), a fairly powerful modern CPU, and 8GB of RAM are what I have come to expect – more if you want your operational activities to be zippy.
One of the common questions asked by IT teams is: how do I ensure all developers are able to set up and use toolkits that have the same configuration, bootstrap data, APARs, etc.? Everyone is looking for quick, error-free, and repeatable steps for enabling developers across a team to set up their toolkit and stay up to date. In this blog I present some options that I have seen or discussed, what it takes to pursue them, and the pros and cons of each. I would like to hear about other options readers may have seen, or any trouble they encountered when pursuing these options.
Option 1: Old style trial by fire – no pain no gain
Ask every developer to install the stack on their computing device. This typically involves creating a fairly detailed document with screenshots and step-by-step instructions, starting from where to download the product packages, how to run the installer, what options to choose, where to click, etc. Let us face it, this is error prone. I have seen even seasoned developers miss a step and end up with a corrupt toolkit. There are always people who seem destined to encounter exceptions and errors more frequently than others. It is also time consuming. And if for any reason the toolkit gets corrupted, it could reset the development clock.
Pros: Simple on paper – just plan toolkit setup time for each developer in your project plan. The toolkit runs on the native OS – so best performance. Also, developers will need to gain expertise on the setup – they will be forced to understand how things work under the hood.
Cons: Time consuming, inefficient, error prone, a lot of duplicated effort, backing up is tedious, troubleshooting is tedious, requires some under-the-hood knowledge.
Tips: If this is your chosen path, you can mitigate some of the risk by creating data and configuration bootstrap packages that bring up a basic environment through automated scripts instead of manual steps. For example, create a single zip file for all the configuration changes needed (ACP, search configurations, etc.) and a single script that can copy files around, run acploads and dataload, build the search index, etc. Standardize the folder paths and package versions. In fact, such a bootstrap script is recommended no matter which option you choose. There is also a "central repository" setup that saves individuals from having to download software from the IBM Passport Advantage account, by hosting it in your intranet.
Option 2: Traditional Virtual Machines – share with overhead burden
A VM promises a set-up-once, deploy-anywhere model – and to a large extent delivers on that promise, with a few caveats. You can have a seasoned developer or WSC administrator create the VM package and share it with others in the team. The team can just boot up the VM and become productive. This works for the most part – but the virtualization overhead needs extra processing power and memory, there may be network setup required, moving files from the VM to the host and to other developers requires jumping through additional hoops, and there may be licensing costs for your virtualization software and the guest operating system in addition to your employees' host OS licenses.
Pros: A create-once, deploy-any-number-of-times model; standardization is simple – just publish a new VM image. Also, it allows folks running an OS that does not support the toolkit (Mac or Linux) to run it.
Cons: Licensing overhead; execution overhead due to full virtualization – slow performance (it is slow for me even with 4GB of RAM dedicated – 6GB is when it starts breathing easy); too big (~40GB). Extra steps are needed to transfer files in and out of the virtual machine.
Tips: Even here, to roll out updates, it may be simpler to publish data or configuration load scripts and reserve publishing a new VM image for product upgrades only. Beware of your colleagues' saved passwords – you may inadvertently update repositories under their credentials and cause confusion in the project. You need 4GB of dedicated RAM for a VM to be functional – if it is less, it starts showing. Interestingly, SSD drives do not help as much in speeding up VMs as they do regular native installations – my guess is this is mainly due to VMs reserving and managing their entire chunk of disk space by themselves (this is relatively speaking – it was still faster than spinning drives).
Option 3: Lightweight virtualization – pseudo will do
A Virtual Hard Disk is probably the most popular choice I have seen in my projects over the past 2 years. VHD is a Microsoft virtual hard disk format that behaves like a mountable drive with fixed or dynamic space, typically configured with a maximum capacity. It appears as a single file until it is mounted, and is supported in all flavors of Windows. For the WSC toolkit, one can install all the product packages needed for the toolkit into the mounted drive (taking care to ensure Installation Manager's repository and license data directories point into the mounted drive) and then just distribute the .vhd file. This can be as lean as 12GB in size and can be easily shared. As long as everyone mounts it with the same drive letter, this works. It gives the benefit of running directly on the native OS together with the standardized installation and bootstrap configuration of a virtual machine.
Pros: Lightweight, easy to create, standardize and distribute, no extra virtualization costs or overheads
Cons: Works for Windows only; tied to the drive letter till death do them part. Not everything works within a self-contained VHD – DB2/Oracle DBs and web servers try to keep some important references on the C: drive or in registry entries – so this works best for an "embedded" web server with the Cloudscape DB.
Tips: Drive letter management needs some foresight if you are a partner expecting to execute multiple projects and there are only about 20 letters to play with (assuming A, B, C, D, E, and F are already taken by various drives). Installation Manager keeps some information in a data configuration folder that defaults to C: even if you install IM on another drive. This requires some tweaking of the configuration to keep Installation Manager-specific settings within the VHD-mounted drive – otherwise you may need to install some of the packages on your local computer before you can run things from the mounted VHD.
Note: I have been asked before – why not Docker? Sure, Docker works – we now have Windows Server containers available on Docker. So theoretically one can install the toolkit and all necessary packages in such a container and share it. But we would not be strictly adhering to "supported" versions, as the WSC developer edition is supported only on the Professional and Enterprise editions of Windows 7 – no mention of Windows 8.1 or 10, let alone the server editions that have Docker containers available. The other option is to have Docker images for your DB and web server – which need workarounds for VHD.
Option 4: Go solo – path for the brave
For people who understand what they want to do with a development toolkit and don't mind going solo, several options do exist. First, a disclaimer – IBM does not recommend any of these. If you get into trouble while doing this, IBM may provide best-effort (kind) help, but there is no guarantee of being seen out of any quagmire you work yourself into. Also, you may still want to use RAD if you do BOD, plugin, or tool development.
Now we have Docker containers for various pieces of the stack – Eclipse can run in a container, and so can DB2 and the Liberty profile of WAS. Imagine this – a Docker container running the same DB as your QA server installations of WSC. That would make it easy to extract and load all the test data for local developer use. While doing that between *nix and Windows is "fun" (tongue in cheek), with Docker you could potentially distribute the same data model and loaded data to both developers and testers! A change made in the development environment might actually work the first time in the server environment! Alright, that solves the DB. How about source control and building projects? These are folders that your favorite editor can be mapped to, with source control scripts or plugins keeping them in sync with a repository. If you primarily work in the Stores project, then no sweat. OK, now you can develop and have data – how about doing some unit testing? Well, how about a Docker container with WebSphere Application Server running in it – configured to connect to your database Docker instance! Then the ubiquitous compile and publish tasks of Rational Application Developer need to be replaced by a build-and-deploy process – similar to how it happens on a server. It is cumbersome for Java classes – but it can be simple copies for JSPs and JS files. You may also set up web server and search server containers while you are at it. All this makes it easy for you, the super developer, to manage exactly where you want to spend your resources and how you want to test. It is fun, it can be fast, and it can be daunting for the not so savvy.
Pros: The fun that mechanics get from taking apart a machine and piecing it together in new ways; the database can be shared and be exactly like the server's; the topology can be server-like; changes can be more accurately tested locally and defects are more likely to be reproduced locally.
Cons: You need to be an expert and highly motivated; it can be daunting for not-so-savvy developers; and standardization is piecemeal – it needs a solid plan and extreme discipline – but that is similar to how configuration management and continuous integration are approached in server environments.
Tips: Works for innovation-minded folks – network with some IBM folks. Your nous and your professional network are your safety nets here.
There you go folks – some options and musings around them. Hopefully this got 'em brain cells thinking in the right direction.
Edit 1: Embedded links to installation information contextual to the blog
Modified on by Shweta Gupta IBM
Shoppers have loved the search bar on shopping sites and have come to expect it to be their first friend, taking them close to the product they intend to buy on the site. This is akin to the quintessential 'shopping assistants' in the marts, supermarkets, and our big or small fashion outlets. The shop may be earmarked and categorized, however if we are not familiar with that particular store, we tend to be awed by the array of aisles and take an instant liking to the store assistant who can either guide you to the aisle/corner of the store for your desired product or, even better, escort you to the product that you are seeking. You would agree that some of these assistants have influenced how many bags you carried back home from your shopping expedition.
The current state of eCommerce has come a long way and, with most search engines offering a set of standard technical capabilities, one would expect to find our shopping search experiences almost similar. However, we are actually quite far from this experience. While I was looking for studies on the subject, I stumbled upon a 2014 report on smashingmagazine.com, where they benchmarked the search experience of the 50 top-grossing US e-commerce websites, rating each website across a set of 60 search usability parameters, which, you would agree, is very fine-grained. Of course, a study from 2014 would require a revisit in 2016, at the pace one would expect our eCommerce sites to launch capabilities in an agile way to stay competitive.
What does this mean to you as an eCommerce Implementation Project Manager?
I have observed that even though search is an important functional feature on eCommerce sites, search relevance testing continues to be left out of the plan. Traditional functional testing includes search and browse feature tests, where the test team is responsible for a set of use-cases checking that the search functionality works as expected. However, this approach largely misses out on the desired shopper experience for the search results and/or the desired search results from the business point of view. The User Acceptance Testing (UAT) phase would typically have the business users look at search relevancy, however it remains more of an organic exercise under the overall umbrella of UAT, not getting the due attention, time, effort and, most importantly, understanding of the approach, inputs and outcomes.
What does this mean to you as the eCommerce Product Leadership for your business?
This is definitely a continual exercise for you, just as it is to deliver on the business requirements via your eCommerce and mCommerce channels. You will be doing analysis of the 'search hits' and 'search misses', hopefully on a weekly basis, and working towards tuning your results based on the data your shoppers are feeding you. The strong point is that, because you have invested in WebSphere Commerce, you will find that the Business Tools give you a lot of flexibility and ease in achieving your results.
Let us explore an approach towards search relevance, look at a proposal for the test strategy and guidelines for your WebSphere Commerce Search powered commerce site.
Outlining the search relevance testing proposal and approach
The project management of eCommerce sites must include a sprint (or more, depending upon the catalog size) which focuses on search relevance. I have had discussions with several test managers on the topic and found them looking for guidance around 'search relevance testing'. This prompted me to create an approach document that can be used as a starting guideline for your eCommerce site.
- Your list of search terms: Work with the business to identify a list of 'n' search terms which you would like to focus on in your initial iterations, where n could be 50, 100, 200 and so on, based on the width and depth of the catalog and product items.
- Variety in search terms: The search terms should include variety: single words, multi-word terms, words with units like weight (think grocery), phrases with attributes like color (think garments), variance in how shoppers search for certain products, brands which your catalog does not carry, misspelled products …
- Get into the shopper's shoes, sandals: Leverage analytics data from different sources to identify the changing trends and shopper behavior. Go out there to your favorite, and not so favorite, retailers with similar catalogs, and learn what you like and what you wish worked better.
- Product title/description recommendations: The quality of the search results highly depends upon the way the catalog data is created and maintained. The natural search results from the search engine are based on the query relevance as defined in the configuration. One important recommendation which will also come out of the relevance activity is on enriching the product descriptions and other searchable attributes.
- Tester training: Your army of testers may not be trained to think on the topic of search relevance. Plan for a training and get them to think about and experience shopping around the domain. This may be simpler for certain industries like grocery and fashion, however it may be slightly trickier in case your site is selling spare parts or heavy industry tools.
- Search developers: The search developer needs to be part of the iteration to support the relevance tuning. This requires looking at the relevancy scores and providing analysis of the results. The developer will also be responsible for making any changes which come through search configuration changes.
- Inputs to the search relevance activity: At the start of the search activity, you will have the list of search terms, expected results, synonyms, and search replacement terms.
- Outputs of the search relevance activity: At the end of the search relevance activity, you will have the list of search terms, the actual result-set, whether it meets the expected results and, if not, what the delta is. You will also have a more expanded/modified list of synonyms, search replacement terms and search rules.
- Iterative: This exercise should be iterative, based on your development life-cycle. If the catalog upload, products and brands are staggered across different releases, plan the search relevance accordingly.
- Input to the performance cycle: The outcome from this exercise should also be fed into the performance test cycle, as search performance should factor in the recommendations which will be applied in production.
Search relevance is a very subjective criterion which varies by industry, business, catalog managers, merchandizers, the current promotions, and most of all your product catalog.
The search relevance activity needs to get under the skin of the shopper to be able to identify the patterns and results. The business needs to be engaged very closely in the process.
I would hope that the approach outlined in this article helps you work towards the planning exercise and also gives you a starting point. Do share your experience on search relevance, as I look forward to the different ways in which we are out to achieve the business outcome from our commerce sites – that is, converting searches into baskets and orders.
I am attaching a basic version of the search template. If you think this is useful, do let me know so that I can share other versions of the same.
Search Relevancy Template - 2016-FEB-SearchRelevance.xlsx
Previous related blog: Customizing search for shoppers and retaining them as their attention time dwindles
1 - Changing the relevancy of search index fields
2 - Tuning multiple-word search result relevancy by using minimum match and phrase slop in WCS Version 7
3 - Search Rules
Modified on by Shweta Gupta IBM
What is High Availability?
Wikipedia describes high availability as a characteristic of a system which describes the duration (length of time) for which the system is operational. For the curious, the wiki page also provides the downtime corresponding to various availability goals.
IDC published in its survey report earlier in 2015, ‘Unplanned application downtime costs the Fortune 1000 from $1.25 billion to $2.5 billion every year’.
This article discusses the high availability considerations for the WebSphere Commerce solution components, with a focus on DB2 HADR as the key database layer high availability option.
High Availability of the WebSphere Commerce system components
A typical WebSphere Commerce system spans various tiers: web server, application servers, search servers, and database server. It also talks to several downstream systems like order fulfillment, payment and a multitude of others for different purposes.
In general, high availability is achieved by making systems redundant. Each tier of the stack has its own specific solution to achieve high availability. For example, at the web server tier, a load balancer is commonly used to distribute traffic across a web server cluster. At the application server tier, WebSphere Application Server federation and clustering are managed by the Network Deployment Manager. You will need to ensure that you know and understand the high availability capability of all the downstream systems to be able to determine the high availability of your commerce system.
The commerce system is most likely to have one active database, and therefore it can be a single point of failure if it is not set up for High Availability Disaster Recovery (HADR) in the case of DB2, or a similar high availability setup for an Oracle database. Without HADR, a partial site failure requires restarting the database management server that contains the database. The length of time that it takes to restart the database and the server where it is located is unpredictable; it can take several minutes before the database is brought back to a consistent state and made available. DB2 HADR can be set up such that a standby database takes over in seconds. Further, you can redirect the clients that used the original primary database to the new primary database by using automatic client reroute or retry logic in the application. The DB2 HADR feature provides a high availability solution for both partial and complete site failures. HADR protects against data loss by replicating data changes from a source, or primary, server to one or more standby servers. DB2 HADR supports up to three remote standby servers and is available in all DB2 editions.
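As an illustration (the database alias here is hypothetical), promoting the standby is a single command run on the standby server: db2 TAKEOVER HADR ON DATABASE WCSDB – with BY FORCE added when the failed primary cannot be reached.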
DB2 high availability is about ensuring that a database system or other critical server function remains operational during both planned and unplanned outages, such as maintenance operations or hardware and network failures. Reduced database downtime enables you to meet strict SLAs with no loss of data during infrastructure failures. DB2 provides database clustering as well as high availability and disaster recovery capabilities designed to maximize data availability during both planned and unplanned events. It also allows you to quickly and easily adapt to changing workloads with minimal involvement from database administrators, and frees application developers from the underlying complexities of database design and architecture. Mobile, online and enterprise applications need continuously available data to keep transactional workflows and analytics operating at maximum efficiency. Any downtime can leave mission-critical databases inaccessible and applications unresponsive. IBM DB2 pureScale helps change the economics of continuous data availability. DB2 pureScale is designed for organizations that require high availability, reliability and scalability for online transaction processing (OLTP) to meet stringent SLAs.
Key considerations - Planning for High Availability and Disaster Recovery
- Site location: Is there a single site, two sites, or more? What are the network bandwidth and connectivity between the sites? This will help lay out the strategy for choosing whether you want a single site to serve the main traffic and keep the other for use in case of site failure, or whether both sites will handle traffic in parallel. If the sites are geographically co-located or connected with a high-speed network, the sites can share traffic with ease.
- Active-active or active-passive: The web tier and the application tiers (commerce and search) can be configured as active-active or active-passive; however, the commerce database will most likely be set up as active-passive. If the sites do not have the required high-speed connectivity, there is a challenge, as they have to talk to a database which is remote to them.
- Single-cell or dual-cell: The commerce and search application servers can be set up in a single-cell or dual-cell topology. The dual-cell topology enables updates, rollouts and application deployments to be applied with reduced downtime. With WebSphere Commerce version 8, there is going to be more and more focus on zero-downtime deployments.
- Failover/disaster recovery capacity: The capacity that will be available to serve traffic during a site outage needs to be planned. The IBM techline sizing document will make a recommendation on the failover capacity, and this data can be used to work out a starting percentage. The failover planning can be at 25%, 50%, 75% or 100%, or somewhere in between. As a simple illustration with made-up numbers, if full production capacity is 16 application server JVMs and you agree on 75% failover capacity, the surviving site must be able to carry 12 JVMs' worth of load.
- Search application server managed configuration: WebSphere Commerce v8 has introduced an advanced configuration topology that allows the search server to be in a managed configuration, which sets up the search master, subordinate and repeater servers from templates that can be managed by the deployment manager.
- Database high availability topology: The database backup needs to be considered based on the database's native capabilities and the site location. Take the example of DB2 as the database and a two-site setup where the sites are physically remote from each other and not connected with the required bandwidth. DB2 would have an onsite hot standby in its primary site using NEARSYNC, and another offsite disaster recovery standby using SUPERASYNC. The hot standby would be on automatic takeover, whereas the offsite standby would be under manual control. The DB2 high availability setup has two options, AIX/Linux/OS clustering or native DB2 HADR, and the choice must be made based on the governing factors. The bandwidth calculation for HADR depends on the peak incremental data per hour.
Reference: High Availability and Disaster Recovery Options for DB2 for Linux, UNIX, and Windows
- DB2's geographically dispersed DB2® pureScale™ cluster (GDPC) capability: If you are looking for an active-active highly available system, consider the pureScale capability, where GDPC provides the scalability and application transparency of a regular single-site DB2 pureScale cluster, cross-site. As described in the developerWorks article, 'this is the active-active system, where the pureScale members at both sites are sharing the workload between them as usual, with workload balancing (WLB) maintaining an optimal level of activity on all members, both within and between sites. This means that the second site is not a standby site, waiting for something to go wrong. Instead, the second site is pulling its weight, returning value for investment even during day-to-day operation.'
- Automatic client reroute options in your application: Once your database high availability topology is agreed upon, you will be leveraging automatic client reroute, popularly referred to as ACR. ACR can be configured in multiple ways, and lets you set properties like maxRetriesForClientReroute and retryIntervalForClientReroute. With DB2 HADR you can configure the ACR facility so that client connections are automatically rerouted to an alternate server when the primary server fails; it is the preferred reroute method. ACR can be used between any two servers, not just an HADR primary and standby – it is then up to the administrator to set up replication to ensure that the two servers have the same data content. Other replication methods, such as CDC or Q-rep, can also be used to sync up the two servers. The ACR functionality is entirely separate from HADR. A minimal code sketch follows this list.
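To make the ACR properties concrete, here is a minimal sketch of setting them on the DB2 JCC data source in plain Java. The host names, ports, database name and retry values are illustrative assumptions; in WAS you would normally set the same values as custom properties on the configured data source rather than in code.

import com.ibm.db2.jcc.DB2SimpleDataSource;

public class RerouteDataSourceFactory {
    public static DB2SimpleDataSource create() {
        DB2SimpleDataSource ds = new DB2SimpleDataSource();
        ds.setServerName("primary-db.example.com");   // HADR primary
        ds.setPortNumber(50000);
        ds.setDatabaseName("WCSDB");
        ds.setDriverType(4);                          // JCC type 4 connectivity
        // Alternate server tried when the primary becomes unreachable
        ds.setClientRerouteAlternateServerName("standby-db.example.com");
        ds.setClientRerouteAlternatePortNumber("50000");
        // Retry behavior while rerouting
        ds.setMaxRetriesForClientReroute(3);
        ds.setRetryIntervalForClientReroute(10);      // seconds
        return ds;
    }
}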
High availability for a commerce production system depends upon several factors – availability requirements, high availability planning, disaster recovery, site planning, integrated systems, network bandwidth and database topology being a few of those under consideration.
The goal of this article is to help you think of your key considerations as you plan, prepare, design, implement and test your different tiers for high availability.
Credits: Anbu Ponniah is our most regular reviewer and provides feedback to ensure that we cover points which will be of interest to our readers. Thank you, Anbu, for your valuable insights, always.
About the authors:
Pravin Kedia is an Analytics Solution Architect with IBM, helping customers with Data Warehouse and Database Solutions. He is passionate about IBM technologies and shares his insights through blogs on developerWorks.
Shweta Gupta is a WebSphere Commerce consultant with IBM, helping customers with their eCommerce journey. She is passionate about the performance of systems and shares her insights through blogs on developerWorks. Read about her other publications on LinkedIn.
Modified on by Shweta Gupta IBM
You may have been part of commerce implementation cycles where functional stability pushes out the timelines for the performance iterations. In such scenarios, the project teams may decide to start some level of application diagnosis while waiting for an isolated performance system with a stable functional build.
The lower testing environments like SIT (System Integration) and UAT (User Acceptance) can make good platforms for the solution implementation and development teams to analyze application performance. These are some of the factors you should consider when using them for diagnosis, and factor into your analysis:
a) What is their deployment topology with respect to production
b) What is their level in terms of cumulative fixes and APARs with respect to production
c) What are the data load sizes with respect to production
d) What Management Center business rules, activities and search rules are present
e) What eSpots and promotions are available
f) Can you get an isolated time slot to conduct your single-user and some trivial load tests
g) What is the state of the integrated systems like order management, payment and others, and how do you isolate your diagnosis
There are several monitoring tools, covered in my other blogs, that can aid in your analysis. You may not have an APM (Application Performance Management) tool installed on these environments; however, you do have access to the Health Center tool which is packaged with WebSphere Application Server (WAS). I will use this article to focus on one of the IBM Monitoring and Diagnostic Tools, Health Center, which is an agent-based diagnostic tool for monitoring the status of a running Java application.
The Health Center agent libraries are included with the WAS server Java SDK (Health Center 3.0 or later is required for WebSphere Commerce). This makes it an easy-to-use tool, available on these environments, which can be used by developers, architects and performance analysts. The Health Center tool can just as easily be used in the toolkit development environment to allow developers to code for performance; it will show them the time spent and the thread stacks. It is prescribed that all developers profile their code and make it perform well during the implementation phase, working on the 80-20 rule: if you are able to optimize the highest consumers, you will gain performance. When the code is in SIT and UAT, the focus should be more on the integration perspective. However, I must admit, we often end up doing application-level code tuning based on analysis until much later in the cycle.
Configuration, Setup and Running Health Center
1 – Check the version
# /opt/IBM/WebSphere/AppServer/java/bin/java -Xhealthcenter -version
[Thu 27 Aug 2015 11:55:33 AM IST] com.ibm.diagnostics.healthcenter.java INFO: Health Center 3.0.x
Aug 27, 2015 11:55:33 AM com.ibm.java.diagnostics.healthcenter.agent.mbean.HCLaunchMBean <init>
INFO: Agent version "3.0.x"
INFO: Health Center agent started on port 1972.
java version "1.6.0"
2 - Enabling the Health Center agent in profile or headless mode
In the headless mode, the agent starts collecting data immediately, and stores the data in files called healthcenterpid.hcd, where pid is the process ID of the agent. You can load this file into the client at a later date, to view the data. Headless mode is useful for systems where you cannot connect a client to the agent.
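For example (the level syntax is per the Health Center documentation), appending -Xhealthcenter:level=headless to the server's generic JVM arguments starts the agent in headless mode on the next restart.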
I am going to use the Health Center tool in full mode for this article. The agent starts collecting data immediately, and you then launch the client and connect to the server process to view the data.
3 – Health Center settings
The Health Center agent can be configured through JVM properties. Please refer to the Knowledge Center documentation for the configuration.
Knowledge Center Reference
The key agent properties (use -D for JVM arguments, or use this format with the -Xhealthcenter option of the Java command when you start the agent and the application at the same time) are:
- com.ibm.java.diagnostics.healthcenter.agent.port – sets the JMX port number. For JMX client-agent connections (the default), the client uses port 1972 by default for communicating with the agent. If the client cannot use port 1972, it increments the port number and tries again, for up to 100 attempts. You can override the first port number that the agent tries to use.
- com.ibm.diagnostics.healthcenter.jmx – set to on to enable the client to communicate with the agent by using a JMX connection. This property is set to on by default.
- headless mode – the agent starts collecting data immediately, and stores the data in files rather than sending it to the client. When the application ends, the agent creates a file called healthcenterpid.hcd, where pid is the process ID of the agent. You can load this file into the client at a later date to view the data. If headless mode is used, you can specify multiple other settings.
I am starting the WebSphere Commerce Server using the following parameters for this illustration purpose:
-Xhealthcenter, -Dcom.ibm.java.diagnostics.healthcenter.agent.port=1980, -Dcom.ibm.diagnostics.healthcenter.jmx
These settings get saved to server.xml under:
<profile directory>/config/cells/<cell>/nodes/<node>/servers/<servername>/server.xml and are picked up by startServer.sh
4 – Restart the server
Restart the WebSphere Commerce server you are profiling. Confirm that the Health Center agent shows up on the server; you will find the directory <healthcenter> created under the application server logs.
Installing the Health Center Client to View Data
You may follow any one of the options to install the IBM Support Assistant.
Using a Health Center Client to View Data
1 – Connect the Health Center to the port mentioned while starting the application server.
The connection will start and you will be able to see the following data using the left navigation.
Example of a Profiling View for com.ibm.commerce.rest* filter:
You can use filters to view the classes and methods that you are watching. You can then turn on the trace for the method you want to drill into. In Health Center, you have to enable method tracing when you start your application, as it cannot be enabled at run time. Enabling method tracing is intensive and negatively impacts application performance, so enable it selectively based on what you are drilling into.
Using the Health Center profiler for your debugging
I am showing a few screenshots to help you see the traces for some of the commerce use-cases.
If your storefront has created an extension to the Wish List and custom code has been written around it, you may be looking to profile the user actions around adding to a wish list, sharing the list and viewing it. You can choose a filter com.ibm.commerce.giftcenter.* and then look at the invocation paths.
You may use the "Reset Data" button on the toolbar (the second button from the right) to get data for the current flow only.
Search based operations:
IBM Health Center allows commerce developers to monitor and profile their application in development and testing environments with ease – no additional skills or resources are required to install and use monitoring tools. The tool comes pre-installed with WAS and has a simple configuration to be set up in both profiling mode and headless mode.
The tool provides data like CPU usage, GC, I/O and network; however, the focus I have provided in this article is the profiling of a live application using method traces. This enables a developer to review the invocations, get into the details of the hot methods and drill further to locate and then work on them. It allows the analyst/architect/developer to analyze the path taken for a cold, not-yet-warmed page versus a cached page. The developer can extract nuggets of information from the flows, which help with troubleshooting both functional issues and performance issues.
When you do exercise load on the environment, Health Center will allow you to view the system metrics and JVM metrics like thread information. The web container threads are most valuable in understanding the behavior of the application under load. I will come back to discuss the thread analysis topic again. If you are interested in a specific monitoring methodology or tool, feel free to talk about it in the comments section.
Modified on by Shweta Gupta IBM
Why a data cache is required for Commerce search servers
One of the common questions which comes from search developers and WebSphere Commerce architects is on the role the data cache plays in search performance. I am hoping that through this article I am able to de-mystify this ask to an extent. Two of my colleagues have written articles around the search data cache, which was introduced in FEP7: Andres Voldman in The Data Cache Moves to the Search Server
and Vani Mittal in Caching strategies for the new search architecture (FEP7+).
However, I believe there is space for another article which goes to the next level of what is actually cached in the search data cache. The commerce search server, built on Solr, uses the in-memory caching provided by Solr. The common cache tunables like filterCache, queryResultCache and documentCache significantly help results to be cached, enabling the search queries to be highly responsive. These very much need to be tuned for WebSphere Commerce FEP7+ search and browse performance.
An additional cache has been introduced in FEP7 for search servers at the WebSphere Application Server (WAS) dynamic cache layer. This data cache's focus is caching database query results used during the pre-processing phase of the query. The WAS DistributedMap and DistributedObjectCache interfaces are dynamic cache interfaces through which applications can cache and share Java objects by storing a reference to the object in the cache.
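As a minimal sketch of how application code interacts with such a cache (the JNDI name, cache key and helper method here are hypothetical – the search data caches themselves come pre-configured, so code like this is only relevant for your own extensions):

import javax.naming.InitialContext;
import javax.naming.NamingException;
import com.ibm.websphere.cache.DistributedMap;

public class SearchDataCacheHelper {
    public Object getOrCompute(String cacheKey) throws NamingException {
        InitialContext ctx = new InitialContext();
        // Look up a pre-configured object cache instance by its JNDI name
        DistributedMap cache =
            (DistributedMap) ctx.lookup("services/cache/MyCustomDistributedMapCache");
        Object value = cache.get(cacheKey);
        if (value == null) {
            value = runExpensiveDatabaseQuery(cacheKey); // hypothetical helper
            cache.put(cacheKey, value);
        }
        return value;
    }

    private Object runExpensiveDatabaseQuery(String key) {
        // Placeholder for the SQL pre-processing work you want to avoid repeating
        return "result-for-" + key;
    }
}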
What is in the search data cache
The Knowledge Center links give an introduction to the search data cache and provide a list of object cache IDs under the DistributedMaps.
FEP8 WebSphere Commerce search Data Object Cache
There are 13 data cache objects that are enabled by default in FEP8 for the Aurora storefront. We will examine some of these caches and explore a handful to get better insight.
The object cache distributed maps pre-configured on the commerce search server cover areas such as search and price rules.
Example of some cache contents
SearchSystemDistributedMapCache stores the configuration from the STORECONF table, like wc.search.priceMode, wc.search.priceMode.compatiblePriceIndex and wc.search.entitlement.
SearchFlexFlowDistributedMapCache saves the SQL result for the feature enablement check on expanded category navigation, that is, whether it is enabled for the storefront. When enabled, expanded category navigation displays all of the products that belong to the immediate category and its subcategories. When disabled, category navigation displays only the products immediately within the current category, and the facets on the left sidebar belong to only those products from the search; customers cannot filter by brand or price at the higher levels.
Sample cache entry:
com.ibm.commerce.foundation.internal.server.services.search.util.StoreHelper.featureEnabled:[findFlexFlowByStoreIdAndFeature, 10001, ExpandedCategoryNavigation]:
This cache is used to store search rules associated with merchandising. For example, the provider SolrRESTSearchBasedMerchandisingExpressionProvider calls the marketing RESTful service to run search activities. The query fragments produced by search activities are added back into the SelectionCriteria object for other downstream processing. These search rules are cacheable in this object cache.
Sample cache entry:
com.ibm.commerce.foundation.server.services.rest.search.expression.solr.SolrRESTSearchBasedMerchandisingExpressionProvider:[searchRuleQueryFragments, dresses, null, null, , , null, , 10052, 10001, 1000]:
This cache, as the name suggests, is used to store facet information for the facet list by category ID, products by category, facets by keyword and sorted facets. For instance, it stores the result of the facet list composed by the search profile IBM_ComposeFacetListByCategoryId. This is an important cache for facet display performance, as it keeps data for FacetHelper.category, FacetHelper.keyword, FacetHelper.columns, FacetHelper.attr, FacetHelper.sortedForNavigation and FacetHelper.sortedForKeywordSearch.
com.ibm.commerce.foundation.internal.server.services.search.util.FacetHelper.category:[filterFacets, CatalogEntry, IBM_ComposeFacetListByCategoryId, USD, -1, 10001, 10006, 10052, true]:
com.ibm.commerce.foundation.internal.server.services.search.util.FacetHelper.category:[filterFacets, CatalogEntry, IBM_findProductsByCategory, USD, -1, 10001, 10038, 10052, true]:
SearchCatHierarchyDistributedMapCache stores the category information for categories, sub-categories and the navigation path.
com.ibm.commerce.foundation.internal.server.services.search.util.HierarchyHelper.category.name:[findCategoryNameByCatalogIdAndLanguageId, 10052, -1, 10001, 10001]:
com.ibm.commerce.foundation.internal.server.services.search.util.HierarchyHelper.navigation.path:[findNavigationPathByCatalogIdAndLanguageId, 10052, -1, 10001, 10001]:
Make sure that you have set up cache invalidation for WebSphere Commerce search correctly to avoid serving stale data. The cache should be invalidated after re-indexing; if you are running frequent inventory refreshes, the cache invalidation must be scheduled accordingly.
This Knowledge Center link outlines tips on cache invalidation.
The following items must be considered to determine an appropriate delay, in milliseconds, before the cache invalidation occurs after each search reindexing:
The time the next reindexing scheduler command is started
The approximate amount of time that the reindexing might take to complete
The next replication time between the production search index and the repeater
The approximate amount of time that the index replication might take to complete
Where the sum of the time estimates is equal to the approximate required delay time before cache invalidation can occur.
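For example, with illustrative numbers: if re-indexing typically takes 20 minutes and replication from the production index to the repeater a further 10 minutes, the delay before cache invalidation should be at least 30 minutes, that is, 1,800,000 milliseconds.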
I have tried to de-mystify some of the object caches and their application on the search servers. This is not meant to be a comprehensive guide, but I hope it will help you consider caching as you build extensions to your storefront's search capability.
It will also be useful to understand the cache contents as you tune your system for performance. By monitoring the statistics on these cache objects – the number of cache entries, cache hits and cache misses – you will be able to tune the cache sizes. Some of these will be a derivative of your catalog size, deep sequencing and search rules.
Musings on WebSphere Commerce and stack options
Most of us know this – WebSphere Commerce is an enterprise Java application that runs on top of WebSphere Application Server, which runs on a Unix OS and uses DB2 or Oracle as its persistence layer. The technology space is constantly evolving and new choices are available for each layer that WebSphere Commerce depends on. There are often doubts in the minds of retail IT departments on whether they can safely adopt an alternative technology in any of the stacks. This blog is a collection of my musings on that topic:
By default, everyone assumes that WebSphere Commerce (WCS) runs on an IBM stack! But a quick look at the detailed requirements page reveals some non-IBM components in the stack: Windows and Linux on Intel-based processor systems are two key differences in the operating system layer (bringing flavors of Windows Server, Red Hat Enterprise Linux and SuSE into the mix), while Oracle is a key difference at the database layer. Otherwise, the list looks like a pure IBM list. However, I have seen customers experiment with more choices than the above. Let us look at some variations.
1. Web Server Options
The current IBM HTTP Server corresponds to Apache web server 2.2.29. Some customers choose Microsoft Internet Information Services (IIS) because they are using that solution in other parts of their IT enterprise. Similarly, other choices such as Nginx or the Apache web server itself are not unknown. Of course, the default choice of IBM HTTP Server comes with the plugin configuration, predefined configurations for SSL communication between the WAS and web server layers, and support for both unmanaged and managed WAS nodes. So using a different web server solution would require the IT team to perform some manual configuration. The reason for choosing an alternative web server is usually that the team already uses a different web server for all their enterprise needs, or is after specific capabilities of a web server (for example, Apache web server version 2.4's bandwidth limits for clients).
2. Hosting and OS options
WebSphere Commerce is supported on popular enterprise *nix options such as AIX and Linux on x86, AMD and PPC architectures, and on virtualization options such as PowerVM. However, customers often wonder if they can deploy WCS solutions on their choice of cloud provider. The short answer is "WCS is functionally compatible with most cloud providers". Cloud providers differ on the virtualization technology they use – so the product will provide a full set of functional features, but the sizing needed to run the solution may differ between providers. As long as the IT team works with that provider to adjust their techline sizing, this should work. Of course, going with IBM SoftLayer means the techline sizing from WCS is straightforward and support is in one place. IBM Commerce on Cloud DevOps services has hardened reference architectures and devops patterns used in multiple client engagements. We also have an IBM UrbanCode and Chef based setup, configuration and deployment management solution that automates many of the mundane tasks in addition to supporting common solution architectures. And as per http://www-01.ibm.com/support/knowledgecenter/SSZLC2_7.0.0/com.ibm.commerce.install.doc/refs/rigsupportSCCI.htm: "Depending on your virtualization configuration and use case scenarios, the cost of operating in virtualized environments could be more than 30% of your overall native, non-CCI IBM WebSphere Commerce available capacity".
On operating systems: if the database and WebSphere Application Server run on an OS, theoretically WCS also runs. However, from the WCS product literature, it is clear that only a handful of operating systems have been certified (tested) for production use. So, while WCS "should" work (and probably would) and, knowing IBM, it will probably provide best-efforts help for solving any problems reported on those operating systems, companies adopting unsupported flavors of Linux should have a plan in place to handle such situations. For example, having at least a small farm of machines on a supported OS flavor helps in differentiating a problem as specific to the OS or configuration versus the WCS product. I haven't personally seen clients experiment much here.
3. DB options
DB2 and Oracle are very popular amongst WCS customers. What if someone wants to experiment with NoSQL or in-memory DB options? Again, typically with IBM, these will not be supported, but if you open a PMR you will find that IBM will help you on a "best efforts" basis – note that this is the same level of support provided for your custom code. I have seen customers with good IT teams experiment with such options. One of the earlier posts in this blog discusses a pattern where some of the information is moved to a NoSQL DB. Similar exploration on using alternatives for parts of transaction processing is possible – for example, keeping all registered shopper information in an Elasticsearch solution.
4. Queue, Transformation, ESB etc
WCS integrates with a multitude of IBM and third-party products – IBM OMS (Sterling), CPQ... and other OMS and ERP systems are common. Point-to-point queue based or ESB based integration patterns are popular, and IBM MQ and IIB are supported. IBM has a long history of supporting integration with SAP based systems, in fact some say better than Hybris. What if you want to explore something like Apache Kafka? Like any solution built on loose coupling, by implementing a few Java adaptors one can integrate over such technologies too.
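As a sketch of what such an adaptor could look like (the broker address, topic name and payload are illustrative assumptions; this is plain Apache Kafka client code, not a WCS API):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OrderEventPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka-broker.example.com:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        // Key by order ID so all events for one order land in one partition
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("wcs-order-events",
                    "10001", "<OrderCreated orderId=\"10001\"/>"));
        }
    }
}

A custom outbound adaptor would call code like this wherever the solution currently drops a message on MQ.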
5. Monitoring and managing utilities
Any OS-level monitoring and managing utilities are of course supported – that is a no-brainer. The WCS tracing and logging framework is built on WAS's JRas framework. While any log monitoring or log analytics tool that reads those logs is easy to support, if you need to replace the complete logging solution with a custom logging framework, you would need to swap out some foundation JARs from WCS and WAS, and this will impact your support statement. One option to stay "in support" is to extend the needed classes and override the loggers at all critical places. WCS provides many extension points which can come in handy here.
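A hedged variation on that idea – the logger name and sink below are hypothetical, and this relies on WAS logging flowing through java.util.logging – is to attach a custom handler and forward log records to your external solution without swapping any shipped JARs:

import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class ForwardingHandler extends Handler {
    @Override
    public void publish(LogRecord record) {
        if (isLoggable(record)) {
            // Ship the record to the external logging solution here
            sendToCustomSink(record.getLoggerName(), record.getLevel(),
                    record.getMessage());
        }
    }

    @Override public void flush() { }
    @Override public void close() { }

    private void sendToCustomSink(String logger, Level level, String message) {
        // Placeholder for the custom transport (HTTP, queue, file, ...)
        System.out.printf("%s [%s] %s%n", logger, level, message);
    }

    public static void main(String[] args) {
        // Attach to a parent logger so child loggers inherit the handler
        Logger.getLogger("com.ibm.commerce").addHandler(new ForwardingHandler());
    }
}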
6. Search options
It should be mentioned that another layer that often pops up during solution discussions is "Search". As we know, WebSphere Commerce Search has Solr 4.7 (Lucene) as its engine, and there are many add-ons that integrate search into mainstream e-commerce. So not just the usual search and catalog navigation use Solr: price and inventory are available there, rules to modify search are now bubbled up to the business tools, and search is an integral part of precision marketing too. However, some customers want to improve the core search capability itself, and that is possible. For example, customers may want to upgrade to a later version of Solr to take advantage of a specific filter, or add custom extensions to support a specific language or capability (such as phonetic search). Since the architecture is loosely coupled, and since the Solr configuration (as kept in solrconfig.xml) is visible to IT teams, modifying that layer is straightforward.
7. Utility libraries
Lastly, I would like to mention the multiple third-party utility or special-purpose libraries that become part of the solution – these are typically located in the WC_TOOLKIT/workspace/WC/lib directory in the toolkit. I know of instances where projects needed string-processing capability available in a later version of a third-party library. That is also the place for adding solution-specific additional third-party libraries.
I have just shared my musings about some of the common alternatives that come up during discussions with IT experts involved in e-commerce solutions; there are numerous other layers. Note that none of the above should be interpreted as a support statement from IBM – these were just examples of options explored to various degrees in various contexts. As always, check with your IBM support personnel before venturing into something different.
Modified on by Vani Mittal
A requirement that we sometimes come across is to have region based product assortment and pricing. FMCG and grocery retailers typically have different pricing and product assortments in their physical stores, and the same needs to be replicated in the online channel. Running local promotions and offers is also quite common, for example, for a local festival or event.
There are multiple ways to achieve this requirement and the approach you choose to implement depends on the level of differentiation required among regions. Let’s explore some of these approaches.
The definition of region is kept intentionally loose here. Multiple countries could constitute a single region. Each country could form a region or one country could be made up of multiple regions.
The region to which a user belongs could be identified using many different approaches, either through a user-provided zip code or by using a geo-location service. That discussion is out of the scope of this article. All we really care about is that before a user starts browsing the catalog, her region has been identified and saved somewhere, be it in a cookie or in the database.
Approach 1 - Multiple extended sites
This is probably the first approach that comes to one’s mind. The structure and concept of extended sites lends itself well to this requirement in general.
Each region can be represented using a separate e-site. Each e-site can have its own sales catalog that offers a subset of the products available in the master catalog in the catalog asset store, thus offering a different product assortment.
Each e-site could have its own price rule which could override the prices from the master catalog as needed.
The e-sites can even have their own UI differences and region specific marketing and promotions.
If the number of regions is very large then similar regions could be combined into a single e-site with the minor differences within those regions being implemented using other approaches discussed here.
Approach 2 - Multiple catalog filters in B2C store model
In the B2C store model, the store’s default contract is used to define the default entitlement of the store. All customers shop under this one contract which by default provides the same catalog and pricing to all users. There are ways to work within this default contract and offer differentiated catalog and pricing.
You can create multiple catalog filters and add them to the default contract using the CatalogFilterTC term. Each TC should have a TC-level participant for a (region specific) member group, which means that the TERMCOND_ID and MEMBER_ID columns in the PARTICIPNT table will be populated while TRADING_ID will be NULL. See the definition of the PARTICIPNT table for details of these columns: http://www-01.ibm.com/support/knowledgecenter/SSZLC2_7.0.0/com.ibm.commerce.database.doc/database/participnt.htm?lang=en.
The default contract cannot be edited through WebSphere Commerce Accelerator, so to add the catalog filters to it you either need to use the contract XML import command (ContractImportApprovedVersion) or write custom data load configurations using TableObjectMediator to load the data into the required tables (TERMCOND, PARTICIPNT, etc.).
The default contract always has one price rule associated with it. The price rule can include different branches of pricing for different member groups, thereby allowing for differentiated pricing. Alternatively, you could create a custom price rule condition that checks for the user's region (assuming it is saved somewhere – cookie or database) and use that to decide which path to follow.
Refer to this topic in info center on how to create custom price rule conditions:
Adding a new custom condition or action for a price rule
For this approach to work, the customers need to be added to region specific member groups. Only static member groups, i.e. member groups that have users added to them explicitly, can be used as TC-level participants. Users can be added to or removed from member groups dynamically, but you need to write custom code to map the shoppers to the appropriate member groups. This can become a performance bottleneck, particularly if the retailer has a very large user base.
Approach 3 - Multiple buyer contracts in B2B store model
If you are using the Enterprise edition of WebSphere Commerce, you can use the B2B store model in a B2C scenario with some adaptations.
Starting from Feature Pack 8, a single Aurora based storefront for both B2B and B2C capabilities is offered. When publishing the Aurora store archive you can choose to publish it as a B2B storefront and then disable the B2B features that you don’t require, for example, requisition lists, saved orders etc. Some of these features can be disabled using the Store Management tool in Management Center, while some might require JSP changes.
The benefit of basing your B2C store on the B2B store model is that you get the B2B business user functions; for example, you have the ability to manage the buyer contracts using WebSphere Commerce Accelerator.
In Feature Pack 7 and earlier, you can use the Elite starter store as your store’s base and again adapt it for B2C, i.e. enable guest user shopping and disable the features you don’t need.
If you don’t need the B2B business user functions, you can base your store on the Aurora B2C model too. In this case, you can manage the contracts either through contract XML import command (ContractImportApprovedVersion) or write custom data load configurations using TableObjectMediator to load the data into the required tables. Data load is particularly useful when managing a large number of contracts.
Whatever store model you choose to go with, the rest of the approach described here will remain the same.
Product assortment and pricing
Each region can be mapped to a buyer contract with each contract being associated with a unique catalog filter and price rule. A catalog filter can be assigned to a contract through a TC of type CatalogFilterTC and a price rule through a TC of type PriceRuleTC.
For a given contract, only a single price rule can be in effect at one time. Therefore, the price rule you assign must generate prices for every customer and for every catalog entry available to customers under the contract.
The contracts need to be applicable to all users. This is handled through contract participation, where one row in the PARTICIPNT table with a NULL MEMBER_ID value and a 'buyer' participant role is created for each regional contract. This allows any user to shop under the contract.
Once the customer's region is identified, the corresponding contract can be set in the session using the ContractSetInSession URL. Once this is done, the customer will see and be entitled to the region specific product assortment and pricing.
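For illustration (the IDs and the target view here are hypothetical), the storefront could issue a URL along the lines of ContractSetInSession?contractId=10001&URL=TopCategoriesDisplay once the region is resolved; subsequent requests are then served under that regional contract.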
For contracts and catalog filters to be used at runtime, search entitlement needs to be enabled. This is required because, from Feature Pack 7 onwards, browsing and searching, as non-transactional requests, were offloaded to the search server. As a result, the WebSphere Commerce server acts as a transaction server, and the search server acts as a non-transactional server that can be separately deployed and scaled. Entitlement was moved to the search server to apply entitlement checks before and after search results are returned in the storefront. If you use the Aurora storefront from Feature Pack 7, you will need to update the product and category REST calls in the JSPs to pass in the contractId that is set in the session, so that the search server uses it to enforce the entitlement.
By default, search entitlement is disabled for B2C stores and enabled for B2B stores. You can insert or update an entry with the name 'wc.search.entitlement' and the value 0 (disable) or 1 (enable) in the STORECONF table.
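For example (the store entity ID here is illustrative), a statement like INSERT INTO STORECONF (STOREENT_ID, NAME, VALUE) VALUES (10051, 'wc.search.entitlement', '1') enables search entitlement for that store.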
If the regions have simple UI differences, then those could be managed through contract specific style sheets, where you follow a naming convention for the files so that the appropriate one is picked up. For example, the name of the CSS file could be the same as the contract name. Alternatives could be designed using the same concept (mapping the UI differences to the contract) if the UI differences are significant, but this would require customization.
Marketing and promotions
If region specific marketing and promotions are a requirement, then region specific customer segments can be created to target the customers. The customer segments can be created as static member groups, i.e. member groups that have users added to them explicitly. Users can be added to or removed from member groups dynamically, but you need to write custom code to map the shoppers to the appropriate member groups; as discussed in the previous approach, this can become a performance bottleneck.
An alternative is to not include any explicit members or implicit rules of membership when creating the customer segments. All customer segments are then identical copies of each other, with only the names being different. To identify a specific customer segment as belonging to a particular region, the relationship between a region's customer segment and contract can be stored in a custom table. You can customize the member group checking logic to return true when evaluating the customer segment of the region that the customer is currently browsing. Essentially, every user is treated as a member of the regional customer segment only when they are browsing under the contract of that region.
You can then use the customer segment while defining a promotion or a marketing activity. Refer to the screenshot below for an example of a region specific product recommendation.
This approach of using multiple buyer contracts works well when the number of regions is not too high. If you have a large number of regions, that would mean a large number of eligible contracts for each user, since each user is eligible to shop under any contract. The WebSphere Commerce server and search server scan through all eligible contracts of a customer at multiple points, so having a very large number of eligible contracts can impact the performance of the storefront.
WebSphere Commerce provides a flexible and powerful architecture and runtime framework. As a result, there can often be multiple ways to implement a given requirement. In this blog post, we discussed approaches that can be implemented to offer region specific products, pricing, UI, marketing and promotions. You should be able to evaluate the approaches given here, map them to your business requirements, and select the approach that works best for you.