Modified on by Raghuveer P Nagar
Referring to our third article on Deploying Customizations, this is our fourth and last article from the OMS-on-Cloud stream in the WCE Practitioners Lounge.
By going through this article, readers can expect to learn the integration scenarios for IBM Order Management (that is, OMS on cloud) and how it can be integrated with other systems present in the enterprise architecture.
System Integration and Considerations
A commerce solution for an enterprise is built from multiple systems, with each system having a specific role in the solution. For example, in addition to IBM Order Management (whose primary role is multi-channel order management and fulfillment), an enterprise solution will include systems like CRM and payment gateways, responsible for customer management and payment management (including refunds) respectively. To make all these systems work together across various business scenarios, the systems are integrated with each other. Moreover, there are a number of considerations during system integration, such as the deployment model of the external system (on cloud or on premise), security, exception handling, and flexibility (integration through middleware vs. point-to-point integration).
Types of Integration
We can classify the integrations into the following categories:
- Inbound Synchronous Integration
- Outbound Synchronous Integration
- Inbound Asynchronous Integration
- Outbound Asynchronous Integration
Synchronous integration between two systems is when data (request and response) is exchanged in real time, whereas asynchronous integration is when the data is not exchanged in real time (that is, data is first stored and then processed for exchange). From an IBM Order Management perspective, in outbound integrations IBM Order Management invokes a service in an external system, whereas in inbound integrations an external system invokes an IBM Order Management service.
Integrating IBM Order Management
In a typical asynchronous integration, data is put into a component like a JMS queue, a file (CSV), or a database. Later, data from the asynchronous component is picked up and processed. Data can be put into the asynchronous component by OMS for consumption by an external system (outbound flow), or by an external system for consumption by OMS (inbound flow). To implement asynchronous integration in OMS, a service is configured using the OMS Service Definition Framework (SDF). This service is run through an integration agent which reads or posts the data.
IBM Order Management, which is OMS on cloud, supports asynchronous integration through both REST services and WebSphere MQ. With IBM Order Management access, you get access to the WebSphere MQ service as well. To know more about supported queue operations for IBM Order Management, please visit adding queues and managing queues. Though WebSphere MQ is the natural choice for asynchronous integration, you may have to use a custom REST service in certain cases. For example, a client's security policy may prefer REST-service-based integration over queue-based integration. In such cases, for inbound integrations, you can develop and expose a REST service which internally uses the out-of-the-box WebSphere MQ service for the required asynchronous processing, whereas, for outbound integrations, it is the responsibility of the service you invoke to asynchronously process the request sent by IBM Order Management. Please visit invoking IBM Order Management REST services and setting required properties for details on these topics.
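The inbound pattern described above — a thin REST façade that only stores the payload for later processing — can be sketched as follows. This is a minimal, illustrative sketch: a java.util.concurrent queue stands in for the out-of-the-box WebSphere MQ service, and the class and method names are assumptions, not OMS APIs.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical inbound façade: the custom REST endpoint handler calls accept(),
// which only enqueues the payload; a separate integration agent drains the
// queue later. A BlockingQueue stands in here for WebSphere MQ.
public class AsyncInboundBridge {
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(100);

    // Called by the REST service: store now, process later.
    public boolean accept(String payload) {
        return queue.offer(payload); // false if the queue is full
    }

    // Called by the integration agent: pick up stored data for processing.
    public String pollNext() {
        return queue.poll(); // null when nothing is pending
    }
}
```

The point of the sketch is the decoupling: the caller gets an immediate acknowledgement while the actual processing happens asynchronously, exactly as with a queue-backed SDF service.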
As far as synchronous integration is concerned, IBM Order Management supports integration with external systems through REST APIs/services for inbound flows. For outbound flows, the external system's services can be called through custom code. Depending on the type of external service (REST, SOAP web service, EJB web service, etc.), you can develop a custom utility to be used while invoking external synchronous services. Further, to make the custom utility reusable, you can parameterize it (with parameters like the URL and request method).
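As an illustration, such a parameterized utility for REST endpoints might look like the sketch below, built on the standard java.net.http client. The helper name and the defaults (JSON content type, 30-second timeout) are assumptions for the example, not part of any OMS API.

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.time.Duration;

// Hypothetical reusable utility for outbound synchronous calls: the URL,
// HTTP method, and body are parameters, so one helper serves many external
// REST services. (A SOAP or EJB endpoint would need its own variant.)
public class ExternalServiceInvoker {

    public static HttpRequest buildRequest(String url, String method, String body) {
        return HttpRequest.newBuilder()
                .uri(URI.create(url))
                .timeout(Duration.ofSeconds(30))   // always set a timeout
                .method(method, body == null
                        ? HttpRequest.BodyPublishers.noBody()
                        : HttpRequest.BodyPublishers.ofString(body))
                .header("Content-Type", "application/json")
                .build();
    }
    // A real call would then be:
    // HttpClient.newHttpClient().send(buildRequest(...),
    //         HttpResponse.BodyHandlers.ofString());
}
```

A real implementation would add authentication headers and error handling for non-2xx responses, following the conventions of the external service being invoked.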
The following figure depicts a sample IBM Order Management integration with an external system:
The sample integration depicted in the figure works as described below:
- OMS Asynchronous Outbound – On an appropriate event in IBM Order Management, the data to be sent to the external system is posted to a queue internal to OMS. An OMS integration agent reads the data from the queue and processes it (through an "External Service Request") by either invoking an external service (which can be a SOAP web service or a REST service) or directly putting the message into an external queue. If there is any error, the error can be reprocessed later to send the request again.
- OMS Asynchronous Inbound – On an appropriate event, the external system puts a message (with the required data) in an IBM Order Management queue. An OMS integration agent reads the data from the queue and processes it. If there is any error, the error can be reprocessed.
- OMS Synchronous Outbound – IBM Order Management invokes external system service synchronously, with an appropriate timeout. Also, the external service can be a SOAP service, REST service or EJB web service. OMS also receives the response from the external service and handles the response for both success and failure scenarios.
- OMS Synchronous Inbound – External system invokes IBM Order Management REST service. It also receives the response from OMS and handles the response for both success and failure scenarios.
You need to perform a whitelisting activity to configure mutual trust between the systems being integrated. That is, the IP addresses of the systems which will be sending data to IBM OMS need to be whitelisted (because OMS will process only data sent from whitelisted IP addresses). Similarly, as required, OMS IP addresses need to be whitelisted at the other end.
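Conceptually, whitelisting amounts to an allow-list check like the sketch below. In IBM Order Management this enforcement happens at the infrastructure level rather than in application code; the class here is purely illustrative, and the IP values in the usage are made up.

```java
import java.util.Set;

// Illustrative only: requests are processed only when the caller's IP
// address is in the configured allow-list.
public class IpWhitelist {
    private final Set<String> allowed;

    public IpWhitelist(Set<String> allowed) {
        this.allowed = allowed;
    }

    // True only for callers whose address was explicitly whitelisted.
    public boolean isAllowed(String clientIp) {
        return allowed.contains(clientIp);
    }
}
```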
Considering all the systems with which IBM Order Management is being integrated, you also need to analyze the various types of access required by the external systems. Based on the analysis, you should design and configure appropriate "integration user groups" along with the "integration users". Finally, you need to configure API security to secure access to APIs and services.
Exception handling in IBM Order Management can be implemented as required. We have not experienced any fundamental difference in the way integration-related exceptions are handled in IBM Order Management.
OMS asynchronous integration allows reprocessing errors through both the Exception Console user interface (UI) and an API. To enable reprocessible exceptions, in the asynchronous SDF service, configure the asynchronous components as reprocessible in case of exceptions (by selecting the Is Reprocessible checkbox). With this, when an error occurs during processing, the error is visible in the Exception Console. If required, you can review the error in the console and resend it for reprocessing after fixing the underlying issue. For example, if an item feed to OMS fails for a few items due to locking on an OMS table, the failed inputs can be reprocessed from the Exception Console UI or from custom code. Also, there can be a number of reasons for integration errors, like JMS being down, a network error, a timeout, etc. Depending on the scenarios you want to handle, you can reprocess the exceptions through custom code.
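The reprocessing idea can be sketched as follows: failed inputs are parked in an error store and resent later, and inputs that fail again stay parked. This is an illustrative model only — the names are hypothetical, and this is not the OMS Exception Console API.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Predicate;

// Illustrative model of reprocessible exceptions: record() parks a failed
// input; reprocess() resends each parked input and keeps the ones that
// still fail for a later attempt.
public class ReprocessibleErrors {
    private final Deque<String> errorStore = new ArrayDeque<>();

    public void record(String failedMessage) {
        errorStore.add(failedMessage);
    }

    // Resend each stored message once; return how many succeeded.
    public int reprocess(Predicate<String> processor) {
        int succeeded = 0;
        for (int i = errorStore.size(); i > 0; i--) {
            String msg = errorStore.poll();
            if (processor.test(msg)) succeeded++;
            else errorStore.add(msg);   // still failing, park it again
        }
        return succeeded;
    }

    public int pending() {
        return errorStore.size();
    }
}
```

In the item-feed example from the text, the inputs that failed due to table locking would be recorded, and a later reprocess run (after the lock is released) would clear them.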
For synchronous inbound integrations, you need to model appropriate errors/exceptions (code as well as description) and share the same with the external system. For the outbound ones, you need to handle the possible errors from the external system.
The external system with which IBM Order Management is being integrated can be a cloud system or an on-premise one. While implementing the integration, how the external system has been deployed does not matter from the OMS point of view; that is, the tasks on the OMS side (like whitelisting the external system's IP addresses) remain the same.
As far as the integration approach is concerned, considering the relatively large number of systems in commerce enterprise architectures, it is preferable to integrate IBM Order Management with external systems through an ESB and API gateways.
For more information about integrating IBM Order Management with external systems, you can visit this link.
This is what we have today on integrating IBM Order Management with external systems. With this, we end our blogs in the OMS-on-Cloud stream. Also, we are planning another series around specific OMS integration scenarios, like Integrating with ERP for Order to Cash and Common Marketplace Integrations. Stay tuned for our next blog!
Referring to our second article on Developer Toolkits, this is our third article from the OMS-on-Cloud stream in the WCE Practitioners Lounge.
By going through this article, readers can expect to learn how to install and deploy customizations and configurations for IBM Order Management on cloud.
Building and Deploying Custom jar Files
After the developers complete the coding, they check in the code to the project code repository. Project-specific automated scripts build the custom jar files from the checked-in code. To deploy a built jar file, as the first step, the file should be placed in the drop box suggested by the devops team. For accessing the drop box using SFTP tools, the devops team shares the URL and port along with the key file.
For deploying the custom jar file,
- Login to IBM UrbanCode Deploy (UCD) tool
- As shown below, go to Components and click the component that has been created for your project to deploy custom jars
- Say you built the /opt/custom/Custom17.1.jar file. Go to the Versions tab and check whether Custom17.1.jar has been updated in the list of jars.
- If not, as shown below, click 'Import New Versions' and update with the latest version that was built.
- Go to the Applications Tab and click on your order management system (OMS) Application Environment (where you want to deploy this jar). Click the play button. This opens Run Process popup of UCD tool. As shown below, select Build Customized Runtime as the process to run.
- Once the process is submitted, you should see the success messages shown below
- After building the customized runtime, the deployed OMS application needs to be refreshed. To do that, open the Run Process popup.
- As shown below, select the 'Update OMS Application' process from the Process dropdown, click 'Choose Versions' to choose the version which was built using the Build Customized Runtime process, and then click Submit.
Exporting CDT and Viewing Differences
To deploy OMS configurations, you need to export the CDT first, from the master configuration (MC) environment. To export the CDT, open the Run Process popup to run the 'Export CDT XML' process. The CDT export will fail if ydkprefs.xml is not defined correctly, so, before running the process, make sure that ydkprefs.xml is up-to-date. If required, you can check the details of the 'Export CDT XML' process using the 'Download All Logs' option (the downloaded zip file will have stdout.txt, which will have the details of the failure if the CDT export fails), or the download log options on each executed step.
After exporting the CDT successfully, the next step is to compare the exported CDT with its previous versions. The comparison ensures that only intended changes are included in the latest CDT. To do that,
- Open Run Process popup from your development OMS application.
- As shown below, click on Components tab, select your OMS config component, and click Versions tab. Select the latest CDT version and then click on Compare link.
- This will open the following popup, where you select the version with which the comparison should be made:
- Once you click the Submit button on Compare Versions, you should see the differences, as shown below.
- Click on Compare to see the details of the differences, as shown below.
After exporting the CDT XMLs with the correct and complete changes, the next step is to import them into your development environment. For that, open the Run Process popup from your development OMS application. As shown below, select 'Import CDT XML' from the Process dropdown:
Click Choose Versions. This will open a popup, as shown below, which lists the CDT versions available. Current Environment Inventory shows the last applied CDT on the environment. Click the Add button for the MC version to deploy.
Restart the Application server after the CDT process. For more information on restarting the application server, you can refer to this link.
Starting and Stopping Agent and Integration servers
IBM UCD provides processes for starting and stopping Agent and integration servers. For more information on the same, please refer to the information here.
After starting a server, one can verify whether the server has started successfully. This can be done by checking the logs (through the export logs option in IBM UCD) or by checking the System Management Console, which is accessible from the OMS application console.
Exporting and Archiving logs
Using UCD, you can export application, agent, and integration server logs. Please refer to this page for more information.
The logs collected might span a few days, so the log file may be large. If required, you can request the devops team to reduce the period of time for which logs are collected.
UCD also provides option to archive the logs on the Run Process popup, as shown below.
This is what we have today on installing and deploying customizations and configurations for OMS on cloud. We will share more experiential information in our upcoming blogs in the OMS-on-Cloud stream. Stay tuned for our next article in the series, which will be on Integration with External Systems!
Referring to our first article on IBM UrbanCode Deploy, this is our second article from the OMS-on-Cloud stream in the WCE Practitioners Lounge.
By going through this article, readers can expect to learn what the IBM Order Management developer toolkits are, how one can access them, and a few of our learnings on the new Integrated Development Toolkit.
While implementing IBM Order Management, developers make both code and configuration changes in the development environment before promoting the changes to higher environments like QA and UAT. To help developers, a developer toolkit is provided. The toolkit can be downloaded from the UrbanCode Deploy (UCD) dashboard and installed on a local machine or VM to provide an exclusive environment for development.
Starting IBM Order Management version 18.2, there are two types of developer toolkit:
- Developer Toolkit
- Integrated Development Toolkit
Before 18.2, only IBM OMS Developer Toolkit was available.
The OMS Developer Toolkit has IBM DB2, IBM WebSphere Application Server, and IBM MQ as prerequisites. After following the toolkit installation steps, the EAR file must be built and deployed to access the environment for development.
The OMS Integrated Development Toolkit is based on Docker, with Docker and Docker Compose as the prerequisites. There is no need to install a database, application server, or messaging server; all of these come bundled with the toolkit. After completing the installation through Docker, the development environment is available. There is no need to create and deploy the EAR, as that is part of the steps executed by Docker.
Downloading Developer Toolkits
Perform the following steps to download IBM Order Management Developer Toolkits from UCD. For more information on UCD access, please refer to On-boarding and Access section here.
- Select OMS-Application from the list of available applications in the Application tab of UCD
- Go to Environment tab, and click play button against the OMS DEV environment
- On the popup screen, as shown in the screenshots below, select 'Extract IBM OMS Developer Toolkit' from the Process dropdown for the Developer Toolkit, or select 'Extract IBM OMS Integrated Development Toolkit' for the Integrated Development Toolkit
- If you want to include existing customization, check "Include Customization" checkbox
On clicking Submit, execution of the 'Extract IBM OMS Developer Toolkit' or 'Extract IBM OMS Integrated Development Toolkit' process starts. You will be able to see the progress in UCD as shown below.
After the process is completed, the toolkit zip file will be available on the FTP location for download.
Installing Developer Toolkit
This is the non-integrated developer toolkit. After meeting the prerequisites mentioned above and unzipping the downloaded toolkit (devtoolkit.zip), the following steps need to be executed:
- Install JDK, and set environment variables accordingly
- Rename devtoolkit_setup.properties.sample to devtoolkit_setup.properties in the extracted devtoolkit folder, and set mandatory properties
- Run devtoolkit_setup script to install the toolkit
- Build EAR file and deploy it on application server
You can find detailed information on the steps above at the Set up the IBM Order Management developer toolkit environment section.
Installing Integrated Development Toolkit
This is the Docker-based toolkit. The version 18.2 Integrated Development Toolkit is supported on Linux (CentOS, RHEL). The following commands can be used on a CentOS 7 VM (recommended) to install the prerequisites.
i. yum install docker
ii. systemctl start docker
iii. systemctl status docker
iv. systemctl enable docker
Docker Compose Installation
i. sudo curl -L https://github.com/docker/compose/releases/download/1.21.2/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
ii. sudo chmod +x /usr/local/bin/docker-compose
If the unzip utility is not available, you can install it by running the "yum install unzip" command.
After installing the prerequisites, the following preparation steps need to be executed:
- Upload devtoolkit_docker.zip file to Centos VM.
- Run following command to extract toolkit setup files: unzip devtoolkit_docker.zip && cd compose && chmod +x *.sh
- In the compose folder, the om-compose.properties file holds the default values of the properties used during installation; change the properties based on your needs.
The following options, available in the executable file (om-compose.sh), are useful for the IBM Order Management Integrated Development Toolkit installation.
- setup <optional:cust-jar>: Setup a fresh docker based integrated OM environment
- setup-upg <optional:cust-jar>: Upgrade existing environment to new images
- update-extn <cust_jar>: Update OM environment with the latest customization jar
- extract-rt <extract_dir>: Extract a copy of runtime directory on your host machine
- start|stop|restart: Start/stop/restart your docker environment
- wipe-clean: Wipes clean all your containers, including volume data
- license: Shows the various license information
- update-mq-bindings <queue_name>: Update your MQ bindings with the queue
The following steps need to be executed for installing the Integrated Development Toolkit:
- Create the folder mentioned in the MQ_JNDI_DIR property. The default value is /var/oms/jndi; if you are not changing the default, create the /var/oms/jndi folder.
- Go to folder where toolkit zip file is extracted and then go to compose folder
- Run following command to start the installation: ./om-compose.sh setup
On installation, four Docker containers are created: OMS Runtime, DB2, Liberty, and MQ. There is no need to create and deploy smcfs.ear (the installation steps take care of this). You can check the logs of the Liberty container to check the deployment status of smcfs.ear. Once deployment is complete, you can access the OMS environment with the following URL: http://<IP>:<Port>/smcfs/console/login.jsp
The 'Set up a new docker-based developer toolkit environment' section here has detailed information on the installation steps mentioned above.
Post Developer Toolkit Installation
After installing the developer toolkit, the development environment will be available. However, the development activities should be performed on the latest customizations and configurations. For this, you will have to generate CDT XMLs from UCD and then import these XMLs into your development environment. Also, export the existing customizations from the IBM Order Management environment to the development environment if you downloaded the IBM OMS Integrated Development Toolkit, or if you did not check 'Include Customization' while downloading the IBM OMS Developer Toolkit. Once the development environment is in sync with the existing customizations and configuration, you are all set for development!
This is what we have today on the developer toolkit for OMS on cloud. We will share more experiential information in our upcoming blogs in the OMS-on-Cloud stream. Stay tuned for our next article in the series, which will be on Deploying Customizations!
Modified on by Anbu Ponniah
Authored by : Raghuveer P Nagar, Senior Architect for IBM Order management
By going through this article, readers can expect to learn what IBM UrbanCode Deploy (UCD) is, why one needs it for IBM Order Management deployments, how one can access it, how it is useful for day-to-day IBM Order Management deployments, and a few of our learnings as an implementation team.
IBM UrbanCode Deploy
IBM UrbanCode Deploy is a tool for automating application deployments (in both on-premise and on-cloud scenarios). It simplifies and standardizes the application deployment process across environments, throughout the development and maintenance of the application. Additionally, it offers clear visibility (into what is running where, differences across environments, who changed what, etc.) and rollbacks of deployed applications.
IBM Order Management
IBM currently offers its multi-channel commerce Order Management System (OMS) as both on-premise and on-cloud software. The on-premise software is commonly known as Sterling Order Management. IBM Order Management is a software-as-a-service (SaaS) offering, that is, a cloud-based OMS service (on the IBM SoftLayer infrastructure). For managing IBM Order Management deployments, UCD is used as the deployment automation and self-service tool.
On-boarding and Access
The IBM Order Management environment cannot be accessed directly. To access the environment, that is, to access UCD and manage the OMS application, you first need to connect to a jump host. The jump host connection provides the ability to pass through firewalls that would otherwise block direct access. Refer here for the detailed process of connecting to the IBM Order Management environment using the jump host.
Once you are connected to the IBM Order Management environment, you can access UCD. In addition to deploying the application, UCD is used throughout the implementation process, such as for deploying changes and fixes into the environment. Refer to this link for more information on UCD access.
After accessing and logging into UCD, you need to download the developer toolkit environment. The developer toolkit is a replica of the IBM Order Management environment, which enables you to make the required customizations in your development environment. Go here for the step-by-step process of downloading the developer toolkit from UCD.
Key OMS Deployment Features in UCD
UCD provides insights into IBM Order Management deployment audits, status, and history. It helps in deploying the IBM Order Management application quickly and also provides a wide variety of process management options to administer the deployment. The following are a few of the key OMS administration processes which can be executed through UCD:
- Deploying custom code, like deploying custom jar files (say, the jar files having custom User Exit implementation and classes for Web Services)
- Exporting Configurations/CDT, which includes exporting CDT from one environment (say, the master configuration environment) and comparing it with another environment (say, the development environment)
- Importing Configurations, that is, applying CDT exported from another environment
- Starting and stopping application server
- Managing Agents, including starting, stopping and triggering agents
- Managing Queues, such as configuring a JMS queue and clearing contents of a Queue
- Managing OMS Application Server Logs, like exporting and configuration driven archiving of the logs
- Installing 3rd Party Certificates, for various integrations and implementation features
- Importing Customer Overrides (in customer_override.properties file) to the database
The processes mentioned above remain same for OMS related product offerings like IBM Call Center for Commerce as well.
While working with UCD for IBM Order Management, we have experienced many common application deployment and management issues. The following is a list of a few of those, with tips based on our experience:
- Automatic Process Monitoring – UCD has an email configuration to report the status of a particular process through email. For example, it can send emails on agent startup failures and build failures. As one of the first things, one should ensure that this configuration is complete.
- Application/Code Build Failures – While building custom code in the development environment, you may get errors like “Not able to access the file” and “Not enough memory” (in the build logs), say when the build involves compilation of custom Java classes. It is important to know that resolving these issues needs access to the installation directory, which only the cloud support team has. So, one should raise a cloud support ticket on facing such issues.
- Infrastructure Related Errors – While running a process, an infrastructure issue may be reported, say "Database connection parameters incorrect". This happens during a time window when the password has been reset in the database server, but an implementation script is run before the same password is updated in the deployed application. One should raise a cloud support ticket in such scenarios as well.
- UCD Options – For administrators who are new to UCD, it is a good idea to spend some time becoming aware of the available OMS deployment options by exploring the various user interface (UI) options of UCD. This may sound unusual, but think of the productivity lost when you reach out to cloud support or go through the documentation just to learn that the option is already available in the UCD UI!
This is what we have today on the IBM UCD tool for OMS deployments on cloud. We will share more experiential information in our upcoming blogs in the OMS-on-Cloud stream. Stay tuned for our next article in the series, which will be on Developer Toolkit.
Welcome to the WCE Practitioners Lounge
I am excited to announce the launch of the WCE Practitioners Lounge. This is a closed group of highly skilled senior consultants and architects with decades of experience configuring, implementing, performance testing, and assuring complex commerce, marketing, and supply chain solutions for customers across the world.
What can you expect?
This community will share best practices, recipes, troubleshooting guides and sample code useful for solutions implemented across various Watson Customer Engagement product lines – Watson Commerce, Watson Marketing and Watson Supply Chain. You can expect blogs and forum discussions across a variety of topics covering not just IBM products, but also associated 3rd party offerings, underlying technologies and stack offerings.
What is the take away for readers?
Readers will get to know how the latest features are used in real projects, how requirements are addressed by various products/offerings, how to deal with devops, how to tune performance, and much more. So, if you are a developer, technical lead, or an architect, this will empower you with knowledge that you can use in your very next engagement. Of course, many readers will relate to the things we share in the blog and can reflect back on their own projects. Note that if you are more interested in the strategic "why" part of what we are talking about, you should head to our sister blog, the Thinkers Lounge group on LinkedIn.
How can you participate?
To begin with, please share your comments, include our blogs in your social circle, suggest further topics, and contact authors directly for discussions. We plan to open up forums for deep-dive discussions. At this time, authors are limited to a hand-picked set of practitioners from within the IBM WCE BU. However, we will re-evaluate our charter every quarter to ensure the best knowledge is shared through this channel.
Who are the current authors?
Lead editors and authors are (in alphabetic order): N Krishnan, Raghuveer Nagar, Sudhanshu Shekar Sar and Siddharth R Rao. They have a team of authors working with them. Each blog will call out contributions and link to their profiles.
Look forward to the first blog, on UrbanCode-based deployments, on 25th June. Further blogs should appear every other Monday thereafter.
(Executive Architect and Distinguished IT Specialist, Watson Customer Engagement, IBM India)
Modified on by Shweta Gupta IBM
In Part 1 of this series, I introduced the topic of security around eCommerce from the perspective of being in a hot air balloon (which would earn you the 2,000-floors lifetime badge if you wear a Fitbit). You will want to read Part 1 to get a glimpse of the CISO's role in your organization's security and an introduction to IBM's security services. However, if you want to dive directly into securing your WebSphere Commerce application, read on.
The WebSphere Commerce security model consists of setting up access control through authentication, authorization, and policies. It provides support for some of the main security standards applicable to the eCommerce industry. It also covers protection against common attack types. However, as outlined in Part 1, malicious attacks continuously evolve, and therefore a business needs to evaluate its advanced threat management capability on an ongoing basis.
In this Part 2 of the series, I am going to cover four areas from the application perspective.
- Access control
- Hardening against common attack types
- Data security
- Security Standards
Authentication: WebSphere Commerce issues an authentication token. This token is associated with a user on every subsequent request after the authentication process. Account policies for passwords, account lockout, and timeouts govern authentication.
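Conceptually, the token flow can be modeled as in the sketch below: a token is issued at logon and looked up on each subsequent request to associate the request with a user. This is an illustrative model only — WebSphere Commerce manages its real token internally, and the class and method names here are invented for the example.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Illustrative token registry (not the WebSphere Commerce implementation):
// issue() at logon, userFor() on every subsequent request.
public class TokenRegistry {
    private final Map<String, String> tokenToUser = new HashMap<>();

    public String issue(String user) {
        String token = UUID.randomUUID().toString();
        tokenToUser.put(token, user);
        return token;
    }

    // Returns the user associated with the token, or null if unauthenticated.
    public String userFor(String token) {
        return tokenToUser.get(token);
    }

    // A timeout policy would call this when the session expires.
    public void invalidate(String token) {
        tokenToUser.remove(token);
    }
}
```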
Authorization: Access control policies are leveraged for enforcing authorization in Commerce.
Let us look at some common access control areas that are prominent from an eCommerce perspective.
- Logon - Enforcing too many failed attempts: Set up an account lockout policy for the different user roles in WebSphere Commerce. This enables enforcement of a lockout in case there is a malicious attempt to break a password. There is also a deterrent in setting up delays between consecutive unsuccessful attempts.
- Logon - Prevent privileged users from logging in externally: It is desirable to disallow external access for the privileged users of the WebSphere Commerce application, like the site administrator or a customer service representative. This need is primarily from a security perspective, where one would not want a hacker to gain privileged access and act with malicious intent. WebSphere Commerce V8 Mod Pack 1 provides a security configuration in wc-server.xml that can be enabled, after which custom headers can be used to define roles to disallow the logon. There is further customization one can do to address the security needs for logon. For earlier versions of WebSphere Commerce (v7, v8), before this feature was introduced, the article provides sample code to achieve the functionality.
- Securing the WebSphere Commerce search server: This can be achieved at two levels, WebSphere Application Server (WAS) administrative security or WAS application security. WAS administrative security is recommended, as application security is more expensive in terms of performance.
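The lockout-with-delays idea from the logon item above can be sketched as follows. This is an illustrative model only — in WebSphere Commerce the policy is configured, not hand-coded — and the threshold and delay values are arbitrary assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative lockout policy: lock an account after N consecutive failed
// attempts, and grow a delay deterrent between consecutive failures.
public class LockoutPolicy {
    private final int maxAttempts;
    private final long baseDelayMs;
    private final Map<String, Integer> failures = new HashMap<>();

    public LockoutPolicy(int maxAttempts, long baseDelayMs) {
        this.maxAttempts = maxAttempts;
        this.baseDelayMs = baseDelayMs;
    }

    public void recordFailure(String user) {
        failures.merge(user, 1, Integer::sum);
    }

    public boolean isLockedOut(String user) {
        return failures.getOrDefault(user, 0) >= maxAttempts;
    }

    // The wait before the next attempt grows with each consecutive failure.
    public long nextDelayMs(String user) {
        return baseDelayMs * failures.getOrDefault(user, 0);
    }

    // A successful logon clears the failure count.
    public void recordSuccess(String user) {
        failures.remove(user);
    }
}
```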
Protecting custom enterprise beans, data beans, controller commands, views: Primary resources should be protected. If a user is allowed to access a primary resource, the user would be allowed to access its dependent resources.
Access Control and Data Beans - The following article discusses the best practices for access control with WebSphere Commerce and how to use Data Beans securely
Hardening against common attack types - A view on the common attack types and guidance on handling them
- Changing URL params to break the system: WhiteList data validation on URLs is disabled by default. When enabled, it restricts URL processing so that only URLs conforming to a configured regular expression are allowed. This means you can block malicious attempts against the site that change URL params such as store-id, catalog-id, and so on.
- Cross-site scripting protection is enabled by default for the Stores web module. It is important that all parameters are encoded by developers during the development phase and tested thoroughly. The prohibitedChars rule is provided by default; however, it cannot be comprehensive. The blacklist is used as a fallback for cases where the store is not encoding the output properly (c:out). The problem with blacklists is that hackers keep finding ways of bypassing them using different encodings and escape characters. It is not possible to come up with a completely robust blacklist solution (OWASP calls blacklists "fragile"), and this is more complex to apply to a REST body. It is recommended to review the OWASP Cross-site Scripting site for your project implementation. The project management and security architect should provide guidelines on preventing, protecting against, and testing for cross-site scripting.
- Mitigating threat of denial of service attacks: It is recommended to set the boundary for the product search results by specifying maximum page size and result size. Similarly, one can set allowable ranges for other business objects like the shopping cart.
- Enable Clickjacking Protection – Clickjacking is when an attacker is able to trick you into believing that you are on a particular site/page while redirecting you to do something else. This can be achieved via several techniques; however, we will focus on the mechanism where iFrames are exploited to overlay your desired site. The article talks about the X-Frame-Options header and Content Security Policy.
- Strengthen WebSphere Commerce database encryption to reduce the chances of a successful brute-force attack by migrating from Triple DES to AES-128 encryption
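To make the WhiteList validation idea above concrete, here is a minimal sketch of regex-based parameter validation. The parameter names and patterns are hypothetical examples, not the product's actual configuration, which lives in the WebSphere Commerce configuration files.

```java
import java.util.Map;
import java.util.regex.Pattern;

// Sketch of whitelist-style URL parameter validation: a parameter is only
// accepted if it is known and matches its configured regular expression.
// Parameter names and patterns here are illustrative assumptions.
public class ParamWhitelist {
    private static final Map<String, Pattern> RULES = Map.of(
            "storeId",    Pattern.compile("\\d{1,10}"),        // numeric ids only
            "catalogId",  Pattern.compile("\\d{1,10}"),
            "searchTerm", Pattern.compile("[\\w\\s-]{1,100}")  // no markup or quotes
    );

    /** Returns true only when the parameter is known and matches its pattern. */
    public static boolean isAllowed(String name, String value) {
        Pattern p = RULES.get(name);
        return p != null && p.matcher(value).matches();
    }
}
```

The same approach generalizes: any parameter not in the whitelist, or not matching its pattern, is rejected before it reaches command processing.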
- Data security practices for integrating eCommerce system with Order Management System
1 - Use of Cryptographic Algorithms and Key Lengths: NIST announced its Special Publication (SP) 800-131A in 2011, which recommends transitioning the use of cryptographic algorithms and key lengths.
The standard requires adherence to the following:
- Digital signatures must use at least the SHA-2 hashing algorithm; WebSphere Commerce v8 uses SHA-2
- Cryptographic keys must adhere to a minimum key strength of 112 bits. WebSphere Commerce provides a detailed procedure for migrating encrypted data in the database to use AES 128-bit encryption.
- Enable TLS 1.2 for SSL and disable protocols below TLS 1.2 for the web server, any integrations with LDAP, and outbound email. WebSphere Application Server must also adhere to TLS 1.2
- All certificates with RSA or DSA keys must be 2048 bits or higher. Certificates with elliptic curve keys shorter than 160 bits must be replaced with longer keys. All certificates must be signed by an allowed signature algorithm. For example, SHA-256, SHA-384, or SHA-512
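The AES-128 point above can be illustrated with the standard JDK crypto APIs. This is only a sketch of the primitive itself – WebSphere Commerce has its own documented migration procedure for encrypted database data, and this class is not part of it.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;

// Sketch of AES-128 encryption with the standard JDK crypto APIs, to make the
// "minimum 112-bit key strength" point concrete. Illustrative only.
public class Aes128Demo {
    public static SecretKey newKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);  // AES-128: a 128-bit key, above the 112-bit NIST floor
        return kg.generateKey();
    }

    public static byte[] encrypt(SecretKey key, byte[] iv, byte[] plain) throws Exception {
        Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
        c.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
        return c.doFinal(plain);
    }

    public static byte[] decrypt(SecretKey key, byte[] iv, byte[] cipherText) throws Exception {
        Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
        c.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv));
        return c.doFinal(cipherText);
    }
}
```

Note that CBC mode needs a fresh random IV for each encryption; reusing IVs weakens the scheme regardless of key length.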
2 - Federal Information Processing Standards publication 140-2 (FIPS 140-2) covers the security standards that are required for cryptographic modules. The WebSphere Commerce documentation lists the steps that are needed for Commerce to run on a WebSphere Application Server and on HTTP servers that are in FIPS 140-2 mode.
3 - The Payment Card Industry (PCI) Data Security Standard (DSS) - The PCI DSS Version 3.0 standard lists 12 requirements which retailers, online merchants, credit data processors, and other payment-related businesses must implement to help protect cardholders and their data. The requirements include technology controls (such as data encryption, user access control, and activity monitoring) and required procedures. These must be implemented when cardholder data resides with the business. The WebSphere Commerce Knowledge Center documents the procedure ONLY for the WebSphere Commerce application to be PCI compliant; there are many other aspects that the business needs to take care of as part of compliance.
The security of eCommerce is a broad topic that requires the Chief Information Security Officer (CISO) at the organization to work at various levels. This was covered in Part 1.
This article is an attempt to help the reader get a glimpse of the application security aspects of a commerce site. I have tried to categorize security from the viewpoints of access, vulnerabilities, and data protection, and to provide a capability view of WebSphere Commerce.
I have also created a template spreadsheet that one can refer to in their project as a starting point to track the various headings, their status, and the desired status. This template should be enhanced/tailored to suit your security posture.
2016-July-WCS-Security-Template-v1.xlsx - for Project Managers to capture actions. Do provide your feedback if you use it, to help enhance it further.
Modified on by Shweta Gupta IBM
Who owns and understands the security posture for the organization?
Let me start with an assumption: if you are a WebSphere Commerce architect, specialist, or developer, you will agree with me that it would be foolish to attempt to respond to the loaded question – ‘Is our eCommerce site safe?’
This is something that falls under the purview of a Chief Information Security Officer (CISO). If you are interested in seeing a sample of the job responsibilities of a CISO profile in a commerce organization, have a look at this LinkedIn entry
The infographics on cyber-attacks from the Threat Intelligence Report
The IBM X-Force Threat Intelligence Report published by IBM shows the 2015 security incidents; the retail industry was the second most attacked industry.
The components of a commerce shop that would typically need a CISO’s attention
There are several areas of a commerce shop that need to be at a desired level on the security maturity model.
- To start with, there is a need to have a governance structure in place. This would include the processes established in order to manage and report on compliance
- Focus on the people aspect of the business, which covers identity and access management
- Infrastructure security would cover several pieces: networks, servers, and the endpoints
- Protection of data is key, and this may entail industry standards and compliance
- Application security will include all the different applications, and the security levels would differ based on their exposure to the internet
How does IBM help companies and businesses be secure, and how does this apply to eCommerce?
IBM has a strong security product portfolio, business and product offerings. There is also an online set of Quiz Questions to help an organization evaluate their understanding of the security risk.
The following areas of focus will help the security stakeholders to engage with IBM
Your organization’s security program - Understanding risks, establishing the right policies and programs and having a strong, cohesive team to implement changes is critical to building a next-generation security environment. IBM® Security can help you design an integrated framework to simplify the challenge of securely protecting the enterprise.
Tackle Advanced Threats – You are uncertain about advanced threats and how to be prepared to manage them. IBM® Security can help you view the security landscape with a wide-angle lens to thoroughly understand the origins and distinctive features of attackers. Fraud, endpoint and data protection. Security intelligence and analytics. Incident response. They're all part of a comprehensive approach using extensive research and detective work to pinpoint, outsmart and stop attackers.
Protect critical assets – You have an internet facing business, with potentially millions of customers, partners, vendors accessing the systems. IBM® Security can help you leverage analytics and insight for rapid detection and response to protect critical systems and records.
Further, IBM provides services where security experts will assess all aspects of an organization against a security maturity model. This allows the CISO to understand the organization's current security posture and the desired posture for each of the components mentioned above, like people, data, and infrastructure. This is an important step in securing the organization and business against cyber security threats.
Security of the WebSphere Commerce empowered storefront
The CISO’s governance and strategy will form the backbone of the eCommerce security. This will ensure that each of the applications follow the guidelines and demonstrate adherence to the security framework.
The eCommerce program manager will own the security aspects of the WebSphere Commerce implementation. It will therefore be the individual responsibility of every member involved in the implementation to understand the application guidelines for WebSphere Commerce and consider them at design, development, testing and deployment phases.
The product Knowledge Center has a dedicated topic on ‘securing’ that details authentication, authorization, and session management. There is also a section covering the security standards from the ‘National Institute of Standards and Technology’ (NIST), which provide guidance on the use of stronger cryptographic keys.
The Payment Card Industry (PCI) Data Security Standard (DSS) is applicable if your eCommerce system captures and holds credit card and payment-related data. The product documentation provides a summary of the specific configurations that are required in a WebSphere Commerce implementation in order to comply with PCI-DSS.
Having pointed the readers to the Knowledge Center for a lot of product documentation on the topic, I am also writing a Part 2 of this series to cover a simplified version of the WebSphere Commerce product’s security considerations. Look out for the second part for a quick and clear insight into the areas of access control, hardening against common attack types, and data security.
Credit: Thank You Sreekanth for your inputs and review on the topic.
Sreekanth Iyer is an Executive Architect with the IBM Cloud (CTO Office) team and works on defining the technical strategy and development of IBM Cloud Security. He has over 20 years of industry experience and has led several client solutions for Telco, Electronics, E&U, Govt, BPO & Banking industries. He is an Open Group Certified Distinguished Architect, IBM Master Inventor, Certified Ethical Hacker and Member of IBM Academy of Technology.
WebSphere Commerce Development Environment
The WebSphere Commerce developer environment comes with Rational Application Developer, a WebSphere Application Server runtime with an embedded web server, and an Apache Derby database. While options are available to switch your database to DB2 or another RDBMS, if you go with the default as a developer, you will want to access the database using a client with some GUI capabilities. So what are our options?
- Go with what is available in the product – you either have your server running and access the DB using the built-in web interface, or use the interactive SQL scripting tool “ij” for offline access; neither of which is particularly attractive for all uses. For instance, browsing available tables and their schema information, accessing the history of previously executed commands, and exporting query results are all cumbersome activities. On top of this, the current setup of Apache Derby allows only one active connection at any time – so if you are connected through ij, your server cannot even start, and if your server is running, ij will not be able to connect.
- Switch to a full feature DBMS which has proprietary or open source tools available. For example, DB2 comes with a control center that allows you to connect and browse multiple local and remote database schema. You may also use RAD’s “Database Development” perspective to create and manage connections to local and remote databases.
- Setup Rational Application Developer to connect and browse your derby database
- Setup a custom tool (such as SQuirreL) to connect and browse your derby database.
In this blog I shall cover the last two options – setting up your Rational Application Developer to connect to and browse the included Apache Derby database, and later the same using SQuirreL.
Steps for connecting to Apache Derby using Rational Application Developer.
Launch your WebSphere Commerce toolkit
Click on Window > Open Perspective > Other. In the resulting pop-up, select “Database Development” and click OK
This opens the database development perspective with a data source explorer on the left side.
Right click on Database Connections and choose New.
In the Connection Parameters window, choose Derby in the “Select a database manager” section and type the following information in the “properties” section:
For example, C:\WCDE80\db\mall and jdbc:derby:C:\WCDE80\db\mall respectively.
(Note: A username is not needed for the default database packaged with the WSC toolkit, so this string can be anything.)
Next, in the JDBC driver section, choose “Derby 10.0 – Other Driver default” from the drop-down. Then click on the button with three dots on the right to open the “Specify Jar List” window. Here, if the list is empty, click on “Add jar/zip”; if there is a jar already present, choose the jar and click on the “Edit jar/zip” button. Then browse and choose “derby.jar” from your <TOOLKIT>\lib\ folder. Tip: You may choose any of the embedded driver options available, since it is really the JAR configured against that driver that matters. Choosing the right version of the jar is important, as the tool will not allow connecting to databases created with a newer version of Derby using an older version of the drivers. Hence, choosing the driver that is present within the toolkit keeps the versions the same.
Click OK to come back to the connection parameters window. Here use “Test Connection” to test the connection – this should be successful. If not, check the URL and make sure it is connectable from the command line using the ij interface.
Click Finish. Now you should see the connection in the “Data Source Explorer” section on the left. You can right click and choose connect to connect and browse the schema. You can also right click on the connection and choose “New SQL Script” to type in your own SQLs and choose and execute them. You can keep these handy script files around to use when you need. You can also extract and load data from one database to another (works best if done table by table) – so if you need to get data for specific tables, this is handy to have. And, if at all you need to develop new access beans, they are just a right click away too.
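As an aside, the embedded Derby JDBC URL used in these steps follows a simple pattern – `jdbc:derby:` followed by the database folder. A tiny helper (the paths are the toolkit defaults mentioned above) makes this explicit:

```java
// Small helper to build the embedded Derby JDBC URL from the toolkit's
// database folder. Paths are illustrative (C:\WCDE80 is a toolkit default).
public class DerbyUrl {
    public static String embeddedUrl(String dbFolder) {
        return "jdbc:derby:" + dbFolder;
    }

    // Typical usage with java.sql (requires derby.jar on the classpath):
    //   Connection conn = DriverManager.getConnection(embeddedUrl("C:\\WCDE80\\db\\mall"));
    // Remember: embedded Derby permits a single active connection, so stop the
    // server (or disconnect ij) before connecting from RAD or SQuirreL.
}
```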
Steps for connecting to Apache Derby using SQuirreL.
This one is mostly following the steps documented in Using SQuirreL SQL Client with Derby
The only thing we need to make sure of is to choose the right version of derby.jar in the Extra Class Path of the “Apache Embedded Driver” configuration:
A note here: Do just the embedded driver setup; that works with about 10 minutes of effort. Setting up the client driver requires a Derby Network Server and some involved changes to the JDBC connection setup in our toolkit. That is also the prerequisite for allowing multiple connections to the database. It is not covered in this blog.
There you go folks – a basic “how to” blog for a change. This was triggered because a new member of my team asked “how do I connect to our project’s DB locally?” and was not convinced with the options available out of the box.
Modified on by Anbu Ponniah
WebSphere Commerce Toolkit – Virtual Machine, Virtual Hard Disk and Docker
WebSphere Commerce toolkit is an essential development tool for customizing WebSphere Commerce application – to extend or override business logic, customize store flow, extend tool capabilities etc. It is built on top of Rational Application Developer which itself uses Eclipse at its core. It provides a derby database with WebSphere Commerce data model bootstrapped with WSC data and an integrated WebSphere Application Server runtime. It additionally provides plugin extensions needed to automate some of the commerce specific development tasks (JET comes to mind).
While the official system requirements are found <here>, 8 to 12 GB of HDD (SSD will be nice), a fairly powerful modern CPU, and 8 GB of RAM is what I have come to expect – more if you want your operational activities to be zippy.
One of the common questions asked by IT teams is how do I ensure all developers are able to setup and use toolkits that have same configuration, bootstrap data, APARs etc. Everyone is looking for quick, error free and repeatable steps for enabling developers across a team to setup their toolkit and stay up to date. In this blog I am presenting some options that I have seen or discussed, what it takes to pursue those options and call out pros and cons for each. I would like to hear about other options readers may have seen or any trouble they had encountered when pursuing these options.
Option 1: Old style trial by fire – no pain no gain
Ask every developer to install the stack on their computing device. This typically involves creating a fairly detailed document with screenshots and step-by-step instructions starting from where to download the product packages, how to run the installer, what options to choose, where to click etc. Let us face it, this is error prone. I have seen even seasoned developers miss a step and end up with a corrupt toolkit. There are always people who seem to be destined to encounter exceptions and errors more frequently than others. It is also time consuming. And if for any reason the toolkit gets corrupted, it could reset the development clock.
Pros: Simple on paper – just plan toolkit setup time for each developer in your project plan. Toolkit runs on native OS – so best performance. Also, developers will need to gain expertise on setup – they will be forced to understand how things work under the hood.
Cons: Time consuming, inefficient, error prone, lot of duplicate effort wasted, backing up is tedious, troubleshooting is tedious, requires some under-the-hood knowledge.
Tips: If this is your chosen path, then you can mitigate some of the risk by creating data and configuration bootstrap packages to bring up a basic environment through automated scripts instead of manual steps. For example, create a single zip file for all configuration changes needed (ACP, search configurations etc) and a single script that can copy files around, run acpload, dataload, build the search index etc. Standardize the folder path and versions of packages. In fact, such a bootstrap script is recommended no matter which option you choose. There is also a "central repository" setup that saves individuals from having to download software from the IBM Passport Advantage account – download once and host it in your intranet.
Option 2: Traditional Virtual Machines – share with overhead burden
VM promises a set-up-once, deploy-anywhere model – and to a large extent delivers on that promise, with a few caveats. You can have a seasoned developer or WSC administrator create the VM package and share it with others in the team. The team can just boot up the VM and become productive. This works for the most part – but the virtualization overhead needs extra processing power and memory, there may be network setup required, moving files between the VM, the host, and other developers requires additional hoops to jump through, and there may be licensing costs for your virtualization software and the guest operating system in addition to your employees’ host OS licenses.
Pros: Create once and deploy any number of times; standardization is simple – just publish a new VM image. Also, it allows folks running an OS that does not support the toolkit (Mac or Linux) to run it.
Cons: Licensing overhead, execution overhead due to full virtualization – slow performance (it is slow for me even with 4GB RAM dedicated - 6GB is when it starts breathing easy), too big (~40GB). Extra steps are needed to transfer files in and out of the virtual machine.
Tips: Even here, to roll out updates, it may be simpler to publish data or configuration load scripts and reserve publishing a new VM image only for product upgrades. Beware of the saved passwords of your colleagues – you may inadvertently update repositories under their proxies and cause confusion in the project. You need 4GB of dedicated RAM for a VM to be functional – if it is less, it starts to show. Interestingly, SSD drives do not help as much in speeding up VMs as they do with regular native installations – my guess is this is mainly due to VMs reserving and managing an entire chunk of disk space by themselves (relatively speaking – it was still faster than spinning drives).
Option 3: Lightweight virtualization – pseudo will do
Virtual Hard Disk is probably the most popular choice I have seen in my projects in the past 2 years. VHD is a Microsoft virtual hard disk format that behaves like a mountable drive with fixed or dynamic space, typically configured with a max capacity. It appears as a single file until it is mounted and is typically supported in all flavors of Windows OS. For the WSC toolkit, one can install all product packages needed for the toolkit into the mounted drive (taking care to ensure the installation manager’s repositories and license data directories are made to point into the mounted drive) and then just distribute the .vhd file. This can be as lean as 12GB in size and can be easily shared. As long as everyone mounts with the same drive letter, this works. It gives the benefits of running directly on the native OS while providing the standardized installations and bootstrap configurations of a virtual machine.
Pros: Lightweight, easy to create, standardize and distribute, no extra virtualization costs or overheads
Cons: Works for Windows only, and tied to the drive letter till death do them part. Not everything works within a self-contained VHD - DB2/Oracle DBs and web servers try to keep some important references on the C: drive or in registry entries - so this works best for an "embedded" web server with a Cloudscape DB.
Tips: Drive letter management needs some foresight if you are a partner expecting to execute multiple projects and there are only 20 to play around with (assuming A, B, C, D, E and F are needed for various drives already). Installation manager keeps some information in a data config folder that defaults to C: even if you are installing the IM in another drive. This requires some tweaking of configuration to keep installation manager specific settings within the VHD mounted drive – else you may need to install some of the packages in your local computers before you can run stuff from the mounted VHD.
Note: I have been asked before – why not Docker? Sure, Docker works – we now have Windows Server containers available on Docker. So theoretically one can install the toolkit and all necessary packages on such a container and share it. But we would not be strictly adhering to “supported” versions, as the WSC developer edition is supported only on the Professional and Enterprise editions of Windows 7 – no mention of Windows 8.1 or 10, let alone the server editions that have Docker containers available. Another option is to have Docker images for your DB and web server - which need work-arounds for VHD.
Option 4: Go solo – path for the brave
For people who understand what they want to do with a development toolkit and don’t mind going solo – several options exist. First a disclaimer – IBM does not recommend any of these. If you get into trouble while doing this, IBM may provide best-efforts (kind) help, but there is no guarantee of seeing you out of any quagmire you work yourself into. Also, you may still want to use RAD if you do BOD, plugin or tool development.
Now we have Docker containers for various pieces of the stack - Eclipse can run in a container, and so can DB2 and the Liberty profile of WAS. Imagine this – a Docker container running the same DB as your QA server installations of WSC. That would make it easy for you to extract and load all the test data for local developer use. While doing that between *nix and Windows is “fun” (tongue in cheek), with Docker you could potentially distribute the same data model and loaded data to both developers and testers! A change in the development environment might actually work the first time in the server environment! Alright, that solves the DB. How about source control and building projects – these are folders that your favorite editor can be mapped to, with source control scripts or plugins that keep them in sync with a repository. If you primarily work in the Stores project, then no sweat. OK, now you can develop and have data – how about doing some unit testing? Well, how about a Docker container with WebSphere Application Server running in it – configured to connect to your database Docker instance! Then your ubiquitous compile-and-publish tasks of Rational Application Developer need to be replaced by a build-and-deploy process – similar to how it happens on a server. It is cumbersome for Java classes – but it can be simple copies for JSPs and JS files. You may also set up web server and search server containers while you are at it. All this makes it easy for you, the super developer, to manage exactly where you want to spend your resources and how you want to test. It is fun, it can be fast, and it can be daunting for the not-so-savvy.
Pros: The fun mechanics get from taking apart a machine and piecing it together in new ways; the database can be shared and be exactly like the server, the topology can be server-like, changes can be more accurately tested locally, and defects are more likely to be reproduced locally.
Cons: You need to be an expert and highly motivated, it can be daunting for not so savvy developers and standardization is piecemeal – needs a solid plan and extreme discipline – but that is similar to how configuration management and continuous integration is approached in server environments.
Tips: Works for innovation-minded folks – network with some IBM folks. Your nous and your professional network are your safety nets here.
There you go folks – some options and musings around them. Hopefully got em brain cells thinking in the right direction.
**Edit1: Embedded links to installation information contextual to the blog
Modified on by Shweta Gupta IBM
Shoppers have loved the search bar on shopping sites and have come to expect it to be their first friend, taking them close to the product they intend to buy on the site. This is akin to the quintessential ‘Shopping Assistants’ in marts, supermarkets, or our big or small fashion outlets. The shop may be earmarked and categorized; however, if we are not familiar with that particular store, we tend to be awed by the array of aisles and take an instant liking to the store assistant who can either guide us to the aisle/corner of the store for our desired product or, even better, escort us to the product we are seeking. You would agree that some of these assistants have influenced how many bags you carried back home from your shopping expedition.
The current state of eCommerce has come a long way, and with most search engines offering a set of standard technical capabilities, one would expect our shopping search experiences to be almost similar. However, we are actually far from this experience. While I was looking for studies on the subject, I stumbled upon a 2014 report on smashingmagazine.com, where they benchmarked the search experience of the 50 top-grossing US e-commerce websites, rating each website across a set of 60 search usability parameters – which, you would agree, is rather fine-grained. Of course, a study from 2014 would require a revisit in 2016, given the pace at which one would expect our eCommerce sites to launch capabilities in an agile way to stay competitive.
What does this mean for you as an eCommerce Implementation Project Manager?
I have observed that even though search is an important functional feature on eCommerce sites, search relevance testing continues to be missed in the plan. Traditional functional testing includes search and browse feature tests, where the test team is responsible for a set of use-cases that check that the search functionality works as expected. However, this approach largely misses the desired shopper experience for the search results and/or the desired search results from the business point of view. The User Acceptance Testing (UAT) phase would typically have the business users look at search relevancy; however, it remains more of an organic exercise under the overall umbrella of UAT, not getting the due attention, time, effort and, most importantly, understanding of the approach, inputs, and outcomes.
What does this mean for you as the eCommerce product leadership for your business?
This is definitely a continual exercise for you, just as it is to deliver on the business requirements via your eCommerce and mCommerce channels. You will be doing analysis of the ‘search hits’ and ‘search misses’, hopefully on a weekly basis, and working towards tuning your results based on the data your shoppers are feeding you. The strong point is that, because you have invested in WebSphere Commerce, you will find that the Business Tools give you a lot of flexibility and ease in achieving your results.
Let us explore an approach towards search relevance, look at a proposal for the test strategy and guidelines for your WebSphere Commerce Search powered commerce site.
Outlining the search relevance testing proposal and approach
The project management of eCommerce sites must include a sprint (or more, depending upon the catalog size) which focuses on search relevance. I have had discussions with several test managers on the topic and found them wanting guidance around ‘search relevance testing’. This prompted me to create an approach document that can be used as a starting guideline for your eCommerce site.
- Your list of search terms: Work with the business to identify a list of ‘n’ search terms which you would like to focus on in your initial iterations, where n could be 50, 100, 200 and so on, based on the width and depth of the catalog and product items.
- Variety in search terms: The search terms should include variety: single words, multi-words, words with units like weight (think grocery), phrases with attributes like color (think garments), variance in how shoppers search for certain products, brands which your catalog does not support, misspelled products …
- Get into the shopper’s shoes, sandals: Leverage analytics data from different sources to identify the changing trends and shopper behavior. Go out there to your favorite, and not so favorite, retailers with similar catalogs, and learn what you like and what you wish they did better.
- Product title/description recommendations: The quality of the search results highly depends upon the way the catalog data is created and maintained. The natural search results from the search engine are based on the query relevance as defined in the configuration. One of the important recommendations that would also come out of the relevance activity is on enriching the product descriptions and other searchable attributes.
- Tester training: Your army of testers may not be trained to think on the topic of search relevance. Plan for training and get them to think about, and experience, shopping around the domain. This may be simpler for certain industries like grocery and fashion; however, it may be slightly trickier in case your site is selling spare parts or heavy industrial tools.
- Search Developers: The search developer needs to be part of the iteration to support the relevance tuning. This requires looking at the relevancy scores and providing analysis of the results. The developer will also be responsible for making any changes that come through search configuration changes.
- Inputs to the search relevance activity: At the start of the search relevance activity, you will have the list of search terms, expected results, synonyms, and search replacement terms.
- Outputs of the search relevance activity: At the end of the search relevance activity, you will have the list of search terms, the actual result set, whether it meets the expected results and, if not, what the delta is. You will also have a more expanded/modified list of synonyms, search replacement terms, and search rules.
- Iterative: This exercise should be iterative, based on your development life cycle. If the catalog upload, products, and brands are staggered across different releases, plan the search relevance iterations accordingly.
- Input to the performance cycle: The outcome from this exercise should also be fed into the performance testing cycle, since search performance should factor in the recommendations that will be applied in production.
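The inputs and outputs described above lend themselves to a simple per-term scorecard. Below is a minimal Python sketch of that idea, assuming you have exported the expected and actual top-N results per search term (for example, from a spreadsheet template) into plain data structures; the function name and SKU values are illustrative, not part of WebSphere Commerce.

```python
def relevance_delta(expected, actual):
    """Compare expected vs actual result sets for each search term.

    Returns, per term: the hits, the misses (expected but not returned),
    the noise (returned but not expected), and a pass flag indicating
    whether every expected item was present in the actual results.
    """
    report = {}
    for term, want in expected.items():
        got = actual.get(term, [])
        want_set, got_set = set(want), set(got)
        report[term] = {
            "hits": sorted(want_set & got_set),
            "missed": sorted(want_set - got_set),
            "noise": sorted(got_set - want_set),
            "pass": want_set <= got_set,
        }
    return report

# Example: two search terms with expected and actual product IDs.
expected = {
    "red dress": ["SKU-101", "SKU-102"],
    "organic milk 1l": ["SKU-300"],
}
actual = {
    "red dress": ["SKU-102", "SKU-999"],
    "organic milk 1l": ["SKU-300"],
}

report = relevance_delta(expected, actual)
print(report["red dress"]["missed"])      # ['SKU-101']
print(report["organic milk 1l"]["pass"])  # True
```

A report like this gives the testers and the search developer a shared, objective view of the delta per term, which can then drive synonym, replacement-term, and search-rule changes in the next iteration.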
Search relevance is a very subjective criterion that varies by industry, business, catalog managers, merchandisers, current promotions, and, most of all, your product catalog.
The search relevance activity needs to get under the skin of the shopper to be able to identify the patterns and results. The business needs to be engaged very closely in the process.
I hope that the approach outlined in this article helps you with the planning exercise and gives you a starting point. Do share your experience on search relevance, as I look forward to the different ways in which we achieve the business outcome from our commerce sites: converting searches into baskets and orders.
I am attaching a basic version of the search relevance template. If you find it useful, do let me know so that I can share other versions of the same.
Search Relevancy Template: 2016-FEB-SearchRelevance.xlsx
Previous related blog: Customizing search for shoppers and retaining them as their attention time dwindles
1 - Changing the relevancy of search index fields
2 - Tuning multiple-word search result relevancy by using minimum match and phrase slop in WCS Version 7
3 - Search Rules