I need your help in fixing one issue.
I installed WAS 7.0 and configured it successfully with WebSphere MQ 7.*. After applying the WAS 7.0 fix pack, WAS is not communicating with MQ. IBM has published the known side effects of fix pack 47; the URL is:
I followed the steps mentioned in the link http://www.ibm.com/support/knowledgecenter/en/SSAW57_7.0.0/com.ibm.websphere.nd.multiplatform.doc/info/ae/ae/tmj_wmqra_restoredefault.html but the issue is not resolved. I just want to know how to upgrade the JMS resource adapter in WAS.
After uninstalling the WAS 7.0 fix pack, WAS communicates with MQ again. Could you please help me resolve this issue?
I followed the below steps to configure WAS with WMQ, but the issue is not resolved.
If the WAS FP 41 is installed, there is no communication between WAS and MQ.
Error Message: Error 404: SRVE0190E: File not found: /MessageProcessor/
Backups of Configuration in API Connect / Management
Every client who uses API Management / Connect plans to take a regular backup of the configuration. The following is a procedure to take backups using CLI commands.
More on the above can be found at https://www.ibm.com/support/knowledgecenter/SSWHYP_4.0.0/com.ibm.apimgmt.overview.doc/overview_backupcli_apimgmt.html
Note: If you are aiming at a scheduled backup option in API Connect that takes regular backups, as of today there is no such option available. The only way is to use the above commands and develop scripts / cron jobs to run them for you. Also, note that you will have to insert the credentials into the scripts.
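For example, a nightly cron entry could invoke a wrapper script that runs the CLI backup commands from the link above (the script name, path, and log location here are placeholders):

```
# m h dom mon dow  command — paths below are hypothetical
30 1 * * * /opt/apic/scripts/apic_config_backup.sh >> /var/log/apic_backup.log 2>&1
```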
Manikantan Rajagopalan (firstname.lastname@example.org) is a senior subject matter expert on middleware technologies, primarily focusing on WebSphere Application Server security. He is an integral part of IBM's GxP offering and has contributed immensely to GxP and other offerings through his expert development skills.
Christudhas Pitchai (email@example.com) is a senior subject matter expert on the WebSphere brand, specifically Application Server (WAS). He leads several engagements on IBM’s cloud offerings and is the go-to person when it comes to handling critical situations.
Gautam Bhat (firstname.lastname@example.org) is Chief Architect, WW Cloud Tech Sales and leads the design and development of several Cloud solutions involving OpenStack, SoftLayer and VMware.
Security Assertion Markup Language 2.0 (SAML 2.0) is a standard for exchanging authentication and authorization data between security domains. In the cloud, security is a major concern, and clients using IBM Control Desk (ICD) require a sophisticated means to authenticate their users during sign-on; in this regard, Single Sign On is the major ask from clients. The concept of SAML, though simple, requires several degrees of configuration to make it work in an integration environment with different products having varied architectures. Configuring SAML for authenticating ICD users can be as challenging as its practical usage in a real customer's business and IT environment. This tutorial covers the step-by-step approach to configuring SAML to work with a Service Provider application deployed in WebSphere Application Server 8.5 (WAS) and the ICD product. It starts with a basic configuration using a test simulator to act as the Identity Provider (IdP). As we progress towards the advanced configuration, this document uses Tivoli Federated Identity Manager 6.2 (TFIM) to act as the IdP.
With TFIM as the IdP, Tivoli Directory Server (TDS) as the LDAP and WAS ACS as the Service Provider, this document covers three major scenarios. The first scenario covers one client using SAML and another using LDAP, with the clients identified by the system IP from which the request is placed. The second scenario consists of three clients — one using SAML and the other two using LDAP. The third scenario gets more complex, with two clients using SAML and two clients using LDAP. Customers who use cloud solutions through SCCD can manage their users and authentication with their existing LDAP through a federated repository or SAML Web Single Sign On (SSO). To use SSO in SCCD we have a SAML solution in WebSphere. This article describes the implementation and setup with the help of use cases (referred to as scenarios).
Let’s define a few basic terms used in this context.
1. Service Provider (SP)
The Service Provider (also known as the Relying Party) is the authentication client from which the authentication request originates.
2. Assertion Consumer Service (ACS)
The Assertion Consumer Service (ACS) is the client-side consumer (application / endpoint) of the SAML Response generated by the IdP.
3. Identity Provider (IdP)
The Identity Provider (IdP) is the authentication server that authenticates the user and generates a valid SAMLResponse upon successful user authentication.
4. SAML Request
In SP-initiated SSO, the SP generates the SAMLRequest that is sent to the IdP for processing of the user authentication. This is an XML message.
5. SAML Response
The SAML Response is the IdP-validated assertion sent to the ACS upon successful authentication. Upon receiving a valid SAML Response, the ACS issues an LTPAToken2, which the application uses to allow access to the restricted content.
6. IBM TFIM
IBM Tivoli Federated Identity Manager is IBM's solution for implementing SAML-based user authentication.
7. IBM Security Directory Server (ISDS, formerly TDS)
IBM TDS or ISDS is the LDAP Server implementation from IBM.
8. IBM WebSphere Application Server (WAS) and IBM HTTP Server (IHS)
The middleware components for Web-based applications using Java and HTTP methods.
9. Federated User Repository(-ies)
The IBM WAS Security configuration that allows multiple Authentication Repositories to be used seamlessly within IBM WAS Applications.
10. IBM SmartCloud Control Desk ( IBM SCCD )
IBM SCCD provides a tightly integrated solution combining Ticketing / Self-Service / Industry standard templates as a single solution for Ticketing , Change Management, Asset Management , Problem Management and other methodologies for IT Processes.
11. IBM Maximo
IBM Maximo is the Ticketing system that provides the IT Process capability in IBM SCCD.
12. IBM Tivoli Provisioning Automation Engine ( IBM TPAE )
IBM TPAE is the core services engine that provides the components, models and objects for building and integrating interdependent applications within Maximo and related products.
Throughout this article, SCCD and ICD are used interchangeably, because SCCD was renamed to ICD with version 7.6.
Directory Structure for Certificate
The trust store file is in .p12 format and is stored under:
Default location in WAS8.5: /usr/IBM/WebSphere/v855/AppServer/profiles/lon02scc001ccz069-maximo-dmgr/etc
Requirements for SAML Configuration.
WebSphere Application Server Configuration Steps
/usr/IBM/WebSphere/v855/AppServer/bin/wsadmin.sh -f installSamlACS.py install <Cluster Name>
a) /usr/IBM/WebSphere/v855/AppServer/bin/wsadmin.sh -lang jython
b) AdminTask.addSAMLTAISSO('-enable true -acsUrl <External URL>/samlsps/<URI Pattern String>')
4) Add the SAML TAI to the com.ibm.websphere.security.DeferTAItoSSO custom property (to enable SP redirects to the IdP for Web SSO).
Global security > Trust association > Interceptors > com.ibm.ws.security.web.saml.ACSTrustAssociationInterceptor
5) Export the SAML configuration by running the command below, to import it into the IdP:
a) /usr/IBM/WebSphere/v855/AppServer/bin/wsadmin.sh -lang jython
b) AdminTask.exportSAMLSpMetadata('-spMetadataFileName /tmp/<SAML file Exported for IDP> -ssoId 1')
6) Add the new partner in the IdP server using the exported metadata file and default options.
7) Export the metadata file from the IdP.
8) Import the metadata file exported from the IdP using the command below.
a) /usr/IBM/WebSphere/v855/AppServer/bin/wsadmin.sh -lang jython
b) AdminTask.importSAMLIdpMetadata('-idpMetadataFileName /tmp/from_myIdP_to_myApp_IdPmetadata.xml -ssoId 1 -idpId 1 -signingCertAlias idp1CertAlias')
Maximo ICD (SCCD) Changes
Enable AppServer Security in web.xml files and database
<web-resource-name>MAXIMO UI pages</web-resource-name>
<description>pages accessible by authorised users</description>
<web-resource-name>REST Servlet for Web App</web-resource-name>
<description>Object Structure Service Servlet (HTTP Post) accessible by authorized user</description>
<web-resource-name>MAXIMO UI utility pages</web-resource-name>
<description>pages accessible by authorised users</description>
<description>Roles that have access to MAXIMO UI</description>
<description>data transmission guarantee</description>
Set the property in the MAXIMO database
select * from MAXPROPVALUE where propname='mxe.useAppServerSecurity'
update MAXPROPVALUE set propvalue=1 where propname = 'mxe.useAppServerSecurity'
Rebuild and redeploy the maximo EAR
When a user accesses the SCCD application URL, WebSphere SAML, based on the SSO filter configuration, redirects the user to the customer's IdP login page. Once the user is authenticated with the IdP, SCCD allows the user into the SCCD application.
If the user request does not match the SSO filter configuration, WebSphere redirects the user to authenticate against the federated repository. This is demonstrated pictorially below.
This scenario is for customer users who are authenticated externally using a single IdP for SSO but do not wish to share their login credentials with the vendor(s), while the non-customer users (or other non-SSO customer users) continue to log in separately using a different federated repository. For example, multiple LDAPs holding distinct sets of vendor users and other non-SSO customer users can be configured through a federated repository, while the SSO customer users continue to be authenticated with their own IdP (organisation SSO). A single IdP for SAML Web SSO is configured, while multiple user identity providers are configured in IBM WAS federated repositories, as shown in the picture below. The disadvantage of this setup is that unique user IDs require that the federated repositories have no conflicting entries at any time.
This scenario is for multiple customers requiring authentication via SAML Web SSO, while vendor users and other non-SSO customer users log in through their separate login spaces, either via SAML Web SSO or via federated repositories (such as multiple LDAPs). Multiple identity providers for SAML Web SSO are configured with distinct login URLs, while multiple user identity providers are configured in IBM WAS federated repositories. Unique user ID requirements are restricted to users stored in the configured federated repositories; SAML Web SSO has no such restriction and is distinct to each organisation having its own IdP. The security advantages of SAML-based authentication allow best practices to be safely implemented within organisations, as only the authentication PASS/FAIL is transmitted through security-enabled channels of communication, minimising sensitive-data loss.
In this article we covered the step-by-step approach to configuring SAML to work with a Service Provider application deployed in WebSphere Application Server 8.5 (WAS) and the ICD (SCCD) product, including the changes required at the SCCD and WebSphere levels to integrate SAML with SCCD. The SSO filter configuration is the key factor that filters incoming requests, and it is important to keep it accurate. In addition, we briefly walked through three scenarios based on real client / customer requirements. In short, configuration is one aspect of it; the most important part is applying this in real-world client business cases/scenarios.
Acknowledgement & Thanks
We would like to thank the following key members, whose support and help were sought from time to time:
Fabio Benedetti (email@example.com)........ Distinguished Engineer, IBM Systems, Chief Architect for SCCD (ICD)
Vinod chavan (firstname.lastname@example.org)........................ Executive Architect, IBM Cloud, WW Cloud Tech Sales
Chunlong Liang (email@example.com)....................Software Developer, IBM Systems
IBM WebSphere Liberty profile is a fast, dynamic and easy-to-use Java EE application server. Liberty has fast startup times, picks up changes dynamically without the need to restart the server and has a flexible modular run-time that can be specified in a simple configuration file.
It has a very small install footprint (starts with < 60MB). You can add additional features as per your requirement to complement the various out-of-the-box essential features.
Liberty allows you to develop and deploy various Java EE applications like web apps, portlet apps etc. to name a few.
This blog will talk about the Portlet application for Liberty. When deployed to Liberty, portlet applications are just like standalone web apps.
However using features like WSRP (Web Services for Remote Portlet), you can have these portlets render on any portal server like the IBM WebSphere Portal server.
So you can develop and deploy your various portlet apps to Liberty and, using WSRP, consume them on various Portal servers.
In this blog, you will take a look at how Liberty can be configured to deploy and use the portlet applications.
Installing Portlet specific features
Installing Portlet Container feature
The Portlet Container add-on feature for WebSphere Liberty is used to develop and test your portlet applications within the liberty profile run-time.
This is a separate add-on and you would need to install it. Follow the below steps to install Portlet Container feature for Liberty.
Installing Portlet Serving feature
The Portlet Serving add-on feature for WebSphere Liberty provides URL addressability for JSR 286 and JSR 168 portlets and is used to invoke a portlet using a defined URL in the browser.
This again is a separate add-on and you would need to install it. Follow the below steps to install the Portlet Serving feature for Liberty.
Configuration for Portlet specific features
Create and start the server
Before you start using the Portlet specific features, you need to create and start a server. To do so follow these steps
Configuration changes for Portlet support
To use the portlet specific features, you need to update the configuration file for the server instance that was created in the earlier section. To do so follow these steps
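The configuration change amounts to enabling the portlet add-on features in the server's server.xml. A minimal sketch follows — the feature names shown are assumptions based on the add-ons installed above, so verify them against the names reported by your add-on installation, and adjust the host/port to your environment:

```xml
<server description="Portlet test server">
    <featureManager>
        <!-- servlet support plus the portlet add-ons installed above;
             the exact feature names may differ by add-on version -->
        <feature>servlet-3.0</feature>
        <feature>portletContainer-2.0</feature>
        <feature>portletServing-2.0</feature>
    </featureManager>
    <httpEndpoint id="defaultHttpEndpoint" host="localhost" httpPort="9080"/>
</server>
```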
Testing a portlet
Now to test a simple Portlet app you need to do the following :
This blog has a sample portlet WAR attached that you can use for testing purposes. You can hit the following URL to test the attached sample: http://localhost:9080/.SampleLibertyPortlet/SampleLibertyPortlet
This assumes that you are testing the sample on localhost and that the port where Liberty is running is 9080. Please make changes accordingly.
That's it folks for this blog. In the next blog, I will write about how to use some tooling options to develop some basic portlet applications for Liberty profile.
Here are some of the references that might be useful.
The JDBC objects provided by WAS are wrappers around the underlying JDBC driver objects — Connection, Statement, ResultSet, etc. There may be scenarios where applications have to access the wrapped objects, usually to invoke non-standard, vendor-specific methods and types. Traditionally, cWAS has provided the WSCallHelper utility to help applications access the native connections and invoke vendor-specific methods. However, JDBC 4.0 provides a cleaner, standard way to achieve the same by introducing the java.sql.Wrapper interface; this is generally called the Wrapper pattern. The Wrapper interface describes the standard mechanism to access the wrapped objects with the methods isWrapperFor() and unwrap(). Here is a sample code snippet that applications can make use of to access the native Oracle connection:
InitialContext ctx = new InitialContext(parms);
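To show the mechanics of isWrapperFor()/unwrap() without a server, here is a self-contained sketch with hypothetical stand-in classes (NativeConnection is not a real driver class; real implementations throw SQLException from unwrap, while this sketch uses an unchecked exception to stay short). On a real WAS datasource you would call the same two methods on the Connection obtained from the datasource, e.g. conn.unwrap(oracle.jdbc.OracleConnection.class):

```java
import java.sql.Wrapper;

// Hypothetical stand-in for a vendor class such as oracle.jdbc.OracleConnection
class NativeConnection {
    String vendorSpecificMethod() { return "native"; }
}

// Simplified analogue of the WAS connection wrapper implementing java.sql.Wrapper.
// (Real drivers throw SQLException on a failed unwrap; an unchecked exception
// is used here to keep the sketch self-contained.)
class WrappedConnection implements Wrapper {
    private final NativeConnection delegate = new NativeConnection();

    public boolean isWrapperFor(Class<?> iface) {
        // true when the wrapped object can be returned as the requested type
        return iface.isInstance(delegate);
    }

    public <T> T unwrap(Class<T> iface) {
        if (isWrapperFor(iface)) {
            return iface.cast(delegate);
        }
        throw new IllegalArgumentException("Not a wrapper for " + iface.getName());
    }
}
```

The point of the pattern is that the caller never touches the wrapper's internals: it asks via isWrapperFor() and receives the native object via unwrap().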
Nihilson
The following exception can occur with the DB2 JCC Driver when calling rs.next() after the cursor has been positioned after the last row of the ResultSet.
While troubleshooting a "resultset is closed" issue, the immediate thing that came to mind was the DB2 JCC driver's default behavior of implicitly closing the ResultSet upon finding that the cursor has been placed after the last row (i.e. the result set has been exhausted). Calling ResultSet.next() in such a state throws the exception stating "resultset is closed". DB2 provides a workaround to tolerate calling next() on an exhausted result set by setting the property allowNextOnExhaustedResultSet = 1. But in the current case, it was not next() that was causing the problem; instead, it was the call to ResultSet.getMetaData() after the ResultSet had been exhausted. But who calls getMetaData() on an already closed result set? From the exception stack, we noticed the application had been plugged in with log4jdbc for monitoring the JDBC operations, and it was log4jdbc invoking getMetaData() at an inappropriate time, which the DB2 JCC driver does not like.
Further digging into log4jdbc, we discovered that the stack corresponded to log4jdbc version 1.12 or less. From version 1.13 onward there is a fix for this problem: log4jdbc verifies that the result set is usable by calling rs.isClosed() before invoking getMetaData(). There is also an issue with the isClosed() API when DB2 implicitly closes the ResultSet, which is addressed by the WebSphere datasource custom property resultSetUnusableWhenNoMoreResults="true". So the problem can be resolved by upgrading log4jdbc to a higher version, along with setting the WebSphere datasource custom property resultSetUnusableWhenNoMoreResults="true".
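To make the failure mode concrete without a real database, here is a self-contained sketch that fakes the described DB2 JCC behavior with a java.lang.reflect.Proxy: the fake ResultSet closes itself once the cursor moves past the last row, and next() or getMetaData() on the closed set then fails with "resultset is closed". The class and its behavior are a simulation for illustration, not DB2 code:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.sql.ResultSet;
import java.sql.SQLException;

// Fake ResultSet mimicking the DB2 JCC default: implicit close on exhaustion.
class ExhaustibleResultSet {
    static ResultSet create(final int rows) {
        return (ResultSet) Proxy.newProxyInstance(
                ResultSet.class.getClassLoader(),
                new Class<?>[] { ResultSet.class },
                new InvocationHandler() {
                    private int cursor = 0;
                    private boolean closed = false;

                    public Object invoke(Object proxy, Method m, Object[] args)
                            throws Throwable {
                        String name = m.getName();
                        if (name.equals("next")) {
                            if (closed) throw new SQLException("resultset is closed");
                            // moving past the last row closes the result set
                            if (++cursor > rows) { closed = true; return false; }
                            return true;
                        }
                        if (name.equals("isClosed")) return closed;
                        if (name.equals("getMetaData")) {
                            if (closed) throw new SQLException("resultset is closed");
                            return null; // metadata itself is omitted in this sketch
                        }
                        throw new UnsupportedOperationException(name);
                    }
                });
    }
}
```

A monitoring layer that calls rs.getMetaData() after the loop fails on such a driver, while guarding the call with if (!rs.isClosed()) avoids the exception — which is essentially the check that went into log4jdbc 1.13.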
Veeraiah_Balanedi
Steps to configure MQ link:
A WebSphere MQ link provides a server to server channel connection between a Service Integration Bus and a WebSphere MQ queue manager which acts as the gateway to the WebSphere MQ network. WebSphere MQ link connects to a specific foreign bus that represents a WebSphere MQ network, and enables messaging engines on a Service Integration Bus to exchange messages with queue managers on the WebSphere MQ network.
Creating SIBus resources in WebSphere Application Server
1) Create a Service Integration Bus.
Log on to the WebSphere Application Server administrative console, expand Service Integration in the left panel, click Buses, and then create a SIBus called 'thebus' by clicking New. Enter the name of the bus and click Next. (Since this is a sample configuration, we have not selected bus security; the security details are beyond the scope of this document.)
Click on Finish to complete the creation of SIBus.
2) Add a member, TestServer (assume we have an application server named TestServer), to the SIBus.
Once the SIBus is created, you can add a member to it. Navigate to
Buses > <Bus_name> and under the Topology section select Bus members and click on Add.
Choose Server to add an application server as a bus member and apply the changes (you can also add cluster as a bus member by selecting Cluster). When the application server is added as a member to the bus, a messaging engine gets created on the application server to handle all the messages. Click Next and leave the default options in the follow up panels and click Finish in the last panel to complete the configuration.
3) Creating the necessary Destinations in the SIBus.
A destination must be created inside the SIBus to hold all the messages.
3.1) Creating Queue destination
Navigate to Buses > <Bus_name> and under Destination Resources section select Destinations and click on New and then select destination type as 'Queue' and name the destination as SIBUSQueue.
Click on Next and leave default on the remaining panels. Click on Finish to complete the configuration.
Now, we have to create another destination of type “Alias”
3.2) Creating Destination of type "Alias"
Navigate to Buses > <Bus_name> and under Destination Resources section select Destinations and click on New and then select destination type as 'Alias'.
Click on Next and provide the following in the Alias queue panel
Bus: Choose the bus that is of interest. In this sample I am using “thebus”
Target identifier: MQQueue ( specify the name of the Queue defined in the WebSphere MQ )
Target bus: <Queue_manager_name> ( specify the name of the Queue Manager defined in WebSphere MQ)
4) Create a WebSphere MQ link
Next, we need to set up the link from the WebSphere SIBus to WebSphere MQ, which includes identifying the SIBus as a queue manager to WebSphere MQ. After that, we configure the MQ link sender and receiver channels, which must match up with channels in the queue manager.
Navigate to Service Integration → Buses → <Bus_name> → Foreign Bus Connections
Click on New and enter or select the following details
Bus Connection Type : Direct Connection
Foreign Bus Type : WebSphere MQ
Local Bus Details :
messaging engine to host connection : VxxxxNode.TestServer-thebus ( select the messaging engine of your interest )
virtual queue manager name : thebus ( enter bus name where messaging engine is running )
WebSphere MQ details :
Foreign bus name : QM_xxxxxx ( Queue Manager name defined in MQ)
MQ link name : MQLink_Demo ( name of the MQ link )
WebSphere MQ receiver channel name : BUS_TO_MQ_CHNL ( name of sender channel in MQ )
Host name : x.xxx.xxx.xxx ( host name/ip of the host where MQ is installed )
Port : 1414 (default port of queue manager)
WebSphere MQ sender channel name : MQ_TO_BUS_CHNL ( name of the receiver channel in MQ)
Click Next, leave the defaults as is, and click Finish to complete the MQ link configuration. Restart the application server 'TestServer' for the changes to take effect.
Figure: MQ Link setup wizard
Creating MQ resources in WebSphere MQ:
1) Create a Queue in WebSphere MQ
Right-click on Queues then select New => Local Queue.
On the Create Local Queue panel, select the General tab and enter the following values:
2) Creating Remote Queue in MQ
On the Remote Queue panel enter or select the following values
Queue Name: SIBUSQueue
Remote Queue : SIBUSQueue
Remote queue manager : thebus
Transmission Queue : TQueue
3) Create Transmission Queue in MQ :
Create a transmission queue that provides the QM_xxxxx queue manager with a link to the SIBus as a queue manager.
Right-click on Queues then select New => Local Queue.
4) Create Receiver channel in MQ
Right-click on Channels, then select New => Receiver Channel.
5) Create Sender channel in MQ
Right-click on Channels, then select New => Sender channel.
Restart the servers to effect the changes.
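For reference, the WebSphere MQ side of the steps above can also be scripted in MQSC (run via runmqsc QM_xxxxxx). The sketch below uses the names from this article; it assumes the SIB MQ link listens on the default SIB MQ endpoint port 5558 and uses a placeholder host name, so verify the actual host and port of your messaging engine before using it:

```
* Transmission queue that routes messages to the bus (acting as queue manager 'thebus')
DEFINE QLOCAL('TQueue') USAGE(XMITQ) REPLACE
* Remote queue pointing at the SIBus destination
DEFINE QREMOTE('SIBUSQueue') RNAME('SIBUSQueue') RQMNAME('thebus') XMITQ('TQueue') REPLACE
* Receiver channel matching the MQ link sender side
DEFINE CHANNEL('BUS_TO_MQ_CHNL') CHLTYPE(RCVR) TRPTYPE(TCP) REPLACE
* Sender channel matching the MQ link receiver side
DEFINE CHANNEL('MQ_TO_BUS_CHNL') CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('was.host.example(5558)') XMITQ('TQueue') REPLACE
START CHANNEL('MQ_TO_BUS_CHNL')
```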
How to create a Simple REST application and deploy it on WebSphere Application Server Liberty profile
Ashok kumar Appavu
Steps to create a REST based application as WAR file in Eclipse:
(1) Create a new Web project
(2) Enter a project name and uncheck “Add project to EAR”
(3) Add a simple POJO which is annotated with JAX-RS annotations to turn it into a JAX-RS resource
(4) Add the following code under CreditScoreService class,
(5) Create the javax.ws.rs.core.Application subclass to define to the JAX-RS runtime environment which classes are part of the JAX-RS application. The resource classes are returned in the getClasses() method.
(6) Add the following code under CreditScoreApplication class,
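The original code listings did not survive in this post, so the following is a hypothetical sketch of the two classes. The JAX-RS annotations (@Path, @GET, @Produces, @PathParam) and the javax.ws.rs.core.Application superclass come from the javax.ws.rs-api jar added in step (7); they are shown as comments here so that the scoring logic itself stays plain Java:

```java
import java.util.Collections;
import java.util.Random;
import java.util.Set;

// @Path("/creditscore") -- maps the resource under the servlet path
class CreditScoreService {

    // @GET
    // @Produces("text/plain")
    // @Path("/{memberId}")
    public String getCreditScore(/* @PathParam("memberId") */ String memberId) {
        // pseudo-random score between 400 and 800, matching the expected output
        int score = 400 + new Random().nextInt(401);
        return "Credit score for MemberId- " + memberId + " = " + score;
    }
}

// extends javax.ws.rs.core.Application in the real project
class CreditScoreApplication {

    // Tells the JAX-RS runtime which classes make up the application.
    public Set<Class<?>> getClasses() {
        return Collections.<Class<?>>singleton(CreditScoreService.class);
    }
}
```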
(7) Add “javax.ws.rs-api-2.0-m10.jar” to the project’s build path
(8) Create web.xml to add servlet mapping for the resource.
(9) Export the project as WAR file.
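For step (8), the web.xml could look like the sketch below. It maps the IBM JAX-RS servlet (com.ibm.websphere.jaxrs.server.IBMRestServlet) to the servlet path used later in the test URL; the package name com.sample is an assumption:

```xml
<servlet>
    <servlet-name>CreditScoreServlet</servlet-name>
    <servlet-class>com.ibm.websphere.jaxrs.server.IBMRestServlet</servlet-class>
    <init-param>
        <param-name>javax.ws.rs.Application</param-name>
        <param-value>com.sample.CreditScoreApplication</param-value>
    </init-param>
</servlet>
<servlet-mapping>
    <servlet-name>CreditScoreServlet</servlet-name>
    <url-pattern>/creditscoreservice/*</url-pattern>
</servlet-mapping>
```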
Steps to deploy WAR file on Liberty profile
(1) Download Liberty profile, Developer edition (~50MB) from “WASDev.net” under the following URL
(2) Extract wlp-developers-runtime-8.5.5.x.zip
(3) Create a new server (server1)
../wlp/bin/server create server1
(4) Add JAX-RS feature for”server1” under server.xml as follows
(5) Add application name in server.xml as follows
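For steps (4) and (5), the relevant server.xml entries could look like the following sketch (IDs and the endpoint entry are assumptions; jaxrs-1.1 is the JAX-RS feature shipped with Liberty 8.5.5):

```xml
<server description="server1">
    <featureManager>
        <feature>jaxrs-1.1</feature>
    </featureManager>
    <httpEndpoint id="defaultHttpEndpoint" host="localhost" httpPort="9080"/>
    <application id="SimpleREST" name="SimpleREST"
                 location="SimpleREST.war" type="war"/>
</server>
```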
(6) Place the application archive (SimpleREST.war) under the folder,
(7) Start the server
../wlp/bin/server start server1
(8) The following in messages.log, which is under the location ../wlp/usr/servers/server1/logs, is an indication of a successful start of “server1” and “SimpleREST.war”
(9) After successful server start up, hit the following URL in browser,
In the above URL,
Host name = localhost
Http port number of server1 = 9080
Context-root = SimpleREST
Servlet-path in web.xml = creditscoreservice
Resource path (as per @path annotation) = creditscore/8391
(10) Expected output:
Credit score for MemberId- <id> = <some random number between 400 and 800>
For more information about REST-based services, please refer to the following IBM info center URLs.
Recently I was helping out with a situation setting up a datasource in WebSphere Application Server to connect using the Oracle OCI JDBC driver. At first look, for anybody who has configured a datasource, this would be a pretty simple configuration: replace the word "thin" in the conventional URL with "oci", so that the URL looks like "jdbc:oracle:oci:@//hostname:1521/dbname". However, this was not the case, as they were getting the famous java.lang.UnsatisfiedLinkError: ocijdbc12 (Not found in java.library.path), and this was in spite of setting LIBPATH to point to the appropriate Instant Client directory housing all the required OCI libraries. Interestingly, the same configuration worked perfectly well when tested in my local environment. We were assuming the Instant Client files were one and the same between the failing and passing environments, and after an extensive workout we realized they weren't. Can you make out the difference?
Problematic directory listing of the OCI driver files :
-rw-r--r-- 1 root system 63169440 Oct 30 2014 libclntsh.a
In the failing environment, the configured OCI library path actually contained symbolic links to the actual files, and this was preventing the libraries from loading properly, resulting in the java.lang.UnsatisfiedLinkError. Once LIBPATH was pointed at the actual files instead of the symbolic links, the problem went away and the connectivity worked fine. This confirmed that symbolic links to the OCI JDBC libraries were preventing them from being loaded properly within the JVM.
Also, I felt that listing the steps required to configure the Oracle OCI JDBC driver in WebSphere Application Server might help whoever needs it.
Steps to configure the Oracle OCI JDBC Driver in WebSphere Application Server :
1) Download a fresh copy of the Oracle 12c OCIJDBC Drivers :
Instant Client Package - Basic: All files required to run OCI, OCCI, and JDBC-OCI applications
Application servers > server1 > Process definition > Environment Entries
Deployment Manager > Process definition > Environment Entries [ Optional ]
# export LIBPATH=/home/instantclient_12_1
9) Start all the JVMs.
An elaborate documentation around OCI is available in the WebSphere Knowledge center :
Applications can get hold of JDBC connections directly from the datasource. However, sometimes they may need to get hold of the JDBC connection that is used by JPA. Consider the persistence.xml below, which indicates that it is using a datasource bound to the resource reference jdbc/oracleref. JPA uses this information to get hold of the datasource and obtain connections to the database for persistence. Now, how can the application access the JDBC connection from the JPA EntityManager?
<?xml version="1.0" encoding="UTF-8"?>
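The rest of the persistence.xml did not survive in this post; the following is a minimal sketch of what it might contain, assuming a JTA persistence unit named JPATest (the unit name used in the code snippet below) bound to the jdbc/oracleref resource reference:

```xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
    <persistence-unit name="JPATest" transaction-type="JTA">
        <jta-data-source>jdbc/oracleref</jta-data-source>
        <!-- entity classes omitted -->
    </persistence-unit>
</persistence>
```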
Here is the sample code snippet that applications can make use of to extract the jdbc connection from the JPA EntityManager when using OpenJPA in WebSphere Application Server.
String PERSISTENCE_UNIT_NAME = "JPATest";
EntityManager em = Persistence.createEntityManagerFactory(PERSISTENCE_UNIT_NAME).createEntityManager();
OpenJPAEntityManager oem = em.unwrap(OpenJPAEntityManager.class);
Connection conn = (Connection) oem.getConnection();
Working with thin Client for JMS with WebSphere Application Server and generating jms trace in stand-alone thin client applications
Steps to setup Thin Client for JMS with WAS:
We can connect to Service Integration Bus from a stand-alone java client program by using the Thin Client for JMS with WebSphere Application Server. The Thin Client for JMS with WebSphere Application Server can be used to connect and work with default messaging provider messaging engines for WebSphere Application Server Version 6.0.2 or later. The Thin Client for JMS can be obtained from the WebSphere Application Server installation directory or the Application Client installation directory.
To use the Thin Client for JMS, copy the com.ibm.ws.sib.client.thin.jms_7.0.0.jar from the %WAS_HOME%/runtimes directory.
If your application requires JNDI support, then you have to use the thin client for EJB namely com.ibm.ws.ejb.thinclient_7.0.0.jar, which can be obtained from the %WAS_HOME%/runtimes directory or the application client’s runtimes directory.
If the application makes use of the Oracle/Sun JDK, then you will also need the “com.ibm.ws.orb_7.0.0.jar” as IBM libraries rely on IBM ORB implementation.
Place the above mentioned jars in the classpath of your application as per your requirement.
Steps to generate WebSphere JMS trace in stand-alone thin client application:
When stand-alone thin client applications have problems interacting with the WebSphere Application Server, it might be useful to enable tracing for the application. Enabling trace for client programs will cause the WebSphere Application Server classes used by those applications, such as naming-service client classes and other classes, to generate trace information.
There are two ways to generate the trace from stand-alone thin client applications -
1) Through eclipse IDE
2) Through command-line prompt
Steps to generate JMS trace in the eclipse IDE:
1) Create a sample java application (in a java project) as shown below
2) Add the required thin client jars in the build path
3) To enable trace for the WebSphere Application Server classes in a client application running in eclipse, place the “MYTraceSettings.properties” file in the src directory of your Java application and add the following system property as a VM argument while running the stand-alone Java application
Note: The file must be a properties file in the class path of the application client or stand-alone process. You must create the trace properties file by copying the %install_root%\properties\TraceSettings.properties file and modifying it as per your requirements.
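As an illustration (the exact trace string depends on what you are diagnosing), a MYTraceSettings.properties for JMS tracing could look like the sketch below, enabled with the VM argument -DtraceSettingsFile=MYTraceSettings.properties — file path and trace specification here are assumptions to adapt:

```properties
# Where the trace output is written (backslashes escaped in .properties syntax)
traceFileName=C:\\JMSSamples\\logs\\jmsTrace.log
# Trace specification: enable all trace for the SIB JMS client classes
com.ibm.ws.sib.*=all=enabled
```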
Generating JMS trace through command line:
Alternatively, we can generate the trace through the command line. To enable trace for the WebSphere Application Server classes in a client application, add the following system property to the startup script or command of the client application:
java -cp C:\JMSSamples\lib\com.ibm.ws.ejb.thinclient_7.0.0.jar;
In Pure Application System, VMFS is allocated on a per-cloud-group basis. Each cloud group is allocated 1.8 TB of VMFS from the storage LUN. When a pattern with one or multiple components is deployed to a specific cloud group, a part of this 1.8 TB is allocated to the corresponding generated virtual machines. The allocations are visible in System Console > Hardware > Storage Devices > Storage Node and LUNs. The virtual machines share this default storage space equally: if each VM is allocated x GB of space, then a total of 1.8 TB / x VMs can run on the cloud group (for example, at 60 GB per VM, about 30 VMs). It is therefore best to use a minimum number of cloud groups, so that the storage LUNs can be used for other application-specific mount points through Block Storage, etc. Creating a large number of cloud groups can therefore stress PAS storage, and one may run out of storage.
Pure Application Systems, Pre Sales Architect
Abhishek Khandelwal
The JCR Text-Search component is used to search for content and artifacts from the authoring portlet in IBM Lotus Web Content Manager. It is also used when a search is performed on pages that use predefined content templates from the Content Template Catalog. JCR search internally uses the WebSphere Portal Search Engine (PSE) for text-search functions; PSE was developed by IBM's Haifa Research Lab and is based on Apache's Lucene search engine.
A change in WebSphere Portal 8.5 compared to previous portal versions is that all JCR properties formerly specified in the icm.properties file have been moved to Resource Environment Providers in the admin console, under the custom properties of the JCR ConfigService PortalContent provider, as shown in Figure 1.
Figure 1 – JCR Properties
Configure JCR Content Model Search
In order to configure text-search for the base or a virtual portal, ensure that the following properties are set in the JCR ConfigService provider.
1. The JCR text-search property should be set to true:
jcr.textsearch.enabled = true
Set this property to false if you want to disable text-search. With it disabled, the authoring portlet search will not work and no new documents will be collected by the crawler for indexing.
2. Enable the document conversion service by setting the property as below:
jcr.textsearch.convertor = com.ibm.icm.ts.convertor.WpsConvertor
Other values are available for this property, but the one above is recommended. If it is not set correctly, rich text in web content and text in attachments will not be searchable. To learn more about convertor options, please refer to the article.
3. A proper absolute path should be specified for the property as below:
jcr.textsearch.indexdirectory = /opt/IBM/WebSphere/wp_profile/PortalServer/jcr/searchIndexes
JCR collections are created at the path specified in this property.
4. The PSE type property should be set as jcr.textsearch.PSE.type = localhost for a standalone environment. The other options are Simple Object Access Protocol (SOAP) and Enterprise Java Beans (EJBs), which are used to configure the remote search service for a clustered environment.
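Putting the four settings together, the custom properties of the JCR ConfigService PortalContent provider would look like this for a standalone environment (the index directory path is the example used above — adjust it for your installation):

```
jcr.textsearch.enabled = true
jcr.textsearch.convertor = com.ibm.icm.ts.convertor.WpsConvertor
jcr.textsearch.indexdirectory = /opt/IBM/WebSphere/wp_profile/PortalServer/jcr/searchIndexes
jcr.textsearch.PSE.type = localhost
```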
Manage Search Portlet is used to administer search services, collections, and scopes.
JCR Collection out-of-the-box creation
In a standalone portal, when we navigate to the search collections in the Manage Search portlet, only the Default collection is shown (Figure 2). JCR collections are not shown in a fresh installation until search indexing is triggered; once triggered, the JCR collection is created out of the box.
Figure 2 – Manage Search Portlet showing available collections in new portal
In order to trigger out-of-the-box creation of the JCR collection, ensure that the properties mentioned above are set and then perform the action below.
Navigate to the WCM Authoring portlet via Applications > Content > Web Content Authoring > Web Content Library and modify, create, or delete a content item. Once the content is updated, navigate to the Manage Search portlet and click the refresh button to see the JCR collection (shown in Figure 3).
Figure 3 – Manage Search Portlet showing JCR Collection.
A JCRCollection<workspace_id>.properties file is created for each JCR collection in the directory specified by the jcr.textsearch.indexdirectory property. This properties file contains the configuration parameters required to create a collection manually. You might need to create a collection manually if you have deleted the JCR collection; you can recreate it by referring to the properties in the JCRCollection<workspace_id>.properties file. A snapshot of the index directory showing the properties file is in Figure 4, and the contents of the properties file are shown in Figure 5.
Figure 4 – TextSearch index directory listing the properties file for JCRCollection1
Figure 5 – Contents of JCRCollection1.properties file.
Creating JCR Collection manually
If the out-of-the-box collection fails to be created automatically, you will need to create the JCR collection manually. Before proceeding with manual creation, refer to the troubleshooting section for failed collection creation. Manual creation is not recommended unless advised by the L2/L3 team.
The JCR collection is named using the format JCRCollection<wsid>, where wsid is the workspace ID. The base portal content is stored in the ROOTWORKSPACE of JCR, whose ID is 1, so the collection should be named JCRCollection1.
Steps to create the JCR Collection manually in a stand-alone environment.
1. From the Administration menu of the portal, go to Search Administration – Manage Search – Search Collections. Click the New Collection button (shown in Figure 6).
Figure 6 – Manage Search Portlet showing button to be clicked for collection creation.
2. In the Create Search Collection form, choose Default Search Service as the search service, JCRCollection1 as the collection name (based on the naming convention JCRCollection<wsid>), and English as the collection language. Click OK (shown in Figure 7).
Figure 7 – Create new collection form
A message confirming successful creation is displayed, and JCRCollection1 is visible under the collections in the portlet (shown in Figure 8).
Figure 8 – Manage Search Portlet with successful creation message and newly created collection JCRCollection1
3. Click the newly created JCRCollection1 and then click New Content Source to create a new content source for JCRCollection1 (shown in Figure 9). The content source manages indexing of the documents; the crawler details are specified under it.
Figure 9 – Manage Search Portlet showing New Content Source button.
In the New Content Source input form, choose Seedlist Provider as the content source type, give the new content source a name (in this case, JCRContentSource), and set the URL value to http://<server_name>:<port>/wps/seedlist/server?Action=GetDocuments&Format=ATOM&Locale=en_US&Range=5&Source=com.ibm.lotus.search.plugins.seedlist.retriever.jcr.JCRRetrieverFactory&Start=0&SeedlistId=1@OOTB_CRAWLER1 (shown in Figure 10).
Figure 10 – New Content Source form
The content source URL above has the parameters Action, Format, Locale, Range, Source, Start, and SeedlistId.
The SeedlistId takes the form <workspace_id>@<unique_identifier>. The workspace ID is 1 for the base portal, and the unique identifier can be anything; we chose OOTB_CRAWLER1 for simplicity. For a virtual portal the workspace ID is different, so the SeedlistId parameter of a virtual portal differs from that of the base portal.
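As an illustrative sketch, the crawler URL can be assembled programmatically. The parameter names come from the URL above; the host, port, and crawler identifier below are placeholders, and this helper is not part of any IBM API:

```python
from urllib.parse import urlencode

def seedlist_url(host, port, workspace_id, crawler_id, range_size=100):
    """Build a JCR seedlist crawler URL for the given workspace.

    SeedlistId has the form <workspace_id>@<unique_identifier>;
    urlencode percent-encodes the '@' as %40, which is equivalent.
    """
    params = {
        "Action": "GetDocuments",
        "Format": "ATOM",
        "Locale": "en_US",
        "Range": range_size,
        "Source": "com.ibm.lotus.search.plugins.seedlist."
                  "retriever.jcr.JCRRetrieverFactory",
        "Start": 0,
        "SeedlistId": f"{workspace_id}@{crawler_id}",
    }
    return f"http://{host}:{port}/wps/seedlist/server?{urlencode(params)}"

# Placeholder host/port; workspace 1 with the unique identifier chosen above.
print(seedlist_url("portal.example.com", 10039, 1, "OOTB_CRAWLER1", 5))
```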
The retriever sends the response to the crawler in XML ATOM format, and the response is sent as pages, each of which contains Range documents.
For example, if there is a list of 100 updates to be indexed and the range is set to 10, the retriever sends 10 pages to the crawler, each containing 10 documents. By default, the range value is 100.
The administrator can change the Range parameter in the URL based on portal requirements. The crawler has an internal timeout of 10 minutes; if the portal is slow and the retriever cannot retrieve 100 documents within 10 minutes, the administrator can reduce the Range value.
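The paging behaviour described above is simple ceiling division. This is an illustrative helper for reasoning about the Range parameter, not portal code:

```python
import math

def pages_for_updates(num_updates, range_size=100):
    """Number of seedlist pages the retriever sends for a batch of updates."""
    return math.ceil(num_updates / range_size)

# The example from the text: 100 updates with Range=10 gives 10 pages.
print(pages_for_updates(100, 10))
```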
When the Content Source is created successfully, a message will be displayed at the top of the page.
Note: These instructions are in the WebSphere Product Documentation topic “Setting up JCR search collections” for creating the JCRCollection1 collection. Using this collection and content source, you are able to search for items within the WCM authoring portlet.
If JCRCollection1 is created manually, then the scheduling interval must be configured for the content source so that the crawler runs automatically at the configured interval.
To do this:
1. Use the Schedulers tab to set the frequency with which the crawler should run to update the search content (shown in Figure 11).
Figure 11 – Content Source scheduler tab view
2. Choose the date, time, and update interval when the crawler should start running. Click Create.
For JCRCollection1, which is created automatically by the application, index maintenance is scheduled to run every 60 minutes. To change this frequency, configure it in the scheduler: delete the existing scheduled updates, choose the day, time, and interval, and click Create.
Creating JCR Collection manually for Virtual Portal
The procedure is the same as specified above for the base portal. The crawler URL of a virtual portal differs from that of the base portal.
Note: All the stand-alone environments must use the Search service as Default Search Service.
Base Portal Crawler URL:
http://<server_name>:<port>/wps/seedlist/server?Action=GetDocuments&Format=ATOM&Locale=en_US&Range=5&Source=com.ibm.lotus.search.plugins.seedlist.retriever.jcr.JCRRetrieverFactory&Start=0&SeedlistId=1@OOTB_CRAWLER1
Since the virtual portal uses a different workspace, its workspace ID will differ from that of the base portal, and hence the SeedlistId parameter will be different for a virtual portal.
For example, if the workspace ID of a virtual portal is 3, the crawler URL of the virtual portal would use SeedlistId=3@<unique_identifier>, with all other parameters unchanged.
Steps to determine the workspace ID of the Virtual Portal
1. Enable the JCR text-search trace com.ibm.icm.ts.*=finest under Portal Administration – Portal Analysis – Enable Tracing.
2. Add or modify any WCM document and save it in the virtual portal. The workspace ID information then appears in the trace logs.
In trace.log, you will find the trace information similar to this:
[6/6/14 18:51:04:337 IDT] 000001c3 BaseDBImpl 3 insertSeedlistEvents:Inserted event:Event:action='Update_Node(3)', timestamp='2014-06-06 18:51:04.337', document id=<workspace: 3, itemid: AB001001N13F05B8320005B295>', parentID:<workspace: 3, itemid: >', wsid: 3
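As an illustrative helper (the trace format is taken from the sample line above; this is not an IBM tool), the workspace ID can be pulled out of such a trace line with a regular expression:

```python
import re

def extract_wsid(trace_line):
    """Return the workspace ID from a JCR seedlist trace line, or None."""
    match = re.search(r"wsid:\s*(\d+)", trace_line)
    return int(match.group(1)) if match else None

# Abbreviated version of the sample trace line from the text.
line = ("insertSeedlistEvents:Inserted event:Event:action='Update_Node(3)', "
        "timestamp='2014-06-06 18:51:04.337', document id=<workspace: 3, "
        "itemid: AB001001N13F05B8320005B295>', wsid: 3")
print(extract_wsid(line))
```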