Kevin Griffith on building solutions with IBM software
I'm on a WebSphere Portal migration from v6.1.x to v7.0.x. I'm starting with the pre-prod cluster, which has 2 nodes. New VMs are being used for the v7 nodes.
I installed WAS/Portal, IHS, and the plug-in without any problems on each of the nodes. Following the InfoCenter guide, I used the defaults.isBinaryInstall="true" parameter to install only the binary code. With this option no profiles are created during the install process. Profiles and nodes are created later with manageprofiles, as part of the migration procedure for each node, using the WAS 7 migration tools.
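For reference, creating the target profile on a v7 node looks something like this; the profile name, paths, and node name below are examples, not our actual values:

    # Create the target profile on the new v7 node before running the
    # migration tools; all names and paths here are placeholders.
    /opt/IBM/WebSphere/AppServer/bin/manageprofiles.sh -create \
        -profileName wp_profile \
        -profilePath /opt/IBM/WebSphere/wp_profile \
        -templatePath /opt/IBM/WebSphere/AppServer/profileTemplates/managed \
        -nodeName node1 \
        -hostName node1.example.com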
I installed WAS fix pack 19 and Portal fix pack 7.0.0.1. The prerequisites say that WAS FP 13 is the minimum required for the Portal fix pack. I went ahead and installed WAS FP 19 to take advantage of the latest fixes.
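The fix packs were applied non-interactively with the Update Installer, roughly like this; the response file path is a placeholder, and the file itself just names the downloaded .pak and the product location:

    # Apply a fix pack silently with the WAS Update Installer;
    # /tmp/fp19-install.txt is a placeholder response file.
    /opt/IBM/WebSphere/UpdateInstaller/update.sh -silent -options /tmp/fp19-install.txt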
The deployment manager migration went ok using the WASPreUpgrade and WASPostUpgrade tools.
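For anyone following along, the two commands look roughly like this; the backup and install directories are placeholders:

    # On the old v6.1 deployment manager: snapshot the configuration.
    /opt/IBM/WebSphere61/AppServer/bin/WASPreUpgrade.sh \
        /migration/dmgr_backup \
        /opt/IBM/WebSphere61/AppServer

    # On the new v7 deployment manager: import the snapshot
    # into the new profile.
    /opt/IBM/WebSphere/AppServer/bin/WASPostUpgrade.sh \
        /migration/dmgr_backup \
        -profileName Dmgr01 \
        -oldProfile Dmgr01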
The primary and secondary nodes ran into a problem on the WASPostUpgrade step, which failed with an error.
After doing a quick forum and web search to see if this is a known problem and if there is a workaround, I found that the source portal needs to be at a minimum 6.1 fix pack level.
Here is the technote: http://www-304.ibm.com/support/docview.wss?uid=swg21497724
My current project involves setting up WAS 8 environments for the enterprise. Multiple environments need to be installed: training, qualification, integration, pre-prod, and prod, with many application servers in each environment. Eventually there will probably be hundreds of application servers installed. The servers must be physically located in several geographic locations.
The install is scripted so that the basic installations will be identical. The diagram below shows the components.
An install proceeds as follows:
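At the heart of the script is Installation Manager's command-line client, imcl. A minimal sketch of the kind of call the script wraps; the repository URL and paths are placeholders, not our actual values:

    # Silent, repeatable install of the WAS 8 ND binaries; because every
    # node runs the same command, all environments start out identical.
    /opt/IBM/InstallationManager/eclipse/tools/imcl install \
        com.ibm.websphere.ND.v80 \
        -repositories http://repo.example.com/was80 \
        -installationDirectory /opt/IBM/WebSphere/AppServer \
        -sharedResourcesDirectory /opt/IBM/IMShared \
        -acceptLicense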
To follow up on the previous post about the Sametime install, I went back through my notes.
We set up a WebSphere cluster with 2 ST Proxy servers in Europe and 2 ST Proxy servers in Asia Pacific. The ST Console was installed in Europe. We wanted to manage all the servers from a single ST Console.
When we started the members of the cluster, we saw errors in the log saying the members in AP failed to join or establish a view with the dmgr in Europe. After analysis and testing we concluded that network delays between the geographies were the cause. To get around this we deactivated the HA Manager. Doing so had the side effect of losing session failover, because session tokens were no longer replicated between cluster members: when a user was sent to another server, they got a message saying the session had expired. By activating sticky sessions on the load balancer we were able to keep each user on the same server, so under normal operating conditions sessions are not lost. If a server goes down, the session is still lost when the user is switched to another server, but fortunately this is a rare occurrence.
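For reference, the HA Manager can be disabled per server from wsadmin. A minimal Jython sketch, with example node and server names:

    # wsadmin -lang jython: turn off the HA Manager service on one server.
    # Node and server names are examples; repeat for each cluster member.
    server = AdminConfig.getid('/Node:apnode1/Server:STProxy1/')
    ham = AdminConfig.list('HAManagerService', server)
    AdminConfig.modify(ham, [['enable', 'false']])
    AdminConfig.save()
    # A server restart is required for the change to take effect.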
A better ST deployment architecture across distant geographies would be to set up a separate cluster in each geography. It is possible to have a single ST Console govern servers that are not in the same cluster. In our case this would mean having a cluster in Europe with the dmgr, the ST Console, and the 2 ST Proxy servers, and in AP a cluster with its own dmgr, no ST Console, and the 2 ST Proxy servers. The AP servers would be registered with the ST Console in Europe, which allows policies to be managed from one ST Console.