pureScale on Linux
The more I talk to people about pureSystems, the more I think that the old way of provisioning systems is nuts. Do people really need to design a solution, buy the various "bits" like storage, networking, and servers separately from different vendors with different support agreements, cable it all up, and then put a stack of software on there "manually" themselves?
If you went to buy a car and they said you had to do this: figure out what parts you need, buy the engine from one company, the wheels from another, the chassis from yet another, then put it all together yourself and hope it works OK. You would think they were crazy, right?
pureSystems: the smarter way to buy IT! http://www.ibm.com/ibm/puresystems/us/en/index.html
Today I saw that the status of one of our test pureScale instances was red.
Yes, red: it had a red indicator, because this instance is on a PureData System for Transactions, whose DB2 pureScale instances panel gives you a quick look at the status of your instances and of the databases deployed on them.
Connecting to the instance via ssh and running db2instance -list showed that this particular instance was not in good shape:
db2vr1@compute05:/home/db2vr1> db2instance -list
The member, CF, or host information could not be obtained. Verify the cluster manager resources are valid by entering db2cluster -cm -verify -resources. Check the db2diag.log for more information.
Following the tip, I ran db2cluster -cm -verify -resources, which showed that the cluster state was inconsistent. At this point I had a look at the db2diag.log and could see some errors related to cluster resources.
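If you want to pull just those errors out of the log quickly, the db2diag tool can filter by severity and time window. A minimal Python wrapper, assuming db2diag is on the PATH (the one-hour window is just an example):

import subprocess

# Filter the diagnostic log down to recent Error/Severe records.
# db2diag's -level and -H (history) options do the actual filtering.
result = subprocess.run(
    ["db2diag", "-level", "Error,Severe", "-H", "1h"],
    capture_output=True, text=True,
)
print(result.stdout or "No matching db2diag.log records in the last hour.")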
Once I had seen that the issue was with the cluster resources for pureScale, I decided to stop and restart the cluster services with the following commands:
1) Restarting pureScale cluster services
db2vr1@compute05:/home/db2vr1/sqllib/bin> ./db2cluster -cfs -stop -all
All specified hosts have been stopped successfully.
db2vr1@compute05:/home/db2vr1/sqllib/bin> ./db2cluster -cfs -start -all
All specified hosts have been started successfully.
2) Verifying and repairing the instance
At this point, I tried to verify the status of the cluster again with the db2instance command:
db2vr1@compute05:/home/db2vr1/sqllib/bin> db2instance -list
The member, CF, or host information could not be obtained. Verify the cluster manager resources are valid by entering db2cluster -cm -verify -resources. Check the db2diag.log for more information.
Next, I tried to repair the cluster resources with the db2cluster command:
db2vr1@compute05:/home/db2vr1/sqllib/bin> db2cluster -cm -repair -resources
All cluster configurations have been completed successfully.
db2cluster exiting ...
db2vr1@compute05:/home/db2vr1> db2instance -list
ID TYPE STATE HOME_HOST CURRENT_HOST ALERT PARTITION_NUMBER LOGICAL_PORT NETNAME
-- ---- ----- --------- ------------ ----- ---------------- ------------ -------
0 MEMBER STOPPED compute05 compute05 NO 0 0 compute05-net1
1 MEMBER STOPPED compute06 compute06 NO 0 0 compute06-net1
128 CF STOPPED compute05 compute05 NO - 0 compute05-net1
129 CF STOPPED compute06 compute06 NO - 0 compute06-net1
HOSTNAME STATE INSTANCE_STOPPED ALERT
-------- ----- ---------------- -----
compute06 ACTIVE NO NO
compute05 ACTIVE NO NO
3) Starting the instance
The pureScale cluster was healthy again, so I was finally able to start the instance successfully:
db2vr1@compute05:/home/db2vr1> db2start
10/01/2013 13:12:02     0   0   SQL1063N  DB2START processing was successful.
10/01/2013 13:12:03     1   0   SQL1063N  DB2START processing was successful.
SQL1063N DB2START processing was successful.
db2vr1@compute05:/home/db2vr1> db2instance -list
ID TYPE STATE HOME_HOST CURRENT_HOST ALERT PARTITION_NUMBER LOGICAL_PORT NETNAME
-- ---- ----- --------- ------------ ----- ---------------- ------------ -------
0 MEMBER STARTED compute05 compute05 NO 0 0 compute05-net1
1 MEMBER STARTED compute06 compute06 NO 0 0 compute06-net1
128 CF PRIMARY compute05 compute05 NO - 0 compute05-net1
129 CF CATCHUP compute06 compute06 NO - 0 compute06-net1
HOSTNAME STATE INSTANCE_STOPPED ALERT
-------- ----- ---------------- -----
compute06 ACTIVE NO NO
compute05 ACTIVE NO NO
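Should this happen again, the whole sequence is easy to script. Below is a minimal Python sketch of the recovery steps above; it assumes it is run as the instance owner with sqllib/bin on the PATH, and it uses only the commands shown in the transcripts.

import subprocess

def run(cmd):
    """Run one cluster command, echo its output, and stop on failure."""
    print("$ " + " ".join(cmd))
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout)
    result.check_returncode()
    return result.stdout

# The recovery sequence from the walkthrough above.
run(["db2cluster", "-cfs", "-stop", "-all"])
run(["db2cluster", "-cfs", "-start", "-all"])
run(["db2cluster", "-cm", "-repair", "-resources"])
run(["db2start"])

# Crude sanity check: after db2start, no member or CF should be STOPPED.
if "STOPPED" in run(["db2instance", "-list"]):
    print("Something is still stopped - check db2diag.log before going further.")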
Please see here for a form to request access.
I thought you might be interested in a nice little demo tool that is being developed by Jorge Mira and Christopher La Pat here in the lab.
This lightweight demo tool's purpose is to demonstrate the key elements of pureScale workload balancing (WLB) graphically and in real time, on any pureScale database and without changing the database or the application running against it. It is not for production use!
psMon monitors a pureScale cluster from any platform and provides the user with a view of a number of useful pieces of data regarding the cluster's current operating status. The data displayed is broken down into two graphs.
In addition, tables below the system resource graphs show which applications are using the most system resources.
The entire client application is built in Java and is therefore platform independent. It is a lightweight executable JAR file that can easily be run from almost any computer and can connect to any machine on which the PSMServer application has been deployed.
We have heard that a “tens of kilometres” limit applies to the distance between the two sides of a Geographically Dispersed pureScale Cluster (GDPC). But why?
This is based on a physical limitation: the speed of light in glass (fibre), which is about 5 µs/km. From this we can calculate the round-trip time from member to CF at these distances:
3 km = 30 µs
10 km = 100 µs
50 km = 500 µs
100 km = 1000 µs (or 1 ms)
300 km = 3000 µs (or 3 ms)
This will have a significant effect on the performance of the cluster, especially once we get into tens of kilometres. “Normal” RDMA actions are of the order of 15 µs, and the latency for the distance is added on top of that. Compared to a normal pureScale cluster (all in one location), an RDMA action will be slower at a distance roughly as follows (a small calculation sketch follows the lists):
3 km = 3 times slower
10 km = 8 times slower
50 km = 33 times slower
100 km = 66 times slower
300 km = 200 times slower
µs = microseconds (10⁻⁶ seconds)
ms = milliseconds (10⁻³ seconds)
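Putting the two lists together: the round trip is twice the distance at 5 µs/km, and the slowdown is that round trip added to a ~15 µs local RDMA action. A quick back-of-the-envelope sketch in Python (the rounding comes out slightly differently from the figures above):

PROPAGATION_US_PER_KM = 5   # one-way latency of light in fibre, ~5 microseconds/km
BASE_RDMA_US = 15           # typical RDMA action time in a single-site cluster

for km in (3, 10, 50, 100, 300):
    rtt_us = 2 * km * PROPAGATION_US_PER_KM              # member -> CF -> member
    slowdown = (BASE_RDMA_US + rtt_us) / BASE_RDMA_US    # vs. a local RDMA action
    print(f"{km:>3} km: round trip {rtt_us:>4} us, about {slowdown:.0f}x slower")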
We are currently setting up the TPC-C benchmark on the cluster. TPC-C is the standard benchmark for online transaction processing (http://www.tpc.org/tpcc/). We will do some test runs on the pureScale cluster, and some tuning, to see what kind of throughput we can get out of it for typical OLTP workloads. We will start with 4 nodes and the default parameters, then start tuning and tweaking. Please let me know if you would like to hear more.
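While we get TPC-C itself set up, here is the general shape of a raw throughput measurement, as a minimal Python sketch using the ibm_db driver. This is not TPC-C; the connection details, the account table, and the 60-second window are all placeholders.

import time
import ibm_db  # IBM's DB2 driver for Python

# Hypothetical connection details and table - replace with your own.
conn = ibm_db.connect(
    "DATABASE=testdb;HOSTNAME=compute05;PORT=50000;"
    "PROTOCOL=TCPIP;UID=db2vr1;PWD=xxxxxxxx;", "", "")
ibm_db.autocommit(conn, ibm_db.SQL_AUTOCOMMIT_OFF)

stmt = ibm_db.prepare(conn, "UPDATE account SET balance = balance + ? WHERE id = ?")

start, done = time.time(), 0
while time.time() - start < 60:        # drive small transactions for one minute
    ibm_db.execute(stmt, (1, done % 1000))
    ibm_db.commit(conn)                # one small transaction per loop iteration
    done += 1

print(f"{done / 60:.0f} transactions/second from this one client")
ibm_db.close(conn)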
A quick word on the circumstances pureScale is best suited to.
First, what it is not suited to: data warehouse type applications. pureScale is a shared-disk solution and as such is not really suitable for data warehousing, where large transactions tend to be the main workload.
It is suited to OLTP workloads. Ask yourself:
Do you need to come up with a database solution for your application? This could be a new build or a replacement for old hardware and software.
Do you have an application that generates a lot of small or smallish transactions?
Do you need continuous availability and built-in resilience?
Do you need to be able to ramp up the capacity of your system easily in the future, rather than buying now all of the hardware and licenses you might need over the next 2 - 5 years?
If the answer is yes to most of these questions, then pureScale is for you.