I recently created a post about the XIV Host Attachment Kit (amusingly called the HAK). IBM has released an update to the HAK, taking us from version 1.5 to version 1.6. The updated versions, along with release notes and installation instructions, can be found at the following links:
IBM XIV Host Attachment Kit for AIX, Version 1.6
IBM XIV Host Attachment Kit for HP-UX, Version 1.6
IBM XIV Host Attachment Kit for RHEL, Version 1.6
IBM XIV Host Attachment Kit for SLES, Version 1.6
IBM XIV Host Attachment Kit for Windows, Version 1.6
What's changed, you ask? Great question! Checking the release notes for each operating system (found at the links above), I found some improvements to the HAK that are common to every OS:
- The xiv_diag command now provides the HAK version number when used with the --version argument. This is handy to confirm what version of HAK you are currently running.
- More information is collected with the xiv_diag command.
- The xiv_devlist command can now display LUN sizes in different capacity units, by using the -u or --size-unit argument. I give an example below.
Usage: -u SIZE_UNIT, --size-unit=SIZE_UNIT
Valid SIZE_UNIT values: MB, GB, TB, MiB, GiB, TiB
- The xiv_devlist output can be saved to a file in CSV or XML format, by adding the -f or --file argument. I give an example below.
There are also several other fixes, which are mainly common across operating systems. Given that a major part of the HAK consists of Python scripts such as xiv_attach, xiv_devlist and xiv_diag, and given that the output and behavior of these scripts are very similar on each OS, this is not surprising.
I installed the new version 1.6 HAK onto my 64-bit Windows 2008 server and found another pleasant surprise: when I ran the xiv_attach command it detected that my QLogic driver was downlevel. In this example it detected I was running a QLogic QLE2462 on driver version 9.18.25 and suggested I should instead run driver version 9.19.25.
I then tried out the xiv_devlist command, displaying volume sizes in both decimal (GB) and binary (GiB). Note the syntax I used to get the GiB output: xiv_devlist -u GiB
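If you are wondering why the same volume shows two different numbers in GB versus GiB, here is a small Python sketch of the conversion (my own illustrative helper, nothing to do with the HAK itself):

```python
# Convert a raw byte count into decimal (GB) and binary (GiB) units.
# Illustrative helper only -- not part of the XIV HAK.

def to_gb(num_bytes: int) -> float:
    """Decimal gigabytes: 1 GB = 10**9 bytes."""
    return num_bytes / 10**9

def to_gib(num_bytes: int) -> float:
    """Binary gibibytes: 1 GiB = 2**30 bytes."""
    return num_bytes / 2**30

# A volume sold as "17 GB" is noticeably smaller expressed in GiB:
size = 17 * 10**9
print(f"{to_gb(size):.1f} GB == {to_gib(size):.1f} GiB")
```

The roughly 7% gap between the two units is exactly why xiv_devlist lets you pick which one to display.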
Finally I offloaded the output of the xiv_devlist command to a CSV file. Again please note the syntax as you may find it useful:
xiv_devlist -t csv -f devlist.csv -u GiB
You could use -t xml instead, to get XML rather than CSV output. Clearly you could also change the file name devlist.csv to any file name you like.
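Once the output is in a CSV file it becomes easy to post-process with a few lines of Python. A sketch using the standard csv module follows; note the column names here are invented for illustration, so substitute the header row from your own devlist.csv:

```python
import csv
import io

# Parse a devlist-style CSV export. The column names below are
# made up for illustration; use the header row from your own file.
sample = io.StringIO(
    "Device,Size,Paths\n"
    "vol_01,48.2GiB,4/4\n"
    "vol_02,96.4GiB,4/4\n"
)

rows = list(csv.DictReader(sample))
for row in rows:
    print(row["Device"], row["Size"], row["Paths"])
```

In practice you would pass open("devlist.csv") instead of the in-memory sample.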
You do not need to worry about which version of firmware your XIV is running. The release notes confirm HAK version 1.6 will work with XIV firmware 10.1.0, 10.2.0, 10.2.2, 10.2.4 and 10.2.4a, which should cover pretty well every machine in the world.
One final note: Under Known Limitations the release notes state that you should not map a LUN0 volume. This simply means leaving LUN0 disabled (which is the default). In the example below I start mapping volumes from LUN1 and have NOT clicked to enable mapping of volumes to LUN0. This should be the norm.
Any confusion or questions? You know where to find me.
As Barry Whyte pointed out in this blog post, the release 6.2 code is available for download and installation onto your SVC and Storwize V7000.
- The Storwize V7000 release of the 6.2 code is here.
- The SVC release of the 6.2 code is here.
I thought I would quickly check out two of the announced features of the 6.2 release: the new Performance Monitor panel and support for greater than 2 TiB MDisks. So on Sunday I got busy and upgraded my lab Storwize V7000 to version 220.127.116.11.
Remember that in nearly every respect the firmware for the SVC and Storwize V7000 is functionally identical, so while I am showing you a Storwize V7000, this equally applies to an SVC.
Firstly I tried the performance monitor panel, and what better way to show you what I saw than on YouTube? This is my first YouTube video so please forgive me if it's not slick. I started the performance monitor and captured two minutes of performance data using Camtasia Recorder. Because it is fairly boring to stare at graphs slowly moving right to left, I then sped it up eight times, and this is the result:
The video is shot in HD, so if what you're seeing is grainy or hard to read, change the display to 720p or 1080p. Now if you want to see the performance monitor at its actual speed, here is the original normal speed video. Remember this is the same video as above, just slower. It can also be viewed in 720p.
So what are you seeing?
- The top left hand quadrant is CPU utilization.
- The top right hand quadrant is volume throughput in MBps as well as current volume latency and current IOPS.
- The bottom left hand quadrant is Interface throughput (FC, SAS and iSCSI).
- The bottom right hand quadrant is MDisk throughput in MBps as well as current MDisk latency and current IOPS.
You will note that each metric has a large number (which is the current metric in real time) and a historical graph showing the previous five minutes. You can also change the display to show either node in the I/O group.
I found the monitor to be genuinely real time: the moment I changed something in the SAN (such as starting or stopping IOMeter or starting or stopping a Volume Mirror), I immediately saw a change.
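You can model that "big current number plus five minutes of history" display with a simple ring buffer. This is a sketch of the idea only, not how the product actually implements it:

```python
from collections import deque

# Keep the most recent five minutes of one-per-second samples:
# the newest value is the big "current" number, and the whole
# deque feeds the scrolling five-minute graph.
WINDOW_SECONDS = 5 * 60
history = deque(maxlen=WINDOW_SECONDS)

# Feed in 400 fake IOPS samples; the deque silently discards
# anything older than the five-minute window.
for iops in range(400):
    history.append(iops)

current = history[-1]
print(f"current={current}, samples kept={len(history)}")
```

The deque's maxlen does the aging automatically, which is why the graph always shows exactly the previous five minutes.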
Greater than 2 TB MDisk support
Next I logged onto my lab DS4800 and created two 3.3TiB volumes to present to the Storwize V7000. I chose this size because I had exactly 6.6 TiB worth of available free space on the DS4800 and I wanted to demonstrate multiple large MDisks. On versions 6.1 and below, the reported size of the MDisks would have been 2 TiB (as I discussed here). Now that I am on release 6.2 with a supported backend controller, I can present larger MDisks. In the example below you can clearly see that the detected (and usable) size is 3.3 TiB per MDisk.
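As an aside, that old 2 TiB ceiling is exactly what you get from 32-bit block addressing of 512-byte sectors (that's my assumption about the underlying cause; the release notes don't spell it out), and the arithmetic is neat:

```python
# 2**32 addressable blocks x 512 bytes per block = 2 TiB.
# Sketch of where a 2 TiB LUN-size ceiling comes from, assuming
# 32-bit logical block addressing of 512-byte sectors.
BLOCK_SIZE = 512
MAX_BLOCKS_32BIT = 2**32

limit_bytes = MAX_BLOCKS_32BIT * BLOCK_SIZE
limit_tib = limit_bytes / 2**40
print(f"32-bit LBA limit: {limit_tib:.0f} TiB")
```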
What controllers are supported for huge MDisks?
The supported controller list for large MDisks has been updated. The links for Storwize V7000 6.2 are here and for SVC here. If your backend controller is not on the list, then talk to your IBM Sales Representative about submitting a support request (known as an RPQ).
If you're reading my blog, you're probably interested in IBM Storage hardware (since apart from Bow Ties, that's all I talk about). So I would hope you're already subscribed to IBM's notification service, which you will find here. Rob Jackard from the ATS Group (an IBM Business Partner based in the USA) puts together a summary of these notifications which he sends to me on a regular basis. So I am bringing them to you here. Now hopefully none of these alerts are news to you... but please, have a read and if you have not done so already.... SUBSCRIBE!
DS3000 / DS4000 / DS5000:
(2011.06.09) IBM Retain Tip# H202771 – Expanding Dynamic Capacity Expansion (DCE) large arrays may fail due to out of memory conditions.
NOTE: The 7.xx firmware for the DS Storage Controller is affected. This is a permanent restriction. Possible workarounds are available.
(2011.05.27) Documentation: Instructions for opening the IBM System Storage DS Storage Manager interface are incorrect.
(2011.05.25) DS3950 / DS4000 / DS5000 Recommended Firmware Levels.
(2011.05.18) IBM Retain Tip# H202849 – Dynamic Volume Expansion is not possible on a LUN which is in an active mirror relationship with write-mode of ‘Asynchronous not write-consistent’.
NOTE: The DS Storage Controller is affected. A workaround is available.
(2011.05.10) IBM Retain Tip# H202771- Expanding (DCE) large arrays may fail due to out of memory conditions.
DS8000 / DS6000:
(2011.06.07) DS8800 Code Bundle Information.
(2011.06.03) EXN3500 (2857-006) Storage Expansion Unit Publication Matrix.
(2011.05.19) Excessive drive spinning up (0x2 – 0x4 0x1) messages on healthy EXN3000.
(2011.05.05) NEWS: Recommended Releases for IBM System Storage N series Data ONTAP.
(2011.04.28) DataFabric Manager (DFM) 4.0.2 Publication Matrix.
(2011.05.10) Cisco MDS Field Notice: FN-63416 – DS-C9124 & DS-C9148 have incorrect MAC Programming; UMPIRE Program in Place.
(2011.05.19) Intel has reported PAGE FAULT OR CORRUPTED DATA USING 64-BIT APP IN 64-BIT NOS (Fix is Now Available).
SVC / Storwize V7000:
(2011.06.10) IBM SAN Volume Controller Code V18.104.22.168.
(2011.06.10) IBM Storwize V7000 Code V22.214.171.124.
(2011.06.10) SAN Volume Controller and Storwize V7000 Software Upgrade Test Utility V6.5.
(2011.06.10) IBM Storwize V7000 Initialization Tool.
(2011.06.10) Storwize V7000 Concurrent Compatibility and Code Cross Reference.
(2011.06.10) IBM Storwize V7000 V6.2.0 – Installable Information Center and Guides.
(2011.06.10) IBM System Storage SAN Volume Controller and Storwize V7000 V6.2 – Command-Line Interface Guide.
(2011.06.10) IBM System Storage SAN Volume Controller and Storwize V7000 V6.2 – Troubleshooting Guide.
(2011.06.10) Incorrect Usage of Drive Upgrade Command May Cause Loss Of Access to Data.
NOTE: This issue was resolved by APAR IC74636 in the V126.96.36.199 release of the Storwize V7000 software.
(2011.06.10) Storwize V7000 and SAN Volume Controller FlashCopy Replication Operations Involving Volumes Greater Than 2 TB in Size Will Result in Incorrect Data Being Written to the FlashCopy Target Volume.
NOTE: This issue is fixed by APAR IC76806 in the 188.8.131.52 and 184.108.40.206 PTF releases.
(2011.06.08) IBM SAN Volume Controller Code V220.127.116.11.
(2011.06.08) IBM Storwize V7000 Code V18.104.22.168.
(2011.05.27) IBM SAN Volume Controller Code V22.214.171.124.
(2011.05.27) IBM Storwize V7000 Code V126.96.36.199.
(2011.05.27) Storwize V7000 Systems Running V188.8.131.52-V184.108.40.206 Code May Shut Down Unexpectedly During Normal Operation, Resulting in a Loss of Host Access and Potential Loss of Fast-Write Cache Data.
NOTE: If a single node shutdown event does occur when running V220.127.116.11, this node will automatically recover and resume normal operation without requiring any manual intervention. IBM Development is continuing to work on a complete fix for this issue, to be released in a future PTF, however customers should upgrade to V18.104.22.168 to avoid an outage.
(2011.05.09) SVC V4.3.x End of Service – April 30, 2012.
SSPC / TPC / TPC-R:
(2011.06.10) Administration of the TPC Environment: A Guide for TPC Administrators.
(2011.06.08) TPC4.2.x - Supported Storage Products Matrix.
(2011.06.08) TPC 4.1.x – Supported Storage Products List.
(2011.05.23) Shutdown sequence for TPC for Replication.
(2011.05.19) Fabric probe causes instability in Brocade DCFM server.
NOTE: Brocade Defect 332161 has been identified and is resolved in DCFM version 10.4.5.
(2011.05.19) Configuring Oracle for TPC for Databases.
(2011.05.17) TPC web browser support – Firefox 4.x and Internet Explorer 9.
(2011.05.06) TPC- Resolving Issues with Cisco Switches.
(2011.05.05) How to resolve TSPC Server service start problems when TIVGUID is mistakenly uninstalled.
(2011.04.29) Tivoli Storage Productivity Center v4.1.1 Fix Pack 6 (April 2011).
(2011.04.29) TPC database size increases after upgrade to 4.2.1.
(2011.06.09) IBM XIV Host Attachment Kit for AIX v1.6.
(2011.05.24) Potential problem on XIV Storage Systems running microcode versions 10.2.2 through 10.2.4a that can be caused by changing system time via Network Time Protocol (NTP) or when changing the clock via XCLI.
(2011.05.10) IBM XIV Storage System Planning Guide.
This is a Severity 1 issue at 2am
I wrote a blog post recently about my favourite podcasts. One of those I listed was Background Briefing, a radio program broadcast by the Australian Broadcasting Corporation (Australia's ABC). A recent episode entitled Fatigue Factor really sparked my interest. It talked about the effects of fatigue on professions such as:
- Air Traffic Controllers
- Train drivers
- Truck drivers
It contained some alarming facts about the potential effects of fatigue and is well worth taking the time to listen to. However, in my opinion there was one major omission:
It did not mention workers in the IT industry.
For many years I worked as the Account Engineer for several of IBM's System z customers, mainly banks. Most weekends I skipped Saturday night as a sleep night. If I was lucky I might get to sleep from 10pm to 1am and would then head off to vast, noisy, dehydrating air conditioned computer rooms to perform various system changes. If I did my job well, had no hardware issues and the client confirmed everything was running as expected, I got to head home about 7am on Sunday. So that night I would have slept somewhere between zero and three hours. I would then spend the rest of the week recovering, before doing it all over again the following weekend at a different customer.
I mention all of this because fatigue was something I learnt to live with. Even when I moved to a support role, I still occasionally worked through the night on critical situations (something IBM calls Crit Sits). I also worked on a support roster which could involve 3am callouts to assist my fellow IBMers across the Asia Pacific region. So when I later moved to a Pre-Sales role, it certainly did wonders in helping me re-establish normal sleep patterns.
Listening to this podcast really brought home to me that the IT industry is just as guilty of failing to deal with fatigue as the other industries that the podcast discusses. Now if you're thinking this means it's an IBM problem, think again. Most weekends I was working alongside representatives from EMC, HDS, StorageTek, etc. Plus of course there were the clients themselves, many of whom were also missing a night's sleep to satisfy their change and business requirements.
One of the major issues raised in the podcast is that there is no accepted way to measure how fatigued an employee actually is. This is a major problem. There are established tests to confirm how affected someone is by alcohol or by drugs, but we cannot easily confirm how badly fatigued a worker is; plus many people are unwilling or unable to admit that they are suffering from fatigue.
If we think about many of the major IT related outages that have occurred recently, I ponder what role fatigue played in each one. Even if it didn't cause the initial issue, did making your employees work around the clock to resolve an issue actually extend the outage time? For example, have a read of Amazon's explanation of its recent Service Disruption. Just picking on some of the lines in the report:
At 12:47 AM PDT on April 21st, a network change was performed...
At 2:40 AM PDT on April 21st, the team deployed a change...
By 5:30 AM PDT, error rates and latencies again increased ....
At 11:30AM PDT, the team developed a way to prevent....
Was the person doing the change working out of their usual sleep pattern? Was the team working to resolve the issue working out of their normal sleep pattern? Did fatigue compound the outage? It's an interesting idea. Now it may well be that fatigue had NOTHING to do with this outage. It is pure speculation on my part. But I am certain that the root causes of many of the recent IT meltdowns and their extended after-effects (such as Sony's ongoing issues) MUST include the debilitating effects of fatigue.
Plus here is another rather disturbing fact. To quote from the podcast:
... if you're sleep deprived, you're more likely to crave chips over lettuce, and feel less like climbing the stairs. And that can become a vicious cycle, because many people who are overweight are even more prone to sleep disorders....
So please take the time to listen to the podcast. You will find it here and in places like iTunes.
I joined IBM on June 26, 1989, so this Sunday brings up my 22-year anniversary with the company. No small achievement, but I am still three years away from the mystical IBM Quarter Century Club. Of course for some, 22 years is nothing! I recently learned that Robert Neidig, who has been (and remains) a leading light in promoting IBM's Mainframe products, joined IBM on June 21, 1961. So this year brings up his 50th anniversary with the company!
For those with long memories, Bob has worked with the following IBM systems: 1401, 1410, S/360, S/370, 3031, 3032, 3033, 3081, 3083, 3084, 3090, ES9000, S/390, eServer zSeries, and System z. They have all been enhanced by Bob's contributions.
If you want to check out the history of some of these world-changing products, visit The IBM Mainframe Room. I particularly loved the Photo Album. There are some truly classic images of IBM products of old. If you're forward-looking, feel free to also visit the System z homepage.
So thanks Bob for your commitment and leadership on your half-centennial, truly a remarkable achievement!
If you're a user of XIV, or you're considering purchasing an XIV, then there is one tool that you will truly love. It's called XIVTop. The XIVTop application comes packaged with the XIV GUI and is one of the handiest add-ons I have ever seen. It lets you monitor your XIV in real time, seeing exactly how much I/O or throughput is being achieved and at what response time (in milliseconds). You can immediately answer questions like:
- Is poor application response time being caused by poor storage response time?
- What application is currently generating so much traffic on the SAN?
- What effect has performing file de-fragmentation had on performance?
- Are the backups running and how much traffic are they generating?
- What happens when I run multiple application batch jobs at the same time?
The ability to get this information in real time is what makes XIVTop so invaluable.
So in the tradition of always pushing my boundaries, I thought I would create a narrated video about XIVTop. What I discovered is just how terribly hard doing narrated videos is: You need to write a script... you need to stick to the script... you need to not fluff any words... you need to speak slowly and clearly and not start talking in a strange accent. I had trouble with all of these, so I made take after take after take after take, until I was heartily sick of the process. I now have a much greater respect for newsreaders and film actors. This narration stuff is hard!
So please check out my final take. It's still far from perfect, but all feedback is very welcome. The only other thing that is quite strange is YouTube's choice of videos to watch after mine. It's worth watching just to see the list. I think the term performance confuses the algorithm.
I recently got a great email from an IBMer in the Netherlands by the name of Jack Tedjai. He sent me two screen shots, taken with the new performance monitor panel (that comes with the SVC and Storwize V7000 6.2 code). He wrote:
I am working on a project to migrate VMware/SRM/DS5100 to SVC Stretch Cluster and one of the goals is to prevent using ISL (4Gbps) and VMware Hypervisor/HBA load during the migration. For the migration we are using VMware Storage vMotion. To minimize the impact of the migration on production, we tested VAAI for Storage vMotion and template deployment and it worked perfectly.
So what's this all about? Well, one of the improvements provided with VAAI support is the ability to dramatically offload the I/O processing generated by performing a storage vMotion. Normally a storage vMotion requires an ESX server to issue lots of reads from the source datastore and lots of writes to the target datastore. So there is a lot of I/O flowing from ESX to the SVC, and then from the SVC to its backend disk. What you get is something that looks like the image below. In the top right graph we have traffic from SVC to ESX (host to volume traffic). In the bottom right graph we have traffic from the SVC to its backend disk controllers (DS5100 in this case). This is SVC to MDisk traffic.
When we add VAAI support to the SVC, we suddenly change the picture. Suddenly VMware does not need to do any of the heavy lifting. There is almost no I/O between VMware and the SVC (no host to SVC volume traffic) related to the vMotion. The SVC is still doing the work, but it is happening in the background without burning VMware CPU cycles or HBA ports (though there is still SVC to MDisk traffic).
This difference translates to: Faster vMotion times, far less SAN I/O and far less VMware CPU being used on this process.
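To picture the saving in fabric traffic, here is a back-of-envelope sketch; the VM size and the near-zero figure are purely illustrative, not measurements:

```python
# Rough host-side I/O for moving a VM between datastores, with
# and without VAAI offload. Figures are illustrative, not measured.
vm_size_gb = 100  # hypothetical VM being storage-vMotioned

# Without VAAI: ESX reads every block from the source volume and
# writes it to the target volume -- twice the data over the fabric.
host_io_no_vaai_gb = vm_size_gb * 2

# With VAAI: the copy runs inside the SVC, so host-side traffic
# for the move itself is close to zero (only control commands).
host_io_vaai_gb = 0

print(f"Host I/O without VAAI: ~{host_io_no_vaai_gb} GB")
print(f"Host I/O with VAAI:    ~{host_io_vaai_gb} GB")
```

The SVC-to-MDisk traffic still happens either way; it is the host-to-SVC leg that disappears, which is exactly what Jack's screen shots show.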
So do VMware support this? They sure do! Check this link here. It currently shows something like this (taken on June 23, 2011):
So what are your next steps?
- Upgrade your Storwize V7000 or SVC to version 6.2 code. Download details are here.
- Download and install the VAAI driver onto your ESX servers. You can get it from here. If you're already using the XIV VAAI driver you need to upgrade from version 22.214.171.124 to version 1.2. There is an installation guide at the same link.
And the blog title? It means friendly greetings in Dutch. So to Jack (and to all of you), vriendelijke groeten and please keep sending me those screen captures.
I thought I would write a quick post about an issue that's not new, but is certainly worth being aware of....
One of the interesting tricks with the change to 8 Gbps Fibre Channel is that it required a change to the way the switch handles its idle time... the quiet time when no one is speaking and nothing is said. In these periods of quiet contemplation, a fibre channel switch will send idles. When the speed of the link increased from 4 Gbps to 8 Gbps, the bit pattern used in these idles proved to not always be suitable, so a different fill pattern was adopted, known as an ARB. All of this came to intrude on our lives when it became apparent that some 8 Gbps storage devices were having trouble connecting to IBM branded 8 Gbps capable Brocade switches because of this change. This led to two things:
- IBM released several alerts regarding how to handle the connection of 8 Gbps capable devices to 8 Gbps capable fibre channel switches.
- Brocade changed their firmware to better handle this situation.
An example of what was said?
"Starting with FOS levels v6.2.0, v6.2.0a & v6.2.0b, Brocade introduced arbff-arbff as the new default fillword setting. This caused problems with any connected 8Gb SVC ports and these levels are unsupported for use with SVC or Storwize V7000.
In 6.2.0c Brocade reintroduced idle-idle as the default fillword and they also added the ability to change the fillword setting from the default of idle-idle to arbff-arbff using the portcfgfillword command. For levels between 6.2.0c and 6.3.1 the setting for SVC and Storwize V7000 should remain at default mode 0.
From FOS v6.3.1a onwards Brocade added two new fillword modes with mode 3 being the new preferred mode which works with all 8Gb devices. This is the recommended setting for SVC and Storwize V7000"
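The quoted advice boils down to a simple decision by FOS level (it only covers FOS 6.2.0 and later). Here is a Python sketch of that decision, with deliberately simplified version parsing:

```python
# Map a Brocade FOS version (6.2.0 or later) to the SVC/Storwize
# V7000 fillword advice quoted above. Version parsing is
# simplified for illustration only.

def parse_fos(version: str):
    """Turn a string like '6.3.1a' into a comparable tuple."""
    head = version.rstrip("abcdefghijklmnopqrstuvwxyz")
    suffix = version[len(head):]
    parts = tuple(int(p) for p in head.split("."))
    parts = parts + (0,) * (3 - len(parts))  # pad to x.y.z
    # Append the trailing letter as a number so '6.2.0c' > '6.2.0a'
    return parts + ((ord(suffix) - ord("a") + 1) if suffix else 0,)

def fillword_advice(version: str) -> str:
    v = parse_fos(version)
    if v < parse_fos("6.2.0c"):
        return "unsupported with SVC/Storwize V7000"
    if v < parse_fos("6.3.1a"):
        return "leave at default mode 0 (idle-idle)"
    return "set mode 3 via portcfgfillword"

for ver in ("6.2.0a", "6.2.0c", "6.3.1", "6.3.1a", "6.4.2a"):
    print(ver, "->", fillword_advice(ver))
```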
So there are several tips that I will point you to, depending on your product of interest:
Brocade Release Notes
For most environments, Brocade recommends using Mode 3, as it provides more flexibility and compatibility with a wide range of devices. In the event that the default setting or Mode 3 does not work with a particular device, contact your switch vendor for further assistance. IBM publishes all the release notes for Brocade Fabric OS here.
Check out this link if you're connecting an 8 Gbps capable DS3500, DS3950 or DS5000 to an 8 Gbps capable switch: http://www-947.ibm.com/support/entry/portal/docdisplay?brand=5000028&lndocid=MIGR-5083089
There is no tip for the DS8800 but the advice remains effectively the same as for the Storwize V7000. I can confirm that using a fill word setting of 3 works without issue.
SAN Volume Controller or Storwize V7000
Check out this link if you're connecting a Storwize V7000 or CF8 or CG8 SVC node to an 8 Gbps capable switch: https://www-304.ibm.com/support/docview.wss?uid=ssg1S1003699&wv=1
The XIV Gen3 comes with 8 Gbps capable Fibre Channel connections. It does not support idle fill words, meaning that the portCfgFillWord value should not be set to 0.
When an IBM System z server attaches an 8 Gbps capable FICON Express-8 CHPID to a Brocade switch with 8 Gbps capable SFPs, you should upgrade your switches or directors to Fabric OS (FOS) 6.4.0c or 6.4.2a and set the fill word to 3 (ARBff).
LTO5 and TS1140
IBM have two tape drives that are capable of 8 Gbps, the LTO-5 drive and the TS1140. Setting the fill word to 3 can actually cause issues with these drives. To avoid issues do one of the following (you only have to do one of these, not all three):
- Load the tape drive with firmware that has the access fairness algorithm fix for loop:
- LTO5 drives should be on BBN0 and beyond (you may need to contact IBM support to get this code).
- TS1140 drives should be on drive firmware 5CD or beyond.
- Change the Fibre Channel topology to point-to-point (N port) (as opposed to L or NL). This is my preferred option.
- Change the Fibre Channel speed to 4Gbps. This sounds slightly retrograde, but it is very rare for an individual drive to sustain a speed above 400 MBps (unless your data is very very compressible).
**** UPDATED 28 Feb 2012 - Added System z FICON and Tape info ****
The Storwize V7000 and SVC release 6.1 introduced a new web GUI to assist with service issues, known as the Service Assistant. The Service Assistant is a browser-based GUI that is used to service your nodes. Much of what you traditionally did with the SVC front panel can now be done using the Service Assistant GUI. You can see a screen capture of the Service Assistant below:
While I would like to be optimistic and hope that you will never have to use the Service Assistant, you should always ensure your toolkit is equipped with every possible tool. I say this because one thing I have noted is that the majority of installs are not configuring the Service Assistant IP addresses. This is particularly apparent as clients upgrade their SVC clusters to release 6.1.
By default on Storwize V7000, the Service Assistant is accessible on IP addresses https://192.168.70.121 for node 1 and https://192.168.70.122 for node 2 (don't try to point your browser at them right now, as your network routing won't work - you would need to set your laptop IP address to the same subnet and be on the same switch; details to do that are here). For SVC there are no default IP addresses, although we traditionally asked the client to configure one service address per cluster. The best thing for you to do is approach your network admin and ask for two more IP addresses for each Storwize V7000 and/or SVC I/O group. Once you have these two extra IP addresses, record them somewhere and then set them using the normal GUI.
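If you do want to try the default service addresses directly, your machine must sit in the same subnet. Python's ipaddress module can sanity-check this (the laptop address below is hypothetical):

```python
import ipaddress

# Check whether a laptop address sits in the same /24 as the
# default Storwize V7000 service addresses. The laptop IP is a
# made-up example; the /24 mask is an assumption for this sketch.
service_ips = ["192.168.70.121", "192.168.70.122"]
laptop = ipaddress.ip_interface("192.168.70.50/24")

for ip in service_ips:
    reachable = ipaddress.ip_address(ip) in laptop.network
    print(ip, "reachable on local subnet:", reachable)
```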
It's an easy five-step process as shown in the screen capture below. Go to the Configuration group and then choose Network (step 1). From there select Service IP addresses (step 2) and the relevant node canister (step 3). Choose port one or port two (step 4) and then set the IP address, mask and gateway (step 5).
You can also set them using CLI (replace the word panelname with the panel name of each node, which you can get using the svcinfo lsnode command).
satask chserviceip -serviceip 10.10.10.10 -gw 10.10.10.1 -mask 255.255.255.0 panelname
If you forget these IP addresses, you can reset them using the same CLI commands or using the Initialization tool as documented here.
Finally having set the IP addresses, visit the service assistant by pointing your browser at each address. This is just to confirm you can access it. You logon with your Superuser password. With the process complete, ensure the IP addresses are clearly documented and filed away. So now if requested, you will be able to perform recovery tasks (in the unlikely chance they are needed). If for some reason your browser keeps bringing you to the normal GUI rather than the Service Assistance GUI, just add /service to the URL, e.g. browse to https://10.10.10.10/service rather than https://10.10.10.10.
So what should you do now?
If you're an SVC customer on SVC code version v5 and below, please get two IP addresses allocated for each SVC I/O group now, so you can set them the moment your upgrade to V6 is complete.
If you're an existing Storwize V7000 client or an SVC client already on V6.1 or V6.2 code, then hopefully you have already set the service IP addresses. If not, please do so and test them.
I read a great blog post recently on Written Impact that talked about how to create effective presentations. It's well worth reading and can be found here. They describe several different formats that will help you develop interesting presentations, ones that don't put your subjects to sleep.
Talking of presenting, I recently presented at the IBM Power and Storage Symposium in Manila. It was a great event and was very well attended. We even had cake to celebrate IBM's 100th birthday.
There are two IBM Symposiums coming up in Australia that I would love for you to attend:
The next IBM Power Systems Symposium will be held in Sydney running from August 16 to 19, 2011. We are currently finalizing the agenda on this one and while this symposium is dedicated mainly to IBM Power Systems... I will be attending and presenting on storage related topics. To check out the details and enroll, please head over to here.
An IBM Storage Symposium will be held in Melbourne running from November 15 to 17, 2011. The agenda is still being set, so if you have ideas about what you would like to see, please let me know. To check out the details and enroll, please head over to here. And yes! I will be attending and I will be presenting.
I have an admission: I am a bit of an Apple fanboy. Well, actually not a full-on Apple fanboy: I have an iPhone and an iPad but I don't have a MacBook (although if IBM start offering a cash payment instead of giving laptops to mobile employees, that might change). But not everything is perfect in the land of Apple. Let me give you an example, one that I routinely find people are not aware of (apologies if you learnt all of this months ago).
The picture below appears to show three identical Apple charger packs (with Australian pins). You may have a similar collection. But are they all identical? Sadly not.
Only those with very good eyes can spot the difference by reading the rather pale decal on the bottom section of each charger. The text is so small and faint, I struggled to take a decent picture, but here is my sad attempt for one of them (they are all different):
So how are my three power adapters different?
- The first is marked as a 10 Watt USB Power Adapter (it came with an iPad). Its output is amusingly marked as 5.1 Volts DC at 2.1 Amps, which suggests 10.7 Watts.
- The second one is marked as a 5 Watt USB Power Adapter (it came with my iPhone). Its output is marked as 5 Volts DC at 1 Amp, which is indeed 5 Watts.
- The third is marked as an iPod USB Power Adapter. No stated wattage, but its output is marked as 5 Volts DC at 1 Amp, which again suggests 5 Watts. So perhaps my 5 Watt adapter and my iPod adapter are actually the same.
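The wattage figures above are nothing more than volts times amps, which a couple of lines of Python confirms:

```python
# Power (watts) = volts x amps, for each of the three adapters,
# using the figures printed on the decals.
adapters = {
    "iPad 10W": (5.1, 2.1),
    "iPhone 5W": (5.0, 1.0),
    "iPod": (5.0, 1.0),
}
for name, (volts, amps) in adapters.items():
    print(f"{name}: {volts * amps:.2f} W")
```

Note the "10 Watt" adapter really delivers 10.71 W, which is why its decal reads amusingly.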
The big question that comes up: Are they interchangeable? The answer: Yes but with caveats.
If you have an iPad you should use the 10W adapter. If you use the 5W adapter it will still charge but at a much slower rate. Apple confirm this here where they state:
iPad will also charge, although more slowly, when attached to an iPhone Power Adapter (by which they mean a 5 Watt adapter).
If you have an iPhone or an iPod can you use the 10W adapter? The answer is yes! It will recharge with no ill effects. Apple confirm this here, where they state:
While designed for use with the iPad, you can use the iPad 10W USB Power Adapter to charge all iPhone and iPod models.
So I am putting my 5 Watt and iPod adapters in the cupboard and using the 10 Watt adapter exclusively. If you have an iPad and find it's recharging slowly, you may be using an older 5 Watt adapter (but you may need a magnifying glass to spot the difference!).
My suggestion to Apple? A few more cents worth of ink please, to make things more obvious.
To close, on my first Apple focused blog entry, let me pose a question:
Will it blend?
Tivoli Pulse is coming to Melbourne July 27 and 28, 2011 at the Crown Promenade in Melbourne.
How many chances do you get to listen to the following speakers in one place?
- Nigel Phair, Director, Centre for Internet Safety, University of Canberra
- Steve Van Aperen, Human Lie Detector and Director of SVA Training and Australian Polygraph Services
- Laura Guio, Vice President, Storage Sales, STG Growth Markets, IBM
- Joao Perez, Vice President of Worldwide Tivoli Software Sales, IBM USA
- Jamie Thomas, Vice President Tivoli Strategy and Development, IBM USA
- Glenn Wightwick, Director, IBM R&D Australia
There are 12 customer case studies, presented mainly by the customers themselves. There are over 70 sessions in eight streams, with expert keynote speakers, case studies and presentations. Pulse 2011 provides the tools to help advance your infrastructure goals.
Registration is free, so what excuse do you have? Find out more here. You can enroll here. The agenda front page is here. The detailed agenda is here.
I will be presenting on day two, talking about Virtualization and the Storwize V7000, so maybe I will see you there!
Bob Leah is one of our leading lights in the developerWorks team. His blog (found here) is a great resource for Web designers. He recently created a new set of templates to enable a mobile page for developerWorks blogs. You can read his article about the new template here.
This morning I boldly went and installed the new templates and so far I think it looks fantastic, not only on the iPhone, but also the iPad and on regular browsers. My only complaint is that I lost the banner image of my Golden Retriever (my loyal hound Suzie). Bob assures me she will reappear soon. In the meantime, I would love to hear feedback about the new template. This is what it looks like on my iPhone:
Over at SearchStorage.com.AU they recently published an article entitled Six reasons to adopt storage virtualisation. You can find the article here. The six given reasons are:
- Storage virtualisation reduces complexity
- Storage virtualisation makes it easier to allocate storage
- Better disaster recovery
- Better tiered storage
- Virtual storage improves server virtualisation
- Virtual storage lets you take advantage of advanced virtualisation features
It's a well-written article and I agree with every point. But one could be forgiven for reading the article and thinking that either storage virtualisation is new, or that storage virtualisation is something you might consider AFTER doing server virtualisation. Neither of which is true.
IBM embraced storage virtualisation in June 2003 when we announced our SAN Volume Controller (the IBM SVC). I even found a CNET.com article from way back then. You can find it here (the image below is a screen capture of that CNET website).
IBM's SVC product has been enhanced repeatedly since 2003 with an enormous list of supported host servers and backend storage controllers. We have added new functions every year including Easy Tier, split cluster, VAAI, an enhanced GUI and a new form factor for the SVC code in the form of the Storwize V7000.
So let me give you a seventh reason for adopting storage virtualisation: A vendor who has shown genuine support for this technology. No vendor has embraced storage virtualisation with more enthusiasm than IBM. We have an industry leading solution with phenomenal SPC benchmarks, an enormous number of case studies and an architecture that does not lock you in. Indeed it is an architecture that can grow as you grow and that can be upgraded without disruption.
So please consider storage virtualisation from IBM, using either the SVC or the Storwize V7000. If you're in Australia, we have demo centers dotted around the country. Many of our Business Partners can also demonstrate IBM storage virtualisation using their own Storwize V7000s. If you're in Melbourne feel free to give me a call and schedule a time to drop into Southgate.
I think this picture speaks for itself: Three XIVs. Three cities. Three way iSCSI.
All the mirror connections were created in seconds using drag and drop in the XIV GUI.
I can now take a volume in one city and mirror it to another.
And yes.... IBM Australia now has a demo XIV in each of three major cities, so why not drop by and have a look?