Last week I talked about the differences between the XIV Generation 2 and XIV Gen3 by just looking at the rack. This week we open the front door to see if we can spot any more differences...
First up, you notice that it looks almost exactly the same... but appearances can be deceiving.
So what actually is different? From the front there are three obvious visible differences, two of which are not that interesting....
- The XIV Generation 2 has a storage grid that uses two 48 port Gigabit Ethernet network switches for interconnection between the modules (these are only visible from around the back of the rack). However, these switches get redundant power via an RPS-600 Redundant Power Supply (RPS), which sits at the front of the rack (directly above module 6).
The XIV Gen3, on the other hand, uses two 36 port InfiniBand switches that have redundant power supplies built in, so the Gen3 does not need the RPS and it is no longer there. But its spot has not remained empty...
- The XIV Generation 2 has a special server called a Maintenance Module located at the rear of the rack. You may notice the USB modem plugged into it. The XIV Gen3 instead uses an IBM System x3250 M3 mounted at the front of the rack. This server is used for maintenance, upgrades and remote access (if necessary, via modem). You can spot it here, directly below the nameplate, where the RPS used to be:
- If you look closely at the disks in a Gen3 you will notice they are marked as SAS drives, not SATA. This gives us a performance boost even though the rotation speed remains the same. If you want to see this close up yourself, check out this Kaon 3D model of the XIV Gen3.
This got me wondering why SAS drives, which have the same rotational speed and seek time as SATA drives, could perform better. The two main reasons are that SAS is full duplex and that SAS supports tagged command queuing. There is a great article regarding the differences here that references SPC testing that Seagate performed. A quote from the article:
SAS drives offer a significant improvement in performance over SATA drives in both throughput and IOPs primarily due to their full duplex, bi-directional I/O capabilities. Published Storage Performance Council (SPC) benchmark results demonstrate this feature with up to 64 percent improvement in the SPC-2 benchmark (based on multiple workload testing).
Wikipedia also talks about the differences here.
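To build some intuition for why command queuing matters, here is a toy sketch (not how real drive firmware works, and the LBA numbers are made up): reordering a queue of random requests into ascending LBA order cuts total head travel compared with servicing them first-come-first-served.

```python
# Toy illustration of command queuing: servicing queued requests in LBA
# order (an elevator-style reorder) reduces total head travel versus FIFO.
# Real NCQ/TCQ firmware is far more sophisticated than this sketch.
import random

random.seed(42)
queue = [random.randrange(1_000_000) for _ in range(32)]  # 32 queued LBAs

def head_travel(order, start=0):
    """Total distance the head moves servicing requests in the given order."""
    travel, pos = 0, start
    for lba in order:
        travel += abs(lba - pos)
        pos = lba
    return travel

fifo = head_travel(queue)
reordered = head_travel(sorted(queue))
print(f"FIFO travel: {fifo}, reordered travel: {reordered}")
print(f"reduction: {(1 - reordered / fifo) * 100:.0f}%")
```

The full duplex advantage is harder to show in a toy model, but the queuing half alone explains a good chunk of the gap.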
So is that it for differences between XIV Generation 2 and Gen3? Well visibly from the front... yes it is. The big changes are around the back and inside the modules, which I will cover in a future blog post.
In the meantime, check out my new Visio stencils for XIV. I have added three new stencils (which I am still working on). Check them out and let me know what you think. You will find them here. If and when I update them, you will get a notification so you can keep up to date.
Rob Jackard from the ATS Group does a great job amalgamating IBM storage site updates, so I am sharing them here with you. Here is my high level view:
AIX Users: Review the service dates for your technology level.
DS3500 users: Upgrade your firmware to 7.70.45.00 or 7.77.19.00.
DS8000 users: Take note of the limitation on resizing a space efficient repository. I dealt with this recently at a client by writing a script to delete the FlashCopy targets, delete and recreate the repositories, and then create the FlashCopy targets again.
SVC and Storwize V7000 users: Upgrade to 126.96.36.199 or 188.8.131.52. Be aware of the limitations on split cluster and Global Mirror intra-cluster mirroring.
(2011.06.28) Technical Bulletin: AIX 5.3 Support Lifecycle Notice:
NOTE-1: After October 1, 2011, IBM will no longer provide generally available fixes or interim fixes for new defects on systems at AIX 5300-11.
NOTE-2: End of Support for AIX 5.3 has been announced as 04/30/2012.
NOTE-3: IBM is no longer providing generally available fixes or interim fixes for new defects on systems at AIX 53-TL06, -TL07, -TL08, -TL09, -TL10.
(2011.06.28) Technical Bulletin: AIX 6.1 Support Lifecycle Notice:
NOTE-1: After October 1, 2011, IBM will no longer provide generally available fixes or interim fixes for new defects on systems at AIX 6100-04.
NOTE-2: Sometime after May 1, 2012, IBM will no longer provide generally available fixes or interim fixes for new defects on systems at AIX 6100-05.
NOTE-3: IBM is no longer providing generally available fixes or interim fixes for new defects on systems at AIX 61-TL00, -TL01, -TL02, -TL03.
(2011.06.09) SDDPCM fail_over path selection algorithm may fail even though non-preferred paths are available.
(2011.07.25) Command Line Interface and Script Commands Programming Guide- IBM DS3000 / DS4000 / DS5000.
(2011.07.23) IBM DS ESM/HDD firmware package 1.75.
NOTE: IBM DS4000 / DS5000 Hard Disk Drive and ESM Firmware Update Pack Version 1.75.
(2011.06.30) DS3950 / DS4000 / DS5000 Recommended Firmware Levels.
(2011.06.29) IBM DS Controller firmware version 184.108.40.206 code package.
NOTE: Applicable for the DS3500-all models, DS3700-all models, DS3950-all models, DS5020-all models.
(2011.06.13) IBM RETAIN tip# H203084- Potential data integrity issue involving cache memory Error Correcting Code (ECC) logic- IBM System Storage DS3500.
NOTE: IBM highly recommends that users upgrade all of the DS3500 controllers to either the 7.70.45.00 or 7.77.19.00 firmware version immediately.
(2011.06.11) IBM DS Controller firmware version 7.77.18.00 code package.
NOTE: Firmware for DS5100 and DS5300 all models.
(2011.06.10) IBM DS Controller firmware version 7.70.45.00 code package.
NOTE: Applicable for the DS3500 (DS3524/DS3512) subsystems.
(2011.06.10) IBM DS Controller firmware version 7.77.19.00 code package.
NOTE: Applicable for the DS3700 and DS3500 subsystems.
(2011.06.29) Space Efficient Flash Copy Repository size should not be changed for an existing repository.
NOTE: The code fix, to fail the chsestg command if the size of the repository is changed, is available for Release 4.3 (Bundle 220.127.116.11) and Release 5.1.5 (Bundle 18.104.22.168).
(2011.06.23) DS8800 Code Bundle Information.
(2011.07.21) Storwize V7000 performance data constraint violations.
(2011.07.15) Do Not Upgrade to V6.2.0.x if Using a Split-Cluster Configuration.
(2011.07.11) IBM SAN Volume Controller Code V22.214.171.124.
(2011.07.11) IBM Storwize V7000 Code V126.96.36.199.
(2011.07.11) SAN Volume Controller and Storwize V7000 Software Upgrade Test Utility V6.6.
(2011.07.11) Storwize V7000 and SAN Volume Controller Software Upgrades to V188.8.131.52 May Stall if Performance Monitoring Activities are Performed During the Upgrade Process.
NOTE: This issue has been fixed by APAR IC77000 in the V184.108.40.206 PTF release.
(2011.06.10) IBM Storage Tier Advisor Tool for SVC and Storwize V7000.
(2011.06.10) Disk Space Low Warning When Upgrading From SAN Volume Controller V5.1.0.x to V6.1.0.x or V6.2.0.x.
(2011.06.10) IBM Storwize V7000 Initialization Tool.
(2011.06.10) Storwize V7000 Concurrent Compatibility and Code Cross Reference.
(2011.06.10) IBM Storwize V7000 V6.2.0 – Installable Information Center and Guides.
(2011.06.10) IBM System Storage SAN Volume Controller and Storwize V7000 V6.2 – Command-Line Interface Guide.
(2011.06.10) IBM System Storage SAN Volume Controller and Storwize V7000 V6.2 – Troubleshooting Guide.
(2011.06.10) Incorrect Usage of Drive Upgrade Command May Cause Loss Of Access to Data.
NOTE: This issue was resolved by APAR IC74636 in the V220.127.116.11 release of the Storwize V7000 software.
(2011.06.10) Storwize V7000 and SAN Volume Controller FlashCopy Replication Operations Involving Volumes Greater Than 2 TB in Size Will Result in Incorrect Data Being Written to the FlashCopy Target Volume.
NOTE: This issue is fixed by APAR IC76806 in the 18.104.22.168 and 22.214.171.124 PTF releases.
(2011.06.09) SAN Volume Controller and Storwize V7000 Intra-Cluster Global Mirror Not Supported With V6.1.0.x or V6.2.0.x Code.
(2011.06.08) Potential Loss of Access for Split Clustered Systems During Site Failure.
(2011.06.08) IBM SAN Volume Controller Code V126.96.36.199.
(2011.06.08) IBM Storwize V7000 Code V188.8.131.52.
As a child I used to love the spot the difference cartoon in the Sunday paper.
You usually had 10 differences to circle... and I could never find the last one.
Look carefully at the two machines below. Can you spot the differences?
It's an XIV Generation 2 on the left and an XIV Gen3 on the right. For me it's the side panels that give it away (of course the Gen3 printed on the front panel helps).
The big change is the rack that the product uses:
The Generation 2 IBM XIV uses an APC AR3100 (also called a NetShelter).
The Gen3 IBM XIV uses an IBM T42 rack.
So why the change?
Three good reasons:
- Using the T42 lets us offer an optional Ruggedized Rack Feature, providing additional hardware that reinforces the rack and anchors it to the floor. This hardware is designed primarily for use in locations where earthquakes are a concern. As you may be aware, there have been some major earthquakes around the world recently (with tragic results). Clearly our clients in earthquake-prone areas need us to provide a model that can be hardened for use in earthquake zones.
- Using the IBM T42 rack lets us offer an optional IBM Rear Door Heat Exchanger, which is an effective way to assist your air conditioning system in keeping your datacenter cool. It removes heat generated by the modules in the XIV before the heat enters the room. Inside the door of the heat exchanger are sealed tubes filled with circulating chilled water. Its unique design uses standard fittings and couplings and, because there are no moving or electrical parts, helps increase reliability. It can be opened like any rear cover, so serviceability of an XIV Gen3 fitted with a heat exchanger is as easy as on the standard air-cooled version.
- Using the T42 lets us offer a rack which matches our standard rack offering. It's a sturdier rack and travels far better over both short and long distances. To put it simply: it's a more substantial rack.
One nice feature that both products offer is feature code 0200 (weight reduction for shipping). When ordered, it tells the plant to ship the XIV in a weight-reduced format. For Generation 2 this means the rack that IBM ships will weigh around 300 kg (unpacked from the shipping crate). The rest of the machine (the modules and the UPSs) is shipped in separate boxes. The XIV Gen3 will weigh more as less hardware is removed, although I am still confirming what that will be. The advantage is that you can use lower-rated goods lifts and move the XIV across floors that are not rated for the maximum weight. You just have to ensure that the planned location of the XIV can support the final weight. And the really nice thing? This feature is available at no extra cost.
(edited 27/7/11 to clarify feature code 0200 will be different for XIV Gen3).
Tiny toast or giant hand? It's not an optical illusion.
When IBM offered 2 TB drives on the XIV, I thought I was seeing an illusion: the overall power consumption had dropped (not risen). Guess what? With XIV Gen3, power consumption drops yet again.
Using worst case power consumption numbers I can see the following maximums:
180 drive XIV Generation 2 with 1 TB drives (79 TB usable): 8.4 kVA
180 drive XIV Generation 2 with 2 TB drives (161 TB usable): 7.1 kVA
180 drive XIV Gen3 with 2 TB drives (161 TB usable): 6.7 kVA
So with every update to the product, power consumption has kept dropping. Let's compare the original Generation 2 to the XIV Gen3:
- Usable capacity up by 103%
- Power consumption down by 20%
- Heat output down by 20%
- Noise level down by 33%
- Performance up by up to 400%
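As a quick sanity check, the capacity and power deltas quoted above fall straight out of the raw figures:

```python
# Checking the Generation 2 (1 TB drives) vs Gen3 deltas quoted above,
# using the worst-case figures from this post.
gen2_tb, gen3_tb = 79, 161        # usable capacity, TB
gen2_kva, gen3_kva = 8.4, 6.7     # worst-case power draw, kVA

capacity_up = (gen3_tb - gen2_tb) / gen2_tb * 100
power_down = (gen2_kva - gen3_kva) / gen2_kva * 100
print(f"capacity up {capacity_up:.1f}%, power down {power_down:.1f}%")
```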
How about this as a measure: let's compare the Microsoft Exchange ESRP report for XIV Generation 2 found here with the newly released report for XIV Gen3 found here. While the ESRP program is not a benchmarking program, the results are truly impressive.
For more information on Power Consumption and the cost of running XIV, check out these white papers:
Driving down power consumption with the IBM XIV Storage System
Comparing Cost Structures for IBM XIV and EMC V-Max Systems
If you're an existing XIV user and you're interested in measuring your current power consumption, check out my tutorial here. If you want the spreadsheet shown in the video, drop me a comment (your email address will appear in my comments dashboard but will not be visible to anyone else).
(image care of www.interestingfacts.org)
If you have Brocade fibre channel switches in your SAN, you need to be aware of the method which Brocade use to manage firmware releases. All 4 and 8 Gbps Brocade SAN switches use a Linux-based firmware which Brocade call Fabric Operating System, or FOS. Updates to this firmware are released in families. This started with version 4, then version 5 and then version 6. Each family has had a series of updates. Version 5.0.x went to 5.1.x, 5.2.x and 5.3.x. Version 6.0.x went to 6.1.x, 6.2.x, 6.3.x and currently 6.4.x.
The good news is that you can non-disruptively update firmware on Brocade switches:
So you can move to higher releases without an outage (note there may be exceptions to this, always read your release notes to be sure).
However, you need to be aware of a rule regarding the from and to versions. Since FOS 6.0.0, Brocade have had a one-release migration policy. This allows more reliable and robust migrations for customers. By having fewer major changes in internal databases, configurations, and subsystems, the system is able to perform the upgrade more efficiently, taking less time and ensuring a truly seamless and non-disruptive process for the fabric. The one-release migration policy also reduces the large number of upgrade/downgrade permutations that must be tested, allowing Brocade to spend more effort ensuring the supported migration paths are thoroughly and completely verified.
Disruptive upgrades are allowed, but only for a two-level migration (for instance from 6.2 to 6.4, skipping 6.3).
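A toy sketch of what the one-release policy means for planning: each non-disruptive hop moves you one release family step, so the number of change windows grows with how far behind you are. The release names here are illustrative levels from this era; the authoritative from/to pairs live in the release notes.

```python
# Sketch of planning a FOS upgrade path under a one-release migration
# policy: non-disruptive hops move one step along the release chain.
# The release list is illustrative; always confirm against release notes.
RELEASES = ["4.1.2f", "4.4.0e", "5.1.1b", "5.3.2c",
            "6.0.1a", "6.1.2c", "6.2.2b", "6.3.1a", "6.4.1b"]

def upgrade_path(current, target):
    """Return every level visited getting from `current` to `target`."""
    i, j = RELEASES.index(current), RELEASES.index(target)
    if i > j:
        raise ValueError("downgrades not handled in this sketch")
    return RELEASES[i:j + 1]

path = upgrade_path("4.1.2f", "6.4.1b")
print(" -> ".join(path))
print(f"{len(path) - 1} upgrade hops needed")
```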
So why should you care?
Well, your upgrade philosophy may be: if it ain't broke, don't fix it. Or you may have the policy: we fix on fail; apart from that, we don't update firmware. Much as I can understand the attraction of this, when you finally do perform an update, you may find yourself having to do many upgrades, or kangaroo hops as I call them.
Let's document some possible kangaroo hops from an old release to what is currently the newest release:
4.1.2f → 4.4.0e → 5.1.1b → 5.3.2c → 6.0.1a → 6.1.2c → 6.2.2b → 6.3.1a → 6.4.1b
As you can see from the steps above, you may have a very long change window if you are choosing to not perform updates on a regular basis. There are also lots of caveats and restrictions based on the hardware of the switch you are running. It is very important you consult the release notes that can be found at the following links:
IBM SAN b-type Firmware Version 4.x Release Notes
IBM SAN b-type Firmware Version 5.x Release Notes
IBM SAN b-type Firmware Version 6.x Release Notes
If you're looking for a recommended version to install, the Version 6 release notes page above gives advice on this. Currently it says: IBM recommends that Open System customers that currently use FOS 6.1 or earlier limit migration to FOS 6.2.2b, 6.3.0d, 6.3.1a, or 6.4.0c or later only.
Please note that the release notes published on IBM's website are Brocade documents. While this is good (since it means they come straight from the manufacturer), you need to decode which Brocade hardware model is which IBM machine type. The Version 6 URL above also contains a product cross-reference which lets you convert Brocade product names to IBM product names. I am also working on a post which will help you with this, so watch this space.
Edit July 19, 2011: the original post suggested 6.4.0a as a go-to level; this has since been removed. It has also been pointed out to me that some blade-type switches may not be capable of hot code load (HCL). You need to check your vendor release notes to be certain.
Just a quick post as I am leaving Singapore to return to Melbourne, I thought I would share two more photos with you.
No trip to Singapore is complete without a visit to the Merlion, the mythical creature who acts as a mascot for Singapore. The fish body represents Singapore's origin as a fishing village when it was called Temasek, which means "sea town" in Javanese. The lion head represents Singapore's original name — Singapura — meaning "lion city" or "kota singa" (thanks to Wikipedia for the text).
Here is a night view across to the Marina Bay Sands, the integrated resort fronting Marina Bay in Singapore. Developed by Las Vegas Sands, it is billed as the world's most expensive standalone casino property at S$8 billion, including cost of the prime land. The remarkable building on the left hand side is the ArtScience Museum. The architecture is said to be a form reminiscent of a lotus flower (again, thanks to Wikipedia for the information).
I would love to return to Singapore soon in the form of a tourist, it is an amazing city full of vibrant energy. With Singapore National Day coming up on August 9 and the Formula One in September, there will certainly be plenty for visitors to see and do.
If you want to see more of my photographs, feel free to visit my Flickr account.
I am in Singapore this week running a teach the teacher seminar on Storwize V7000. We are creating more instructors as demand for courses on the product continues to increase.
I am fortunate to be staying at the Marina Bay Sands Resort, which is one of the most mind blowing facilities I have ever seen. Check out this view of the Infinity Pool from the Skydeck up on the 57th floor.
Out my hotel window they are building a huge new facility known as Gardens by the Bay, but frankly I think it looks more like a spaceport!
According to Wikipedia these are Supertrees: tree-like structures that dominate the Gardens landscape with heights that range between 25 and 50 metres. They are vertical gardens that perform a multitude of functions, which include planting, shading and working as environmental engines for the gardens.
But doesn't this look like two huge crashed spaceships?
Actually they will be giant conservatories, again according to Wikipedia they are the Flower Dome and the Cloud Forest.
They certainly know how to think big in Singapore!
XIV Gen 3 modules are built on a new generation of Intel microprocessors based on the Nehalem micro-architecture. Nehalem is the most profound architecture change that Intel has introduced in the 21st century. Some of the key changes and their benefits are:
- Integrated memory controller: The memory controller now sits on the same silicon die as the processors. It runs at the same clock-speed as the processors instead of at the lower speed of an external front-side bus. This dramatically improves memory performance and therefore overall system performance.
- No need for buffered memory: Previously, buffered memory was required to improve the performance of the memory sub-system. Buffered memory is relatively expensive and energy hungry. With the faster Nehalem integrated memory controller, the system can deliver improved performance without needing buffered memory, saving cost as well as energy. XIV Gen 3 will be faster and cooler at the same time using unbuffered DDR3 RAM. And since the memory is cheaper, we can put more in.
- Increased memory capacity: Nehalem supports more memory chips at higher speeds. In XIV Gen 3 this translates into a 50 to 200% increase in system cache, significantly lifting the performance headroom of an already stellar performer.
- No more front-side bus: Memory, second CPU package and peripherals no longer have to share and wait on a single bus to communicate. The connections are now direct or switched, enabling increased parallelism and the ability to do more work simultaneously.
- PCI Express Generation 2: The I/O sub-system doubles in speed with the introduction of PCI Gen-2. This enables faster network and I/O adapters for XIV Gen 3:
- 8 Gbps fibre-channel host connections.
- More iSCSI host connections (including at the entry configuration of 6 modules)
- Multi-channel, low latency InfiniBand as the inter-module connection.
- A slot for solid state disk (SSD).
- Better systems management instrumentation: The system supports increased monitors for sub-systems for more sophisticated self diagnostics and healing. Remote management capability has also been improved.
Furthermore, the new motherboards have additional expansion capacity (more processors, memory and I/O) that can be utilized to deliver future improvements in performance and increased software functionality.
XIV Gen 3 is not the first storage sub-system to adopt the Nehalem architecture. Some of our competitors (EMC and NetApp for example) have already done so with their dual-controller arrays. XIV Gen 3 takes the Nehalem architecture advantage forward, not twice, but six to fifteen times.
Many thanks to Patrick Lee for writing up this great summation.
Today IBM is announcing a new member of the XIV family, which we are calling XIV Gen3. I thought I would give a brief history of how we got here before I get too carried away with details.
What was Generation 1 of the XIV?
In 2002 an Israeli startup began work on a revolutionary new grid storage architecture. They devoted three years to developing this unique architecture that they called XIV.
They delivered their first system to a customer in 2005. Their product was called Nextra (does it look familiar?).
What was Generation 2 of the XIV?
In December 2007, the IBM Corporation acquired XIV, renaming the product the IBM XIV Storage System. The first IBM version of the product was launched publicly on September 8, 2008. Unofficially within IBM we refer to this as Generation 2 of the XIV.
The differences between Gen1 and Gen2 were not architectural; they were mainly physical. We introduced new disks, new controllers, new interconnects, improved management and additional software functions.
As anyone who has read my blog knows, I have been working on the Generation 2 XIV since the day IBM began planning to release it as an IBM product. So it is very exciting to be able to share with you that we are now releasing Generation 3 of the IBM XIV Storage System.
What is Generation 3 of the XIV?
Generation 3 of the XIV is a new member of the XIV family that will be an alternative to the Generation 2 XIVs we currently offer. It does not change the fundamental architecture; that remains the same. What it does do is bring significant updates to almost every part of the XIV, including:
- Introducing InfiniBand interconnections between the modules.
- Upgrading the modules to add 2.4 GHz quad core Nehalem CPUs, new DDR3 RAM and PCI Gen 2 (using 8x slots that can operate at 40 Gbps).
- Upgrading the host HBAs to operate at 8 Gbps.
- Upgrading the SAS adapter.
- Upgrading the disks to native SAS.
- A new rack.
- A new dedicated SSD slot (per module) for future SSD upgrades.
- Enhancements to the GUI plus a native Mac OS version.
I will be blogging about each of these changes over the coming days and weeks as we move to general availability date, so watch this space. In the meantime, why not visit the official XIV page here and check out the ITG Report linked there.
I have received this question several times, so it's clearly something people are interested in.
The Storwize V7000 has two controllers known as node canisters. It's an active/active storage controller, in that both node canisters are processing I/O at any time and any volume can be happily accessed via either node canister.
The question then gets asked: what happens if a node canister fails and can I test this? The answer to the question of failure is that the second node canister will handle all the I/O on its own. Your host multipathing driver will switch to the remaining paths and life will go on. We know this works because doing a firmware upgrade takes one node canister offline at a time, so if you have already done a firmware update, then you have already tested node canister fail over. But what if you want to test this discretely? There are four ways:
- Walk up to the machine and physically pull out a node canister. This is a bit extreme and is NOT recommended.
- Power off a node canister using the CLI (using the satask stopnode command). This will work for the purposes of testing node failure, but the only way to power the node canister back on is to pull it out and reinsert it. This is again a bit extreme and is not recommended. This is also different to an SVC, since each SVC node has its own power on/off button.
- Use the CLI to remove one node from the I/O group (using the svctask rmnode command). This works on an SVC because the nodes are physically separate. On a Storwize V7000 the nodes live in the same enclosure and a candidate node will immediately be added back to the cluster, so as a test this is not that helpful.
- Place one node into service state and leave it there while you check all your hosts. This is my recommended method.
First up, this test assumes there is NOTHING else wrong with your Storwize V7000. We are not testing multiple failures here. You need to confirm that the Recommended Actions panel, shown below, contains no items. If there are errors listed, fix them first.
Once we are certain our Storwize V7000 is clean and ready for test, we need to connect via the Service Assistant Web GUI. If you have not set up access to the service assistant, please read this blog post first.
So what's the process?
Firstly, log on to the service assistant on node 1 and place node 2 into service state. I chose node 2 because normally node 1 is the configuration node (the node that owns the cluster IP address). You need to confirm you're connected to node 1 (check at top right), select node 2 (from the Change Node menu), then choose Enter Service State from the drop down and hit GO.
You will get this message confirming you're placing node 2 into service state. If it looks correct, select OK.
The GUI will pause on this screen for a short period. Wait for the OK button to un-grey.
You will eventually get to this with Node 1 Active and Node 2 in Service.
Node 2 is now offline. Go and confirm that everything is working as desired on your hosts (half your paths will be offline but your hosts should still be able to access the Storwize V7000 via the other node canister).
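On a Linux host using device-mapper multipath, `multipath -ll` shows exactly which paths dropped. As a rough sketch of what to look for (the sample output, device names and WWID below are made up; on a real host you would feed in the live command output), counting healthy versus failed paths looks like this:

```python
# Counting active vs failed paths from multipath -ll style output.
# The sample is a hypothetical IBM 2145 volume with one node's paths down;
# on a real host, substitute the output of `multipath -ll`.
sample = """\
mpatha (360050768028100d7c800000000000012) dm-2 IBM,2145
size=100G features=1 queue_if_no_path hwhandler=0 wp=rw
|-+- policy=round-robin 0 prio=50 status=active
| |- 3:0:0:1 sdb 8:16 active ready running
| `- 4:0:0:1 sdd 8:48 active ready running
`-+- policy=round-robin 0 prio=10 status=enabled
  |- 3:0:1:1 sdc 8:32 failed faulty running
  `- 4:0:1:1 sde 8:64 failed faulty running
"""

active = sum("active ready" in line for line in sample.splitlines())
failed = sum("failed faulty" in line for line in sample.splitlines())
print(f"active paths: {active}, failed paths: {failed}")
```

As long as each volume still reports active paths through the surviving node canister, your hosts should ride through the test.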
When your host checking is complete, you can use the same drop down to Exit Service State on node 2 and select GO.
You will get a pop up window to confirm your selection. If the window looks correct, select OK.
You will get the following panel. You will need to wait for the OK button to become available (to un-grey).
Provided both nodes now show as Active, your test is now complete.
I think this picture speaks for itself: Three XIVs. Three cities. Three way iSCSI.
All the mirror connections were created in seconds using drag and drop in the XIV GUI.
I can now take a volume in one city and mirror it to another.
And yes.... IBM Australia now has a demo XIV in each of three major cities, so why not drop by and have a look?
Over at SearchStorage.com.AU they recently published an article entitled Six reasons to adopt storage virtualisation. You can find the article here. The six given reasons are:
- Storage virtualisation reduces complexity
- Storage virtualisation makes it easier to allocate storage
- Better disaster recovery
- Better tiered storage
- Virtual storage improves server virtualisation
- Virtual storage lets you take advantage of advanced virtualisation features
It's a well-written article and I agree with every point. But one could be forgiven for reading the article and thinking either that storage virtualisation is new, or that storage virtualisation is something you might consider AFTER doing server virtualisation. Neither is true.
IBM embraced storage virtualisation in June 2003 when we announced our SAN Volume Controller (the IBM SVC). I even found a CNET.com article from way back then. You can find it here (the image below is a screen capture of that CNET website).
IBM's SVC product has been enhanced repeatedly since 2003 with an enormous list of supported host servers and backend storage controllers. We have added new functions every year including Easy Tier, split cluster, VAAI, an enhanced GUI and a new form factor for the SVC code in the form of the Storwize V7000.
So let me give you a seventh reason for adopting storage virtualisation: a vendor who has shown genuine support for this technology. No vendor has embraced storage virtualisation with more enthusiasm than IBM. We have an industry leading solution with phenomenal SPC benchmarks, an enormous number of case studies and an architecture that does not lock you in. Indeed it is an architecture that can grow as you grow and that can be upgraded without disruption.
So please consider storage virtualisation from IBM, using either the SVC or the Storwize V7000. If you're in Australia, we have demo centers dotted around the country. Many of our Business Partners can also demonstrate IBM storage virtualisation using their own Storwize V7000s. If you're in Melbourne, feel free to give me a call and schedule a time to drop into Southgate.
Bob Leah is one of our leading lights in the developerWorks team. His blog (found here) is a great resource for Web designers. He recently created a new set of templates to enable a mobile page for developerWorks blogs. You can read his article about the new template here.
This morning I boldly went and installed the new templates and so far I think it looks fantastic, not only on the iPhone, but also the iPad and on regular browsers. My only complaint is that I lost the banner image of my Golden Retriever (my loyal hound Suzie). Bob assures me she will reappear soon. In the meantime, I would love to hear feedback about the new template. This is what it looks like on my iPhone:
Tivoli Pulse is coming to Melbourne July 27 and 28, 2011 at the Crown Promenade in Melbourne.
How many chances do you get to listen to the following speakers in one place?
- Nigel Phair, Director, Centre for Internet Safety, University of Canberra
- Steve Van Aperen, Human Lie Detector and Director of SVA Training and Australian Polygraph Services
- Laura Guio, Vice President, Storage Sales, STG Growth Markets, IBM
- Joao Perez, Vice President of Worldwide Tivoli Software Sales, IBM USA
- Jamie Thomas, Vice President Tivoli Strategy and Development, IBM USA
- Glenn Wightwick, Director, IBM R&D Australia
There are 12 customer case studies, presented mainly by the customers themselves. There are over 70 sessions in eight streams, with expert keynote speakers, case studies and presentations. Pulse 2011 provides the tools to help advance your infrastructure goals.
Registration is free, so what excuse do you have? Find out more here. You can enroll here. The agenda front page is here. The detailed agenda is here.
I will be presenting on day two, talking about Virtualization and Storwize V7000, so maybe I will see you there!
I have an admission: I am a bit of an Apple fanboy. Well actually not a full on Apple fanboy, I have an iPhone and an iPad but I don't have a MacBook (although if IBM start offering a cash payment instead of giving laptops to mobile employees, that might change). But not everything is perfect in the land of Apple. Let me give you an example, one that I routinely find people are not aware of (apologies if you learnt all of this months ago).
The picture below appears to show three identical Apple charger packs (with Australian pins). You may have a similar collection. But are they all identical? Sadly not.
Only those with very good eyes can spot the difference by reading the rather pale decal on the bottom section of each charger. The text is so small and faint, I struggled to take a decent picture, but here is my sad attempt for one of them (they are all different):
So how are my three power adapters different?
- The first is marked as a 10 Watt USB Power Adapter (it came with an iPad). Its output is amusingly marked as 5.1 Volts DC at 2.1 Amps, which suggests 10.7 Watts.
- The second one is marked as a 5 Watt USB Power Adapter (it came with my iPhone). Its output is marked as 5 Volts DC at 1 Amp, which is indeed 5 Watts.
- The third is marked as an iPod USB Power Adapter. No stated wattage, but its output is marked as 5 Volts DC at 1 Amp, which again suggests 5 Watts. So perhaps my 5 Watt adapter and my iPod adapter are actually the same.
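The wattage figures are just volts times amps; a quick check of the decal numbers above:

```python
# Power (Watts) = Volts x Amps, using the decal figures from the three
# adapters described above.
adapters = {
    "iPad 10W":  (5.1, 2.1),  # decal: 5.1 V DC at 2.1 A
    "iPhone 5W": (5.0, 1.0),  # decal: 5 V DC at 1 A
    "iPod":      (5.0, 1.0),  # decal: 5 V DC at 1 A, no stated wattage
}
for name, (volts, amps) in adapters.items():
    print(f"{name}: {volts * amps:.2f} W")
```

Which is why the "10 Watt" adapter actually works out to a shade over 10.7 Watts.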
The big question that comes up: Are they interchangeable? The answer: Yes but with caveats.
If you have an iPad you should use the 10W adapter. If you use the 5W adapter it will still charge, but at a much slower rate. Apple confirm this here, where they state:
iPad will also charge, although more slowly, when attached to an iPhone Power Adapter (by which they mean a 5 Watt adapter).
If you have an iPhone or an iPod can you use the 10W adapter? The answer is yes! It will recharge with no ill effects. Apple confirm this here, where they state:
While designed for use with the iPad, you can use the iPad 10W USB Power Adapter to charge all iPhone and iPod models.
So I am putting my 5 Watt and iPod adapters in the cupboard and using the 10 Watt adapter exclusively. If you have an iPad and find it's recharging slowly, you may be using an older 5 Watt adapter (but you may need a magnifying glass to spot the difference!).
My suggestion to Apple? A few more cents worth of ink please, to make things more obvious.
To close, on my first Apple focused blog entry, let me pose a question:
Will it blend?