On the news today, they mentioned it was "Happy Pi Day". Today is the 14th day of the 3rd month, and "pi" is about 3.14159, the ratio of the circumference of a circle to its diameter. So, in Tucson it is celebrated on 3/14, at 1:59pm MST.
The ratio has a lot to do with storage.
Tape wraps around a hub. Tape is thin, but not infinitely so, so wrapping hundreds of meters of tape onto a spool changes the spool's diameter. This affects the rotational velocity needed to keep the tape media moving at a consistent linear meters-per-second as the diameter changes while you wind down from a full spool toward the hub. IBM has variable-speed motors and other clever technologies to handle this adjustment.
Disks spin at consistent speeds, but tracks on the outside edge travel faster across the head than the inside tracks. Currently, the top rotational speed for disk drives is 15,000 revolutions per minute (RPM). As faster rotational speeds are investigated, researchers find they need to make the diameters smaller to compensate.
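Both effects follow from the same relationship: linear speed equals pi times diameter times rotational speed. A quick sketch (the tape speed, spool diameters, and track radii below are illustrative assumptions, not published specs):

```python
import math

def rpm_for_linear_speed(linear_m_per_s, diameter_m):
    """RPM a tape motor must spin a spool of a given diameter
    to keep the media moving at a constant linear speed."""
    return linear_m_per_s / (math.pi * diameter_m) * 60

def head_speed_m_per_s(rpm, radius_m):
    """Linear speed of a disk track under the head at a fixed RPM."""
    return 2 * math.pi * radius_m * rpm / 60

# Tape: as the spool winds down from 100 mm to a 50 mm hub, the motor
# must double its RPM to hold the same 6 m/s tape speed.
full = rpm_for_linear_speed(6.0, 0.100)
hub = rpm_for_linear_speed(6.0, 0.050)

# Disk: at 15,000 RPM, an outer track at 45 mm radius passes the head
# more than twice as fast as an inner track at 20 mm radius.
outer = head_speed_m_per_s(15000, 0.045)
inner = head_speed_m_per_s(15000, 0.020)
```

Halving the diameter exactly doubles the required RPM, which is why the motor has to keep adjusting as the spool winds down.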
The diameters of disks were based on "U", the unit height of standard 19" racks. A "U" is 1.75 inches, and standard floppy diskettes were 5.25 inch (3U) and 3.5 inch (2U). For those who have a difficult time remembering how many inches a "U" is, it is the height of a standard two-by-four (2x4) piece of lumber.
The value of "pi" has been calculated to over a billion significant digits. Here is a cute applet to use if you ever need the value to any level of accuracy.
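If you'd rather compute the digits yourself than trust an applet, Machin's 1706 formula, pi/4 = 4·arctan(1/5) − arctan(1/239), evaluated with fixed-point integer arithmetic, yields as many digits as you have patience for. A sketch, not tuned for billions of digits:

```python
def pi_digits(n):
    """Return pi * 10**n as an integer, i.e. pi to n decimal places,
    via Machin's formula with fixed-point integer arithmetic."""
    guard = 10                       # extra digits to absorb rounding error
    unity = 10 ** (n + guard)

    def arctan_inv(x):
        """arctan(1/x) in fixed point, by its alternating Taylor series."""
        total = term = unity // x
        x2, divisor, sign = x * x, 1, 1
        while term:
            term //= x2
            divisor += 2
            sign = -sign
            total += sign * (term // divisor)
        return total

    pi = 4 * (4 * arctan_inv(5) - arctan_inv(239))
    return pi // 10 ** guard

print(pi_digits(20))   # 314159265358979323846
```

The guard digits absorb the truncation error of the integer divisions, so the printed digits are exact.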
technorati tags: IBM, disk, tape, pi, Pi Day, U, RPM
Yesterday, most of the USA moved its clocks forward an hour. Arizona and Hawaii don't bother, as there is plenty of daylight in both states. While it may seem that Arizonans are not "affected" by Daylight Saving Time (DST), we are, because we have to deal with the time zone offsets with those we talk to in other states. (Note: it is SAVING, not SAVINGS; many people mistakenly say "Daylight Savings Time", which is incorrect.)
Year round, Arizona is on Mountain Standard Time (MST), which is GMT-7. Figuring out what time it is in Arizona can be remembered by a simple mnemonic:
- In the winter time, Utah, Colorado, New Mexico, and Arizona are all on MST, so the best American ski resorts are all in the same time zone. People who hop from one ski resort to another by helicopter don't have to reset their watches as they move into or out of Arizona.
- In the summer time, Arizonans head to San Diego, Los Angeles or other parts of California, where it is not so hot. California is on PDT, which is the same as MST. People who hop from Arizona wineries and vineyards to those in California and Oregon can cross the Arizona-California border without having to reset their watches.
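The mnemonic checks out against the IANA time zone database. A quick verification using Python's zoneinfo module (Python 3.9 or later):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def utc_offset_hours(dt, zone):
    """UTC offset, in hours, for a naive local datetime in a given zone."""
    return dt.replace(tzinfo=ZoneInfo(zone)).utcoffset().total_seconds() / 3600

winter = datetime(2007, 1, 15, 12, 0)
summer = datetime(2007, 7, 15, 12, 0)

# Winter: Phoenix matches the Mountain-zone ski states (all GMT-7)...
assert utc_offset_hours(winter, "America/Phoenix") == -7
assert utc_offset_hours(winter, "America/Denver") == -7

# ...but in summer, Phoenix matches California instead (PDT is also GMT-7),
# while Denver springs forward to MDT (GMT-6).
assert utc_offset_hours(summer, "America/Phoenix") == -7
assert utc_offset_hours(summer, "America/Los_Angeles") == -7
assert utc_offset_hours(summer, "America/Denver") == -6
```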
Those in Second Life may have noticed that "Second Life time" (SL time) shifted from PST to PDT. That is because their servers reside in San Francisco, California.
technorati tags: IBM, Daylight Saving Time, DST, Second Life, Arizona, Hawaii, PDT, MST, GMT
The blogosphere has quieted down a bit over the two papers on MTBF estimates for Disk Drive Modules (DDM). One article on SearchStorage.com by Arun Taneja asks Is RAID passé?
Disk capacity is growing at a faster rate than DDM reliability. During the hours to rebuild a DDM, companies are at risk of additional failures that could require recovery from a copy, or result in data loss, depending on how well your Business Continuity (BC) plan is written and followed.
I'll discuss two comments in particular.
Joerg Hallbauer felt I did not address all the issues raised:
... The problem with that is that it's the DISK ARRAY that determines when a drive has failed and starts the rebuild process. That IS under the control of IBM, specifically the controller. But more importantly, it affects my risk of data loss.
As I see it, my risk of data loss with RAID-5 is influenced by two main factors. 1 - The drive replacement rate and 2 - The rebuild time (which to a great extent is a function of the drive size) both of which IBM has some control over.
So, I think that the question in my mind is, what's the tipping point? Where does the risk of using RAID-5 protection exceed what I'm willing to accept, and I need to move to some other protection mechanism like RAID-6? Is it when the rebuild times exceed 12 hours? 24 hours? 48 hours?
Also, I wonder why IBM isn't publishing some information to help me make these kinds of decisions?
Bill Todd felt I was not technical enough:
Oh, dear - while Tony doesn’t seem to be parrying vigorously (as Seagate, Hitachi, and Chunk were doing), his contribution sounds more like IBM marketing than the kind of detailed, technical response one might have hoped for
... well, he *is* a manager, and a marketing one at that, so perhaps we shouldn’t expect more).
Both are fair comments. Disk arrays do run microcode to assist or perform the RAID function, detect failures, and start the rebuild process, so clever designs that support spare disks, process the rebuild quickly, and so on, can differentiate one vendor's offering from another.
On the issue of what IBM provides to help its clients make the right decisions for their environments, Jon William Toigo at DrunkenData points his readers to IBM's Business Continuity Self-Assessment tool. In normal data center conditions, DDMs will fail, and a Business Continuity plan should be written and developed to handle this fact. Using 2-site and 3-site mirroring, complemented with versions of tape backups, can help address some of these concerns and mitigate some of the risks involved with using disk systems.
For those who want a more technical answer, IBM has just published a series of IBM Redbooks.
Each client's situation is different, so no simple answer is possible. However, IBM does have a lot of experience in this area, and would be glad to help you write or update your existing Business Continuity plan.
technorati tags: IBM, disk, MTBF, estimates, papers, Arun Taneja, Jon William Toigo, StorageMojo, Business Continuity, Redbooks
For those interested in performance, my IBM colleague Elisabeth Stahl has started up her own blog on the subject, called Benchmarking and Systems Performance. Check it out!
technorati tags: IBM, systems, performance, TPC-C, benchmarking, blog, Elisabeth Stahl
Tuesday is always good for announcements. Today, Gartner, Inc. announced that IBM has overtaken HP in its climb to the top. I'll quote directly from today's press release:
STAMFORD, Conn., March 6, 2007 — Worldwide external controller-based (ECB) disk storage revenue totaled $15.2 billion in 2006, a 4.1 percent increase over 2005 revenue of $14.6 billion, according to Gartner, Inc. IBM overtook Hewlett-Packard for the No. 2 position in 2006 (see Table 1). IBM’s worldwide ECB market share increased to 15.8 percent, while HP’s market share dropped to 13.1 percent.
IBM beat HP both in 4Q06 and for the 2006 full year. You can read more about it in the Gartner Dataquest report “Market Share: Disk Array Storage, All Regions, All Countries, 1Q05-4Q06" on their website. (Note: non-IBMers might need an account with Gartner to access this.)
The focus was on external controller-based disk: not external controller-less SCSI/SAS disk, not disk arrays posing as virtual tape libraries, nor any disk sold inside HP, Sun, IBM or Dell servers. This allows comparison with disk-only vendors such as EMC and HDS. The revenues reflect hardware only, including hardware-related parts of financial leases and managed services. Revenues from optional priced software features such as multi-pathing drivers, management software, or advanced copy services were excluded. I discussed these types of analyst reports in a blog post last September: Space Race Heats Up.
These marketshare numbers are based on revenues, not units or terabytes. When a box gets sold, the revenue is counted toward the vendor that sold it, not the manufacturer that built it. In this latest report:
- When Dell sells an EMC box, it gets counted as Dell. When Fujitsu Siemens sells an EMC box, it gets counted as "Other".
- When HP sells an HDS box, it gets counted as HP. When Sun sells the HDS box, it gets counted as Sun.
- When IBM sells its System Storage N series (from the OEM agreement with NetApp), it gets counted as IBM. Both IBM and NetApp experienced growth in the NAS/unified storage arena.
It's still cold here in the Washington DC area, but at least good news like this helps warm me up!
technorati tags: IBM, disk, external controller-based, ECB, Gartner, 4Q06, 2006, revenue, marketshare, HP, EMC, Sun, Dell, NetApp, HDS, NAS
Well, this week I am in Maryland, just outside of Washington DC. It's a bit cold here.
Robin Harris over at StorageMojo put out this Open Letter to Seagate, Hitachi GST, EMC, HP, NetApp, IBM and Sun about the results of two academic papers, one from Google, and another from Carnegie Mellon University (CMU). The papers imply that the disk drive module (DDM) manufacturers have perhaps misrepresented their reliability estimates, and the letter asks the major vendors to respond. So far, NetApp and EMC have responded.
I will not bother to re-iterate or repeat what others have said already, but will make just a few points. Robin, you are free to consider this "my" official response if you'd like to post it on your blog, or point to mine, whichever is easier for you. Given that IBM no longer manufactures the DDMs we use inside our disk systems, there may not be any reason for a more formal response.
- Coke and Pepsi buy sugar, Nutrasweet and Splenda from the same sources
Somehow, this doesn't surprise anyone. Coke and Pepsi don't own their own sugar cane fields, and even their bottlers are separate companies. Their job is to assemble the components using super-secret recipes to make something that tastes good.
IBM, EMC and NetApp don't make the DDMs that are mentioned in either academic study. Different IBM storage systems use one or more of the following DDM suppliers:
- Seagate (including Maxtor, which it acquired)
- Hitachi Global Storage Technologies, HGST (former IBM division sold off to Hitachi)
In the past, corporations like IBM were very "vertically integrated", making every component of every system delivered. IBM was the first to bring disk systems to market, and led the major enhancements that exist in nearly all disk drives manufactured today. Today, however, our value-add is to take standard components, and use our super-secret recipe to make something that provides unique value to the marketplace. Not surprisingly, EMC, HP, Sun and NetApp also don't make their own DDMs. Hitachi is perhaps the last major disk systems vendor that also has a DDM manufacturing division.
So, my point is that disk systems are the next layer up. Everyone knows that individual components fail. Unlike CPUs or memory, disks have moving parts, so you would expect them to fail more often than just "chips".
If you don't feel the MTBF or AFR estimates posted by these suppliers are valid, go after them, not the disk systems vendors that use their supplies. While IBM does qualify DDM suppliers for each purpose, we are basically purchasing them from the same major vendors as all of our competitors. I suspect you won't get much more than the responses you posted from Seagate and HGST.
- American car owners replace their cars every 59 months
According to a frequently cited auto market research firm, the average time before the original owner transfers their vehicle -- purchased or leased -- is currently 59 months. Both studies mention that customers have a different "definition" of failure than manufacturers, and often replace drives before they are completely kaput. The same is true for cars. Americans give various reasons why they trade in their less-than-five-year-old cars for newer models. Disk technologies advance at a faster pace, so it makes sense to change drives for other business reasons: speed and capacity improvements, lower power consumption, and so on.
The CMU study indicated that 43 percent of drives were replaced before they were completely dead. So, if General Motors estimated their cars lasted 9 years, and Toyota estimated 11 years, people still replace them sooner, for other reasons.
At IBM, we remind people that "data outlives the media". True for disk, and true for tape. Neither is "permanent storage", but rather a temporary resting point until the data is transferred to the next media. For this reason, IBM is focused on solutions and disk systems that plan for this inevitable migration process. IBM System Storage SAN Volume Controller is able to move active data from one disk system to another; IBM Tivoli Storage Manager is able to move backup copies from one tape to another; and IBM System Storage DR550 is able to move archive copies from disk and tape to newer disk and tape.
If you had only one car, then having that one and only vehicle die could be quite disruptive. However, companies that have fleet cars, like Hertz Car Rentals, don't wait for their cars to completely stop running either; they replace them well before that happens. For a large company with a large fleet of cars, regularly scheduled replacement is just part of doing business.
This brings us to the subject of RAID. No question that RAID 5 provides better reliability than having just a bunch of disks (JBOD). Certainly, three copies of data across separate disks, a variation of RAID 1, will provide even more protection, but for a price.
Robin mentions the "Auto-correlation" effect. Disk failures bunch up, so one recent failure might mean another DDM, somewhere in the environment, will probably fail soon also. For it to make a difference, it would (a) have to be a DDM in the same RAID 5 rank, and (b) have to occur during the time the first drive is being rebuilt to a spare volume.
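Joerg's "tipping point" question can be framed with a back-of-the-envelope model. Treating drive failures as independent (which, given the auto-correlation effect, understates the real risk), the chance that one of the remaining drives in a RAID-5 rank fails during the rebuild window is roughly 1 − (1 − AFR·t/8760)^(n−1). The AFR and rebuild times below are illustrative assumptions, not IBM figures:

```python
def p_loss_during_rebuild(afr, rebuild_hours, drives_in_rank):
    """Probability that at least one of the surviving drives in the rank
    fails while the first failure is being rebuilt, assuming independent
    failures at a constant annualized failure rate (AFR)."""
    p_one = afr * rebuild_hours / 8760.0   # per-drive failure prob in the window
    return 1 - (1 - p_one) ** (drives_in_rank - 1)

# 3% AFR (roughly what the CMU paper observed) on an 8-drive rank:
for hours in (12, 24, 48):
    print(f"{hours:2d}h rebuild: {p_loss_during_rebuild(0.03, hours, 8):.4%}")
```

The risk scales linearly with both the rebuild window and the rank width, which is exactly why longer rebuilds on bigger drives push people toward RAID-6.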
- The human body replaces skin cells every day
So there are individual DDMs, manufactured by the suppliers above; disk systems, manufactured by IBM and others, and then your entire IT infrastructure. Beyond the disk system, you probably have redundant fabrics, clustered servers and multiple data paths, because eventually hardware fails.
Most people realize that the human body replaces skin cells every day. Other cells are replaced frequently, within seven days, and others less frequently, taking a year or so to be replaced. I'm over 40 years old, but most of my cells are less than 9 years old. This is possible because information, data in the form of DNA, is moved from old cells to new cells, keeping the infrastructure (my body) alive.
Our clients should approach this in a more holistic view. You will replace disks in less than 3-5 years. While tape cartridges can retain their data for 20 years, most people change their tape drives every 7-9 years, and so tape data needs to be moved from old to new cartridges. Focus on your information, not individual DDMs.
What does this mean for DDM failures? When one happens, the disk system re-routes requests to a spare disk, rebuilding the data from RAID 5 parity, giving storage admins time to replace the failed unit. During the few hours this process takes, you are either taking a backup, or crossing your fingers. Note: for RAID 5, the time to rebuild is proportional to the number of disks in the rank, so smaller ranks can be rebuilt faster than larger ranks. To make matters worse, the slower rotational speeds and higher capacities of ATA disks mean that the rebuild process could take longer than for smaller-capacity, higher-speed FC/SCSI disk.
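The capacity-versus-speed point can be made concrete: rebuild time is roughly drive capacity divided by the sustainable rebuild rate, and big, slow ATA drives lose on both terms. The capacities and rebuild rates below are illustrative assumptions, not measured figures:

```python
def rebuild_hours(capacity_gb, rebuild_mb_per_s):
    """Hours to reconstruct one failed drive onto a spare, assuming a
    constant sustainable rebuild rate (production I/O slows this further)."""
    return capacity_gb * 1024 / rebuild_mb_per_s / 3600

fc_hours = rebuild_hours(146, 40)    # e.g., 146 GB 15K RPM FC drive at 40 MB/s
ata_hours = rebuild_hours(500, 15)   # e.g., 500 GB 7200 RPM ATA drive at 15 MB/s
```

Under these assumptions the ATA rebuild window is roughly nine times longer, which is nine times longer to be crossing your fingers.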
According to the Google study, a large portion of the DDM replacements had no SMART errors to warn that it was going to happen. To protect your infrastructure, you need to make sure you have current backups of all your data. IBM TotalStorage Productivity Center can help identify all the data that is "at risk", those files that have no backup, no copy, and no current backup since the file was most recently changed. A well-run shop keeps their "at risk" files below 3 percent.
So, where does that leave us?
- ATA drives are probably as reliable as FC/SCSI disk. Customers should choose which to use based on performance and workload characteristics. FC/SCSI drives are more expensive because they are designed to run at faster speeds, required by some enterprises for some workloads. IBM offers both, and has tools to help estimate which products are the best match to your requirements.
- RAID 5 is just one of the many choices of trade-offs between cost and protection of data. For some data, JBOD might be enough. For other data that is more mission critical, you might choose keeping two or three copies. Data protection is more than just using RAID, you need to also consider point-in-time copies, synchronous or asynchronous disk mirroring, continuous data protection (CDP), and backup to tape media. IBM can help show you how.
- Disk systems, and IT environments in general, are higher-level constructs that transcend the failures of individual components. DDM components will fail. Cache memory will fail. CPUs will fail. Choose a disk systems vendor that combines technologies in unique and innovative ways that take these possibilities into account, designed for no single point of failure, and no single point of repair.
So, Robin, from IBM's perspective, our hands are clean. Thank you for bringing this to our attention and for giving me the opportunity to highlight IBM's superiority at the systems level.
technorati tags: IBM, Seagate, Hitachi, HGST, EMC, NetApp, HP, HDS, Sun, Google, CMU, DDM, Fujitsu, MTBF, MTTF, AFR, ARR, JBOD, RAID, Tivoli, SVC, DR550, CDP, FC, SCSI, disk, tape, SAN
Sometimes, it's difficult to explain the products I manage to people outside the IT storage industry. How do you explain FCP vs. FICON, Giant Magnetoresistive (GMR) heads, the SMI-S interface, and the like, well enough to then explain how your job relates to those technologies? At least my friends and family read this blog, so they can somewhat understand some of the things I am working on. When I visit my folks on Sundays, we sometimes discuss items they read in my blog that week.
In addition to a "take your children to work day", we have discussed within IBM a "take your parents to work day", especially for the young new hires who have a hard time explaining what their new job is to the rest of their family.
Seth Godin points to a video ad to fill a job position and the confusion therein with the "recruiter" who just doesn't understand the job involved.
The problem is not just your parents, but any of your co-workers old enough to be parents who haven't bothered to keep up with the latest advancements in Web 2.0 technology. Here are some examples:
- A project leader working with a technology partner asked me if there was a difference between a "blog" and a "wiki", and which his team should use. This was not a simple yes/no answer; it involved some explanation, conversation and understanding of what he was trying to accomplish.
- For one of my meetings, someone instant-messaged me asking where it was, was it "face-to-face" (F2F) or Conference call (CC). I replied back, "A2A w/CC" (avatar-to-avatar with voice over conference call). When you are meeting other avatars in-world in Second Life, it gets quite distracting having everyone typing away, with their hands and fingers moving furiously, so we use a conference call to complement our 3D interaction.
That's why I was very excited to see Linden Lab announce the voice beta in Second Life. It won't be fully ready until later this year, but adding voice to Second Life will greatly reduce the hurdles we now have trying to coordinate conference calls with in-world activity.
I realize not everyone can keep up with all the new and different technologies, but the social networking aspects of some of these new developments are worth looking into.
technorati tags: IBM, blog, wiki, social networking, technology, avatar, voice, Second Life, GMR, FICON, FCP, SMI-S
Tonight I had dinner with Henry Daboub (an SVC expert from Houston, TX) and some clients, who asked what I would blog about tonight, and I figured it made sense to blog about the SVC.
Hu Yoshida clarifies his position about storage virtualization, including the statement: "As a result they can not provide the availability, scalability, and performance of a DS8300. If they could, there would be no need for a DS8300."
Of course, if humans descended from apes, why are there still apes? Now that we have cars, why are there still trains? But perhaps a better question is: now that there are supercomputers, why are there still mainframe servers?
The issue is the difference between scale-up versus scale-out. Scale-up is making a single box as big and beefy as possible. When the SVC was introduced, the major vendors all had scale-up designs: IBM ESS 800, HDS Lightning, EMC Symmetrix. Like the mainframe, they were for customers that wanted everything in a single monolithic container.
SAN Volume Controller was the result of IBM Research asking the question: if you could put anyone's software (feature and functionality) on anyone's hardware (monolithic scale-up design), what combination would you choose? What if the brains inside today's monolithic systems could be snapped into another vendor's frame? What if you could run SRDF on an HDS box, or ShadowImage on an IBM box? The surprising response was that most customers would want a single software for consistency, but wanted the option to choose from different vendors' hardware, to negotiate the best price for the commodity iron. Based on this feedback, the SVC was born.
The idea was simple: put all the brains in a separate appliance. The appliance would do the non-disruptive migrations, the caching, the striping, and all the copy services. This lets the customer choose the hardware they want, any mix of FC and ATA disk, from any vendor.
The SVC design was based on IBM's long history in supercomputers. Using the same "scale-out" technology, the power comes not from having it all in one monolithic box, but rather from a design that combines small nodes together. While the cache is not globally shared, the data is shared between node-pairs, and the logical-to-physical mapping is routed around to all nodes in a cluster. Each SVC node talks to each other SVC node through the FCP ports, eliminating the need for additional wiring. For the most part, each node does its own separate work, but when they need to, the nodes can communicate with each other, just like nodes in a supercomputer.
Both the SVC and the DS8300 Turbo have better than 99.999 percent availability, based on redundant components designed for no single point of failure (SPOF). IBM has sold thousands of each, and they have been in the field enough time that we can make that claim. There is nothing between scale-up and scale-out that makes one inherently more available than the other.
Both the SVC and the DS8300 Turbo can scale from as little as a few TB of disk, to hundreds of TB of disk. We have yet to meet a customer that is too big for the SVC. The DS8300 Turbo is able to scale by adding up to four extension frames, but is still considered a single box from a scale-up perspective. From a processor perspective, an 8-node SVC cluster has 16 Intel Xeon processors, and the DS8300 has 8 POWER5+ processors (dual 4-way). The key advantage of scale-out is that you can add capacity to the SVC in smaller increments. Jumping from a DS8100 (dual 2-way) to a DS8300 (dual 4-way) is a big jump.
SVC remains the fastest disk system in the industry, based on both the SPC-1 and SPC-2 benchmarks. The latest model now supports 8GB per node, for a total of 64GB for an 8-node cluster. This can be used for both read and write non-volatile storage. By comparison, DS8300 Turbo has 32GB write non-volatile storage, and up to 256 GB of read-only cache. The SVC is able to do 155,519 IOPS, faster than the 123,030 IOPS for the DS8300, and of course faster than anything from EMC, HDS, HP or Texas Memory Systems. Of course, workloads vary, and there might be some workloads where the 256GB of read-only cache of the monolithic DS8300 is the better choice.
- copy services
Both SVC and DS8300 Turbo offer FlashCopy (point-in-time copy), Metro Mirror (synchronous) and Global Mirror (asynchronous). SVC provides the additional benefit that it can perform a FlashCopy from one frame to another, and the ability to migrate data seamlessly from one box to another.
Interestingly, IBM has seen a resurgence in both mainframe sales, as well as interest in supercomputers. Both have their place, based on the workload characteristics, and so IBM will continue to offer both modular scale-out designs, as well as monolithic scale-up designs, to meet the different needs of the marketplace.
technorati tags: IBM, disk, SAN, Volume, Controller, DS8300, Turbo, Hu Yoshida, FlashCopy, Metro Mirror, Global Mirror, SPC, benchmarks, HDS, HP, EMC, mainframe, supercomputer
Modified by TonyPearson
Back in 1986, when I first started with IBM, my first job was working on a software product called Data Facility Hierarchical Storage Manager (DFHSM). This did "Information Lifecycle Management" (ILM) by moving data sets from one storage tier to another. (The phrase "Information Lifecycle Management" was coined by StorageTek in 1991, and later resurrected by EMC a few years ago. As is often typical, things that appear new to the distributed systems crowd, are often well-established concepts in the mainframe arena).
To help explain DFHSM and its sister product Data Facility Data Set Services (DFDSS), an enterprising sales rep in Los Angeles named C.D. Larsen made a video called "Re-arranging the sock drawer". He explained that you want the socks you wear the most in the top drawer, and socks that you only wear now and again in lower drawers. DFHSM can re-arrange your sock drawer through policy-based automation, determining which ones you wear most often, and moving the others down the "hierarchy" accordingly.
To explain DFDSS, he pulled out an entire drawer of socks, and moved it to another level. DFDSS was able to do volume-level backups and dumps to tape very quickly, since it did not process individual data sets, but rather the entire volume image as a whole. These two products are now the DFSMShsm and DFSMSdss components of the DFSMS element of the z/OS operating system.
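The sock-drawer policy is easy to sketch on a modern file system too. This is only a toy illustration of the HSM idea (migrate by last-access age), not how DFSMShsm is implemented, and the directory names are hypothetical:

```python
import os
import shutil
import time

def migrate_cold_files(hot_dir, cold_dir, age_days):
    """Move files not accessed within age_days from the 'top drawer'
    (hot_dir) down the hierarchy to cold_dir, HSM-style."""
    cutoff = time.time() - age_days * 86400
    moved = []
    for name in sorted(os.listdir(hot_dir)):
        path = os.path.join(hot_dir, name)
        if os.path.isfile(path) and os.path.getatime(path) < cutoff:
            shutil.move(path, os.path.join(cold_dir, name))
            moved.append(name)
    return moved
```

A real HSM adds the other half: transparent recall, so the data comes back up the hierarchy the moment someone asks for it.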
Mainframes use an interesting naming convention for their data sets: up to 44 characters, divided into qualifiers that are 1-8 characters long, separated by periods. For example:
The first qualifier indicated it belonged to me, that it was for my Project A, that it was a testcase, and specifically TEST1 job control language. Arranging them in this order meant that I could easily find all the data needed for Project A, but if I wanted to keep all the testcase data together, I might have put that as the second qualifier instead.
On Linux, UNIX and Windows, most people are more familiar with hierarchical file systems, so the same file might be stored as:
Same concept. You set up a taxonomy of the way you want to organize your data, so that related data can be grouped together and managed more easily. Whereas we used to tell customers that "qualifiers are your friend", we now tell people "sub-directories are your friend". This is true when organizing the files on your laptop, in your Lotus Notes, and in Second Life.
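The mapping between the two conventions is mechanical. A sketch (the dataset name below is hypothetical, chosen to match the description above):

```python
def dataset_to_path(dataset_name):
    """Map an MVS-style dataset name (up to 44 characters of dot-separated
    qualifiers, each 1-8 characters) to a hierarchical file path."""
    if len(dataset_name) > 44:
        raise ValueError("dataset names are limited to 44 characters")
    qualifiers = dataset_name.split(".")
    if not all(1 <= len(q) <= 8 for q in qualifiers):
        raise ValueError("each qualifier must be 1-8 characters")
    return "/".join(q.lower() for q in qualifiers)

print(dataset_to_path("MYUSER.PROJA.TESTCASE.TEST1.JCL"))
# myuser/proja/testcase/test1/jcl
```

Each qualifier becomes a sub-directory level, which is exactly the "qualifiers are your friend" advice restated for file systems.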
Since starting Second Life last November, I have picked up all kinds of free things along the way, and now have thousands of objects in my "inventory". Basically, it's like keeping things in your pocket: when you want something, you just take it out of your pocket, and *poof* it appears magically on the ground. I was having a hard time finding things in my inventory, so I decided to re-arrange it with sub-folders. This is done in-world, and I found it best to do this away from other avatars asking "what are you doing?", which can get quite annoying. Find a remote island or the rooftop of some building when doing "house cleaning".
I've arranged my main folders as follows. These all appear on a single screen, which makes it easy to find exactly what I am looking for.
- Body Parts
- Calling Cards
- Lost and Found
- Photo Album
In Second Life, you can make complete "outfits" which include your body shape, skin, eyes, hair, and clothes. However, saving away many outfits means duplicating a lot of items. Therefore, I separated them out. I keep body shape, skin, eyes and hair in the folder "Body Parts" and all of the clothing items under "Clothing". Under clothing, I separated everything out into the major categories:
I could have a separate folder for "socks", but I keep those in the "shoes" folder.
technorati tags: IBM, DFHSM, DFDSS, DFSMS, z/OS, qualifiers, taxonomy, Second Life, inventory
Well, I'm back from Mexico.
The flight back was uneventful, except for the leg from Houston to Tucson. The lady in the window seat had "overallocated storage" and required a "distance extension" on her safety belt. To accommodate her, her husband and I flipped up the "logical partitions" between the seats, and "compressed" ourselves to take up less space. Luckily, it was only for two hours.
On the flight to Houston, I was asked what kind of drink I wanted, in Spanish, as the crew were all from Mexico. Here's a quick Spanish lesson:
- this stands for drink in general, and can include liquor and soft drinks
- this stands generically for soft drink. They will often use "Coke" to refer to any cola beverage, regardless of brand.
It is interesting that the Spanish language in each country is slightly different. The Mexicans I met with and spoke Spanish to immediately recognized I was from South America, and not from Central America. Likewise, folks in Puerto Rico knew I was from somewhere in South America, and not from Mexico or Central America. In Colombia, Argentina, and even Brazil, my speech is more recognizable as being from Bolivia.
Before IBM got into an OEM agreement with Network Appliance, I used to indicate that EMC and NetApp were the "Coke and Pepsi" of the NAS marketplace. IBM had a presence, but it was in the single digits, whereas these two major players had roughly equal marketshare, just as Coke and Pepsi dominate equally the US marketplace. That analogy doesn't work in other countries, as in some cases the country might be more heavily in favor of one or the other.
On my flight from Houston to Tucson, however, I was asked what kind of "pop" I wanted. I always say "soda" to refer generically to soft drinks, but realize that others say "pop" instead. Not only can Americans detect what part of the country people are from by accent, but also by the words they use.
Now I see a blog that explores in great detail the issue of Pop vs Soda vs Coke.
So, it looks like I'll need to "retire" my Coke vs. Pepsi analogy, not because their marketshare has changed, but because IBM's partnering with NetApp greatly skews the advantage over EMC.
technorati tags: IBM, Mexico, NAS, OEM, NetApp, EMC, Coke, Pepsi, Bolivia, Pop, Soda