How does one judge a glass of wine? There are a few basic tests: how it looks, smells and tastes. But as the wine is poured, you may or may not know that it is made up of different varieties of grapes. A producer sits down and experiments with different percentages of grapes, and this allows some creativity in making a better glass of wine for the consumer. Of course there are many more factors that play into this process, but it is by and large the same no matter what wine you enjoy. You enjoy the wine as a whole, a combination of things put together for you, without you having to know or even understand all that went into making that glass of wine.
When we talk to clients about their data backup strategy, we find a process very similar to wine making. The end user rarely knows all that goes into creating a backup of their data and protecting it for them. They simply enjoy the knowledge that their data is safe and will be there if they need to access it. But what we see in the making of the backup is a blend of technologies, and a creative element that lets administrators work around constraints like budget and manpower.
As data evolves, we are seeing multiple layers of protection, and the criticality of the data determines the recovery point objective (RPO), the recovery time objective (RTO) and the retention period. Backup strategies usually mean more than doing a bunch of incrementals and then a full off to disk pools and then tape. There are many different levels of protection that we can use.
Snapshots seem to be more common today than five years ago. They allow for a clean and consistent recovery point of a database or file system. But snapshots are used for more than just a quick backup: with writable copies we can quickly set up copies for test and dev environments and also rapidly deploy virtual images for desktops or servers. Snapshots usually live on the same disk set the data is sitting on, and they can be moved around via a vault technology or a mirror to another site. This can be used for long-term storage if needed, but typically snapshots are used for quick recoveries of less than 7 days. Snapshots are also vulnerable to data corruption: if a software bug corrupts data on the storage system, that can affect the snapshots and mirrors too.
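To make that one-week window concrete, here is a minimal sketch in Python of the rolling retention check behind such a schedule. The snapshot names and the in-memory catalog are hypothetical; a real array handles this through its own scheduler and API.

```python
# A toy model of short-retention snapshots: keep a rolling week, expire the rest.
# The catalog dict stands in for whatever the storage system actually tracks.
from datetime import datetime, timedelta

RETENTION = timedelta(days=7)

def expired(snapshots, now):
    """Return the names of snapshots that have aged out of the 7-day window."""
    cutoff = now - RETENTION
    return [name for name, created in snapshots.items() if created < cutoff]

now = datetime.now()
catalog = {f"daily.{i}": now - timedelta(days=i) for i in range(10)}
print("expire:", expired(catalog, now))  # daily.8 and daily.9 fall outside the window
```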
Backups are more traditional: the file system is scanned for changes, and those changes are sent off to a device where the data is stored until needed. In the past it took time to back up file systems, and as storage has gotten larger, those backup times grow longer. The technology has tried to keep up by adding larger backup servers and more tape drives, allowing for more incoming streams. Now, with the idea of using spinning disk for tape pools, we can back up a little quicker, as disk can write data faster than tape. Many things have evolved out of this technology, for example the Linear Tape File System (LTFS) and Hierarchical Storage Management (HSM).
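As a concrete picture of that scan-and-send loop, here is a bare-bones sketch in Python. The directory, the 24-hour window and the send callback are placeholders, not any particular backup product.

```python
# A bare-bones incremental pass: walk the file system and send only the files
# changed since the last run.
import os
import time

def incremental_backup(root, last_run, send):
    """Send every file under root modified after last_run; return the count."""
    sent = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > last_run:
                send(path)  # e.g. stream to a disk pool, later migrate to tape
                sent += 1
    return sent

count = incremental_backup("/export/home", time.time() - 86400, print)
print(f"{count} changed files sent")
```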
When clients are looking for strategies for protecting their data, they will use a combination of these techniques, and a mixture of both disk and tape, to fully protect their environment. Depending on the data type, you may want to use just snapshots, because the data changes rapidly and you do not need to restore from a week or a year ago. Snapshots are really useful in that case, and so is mirroring, or even Metro Mirror if the RTO is small enough. There are other factors, such as Sarbanes-Oxley, that will require longer-term recovery methods like backups.
Just like with a great wine, there are fewer rules today and more room for creativity in designing data protection. And just like with wine, there are many consultants who will help you find a good balance of technology to match levels of protection with data. Spend the time looking at your protection schemes and see if there are any better ways of balancing this equation. Maybe, with the right planning, you will be able to enjoy a glass of wine instead of spending time recovering from a disaster.
If you haven't heard (get out from under that rock), IBM is turning 100 this year, and the company is having an awesome time celebrating our longevity. From technical advances and the Apollo program to blazing trails in race and gender equality, IBM has done and IS doing the job for all of the world. The company has changed in so many ways and has had to adapt in ways only IBMers can, but we have survived and thrived.
Find more information about our centennial celebration here.
Here is a great 100-second video of all the cool and great things IBM has done over the last 100 years.
When I first started working at IBM, we had a handful of NAS storage devices: the NAS 100, NAS 200, NAS 300(G) and NAS 500. The NAS 100 was a 1U server appliance that ran Windows 2000, as did the NAS 200, both built on IBM hardware. The NAS 500 ran on an AIX system, also from the IBM stock. They were traditional NAS-type systems, and IBM sold them on the promise of "let us build the system for you so you don't have to." Somewhat limited in functionality, they nevertheless did the job they were designed to do: serve NAS data.
That same year, IBM decided to partner with a company that was doing some really interesting things in the storage market. Network Appliance had just started gaining steam with their Data ONTAP code (6.something, if I remember correctly) and had broken through a barrier the IBM systems could not: unified protocols from a single architecture, plus integration with products like Exchange and SQL Server using their cool snapshot technology. It took some time to get up to speed on the new NetApp technology, with snap-this and snap-that, but soon we were all talking about waffles (WAFL) and aggrs (aggregates).
Throughout the years, the product set grew, and so did the hardware offerings. We kept up with the releases, and for the most part a 20-60 day lag in releasing new software was OK for most IBM customers. We partnered with the sales teams and support teams to help grow the N series customer base and to keep customers happy. As with any partnership there are bumps along the way, and at times it seemed like two parents telling each other they agree to disagree. All in all, the N series system has been very successful at IBM.
But as the years progressed, new technology like XIV, Real-time Compression and TSM FlashCopy Manager filled some of the voids previously filled by N series in the IBM portfolio. As with many companies, there are products that overlap, and N series overlaps over half of the product line at IBM Storage. Positioning became harder as sales teams questioned when to sell N series and when to sell something "blue". We quickly learned that customers really liked what N series brought to the table and how flexible the solution could be.
Now, with the news of NetApp purchasing Engenio, I wonder how the relationship between IBM and NetApp will survive. IBM also rebrands the Engenio products as the IBM DS 3k, 4k and 5k. I guess the bigger question is: what will NetApp do with that product line? If history is any indicator, they will simply keep things as they are for some time and slowly move the customers over to a Data ONTAP product. The other question is how long IBM will keep sending money over to NetApp for products that we sell and support.
My friend and colleague Ian Wright has put together an awesome YouTube video of the V7000 with the FlashCopy Manager software. Ian has made several videos of the V7000, including a tour of the GUI and various how-tos, and now this. Ian said in an email to me earlier: "The video starts out with a restore of an accidentally deleted email (but not a restore of the spam that was deleted) and goes on to show recovering an accidentally deleted database. Both are actions that I think should resonate with customers using these applications."
I thought this was an awesome example of the V7000 and the Rapid Application Storage solution that was released a few months back. Please take a few minutes to go through the video and give Ian some feedback.
This week, I am at SNW in San Jose, CA. If you have never heard of the conference, it's all about storage and networking, and it pulls in all of the big vendors to put on labs, lectures and a vendor hall. People come from all over the world to this event to learn what is new and how to do things better.
One thing that I love doing at these events is talking to customers and potential customers about IBM storage technology solutions. Often we find the conversations are not about products so much as the technology in them that fixes some issue in the data center. I think this is best seen when you come into the IBM booth. There is no hardware with blinking lights to see or cables to yank. We have something better: people who know the solutions to your issues.
If you ask any of the IBMers who work these events, they always say it's a love-hate relationship. The hours are long and you stand on your feet for 4-8 hours. The best part is talking about IBM solutions and finding out what people are doing in the field. Listening to the customer is the best way to help drive innovation, and IBM has programs that send our developers into the field to listen to customers; these events are just one example of that.
Another event at SNW this year was a gathering of the storage social media moguls. This is a non-vendor-specific event and is open to everyone. It is associated with the hash tag #storagebeers, and these gatherings have been going on all over the world. Last night was the largest storagebeers to date, and it was a who's who of this community. But better than meeting the people you see on Twitter or those who write blogs was the idea of putting all of the vendor fighting behind us and just being a group of people who work in the storage industry, talking about whatever was on their minds. If you find yourself at an event like SNW or VMworld, check to see if there is a #storagebeers and go and meet some really cool people.
If you are at SNW and want to come by for a chat, you will find me at the IBM Booth today between 11am and 3pm. I would love to spend some time learning about what you are doing in the data center.
May 9th has been a target on my calendar for some time now. Inside IBM, we have been waiting for this day to come so we could talk about the new things being released in the storage platform. It almost feels like Christmas morning, with a bunch of new presents under the tree. Each gift holds something that is either really cool or very useful. The only difference is that your Aunt Matilda and her little dog are not coming over for brunch.
Under the IBM tree today is a slew of presents for almost the entire storage platform. I will concentrate on just the IBM NAS ones but if you are interested in knowing what is going on elsewhere, you can find more information at the main website.
SONAS must have been a good boy because there are plenty of gifts for him under the tree this morning. Not only did he find presents under the tree but there were a few little things in his stocking. Here is what Santa brought:
This SONAS release is labeled R1.2 and can be obtained by contacting the technical advisor assigned to you.
Santa was also at the N series house and dropped off a few gifts: a new N6270 to replace the N6070. This new system is in line with the N6200 series, with larger amounts of RAM and more processors. Just like the smaller N6240, there is an expansion controller where customers can add more PCI control cards like HBAs, 10GbE or even FCoE. A new disk shelf was also released, which uses the smaller 2.5-inch drives with improved back-end performance.
And over at the Real-time Compression house, they got new support for EMC Celerra.
Overall, a very busy time of year for IBM (and Santa), as these were just a fraction of the storage announcements today. Also today is the IBM Storage Executive Summit in New York City. My friend and fellow blogger Tony Pearson is covering this great event and will be updating his blog and Twitter feed. If you were not able to make it to NYC for the event, feel free to tweet him your questions at @az990tony. You can also send questions to our IBM Storage feed at @ibmstorage.
I was just thinking the other day that I really need to write an article for my blog about the upcoming releases. When I opened the page it said I had not written anything since May of this year. Time really flies when you are having fun, so they say.
IBM just released a new XIV system, dubbed Gen 3. Generation 1, of course, was built by the XIV company before IBM purchased them, and Gen 2 came shortly thereafter. As you would expect, the system has to keep up with customer demands and technology refreshes, but something unique caught my eye: the performance of this system should be head and shoulders above the competition.
The Nehalem micro-architecture now makes up the heart of the processing power within the grid, with tons more cache to boot. There is also a change in the interconnect, from Ethernet to InfiniBand. I can't wait to see the new SPC-2 numbers when they are published.
I suspect the introduction of more cache (via SSD) and the switch to near-line SAS drives will only help increase performance from a Gen 2 to a Gen 3 system. The self-tuning, self-healing, tierless storage is still at the heart of the system and still redefines how storage is done today.
There are plenty of blogs and articles on the specifics of the release but here is the IBM announcement page http://www-03.ibm.com/systems/storage/news/announcement/big-data-20110712.html
Every year IBM puts on a conference for all of our clients, business partners and strategic partners.
Last week at the IBM Technical Conference, I was able to spend some time with a couple of friends discussing technology. It is always interesting to hear their take on where the storage market is going and what lies ahead. As my NetApp pal and I were chatting about the messaging around unified architecture, we both noted that what is unified from one perspective is disjointed from another.
IBM and NetApp have been using the term unified for their NAS/SAN devices for about five years now. The idea is to share a common code base on the same hardware to increase the functionality and usability of that storage. Other vendors have gone similar routes using multiple code bases and/or hardware, but I see that as a NAS gateway in front of a SAN storage system.
This has been very successful in data centers both large and small. But the idea of how we manage storage is changing. Virtualization is changing how, and even where, our data may be stored. The term cloud is something of a marketing term; I like the term Storage Utility better. Utility companies such as electric, water, sewer and even cable provide a product to their consumers, and storage utility vendors could do the same.
Most people are not concerned about the process companies use to make water drinkable or how electricity is generated, as long as it is safe, reliable and easy for them to consume. Storage as a utility is no different; it is only when the storage is offline or hacked into by outsiders that consumers become concerned. There are laws that govern utilities, and the FTC has put some privacy rules together to help consumers, but I believe we can take it a little further (a blog for another time).
As our data moves from traditional spinning drives in our data centers to a storage utility, we will need some type of bridge to ease the pain of the transition. The main reason people do not adopt new technology is that the transition is often too painful and the benefit of the new technology is less than the cost of the move. Whether it is a software package that helps move data or a hardware device, it will have to give access to both file-based data and object-based data. This will allow users to read their files as needed, no matter their connectivity or location. It could also be used to help drive efficiencies up by allowing data to move from file-based (high cost) to object-based (lower cost) environments.
Today there are some vendors who have early versions of this type of unified solution. They are bridging the gap between what we have today in private data centers and the future of public utility storage. This is very early in the transition but with this type of technology, we will be able to adapt and provide a better way of storing data. Will it still be called a unified solution? Only the marketing people can tell us that.
Labor Day has come and gone, and with it any holidays between now and Thanksgiving. The only consolation is the hope that your favorite football team (both American football and what we call soccer) has a great weekend match and you get to celebrate with the beverage of your choice.
During your work week, which can and sometimes does include weekends, all you hear is that there is no more money to do the things you have to do to keep the business running. If you have kept up with squeezing more out of your systems with virtualization, that's great, but now your network is overtaxed. The staff that used to take care of certain aspects of the day-to-day running of your data center has been let go, and their job has been 'given' to you with no thought of compensating you for the extra tasks.
The Earth is warming, the weather is out of control and the price of gas is so high that you decide to bike to work to help save the planet. You spend more time on the road commuting and look like you need a shower when you get to work after dodging traffic all morning. Your coffee costs more now because the coffee house wants to use Fair Trade coffee from farmers in a country you have never been to. And your dog is on anti-depressant meds because you are not home as much and he can't go out in the yard because of the killer bees migrating north from Mexico.
Our lives seem to be getting more complicated, and it's nice when we find things that not only help us but are easy to use. When you come across these items, they make such an impression that you want to tell others about your good fortune. I came across a solution that was so easy to use, and whose value was so great, that at first I didn't believe the whole story.
About a year ago, I was asked to help out on the Storwize/Real-time Compression (RTC) team as it transitioned into the IBM portfolio. I met with the engineers and sales people, and all had wonderful things to say about the technology. I listened, but was hesitant to drink all of the Kool-Aid they were pouring.
A year later, I am very much a believer in the RTC technology and think it really could be a game changer in the market. If you keep up with IDC, Gartner and the other analysts, they all point to compression of data as one of the bigger levers for handling future growth. There are a lot of vendors that claim they can compress data, but it's not all done the same way.
One of the things that stood out from day one is the idea of using LZ compression in real time to shrink data, instead of deduplication. Coming from an N series (NetApp) background, I understood how deduplication works and where it is useful. But this is compression, which is a different ball game: now we are able to shrink data that isn't block-for-block identical to anything stored before. Given that NetApp deduplication is sensitive to block size and offsets, this is exactly what is needed in the market.
The next question I always get, and one I had myself, was: "That's great, you can compress data with the best of them, but what's the overhead?" I waited a long time to see the performance numbers and found an astonishing outcome: the RTC appliance actually improved the performance of the overall solution. It helps by adding cache and processing power to the serving of data, but it also improves performance because the system behind it has less data to process.
For example, if a system has to save 100GB of data with no compression, then all of that data has to be laid out on the disk; the spindles, cache, CPUs and I/O ports all have to work to move 100GB of data. But if we get 2:1 or 3:1 compression ratios, then all of those components have less work to do. No longer are they saving 100GB of data but 50GB or 33GB. This lets the system process more data and leaves cycles to respond more quickly to I/O requests (i.e., lower latency).
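To put a number on that, here is a minimal sketch using Python's built-in zlib, an LZ-family codec like the one RTC is based on, compressing a write payload inline and reporting the ratio. The sample data is deliberately repetitive, so the ratio is illustrative only; real ratios depend entirely on the workload, and this is just the arithmetic, not IBM's actual engine.

```python
# Compress a write payload inline and show how many bytes the downstream
# spindles, cache and I/O ports are spared.
import zlib

def compress_write(payload, level=6):
    """Compress a block of data before it is handed to the back-end storage."""
    return zlib.compress(payload, level)

# Repetitive, log-like data compresses very well.
raw = b"2011-09-06 host42 app: request served in 12 ms\n" * 20000
stored = compress_write(raw)

ratio = len(raw) / len(stored)
saved = len(raw) - len(stored)
print(f"raw {len(raw)} B -> stored {len(stored)} B "
      f"({ratio:.1f}:1, {saved} fewer bytes for every component to move)")
```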
So the final thing is always the question of how hard this is to install. Is there a long waiting period, or do you need five IBM technicians to install it? All I have to say is: it's easy. So easy that there is a good YouTube video that goes through the entire process, from unpacking to racking to compressing data. I think the video speaks for itself:
So if you are back at work today and find your life swirling around you like a hurricane, stop and be reassured that there are a few things out there that can still make your life a little easier. It won't make the killer bees go away, but maybe it will give you peace of mind that your storage won't run out in the near future.
I just read the blogs from Chris Mellor of The Register and Tom Trainer of Network Computing, and marveled at how insightful these two outsiders are about the inner workings of IBM.
First off, yes, IBM is no longer selling the DCS9900, a rebranded DDN OEM system in the very large IBM storage portfolio. There is no question that this product is no longer available after October 15.
Second, the DCS3700 is already part of our portfolio and is an OEM box from NetApp/Engenio/LSI. The density of this system is the same as the DCS9900, and it makes sense to use the DCS3700 as a replacement for the DCS9900.
Third, Tom's blog about SONAS being monolithic NAS storage is very skewed. SONAS is very flexible in the way we can scale both storage and throughput without affecting either variable. With most "scale out" systems, you have to scale both in order to keep up with demand. SONAS uses some of the best technology on the market, with a huge amount of throughput.
His statement about IBM dropping DDN from SONAS is untrue and goes to show how much research Tom put into writing this blog. I am sure Tom is looking to write a non-biased blog for Network Computing, but maybe those days at HDS are still influencing his ability to look at an announcement letter without making extrapolations about other products.
Finally, if HDS thought BlueArc was so great, why didn't they buy the company back when they could have gotten it for a better deal? Has the product changed THAT much since 2006? I wish HDS only the best in dealing with the transition and getting that product under the HDS umbrella.
If you do your homework and base your assumptions on facts instead of conjecture, you will find SONAS is a solid platform in the enterprise NAS market. SONAS has proven it can be the market leader with a low cost-to-performance ratio, and it will only get better as time goes on.
Storwize V7000 Unified: A marriage of SAN and NAS
Storwize V7000 and the IBM NAS software were married Wednesday, October 12, 2011, at midnight at the IBM Storage chapel in San Jose, California. The Reverend Rod Adkins officiated. Following the ceremony, the bride's parents hosted a reception at the Almaden Research Center.
The bride comes from the NAS family, who were in attendance. She also has deep ties with the Tivoli and GPFS families within the storage community. Family members from the X series family were also at the ceremony.
The groom comes from a long line of storage products. XIV, DS8800 and SVC were all part of the festivities and supported the groom throughout the entire day.
The couple will honeymoon in Redwood City, California, with a visit to the Storage Performance Council.
After long anticipation, IBM is now in the unified storage market with the introduction of the Storwize V7000 Unified (SV7kU?). The system starts at as little as 6U of rack space and can flex up to four clustered systems (via RPQ), supporting internal SAS, SATA or virtualized external disk from other vendors.
The V7000 Unified is a midrange disk system that gives new or existing V7000 customers the ability to integrate their NAS workload into the system. Using the standard V7000 shelf, IBM has added two x3650 M3 servers running the IBM NAS software stack to complete a unified architecture.
A new GUI that integrates the NAS portion of the software is now available, combining management of both technologies within a few mouse clicks. Setup of the system stays the same, with the simplified USB key approach. Customers have reported that, between the USB key installation and the wizard-driven alerts, the V7000 has been one of the easiest systems to install and configure, so IBM kept these features in the enhanced GUI.
V7000 Unified will support the NFS, CIFS, FTP, HTTPS and SCP protocols in addition to the block protocols FCP and iSCSI. It will also support file replication and file-level snapshots for business continuity, in addition to the existing block functions.
Another function in the V7000 Unified that will help customers is the introduction of the IBM Active Cloud Engine. What is it? Think of it as a very smart, very fast robot – that never sleeps – keeping your cloud storage neat, tidy and running smoothly. Think Rosie the robot from The Jetsons.
Active Cloud Engine is a policy-driven engine that improves storage efficiency by automatically placing, moving and deleting files on the appropriate storage. The efficiency gain comes from storing files where they should be, without an administrator manually moving them. As data gets older, the engine can move a file to another location where the price per TB is lower, and even delete the file if necessary.
The movement is done seamlessly; the end user has no idea their data has moved. Another aspect of the engine is identifying files for backup or replication to a DR location. As the data ages, it continues its life cycle through the data center without storage administrator intervention.
Data can be moved from internal disk to external virtualized disk and even to tape. The diagram below shows the movement from file creation to 180 days old and off to deduplicated tape.
The policy can be created from a wizard in the V7000 Unified GUI by setting thresholds and start times. Customers can also exclude certain files by file attributes like size or last access time. For the more advanced customer, the policy can be edited directly.
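To give a feel for the logic such a policy encodes, here is a toy sketch in Python of the age-based movement described above. The pool paths and thresholds are hypothetical, and the real Active Cloud Engine policies are defined through the GUI wizard or its policy editor, not a script like this.

```python
# A toy, age-driven tiering pass: files cool from the fast pool toward cheaper
# storage as they age, mirroring the creation -> 180 days -> tape flow above.
import os
import shutil
import time

DAY = 86400
# (minimum age in days, destination pool), checked oldest threshold first.
TIERS = [
    (180, "/pools/tape_staging"),  # candidates for deduplicated tape
    (30, "/pools/nearline"),       # cheaper disk for cooling data
]

def tier_by_age(fast_pool):
    now = time.time()
    for name in os.listdir(fast_pool):
        path = os.path.join(fast_pool, name)
        if not os.path.isfile(path):
            continue
        age_days = (now - os.path.getatime(path)) / DAY
        for min_age, destination in TIERS:
            if age_days >= min_age:
                # The NAS layer keeps the file visible at its original path.
                shutil.move(path, os.path.join(destination, name))
                break

tier_by_age("/pools/system")
```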
Another question people are asking is about the relationship with NetApp and how this product will affect the N series product line. IBM is expanding the midrange storage portfolio by offering the new V7000 Unified alongside our N series products, with each focused on different client needs.
N series continues to be IBM’s offering focused on clients who have a primary need for NAS optimized (file) workloads. Existing N series clients with growing data requirements will continue to require additional N series disk drives, expansion units, and new systems to meet their needs.
IBM Storwize V7000 Unified will particularly appeal to clients who have a primary need for storage to support block optimized workloads with additional needs to consolidate file workloads for greater efficiency (unified storage). Storwize V7000 Unified is also targeted to clients that can benefit from the unique capabilities of IBM Active Cloud Engine or to clients that already are using Storwize V7000 or SONAS.
Just as in real life, we have seen other marriages come and go, but this one seems different. The V7000 Unified uses the best of the storage portfolio and brings value to the customer. IBM is also leveraging the investments made over 10 years of innovation (virtualization, Easy Tier, the simplified GUI, Active Cloud Engine) to produce a product that lowers the total cost of ownership.
Keeping with the tradition of good luck for the bride:
“Something old, something new, something borrowed,
something blue, and a silver sixpence in her shoe."
(You can find this poem in Leslie Jones' book "Happy is the Bride the Sun Shines On.") We find the IBM version of this good-luck offering in the following:
Something Old: 4,500 V7000 systems sold last year
Something New: Active Cloud Engine
Something Borrowed: Storage Virtualization
Something Blue: Storwize V7000 Unified, a true IBM organic product
I am still looking for the sixpence but feel free to mail us one and we will attach it to the bezel of each controller.
Now available is the IBM System Storage N series with VMware vSphere
Redbooks are a great way of learning a new technology and a handy reference for configuration. I have used them for years, not just for storage but for X series servers and for software like TSM. The people who write these books spend a great deal of time putting them together, and I believe most of them are written by volunteers.
This is the third edition of this Redbook, and if you have read it before, here are some of the changes:
- Latest N series model and feature information
- Updated the IBM Redbook to reflect VMware vSphere 4.1 environments
- Information for Virtual Storage Console 2.x has been added
This book on N series and VMware opens with an introduction to both the N series systems and VMware vSphere. There are sections on installing the systems, deploying the LUNs and recovery. After going through this Redbook, you will have a better understanding of a complete and protected VMware system. If you need help sizing your hardware, there is a section for you. If you are looking to test running VMs over NFS, it's in there too!
One of the biggest issues with virtual systems is making sure you have proper alignment between the system blocks and the storage array. Misalignment can degrade performance by roughly a factor of 2 on most random reads and writes, as two blocks are required to satisfy one request. To avoid this costly mistake, or to correct VMs you have already set up, a section in the book called Partition alignment walks you through the entire process of correctly setting the alignment or fixing older systems.
Another area I will point out is the use of deduplication, compression and cloning to drive storage efficiency higher. These software features allow customers to store more systems on the storage array than if they used traditional hard drives alone. The book also covers how to use snapshots for cloning, mirrors for Site Recovery Manager, and long-term storage, aka SnapVault. At the end of the book are some examples of scripts one might use for snapshots in hot backup mode.
Whether you are a seasoned veteran or a newbie to the VMware scene, this is a great guide that will help you set up your vSphere environment from start to finish. The information is there; use the search feature or sit down on a Friday with a highlighter, whichever fits your style, and learn a little about using an N series system with VMware.
Here is the link to this Redbook:
IBM N series and VMware vSphere
For more information on Redbooks go here!
My father is a retired teacher but loves to work with his hands. I can remember, very early in my upbringing, him teaching me that it is good to measure twice and cut once. Whether it was building a deck or just a birdhouse, the point was that it took more time to cut something wrong and then have to re-cut the board shorter, or even waste the old board and cut a whole new one.
When I was preparing for this article, I remembered having to learn that lesson the hard way, and how much effort really goes into that second cut. The problem in the storage industry is misaligned partitions resulting from the move from 512-byte sectors to new 4096-byte sectors. This has to be one of the bigger performance issues with virtualized systems and new storage.
Disk drives in the past used a fixed sector size of 512 bytes. This was OK when you had a 315 MB drive, because the number of 512-byte blocks was not nearly as large as in a 3 TB drive in today's systems. Newer versions of Windows and Linux will transfer 4096-byte data blocks that match the native hard disk drive sector size. But during migrations, even new systems can have an issue.
There is also something called 512-byte sector emulation, where each 4K sector on the hard disk is remapped to eight 512-byte logical sectors. Reads and writes are then done in groups of eight 512-byte sectors.
When an older OS is installed or migrated, it may or may not align the first block of an eight-block group with the beginning of a 4K sector. This causes a misalignment of one block segment. As reads and writes are laid down on the disk, the misalignment of the logical sectors from the physical sectors means the eight 512-byte blocks now occupy two 4K sectors.
This forces the disk to perform an additional read and/or write across two physical 4K sectors. It has been documented that sector misalignment can cause a reduction in write performance of at least 30% on a 7200 RPM hard drive.
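Here is that arithmetic worked in a small Python sketch. The two start sectors are the classic cases: sector 63 from older Windows partitioning and sector 2048 from modern 1 MiB alignment; the 512e geometry is as described above.

```python
# On a 512e drive, each physical 4K sector holds eight 512-byte logical
# sectors, so a partition is aligned only when its byte offset is a
# multiple of 4096 (equivalently, its start sector is a multiple of 8).
LOGICAL = 512
PHYSICAL = 4096

def check_alignment(start_sector, io_bytes=4096):
    offset = start_sector * LOGICAL
    aligned = offset % PHYSICAL == 0
    # A misaligned 4K request straddles one extra physical sector.
    touched = (offset % PHYSICAL + io_bytes + PHYSICAL - 1) // PHYSICAL
    state = "aligned" if aligned else "MISALIGNED"
    print(f"start sector {start_sector}: {state}, "
          f"each 4K I/O touches {touched} physical sector(s)")

check_alignment(63)    # legacy partition start: two physical sectors per I/O
check_alignment(2048)  # 1 MiB-aligned start: one physical sector per I/O
```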
The issue is only magnified when other file systems are layered on top of this misalignment. When using a hypervisor like VMware or Hyper-V, the virtual image itself can be misaligned and cause even further performance degradation.
There are hundreds of articles and blogs written on how to check your disk alignment. A simple Google search for "disk sector alignment" will show you that this has been a very popular topic. Different applications will have different ways of checking, and possibly realigning, the sectors.
One application that can help you identify and fix these issues is the Paragon Alignment Tool. This tool is easy to use and will automatically determine if a drive's partitions are misaligned. If there is misalignment, the utility properly realigns the existing partitions, including boot partitions, to the 4K sector boundaries.
I came across this tool when looking for something to help N series customers who have misalignment issues in virtual systems. One of the biggest advantages I saw is that this tool can align partitions while the OS is running and does not require snapshots to be removed. It can also align multiple VMDKs within a single virtual machine.
For more information on this tool and alignment check out the Paragon Software Group website.
In the end, your alignment will affect how much disk space you have, how much you can dedupe and the overall performance of your storage system. It pays to check this before you start having issues, and if you are already seeing problems, I hope this helps.