Comments (6)

1 cbartlett commented

Hi there!

I'm testing SSP and excited about the possibilities. One thing I would like to see, or know how to do, is stripe an LU across more than one PV.

We balance the load on our CLARiiON subsystem by assigning a LUN from each service processor and using LVM to stripe. This works very well for us.

In a single-VIOS environment I use LVM on the VIO server to create striped LVs that I then map to the client as vscsi devices, giving them an hdisk that's already striped. That's nice in single-VIOS environments, as it drastically reduces the number of PVs I manage.

In our dual-VIOS environment I map the PVs to the client and use LVM to stripe there.

In the case of SSPs I would like the ability to stripe the LU when it is created. As a workaround I could create two SSPs, assign storage from the separate storage processors to their respective SSP, then present two LUs to the vhost and stripe as I normally would in a dual-VIOS client.

Sorry if that's confusing.

I really enjoy your blog.

-Caleb
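The single-VIOS approach described above might look something like the following sketch. The volume group, LV, hdisk, and vhost names are hypothetical placeholders, and the striped LV is created from the root (oem_setup_env) shell since the restricted padmin `mklv` doesn't expose striping options:

```shell
# --- As root on the VIO server (oem_setup_env) ---
# Put one LUN from each CLARiiON service processor into a VG
mkvg -y clientvg hdisk2 hdisk3

# Create an LV striped across both PVs (64K stripe size, 512 LPs)
mklv -y db01_lv -S 64K clientvg 512 hdisk2 hdisk3

# --- Back in the padmin restricted shell ---
# Map the striped LV to the client as a single vscsi backing device
mkvdev -vdev db01_lv -vadapter vhost0 -dev db01_vtd
```

The client then sees one hdisk whose I/O is already spread across both service processors, which is what keeps the managed-PV count down.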

2 AnthonyEnglish commented

Hi Caleb,

Generally I leave any striping to the hardware. Can't your CLARiiON do that? The difficulty with software striping is that you're assigning a couple of LUNs to the VIO server, then putting them in a volume group on the VIOS so that you can create logical volumes striped across the two LUNs. Apart from the overhead of the extra LVM layer (probably not significant, since the AIX LVM is very mature), there's the sheer complexity of it, which seems to me to defeat the benefit. I also find it quite difficult to scale.

I would look to the CLARiiON to provide the hardware redundancy (including striping, if that's possible), and then assign the whole SSP to both VIOS. That's a better approach, I think, than treating each SSP as a dedicated device for one VIO server and then having to manage the striping or redundancy yourself. It seems to me that the whole point of a cluster is to save you all of that management work.

Thanks for your encouraging words, too. We AIX guys have to stick together!

Anthony
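For comparison, carving an LU out of a shared storage pool is a one-liner from the padmin shell. This is a rough sketch; the cluster, pool, LU, and vhost names are made up for illustration:

```shell
# Create a 100 GB logical unit in the shared storage pool and
# map it straight to the client's virtual SCSI adapter
mkbdsp -clustername sspcluster -sp sspool 100G -bd lu_db01 -vadapter vhost0
```

The striping (or lack of it) underneath the LU is whatever the pool's backing LUNs provide, which is why pushing the striping down into the array keeps this layer simple.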

3 cbartlett commented

Hey Anthony,

I completely agree. The only reason we stripe is to keep the load on the service processors as even as possible (okay, it's really to keep the SAN guys happy). It appears that CLARiiON arrays don't do their own SP load balancing on the back end, or, more likely, we decided not to pay for that feature. Many of my partitions are very heavy hitters with large amounts of IOPS, so I'm left with the additional complexity of making sure we hit the SPs as evenly as possible. PowerPath is active/passive with the CLARiiON.

There's a lot to be said for simplicity and stability. The larger the environments get, the fewer moving parts I want. So it sounds like something to discuss with the storage team moving forward, especially when running a one-man show.

-Caleb

4 AnthonyEnglish commented

I wonder about the scalability of that solution. If you have to map each stripe onto a different Service Processor, it makes it difficult to grow. I prefer not to use a software solution to make up for hardware limitations, but you have to work with the equipment you've got.

Is the CLARiiON really maxing out? I'm always happy to push any performance bottlenecks upstream and leave them to sort it out. I suppose I'm taking a siloed approach: "if the SAN goes down, it's not my problem"... unless I'm the SAN guy, of course!

5 cbartlett commented

We can dynamically increase LUN sizes to meet needs, so growth isn't much of an issue. Adding space is simply a matter of telling the SAN guy which LUN (or LUN pair) to grow and by how much. Then a 'chvg -g' after the space is added, a chfs, and I'm done. No more running cfgmgr on the VIO server (or multiple VIO servers) to create vscsi devices, running cfgmgr on the receiving LPAR, and then running extendvg just to add PVs to increase the capacity of a VG. I just grow the LUNs I've got to meet the need.

I would much rather have one LUN and be done, but I'm not complaining when I can support a large database server with a small number of LUNs (as few as four, including the OS). It's like two steps forward and half a step back. If it weren't for balancing SP load, I would have single-LUN VGs and/or SSPs. I can go into that more... I have my own style of doing things and forget that other people might not be in tune with me. But feel free to email or IM caleb.bartlett at gmail.

-Caleb
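The grow procedure described above boils down to two commands on the client LPAR once the array-side resize is done. The VG and filesystem names here are placeholders:

```shell
# After the SAN team grows the LUN(s) backing datavg:
chvg -g datavg              # rescan the VG's disks and pick up the new PV size
chfs -a size=+10G /data     # extend the filesystem into the reclaimed space
```

Compare that with the old path of `cfgmgr` on each VIOS, `mkvdev` for the new vscsi device, `cfgmgr` on the client, and `extendvg`; the dynamic-resize route touches the client only.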

6 AnthonyEnglish commented

The dynamic increase of a LUN is a great feature, so I quite understand where you're coming from.

By the way, I couldn't get through to that gmail address. Perhaps you could contact anthonyengl at gmail.