
Comments (7)

1 localhost commented

Phew!... long post... small font... wish you would stop prevaricating Barry and just get into the nitty gritty! :)

Time to digest that little lot!

2 localhost commented

Looking forward to some details on VDM. I have not actually seen that in action yet.

I am curious if it can be used on top of the striped vdisks we normally use, which would mean that SVC will effectively have RAID 10 capabilities.

3 localhost commented

So a 4k write with a 64k grain size would allocate 64k, use 4k, and make 60k available for future contiguous writes? What happens if you run a defrag on the server and it deletes and rewrites a bunch of fragments that are currently the sole occupants of grains?
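The grain arithmetic in the question can be sketched in a few lines. This is purely illustrative (it models grain allocation generically, not SVC's actual internal directory structures), with a 64 KiB grain size assumed:

```python
# Illustrative model of thin-provisioned grain allocation.
# Assumption: a write touching any part of a grain allocates the whole grain.
GRAIN = 64 * 1024  # 64 KiB grain size

def grains_allocated(writes, grain=GRAIN):
    """Return the set of grain indices touched by (offset, length) writes."""
    allocated = set()
    for offset, length in writes:
        first = offset // grain
        last = (offset + length - 1) // grain
        allocated.update(range(first, last + 1))
    return allocated

# A single 4 KiB write at offset 0 allocates one whole 64 KiB grain:
used = grains_allocated([(0, 4096)])
print(len(used) * GRAIN)   # 65536 bytes allocated
print(GRAIN - 4096)        # 61440 bytes (60 KiB) left in the grain for
                           # future contiguous writes
```

Note that a write straddling a grain boundary allocates both grains, which is why unaligned I/O can inflate allocated capacity on any thin-provisioned system.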

4 localhost commented

OSG,

Yes, your statement is correct. As for defrag: use a decent file system? No, seriously though, at present there is no SCSI standard that allows block deletes to be notified to a block system like SVC, so once allocated, a grain exists. We have some patents and ideas covering this area, so we are investigating various angles.

Barry

5 localhost commented

The standards bodies are currently defining proposals for SCSI "release block" and other thin provisioning-related atomics - hopefully IBM and SVC are participating in these... especially since many of the database vendors and file system developers are already aligned.

Better that you guys join the team that's already working the issue than blaze yet another proprietary trail.

You know how to contact me if you'd like pointers into the existing efforts - the more of us that get behind them, the sooner we get a solution that works for everyone.

TSA

6 localhost commented

Barry, any thoughts on performance considerations when using SEVs? Any general observations on how much overhead there is?

7 localhost commented

We're working on a white paper that goes into some of the details, and there will be an update to the Best Practices Performance Redbook later this year.

It's a difficult question, given that there are a lot of 'it depends'...

I plan to devote a couple of posts to this in the not too distant future.

There is more work for the SVC nodes and disks to do, with directory misses causing additional I/O requests to the disks. My main recommendation at the moment would be to take care and benchmark the performance of an SEV implementation when you are provisioning for a performance-critical environment. (This goes for any thin-provisioned solution.)

I'll go into more detail on some of our benchmark workloads in the future posts I've mentioned, once we get this white paper completed.

Sorry it's a bit vague, but this really is very specific to the applications, data layout and workload, so the best bet is to plan and test new environments and verify what works best for your application while maintaining the required response times and throughput.