
Comments (28)

16 dszubert commented Permalink

Hi Barry, I found this comparison table on the site of an IBM distributor (I believe).

http://content.sdgroup.eu.com/isi/ibm/2012_11_06_IBM_V3700_comparison.htm

Are the first two rows of the table correct?

17 orbist commented Permalink

Yes and No.

1st line - the V3700 is licensed as machine code rather than software. It's a subtle licensing difference, so all references to the SVC software in the V3700 are to "machine code".

2nd line - we only support dual-canister mode.

3rd line - we don't yet support SAS host attach.

18 spriva commented Permalink

Hi Barry,
It looks like a great entry-level machine. However, for a customer who needs to mix SAS and LFF NL-SAS drives, 4 expansion enclosures is a little bit too small. Is support for more than 4 enclosures planned, and/or scale-out like in the V7000?
Thanks
Shai

19 Steven_Avnet commented Permalink

Barry, does the V3700 support Direct Access for Microsoft and/or VMware-based machines?

20 JoséMiguel commented Permalink

Barry, I guess it's been some time since the release of the V3700 and you posted this.

I received shipment of my V3700 a few weeks ago, and today I got down to benchmarking it. I compared this unit against a few different scenarios using HDTach.

My unit is the simple off-the-shelf V3700 with 8 x 600GB 10k rpm SAS 2.5" disks. The two comparison scenarios are local attached storage (146GB 10k rpm SAS) and an Oracle 7310 unit.

The SAS disks can yield upwards of 150MB/s throughput. The Oracle unit gives about 90MB/s. The V3700 just breaks 50MB/s. This is awful! 50MB/s for this thing is hardly anything to brag about. If you buy the off-the-shelf unit, all you can expect of it is a decent place to throw backups into. I was expecting to dedicate this thing to an SQL server by presenting the disks over iSCSI.

Is this consistent with your findings? Do you have any tips for getting this throughput up into the 100s?

Thank you!

21 orbist commented Permalink

Jose,

Sounds like something is misconfigured. 50MB/s is woeful, as you say; a single array should be capable of 500MB/s+.

It sounds like you are using iSCSI at 1Gbit? If so, maybe it's misconfigured. Have you set MTU 9000, for example? Jumbo frames are much more efficient. Are you using all 4 ports into the switch? Otherwise you will cap out at ~100MB/s per port on reads, and ~80MB/s on writes.

Definitely something wrong with the setup.
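[Editor's note] The per-port reasoning above can be sketched as simple arithmetic. This is only an illustration of the figures quoted in the comment (~100MB/s read and ~80MB/s write per 1Gbit port); the numbers are the commenter's rules of thumb, not vendor specifications:

```python
# Rough arithmetic behind the per-port caps mentioned above.
# The ~100 MB/s read / ~80 MB/s write per-port figures come from the
# comment itself, not from an official specification.

GBIT_LINE_RATE_MB_S = 1000 / 8  # 1 Gbit/s link = 125 MB/s raw line rate

def aggregate_cap(ports, per_port_mb_s):
    """Best-case throughput if I/O is spread evenly across all ports."""
    return ports * per_port_mb_s

read_cap = aggregate_cap(4, 100)         # ~400 MB/s with all 4 ports driven
write_cap = aggregate_cap(4, 80)         # ~320 MB/s with all 4 ports driven
single_port_read = aggregate_cap(1, 100) # ~100 MB/s on a single port

print(read_cap, write_cap, single_port_read)  # 400 320 100
```

The point of the sketch: an observed 50MB/s is below even the single-port practical cap, so the bottleneck is configuration or protocol overhead, not the 1Gbit links themselves.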

22 orbist commented Permalink

Spriva,

I can't discuss future enhancements on this forum, sorry.

23 orbist commented Permalink

Steven,

I understand it does support direct attach - you should check the SSIC website for details of the support matrix.

24 JoséFerreira commented Permalink

Hi, back to the question about how it is configured: I have gone through and configured this in various ways, following recommendations.

It is a simple 2-port iSCSI setup (1 port from each controller to a switch configured with jumbo frames). The NIC on the server has been updated with the latest drivers, following Microsoft's initiator recommendations as well as IBM's Redbooks recommendations. This is the simplest scenario you can find for this config.

The network would limit this connection to 1Gbps, or 0.125GB/s. I'm seeing 50MB/s in my HDTach testing, and even less with copies (though I won't read too much into those latter numbers).

You say I should see 500MB/s (though I think you meant if I had an FC configuration or 10Gbps iSCSI).

So that's where I'm at. I wish someone would benchmark this configuration to see if the results are consistent with what I have found.
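[Editor's note] The bandwidth arithmetic in this comment can be made explicit. A minimal sketch, assuming a 1Gbit/s link and roughly 10% TCP/iSCSI protocol overhead (the overhead figure is an assumption for illustration, not a measured value):

```python
# Sanity check on the numbers above: 1 Gbit/s links vs. the ~50 MB/s
# HDTach result. The ~10% protocol-overhead figure is an assumption.

LINK_GBPS = 1.0
line_rate_mb_s = LINK_GBPS * 1000 / 8    # 125 MB/s raw line rate
practical_mb_s = line_rate_mb_s * 0.9    # ~112 MB/s after assumed overhead
observed_mb_s = 50.0

# Fraction of a single link's raw bandwidth actually being used.
utilisation = observed_mb_s / line_rate_mb_s

print(f"{line_rate_mb_s:.0f} MB/s raw, ~{practical_mb_s:.0f} MB/s practical, "
      f"observed {observed_mb_s:.0f} MB/s ({utilisation:.0%} of one link)")
```

Note also that with one port per controller, a given volume is typically served by one controller at a time, so even a well-tuned setup here would top out near a single link's practical rate (~110MB/s), well short of the 500MB/s array figure quoted earlier.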

25 wanghh commented Permalink

Barry, as you said on Nov 12 2012, 3 of the Mini-HD form factor SAS ports are disabled in the V3700. Are they enabled now?

26 orbist commented Permalink

Jose,

Sure, what I meant was that a single array can do 500MB/s, so any limit below that is protocol related. Look out for the latest PTFs on 6.4.1, coming soon, which do boost some 1Gbit iSCSI performance.

Wanghh,

The SAS "host attach" support is not available on any 6.4.1 code levels ... (can't disclose more)!

27 remaho commented Permalink

I can observe a problem with performance too. Sequential read has a limit of ~280MB/s, write ~220MB/s (single-user config). My old DS4800 can do ~360/250MB/s on the same infrastructure (4Gb FC), even while shared with 6 VMware hosts / 30 VMs running. My tests are with CrystalDiskMark, a 4Gb Brocade FC SAN, and one VMware test VM. Tested configs were RAID0, RAID10, and RAID5 with 1-16 disks. The read limit of 280MB/s is reached with just 2-3 disks. Something is wrong with the V3700 ....

28 1BD9_Florian_Seefried commented Permalink

Barry,

We started to recommend the Storwize V3700 to our customers, instead of the DS3500. However, I miss the feature of expanding a disk pool by adding a single disk. This feature is pretty important to us for small environments. According to IBM Support it is not possible to do that.

Is this correct? If yes, will this feature be implemented in the future?

Best regards,

Florian