
Comments (5)

1. gSweely commented:

Wouldn't a great use case for the extra interfaces be to separate the back-end storage fabric links from the front-end host connections? This would potentially increase the available interface bandwidth and allow specialized tuning of the back-end and front-end fabrics.

2. orbist commented:

gSweely,

You could do that, but our ports are quite happy being both target and initiator, and since FC ports are full duplex, we can easily get the maximum 800 MB/s per port for both reads and writes.

The main advantage for boosting bandwidth is to use the extra 4 ports for node-to-node comms only. This doubles the write bandwidth per I/O group from 3.2 to 6.4 GB/s, because in the write case we have to write to the partner node and write to disk, so the same ports share the "out" half of the duplex link. That's why you normally get 1.6 GB/s of actual write-miss workload per node.
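The figures above can be checked with a little back-of-envelope arithmetic. This sketch assumes 800 MB/s of usable bandwidth per port per direction and 4 ports per node, as stated in the comment; the constant names are my own, not anything from the product:

```python
# Back-of-envelope check of the write-bandwidth figures quoted above.
PORT_MBPS = 800   # usable MB/s per FC port, per direction (full duplex)
PORTS = 4         # standard port count per node
NODES = 2         # nodes per I/O group

# Shared ports: on a write miss, every host write leaves the node twice
# (once to the partner node, once to disk), so the effective write-miss
# throughput is half the outbound port bandwidth.
shared_per_node = PORTS * PORT_MBPS / 2 / 1000      # GB/s per node

# With 4 extra ports dedicated to node-to-node traffic, host writes get
# the full outbound bandwidth of the original 4 ports.
dedicated_per_node = PORTS * PORT_MBPS / 1000       # GB/s per node

print(shared_per_node, shared_per_node * NODES)        # 1.6 GB/s/node, 3.2 GB/s per I/O group
print(dedicated_per_node, dedicated_per_node * NODES)  # 3.2 GB/s/node, 6.4 GB/s per I/O group
```

This reproduces the 3.2 to 6.4 GB/s per-I/O-group jump and the 1.6 GB/s per-node write-miss figure from the comment.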

3. Kirson commented:

Very interesting indeed. I will try it in our lab soon, adding the additional HBA + CPU. I just wonder how the GUI will handle it...

I know some people from the RTC group; apparently the additional CPU will do ONLY compression (CPU affinity, I guess). Very interesting.

4. al_from_indiana commented:

Barry,

Is there an RPQ number associated with this, or can existing customers get these upgrades via the normal 'MES' route?

Al

5. orbist commented:

Al, yes, will find them...