Comments (4)

1 mafe commented

I fully agree on the basics, but the picture can change once you look at the application layer (kernel extensions, I/O bandwidth, security, etc.). There are many reasons why running an application in a WPAR may not be workable, even though it would be more efficient.

2 brunotm commented

Has the hypervisor dispatch time changed? The last time I read about it, it was a 10 millisecond cycle.

3 tlp commented

The post states, "100 LPARs require the Hypervisor to switch between LPARs on shared CPU cores at a 1 milli-second sort of level with zero shared memory cache between LPARs, so a binary in one LPAR will push the same binary in a different LPAR out of the memory caches."

The memory efficiency might be improved for LPARs on systems using the new memory de-dup features of the POWER7 'C' models, right? Of course, you'll have to set up and manage the AMS/de-dup features properly.
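For reference, a minimal sketch of turning deduplication on from the HMC command line, assuming an AMS shared memory pool already exists. The managed-system name "p780-sys" is a placeholder, and the mem_dedup attribute is my recollection of the Active Memory Deduplication setup docs, so verify against your lshwres output:

    # Show the shared memory pool configuration, including its dedup state
    # ("p780-sys" is a placeholder managed-system name)
    lshwres -m p780-sys -r mempool

    # Enable Active Memory Deduplication on the pool
    chhwres -m p780-sys -r mempool -o s -a "mem_dedup=1"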

4 Allan Cano commented

In general I agree with the article; however, it fails to point out that one mistake in a global environment loses you a bucket of systems, while one mistake in an LPAR loses only one system.

And, as the first comment points out, application support is the biggest drawback to WPARs. We spent a week hacking up FlashCopy Manager to work with WPARs, and that is an IBM software product. Also, don't forget that you can hit a case where app A requires a patch that app B won't support, so you have to start shuffling WPARs and hope A can move to a higher-level bucket, or else hold off on the recommended updates.

While I like WPARs in concept (and we use a couple here), they can require a little more thought when planning your architecture.

As a side note: if you use XIV storage you could easily build out 100 LPARs in a day, but you're right that it took two days to write the script to do it.
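On that scripting point, most of the two days usually goes into the storage mapping and profile plumbing rather than the LPAR definitions themselves. A minimal sketch of the LPAR-creation half, driving the HMC's mksyscfg command (e.g. over ssh from a management host); the managed-system name, naming scheme, and resource values are illustrative assumptions, and the XIV volume mapping and VIOS client adapters are separate steps not shown:

    #!/bin/ksh
    # Bulk-create 100 shared-processor LPAR definitions on the HMC.
    # "p780-sys" and the lpar01..lpar100 names are placeholders;
    # memory values are in MB.
    i=1
    while [ $i -le 100 ]; do
      name=$(printf "lpar%02d" $i)
      attrs="name=$name,profile_name=default,lpar_env=aixlinux"
      attrs="$attrs,proc_mode=shared,sharing_mode=uncap"
      attrs="$attrs,min_proc_units=0.1,desired_proc_units=0.2,max_proc_units=2.0"
      attrs="$attrs,min_procs=1,desired_procs=2,max_procs=4"
      attrs="$attrs,min_mem=1024,desired_mem=4096,max_mem=8192"
      mksyscfg -r lpar -m p780-sys -i "$attrs"
      i=$((i + 1))
    done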