
Comments (6)

1 kperrier commented

Wow, this article is very timely for things that are going on here where I work. Thanks!

2 avandewerdt commented

Great post Chris. Love the parm name; how often do you get to work with ghosts?

3 aldridged commented

Excellent article Chris. Some good pointers and a new attribute to use.

4 brian_s commented

Great post Chris. I hadn't heard about the ghostdev attribute before. It will definitely come in handy.

5 Jame5.H commented

So are the same WWPNs preserved for the primary/DR partition? My biggest issue was creating the NPIV devices on the DR hardware from a replicated rootvg, which generated new device WWPNs that had to be zoned separately to all of the storage devices.

I'm in the process of deploying our first AIX servers for our organisation for TSM v6.3, and essentially we're doing the same thing on AIX 7.1.

Redundant VIOS at each site with redundant SAN per VIOS; all data LUNs, including rootvg, are SAN attached using SVC vdisks, with synchronous Metro Mirroring to DR.

Our test so far has included an AIX 7.1 client LPAR running TSM v6.3.3.0 with replicated SAN-attached VSCSI for rootvg, NPIV VFC attached data LUNs, NPIV VFC attached 3592 libraries (22 drives in total), plus an NPIV VFC attached ProtecTIER TS7650G (16 virtual LTO drives). Using SDDPCM and IBM Atape control path/data path failover, I can shut down the TSM server, break the replication and power on the DR partition without losing any data in the DB2 database, or in a file storage pool that had not yet been migrated to tape, and it all comes online at the DR site in 5 minutes without the need to restore a TSM DB or an AIX operating system from backups.

I also tested changes to the client at the DR site and replicating back, and it all works OK. I didn't know about ghostdev, but the rootvg is presented via the two VIOS as VSCSI (not NPIV VFC) and had no issues booting. I did notice that the hdisk count increased, but all the VGs sorted themselves out fine. Stale devices came back online with their original device IDs once I failed back to the prod site.
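For reference, this is roughly how I confirm the client virtual FC WWPNs that need zoning on the DR LPAR (adapter names here are just examples; adjust for your own config):

    # list the FC adapters seen by the client LPAR
    lsdev -Cc adapter | grep fcs
    # show the WWPN ("Network Address") for a given adapter, e.g. fcs0
    lscfg -vpl fcs0 | grep "Network Address"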

6 cggibbo commented

We generate new WWPNs at the DR site.

Thanks for sharing, James. Very interesting setup and I'm glad it's all working OK for you.
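If you want to try ghostdev on your setup, a minimal sketch of checking and enabling the sys0 attribute (it takes effect at the next boot of the LPAR; test on your own TL first):

    # check the current setting of the ghostdev tunable
    lsattr -El sys0 -a ghostdev
    # enable it so AIX rebuilds the customised ODM device configuration
    # when it detects the LPAR has booted on different hardware
    chdev -l sys0 -a ghostdev=1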
