Getting Ready for RSDC
JoePluta
It's been an interesting few weeks as Chris Laffra and I have been working on the scheduler application for RSDC. To give you a quick overview, I'm running the EGL part of the application on a Windows workstation (in fact, I'm running it inside RBD's test environment). That in turn talks to my IBM iSeries model 270 and exposes the RPG business logic as web services. The front end (written in EGL Rich UI) that consumes those web services is being served either through that same RBD instance or from a public PHP server somewhere in Norway.
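If you're curious what consuming that generated WSDL looks like from outside of EGL, any SOAP client will do. Here's a minimal sketch in Python using the zeep library; the endpoint URL and the getSessions operation are made-up placeholders rather than the real scheduler service, but the shape of the call is the same:

    from zeep import Client

    # Hypothetical WSDL endpoint; the real one is whatever RBD generates
    # for the RPG-backed scheduler service.
    client = Client("http://scheduler.example.com/SchedulerService?wsdl")

    # Hypothetical operation name; the actual operations map to the RPG
    # business logic exposed through the WSDL.
    sessions = client.service.getSessions(track="RPG")
    for session in sessions:
        print(session)

The Rich UI front end is doing essentially the same call, just through EGL's generated service bindings instead of hand-written client code.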
So as it stands, you could be using a browser in South America to talk to a server in Chicago that talks to an iSeries in the same room, or you could be using an iPhone in Japan to talk to a PHP server in Norway that talks to that same iSeries in Chicago. And all of this without Chris or me having to write a single line of Java code or do much of anything except generate the WSDL using RBD (me) and consume it (him). And all with excellent speed. And you should see it when the whole thing is on a single LAN!
Anyway, the really interesting bit happened last week. The iSeries and its successors, the System i and now the IBM i, are incredibly reliable machines. They're very good about handling RAID disks, and they'll also tell you about any problems they're having. I happen to have two servers: an older workhorse model 270 that I own and use for day-to-day application serving, and a newer development box that I lease, getting a new machine every year or two to keep up with the latest technologies.
Well, the production machine, where I was running the application, started telling me it had a pending disk drive error. I love that: not only does it support RAID, but it also tells you when one of the drives is acting up, even before it fails, so that you can be ready to replace it. And while replacing a drive is easy enough, rebuilding a RAID set takes time. Since we were getting close to the conference and actually had the application up for live testing, I didn't really want to bring the machine down for the couple of hours it would take to rebuild the RAID set.
So I made an executive decision. I brought down the WAS server in RBD, did a save/restore of the RSDC library from the production i machine (the model 270) to the development i machine (the model 520), changed the server name on the Linkage parts, did a quick regen of the app, and restarted the WAS server. All of this took about five minutes, and nobody was the wiser. And reflecting on this, since I use a hardcoded HOSTS table on the workstation, I could have simply changed that entry to point to the model 520 and saved myself two of those five minutes.
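Just to show how small that change would have been (the addresses and host name here are made up, not my real ones), the whole switch comes down to one line in the workstation's HOSTS file:

    # Before: scheduler host name resolving to the production 270
    192.168.1.10    rsdchost

    # After: same name, now resolving to the development 520
    192.168.1.20    rsdchost

Since the Linkage parts reference the server by name, repointing that name would have covered the Linkage change and the regen; the save/restore of the library has to happen either way.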
The point of all this? Well, it seems that the architecture is doing exactly what it's supposed to do. Chris was able to do initial testing without even needing the back end. Then, once the back end was up, it was easy to run the front end simultaneously on two different machines. And as for the back end, failing over to a different machine is as easy as changing an address in the HOSTS table. Chris and I have been able to concentrate on features rather than infrastructure, and that really is the promise of EGL.