When we started developing a web client for editing, my big concern was whether this would even work. Would it be fast enough to compete with a 'native' editor like vi or emacs on Unix systems, or ISPF on z/OS?
My thoughts were that if it was sluggish, it would never be truly successful. No one wants to wait around to do basic things like editing a file.
I became hopeful when I saw Orion - an open source editor that can load very large files in a couple of seconds (probably faster now) and provides a good editing experience.
So, we built our own editor framework on top of Orion and created a separate navigation framework for browsing datasets on z/OS, the first file system we are playing with. We now have a 'resource explorer'.
Our 'resource explorer' consists of three major components (a sketch of the request flow follows the list):
Web Client: This is the user-facing interface, built on Dojo and Orion.
App Server: All requests for resources go to the app server, which we run on WebSphere Application Server Liberty 8.5. The app server also manages information about the currently logged-in user, such as credentials and settings.
Resource Server: The app server sends all requests for resources to the resource server. Our intent is that there could be many resource servers on a variety of platforms in the future, but right now we've got a z/OS Resource Server, which manages datasets (directories) and members (files), submits jobs (processes), and manages the results of submitted jobs.
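To make the flow concrete, here's a rough TypeScript sketch of the client's side of that arrangement. The endpoint, parameters, and field names are hypothetical stand-ins, not our actual API - the point is just that the browser only ever talks to the app server, which relays resource requests onward:

```typescript
// Hypothetical sketch of the tiers from the client's point of view. The
// browser talks only to the app server, which authenticates the user and
// relays the request to the appropriate resource server.

interface DatasetEntry {
  name: string;      // e.g. "USER01.SOURCE.COBOL"
  isMember: boolean; // member (flat file) vs. dataset (directory)
}

async function listDatasets(prefix: string): Promise<DatasetEntry[]> {
  const res = await fetch(
    `/appserver/resources/datasets?prefix=${encodeURIComponent(prefix)}`,
    { credentials: "include" } // app server tracks the logged-in user's session
  );
  if (!res.ok) throw new Error(`Dataset list failed: ${res.status}`);
  return res.json();
}
```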
Navigating through thousands of datasets (think of them as simple directories) on a z/OS server, and then through possibly tens of thousands of dataset members (think of them as flat files), isn't going to be fast if you take the simplistic approach of just downloading complete lists of datasets or members. And uploading a large file every time a change is made is going to be sluggish too - especially since traditional COBOL programmers like to put an entire program in one big file.
So we organized the datasets into a 'hierarchy' - which is fairly obvious, but hasn't been done in many of the current remote z/OS development environments out there. It looks something like this:
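To make that concrete, here's a minimal sketch (hypothetical code, not our implementation) of how dotted dataset names collapse into a navigable tree, with each dot-separated qualifier becoming one level:

```typescript
// Build a tree from dotted dataset names, so the explorer can show
// USER01 > PROJ > COBOL instead of one flat list of thousands of names.
// (Illustrative sketch; the names are hypothetical.)

interface DatasetNode {
  name: string;
  children: Map<string, DatasetNode>;
}

function buildHierarchy(datasetNames: string[]): DatasetNode {
  const root: DatasetNode = { name: "", children: new Map() };
  for (const fullName of datasetNames) {
    let current = root;
    for (const qualifier of fullName.split(".")) {
      if (!current.children.has(qualifier)) {
        current.children.set(qualifier, { name: qualifier, children: new Map() });
      }
      current = current.children.get(qualifier)!;
    }
  }
  return root;
}

// buildHierarchy(["USER01.PROJ.COBOL", "USER01.PROJ.JCL", "USER01.TEST"])
// yields USER01 with children PROJ (containing COBOL and JCL) and TEST.
```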
Next, we streamlined dataset requests so that only the datasets initially being displayed are requested, with the rest downloaded asynchronously in the background. The image that follows shows the network request for the first page of entries (0 to 19); subsequent requests for more items are farther down:
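In sketch form (the endpoint and paging parameters here are hypothetical), the idea is to await only the first page and let the rest trickle in:

```typescript
// Request just the first page for display, then pull the remaining pages
// asynchronously so the UI is responsive immediately. (Illustrative sketch.)

const PAGE_SIZE = 20;

async function fetchPage(offset: number): Promise<string[]> {
  const res = await fetch(`/appserver/resources/datasets?start=${offset}&count=${PAGE_SIZE}`);
  if (!res.ok) throw new Error(`Page fetch failed: ${res.status}`);
  return res.json();
}

async function loadDatasets(total: number, render: (names: string[]) => void) {
  // The first page (entries 0 to 19) is awaited so the display fills right away.
  render(await fetchPage(0));
  // Remaining pages load in the background; each renders as it arrives.
  for (let offset = PAGE_SIZE; offset < total; offset += PAGE_SIZE) {
    fetchPage(offset).then(render);
  }
}
```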
We will do the same thing for dataset members - grabbing just the first 20 or so entries to populate the display, then getting the rest in the background. Again - for people who build web applications this is fairly obvious, but many of the current remote z/OS development environments aren't optimized to break work into chunks and do things asynchronously to provide a nimble client.
Now - what about actually editing the file? It turns out that Orion can pull down big files fast enough that we haven't found the 'read' of the file to be too onerous. But when someone updates the file, we don't want to send the entire file back upstream every time. We've gone to quite a bit of trouble to make reading and writing files not only fast, but also reliable (hey - if you can't trust the basic read/write of a tool, you just won't use it).
Making Reading And Writing Reliable
We use the ETag mechanism built into HTTP so that we can ensure the files we are reading (and, more importantly, updating) are what we expect. When we read a dataset member, we get the timestamp for the file and send it along with the contents. Later, when an update occurs, we flow the timestamp back to the server, and before the update we check that the file we are updating really is what we think it is (so if someone changes it from another client, we can detect it, because the timestamp will have been updated). This makes editing the files safe, even in a multi-user environment. We flow timestamps for a bunch of other operations too - like touching files.
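In HTTP terms it looks roughly like this (a sketch with hypothetical endpoints; the ETag stands in for the member's timestamp): the client echoes the ETag back in an If-Match header, and the server refuses the write with 412 Precondition Failed if the member changed underneath us.

```typescript
// Read a member and remember its ETag; on save, send the ETag back via
// If-Match so a concurrent change is detected instead of overwritten.
// (Sketch - the endpoints are hypothetical.)

async function readMember(path: string): Promise<{ text: string; etag: string }> {
  const res = await fetch(`/appserver/resources/members/${path}`);
  if (!res.ok) throw new Error(`Read failed: ${res.status}`);
  return { text: await res.text(), etag: res.headers.get("ETag") ?? "" };
}

async function saveMember(path: string, text: string, etag: string): Promise<string> {
  const res = await fetch(`/appserver/resources/members/${path}`, {
    method: "PUT",
    headers: { "If-Match": etag }, // server compares against the current ETag
    body: text,
  });
  if (res.status === 412) {
    // Precondition Failed: someone else changed the member; don't clobber it.
    throw new Error("Member was modified elsewhere; reload before saving.");
  }
  if (!res.ok) throw new Error(`Save failed: ${res.status}`);
  return res.headers.get("ETag") ?? etag; // fresh ETag for the next save
}
```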
Making Update Fast
One thing that drives me nuts is making a one-line (or maybe even a one-character) change, and then waiting forever for the save because the file is remote. That doesn't happen with our technology. When you open a file to edit it, we pull it down and keep a copy of the original in memory. When you hit 'save', we diff the file being saved against that original copy, then send the resulting diff command to the resource server (via the app server), which runs the command. So if you change one line of text, all you send to the server is a one-line 'change' diff command. Super fast. The image that follows shows the network request, sending a one-line 'diff' command for the one line of text that was changed in the file being saved:
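Here's a minimal sketch of that save path. The code is hypothetical, and the ed-style 'c'/'a'/'d' commands stand in for whatever command format the resource server actually accepts; the point is that only the changed lines travel over the wire:

```typescript
// Diff the edited text against the original copy held in memory, and send
// only the changed line range as an ed-style command. A simple scan from
// both ends captures a single contiguous change; a real diff engine would
// produce minimal hunks for multiple changes. (Illustrative sketch.)

function makeDiffCommand(original: string, edited: string): string | null {
  const a = original.split("\n");
  const b = edited.split("\n");
  // Skip the common prefix.
  let start = 0;
  while (start < a.length && start < b.length && a[start] === b[start]) start++;
  if (start >= a.length && start >= b.length) return null; // no change at all
  // Skip the common suffix.
  let endA = a.length - 1;
  let endB = b.length - 1;
  while (endA > start && endB > start && a[endA] === b[endB]) { endA--; endB--; }
  if (endA < start) {
    // Pure insertion: append the new lines after line `start` (1-based).
    return `${start}a\n${b.slice(start, endB + 1).join("\n")}`;
  }
  const range = endA === start ? `${start + 1}` : `${start + 1},${endA + 1}`;
  if (endB < start) return `${range}d`; // pure deletion
  // Replace the changed range; a one-line edit yields a one-line 'c' command.
  return `${range}c\n${b.slice(start, endB + 1).join("\n")}`;
}

async function save(path: string, original: string, edited: string) {
  const cmd = makeDiffCommand(original, edited);
  if (cmd === null) return; // nothing changed, nothing to send
  await fetch(`/appserver/resources/members/${path}/diff`, { method: "POST", body: cmd });
}
```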
There's a bunch of other things I think are pretty cool about our 'resource explorer'. I plan to write about some of them soon, so stay tuned.