If you're thinking of adopting RTC EE within your organization, you're likely already aware that it can be a daunting task. Whether you're evaluating RTC as a possible solution or looking to grow your existing RTC implementation from distributed to mainframe, there's a lot of work to be done.
The benefits of a single tool for all of your developers are obvious. Getting there is not. I'm not here to sell you a tool or a wistful dream; I'd prefer to talk facts. We've successfully implemented RTC EE, and here's how we did it:
First, the obvious truth: an adoption like this requires cohesion across three main areas of focus, of which technology is only one. Here we'll discuss just the technology side, staying high level to get the point across without drowning in detail. Let's dig in. Below, in no particular order, are the pieces that make up the puzzle:
- Importing your mainframe artifacts into RTC. This is far and away the easiest step. You'll need to organize your artifacts into streams and components, broken out in a way that makes sense in the context of RTC while still allowing an easy transition for your existing development staff. Our team has developed several utilities that wrap the IBM zImport tool to make this process quick and easy.
- Keeping the RTC SCM in sync with the mainframe. This step runs in parallel with everything else. Few shops can flip a switch and abandon all legacy tools and processes for the new and improved; there will always be an existing release in development, or in-flight changes made during the import process. That means keeping the mainframe datasets in sync with the RTC version control system, either temporarily while you onboard teams into RTC or indefinitely if your requirements call for it. Our team has developed utilities that leverage both build toolkit extensions and the plain Java client libraries (each for different cases) to provide an automated sync-up solution.
- Keeping your build and deploy options open. For some, importing the code and training developers to use RTC is enough. Build may be a sought-after dream, but one that simply isn't on the horizon. We've worked with customers that chose to take their RTC adoption one step at a time; a phased approach is often much better than an all-encompassing solution. In this case, your code is in RTC and your developers are happily working in their fancy new IDE, but you need a way to get back to the mainframe for build and deploy. You have a few options: you can piggyback on the existing RTC dependency-based build using a tricked-out translator that calls IEFBR14 (or something of the sort), or you can leverage one of our build toolkit plugins that allow for more customization.
- Creating system definitions for build. Whether you've decided to implement some or all of the RTC EE build capabilities, you'll need to create quite a few system definitions: data set definitions, language definitions, and translators. If you're going for the full solution, we recommend starting with the basics: compiling COBOL, ASM, Easytrieve, and PL/I, calling the necessary pre-compilers for DB2 or CICS, linking, and so on. When you get that far, give yourself a big pat on the back. Next comes the fun part: identifying all of the custom processes built for your environment and integrating them into RTC via system definitions. However you choose to find these, enlist the help of a seasoned RTC specialist; in-depth knowledge of RTC is critical, and you won't want to waste time puzzling out how RTC uses BPXWDYN to allocate datasets, or any other unknowns. For our part, we've created Ant scripts that use the antz tasks in the RTC client to build boilerplate for all of the basics mentioned above.
- Applying overrides at the versionable level. Your code is in RTC and your build steps have been re-created as translators, but you still need to bring in and persist individual overrides on the modules themselves. Whether it's options for a DB2 pre-compile, a custom preprocessor, or a link, you'll want them version-controlled in RTC and passed into your translators via variables. You may also have default options for each application (these can be passed into translators via build definitions). Applying these versionable properties requires careful planning and knowledge of the public RTC APIs: it isn't feasible to bring these values into RTC manually, and the zImport tool doesn't provide a mechanism to do so. We built a solution that uses the plain Java client libraries to automate this task.
- Integration with other products. If it ain't broke, don't fix it. Sometimes it just doesn't make sense to ditch a process that has worked successfully for the past 30+ years, and in that case we say leave it alone. There are many opportunities to extend RTC and call out to custom routines. The examples are endless, but one interesting case is a build integration with an existing build and deploy system: we leveraged an existing product, Serena ChangeMan, for build and deploy. Our solution called custom REXX routines via translators that use the CMAN XML Services to automate tasks such as check-in, stage, and build. This serves as a model for what can be done with any build and deploy process, whether it's another tool or in-house routines.
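To make the import step above concrete, here's a minimal sketch of the kind of planning that happens before zImport runs: deciding which RTC component each dataset's members should land in. The dataset names and the naming convention are invented for illustration; you would adapt this to your own standards.

```python
def component_for_dataset(dsn):
    """Map a dataset name like 'PROD.PAYROLL.COBOL' to an RTC component.

    Assumes the middle qualifier identifies the application and the last
    identifies the artifact type -- a hypothetical convention, not a rule.
    """
    qualifiers = dsn.split(".")
    if len(qualifiers) < 3:
        raise ValueError(f"unexpected dataset name: {dsn}")
    app, artifact_type = qualifiers[1], qualifiers[2]
    return f"{app}-{artifact_type.lower()}"

# Build an import plan: dataset -> target component.
datasets = ["PROD.PAYROLL.COBOL", "PROD.PAYROLL.COPYLIB", "PROD.BILLING.COBOL"]
plan = {dsn: component_for_dataset(dsn) for dsn in datasets}
for dsn, comp in sorted(plan.items()):
    print(f"{dsn} -> {comp}")
```

The point is to settle the stream/component layout as data you can review with the development teams before anything is imported.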
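The sync-up step can be reduced to a drift check: compare checksums of mainframe members against the copies RTC holds, and report which members differ or exist on only one side. The member contents below are stand-ins; a real solution would read the PDS (e.g. through the build toolkit) and the RTC workspace.

```python
import hashlib

def checksum(text):
    """Content fingerprint; any stable hash works for drift detection."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def out_of_sync(mainframe_members, rtc_members):
    """Return names of members that differ or exist on only one side."""
    drifted = []
    for name in sorted(set(mainframe_members) | set(rtc_members)):
        mf = mainframe_members.get(name)
        rtc = rtc_members.get(name)
        if mf is None or rtc is None or checksum(mf) != checksum(rtc):
            drifted.append(name)
    return drifted

mainframe = {"PAYROLL1": "MOVE A TO B.", "PAYROLL2": "ADD 1 TO X."}
rtc = {"PAYROLL1": "MOVE A TO B.", "PAYROLL2": "ADD 2 TO X."}
print(out_of_sync(mainframe, rtc))  # ['PAYROLL2']
```

Once you know what drifted, the automated piece is deciding which side wins and delivering or re-loading accordingly.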
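For the system definition step, generating boilerplate beats hand-entering dozens of nearly identical definitions. The sketch below shows the idea with a simplified, invented XML shape; it is not RTC's actual system definition schema, and in practice we drive the antz Ant tasks shipped with the RTC client instead.

```python
from xml.etree import ElementTree as ET

def language_definition(name, translators):
    """Emit a simplified language-definition stub listing its translators
    in call order. The element and attribute names are illustrative only."""
    langdef = ET.Element("languageDefinition", attrib={"name": name})
    for position, translator in enumerate(translators, start=1):
        ET.SubElement(langdef, "translator",
                      attrib={"name": translator, "order": str(position)})
    return ET.tostring(langdef, encoding="unicode")

# A typical COBOL/DB2/CICS member runs pre-compile, translate, compile, link.
xml = language_definition("COBOL-DB2-CICS",
                          ["DB2 Precompile", "CICS Translate",
                           "COBOL Compile", "Link Edit"])
print(xml)
```

Generating the basics this way leaves your specialist's time for the genuinely custom processes.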
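The override step boils down to a merge: application-level defaults from the build definition, overlaid with member-level properties versioned alongside the source. The property names and overrides format here are assumptions for illustration; in RTC these would be file properties maintained through the plain Java client libraries and referenced as variables in translators.

```python
# Application-wide defaults (hypothetical property names).
DEFAULTS = {"compile.opts": "RENT,LIST", "link.opts": "AMODE=31"}

# Member name -> only the properties that differ from the defaults.
OVERRIDES = {
    "PAYROLL1": {"compile.opts": "RENT,LIST,SSRANGE"},
    "PAYROLL2": {"link.opts": "AMODE=64"},
}

def translator_variables(member):
    """Merge application defaults with any member-level overrides."""
    merged = dict(DEFAULTS)
    merged.update(OVERRIDES.get(member, {}))
    return merged

print(translator_variables("PAYROLL1")["compile.opts"])  # RENT,LIST,SSRANGE
print(translator_variables("PAYROLL3"))  # no overrides: defaults only
```

Storing only the deltas keeps the overrides reviewable and makes it obvious which members deviate from the standard options.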
Along the way, we've built many utilities and solutions that allow us to migrate our customers to RTC cheaper, easier, and faster.
If you have any questions or would like to hear more about any one of the above tasks, leave a comment.
We'd be happy to elaborate further in a separate post, just tell us what you want to hear!