New tools for old code: how to tackle your legacy code problem
Recently I’ve been thinking a lot about change. No, not nickels and dimes stuck in the sofa cushions, or getting to the gym more frequently (although that’s clearly indicated). Software change, where we add some new features to the application and hope not to break the existing ones.
As we all know, working as a software engineer means creating, editing, changing, and (hopefully) improving a particular piece of software code. Now, I’ve worked on many different software projects in the years before I came to IBM Rational, but I remember one piece of code that I just never wanted to touch. It wasn’t just me, mind you; nobody understood the code, or knew exactly how it did what it did, or the algorithms and logic behind it. The mathematician who wrote the routine had retired, there were no lab notes left behind or comments in the code, and a small part of it was written in assembler! Many of the math transformations performed in the code relied on tricks of the particular microprocessor. Because of that, for years we all avoided that monolithic routine: we used it in our robotic controller programs, but never modified it. And we were successful, because we were shipping very complex manufacturing equipment that never got updated in the field, i.e., pre-Internet.
Years later I would become deeply steeped in this exact same software routine, as I became embroiled in my own personal software version of New England’s Perfect Storm. I had to modify that code on-site, halfway around the world, in an automotive factory that was still under construction. Sparks from overhead steel welding were falling on my hard hat and clothes; the backdrop was an active volcano that blew its top a year later, and a typhoon brewing off the Japanese islands that would soon knock out power to the area and destroy part of my hotel.
Now I’m not saying that this software routine was responsible for all this chaos, but when I think back to 1992, those trips to Japan and that software routine come immediately to mind, in equal proportions.
I don’t believe I’m alone in being averse to software changes. I think it is that way with all of us who build software for high-uptime production machinery, where downtime can cost $25,000 a minute. I’ve seen it in the offshore oil drilling and automotive manufacturing industries: the software simply cannot crash, and the business cannot wait for a daily, 5-minute / $125,000 reboot to clear an error condition.
Imagine, then, the concern we feel when we hear that our computer hardware platform has gone End of Life. We know that the replacement board will have substantially greater resources (which is mostly a good thing), but also higher clock rates, multiple cores, and many new devices. The timing of our software running on this hardware will be drastically different, and we will likely get race conditions producing wildly unpredictable behavior. This stress is only slightly less when we hear that a critical tool chain vendor has been bought, sold, or gone bankrupt. And of course, let’s not forget environmental concerns. Remember the RoHS (Restriction of Hazardous Substances, like lead) changeover a while ago, and Y2K before that?
When we face these issues with the certain reality that we WILL be modifying this mission-critical code, perhaps onto new, untried hardware, perhaps into a now-regulated environment with more stringent safety requirements, and perhaps connecting it to the Internet of Things, we realize we need a change. The change in this case is in how we do software development! How much we revise our Software Development Life Cycle (SDLC) has much to do with the amount of warning we’ve been given on hardware obsolescence, forthcoming safety and security mandates, and, frankly, the CxO’s opinion of the Software Engineering team: does the CxO see software development as a strategic component of innovation, or simply as a cost center? The answers to those questions and others will dictate how much of the SDLC can be up for modernization, and how much support there will be for adopting new methods.
New methods: now that’s an (over)loaded expression. Think Agile, extreme programming, daily deployment, Cloud, Internet of Things, etc. All great ideas; however, if you’re tasked with effecting some element of SDLC change in your firm, let’s start with a common idea, an easy one that should be part of everyone’s toolkit. I’ve already alluded to it: I had to refactor that source code on the factory floor with sparks falling all around me.
Now, refactoring software is not a new idea; it goes back at least to the ’70s. However, I’d like to suggest informed code refactoring with the aid of UML Object Model Diagrams (OMDs). Remember that UML is the Unified Modeling Language: basically, a way to graphically describe a software system. The reader can refer here for a quick refresher and tutorial. [htt
This would have suited me well because, rather than juggle a dozen different methods and attributes from memory, I could have viewed a UML Object Model Diagram (OMD) and quickly seen the methods and attributes for a given class or file. Note that one can also see what other classes a method may rely on, and what that association is. I can develop a family of OMDs, call that a baseline, and then begin studying the baseline to see 1) how the code is put together, 2) how I can re-architect for the same functional effect, and finally 3) how I can add new functionality. Now, if this sounds like a lot of work, remember that IBM Rational Rhapsody can aid you. That includes graphical display of code structure via OMDs, automatic creation of flowcharts so you can deduce the logic of an operation, and over 200 source code and modeling checks that automate much of the error-prone and mundane work, for example confirming argument lists or checking for comments!
It could be that getting some of that refactoring done and establishing a baseline is all the change your project schedule will allow (and that might be a success right there!). However, if one of the stated needs of the project is to upgrade the software by developing new behaviors and capabilities, these can be logically designed, modeled, and tested using Rhapsody.
The next step(s) do get messy: how much modernization and refactoring can you do, in the amount of time that you have, at a given level of risk? And what is the new target environment like?
Remember that Rhapsody can reliably and repeatably generate source code [C, C++, Java] from your incremental UML models; a full refactor could include reverse engineering the source code into a Rhapsody project, then forward-generating that source code into a compiled application. Further, if you want to add the source code from the model that represents those new capabilities, that can be linked into the executable as well. Remember that you can even link in existing libraries without reverse engineering them; you add them by reference. The output from Rhapsody is an executable program that runs on your product, whether it’s a glucose monitor or a 200-ton construction crane.
At the point that you have Rhapsody generating source code for your updates, and compiling, building, and running your application, you are poised to move to a new level of SDLC. That can include automated testing, integration with third-party environments like MATLAB / Simulink, parametric equation solving, and overall integration or porting onto a new Real-Time Operating System or other third-party environment. And of course, all of that can be on top of the safety-critical software environment that you can leverage from Rhapsody.
So, when you look to the future and think about what capabilities your team might want or need, know that Rhapsody is there to help you. When you are ready, your first step is to reverse engineer your source code to get a view of its structure for refactoring and subsequent extension.
Don’t wait until the typhoon hits, like I did.
Thomas Hall, Boston, July 22, 2014
Note... Thomas is presenting on this topic on July 30, 2014. You can sign up for the presentation at http