
Three reasons why you shouldn’t upgrade servers


I really hate upgrading servers. In order to upgrade, the server usually has to stop serving for a little while, and as the name “server” implies, this isn’t really desirable. In addition, there’s often too much art and not enough science in an upgrade. “Oh dear,” you hear someone say at three in the morning. “The test machine didn’t do that. We’re going to have to change the plan a little.”

[Photo courtesy of stofiska, reproduced with permission]
Upgrading servers is painful. Plus, what you end up with, after a few upgrades, is a server that you probably don’t know how to rebuild if needed — after all, it just grew that way over time! One of my personal philosophies is to avoid doing things that are painful and have poor results (which is incidentally also why I rarely play sports).

So, here are three reasons why you can often avoid upgrading servers in the cloud:

1. Virtual machines are disposable in the cloud.

Yes, just like paper plates, Styrofoam cups and plastic forks, except without the environmental consequences. You snap your fingers, and one appears. You give it the “thumbs down” gesture, and it disappears back into the dust from whence it came. Your data needs to remain persistent, but the computing power (CPU and RAM) can be created and destroyed as needed.

2. There’s always a spare machine around when you need one in the cloud.

Most clouds have a characteristic called “rapid elasticity,” meaning that you can scale your usage up or down almost instantly and there will be resources available. This means that you can “float” additional virtual machines for a while. Plus, most cloud services are billed in short time intervals, such as hourly. Even if your budget is very tight, you can probably afford to run two copies of your application for a few hours or days.
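To put a rough number on that claim, here is a back-of-the-envelope cost calculation (the hourly rate is an assumed figure for illustration only; check your provider's actual pricing):

```python
# Rough cost of "floating" a second copy of an application during an upgrade.
# The hourly rate below is an assumption for illustration, not a real price.
HOURLY_RATE = 0.10   # dollars per VM-hour (assumed)
EXTRA_VMS = 1        # one duplicate server running alongside production
HOURS = 48           # a two-day overlap window

extra_cost = HOURLY_RATE * EXTRA_VMS * HOURS
print(f"${extra_cost:.2f}")  # → $4.80
```

Even a generous multi-day cutover window amounts to pocket change compared to the cost of a botched three-in-the-morning upgrade.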

3. You can automate creation and deletion of these disposable, spare machines in the cloud.

Invest some time creating an automated server build script that uses the cloud's Application Programming Interface (API) to provision a server, install the latest "production-ready" version of your application and run some tests against it. This is probably just an extension of the automated build scripts you already have.
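Such a build script might look like the sketch below. The `CloudAPI` class here is a hypothetical stand-in so the example is self-contained; in practice you would replace its methods with calls to your provider's real SDK or CLI:

```python
# Sketch of an automated server build: provision, install, smoke-test.
# CloudAPI is a hypothetical stand-in, not a real library; swap in your
# cloud provider's actual SDK calls.

class CloudAPI:
    """Stand-in cloud client, used only to make the sketch runnable."""

    def provision(self, image, size):
        # A real client would make an API call and wait for the VM to boot.
        return {"id": "vm-123", "image": image, "size": size, "commands": []}

    def run(self, server, command):
        # A real script would SSH in or use a configuration-management tool.
        server["commands"].append(command)
        return 0  # exit code of the remote command


def build_server(cloud, app_version):
    """Provision a fresh server with a known-good, tested configuration."""
    server = cloud.provision(image="ubuntu-22.04", size="small")
    cloud.run(server, f"install-app --version {app_version}")
    status = cloud.run(server, "smoke-test")
    if status != 0:
        raise RuntimeError("new server failed its smoke tests")
    return server


server = build_server(CloudAPI(), app_version="1.4.2")
print(server["id"])
```

The key property is that the script, not a human at a keyboard, is the record of how the server was built, so the result is reproducible every time.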

With the ingredients above, the next time you need to upgrade your application you can simply create a new instance of the application. Use your build script to provision a new server that has the correct OS configurations, middleware software and the latest version of your application. Test the new server, then substitute the new server for the old one. If the new one doesn’t work, put the old one back into place; if the new one works, toss the old server in the virtual landfill. You get a fresh server each time, and you know exactly how it was created and that there are no surprises lurking there, waiting until three a.m. on the next upgrade cycle.
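The substitute-then-roll-back flow above can be sketched as follows. The health check and traffic-switch functions are illustrative placeholders, not a real load-balancer API:

```python
# Blue-green style cutover: point traffic at the new server only if it is
# healthy; otherwise restore the old one. All functions are placeholders.

def healthy(server):
    # Placeholder; in practice you would hit a /health endpoint.
    return server.get("ok", False)


def cut_over(old, new, route_traffic_to):
    """Return (server left in service, whether the old one can be destroyed)."""
    route_traffic_to(new)
    if healthy(new):
        return new, True       # new server works: old one goes to the landfill
    route_traffic_to(old)      # roll back to the known-good server
    return old, False


live = []
serve = lambda s: (live.clear(), live.append(s))

old = {"name": "old", "ok": True}
good_new = {"name": "new", "ok": True}
bad_new = {"name": "new", "ok": False}

print(cut_over(old, good_new, serve)[0]["name"])  # → new
print(cut_over(old, bad_new, serve)[0]["name"])   # → old
```

Because the old server is untouched until the new one proves itself, rollback is just switching traffic back, not restoring from backups at 3 a.m.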

Most applications do have persistent data, so you need a way to maintain data consistency when creating new instances of the application. For most applications this is a solvable problem, given the correct architecture — keeping the data separate from the application — and data replication strategies. In some cases you might still have to perform an upgrade on the database server (if it's not feasible to create a new instance of it), but you may also be using a platform-as-a-service database that lets you take a snapshot of the old production instance and create a new database from it.
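A snapshot-based database cutover follows the same disposable pattern. In this sketch the dict-based "database" and the snapshot/restore functions are placeholders for whatever your database service actually offers:

```python
# Sketch: build the new database from a snapshot of the old one, so the new
# application instance starts from consistent data. The dict "database" is a
# stand-in for a managed database service's snapshot/restore API.
import copy


def take_snapshot(db):
    # A real service would return a snapshot identifier, not a deep copy.
    return copy.deepcopy(db)


def restore(snapshot):
    # A real service would create a brand-new database instance.
    return copy.deepcopy(snapshot)


old_db = {"users": ["alice", "bob"], "schema_version": 1}
new_db = restore(take_snapshot(old_db))
new_db["schema_version"] = 2   # schema upgrade runs against the copy

print(old_db["schema_version"], new_db["schema_version"])  # → 1 2
```

The old database stays untouched while the upgrade runs against the copy, so a failed migration means discarding the copy, not repairing production.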

There’s nothing especially revolutionary about not upgrading servers; this is how we’ve treated code for a long time. The difference is that, with cloud technologies, we can now treat the infrastructure (such as the virtual machine hosting the application) as if it’s part of the code.
