October 6, 2016 | Written by: Osai Osaigbovo
Today’s businesses operate in an environment of accelerating transformation and rapidly changing business models, and reducing the risk of failure is a critical task for IT leaders.
It’s no secret that application deployment failures and slow deployment timelines lead to massive financial losses. Potential damage to a business’s reputation and, ultimately, the loss of customers make failure a top priority at every management level, from CEOs to IT directors, according to a recent ADT report.
The costs alone are intimidating. Infrastructure failures can cost as much as $100,000 per hour, and production outages cost roughly $5,000 per minute. In some cases, outages of critical applications can cost organizations $500,000 to $1 million per hour.
Why all the problems? Based on my 13 years of IT experience working with clients of all sizes across various industries, these are some key causes of application deployment failure:
1. Lack of operational resilience.
Operational resilience means more than the ability to recover from failure; it also includes the ability to anticipate failures and act to prevent them. Many organizations lack the operational resilience maturity their IT and business require. Preventing application failures entirely is practically impossible, but organizations should invest the time to find, predict and fix faults before they escalate.
2. Lack of consistency in the release pipeline.
Many organizations run a mismatch of software deployment models across their IT systems. Because those systems are typically interconnected in the broader IT landscape, the mismatch leads to failures.
3. Process complexity from tooling sprawl.
Some environments are complicated by the myriad of toolsets and deployment procedures used by development and operations teams. The vast array of tools creates multiple tooling domains with manual handoffs embedded between them, which drives process complexity. In some cases, the provisioning and deployment processes at opposite ends of the release pipeline bear little resemblance to each other.
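One common remedy for these consistency problems is a single parameterized deployment routine, so every environment runs the same ordered steps and only the settings differ. The sketch below illustrates the idea; the environment names, settings and step strings are hypothetical, not a specific organization's pipeline.

```python
from dataclasses import dataclass


@dataclass
class Environment:
    """Target environment for a release; values here are illustrative."""
    name: str
    replicas: int
    run_smoke_tests: bool


# Hypothetical environments; real settings would come from configuration.
ENVIRONMENTS = {
    "dev": Environment("dev", replicas=1, run_smoke_tests=False),
    "staging": Environment("staging", replicas=2, run_smoke_tests=True),
    "production": Environment("production", replicas=4, run_smoke_tests=True),
}


def deploy(artifact: str, env_name: str) -> list[str]:
    """Return the ordered deployment steps for one environment.

    Every environment goes through the same routine; only the
    parameters change, so dev and production exercise an identical
    pipeline instead of divergent ad hoc procedures.
    """
    env = ENVIRONMENTS[env_name]
    steps = [
        f"provision {env.replicas} instance(s) in {env.name}",
        f"install {artifact}",
    ]
    if env.run_smoke_tests:
        steps.append(f"smoke-test {artifact} in {env.name}")
    steps.append(f"mark {artifact} live in {env.name}")
    return steps
```

Because the steps live in one place, a fix to the routine reaches every environment at once instead of being patched into each team's tooling separately.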
4. Inadequate security governance.
A lack of standardization and flexibility throughout the development and release process commonly shows up in application vulnerability scanning. These weaknesses arise when development teams skip the appropriate security testing because the governance measures that would require it are missing. In some cases, testing is viewed as expensive and time consuming, which encourages teams to minimize the effort.
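Governance of this kind is often enforced as an automated gate in the release pipeline: the build fails when scan findings exceed an agreed severity, so skipping the test is no longer an option. The sketch below assumes a generic list of findings with a `severity` key; the shape is hypothetical, not the output format of any particular scanner.

```python
# Rank severities so they can be compared against a policy threshold.
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}


def security_gate(findings: list[dict], max_severity: str = "medium") -> bool:
    """Return True if the release may proceed.

    `findings` is assumed to be a list of scanner results, each with a
    'severity' key. Any finding ranked above `max_severity` blocks the
    release, turning the security policy into an enforced pipeline step
    rather than an optional manual review.
    """
    threshold = SEVERITY_RANK[max_severity]
    blocking = [f for f in findings if SEVERITY_RANK[f["severity"]] > threshold]
    return len(blocking) == 0
```

A CI job would run the scanner, feed its findings into this gate, and abort the deployment when it returns False.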
5. Over-reliance on hero experts.
Every organization has its hero developers or operations experts who can single-handedly solve every problem. Over time, processes are built around these individuals, making those processes difficult to run when they move on. It is crucial to have processes that are not built around one or two critical resources, but that are repeatable, automated and able to scale with the changing demands of the organization.
6. Poor communication between development and operations.
A lack of proper communication and interoperability between the demand and supply sides of IT, the development and operations teams, produces situations in which actions that seem sensible in isolation combine, end to end, into failure. In many organizations the majority of changes are incremental additions or alterations, and these often attract less oversight and control than major projects.
How can one avoid the big bad six? A good start is to discover faults that could lead to failure early in the release cycle. Doing so reduces the cost of fixing the faults and avoids the far larger cost of an application deployment failure in production.
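One lightweight way to shift fault discovery earlier is a set of preflight checks that run before a deployment starts, collecting every failure rather than aborting on the first. The sketch below is a minimal illustration; the check names are hypothetical and real checks would validate things like configuration, connectivity or disk space.

```python
from typing import Callable


def preflight_checks(checks: list[tuple[str, Callable[[], bool]]]) -> list[str]:
    """Run cheap validations before a deployment and collect failures.

    Each check is a (name, callable) pair; the callable returns True on
    success. Exceptions count as failures, so a broken check can never
    silently pass. Catching faults here is far cheaper than discovering
    them as an outage in production.
    """
    failures = []
    for name, check in checks:
        try:
            ok = check()
        except Exception:
            ok = False
        if not ok:
            failures.append(name)
    return failures
```

A pipeline would abort the release whenever the returned list is non-empty, surfacing all of the problems in one pass.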
In my next post, I will discuss approaches to resolve these causes of failure.
Connect with IBM Cloud Advisor Osai Osaigbovo on LinkedIn.