Gartner estimates: “Spending on banking and securities IT is expected to top $471 billion this year, up 14 percent from 2010, and rise by a fifth again to hit $563 billion in 2017”
The Reuters article, Insight: New Masters of the Universe? Banks see future in IT hires, describes the growing trend of banks hiring more and more IT personnel to drive the technology side of the business. The article states, "With IT expertise now a must for the boardroom, banks' conservative workplaces are likely to undergo cultural change as they welcome ambitious, differently-minded people."
Leading banks like Barclays, JP Morgan and Goldman Sachs are hiring technical personnel in greater numbers to lead their IT operations while cutting costs in other areas of the business. Goldman Sachs is an example of the increasing emphasis being placed on IT: "Goldman Sachs has added 6 percent more IT staff since 2009, while cutting elsewhere. That has left it with 8,000 technology employees, making its department bigger than many technology firms, and it works hard to lure professionals away from Silicon Valley with the message that its technology business is key."
This trend is not confined to the banking industry. The growing impact of IT on enterprises was also documented in the IBM 2012 CEO Study, Leading Through Connections, with industry-leading CEOs ranking technology as the most important factor... [Continue Reading]
Five reasons your IT infrastructure may leave you asking yourself, "Well, how did I get here?" (or a few other annoying questions).
In a past life, long before becoming a marketing professional, I was a DJ, spinning and mixing records to pay my way through college (yeah, records!). During this period I became a huge Talking Heads fan. The lyrics of their critically acclaimed song "Once in a Lifetime," often interpreted as dealing with midlife crisis, sacrifice and questionable choices, could honestly be questions posed by many IT professionals about the state of current IT infrastructures. Let's cue this up.
“You may ask yourself, well, how did I get here?”
Let’s face it: traditional infrastructures have grown increasingly complex and inflexible, making it difficult, in most cases, to be responsive to the fast-changing business needs of many enterprises. Data center sprawl and multitudes of heterogeneous hardware platforms, hypervisors, operating systems and applications, each with its own management system, make it difficult to address changing business requirements, get accurate insight from data, or deliver new offerings or services. It simply takes too long to manually build, set up, deliver and tear down servers, storage and network devices the old-fashioned way. Factor in unpredictable occurrences, like a sudden spike in traffic or transactions, and “You may ask yourself, well, how... [Continue Reading]
According to the 2012 IBM Data Center Operational Efficiency study, only 1 in 5 clients have highly efficient IT infrastructures and are able to allocate more than 50% of their IT budget to new projects.
"Wonder what defines highly efficient IT infrastructure?"
These are infrastructures where clients have broken down silos and moved to a new era of interconnected, intelligent and instrumented computing. The vast amounts of data generated daily are used as a source of information to make informed decisions. IT is cutting the manual work of operations and moving IT managers out of data centers by providing them an infrastructure that is programmable yet cost-effective; scalable, flexible and accessible from anywhere. Highly efficient infrastructures help clients anticipate customer preferences, respond to dynamic market changes and outpace the competition.
So "What are non-efficient IT infrastructures missing?"
"Non-efficient IT infrastructures typically have silos of separate servers, storage, network devices, operating systems and management systems. Siloed infrastructures can be extremely complex and often require highly skilled resources to operate and manage. Operations such as assigning workloads to resources and mapping resources to applications are done manually, consuming time and reducing productivity. Organizations with these... [Continue Reading]
During the latter half of my career I’ve spent a lot of time working with disruptive application technologies, so I know firsthand just how dynamic and unpredictable new business workloads can be from the perspective of infrastructure utilization. Yet, IT staffs are mainly trying to support this new breed of applications with data center technologies, processes and procedures that were originally developed to manage highly repetitive and predictable sequential transactions. The tension between twenty-first-century workloads and twentieth-century IT is almost palpable, and the answer, according to some, will be something called the software-defined environment (SDE).
Being an inquisitive IBMer (is there any other kind?), I wanted to better understand our SDE strategy. After my searches turned up very little formal information—mainly this brief article on the IBM PartnerWorld website and a short YouTube video—I decided to pay a visit to my good friend and colleague Jeff Frey.
Jeff is an IBM Fellow and Chief Technology Officer of the System z platform. His fingerprints are all over every major advance in mainframe technology for the past 30 years, so I had a feeling that he’d be able to fill my knowledge gap. I was not disappointed!
Read the rest of the post on the Smarter Computing blog.
In our earlier blog, Matt Hogstrom, CTO, IBM Software Defined Environment (SDE), explained IBM’s SDE approach to supporting the complete stack of the data center infrastructure, from computer hardware to end-user software, based on OpenStack. And now, Matt’s exclusive tête-à-tête with Datacenter Dynamics author Penny Jones describes IBM’s vision and mission for Software-Defined Everything for a smarter IT infrastructure.
The interview is quite interesting because it discusses not only the SDE foundations but also the technology, the best practices and the intelligence for managing a software-defined infrastructure. Let’s take a look at the key highlights of the conversation (according to Matt):
SDE is viewed as a critical and foundational element of the overall infrastructure: the ability to capture information about workloads and the way information is processed, to set levels or objectives from a workload perspective, and to manage these according to SLAs
A Software Defined Environment can express policies at the business and human level, allowing the infrastructure to make the appropriate decisions
SDE captures best practices by controlling and automating practically every facet of the data center, right from network provisioning to storage setup and provisioning
IBM... [Continue Reading]
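The SLA-driven workload management Matt describes can be illustrated with a toy sketch. All names here (workload and pool attributes, the placement rule) are hypothetical illustrations, not IBM APIs:

```python
# Toy illustration of SLA-driven workload placement (hypothetical, not an IBM API).
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    max_latency_ms: int   # the SLA objective captured for this workload
    cpus_needed: int

@dataclass
class Pool:
    name: str
    latency_ms: int       # typical latency this resource pool can deliver
    free_cpus: int

def place(workload: Workload, pools: list) -> str:
    """Pick a pool that satisfies the workload's SLA, preferring the
    slowest acceptable pool so premium capacity stays free."""
    candidates = [p for p in pools
                  if p.latency_ms <= workload.max_latency_ms
                  and p.free_cpus >= workload.cpus_needed]
    if not candidates:
        raise RuntimeError(f"no pool satisfies SLA for {workload.name}")
    best = max(candidates, key=lambda p: p.latency_ms)
    best.free_cpus -= workload.cpus_needed
    return best.name

pools = [Pool("premium", 5, 16), Pool("standard", 50, 64)]
print(place(Workload("trading", 10, 4), pools))        # -> premium
print(place(Workload("batch-report", 500, 8), pools))  # -> standard
```

The point of the sketch is the shape of the decision: the infrastructure, not the operator, matches a workload objective against what each pool can deliver.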
An idea gains credibility and is more likely to be adopted when we hear it straight from the horse’s mouth. The idea can be about anything: a behavior, approach, process, practice or technology. The same has happened with a technology eyeing wide adoption, software-defined architecture in the data center, which has led many organizations to be ready now for what’s next! Forrester, a leading research firm, has in its report answered many previously unanswered questions about the Software Defined Data Center (SDDC) that are realistic and noteworthy. In our first and second releases, we have already discussed Forrester’s evaluations and explanations of the emergence, opportunities and architecture of Software Defined Data Centers.
Forrester believes a Software Defined Data Center is a comprehensive abstraction of a complete data center and the future of infrastructure architecture. Forrester anticipates that SDDC solutions will be very lucrative for technology vendors because they will almost certainly drag along substantial services revenue. The reality is that the ease-of-use and simplification features that will be open to business users will probably require complex integration, although the vendors who minimize this complexity will have an advantage. Forrester believes IBM, a major vendor, is well positioned to lead SDDC solutions,... [Continue Reading]
The rise of cloud computing, big data and demand for IT Infrastructure as a Service, combined with the need to reduce costs and increase scalability, has pushed data centers to the verge of their next transformation. This transformation is the outcome of the pressure public cloud computing places on IT organizations to compete with cloud providers, as well as the need to use automated processes to manage huge volumes of data. The underlying data center, particularly one that is software defined, will enable business leaders to balance these realities by using automated and integrated IT processes built on open architecture for the flexible management of workloads ranging from big data analytics to cloud services.
At the upcoming IBM InterConnect 2013 global conference, October 9 to 11 in Singapore, Jacqueline Woods, IBM Vice President for Growth Solutions, will address the implications of these trends in detail in her exclusive session, “A New Era of Computing: Are You ‘Ready Now’ to Build a Smarter Enterprise?” The session will help data center professionals understand the best practices and potential of disruptive technologies such as cloud, virtualization, software defined environments, big data and mobile, and how to harness them to deliver business value.
The 3-day conference,... [Continue Reading]
We are moving towards the "New Era of Smarter Computing," and we need to transform our business model and culture to gain competitive advantage.
So what is this “New era of Smarter computing” we are talking about?
This is the era where businesses react with speed and flexibility with shorter deployment times, manage infrastructure with predictive analytics and drive more sales with open standards to achieve reduced operational costs, optimized utilization and better customer experience.
Now the question is, what is inhibiting us from moving to the new era of computing?
First, businesses are cutting costs on IT expenditure for maintenance and administration; second, access to data is limited when and where it is required; and third, we are resistant to moving beyond our traditional siloed infrastructure.
With all these inhibitors, how do we move to the new era of smarter computing?
We need to move to an infrastructure that is:
Defined by software, so that workloads can be automatically assigned to resources
Designed for big data, delivering instantaneous insights for competitive advantage
Open and collaborative, to protect the investment.
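The first point, infrastructure "defined by software," can be sketched as desired-state reconciliation: operators declare what should be running, and software computes the provisioning actions. This is a minimal toy illustration; the service names and action tuples are invented for the example:

```python
# Minimal sketch of software-defined provisioning: compare a declared desired
# state against the actual state and derive the actions needed to converge.
def reconcile(desired, actual):
    """Return (action, service, count) tuples to move actual toward desired."""
    actions = []
    for svc, want in desired.items():
        have = actual.get(svc, 0)
        if want > have:
            actions.append(("start", svc, want - have))
        elif want < have:
            actions.append(("stop", svc, have - want))
    for svc, have in actual.items():
        if svc not in desired:          # running but no longer declared
            actions.append(("stop", svc, have))
    return actions

desired = {"web": 3, "db": 2}            # service -> instances wanted
actual = {"web": 1, "db": 2, "old": 1}   # what is currently running
print(reconcile(desired, actual))  # -> [('start', 'web', 2), ('stop', 'old', 1)]
```

No one logs in to build or tear down servers; changing the declaration changes the infrastructure.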
If you want to learn more about overcoming the inhibitors to moving to the "new era" and achieving a business model with an efficient and agile infrastructure, please join IBM's leading experts at the following... [Continue Reading]
Today’s IT organizations face immense challenges. They must deliver game-changing cloud , big data and analytics capabilities. But they’re also expected to drive innovation and growth with declining budgets. Clients are telling us that in order to compete they must invest in their IT infrastructures to improve customer service, enable better decisions through improved use of data, and enhance collaboration across their value chain. More than ever, we are hearing that infrastructure matters !
Beginning Monday, October 21st, IBM is hosting the Enterprise 2013 conference at the Bonnet Creek Conference Center in Orlando, Florida.
Attending the conference will help you understand why IT architecture choices enable the creation of business value, and why clients have made IT decisions to deliver superior customer experience and drive business transformation and market leadership. Many sessions will be conducted by expert industry analysts, IBM executives and product managers, who will share their thoughts on the future direction of IT infrastructure and its impact on business results.
I would like to highlight a number of sessions that might be of interest to those of you who have been following our Software Defined blog. These sessions will provide an overview of Software Defined Environments (SDE), Software Defined Storage, Software Defined Networks (SDN) and Software Defined Data Centers, and the role Software Defined plays in... [Continue Reading]
Everyone seems to have a software-defined play these days. When IBM talks about software-defined, we use the more global term Software Defined Environment (SDE): an environment that takes care of every element of data center infrastructure, from computer hardware to middleware to end-user software. But why is there a need for a Software Defined Environment? How is IBM SDE different from other software-defined architectures, and what kind of impact and opportunities will it bring to data center infrastructures? With these questions in mind, IBM brings you an exclusive Software Defined Environment Solution Brief that describes how SDE has evolved to become the foundation for an efficient IT infrastructure. In the solution brief, IBM offers plenty of evidence and opportunities to help you take full advantage of the Software Defined Environment. Let's take a look:
The right strategy at the right time
With technology growing more complex, business leaders are prompted to look for a simplified, responsive and adaptive infrastructure to meet IT challenges and demands. A Software Defined Environment is the next step in the evolution of agile, optimized information technology, bringing far more responsiveness and flexibility by automating the entire data center infrastructure.
Creating a workload-aware IT infrastructure
The Software Defined Environment framework transforms static infrastructure into a dynamic, continuously... [Continue Reading]
IBM's latest study, Under cloud cover: How leaders are accelerating competitive differentiation, states that “Over the next three years, cloud’s strategic importance to business users is expected to double from 34 percent to 72 percent, even surpassing their IT counterparts at 58 percent."
What are market leaders doing differently?
Today cloud is a business reality, a phenomenon where everything is done, executed, stored and distributed through the Internet. Leading organizations, called pacesetters, have discovered cloud as a growth engine and have adopted it to the highest levels. These organizations draw valuable insights from their data and transform how they make decisions. Cloud enables them to tap expertise from across their entire ecosystem and enjoy competitive advantage through customer engagement, better decisions and deeper collaboration.
What are the other organizations losing?
The other organizations, still in the initial stages of cloud deployment, are falling behind the pacesetters in reinventing customer relationships by 136 percent, in using analytics by 170 percent and in leveraging expert knowledge across their ecosystems by 79 percent.
Are you thinking of moving to the cloud but worried about some issues?
Adopting cloud at the highest level can raise a few concerns, such as security, speed and disruption to existing business, exposure to new competitors, and the need to develop and... [Continue Reading]
Long before there was a buzz around “Software Defined,” many IT experts had started sharing their perspectives on the next big thing in IT: a platform that makes organizations’ IT infrastructure simplified, responsive and adaptive!
Renato Recio, an IBM Fellow and CTO of IBM System Networking, spoke about the state of Software Defined Networking in his blog. According to Recio, SDN is in the early adoption phase today, but it is no longer a technology only for companies that can spend significant resources developing their own networks (e.g., Google, Microsoft). Instead, smaller companies such as Tervela and Selerity are using IBM’s SDN solutions in production environments today.
He goes on to describe one of the issues SDN has faced: the lack of a widely available, common platform that application and appliance developers can focus on.
Dr. Casimer DeCusatis also discusses five reasons why software defined networking makes a difference. In his post he describes SDN as follows: “SDN is fundamentally distinguished from other networking technologies because it abstracts the underlying hardware complexity, separating the management and control planes from the data plane. Some consequences of this abstraction include more centralized management, perhaps through cloud middleware or NaaS such as the... [Continue Reading]
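The control/data plane split DeCusatis describes can be sketched in a few lines. This is a toy model, not a real OpenFlow stack: a centralized controller holds policy, and a switch forwards packets, consulting the controller only on a flow-table miss:

```python
# Toy illustration of SDN's separation of control plane and data plane.
class Controller:
    """Centralized control plane: holds the forwarding policy."""
    def __init__(self):
        self.policy = {}  # destination -> output port

    def set_route(self, dst, port):
        self.policy[dst] = port

    def lookup(self, dst):
        return self.policy.get(dst)

class Switch:
    """Data plane: forwards packets, punting to the controller on a miss."""
    def __init__(self, controller):
        self.controller = controller
        self.flow_table = {}  # locally cached forwarding rules

    def forward(self, dst):
        if dst not in self.flow_table:      # table miss -> ask control plane
            port = self.controller.lookup(dst)
            if port is None:
                return "drop"
            self.flow_table[dst] = port     # install rule in the data plane
        return f"out port {self.flow_table[dst]}"

ctl = Controller()
ctl.set_route("10.0.0.5", 3)
sw = Switch(ctl)
print(sw.forward("10.0.0.5"))   # -> out port 3 (rule installed on first miss)
print(sw.forward("10.0.0.9"))   # -> drop (no policy for this destination)
```

Because the policy lives in one place, management is centralized; the switches stay simple and fast.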
It seems that almost everywhere, the rush to cloud and programmable infrastructure has generated a number of conversations around Software Defined... Software Defined Data Centers (SDDC), Software Defined Compute (SDC), Software Defined Storage (SDS), Software Defined Networking (SDN) and Software Defined Infrastructure (SDI), to name the predominant references. Many companies, consultants and others have started using the terminology but actually mean different things. So, what does IBM mean when we talk about Software Defined?
At IBM we see a bigger picture than just the data center elements: we see a Software Defined Environment (SDE). Let's first talk about the progression of "Software Defined" and how we got here. Consider it a progression of Software Defined Environments 1.0, 2.0 and 3.0.
The progression as visualized above is something that has been happening for several years. Currently the industry is largely in the 2.0 phase and moving toward 3.0. Here is a brief description of the stages.
Software Defined Environments 1.0
To put this in perspective, consider that the IT industry is continuously on a transformational journey. The most recent transformation has been virtualization across all infrastructure platforms and elements. Virtualization started with Compute to better utilize compute resources which generated better ROI on compute and software investments.... [Continue Reading]
Effective management and use of virtualized IT resources is a key pillar of the IBM Software Defined Environment (SDE) strategy. Of course, virtualized IT is nothing new; it was invented by IBM back in the late 1960s and is used to this day by many organizations as part of Virtual Machine/370 and follow-on systems. Users and applications were allocated virtual machines that gave them virtual compute, storage and even cool things like virtual printers and punches!
So what is different about the technology and the environment now that brings virtualization into the forefront of enabling a new wave of IT automation for today's demanding mobile , big data & analytics workloads?
Earlier mainframe virtualization environments, and the more recent UNIX and x86 virtualization solutions, were based on proprietary formats and interfaces. This left anyone trying to implement an IT automation solution on top of these systems having to write multiple implementations or use plugins and abstraction layers to hide the differences. Today, with OpenStack receiving widespread acceptance as an open standard for virtual IT resource management, solution developers can develop to one interface.
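The "develop to one interface" idea is just an abstraction layer over heterogeneous back ends. A minimal sketch, with invented driver classes standing in for real hypervisor back ends (this is not OpenStack code):

```python
# Sketch of writing automation once against a common compute interface,
# with per-hypervisor drivers hiding the proprietary details.
from abc import ABC, abstractmethod

class ComputeDriver(ABC):
    @abstractmethod
    def create_vm(self, name: str, cpus: int) -> str: ...

class KvmDriver(ComputeDriver):
    def create_vm(self, name, cpus):
        return f"kvm: defined domain {name} with {cpus} vCPUs"

class ZvmDriver(ComputeDriver):
    def create_vm(self, name, cpus):
        return f"z/VM: created guest {name.upper()} with {cpus} IFLs"

def provision(driver: ComputeDriver, name: str, cpus: int) -> str:
    # The automation logic is written once; swapping platforms means
    # swapping the driver, not rewriting the automation.
    return driver.create_vm(name, cpus)

print(provision(KvmDriver(), "web01", 2))
print(provision(ZvmDriver(), "web01", 2))
```

OpenStack plays the role of the common interface at industry scale, so this driver layer is standardized rather than rebuilt by every solution developer.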
In my early days as a programmer, I wrote automation programs to create and configure VM/370 virtual resources in support of diverse applications. This included carving out virtual disks and allocating... [Continue Reading]
Your organization might have deployed a cluster or grid on site. But can these resources always meet your peak demands? For example, what happens when several large projects move into the same simulation and design phase at the same time?
Simply adding hardware to address peak workload requirements, especially if they are short term, is probably not an option. Expanding the physical infrastructure can require significant time, expertise and budget. And the data center may already be maxed out on power, cooling and real estate. What’s the answer?
To address these challenges, at Pulse 2014 IBM announced the IBM Platform Computing Cloud Service, which provides ready-to-run clusters in the SoftLayer cloud that are optimized for compute-intensive technical computing and analytics applications. The Cloud Service comes complete with Platform LSF (SaaS) and Platform Symphony (SaaS) workload management software, dedicated physical machines and the support of the Platform Computing Cloud Operations team.
Organizations that have on-site clusters or grids can quickly address spikes in infrastructure demand by implementing a hybrid cloud. Platform Computing Cloud Service enables these organizations to forward workloads from local infrastructure to a Platform LSF or Platform Symphony cluster in the SoftLayer cloud, quickly accommodating demand without being concerned about security or... [Continue Reading]
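The hybrid bursting pattern described above boils down to a simple routing decision: keep work local while capacity remains, and overflow to the cloud. A toy sketch (slot counts and the threshold logic are illustrative, not Platform LSF configuration):

```python
# Toy sketch of hybrid-cloud bursting: overflow work goes to the cloud
# only when the on-site cluster is saturated.
LOCAL_SLOTS = 100  # job slots available on the on-site cluster

def route_jobs(queued_jobs: int, local_busy: int) -> dict:
    """Split newly queued jobs between the local cluster and the cloud."""
    local_free = max(LOCAL_SLOTS - local_busy, 0)
    to_local = min(queued_jobs, local_free)
    to_cloud = queued_jobs - to_local   # overflow bursts to cloud capacity
    return {"local": to_local, "cloud": to_cloud}

print(route_jobs(40, 80))  # -> {'local': 20, 'cloud': 20}  (burst needed)
print(route_jobs(10, 50))  # -> {'local': 10, 'cloud': 0}   (all fits locally)
```

In the actual service, the workload manager makes this decision per job against real cluster state; the sketch only shows why a hybrid setup absorbs spikes without buying hardware for the peak.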