According to a new Gartner research report, IBM is the worldwide market leader in flash storage solid state arrays (SSAs), based on revenue for 2013.
More customers across the world are turning to IBM flash storage systems than to any other company's for faster access to insights from Big Data. IBM FlashSystem brings flash, the memory technology used in everything from mobile phones and tablet PCs to thumb drives, to the enterprise, helping large and small businesses improve the performance of their applications and analytics in the era of Big Data.
The largest non-profit healthcare system in Southeast Texas, Memorial Hermann Health System, which includes 12 acute care hospitals, migrated last year from physical records to Electronic Medical Records (EMR) from Cerner Corp. to accelerate the sharing of patient information among medical staff. This move radically improved efficiencies and created a swell of digital information that needed to be quickly accessed for analysis. To do that, the organization turned to the IBM FlashSystem 840 with SAN Volume Controller storage virtualization software and the IBM Flex System x240, and saw a dramatic improvement in performance.
“By quickly processing the medical records of all patients across our hospitals in real time, it enables us to detect patterns that indicate the onset of a bacterial infection that may lead to sepsis... [Continue Reading]
In just a short span, big data has become one of the core disruptors of the new digital age. This year saw many big data initiatives inside enterprises, and 2015 is going to be no exception. As big data continues to evolve, CIO.com predicts five major developments that will dominate big data technology in the new year. Let’s take a look:
1. Data Agility Emerges as a Top Focus
Data agility has been one of the big drivers behind the development of big data technologies. In 2015, data agility will become even more central as organizations shift their focus from simply capturing and managing data to actively using it.
2. Organizations Move from Data Lakes to Processing Data Platforms
The data lake will continue to evolve in 2015, with the capability to bring multiple compute and execution engines to the data lake to process the data in place. The big trend in 2015 will be continuous access and processing of events and data in real time, to gain constant awareness and take immediate action.
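The kind of continuous, event-at-a-time processing described here can be sketched in a few lines. This is a hypothetical sliding-window monitor; the function name, window size and threshold are all illustrative and not tied to any particular product:

```python
from collections import deque

def monitor(events, window=5, threshold=3):
    """Scan events as they arrive, keeping only a sliding window in
    memory; record an alert the moment error density in the window
    crosses the threshold."""
    recent = deque(maxlen=window)
    alerts = []
    for i, event in enumerate(events):
        recent.append(event)
        if sum(1 for e in recent if e == "error") >= threshold:
            alerts.append(i)
    return alerts

# Alerts fire at the indexes where 3 of the last 5 events are errors.
stream = ["ok", "error", "ok", "error", "error", "ok", "ok", "ok"]
print(monitor(stream))  # [4, 5]
```

The point of the pattern is that awareness is maintained continuously as data flows, rather than by batch-scanning the lake after the fact.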
3. Self-Service Big Data Goes Mainstream
In 2015, IT will embrace self-service big data, empowering business users, developers, data scientists and data analysts to conduct data exploration directly.
4. Hadoop Vendor Consolidation: New Business Models Evolve
Open Source Software (OSS) adoption has provided tremendous value to the market. CIO.com believes 2015 will see the evolution of a new,... [Continue Reading]
5 reasons your IT infrastructure may leave you asking yourself, well, how did I get here? (Or a few other annoying questions.)
In a past life, long before becoming a marketing professional, I was a DJ, spinning and mixing records to pay my way through college (yeah, records!). During this period I became a huge Talking Heads fan. The lyrics from their critically acclaimed song “Once in a Lifetime,” often interpreted as dealing with mid-life crisis, sacrifice and questionable choices, could honestly be questions posed by many IT professionals about the state of current IT infrastructures. Let’s cue this up.
“You may ask yourself, well, how did I get here?”
Let’s face it: traditional infrastructures have grown increasingly complex and inflexible, making it difficult, in most cases, to be responsive to the fast-changing business needs of many enterprises. Data center sprawl and multitudes of heterogeneous hardware platforms, hypervisors, operating systems and applications, each with its own management system, make it difficult to address changing business requirements, get accurate insight from data, or deliver new offerings and services. It simply takes too long to manually build, set up, deliver and tear down servers, storage and network devices the old-fashioned way. Factor in unpredictable occurrences, like a sudden spike in traffic or transactions, and “You may ask yourself, well, how... [Continue Reading]
According to the 2012 IBM Data Center Operational Efficiency study, only 1 in 5 clients have highly efficient IT infrastructures and are able to allocate more than 50% of their IT budget to new projects.
"Wonder what defines a highly efficient IT infrastructure?"
These are infrastructures where clients have broken down silos and moved to a new era of interconnected, intelligent and instrumented computing. The vast amounts of data generated daily are used as a source of information for making informed decisions. IT is cutting the manual work of operations and moving IT managers out of data centers by providing an infrastructure that is programmable yet cost-effective; scalable, flexible and accessible from anywhere. Highly efficient infrastructures help clients anticipate customer preferences, respond to dynamic market changes and outpace the competition.
So "What are non-efficient IT infrastructures missing?"
Non-efficient IT infrastructures typically have silos of separate servers, storage, network devices, operating systems and management systems. Siloed infrastructures can be extremely complex and often require highly skilled resources to operate and manage. Operations such as assigning workloads to resources and mapping resources to applications are done manually, consuming time and reducing productivity. Organizations with these... [Continue Reading]
Cloud, analytics, mobile and social are transforming the world and bringing opportunities to business. This is an incredible moment. But many organizations have not been able to harness the value of their investments in this space. We need a new IT model, designed for the new era.
It comes down to the fact that infrastructure matters. The right infrastructure helps deliver real-time insights so you can make better business decisions, faster. It delivers performance efficiently, optimizes IT resources and is easy to consume. It’s also scalable and secure, providing safe, shared access to all relevant information no matter where that information resides.
All of that sounds great. But, as I often hear clients ask, how do you make sure you’re choosing the right infrastructure for your business?
It doesn’t have to be hard. Coming up May 19-23 at Edge2014 in Las Vegas, you can tap into IBM’s point of view on the right choices for your business, formed by what we’ve learned in over 20,000 client engagements globally, in every country and every market. Edge2014 is the premier event for infrastructure innovation, where leading worldwide industry experts will discuss why IT infrastructure matters and how organizations can realize competitive advantage by building their cloud, analytics, mobile and social initiatives optimized by IBM infrastructure innovations. Over the course of the event, you... [Continue Reading]
Your organization might have deployed a cluster or grid on site. But can these resources always meet your peak demands? For example, what happens when several large projects move into the same simulation and design phase at the same time?
Simply adding hardware to address peak workload requirements, especially if they are short term, is probably not an option. Expanding the physical infrastructure can require significant time, expertise and budget. And the data center may already be maxed out on power, cooling and real estate. What’s the answer?
To address these challenges, at Pulse 2014 IBM announced the IBM Platform Computing Cloud Service, which provides ready-to-run clusters in the SoftLayer cloud that are optimized for compute-intensive technical computing and analytics applications. The Cloud Service comes complete with Platform LSF (SaaS) and Platform Symphony (SaaS) workload management software, dedicated physical machines and the support of the Platform Computing Cloud Operations team.
Organizations that have on-site clusters or grids can quickly address spikes in infrastructure demand by implementing a hybrid cloud. Platform Computing Cloud Service enables these organizations to forward workloads from local infrastructure to a Platform LSF or Platform Symphony cluster in the SoftLayer cloud, quickly accommodating demand without being concerned about security or... [Continue Reading]
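As a rough illustration of the hybrid bursting decision the service automates, consider a toy routing policy. The `route_job` helper, the capacity figure and the threshold below are all invented for illustration; they are not the Platform LSF or Platform Symphony API:

```python
def route_job(job_name, local_queue_depth, local_capacity=100, burst_threshold=0.8):
    """Decide whether a job runs on the on-site cluster or is forwarded
    to a cloud cluster, based on how full the local queue is."""
    utilization = local_queue_depth / local_capacity
    return "cloud" if utilization >= burst_threshold else "local"

# A quiet grid keeps work local; a saturated one bursts to the cloud.
print(route_job("simulation-42", local_queue_depth=35))  # local
print(route_job("simulation-43", local_queue_depth=95))  # cloud
```

In the real service, the workload manager applies policies like this automatically, so peak demand is absorbed without permanently expanding the on-site data center.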
In our earlier blog, Matt Hogstrom, CTO, IBM Software Defined Environment (SDE), explained IBM’s SDE approach to supporting the complete stack of data center infrastructure, from computer hardware to end-user software, based on OpenStack. Now, Matt’s exclusive tête-à-tête with Datacenter Dynamics author Penny Jones describes IBM’s vision and mission for Software-Defined Everything for a smarter IT infrastructure.
The interview is quite interesting because it discusses not only the SDE foundations but also the technology, the best practices and the intelligence for managing a Software-Defined Infrastructure. Let’s take a look at the key highlights of the conversation (according to Matt):
SDE is critical and foundational to the overall infrastructure: the ability to capture information about workloads and the way information is processed, to set levels or objectives from a workload perspective, and to manage these according to SLAs
A Software Defined Environment can identify policies at the business and human level, allowing the infrastructure to make the appropriate decisions
SDE captures best practices by controlling and automating practically every facet of the data center, right from network provisioning to storage setup and provisioning
IBM... [Continue Reading]
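The policy-driven decision-making described in the highlights above can be sketched minimally. The resource tiers, the single-latency SLA field and the `place_workload` helper are illustrative assumptions, not IBM SDE interfaces:

```python
def place_workload(workload, resources):
    """Pick the cheapest resource that still satisfies the workload's
    SLA (here reduced to a single latency objective)."""
    candidates = [r for r in resources
                  if r["latency_ms"] <= workload["sla_latency_ms"]]
    if not candidates:
        raise RuntimeError("no resource can meet the SLA")
    return min(candidates, key=lambda r: r["cost"])["name"]

resources = [
    {"name": "flash-tier", "latency_ms": 1,   "cost": 10},
    {"name": "disk-tier",  "latency_ms": 12,  "cost": 3},
    {"name": "tape-tier",  "latency_ms": 500, "cost": 1},
]

# A latency-sensitive analytics workload lands on flash; a relaxed
# batch job is placed on the cheaper disk tier.
print(place_workload({"name": "analytics", "sla_latency_ms": 5}, resources))
print(place_workload({"name": "batch", "sla_latency_ms": 60}, resources))
```

The business states the objective (the SLA); the infrastructure, not an administrator, makes the placement decision, which is the essence of the approach Matt outlines.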
Nothing stays the same. Change is the one constant in our world, especially in the world of information technology (IT). This paradigm certainly holds true for the storage software technology that until recently was known as IBM General Parallel File System (GPFS). It has changed, and continues to evolve to meet the challenges faced by enterprises around the globe. To reflect these significant changes, GPFS is now known as Elastic Storage. Rapid evolution is also the rule with another intriguing IT technology: flash storage. IBM FlashSystem represents the culmination of many years of very successful solid state storage engineering. Combined, these two complementary technologies offer a new and very powerful solution for a wide spectrum of enterprise storage challenges.
Elastic Storage has its roots in solving the storage and processing challenges found in high performance computing (HPC) environments. For example, if you needed to efficiently and simultaneously process the thousands of individual files involved in sequencing human genomes, Elastic Storage couldn’t be matched for this task. And it still can’t. But times have changed, and so has Elastic Storage. Now Elastic Storage offers solutions for a wide range of workloads to enterprises of all types and sizes.
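The many-files-at-once access pattern Elastic Storage serves on the storage side can be sketched from the client side. Here `analyze` is a hypothetical stand-in for real per-file work, and the sketch illustrates the workload shape, not the GPFS API:

```python
from concurrent.futures import ThreadPoolExecutor

def analyze(path):
    # Stand-in for real per-file work, e.g. scanning one genome segment.
    return path, len(path)

def analyze_all(paths, workers=8):
    """Process many files concurrently; this is the fan-out pattern a
    parallel file system is built to serve without the single-server
    bottleneck of traditional network storage."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(analyze, paths))

files = [f"sample_{i:04d}.fastq" for i in range(1000)]
results = analyze_all(files)
print(len(results))  # 1000
```

When thousands of workers issue I/O like this simultaneously, the file system itself must parallelize access to the data, which is exactly the HPC problem GPFS was designed to solve.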
The cloud, analytics, mobile and social environment offers a good example of how Elastic Storage now delivers industry-leading advantages in... [Continue Reading]
A thought becomes evidence, and is more likely to be adopted, when we hear it straight from the horse’s mouth. The thought can be about anything: a behavior, approach, process, practice or technology. The same has happened with a technology eyeing wide adoption, Software Defined architecture in the data center, which has led many organizations to be ready now for what’s next! Forrester, a leading research firm, has in its report answered many realistic and noteworthy questions about the Software Defined Data Center (SDDC). In our first and second releases, we have already discussed Forrester’s evaluations and explanations of the emergence, opportunities and architecture of Software Defined Data Centers.
Forrester believes a Software Defined Data Center is a comprehensive abstraction of a complete data center and the future of infrastructure architecture. Forrester anticipates that SDDC solutions will be very lucrative for technology vendors because they will almost certainly drag along substantial services revenue. The reality is that the ease-of-use and simplification features open to business users will probably require complex integration, although vendors who minimize this complexity will have an advantage. Forrester believes IBM, a major vendor, is well positioned to lead SDDC solutions,... [Continue Reading]
Recently, OpenDaylight became the first open source project ever to earn a Best of Interop 2014 award for its significant contributions to advancing IT. Interop, the most respected networking industry trade show, awarded OpenDaylight the Best of Interop Grand Prize, selected from nine category winners across key facets of IT. This caps off a year of accomplishments that have taken OpenDaylight from nascent underdog to one of the most important open source projects in the world.
One year ago at the Open Networking Summit, we announced OpenDaylight, a cross-industry consortium tasked with building an open source community and platform for Software-Defined Networking (SDN) solutions. OpenDaylight is the culmination of IBM's efforts to build an open platform that can provide the base for one of the three pillars of Software Defined Environments (SDE), alongside Software-Defined Storage (SDS) and Software-Defined Compute (SDC). The consortium launched with other major industry players including Brocade, Cisco, Citrix, Ericsson, Juniper, Microsoft and Red Hat.
While some people were enthusiastic from the beginning, others were understandably skeptical. Many of the companies involved were networking equipment vendors who might oppose a truly open platform that could rapidly bring sweeping changes to how we build networking infrastructure. During the announcement, Inder Gopal, IBM's Vice President of Network Development... [Continue Reading]
It seems that almost everywhere, the rush to “cloud” and programmable infrastructure has generated a number of conversations around Software Defined... Software Defined Datacenters (SDDC), Software Defined Compute (SDC), Software Defined Storage (SDS), Software Defined Networking (SDN) and Software Defined Infrastructure (SDI), to name the predominant references. Many companies, consultants and others have started using the terminology but actually mean different things. So, what does IBM mean when we talk about Software Defined?
At IBM we see a bigger picture than just the data center elements: we see a Software Defined Environment (SDE). Let's first talk about the progression of "Software Defined" and how we got here. Consider it a progression of Software Defined Environments 1.0, 2.0 and 3.0.
The progression as visualized above is something that has been happening for several years. Currently the industry is largely in the 2.0 phase and moving toward 3.0. Here is a brief description of the stages.
Software Defined Environments 1.0
To put this in perspective, consider that the IT industry is continuously on a transformational journey. The most recent transformation has been virtualization across all infrastructure platforms and elements. Virtualization started with Compute to better utilize compute resources which generated better ROI on compute and software investments.... [Continue Reading]
IBM's Strategy for Software Defined Network (SDN) will be one of the key areas covered at Edge 2014, the premier event for infrastructure innovation. I invite you to join me to explore how we plan ahead with Software Defined Networking and our key initiatives around SDN. At Edge 2014, from May 19-23 at the Venetian in Las Vegas, I am going to detail a step-wise rollout plan that will fully capture the transformation in your network architecture without vendor lock-in. This unique opportunity will also help you realize the potential of OpenDaylight, an open source Network Virtualization framework that integrates with OpenStack and supports hypervisors such as VMware, KVM and others. This lecture will describe IBM's data center strategy to complete the virtualization framework and leverage the power of cloud using industry best-practice designs and open software tools. Come join me to learn about the vital differentiation of our SDN solutions in this rapidly evolving space. Here are the complete session details:
Date & Time
IBM's Strategy for Software Defined Network
Thursday, May 22nd, 2014 - 1:45 pm-2:45 pm PT
Friday, May 23rd, 2014 - 10:30 am-11:30 am PT
Lido 3104... [Continue Reading]
Everyone seems to have a software-defined play these days. When IBM talks about software-defined, we use the more global term Software Defined Environment (SDE): an environment that takes care of every element of data center infrastructure, from computer hardware to middleware to end-user software. But why is there a need for a Software Defined Environment, how is IBM SDE different from other software-defined architectures, and what kind of impact and opportunities will it bring to data center infrastructures? With these questions in mind, IBM brings you an exclusive Software Defined Environment Solution Brief that describes how SDE has evolved to become the foundation for an efficient IT infrastructure. In the solution brief, IBM offers evidence and opportunities to help you take full advantage of the Software Defined Environment. Let’s take a look:
The right strategy at the right time
With technology growing more complex, business leaders are prompted to look for a simplified, responsive and adaptive infrastructure to meet IT challenges and demands. A Software Defined Environment is the next step in the evolution of agile, optimized information technology, bringing far more responsiveness and flexibility by automating the entire data center infrastructure.
Creating a workload-aware IT infrastructure
The Software Defined Environment framework transforms static infrastructure into a dynamic, continuously... [Continue Reading]
Recently, IBM Client Architect Richard Goldgar blogged on Thoughtsoncloud.com to offer top ten tips for CIOs looking to move to the cloud. Let’s take a look at the tips he suggested that could help CIOs succeed in their cloud efforts:
Stop being religious
According to Richard, cloud is about making things “fast, anywhere, now” (he calls this FAN) while saving costs and simplifying IT. Richard suggests cloud should help us broaden, not narrow, our horizons while keeping in mind our existing architecture and IT goals.
Pay careful attention to total cost of ownership
Richard suggests making conscious choices about any compromises. Make sure to compare current real costs (staff, equipment, software licensing, cost of security risks and others) with the cloud options.
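A back-of-the-envelope version of the comparison Richard recommends might look like the sketch below. Every figure is invented purely for illustration; your own staff, hardware, licensing and risk numbers would replace them:

```python
def annual_tco(staff, hardware, licensing, risk):
    """Sum the cost components into one comparable annual figure."""
    return staff + hardware + licensing + risk

# Hypothetical numbers: on-premises vs. a cloud alternative.
on_prem = annual_tco(staff=400_000, hardware=150_000, licensing=80_000, risk=50_000)
cloud   = annual_tco(staff=250_000, hardware=0,       licensing=120_000, risk=30_000)

print(on_prem - cloud)  # positive means the cloud option is cheaper on paper
```

The value of writing it out, even this crudely, is that hidden components such as security risk get a line item instead of being forgotten.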
Security should be at the center
It is always advisable to start by moving only a few things to the cloud, for example a small application with few security requirements, or perhaps office automation tools.
Find the low-hanging fruit
To start with, try something simple that does not require a lot of security or technical IT support.
Choose your friends carefully
Choosing the right cloud vendor is very important... [Continue Reading]
Gartner estimates: “Spending on banking and securities IT is expected to top $471 billion this year, up 14 percent from 2010, and rise by a fifth again to hit $563 billion in 2017”
The Reuters article, Insight: New Masters of the Universe? Banks see future in IT hires, describes the growing trend of banks hiring more and more IT personnel to drive the technology side of the business. The article states: "With IT expertise now a must for the boardroom, banks' conservative workplaces are likely to undergo cultural change as they welcome ambitious, differently-minded people."
Leading banks like Barclays, JP Morgan and Goldman Sachs are hiring technical personnel in greater numbers to lead their IT operations while cutting costs in other areas of the business. Goldman Sachs is an example of the increasing emphasis being placed on IT: "Goldman Sachs has added 6 percent more IT staff since 2009, while cutting elsewhere. That has left it with 8,000 technology employees, making its department bigger than many technology firms, and it works hard to lure professionals away from Silicon Valley with the message that its technology business is key."
This trend is not confined to the banking industry. The growing impact of IT on enterprises was also documented in the IBM 2012 CEO Study, Leading Through Connections, with industry-leading CEOs ranking technology as the most important factor... [Continue Reading]