In today’s world, incessant data growth is challenging traditional storage and data management solutions. These outdated systems lack performance and are expensive to administer and scale, and business outcomes suffer as a result. To respond quickly and cost-effectively to this staggering growth of data, organizations need a scalable, high-performance, reliable and collaborative storage and data management infrastructure.
IBM’s General Parallel File System (GPFS) is a proven, scalable, high-performance data and file management solution in wide use across multiple industries. GPFS provides simplified data management and integrated information lifecycle tools capable of managing petabytes of data and billions of files, arresting the growing cost of managing ever-growing amounts of data. GPFS removes data-related bottlenecks by providing parallel access to data, eliminating single-filer choke points and hot spots.
IBM GPFS ensures end-to-end data availability, reliability and integrity, and simplifies data management at large scale by providing a single namespace that can be scaled easily on demand, simply by adding compute or storage resources. Conventionally deployed for high performance computing (HPC), GPFS has since gained market share in non-HPC industries such as healthcare and finance, and supports highly available scale-out relational databases, helping…
The vision of the software defined infrastructure (SDI) is to deliver virtualized capabilities across the entire set of resources an application requires, so that applications can be deployed automatically and quickly with little to no human intervention. Storage is one of the major building blocks in accomplishing this vision. To achieve it, storage hardware and software architectures must adapt so that storage can be provisioned in response to the dynamically changing requirements of the SDI. Flash technology is positioned as a key enabler for these new storage architectures; with the right combination of hardware and software, it facilitates efficient, cost-effective and high-performance delivery of storage services. Flash-based storage improves I/O performance and efficiency for many applications, including database acceleration, server and desktop virtualization, and cloud environments. Flash storage has become a way to compress data, reduce power consumption and increase performance, making it a superior enabler of virtualization and a perfect fit for the SDI vision.
Recognizing the growing importance of flash in a software defined infrastructure, IBM is offering end-to-end technical education sessions on flash technologies at Edge 2014, May 19-23 at the Venetian, Las Vegas.
At Edge 2014 – the premier event for infrastructure innovation,…
There's a lot of buzz about cluster file systems being the ultimate Software Defined Storage (SDS). I couldn't agree more. I like to think of cluster file systems as the Swiss army knife of storage -- many styles of storage are possible with one cluster file system product: high IOPS, high streaming bandwidth, or cost-effective configurations, all with high availability and elasticity at the same time.
Cluster file systems can leverage cost-effective, commodity storage-rich servers, and the wealth of Hard Disk Drive (HDD) and Solid-State Drive (SSD) options they offer, to create enterprise-grade storage. IBM's General Parallel File System (GPFS) is a mature, robust, award-winning product, well known for performance and scaling in some of the world's largest supercomputer installations, that makes a great software defined storage subsystem for many, many applications.
IBM recently published an online book, Software Defined Storage for Dummies, which shows the dramatic capabilities of GPFS-as-SDS and many real-world applications. Software Defined Storage For Dummies, IBM Platform Computing Edition, not only examines data storage and management challenges but also explains how software defined storage delivers an innovative solution for high-performance, cost-effective storage using IBM’s GPFS.
It’s…
Cloud has emerged as the growth engine for business. A recent survey of enterprises showed that those who adopt and leverage cloud computing for competitive advantage on average grow twice as fast and double their profits. Building an effective cloud environment requires a disciplined approach. Standards, both formal and informal, are rapidly evolving to ensure portability, interoperability (ISO/IEC JTC1/SC-38, OASIS TOSCA, IEEE P2302, etc.) and manageability of the cloud environment. IBM is well positioned to offer end-to-end cloud solutions based on open architectures that deliver both interoperability and value to our customers across the world. Recently, IBM has made major moves in the cloud. From acquiring SoftLayer last year to committing to invest $1.2 billion, IBM Cloud is creating exceptional opportunities for enterprises to transform business models, supply chains, and their interactions with customers and partners. Innovation around IBM's open cloud architecture is dramatically changing the entire digital fabric inside and outside of the data center. One example is Codename: Bluemix, an open-standards Platform as a Service. Introduced at Pulse2014, Bluemix is a new IBM PaaS offering based on Cloud Foundry: an open-standards, cloud-based platform for building, managing and running apps of all types (web, mobile, big data, new smart devices).
We are seeing a lot of innovations by IBM in the…
After the successful conclusion of Pulse2014, we’re ready to host yet another grand event – Edge2014. Running May 19-23 at the Venetian Las Vegas, IBM Edge2014 brings together IT professionals and practitioners from all industries to sharpen their expertise, discover the latest technologies, and share best practices in infrastructure innovation. The premier global event for infrastructure innovation will focus on simplifying IT infrastructure and accelerating performance to deliver business value using IBM’s cloud, big data, mobility and security solutions.
Let’s take a look at some of the key highlights of Edge2014:
550 expert technical sessions across 14 tracks
Exciting technology innovations and announcements
Latest and greatest business partner education
A comprehensive and expanded showcase Solution Center giving you access to the latest storage, System x and PureSystems solutions from IBM and our sponsors
Top-notch entertainment for over 5500 attendees
Wait, we have more in store for you...
The top 5 reasons to attend Edge2014:
Be the first to learn about the latest IBM technology innovations and their business implications
Be the first to see newest IBM solutions and live demos at the Solution Center
Hear real-life customer case studies and best practices tailored to CxO and Line of Business (LoB) issues and…
Many organizations are wrestling with the economics of cloud computing. This is especially true in High Performance Computing (HPC) and analytics, where applications often demand clustered, scaled-out infrastructure. These types of workloads are often “spiky” or unpredictable, and the costs associated with infrastructure can be substantial.
As a few examples:
A life sciences firm may need compute capacity only at particular stages in the drug development lifecycle
An engineering firm’s workload may vary depending on its active contract portfolio or the specific nature of its projects
An insurance firm may require large amounts of computing power to meet regulatory reporting obligations but only for brief periods at month or quarter end
Provisioning infrastructure to meet periodic peaks is costly. Ideas like peak-shaving, outsourcing and hybrid clouds are not new, but organizations seeking to leverage public Infrastructure-as-a-Service (IaaS) offerings can run into a variety of technical and business challenges:
How to guarantee quality-of-service (QoS) in multitenant environments
Data management and security
How to manage, meter and throttle the usage of variable cost resources
How to manage commercial software licenses
How to ensure that local assets are fully utilized before tapping assets in the cloud
These business…
Businesses large and small are turning to Hybrid Cloud because it unites the best of all worlds: Public Cloud, Private Cloud and dedicated servers working together in any combination. A Hybrid Cloud combines on-premises cloud infrastructure with cloud service provider infrastructure to build a firm’s overall capacity from their combined capabilities. While a hybrid approach promises cost savings and significant gains in IT and business flexibility, concerns remain around how to manage, secure and integrate on-premises infrastructure with cloud services in a hybrid cloud architecture…
With these imperatives in mind, we bring you an exclusive Video Debate session where top IT experts will discuss when hybrid clouds are and are not the be-all and end-all, and whether the infrastructure supporting them really matters. On Tuesday, March 11th from 11 am-12 pm EDT, the fifth installment of our What’s Next for IT Infrastructure Video Debate will also cover developments in cloud, data, security and many other topics and trends that influence your IT infrastructure.
The debate session will be moderated by Kevin Jackson, founder of GovCloud and author of GovCloud: Cloud Computing for the Business of Government. Among the panelists will be Steve Strutt, CTO for Cloud Computing, IBM UK and Ireland; Laura DiDio, Principal of ITIC; and Michael A. Salsburg -…
Effective management and use of virtualized IT resources is a key pillar of the IBM Software Defined Environment (SDE) strategy. Of course, virtualized IT is nothing new: it was invented by IBM back in the late 60s and is used to this day by many organizations as part of Virtual Machine/370 and follow-on systems. Users and applications were allocated virtual machines that gave them virtual compute, storage and even cool things like virtual printers and punches!
So what is different about the technology and the environment now that brings virtualization into the forefront of enabling a new wave of IT automation for today's demanding mobile, big data and analytics workloads?
Earlier mainframe virtualization environments, and the more recent UNIX and x86 virtualization solutions, were based on proprietary formats and interfaces. This left anyone trying to implement an IT automation solution on top of these systems to write multiple implementations, or to use plugins and abstraction layers to hide the differences. Today, with OpenStack receiving widespread acceptance as an open standard for virtual IT resource management, solution developers can develop to one interface.
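To make "develop to one interface" concrete, here is a minimal sketch of the request body an automation tool would send to OpenStack's Compute API (Nova) to boot a virtual machine. The image, flavor and network identifiers are placeholders, and a real deployment would first obtain a token from Keystone; this is an illustration of the payload shape, not a complete client.

```python
import json

def make_boot_request(name, image_ref, flavor_ref, network_id):
    """Build the JSON body for a Nova v2 'create server' call.

    The same payload shape works regardless of the hypervisor
    (KVM, Xen, PowerVM, ...) behind the OpenStack cloud -- which
    is the point of developing to one open interface.
    """
    body = {
        "server": {
            "name": name,
            "imageRef": image_ref,    # image UUID from Glance (placeholder here)
            "flavorRef": flavor_ref,  # flavor ID or UUID (placeholder here)
            "networks": [{"uuid": network_id}],
        }
    }
    # A real tool would POST this to <compute-endpoint>/servers
    # with an X-Auth-Token header obtained from Keystone.
    return json.dumps(body)

payload = make_boot_request(
    name="app-server-01",
    image_ref="IMAGE-UUID-PLACEHOLDER",
    flavor_ref="2",
    network_id="NETWORK-UUID-PLACEHOLDER",
)
print(payload)
```

Because every OpenStack cloud accepts this one request format, the automation layer no longer needs a plugin per hypervisor vendor.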
In my early days as a programmer, I wrote automation programs to create and configure VM/370 virtual resources in support of diverse applications. This included carving out virtual disks and allocating…
Those of us who live in the northern hemisphere generally love this time of year. We are on the downhill side of winter and about a month from the beginning of spring. Many of my friends here in the United States are already swinging baseball bats (or at least watching their children swing baseball bats). And at IBM, the smell of Pulse is in the air. Next week in Las Vegas, thousands of our valued clients and trusted partners will engage in a bold discussion on Cloud .
For those attending the Pulse Open Cloud Summit on Sunday, February 23, or the main Pulse sessions beginning Monday, February 24, one speaker of particular interest should be Jamie Thomas, General Manager, IBM Software Defined Systems. Take a moment to follow her on Twitter @tjamieibm.
I was able to get advance copies of both Jamie’s main tent keynote Monday morning at 10am (session KEY-2550A) and her Cloud & Software Defined Environments (SDE) track kickoff Monday afternoon at 1pm (session CET-1463A). Jamie is a bold thinker, and you should expect to walk away with a good perspective on Cloud and SDE.
Jamie’s keynote centers on how cloud is changing the way work gets done. The lines between business leaders, developers and IT operations are blurring as they work in concert to compose new business models in a dynamic cloud. You’ll want to pay special attention as Jamie will be making some big announcements…
For more than a year, I have been talking to Cloud Application Architects at Managed Service Providers (MSPs) and Telcos around the world who are looking to develop super-scalable storage-based applications to create new businesses for their organizations. They want to do everything in software and drop it on commodity hardware, as they see Google and Facebook doing – but they don’t have the resources Google and Facebook have to develop everything from scratch. Their requirements quickly coalesce around developing to an open API and targeting commodity hardware, so they aren’t tied to a specific vendor’s API and can scale cost-effectively. They quickly conclude that OpenStack is the way to go. And by and large, they are happy developing to OpenStack’s compute API, Nova, since they feel comfortable working with open source code at the compute level: they can architect in resilience and robustness, and any tradeoffs are worth it for the open API.
For them, that comfort level stops at OpenStack’s storage components, Cinder and Swift. While they love the OpenStack storage APIs, they set a much higher bar for resiliency, robustness and reliability in storage than in compute. They feel they can lose a server or a VM, but they can’t risk losing data. What they said they wanted was risk-free, tested, resilient, reliable, robust, industrial-strength storage – done in…
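The Swift API these architects are developing to is itself quite simple: an object upload is an HTTP PUT to /v1/&lt;account&gt;/&lt;container&gt;/&lt;object&gt;. As a minimal sketch, the helper below assembles the URL and headers for such a request; the endpoint and token values are placeholders that a real auth step (Keystone) would supply, and a real client such as python-swiftclient would also issue the request and handle retries.

```python
def build_swift_upload(storage_url, auth_token, container, obj_name, data):
    """Assemble the URL and headers for an OpenStack Swift object
    upload -- an HTTP PUT to <storage_url>/<container>/<object>.

    storage_url is the per-account endpoint returned at authentication
    time; auth_token comes from Keystone. Both values here are
    illustrative placeholders.
    """
    url = "{0}/{1}/{2}".format(storage_url.rstrip("/"), container, obj_name)
    headers = {
        "X-Auth-Token": auth_token,
        "Content-Length": str(len(data)),
        "Content-Type": "application/octet-stream",
    }
    return url, headers

url, headers = build_swift_upload(
    "https://swift.example.com/v1/AUTH_account/",  # placeholder endpoint
    "TOKEN-PLACEHOLDER",
    "backups",
    "db-dump.tar.gz",
    b"example object payload",
)
print(url)
```

The simplicity of the API is exactly why these teams love it; the hard part, as the architects point out, is the industrial-strength storage implementation behind it.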
Over a year ago, I embarked on a fantastic journey that led me to meet some of IBM’s most prestigious, cutting-edge customers around the world and talk to them about IBM’s SDN offering and strategy. These customers, no matter what industry, geography or business model, all had one important question on their mind: how do I optimize my data center operations to be more like the large “cloud operators” of this world? It had become clear to them that they could no longer operate their infrastructure in silos, and that most of the inefficiency came from cross-functional groups having to work together. While each group (i.e. server, storage and networking) did a fair job in its own area, when it came time to deploy a new system, utilizing elements across multiple areas drove the cost and time to deploy up sharply.
With this premise in mind, we paid close attention to detail when developing the new IBM Software Defined Network for Virtual Environment (SDN VE) controller. Our focus with SDN VE is on the system point of view, not the individual parts. For example, in many organizations network overlays (also known as network virtualization) are deployed and managed by the server virtualization team. Why? Simply because the virtual switch is part of the hypervisor and directly connects to the Virtual Machines (VMs). While a…
It seems that almost everywhere, the rush to “Cloud” and programmable infrastructure has generated a number of conversations around Software Defined ... Software Defined Datacenters (SDDC), Software Defined Compute (SDC), Software Defined Storage (SDS), Software Defined Networking (SDN) and Software Defined Infrastructure (SDI), to name the predominant references. Many companies, consultants and others have started using the terminology but actually mean different things. So, what does IBM mean when we talk about Software Defined?
At IBM we see a bigger picture than just the datacenter elements: we see a Software Defined Environment (SDE). Let's first talk about the progression of "Software Defined" and how we got here. Consider it a progression through Software Defined Environments 1.0, 2.0 and 3.0.
This progression has been under way for several years. Currently the industry is largely in the 2.0 phase and moving toward 3.0. Here is a brief description of the stages.
Software Defined Environments 1.0
To put this in perspective, consider that the IT industry is continuously on a transformational journey. The most recent transformation has been virtualization across all infrastructure platforms and elements. Virtualization started with Compute to better utilize compute resources, which generated better ROI on compute and software investments.…
IBM's latest study, Under cloud cover: How leaders are accelerating competitive differentiation, states that “Over the next three years, cloud’s strategic importance to business users is expected to double from 34 percent to 72 percent, even surpassing their IT counterparts at 58 percent."
What are market leaders doing differently?
Today cloud is a business reality: a phenomenon in which everything is done, executed, stored and distributed over the internet. The leading organizations, called pacesetters, have discovered cloud as a growth engine and have adopted it to the highest levels. These organizations draw valuable insights from their data and transform how they make decisions. Cloud enables them to tap expertise from across their entire ecosystem and gain competitive advantage through customer engagement, better decisions and deeper collaboration.
What are the other organizations losing?
The other organizations, still in the initial stages of cloud deployment, are falling behind the pacesetters: in reinventing customer relationships by 136 percent, in using analytics by 170 percent, and in leveraging expert knowledge across their ecosystems by 79 percent.
Are you thinking of moving to cloud but worried about the risks?
Adopting cloud at the highest level can raise a few concerns, such as security, speed and disruption to existing business, exposure to new competitors, and the need to develop and…
If you are like most of the clients I deal with, you are starting to recognize the storage part of your infrastructure represents a BIG opportunity for improvement in 2013 – in agility, in efficiency, and in cost. When demand (data growth) outpaces supply (ability of hardware vendors to increase areal density driving down costs) as dramatically as it has begun to do, something has to change in the way storage infrastructure is approached in order to help balance the equation again. That ‘change’ creates a perfect economic environment for vendor innovation resulting in creative new solutions for clients. If you have been paying attention to the storage space, you’ve noticed an increased investment pace as vendors explore technical innovations and try to explain these innovations to potential clients. One of my biggest frustrations though is when the industry can’t settle on terminology for describing a solution approach leaving clients thoroughly confused and paralyzed.
Think about how long it took us to settle on the term ‘cloud’. Most folks felt that ‘cloud’ was going to help them, but it has taken quite a while for the industry at large to understand what exactly ‘cloud’ means and how to get there. Software-defined Storage (SDS) is another of those terms that holds great promise for IT managers but suffers from a lack of definition. ESG analyst Mark Peters recently noted in an…
According to the recent IBM Center for Applied Insights study, the number of enterprises moving to cloud will more than double in the next few years as they seek to transition their IT from a cost center to a strategic center of business innovation. Cloud can improve the responsiveness of the IT infrastructure, and implementing the right cloud framework can help you achieve the desired business outcomes without increasing risk or cost. Think of flexibility, automation and integration; these are the enablers of efficient cloud delivery and the components of a Software Defined Infrastructure (SDI), an IT infrastructure that brings a high degree of flexibility and integrates data center workloads and resources by automating the entire data center. A Software Defined Infrastructure or Environment is an enabler and ideal architecture for private, public and hybrid clouds, where enterprises take advantage of virtualization not just in servers but also in networking and security, storage and availability, and management and automation. With its flexibility and automation capabilities, a Software Defined Environment (SDE) is a vital component of cloud that enables data center administrators to use a single graphical user interface to do everything from deploying virtual machines to assigning storage to configuring networks, hence…