IBM Cloud Orchestrator 2.5 comes with a set of interesting new features.
First of all, support for OpenStack Kilo: this opens the door to a set of very interesting scenarios related to software-defined environments (think of the Neutron capabilities in Kilo). Moreover, you can now leverage either the OpenStack distribution provided by IBM (IBM Cloud Manager with OpenStack 4.3) or another OpenStack distribution based on Kilo. Orchestrating workloads on a non-IBM OpenStack distribution no longer relies on the Public Cloud Gateway.
The list of supported public clouds has been enriched with the addition of Microsoft Azure: from the IBM Cloud Orchestrator self-service user interface you can now register Microsoft Azure regions and manage deployment artifacts and resources.
The pattern engine is now based on OpenStack Heat, with no more proprietary technology involved; the user experience has been enhanced, allowing you to store and select Heat templates from the self-service UI.
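To make the Heat-based pattern engine concrete, here is a minimal Heat Orchestration Template (HOT) of the kind that can be stored and selected. This is a generic illustrative sketch; the image and flavor names are placeholders, not values from the product:

```yaml
heat_template_version: 2015-04-30

description: Minimal single-server stack (illustrative example)

parameters:
  image:
    type: string
    description: Name or ID of the image to boot
  flavor:
    type: string
    default: m1.small   # placeholder flavor name

resources:
  server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }

outputs:
  server_ip:
    description: First IP address of the server
    value: { get_attr: [server, first_address] }
```

The `2015-04-30` template version corresponds to the Kilo release on which this version of the product is based.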
The installation procedure has been simplified: the number of required servers has been reduced to lower hardware requirements, and a prerequisite checker has been added to enable fast detection of possible failure points.
For further information, visit the IBM Cloud Orchestrator 2.5 Knowledge Center.
The IBM Cloud Orchestrator 2.5 announcement letter is here.
I wanted to let everyone know that a Trial Virtual Machine is available for the SmartCloud Monitoring version 7.2 FP1 product. It provides a 90-day trial of the software to monitor your virtualized environment and includes the Capacity Planning tools for VMware and PowerVM. These tools can help you optimize your virtualized environment and save money.
Within a few hours you can have the Virtual Machine up and running and monitoring your Virtualized environment.
This is a great tool if you are working with a customer on a Proof of Concept. Or, if you are a customer, it is a really quick and easy way to evaluate the software.
The Trial includes the SmartCloud Monitoring product plus a little bit of extra content. It includes monitoring for:
PowerVM (including OS, VIOS, CEC, and HMC)
Citrix XenApp, XenDesktop, XenServer
Log File Monitoring
Agent Based and Agent-less Operating System monitoring
Integration with Tivoli Storage Productivity Center
Integration with IBM Systems Director
The trial also includes Predictive Analytics, Capacity Planning and Optimization for VMware and PowerVM
You can find the software at the following URL: https://www.ibm.com/services/forms/preLogin.do?source=swg-ibmscmpcvi2
If you have any questions or need assistance, you can send me an email at email@example.com
IBM made a significant commitment to OpenStack by joining the OpenStack Foundation as a Platinum Member. The IBM SmartCloud Orchestrator v2.2 product has adopted OpenStack to provide enterprises the functionality needed to effectively create and manage their cloud implementations.
The IBM Cloud Labs team is innovating in the area of cloud analytics. A new feature has been created named Information Hub for SmartCloud Orchestrator that adds exciting new reporting dashboards. The new feature will be available as an add-on at ISM Cloud MarketPlace.
The Information Hub dashboard has been designed to put information about the cloud infrastructure at the fingertips of cloud users, administrators, planners and decision makers. It provides usage trend graphs, determines when a critical resource will run out, and aggregates the information across multi-OpenStack environments. Additionally, the information is made available on mobile devices.
These capabilities improve the productivity for cloud users and administrators. It helps cloud capacity planners to see the pace of cloud adoption in the enterprise and plan ahead. Decision makers can take the information with them and make informed business decisions about the cloud infrastructure.
Is Cloud the next utility? Hmmm, there's an interesting thought. I know many of us are still trying to define "what is Cloud" and "what does Cloud mean for my business." I spend my days in IBM's Cloud & Smarter Infrastructure team working on bringing "Cloud Solutions" to our customers. This turns out to be simply helping customers use what they have, maybe with a little twist, to build out their data center in a more robust, efficient manner. It's all about helping them meet the needs of their business. IT teams have been doing derivations of "do more with less" for years, and as technology matures, specifically with virtualization and the management of virtual environments, IT teams are able to improve their quality of service and reduce their total cost of ownership while increasing the services they offer their consumers. How, you ask? IT managers everywhere are trying to figure out how to provide their services in a utility-like format utilizing "the Cloud." Think about how natural it is for us to expect that flipping on a light switch magically makes electricity flow and the light come on; what if we could select a service, and magically systems were provisioned, business processes established and the service made available - would that provide business value? I'll use this blog post to start you thinking about "Cloud as the Next Utility" and get you wondering if we do indeed have front-row seats to a Cloud Computing Revolution.
Should I get my electricity from the local power company, or do I invest in sustaining my own private electricity? As individuals focus on a greener way of life, they may be asking themselves that question as they focus on using what they have while minimizing their own carbon footprint. People have options that they can ponder to figure out what is best for their families, their life and their utility usage.
Organizations iterate over multiple decision points to provide the growing IT infrastructure required to run their business, and those decisions are continually influenced by many different things. Do they already have a large IT investment? Do their IT requirements expand and contract depending on current activities? Do they have a diverse investment in, and skills on, existing service management tools? These same questions can be asked regardless of the size of your IT shop - big and small companies alike have come to depend on IT services, and this dependency continues to grow in this computing revolution. One direction an IT manager might take is a hybrid approach. Such a strategy allows them to utilize their current investment in infrastructure, tools and skills to build out their own Private Cloud while keeping a Public Cloud vendor accessible for when their computing needs spike. Companies have options that they can ponder to figure out what is best for their company's business, their IT environments and their utility usage.
How is it that when I buy a new appliance I don't even think about whether it will work with my electrical outlets when I get home? Standardization of how electricity is delivered and consumed is ubiquitous and expected worldwide. As consumers, we buy a utility and expect that it will work with all our appliances to meet our needs.
Most companies have already made large investments in both infrastructure and tools, as well as the skills to maintain them. The need to deliver more with what is available requires technology that can bridge from existing environments to new environments regardless of the influences. By adopting and adhering to computing standards, tooling can utilize data from existing systems while meeting the needs and different delivery methodologies of the business in new ways. This means I can ease the coordination of complex tasks and workflows in my data center while leveraging the existing skills, processes and technology artifacts to deliver services in new, different ways. By employing computing standards in my data center strategy, I can be assured that I can rely on Public Cloud for my processing spikes, and I can utilize my older infrastructure while integrating new infrastructure - all with the same extensible tooling for deployment and management of compute, storage and networking. As you define the next milestones in your cloud strategy and roadmap, I would encourage you to investigate what is in use today in your data center and how you need to evolve and reuse that investment. IBM offers SmartCloud Orchestrator to manage your existing datacenter(s). It provides capabilities based on industry standards and can extend your current processes to Cloud, helping you bring Cloud Services to your users in the same self-service manner as flipping on a light switch or plugging in a new appliance. A utility that will work to meet your users' needs.
Until next time, keep your head in the Clouds.
This webcast will be held on June 27, 2013 at 11:00 AM ET USA
Click Here to Register
Virtualization, cloud computing, and the consumerization of IT continue to drive fundamental shifts in data center management priorities. Many organizations are implementing multi-hypervisor architectures and hybrid public and private cloud strategies.
Advanced automation and orchestration solutions are key to helping IT data center operations teams effectively manage increasingly complex enterprise computing environments.
Join this webinar to learn more about IBM SmartCloud Orchestrator, one of the industry’s first cloud solutions built on OpenStack technology. IBM SmartCloud Orchestrator supports a broad range of hypervisor, storage, network and compute resources, and lets companies line up the virtual and physical compute, storage and network resources of the cloud infrastructure of their choosing to quickly deploy and manage consistent, compliant cloud services.
About the Speaker:
Marco Sebastiani, Product Manager for IBM SmartCloud Orchestrator and Cloud Solutions, at IBM Tivoli/IBM Cloud & Smarter Infrastructure.
I am pleased to announce that we have a new public forum for SmartCloud Orchestrator where users can discuss technical topics related to the product and address questions they might have.
SmartCloud Orchestrator has just been released. It is IBM's new private cloud offering based on OpenStack and other cloud standards.
You can read more about it in the Announcement Letter and we would be very happy to see you join the SmartCloud Orchestrator beta program.
This forum is a discussion platform and does not replace IBM Support. I hope you find this forum useful and that it helps in the formation of an online user community.
Birgit Nuechter, Field Quality Manager for IBM SmartCloud Orchestrator
Please join the Tivoli User Community for a live Webcast, Best Practices in Data Center Infrastructure Management with the new IBM and Emerson Network Power Solution, on Tuesday, May 21, 2013 at 11:00 AM ET USA.
Click Here to Register Now!
IBM and Emerson Network Power have recently partnered to provide an end-to-end Data Center Infrastructure Management (DCIM) solution. This solution combines Tivoli's IT Service Management expertise with Emerson's real-time infrastructure optimization capabilities, enabling holistic management of the data center ecosystem. Join this live webcast to learn:
The definition of real-time DCIM
The state of the data center today
Primary challenges facing the data center
Benefits of DCIM and how DCIM is being adopted
How the combination of IBM Tivoli and Emerson Trellis delivers visibility, control, and automation to all of the components of the data center
Vikul Banta, Strategy and Product Management, IBM Software Group, Cloud & Smarter Infrastructure
Sean Nicholson, VP and General Manager, Worldwide IBM Tivoli Business, Emerson
The Official Tivoli User Community is the largest online and offline organization of Tivoli professionals in the world – home to over 160 local User Communities and dozens of virtual/global groups from 29 countries – with more than 26,000 members. The TUC community offers Users blogs and forums for discussion and collaboration, access to the latest whitepapers, webinars, presentations and research for Users, by Users and the latest information on Tivoli products. The Tivoli User Community offers the opportunity to learn and collaborate on the latest topics and issues that matter most. Membership is complimentary. Join NOW!
IBM SmartCloud Orchestrator, the first new private cloud offering based on OpenStack and other cloud standards, is now available. Users are looking for Cloud solutions that increase agility, deliver cost savings and offer a competitive advantage. IBM SmartCloud Orchestrator meets those needs:
Patterns of expertise learned from decades of successful Client and Partner engagements - SmartCloud Orchestrator captures best practices for complex tasks, abstracted rather than hardcoded. Best-practice KPIs, measurements and policies are built into the patterns to allow for semi-automated or automated vertical scaling up and down. Deploy applications rapidly with repeatable patterns across private and public clouds: SmartCloud Orchestrator enables third-party software deployments and custom pattern creation to “build once” and deploy across private and public clouds.
Robust, automated, high scale cloud provisioning - requested VMs will be up and running in under a minute using standard hardware
SmartCloud Orchestrator includes OpenStack!
End-to-end orchestration, bridging domains: cloud, infrastructure, back-end integration, processes, service processes, and more. Orchestration is dynamic at runtime, ensuring you always have the latest human and automated interactions.
Lower operational costs by leveraging existing hardware and hypervisors - a single management platform across different infrastructures reduces complexity and operational cost, and integrates compute, network, storage and application delivery to enable organizational integration.
Get started today!
SmartCloud Orchestrator Analyst and Press Coverage:
A Control Center for Next Generation IT Asset and Service Management
Please join the Tivoli User Community for a live Webcast and opportunity for Q&A, Tuesday, April 9, 2013 at 11:00 AM ET USA and 6:00 PM ET USA.
Register for 11:00 AM ET USA:
Register for 6:00 PM ET USA:
In 2012, IBM introduced the IBM SmartCloud Control Desk, a unified platform for IT Service Management. In February 2013, we released updates to SmartCloud Control Desk, adding more features and taking another step towards the vision of a true control center for your enterprise.
Key new features include:
- Ability to set up an internal “enterprise app store” with automated back-end fulfillment of the applications
- Map integration, which makes it possible to narrow down the geographic location of incidents and assign nearby resources
- “What-if” impact analysis to model potential changes to the environment before they happen and identify high risk
- Usability improvements that help IT become more
Chris Dittmer, Worldwide Sr. Market Manager, IBM Tivoli IT Service Management
CJ Paul, Senior Technical Staff Member and Chief Architect, IBM IT Service Management Solutions
The challenges of virtualized environments are driving the shift to greater integration of service management capabilities such as image and patch management, high-scale provisioning, monitoring, storage and security. Join us for this webcast to learn how organizations can realize the full benefits of virtualization to reduce management costs, decrease deployment time, increase visibility into performance and maximize utilization.
If you're in North America, register here for the April 16th session:
If you're in Asia Pacific, register for the April 23rd session:
My team and I have been heads down working to get Smart Cloud Orchestrator, our newest cloud offering, to market. Last week we had our annual Pulse conference in Vegas. I'm just recovering from its aftermath now and wanted to write a short blog about the experience. It should be no surprise that folks like James Governor of RedMonk offered some interesting perspectives, along with Infoworld and Wired. While I am very pleased to hear the overwhelmingly positive press coverage, I am truly stoked about the direct customer feedback I got during the event.
Between sessions, Vegas dinners, and the occasional shut-eye, I had a lot of customer meetings. Since we first announced our involvement with OpenStack, Chris Ferris, Todd Moore and I have been meeting with customers all over the world. Most of these discussions were with customers already working with OpenStack on their own. Last week, we got the band together again, meeting with customers both jointly and independently. What was interesting for me was that it's no longer just the bleeding-edge early adopters! Many customers are realizing that OpenStack is the future of the datacenter and they don't want to get left behind. Similarly, more and more of our enterprise customers have seen the benefits of DevOps and its relationship to cloud technologies. Things really have changed a lot during this past year!
While standardizing on the IaaS is a critical first step, I was thrilled to hear how many customers are using Chef. These arguably represent the second step towards the fruits of DevOps. It really feels like we're finally ready for the next step in this journey. Ironically, less than two weeks before Pulse, OpenStack Heat was voted in as a core OpenStack project after a year of incubation. Heat was started by Red Hat as an open source implementation of Amazon's CloudFormation, which enables users to easily combine multiple cloud resources together to form more meaningful solutions, applications, or services. Just as OpenStack compute moved past its original Amazon-compatible APIs onto its own truly open APIs, I expect we'll see the same evolution in Heat. In fact, there is already an OASIS standards technical committee working on this very problem, called TOSCA. I really think these two efforts need to converge so that TOSCA is the open standard specification and Heat is the open source reference implementation. The Heat team has been talking about this since its inception.
I really liked the way Jesse Andrews, one of the OpenStack founders, put it. Jesse has long used the analogy of the Linux kernel to describe OpenStack and does not want it to stray from this for its own good. When we talked about Heat last week he again used an analogy from Linux; this time he chose the Debian package manager tool APT to describe Heat as the package manager for the cloud operating system. I think this is a brilliant analogy, because the success of any operating system hinges upon the applications that run on it. Similarly, the value of cloud is in the applications or services that run on it.
I'm excited about Heat and I'm looking forward to the next OpenStack summit to discuss its evolution. Our Smart Cloud Orchestrator is all about open, reusable automation content. Be it native packages, Chef recipes/cookbooks, virtual images, TOSCA templates, or BPMN standards, we want our customers, partners, and open source communities to be able to share and reuse cloud automation. I hope Heat and TOSCA become the enablers for distributing and operating cloud applications and services. Anyone interested in helping with this, please contact me and join me next month at the Havana summit!
Determine the right cloud orchestration strategy to address the unique needs and pain points of your organization while increasing productivity and spurring innovation. And learn more about the recently announced orchestration capabilities from IBM that leverage OpenStack to manage heterogeneous hybrid environments. Sign up today!
Here at Pulse the best part, for me, is the client conversations. The efforts of clients to understand our IBM categories, and my efforts to understand the customers' scenarios, have led to interesting exchanges and raised some strange questions. Talking to a business partner, I found myself asking "What is the shape of your computation?" Does it look like a banana or a dolphin? A whale tail or a multi-drop jet? A rhizome or a Pacific atoll map?
Does it matter? It is certainly a useful insight to visualise the general shape of how the business flows unfold. When using a workload automation tool, each action becomes a unit of work. These units of work are linked together by the conditions and dependencies that sequence their execution in the right order. When large graphs of such units of work are built and executed, the layout of thousands of small units of work can take the most diverse shapes, and that shape tells something about what is being accomplished. The case of this Business Partner and his project with Big Data and massively parallel micro-ETLs is no exception to that rule.
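As a toy sketch of this idea (not any product's actual engine; the job names are invented for illustration), the dependency-driven ordering of units of work is simply a topological sort of a graph:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Each unit of work lists the units it depends on; the scheduler's job
# is to execute them in an order that respects every dependency.
units = {
    "extract_sales":  [],
    "extract_stock":  [],
    "clean_data":     ["extract_sales", "extract_stock"],
    "compute_kpis":   ["clean_data"],
    "publish_report": ["compute_kpis"],
}

# One valid execution order; drawing this graph gives the "shape"
# of the computation discussed above.
order = list(TopologicalSorter(units).static_order())
print(order)
```

With thousands of such units, the layout of this graph is exactly the "computation shape" the post talks about.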
Big Data projects have shown their capability to extract insight from data through powerful operators and clever data transformation, but often the result needs data cleaning and preparation, and looks experimental unless an important polishing effort is applied. In fact, multiple analysts have recognized the need for Big Data to become more automated and repeatable in order to serve as key input into decision making, especially if the kind of decision making is disruptive to the mainstream.
That is where the origin of the data sources, the sequence of the processing steps and the conditions that link the local "islands of processing" become important, to stabilise the global calculation map, share a common understanding about how insight is constructed and lead to agreement about the right way to proceed. This interesting article warns that the quantity of applications and systems involved in information management is the first obstacle to address, and it can easily be worsened by the use of powerful Big Data systems.
So the shape of your computation indeed provides a visual cue for resolving the next challenge of fruitful Big Data usage, and it is probable that by using such graphical representations we will collectively build better pattern recognition and discussion capabilities, like "Oh yes, your streams are too thin, you might have forgotten some data correlation."
Someone might think a rocket scientist is needed to display that "computation shape," but Workload Automation solutions provide such images automatically, among other benefits, once business processes are described in them. As business processes are described once to be repeatedly executed, they will be triggered automatically, with a lot of fringe benefits including:
- The general visual aspect of how work unfolds, either in the present, the past, or the foreseeable future
- Snapshot or trending statistics about how each piece of the total flow behaved in the past
- Reports on the evolution of the definitions of the flows, and about who changed what
- Automated tracking and alerting over the most critical flows, those with associated SLAs and possible penalties.
In short, Workload Automation provides governance over even the most complex systems and a set of tools designed to take the conversation to the next level -- above daily operations and experimental setups -- whether the system handles Big Data, SAP jobs or a robotic tape arm. Providing visibility, control and automation over numerous business flows is called Unattended Automation, and the new pressures created by Cloud and Big Data have raised attention on it considerably. Next time you consider implementing a business application -- in addition to a picture of its architecture and a map of the community of people who contribute to it -- think about its computation shape: how you think it looks, and how you would like it to be.
Even if you weren’t at IBM Pulse, trending right now on the web is chatter about IBM’s announcement to leverage open technologies pervasively in the development of its cloud offerings.
With SmartCloud Orchestrator, an integrated platform to standardize and manage heterogeneous hybrid environments, IBM is launching its first commercial offering based on OpenStack. And with SmartCloud Orchestrator, IBM is also redefining the scope of orchestration to encompass the streamlining and integration of all resources, workloads and services. The need for this kind of capability is addressed in the latest IDC report, which discusses why it will become a priority as organizations look to improve operational efficiency and reduce the mess and complexity of growing environments. Standardizing and automating cloud services includes integrating performance and capacity management, usage and accounting, and rich image lifecycle management. In addition, services and tasks such as compute and storage provisioning, configuration of network devices, and integration with service request and change management systems and processes can all be streamlined. Out-of-the-box robust workload patterns also enable fast development of cloud services.
With SmartCloud Orchestrator, it’s all brought together to seamlessly manage heterogeneous environments, allowing organizations to build on existing investments and open source technologies. If you haven’t had time to catch up on what’s trending, here’s the short version of how IBM is helping to advance the cloud to drive innovation.
TUC Webcast: Maximize the Benefits of Virtualization for Greater ROI
Please join the Tivoli User Community
for a live Webcast and opportunity for Q&A, Thursday, February 21, 2013 at 11:00 AM ET USA.
Click here to reserve your webcast seat now
About this Webcast:
Challenges with virtualized environments are driving the shift to increased integration of service management capabilities such as image and patch management, high-scale provisioning, monitoring, storage and security. In this webcast, learn how organizations can realize the full benefits of virtualization, including reduced management costs, decreased deployment time, increased visibility into performance and maximized utilization.
About the Speaker:
Matthew Rodkey, Product Manager focusing on Tivoli Cloud Solutions
Matt Rodkey is a Product Manager focusing on Tivoli Cloud Solutions. In 13 years with IBM, Matt has worked in a number of areas in the Tivoli portfolio, including Security, Monitoring, and Service Delivery.
Please join the Tivoli User Community for their next webcast on IBM SmartCloud Consumer Monitoring. This webcast will be held on Tuesday, January 22, 2013 at 11:00 AM ET, USA.
IBM SmartCloud Consumer Monitoring is a new product developed for cloud consumers and service providers. An innovative new architecture embeds monitoring technology in library images, so newly deployed VMs are discovered and monitored within seconds of being instantiated. “Fabric Nodes” use innovative distributed database technology to display data for nodes and applications, or logical groupings of them, and run as virtual machines alongside the application VMs. New fabric nodes come online as needed, and shut themselves down when no longer needed, ensuring optimum use of resources.
ABOUT THE SPEAKER:
Ben Stern, Executive IT Specialist, IBM Cloud & Virtualization products
Stern is an Executive IT Specialist. For the past several years, he has defined Best Practices for Tivoli's SAPM portfolio. Most recently, he has taken on a Best Practices role for the Cloud and Virtualization products.
Even though server proliferation can be partially addressed through virtualization, the usage of virtual and physical assets becomes complex to assess or manage accurately. Cost management is crucial to integrate into overall service management, especially with a move into cloud. This webcast discusses how to implement a financial management roadmap and the key requirements for cloud transparency: the ability to allocate IT costs, usage, and value.
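As a toy illustration of one piece of cost transparency, allocating a shared infrastructure cost to consumers in proportion to their metered usage can be sketched in a few lines (the team names, rates and figures are invented, not from the webcast):

```python
# Toy chargeback: split a shared monthly pool cost across teams
# in proportion to the hours of capacity each team consumed.
usage_hours = {"finance": 1200, "marketing": 300, "dev_test": 500}
monthly_cost = 10000.0  # total cost of the shared pool, in dollars

total_hours = sum(usage_hours.values())  # 2000 metered hours overall
chargeback = {team: round(monthly_cost * hours / total_hours, 2)
              for team, hours in usage_hours.items()}
print(chargeback)  # finance carries 60% of the cost (1200 of 2000 hours)
```

Real chargeback models also weight different resource classes (CPU, storage, network) and tiers of service, but the proportional-allocation idea is the same.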
Are you a cloud addict?
Do you want to be always updated on the latest technologies?
Do you want to contribute to development decisions in the IBM SmartCloud Provisioning world?
Are you just curious to see what's cooking in IBM SmartCloud Provisioning development?
If you answered "yes", have a look at the Upcoming Feature page in the IBM SmartCloud Provisioning wiki. It is starting to be populated with recorded demos and screenshots showing what is currently on the developers' minds.
Each page is equipped with a set of buttons to immediately and easily provide your feedback directly to the development team.
More ideas, thoughts, videos will be added shortly on the same page, so stay tuned!
...Just do not get lazy, click the links!!!
My three favorite things about OpenStack are
- The People
- The Innovation
- The Interoperability
San Diego was my second OpenStack summit. Many of the same faces were in the design summit sessions I attended, but there were many new faces as well. One of the most exciting observations from the Folsom design summit was the incredible talent pool assembled. The Grizzly summit was no different – it’s great to interact with so many incredibly smart, deep and experienced people. I’m convinced that a single company could never amass such a collection of quality talent for one project. I guess it’s no wonder they’re saying OpenStack is the fastest growing open source project ever.
I must apologize in advance, because I am sure to miss someone, but I want to tell you about some of the people I interacted with in the nova, glance, and cinder design sessions. Over the past few months I’ve really been impressed with the PTLs (project technical leads). They’re very smart, highly motivated, and excellent facilitators. The design sessions invariably get into open debate, but productive debate. I was impressed with the PTLs’ natural ability to channel the discussion to bring out the key issues and land on some concrete next steps.
I got to meet Microsoft’s Peter Pouliot, who’s heroic and
tenacious efforts successfully delivered HyperV support after a rather dodgey mess earlier in the year. Peter is not your stereotypical Microsoft
developer. He’s an open source guy
through and through. It’s clear that his
personal spirit had a lot to do with corralling the community to deliver
quality code in a very short time frame.
It was great to meet Peter and some of his non-Microsoft
collaborators. Great job, guys!
I also had the pleasure to meet with some of VMware’s
developers and not just those acquired via the billion dollar Nicira
acquisition. The Nicira guys are great –
no question but I was also very pleased to meet the VMware developer who
completely rewrote the less than adequate VMware compute driver. I hope to work closely with them to ensure
the hypervisor is well supported and as interoperable as possible with other
proprietary and open source technologies.
Of course, I can’t speak of OpenStackers without mentioning
RackSpace. Over the past two summits, I
got to interact with a number of RackSpace developers, aka Rackers. I’ve got to hand it to them, they really do
have a great bunch of people and definitely
bring a massive scale service provider
perspective to the discussion. Of
course, being an IBMer myself, I can’t help but bring the enterprise customer
perspective into the mix. I think
OpenStack benefits greatly from these two perspectives brought together in the open.
OpenStack has done a great job defining an extensible
framework for IaaS. This flexibility not
only helps accommodate the varied needs from enterprise to service provider,
but it also enables a massive sea of innovation. Since the Nicira acquisition there’s been a
lot of attention on the innovation around software defined networking and
quantum, the OpenStack project that provides the abstraction for a variety of
implementations ranging from proprietary,
to pure open source like Open vSwitch, to traditional standard
networking equipment. I think storage is
even hotter than networking these days, with a slew of vendors combining
commodity 10GbE switches and commodity Intel servers, mixing SSDs
and spinning disks to provide new approaches to storage for virtualized
environments. Of course, software plays a
critical role in many of these virtualized storage solutions. DreamHost’s open source distributed file system Ceph has been getting a lot of interest.
Enterprise storage vendors like NetApp, IBM, and HP have also
contributed cinder drivers to support their products within OpenStack
clouds. There were also a number of
summit discussions about exposing the different backend implementations of the
abstractions with different qualities of service. Some people, including one of my developers,
have begun to use “Volume Types” as a way to let users choose the kinds of
volumes they need. I believe this is
critical for compute clouds to cover the broadest spectrum of workloads. Of course this principle applies to other
resources and not just cinder volumes.
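To make the "Volume Types" idea concrete, here is a minimal, illustrative sketch (not actual Cinder scheduler code) of how a requested type's extra specs could be matched against the capabilities each storage backend reports. The type names, spec keys, and backends are assumptions made up for the example.

```python
# Illustrative sketch only: matching a volume type's extra specs against
# the capabilities each backend advertises. Keys and values are invented.

def pick_backends(extra_specs, backends):
    """Return names of backends whose capabilities satisfy every spec."""
    matches = []
    for name, capabilities in backends.items():
        # A backend qualifies only if it matches every requested spec.
        if all(capabilities.get(k) == v for k, v in extra_specs.items()):
            matches.append(name)
    return matches

# A hypothetical "gold" volume type asking for SSD-backed, high-QoS storage.
gold_type = {"drive_type": "ssd", "qos": "high"}

backends = {
    "lvm-sata": {"drive_type": "hdd", "qos": "low"},
    "san-ssd":  {"drive_type": "ssd", "qos": "high"},
}

print(pick_backends(gold_type, backends))  # -> ['san-ssd']
```

The same matching principle extends beyond volumes to any resource where users should be able to pick a quality of service without knowing which backend provides it.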
I saw a lightning talk about a nova driver for SmartOS, a cool open source project from Joyent combining Solaris zones, ZFS, and KVM. ARM and Power CPU
support were presented, as well as a couple of bare metal solutions. Intel, KVM, and OpenStack certainly make a
nice combination, but there’s so much more that’s possible with OpenStack.
Finally, perhaps the most important thing about an OpenStack
cloud is interoperability. Starting with
the hypervisor, IBM has a solution that enables interoperability of images,
volumes, and networks across Xen, KVM, VMware and Hyper-V. We had a few sessions where we
discussed how we can bring the same interoperability to OpenStack. To start with, we need to be able to register
read-only cinder volumes as glance images.
Next, to ensure we can scale out, we need to be able to register multiple copies of the same image.
Finally, to take advantage of performance we need to abstract the clone
operation to enable Copy on Write (CoW), Copy on Read (CoR), as well as the
current local cache plus CoW mechanism for backwards compatibility and to
support 1GbE networks. Combining these
will enable images to work across multiple different hypervisors.
We also need interoperability with existing images which
means VMware and Amazon as the two most common forms of images. Today, it’s quite easy to automate simple
image formatting differences, but the challenge is in the assumptions made by
the images. The current direction for
OpenStack is to use config drive v2 to pass instance metadata to the
guest, which is responsible for pulling key system configuration such as hostnames,
credentials, and IP addresses. Typical VMware
images, on the other hand, generally expect either a push model, where the
hypervisor manipulates the filesystem prior to booting the image, or via their
guest agent, VMware tools.
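The pull model described above can be sketched in a few lines: after mounting the drive labelled "config-2", the guest reads openstack/latest/meta_data.json and applies the settings itself. The sample document below is illustrative, not captured from a real cloud.

```python
import json

# Hedged sketch of the config drive v2 "pull" model. A guest agent such as
# cloud-init mounts the config drive and parses meta_data.json; the JSON
# below is a made-up sample in the general shape of that file.
sample = """{
  "uuid": "d8e02d56-2648-49a3-bf97-6be8f1204f38",
  "hostname": "demo-vm",
  "public_keys": {"mykey": "ssh-rsa AAAAB3Nza example-key"}
}"""

def pull_system_config(meta_data_json):
    """Extract the settings the guest is responsible for applying."""
    meta = json.loads(meta_data_json)
    return meta["hostname"], sorted(meta["public_keys"])

hostname, key_names = pull_system_config(sample)
print(hostname, key_names)  # demo-vm ['mykey']
```

The point of the contrast in the text is that this logic lives inside the guest, whereas VMware-style images expect the hypervisor or a guest agent to push the equivalent settings in.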
To make matters worse, OpenStack currently assumes
different image formats for each supported hypervisor. One of the sad punchlines from Troy Toman’s keynote was that RackSpace’s private cloud distro named Alamo does not interoperate
with their public cloud even though they’re both OpenStack. The good news is that, as Troy went on to
say, the time has come to focus on interoperability.
I got into a great conversation with Jesse Andrews, one of
the original OpenStack guys now at Nebula.
He described the approach to image interoperability by enabling cloud
operators to provide custom image workers at image ingestion time. This enables cloud providers to register
custom image processing code that gets called whenever an image is uploaded to
Glance. The simplest case of this is to
convert image formats to enable Alamo KVM images to run on RackSpace’s Xen
based public cloud.
Fortunately, IBM’s SmartCloud Provisioning (SCP) includes
some image management technologies which can help with the more challenging
problems mentioned above. Today’s SCP
2.1 will interrogate images in the library and check for cross hypervisor
compatibility. Users gain visibility
into this information and can optionally automate fixes wherever possible. We also use this technique to detect the
presence of a critical guest agent.
This brings me to one of my favorite little open source
projects, cloud-init, created by Scott Moser at Canonical. If only it weren’t GPL ;-). Many OpenStackers are using cloud-init to
automate the system configuration pull from config drive v2 mentioned above. This little bootstrap can do much, much more,
but this is certainly a great job for this trusty little tool. Unfortunately, it’s Linux only. It has even been made to work with Fedora and
will likely be included in RHEL. Since
we cannot use GPL code in IBM products, we have a similar bootstrap for both
Windows and Linux guests. We’re working
with our lawyers to get approval to contribute this code to cloud-init. Of course,
if Canonical wants to use a more commercially friendly license, as
OpenStack has done, then I could spend less time with lawyers and more time
hacking code ;-).
The beauty of this little bootstrap is its simplicity. This simplicity enables us to automatically
inject the bootstrap into Windows and Linux images. This will let us automatically fix up any old VMware,
or Hyper-V image so that it works on OpenStack.
This is a critical first step towards interoperability.
OpenStack is truly becoming an industry changing
and historic project. With so many
incredibly talented people from countless companies across the globe it’s no
wonder there is so much innovation in the community. I’m really happy to be a part of this growing
community. Together I believe we can
change the industry for the better. If
you would like to be part of this growing and innovative project, check out the
“community” link at www.openstack.org.
Also, we would like to invite you to check back here for future blogs on
OpenStack and IBM’s involvement. OpenStack
is a big part of IBM’s open cloud strategy and we want to be sure to keep you
up to date on our progress.
DevOps has become something of a buzzword lately but the idea behind it can be truly powerful. Using a combination of technology and best practices to increase collaboration between development and operations teams can accelerate the application development lifecycle while improving software quality and reducing costs.
For many, the development process has become more complex and segregated from operations. Factors such as inefficient communications, manual processes and poor visibility into the deployment process result in production bottlenecks as well as subpar quality throughout the development and delivery cycle.
To address these challenges, organizations have often turned to ad hoc and siloed efforts. And so gaps still exist due to a lack of integration across people, processes and tools. The reality is that an effective DevOps solution requires an integrated approach of continuous delivery that optimizes and accelerates the application lifecycle in every phase: development, testing, staging and production.
What this means is that changes made in development are continuously built, integrated and tested for function, performance, systems verifications, user acceptance, and then staged, ready for production. And it can all be brought together through an integration framework that can automate the individual tasks across the various stages of the pipeline and continuously deliver changes, providing end-to-end lifecycle management. Continuous automation is necessary in the following key areas:
• Continuous integration provides faster validation and delivery of code changes via automated, repeatable execution of build processes with continuous feedback.
• Continuous deployment provides on-demand environment configuration and the ability to continuously deploy code and configure middleware.
• Continuous testing automates testing in production-like environments.
• Continuous monitoring increases visibility into application performance and provides data to trace and isolate product defects.
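The staged flow described above can be sketched as a tiny pipeline driver: every change moves through the same automated stages in order, and a failure at any stage stops promotion toward production. The stage names and toy checks here are assumptions for the example, not drawn from any specific IBM product.

```python
# Illustrative sketch of a continuous delivery pipeline: each change is
# promoted stage by stage, stopping at the first failure. Stage names are
# invented for the example.

STAGES = ["build", "integration-test", "deploy-to-staging",
          "acceptance-test", "monitor"]

def run_pipeline(change, stage_runners):
    """Run each stage in order; return (completed_stages, succeeded)."""
    completed = []
    for stage in STAGES:
        if not stage_runners[stage](change):
            return completed, False  # stop promotion on first failure
        completed.append(stage)
    return completed, True

# Toy runners: pretend the acceptance tests fail for this change.
runners = {s: (lambda c: True) for s in STAGES}
runners["acceptance-test"] = lambda c: False

done, ok = run_pipeline({"id": 42}, runners)
print(done, ok)  # ['build', 'integration-test', 'deploy-to-staging'] False
```

Because every change takes the same path through progressively richer, production-like environments, a defect is caught at the earliest stage that can detect it rather than after deployment.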
With an automated process for moving application changes through progressively richer test environments that mirror the production environment, chances for error and roll back are greatly reduced.
The result is increased visibility into the delivery pipeline, standardized communication between Dev and Ops and more efficient and accurate delivery of software projects. And the delivery process can scale dynamically as business needs grow.
Here’s how IBM is addressing DevOps: the launch of SmartCloud Continuous Delivery, an agile, scalable and flexible solution for end-to-end lifecycle management that allows organizations to reduce software delivery cycle times and improve quality. SmartCloud Continuous Delivery is also available on Jazz.net.