We know that cloud computing offers a myriad of benefits like rapid service delivery and lower operating costs. But it can also lead to challenges in data governance, access control, activity monitoring and visibility of dynamic resources—in essence, all aspects of IT security.
The IT organization must be able both to deliver services quickly enough to meet the demands of the business and to provide high levels of security and compliance. In the past, service delivery itself was typically the bottleneck in providing new services; now, with automated cloud and self-service delivery models, the teams responsible for change management and security can quickly become the bottleneck because of manual processes and siloed tools.
For example, organizations need the ability to patch all of their systems, both physical and virtual, whether distributed or part of a cloud. Operations teams need better insight into and control of deployed virtual systems, including OS patch levels, installed middleware applications and related security configurations. And offline and suspended VMs that haven't been patched in weeks or months can create serious security exposures.
A holistic approach is needed that addresses rapid provisioning of services and automation of key security and compliance requirements. Together these capabilities can keep you in control of rapidly changing cloud environments. First let’s look at the capabilities needed in a cloud provisioning solution.
Cloud provisioning should combine application and image provisioning for workload optimized clouds and deliver:
· Reduced costs with automated high-scale provisioning, multiple hypervisor options and hardware of choice
· Accelerated time-to-market with standardized pattern-based deployment for workload optimized cloud
· Image sprawl prevention with in-built advanced image lifecycle management capabilities
· Ease of adoption and clear roadmap to move to advanced cloud capabilities
Second, a unified endpoint management approach is required to provide visibility and control of your systems, regardless of context, location or connectivity, and needs to deliver:
· Heterogeneous platform support with seamless patch management for multiple operating systems, including Microsoft Windows, Unix, Linux and Mac OS, as well as hypervisor platforms
· Automatic assessment and “single click” remediation, which shortens time to compliance by automatically identifying necessary patches and enabling users to target and remediate endpoints quickly
· Enterprise-class scalability and security, with proven scalability plus fine-grained authorization and access control capabilities
Explore these capabilities with the new IBM SmartCloud Patch Management.
IBM Cloud Orchestrator 2.5 comes with a set of interesting new features.
First of all, there is support for OpenStack Kilo. This opens the door to a set of very interesting scenarios related to software-defined environments (think of the Neutron capabilities in Kilo). Moreover, you can now leverage either the OpenStack distribution provided by IBM (IBM Cloud Manager with OpenStack 4.3) or another OpenStack distribution based on Kilo. Orchestrating workloads on a non-IBM OpenStack distribution no longer relies on the Public Cloud Gateway.
The list of supported public clouds has been enriched with the addition of Microsoft Azure: from the IBM Cloud Orchestrator self-service user interface you can now register Microsoft Azure regions and manage deployment artifacts and resources.
The pattern engine is now based on OpenStack Heat, with no proprietary technology involved; the user experience has been enhanced, allowing you to store and select Heat templates from the self-service UI.
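To give a flavor of what can now be stored and selected from the self-service UI, here is a minimal, generic Heat Orchestration Template (HOT). It is an illustrative sketch, not taken from the product documentation; the image name is an assumption:

```yaml
heat_template_version: 2015-04-30

description: >
  Minimal single-server pattern (illustrative only; the image
  and flavor names are assumptions, not product defaults)

parameters:
  flavor:
    type: string
    default: m1.small

resources:
  server:
    type: OS::Nova::Server
    properties:
      image: rhel-6.6-base          # assumed image name
      flavor: { get_param: flavor }

outputs:
  server_ip:
    description: First IP address of the deployed server
    value: { get_attr: [server, first_address] }
```

The `heat_template_version: 2015-04-30` line corresponds to the Kilo release, which is what makes templates like this usable with the new Kilo-based engine.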
The installation procedure has been simplified: the number of servers needed has been reduced to lower the hardware requirements, and a prerequisite checker has been added to enable fast detection of possible failure points.
For further information, visit the IBM Cloud Orchestrator 2.5 knowledge center.
The IBM Cloud Orchestrator 2.5 announcement letter is here.
IBM® Tivoli® Service Automation Manager (TSAM) has delivered a new extension to configure extra disks, in addition to the boot disk, when requesting virtual machines within a project on VMware servers. Downloading the installation package from the Integrated Service Management Library and installing it on top of the TSAM 7.2.2 platform enables the cloud administrator to prepare and manage a multi-tenant, customer-segregated environment for hosting the additional disks. In particular, the cloud administrator can select the VMware datastores to use for additional disks, grouping them into TSAM storage pools that can then be associated with one or more customers (*), meaning that only those customers can carve storage from those datastores. The administrator can also limit the amount of storage that each customer can use on a TSAM storage pool. Finally, the cloud administrator can flag this type of TSAM storage pool to be thin provisioned.
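To make the pool/customer/quota relationship concrete, here is a small Python sketch. The class and method names are hypothetical, purely a model of the described behavior, not the actual TSAM API:

```python
class StoragePool:
    """Illustrative model of a TSAM-style storage pool grouping VMware datastores."""

    def __init__(self, name, datastores, thin_provisioned=False):
        self.name = name
        self.datastores = list(datastores)
        self.thin_provisioned = thin_provisioned
        self.customers = {}  # customer name -> quota in GB (None = unlimited)
        self.used = {}       # customer name -> GB carved so far

    def associate(self, customer, quota_gb=None):
        # Only associated customers may carve storage from this pool.
        self.customers[customer] = quota_gb
        self.used.setdefault(customer, 0)

    def carve(self, customer, size_gb):
        if customer not in self.customers:
            raise PermissionError(f"{customer} is not associated with pool {self.name}")
        quota = self.customers[customer]
        if quota is not None and self.used[customer] + size_gb > quota:
            raise ValueError(f"{customer} would exceed its {quota} GB quota")
        self.used[customer] += size_gb
        return size_gb


pool = StoragePool("gold", ["datastore1", "datastore2"], thin_provisioned=True)
pool.associate("CustomerA", quota_gb=100)
print(pool.carve("CustomerA", 60))  # prints 60
```

An unassociated customer cannot carve storage at all, and an associated one is capped at its per-pool quota, which mirrors the segregation described above.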
Once the cloud administrator has prepared the environment, the users of the cloud can request virtual machines equipped with extra disks, in addition to the boot disk, taken from one of the TSAM storage pools they are authorized to use. The extension automatically formats and attaches the disks to the virtual machines, so when the users log in they can start working right away.
The life cycle of the extra disks is tied to the life cycle of the virtual machine to avoid any data inconsistency, which means that they are saved, restored, and deleted together with the virtual machine.
The Extension for Additional Disk has some gaps that should be filled in a future release: users cannot expand extra disks, and cannot modify the configuration of a virtual machine to attach or detach extra disks.
(*) This article focuses on a public cloud solution, where the service provider sells services to its customers. The cloud administrator is the administrator of the entire cloud platform.
IBM made a significant commitment to OpenStack by joining the OpenStack Foundation as a Platinum Member. The IBM SmartCloud Orchestrator v2.2 product has adopted OpenStack to provide enterprises the functionality needed to effectively create and manage their cloud implementations.
The IBM Cloud Labs team is innovating in the area of cloud analytics. A new feature has been created named Information Hub for SmartCloud Orchestrator that adds exciting new reporting dashboards. The new feature will be available as an add-on at ISM Cloud MarketPlace.
The Information Hub dashboard has been designed to put information about the cloud infrastructure at the fingertips of cloud users, administrators, planners and decision makers. It provides usage trend graphs, determines when a critical resource will run out, and aggregates information across multi-OpenStack environments. Additionally, the information is made available on mobile devices.
These capabilities improve productivity for cloud users and administrators, help cloud capacity planners see the pace of cloud adoption in the enterprise and plan ahead, and let decision makers take the information with them to make informed business decisions about the cloud infrastructure.
Orchestration can be one of those ambiguous concepts in cloud computing, with varying definitions on when cloud capabilities truly advance into the orchestration realm. Frequently it’s defined simply as automation = orchestration.
But automation is just the starting point for cloud. And as organizations move from managing their virtualized environment, they need to aggregate capabilities for a private cloud to work effectively. The automation of storage, network, performance and provisioning are all aspects handled in most cases by various solutions that have been added on over time as needs increase. Even for organizations that take a transformational approach -- jumping to an advanced cloud to optimize their data centers -- the management of heterogeneous environments with disparate systems can be a challenge not simply addressed by automation alone. As the saying goes, “If you automate a mess, you get an automated mess.”
The need to orchestrate really becomes clear when various aspects of cloud management are brought together. The value to the organization at this stage of cloud is simplifying the management of automation – otherwise a balancing act to manage multiple hypervisors, resource usage, availability, scalability, performance and more -- based on business needs from the cloud, with the ultimate goal of delivering services faster.
With orchestration, the pieces are woven together and can be managed more effectively to ensure smooth and rapid service delivery -- and delivered in a user-friendly catalog of services easily accessible through a single pane of glass. In essence, cloud orchestration = automation + integration + best practices.
Without cloud orchestration, it's difficult to realize the full benefits of cloud computing. The stitching together of best practices and automated tasks and processes becomes essential to optimize a wide spectrum of workload types.
In addition to rapid service delivery, the benefit of orchestration is that there can be significant cost savings associated with labor and resources by eliminating manual intervention and management of varied IT resources or services.
Some key traits of cloud orchestration include:
• Integration of cloud capabilities across heterogeneous environments and infrastructures to simplify, automate and optimize service deployment
• Self-service portal for selection of cloud services, including storage and networking, from a predefined menu of offerings
• Reduced need for intervention to allow lower ratio of administrators to physical and virtual servers
• Automated high-scale provisioning and de-provisioning of resources with policy-based tools to manage virtual machine sprawl by reclaiming resources automatically
• Ability to integrate workflows and approval chains across technology silos to improve collaboration and reduce delays
• Real-time monitoring of physical and virtual cloud resources, as well as usage and accounting chargeback capabilities to track and optimize system usage
• Prepackaged automation templates and workflows for most common resource types to ease adoption of best practices and minimize transition time
In short, many of the capabilities that we associate with cloud computing are really elements of orchestration. In an orchestrated environment, organizations gain tools to manage their cloud workloads through a single interface, providing greater efficiency, control and scalability. As cloud environments become more complex and organizations seek greater benefit from their computing resources, the need for sophisticated management solutions that can orchestrate across the entire environment will become ever clearer.
Learn more about how cloud orchestration capabilities can help your business. And join the Cloud Provisioning and Orchestration development community to test out the latest cloud solutions and provide feedback to impact development.
Tivoli Usage and Accounting Manager (TUAM) development are pleased to announce the release of the IBM® Tivoli® Service Automation Manager (TSAM) - Extension for Usage and Accounting v1.0.
This TSAM extension delivers cloud cost management capability by enhancing the integration, reporting and services between TUAM and TSAM. The extension allows cloud users to view historical invoice reports that show the charges associated with each project.
The Usage and Accounting v1.0 extension provides the following features:
- Easier Cloud Usage Report Access - Enabling Cloud users to access and view historical Usage and Accounting Manager Cognos reports directly from TSAM. Single sign on is configured between the two systems to allow for easier report access.
- Role-based Report Security - Security access can now be configured to ensure that users that belong to the TSAM Cloud security groups can only access the TUAM Cognos reports that they are assigned to. For example, users that belong to the Cloud Customer and Cloud Team administrator user groups in TSAM can now be assigned access to specific TUAM Cognos reports.
- Account Code Report Security - Account code security is used for customer and team reporting data segregation based on cloud roles in TSAM. This is achieved by data synchronization between TSAM and TUAM which involves aligning TSAM entities such as customers, teams, security groups and users with TUAM entities such as clients, users and user groups. After the synchronization process has completed, account code security is applied to the reports that TSAM users access.
The following table shows the evolution of the TSAM/TUAM integration.
The diagram below shows how the Usage and Accounting v1.0 extension facilitates the integration between TSAM and TUAM.
For more information about the Usage and Accounting v1.0 extension, log on to the Information Center.
The extension is available free of charge and is part of a TUAM fix pack, which is available on Fix Central.
A Rates Preview and Charges Preview of costs is now available on the ISM Library as fully supported.
I wanted to let everyone know that a Trial Virtual Machine is available for the SmartCloud Monitoring version 7.2 FP1 product. It provides a 90-day trial of the software to monitor your virtualized environment and includes the Capacity Planning tools for VMware and PowerVM. These tools can help you optimize your virtualized environment and save money.
Within a few hours you can have the Virtual Machine up and running and monitoring your Virtualized environment.
This is a great tool if you are working with a customer on a proof of concept. Or, if you are a customer, it is a really quick and easy way to evaluate the software.
The Trial includes the SmartCloud Monitoring product plus a little bit of extra content. It includes monitoring for:
- PowerVM (OS, VIOS, CEC, and HMC)
- Citrix XenApp, XenDesktop, XenServer
- Log File Monitoring
- Agent-based and agentless operating system monitoring
- Integration with Tivoli Storage Productivity Center
- Integration with IBM Systems Director
The trial also includes Predictive Analytics, Capacity Planning and Optimization for VMware and PowerVM.
You can find the software at the following URL: https://www.ibm.com/services/forms/preLogin.do?source=swg-ibmscmpcvi2
If you have any questions or need assistance, you can send me an email at email@example.com
I am pleased to announce that we have a new public forum for SmartCloud Orchestrator where users can discuss technical topics related to the product and address questions they might have.
SmartCloud Orchestrator has just been released. It is IBM's new private cloud offering based on OpenStack and other cloud standards.
You can read more about it in the Announcement Letter and we would be very happy to see you join the SmartCloud Orchestrator beta program.
This forum is a discussion platform and does not replace the IBM support. I hope you find this forum useful and it helps in the formation of an online user community.
Birgit Nuechter, Field Quality Manager for IBM SmartCloud Orchestrator
Service Health for IBM SmartCloud Provisioning has reached general availability and is now available on the IBM Integrated Service Management Library (ISML).
Service Health provides pre-built integrations between IBM SmartCloud Provisioning and IBM SmartCloud Monitoring, using a custom agent, OS agents, and the ITMfVE agents. A product-provided navigator offers a concise overview of the health of the IBM SmartCloud Provisioning infrastructure, making it possible to quickly identify and react to issues in your environment, such as an unresponsive compute node, high disk usage on storage nodes, or key kernel services not responding, and so minimize their impact. It also provides visibility into the KVM and ESXi hypervisors.
This solution can be downloaded from the IBM Integrated Service Management Library( ISML ) following this link -> Service Health for IBM SmartCloud Provisioning
Many readers liked the post “Rapid deployments with IBM SmartCloud Provisioning”, which explains how simple and fast it is to deploy instances using SmartCloud Provisioning. But after the instances are deployed, the next question is: how can I easily monitor the performance and availability of the OS and applications of the launched instances?
One solution is to integrate IBM SmartCloud Provisioning with IBM Tivoli Monitoring (ITM), so that all the running instances are connected to the ITM Server and managed according to performance expectations.
This can be achieved by exploiting the integration between IBM SmartCloud Provisioning and the Image Construction and Composition Tool (ICCT), available in IBM SmartCloud Provisioning version 1.2, and performing the following steps:
- Download the bundle “ICCT Bundle to deploy IBM Tivoli Monitoring Agent for Linux OS v6.X” from the Integrated Service Management Library (ISML) website
- Extend an OS base image available in IBM SmartCloud Provisioning, adding the above bundle
In this way, a new image will be available in IBM SmartCloud Provisioning with the ITM agent installed and configured to connect to the IBM Tivoli Enterprise Monitoring Server, so that when the extended image is launched, the ITM agent automatically starts and connects to the ITM Server without requiring any user action. Then, from the ITM console, you will be able to see and monitor it and take actions to address performance issues.
If you are interested in other extensions available on ISML, here is a list of available bundles you can download and use to extend a base image (search ISML for “ICCT”):
- ICCT Bundle to deploy IBM Tivoli Directory
- ICCT Bundle to deploy IBM HTTP Server 7.0
- ICCT bundle to deploy IBM WAS Update
- ICCT Bundle to deploy Apache Web Server
- ICCT Bundle to deploy IBM DB2 Server 9.7
- ICCT Bundle to deploy IBM Tivoli Monitoring Agent for WAS
- ICCT Bundle for IBM WebSphere MQ
- ICCT Bundle to deploy IBM WebSphere Application Server Network Deployment 8.0
- ICCT Bundle to deploy IBM WebSphere Application Server Community Edition 3.0
- ICCT Bundle to deploy IBM WebSphere MQ
- ICCT Bundle to deploy IBM Tivoli Monitoring Agent for DB2
- ICCT Bundle to deploy IBM Tivoli Endpoint Manager Agent
- Software bundle for IBM License Metric Tool Agent for System x platform
Desktops are a common adoption pattern for cloud computing. It's really straightforward because, in general, each company has standardized desktops: only specific versions of the operating system are supported, only specific flavours, only some applications are allowed, and typically everything is managed by the IT team.
If we think about the benefits of adopting a desktop cloud, some of them really stand out: the IT team can truly enforce standardization (e.g. you can select as your desktop only one of the proposed flavours); the maintenance of the hardware becomes far easier thanks to its consolidation; and old, outdated PCs can be used just as connectors to the desktop, gaining new life. From the desktop user's point of view, there is no longer any need to carry company assets around to work: healthier (no more heavy hardware to take home or while travelling) and safer (the data is in the cloud).
But this is nothing new; desktop cloud solutions are already on the market, so let's see if IBM SmartCloud Provisioning can bring additional benefits to the desktop world.
What if we start dealing with non-persistent desktop images? Non-persistent images are the ones that disappear once you shut them down. You might be asking yourself: “Well, that's not so clever. What about my data? Is it lost?” This is actually a very good point, and it is the keystone of the benefits that come with the adoption of non-persistent images.
The idea is that all user data is stored on external (persistent) volumes that can be attached to and detached from the non-persistent image on demand.
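As a toy illustration of this separation (plain Python, nothing product-specific; all class names are made up), a non-persistent boot disk is reset from the master image at every shutdown, while an attached volume survives:

```python
class PersistentVolume:
    """External volume whose contents survive desktop shutdowns."""
    def __init__(self):
        self.files = {}


class NonPersistentDesktop:
    """Desktop whose boot disk is reset to the master image at every shutdown."""
    def __init__(self, base_image):
        self.base_image = base_image
        self.boot_disk = dict(base_image)  # fresh copy of the master image
        self.volume = None                 # attached persistent volume, if any

    def attach(self, volume):
        self.volume = volume

    def write(self, path, data):
        # User data goes to the persistent volume when one is attached;
        # otherwise it lands on the (doomed) boot disk.
        target = self.volume.files if self.volume else self.boot_disk
        target[path] = data

    def shutdown(self):
        # The boot disk is discarded: local changes are lost, volume data survives.
        self.boot_disk = dict(self.base_image)
        vol, self.volume = self.volume, None
        return vol


base = {"/bin/os": "v1"}
home = PersistentVolume()
desk = NonPersistentDesktop(base)
desk.attach(home)
desk.write("/home/report.txt", "draft")
vol = desk.shutdown()
print("/home/report.txt" in vol.files)  # True: user data survives on the volume
print(desk.boot_disk == base)           # True: boot disk reset to the master image
```

The same model explains why local tampering is pointless: anything written without a volume attached vanishes at the first shutdown.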
If we now apply this technology to the desktop world, it sheds an interesting new light on some typical and painful scenarios:
- System or software patching
- Keeping the desktops compliant
- Changes in the number of desktop users
In a traditional infrastructure, when the operating system goes out of maintenance, or is getting close to it, a massive migration campaign starts: all desktops need to be migrated. Statistically, the migration does not go smoothly for all users, and some of them will be stuck even for days. If you use non-persistent images, you can easily overcome this: either create a new master image with the new operating system or upgrade a single instance of the image, run your test campaign to make sure everything keeps working, then deploy it in as many instances as there are desktops to upgrade, attach the volumes with the user data to the new images, and get rid of the old images. If you leverage the incredible deployment speed of IBM SmartCloud Provisioning, you'll have a brand new set of desktops in no time.
Analogously, we may think about patching the operating system or software running on the desktop: the key idea is that you are always going to patch either the operating system or a specific piece of software, never the user data, which keeps living on separate volumes.
If we think about the compliance aspect, remember that users cannot save any changes they make to the boot disk of the image, since nothing is ever stored on that disk. They can only write their own data to the additional volumes. This should discourage them from even trying to install new software or edit the operating system configuration, since everything will be lost at the first shutdown.
I know that in your company you may have different configuration flavours of the same operating system, according to the department for which the desktop is tailored. For example, you may need different firewall configurations according to the security level the end user is entitled to. Well, with IBM SmartCloud Provisioning you can leverage the User Data field at deployment time to specify these special configurations. This need not even be shown to the end user: you can mask it by enlarging the list of offerings with the specific configurations. Under the covers, the instance is launched with the proper parameters: no master image duplication, no manual configuration; everything is automated and standardized.
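For illustration only, here is the kind of department-specific configuration that could be passed as user data. This is a generic cloud-init cloud-config sketch with made-up addresses and rules; the exact User Data format accepted by SmartCloud Provisioning is not shown here:

```yaml
#cloud-config
# Hypothetical per-department firewall hardening applied at deployment time:
# allow SSH only from the department's admin subnet (addresses are examples).
runcmd:
  - iptables -A INPUT -p tcp --dport 22 -s 10.0.42.0/24 -j ACCEPT
  - iptables -A INPUT -p tcp --dport 22 -j DROP
```

The point is that the master image stays unique, and the variation per department lives entirely in the parameters passed at launch.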
What about optimizing resources? Desktops by their nature all have the same operating system and configuration (at least per department), and usually they come with the same applications installed on top. If you deal with non-persistent images, you stop storing lots of duplicated, useless copies of the same operating systems and software on disk. And since a desktop's resources (i.e. cores and memory) are released once it is shut down, you can better optimize your hardware by using those resources for other applications or users (they may even be server applications, or desktops for users residing in a different timezone).
New people coming on board? A project outsourced to an external workforce? You may want these people productive more than immediately. With IBM SmartCloud Provisioning, their desktops will be up and running in no time.
More information about IBM SmartCloud Provisioning can be found in the IBM SmartCloud Provisioning wiki and in the IBM SmartCloud Provisioning infocenter. See IBM SmartCloud Provisioning working in a recorded demo.
Most generally accepted definitions of Cloud Computing imply the notion of Pay per use. For a Service Provider this means defining how they intend to bill for Cloud Services, while for a Cloud enabled DataCentre in the enterprise this implies some form of showback/chargeback model. As for those consumers actually using the Cloud, they want to understand the financial implications (what will it cost?) before committing their workloads to it.
As a Cloud User
- Do you want to see what your project will cost before you provision it?
- See a price list for all the services you can provision - comparing prices for different options?
- Use a calculator to help you predict what a project will cost per month (or day or year)?
- See what the effect of changing the resources used by a project will do to the cost?
As a Cloud Provider
- Do you want to define different prices for a Service depending on the options that the user chooses?
- Set different prices for each service for different customer groups?
The following screenshots illustrate how the new cloud cost management capability delivers solutions to these problems. The new TSAM Extension for Usage and Accounting is available to download now via the ISM Library.
See the prices for the different Cloud offerings and compare different options
The first dropdown in the view shown below shows the Offerings that are available to the customer. Offerings can be anything the Cloud provider chooses to make available, for example: virtual servers, storage, or even PaaS or SaaS offerings. The consumer can see up front what the different rates are for each component, and compare these across different offering types.
See what it would cost per month to run a new project in the Cloud
In this example, we want one machine to run an application server and one machine to run a database, and we need additional Tier1 storage to store the database data. The calculator shows how much this will cost per month, both overall and in terms of the two Service Offerings that this particular Cloud provides.
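The arithmetic behind such a calculator is straightforward. Here is a hedged Python sketch of the monthly estimate for the example project; all rates are made up for illustration and are not the product's actual price list:

```python
# Illustrative monthly cost estimate for the example project:
# two virtual servers plus additional Tier1 storage (all rates invented).
rates = {
    "vm_small_per_day": 2.50,   # application server
    "vm_medium_per_day": 4.00,  # database server
    "tier1_gb_per_day": 0.05,
}

def monthly_cost(vm_kinds, storage_gb, days_per_month=30):
    """Sum daily VM rates and storage rate over a month."""
    vms = sum(rates[kind] * days_per_month for kind in vm_kinds)
    storage = rates["tier1_gb_per_day"] * storage_gb * days_per_month
    return vms + storage

cost = monthly_cost(["vm_small_per_day", "vm_medium_per_day"], storage_gb=100)
print(f"${cost:.2f} per month")  # prints $345.00 per month
```

Letting the user vary the inputs (VM count, storage size) and re-running the sum is exactly the "see what changing the resources does to the cost" scenario listed above.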
Different customers can be assigned to different subscriptions
A subscription is a means to segment your customers into different groups such as by geography or customer type (direct, business partners etc).
In this example, the RATIONAL and TIVOLI customers are assigned to the US (United States) subscription. Customers with this subscription share the same set of available offerings and pay the same price for those offerings.
Offerings are defined once and then added to Subscriptions
Once they are part of a subscription, the actual rate values (price per unit) can be defined for each element of the offering template.
If you wish to join the TUAM group to get more involved in reviewing new features and testing beta capability, let me know and I can send you an invite.
After releasing Tivoli Service Automation Manager 7.2.2 in July with a great deal of new capabilities to cover customer use cases, now IBM Service Delivery Manager 7.2.2 is available.
In addition to leveraging Tivoli Service Automation Manager V7.2.2, it:
- Adds new monitoring capabilities for the virtualized infrastructure through Tivoli Monitoring for Virtual Servers V6.2.3
- Provides enhanced metering and accounting capabilities, leveraging Tivoli Usage and Accounting Manager
- Is delivered as a set of virtual machines for simplified deployment, which can help achieve faster time-to-value
More details here in the Announcement Letter
As businesses adopt cloud environments to control IT complexity, pool resources, and improve cost efficiencies, the TUAM development team has been engaged in evolving the usage and accounting capability in IBM Tivoli Usage and Accounting Manager (TUAM) beyond traditional enterprise charge-back.
In such a shared cloud environment the ability to accurately assess which IT resources and services are being utilized, how much they are being utilized, and by whom is fundamental if service providers are to justify the cost of the IT resource and expense.
The latest release of IBM Tivoli Usage and Accounting Manager, Version 7.3, provides Cloud Cost Management for those businesses needing to understand the new and dynamic usage of shared IT resources in cloud and virtualized environments, and seeking to bill or charge business units for their share of resource use, including compute, storage, networks, energy, and personnel.
Read more about the new TUAM Cloud Cost Management Extension v1.0 for Tivoli Service Automation Manager (TSAM) in our blog update.
IBM Tivoli Usage and Accounting Manager allows businesses to:
- Link their Cloud IT expenditures to business value delivered
- Accurately allocate cost across functions and departments/projects
- Understand true IT costs resulting in better IT investment decisions and get more out of their current investments
- Quantify the costs associated with services delivered including virtualized, cloud, storage area network (SAN), and service-oriented architecture (SOA) environments
- Interactively report and, if desired, bill or charge departments and functions accurately for their use of IT resources
Additionally, the development team are working to supplement these core capabilities with new price tiering and invoice preview features for Cloud administrators and consumers. These features will be provided to TUAM users via the IBM Integrated Service Management Library from October 2011.
Please contact our usage and accounting architect John Buckley (firstname.lastname@example.org) if you wish to understand or share your thoughts on the new Cloud use cases.
Welcome to the Cloud/Virtualization Management blog! This blog is one of several within the Service Management Connect community, and its purpose is to provide readers with ideas and perspectives about the Cloud/Virtualization Management solution directly from the technical experts. Follow this blog, and you can get tips, tricks, and perspectives on several Cloud/Virtualization Management topics, including:
- Technical tips and tricks
If you have specific topics for which you would like to see blog entries, don't hesitate to provide feedback.
Is Cloud the next utility? Hmmm, there's an interesting thought. I know many of us are still trying to define "what is Cloud" and "what does Cloud mean for my business." I spend my days in IBM's Cloud & Smarter Infrastructure team working on bringing "Cloud Solutions" to our customers. This turns out to be simply helping customers use what they have, maybe with a little twist, to build out their data center in a more robust, efficient manner. It's all about helping them meet the needs of their business. IT teams have been doing derivations of "do more with less" for years, and as technology matures, specifically with virtualization and the management of virtual environments, IT teams are able to improve their quality of service and reduce their total cost of ownership while increasing the services they offer their consumers. How, you ask? IT managers everywhere are trying to figure out how to provide their services in a utility-like format using "the Cloud." Think about how natural it is for us to expect that flipping on a light switch magically makes electricity flow and the light come on; what if we could select a service and magically systems are provisioned, business processes are established, and the service is available - would that provide business value? I'll use this blog post to start you thinking about "Cloud as the Next Utility" and get you wondering if we do indeed have front-row seats to a Cloud Computing Revolution.
Should I get my electricity from the local power company, or do I invest in sustaining my own private electricity? As individuals focus on a greener way of life, they may be asking themselves that question as they focus on using what they have while minimizing their own carbon footprint. People have options that they can ponder to figure out what is best for their families, their life and their utility usage.
Organizations are iterating over multiple decision points to provide the growing IT infrastructure requirements to run their business and it is continually influenced by many different things. Do they already have a large IT investment? Do their IT requirements expand and contract depending on current activities? Do they have a diverse investment and skills on existing service management tools? It's apparent that these same questions can be asked regardless of the size of your IT shop - big or small companies have come to depend on IT services and this dependency continues to grow in this computing revolution. One direction an IT manager might take is a hybrid approach. Such a strategy allows them to utilize their current investment of infrastructure, tools and skills to build out their own Private Cloud while keeping a Public Cloud vendor accessible when their computing needs spike. Companies have options that they can ponder over to figure out what is best for their company's business, their IT environments and their utility usage.
How is it that when I buy a new appliance, I don't even think about whether it will work with my electrical outlets when I get home? Standardization of how electricity is delivered and consumed is ubiquitous and expected worldwide. As consumers, we buy a utility with the expectation that it will work with all our appliances to meet our needs.
Most companies have already made large investments in both infrastructure and tools, as well as the skills to maintain them. The need to deliver more with what is available requires technology that can bridge from existing environments to new environments, regardless of the influences. By adopting and adhering to computing standards, tooling can utilize data from existing systems while meeting the needs and different delivery methodologies of the business in new ways. This means I can ease the coordination of complex tasks and workflows in my data center while leveraging existing skills, processes and technology artifacts to deliver services in new, different ways. By employing computing standards in my data center strategy, I can be assured that I can rely on Public Cloud for my processing spikes, and I can utilize my older infrastructure while integrating new infrastructure - all with the same extensible tooling for deployment and management of compute, storage and networking. As you define the next milestones in your cloud strategy and roadmap, I encourage you to investigate what is in use today in your data center and how you need to evolve or reuse that investment. IBM offers SmartCloud Orchestrator to manage your existing datacenter(s). It provides capabilities based on industry standards and can extend your current processes to Cloud, helping you bring Cloud Services to your users in the same self-service manner as flipping on a light switch or plugging in a new appliance. A utility that will work to meet your users' needs.
Until next time, keep your head in the Clouds.
This webcast will be held on June 27, 2013 at 11:00 AM ET USA
Click Here to Register
Virtualization, cloud computing, and the consumerization of IT continue to drive fundamental shifts in data center management priorities. Many organizations are implementing multi-hypervisor architectures and hybrid public and private cloud strategies.
Advanced automation and orchestration solutions are key to helping IT data center operations teams effectively manage increasingly complex enterprise computing environments.
Join this webinar to learn more about IBM SmartCloud Orchestrator, one of the industry's first cloud solutions built on OpenStack technology. IBM SmartCloud Orchestrator supports a broad range of hypervisor, storage, network and compute resources, and allows companies to quickly deploy applications onto the cloud infrastructure of their choosing by lining up the virtual and physical compute, storage and network resources needed to deliver consistent, compliant cloud services.
About the Speaker:
Marco Sebastiani, Product Manager for IBM SmartCloud Orchestrator and Cloud Solutions, at IBM Tivoli/IBM Cloud & Smarter Infrastructure.
Please join the Tivoli User Community for a live Webcast, Best Practices in Data Center Infrastructure Management with the new IBM and Emerson Network Power Solution, on Tuesday, May 21, 2013 at 11:00 AM ET USA.
Click Here to Register Now!
IBM and Emerson Network Power have recently partnered to provide an end-to-end Data Center Infrastructure Management (DCIM) solution. This solution combines Tivoli's IT Service Management expertise with Emerson's real-time infrastructure optimization capabilities, enabling holistic management of the data center ecosystem. Join this live webcast to learn:
The definition of real-time DCIM
The state of the data center today
Primary challenges facing the data center
Benefits of DCIM and how DCIM is being adopted
How the combination of IBM Tivoli and Emerson Trellis delivers visibility, control, and automation to all of the components of the data center
Vikul Banta, Strategy and Product Management, IBM Software Group, Cloud & Smarter Infrastructure
Sean Nicholson, VP and General Manager, Worldwide IBM Tivoli Business, Emerson
The Official Tivoli User Community is the largest online and offline organization of Tivoli professionals in the world – home to over 160 local User Communities and dozens of virtual/global groups from 29 countries – with more than 26,000 members. The TUC community offers Users blogs and forums for discussion and collaboration, access to the latest whitepapers, webinars, presentations and research for Users, by Users and the latest information on Tivoli products. The Tivoli User Community offers the opportunity to learn and collaborate on the latest topics and issues that matter most. Membership is complimentary. Join NOW!
IBM SmartCloud Orchestrator, the first new private cloud offering based on OpenStack and other cloud standards, is now available. Users are looking for Cloud solutions that increase agility, cost savings and offer a competitive advantage. IBM SmartCloud Orchestrator exceeds those needs:
Patterns of expertise learned from decades of successful client and partner engagements - SmartCloud Orchestrator captures best practices for complex tasks, abstracted rather than hardcoded. Best-practice KPIs, measurements and policies are built into the patterns to allow semi-automated or automated vertical scaling up and down. Deploy applications rapidly with repeatable patterns across private and public clouds: SmartCloud Orchestrator enables third-party software deployments and custom pattern creation, so you can "build once" and deploy across private and public clouds.
Robust, automated, high scale cloud provisioning - requested VMs will be up and running in under a minute using standard hardware
SmartCloud Orchestrator includes OpenStack!
End-to-end orchestration, bridging domains: cloud, infrastructure, back-end integration, and service processes. Orchestration is dynamic at runtime, ensuring you have the latest human and automated interactions.
Lower operational costs by leveraging existing hardware and hypervisors - a single management platform across different infrastructures reduces complexity and operational cost. It integrates compute, network, storage and application delivery to enable organizational integration.
Get started today!
SmartCloud Orchestrator Analyst and Press Coverage:
A Control Center for Next Generation IT Asset and Service Management
Please join the Tivoli User Community for a live Webcast and opportunity for Q&A, Tuesday, April 9, 2013 at 11:00 AM ET USA and 6:00 PM ET USA.
Register for 11:00 AM ET USA:
Register for 6:00 PM ET USA:
In 2012, IBM introduced IBM SmartCloud Control Desk, a unified platform for IT Service Management. In February 2013, we released updates to SmartCloud Control Desk, adding more features and taking another step towards the vision of a true control center for your enterprise.
Key new features include:
- Ability to set up an internal "enterprise app store" with automated back-end fulfillment of applications
- Map integration that narrows down the geographic location of incidents and assigns nearby resources
- "What-if" impact analysis to model potential changes to the environment before they happen and identify high-risk changes
- Usability improvements that help IT become more productive
Chris Dittmer, Worldwide Sr. Market Manager, IBM Tivoli IT Service Management
CJ Paul, Senior Technical Staff Member and Chief Architect, IBM IT Service Management Solutions
The challenges of virtualized environments are driving the shift to greater integration of service management capabilities such as image and patch management, high-scale provisioning, monitoring, storage and security. Join us for this webcast to learn how organizations can realize the full benefits of virtualization to reduce management costs, decrease deployment time, increase visibility into performance and maximize utilization.
If you're in North America, register here for the April 16th session:
If you're in Asia Pacific, register for the April 23rd session:
My team and I have been heads down working to get SmartCloud Orchestrator, our newest cloud offering, to market. Last week we had our annual Pulse conference in Vegas. I'm just recovering from its aftermath now and wanted to write a short blog about the experience. It should be no surprise that folks like James Governor of RedMonk offered some interesting perspectives, along with InfoWorld and Wired. While I am very pleased to hear the overwhelmingly positive press coverage, I am truly stoked about the direct customer feedback I got during the event.
Between sessions, Vegas dinners, and the occasional shut-eye, I had a lot of customer meetings. Since we first announced our involvement with OpenStack, Chris Ferris, Todd Moore and I have been meeting with customers all over the world. Most of these discussions were with customers already working with OpenStack on their own. Last week, we had the band back together, meeting with customers both jointly and independently. What was interesting for me was that it's no longer just the bleeding-edge early adopters! Many customers are realizing that OpenStack is the future of the datacenter and they don't want to get left behind. Similarly, more and more of our enterprise customers have seen the benefits of DevOps and its relationship to cloud technologies. Things really have changed a lot during this past year!
While standardizing on the IaaS is a critical first step, I was thrilled to hear how many customers are using tools like Chef. These arguably represent the second step towards the fruits of DevOps. It really feels like we're finally ready for the next step in this journey. Ironically, less than two weeks before Pulse, OpenStack Heat was voted in as a core OpenStack project after a year of incubation. Heat was started by Red Hat as an open source implementation of Amazon's CloudFormation, which enables users to easily combine multiple cloud resources together to form more meaningful solutions, applications, or services. Just as OpenStack compute moved past its original Amazon-compatible APIs onto its own truly open APIs, I expect we'll see the same evolution in Heat. In fact, there is already an OASIS standards technical committee working on this very problem, called TOSCA. I really think these two efforts need to converge, so that TOSCA is the open standard specification and Heat is the open source reference implementation. The Heat team has been talking about this since its inception.
I really liked the way Jesse Andrews, one of the OpenStack founders, put it. Jesse has long used the analogy of the Linux kernel to describe OpenStack and does not want it to stray from that, for its own good. When we talked about Heat last week, he again used an analogy from Linux. This time he chose the Debian package manager APT to describe Heat as the package manager for the cloud operating system. I think this is a brilliant analogy, because the success of any operating system hinges upon the applications that run on it. Similarly, the value of cloud is in the applications or services that run on it.
I'm excited about Heat and I'm looking forward to the next OpenStack summit to discuss its evolution. Our SmartCloud Orchestrator is all about open, reusable automation content. Be it native packages, Chef recipes and cookbooks, virtual images, TOSCA templates, or BPMN standards, we want our customers, partners, and open source communities to be able to share and reuse cloud automation. I hope Heat and TOSCA become the enablers for distributing and operating cloud applications and services. Anyone interested in helping with this, please contact me and join me next month at the Havana summit!
Determine the right cloud orchestration strategy to address the unique needs and pain points of your organization while increasing productivity and spurring innovation. And learn more about the recently announced orchestration capabilities from IBM that leverage OpenStack to manage heterogeneous hybrid environments. Sign up today!
Here at Pulse, the best part for me is the client conversations. The efforts of clients to understand our IBM categories, and my efforts to understand the customers' scenarios, have led to interesting exchanges and raised some strange questions. Talking to a business partner, I found myself asking, "What is the shape of your computation?" Does it look like a banana or a dolphin? A whale tail or a multi-drop jet? A rhizome or a map of a Pacific atoll?
Does it matter? It is certainly a useful insight to visualise the general shape of how the business flows unfold. When using a workload automation tool, each action becomes a unit of work. These units of work are linked together by the conditions and dependencies that sequence their execution in the right order. When large graphs of such units of work are built and executed, the layout of thousands of small units of work can take the most diverse shapes, and that shape tells something about what is being accomplished. The case of this business partner and his project with Big Data and massively parallel micro-ETLs is no exception to that rule.
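To picture those linked units of work in code, here is a small hypothetical sketch: jobs are nodes, dependencies are edges, and a topological sort (Kahn's algorithm) yields one valid execution order, the linearized "shape" a scheduler walks through the graph. The job names are invented for illustration and do not come from any particular workload automation product.

```python
from collections import defaultdict, deque

def run_order(deps):
    """Given {job: [jobs it depends on]}, return one valid execution
    order via Kahn's topological sort. Raises ValueError on a cycle."""
    jobs = set(deps) | {d for ds in deps.values() for d in ds}
    indegree = {j: 0 for j in jobs}          # unmet dependencies per job
    dependents = defaultdict(list)           # reverse edges
    for job, ds in deps.items():
        for d in ds:
            indegree[job] += 1
            dependents[d].append(job)
    ready = deque(sorted(j for j in jobs if indegree[j] == 0))
    order = []
    while ready:
        j = ready.popleft()
        order.append(j)
        for nxt in sorted(dependents[j]):    # release jobs whose deps are done
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(jobs):
        raise ValueError("dependency cycle detected")
    return order

# A tiny micro-ETL flow: one extract fans out to two transforms
# that join back into a single load step (a diamond shape).
flow = {"transform_a": ["extract"], "transform_b": ["extract"],
        "load": ["transform_a", "transform_b"]}
print(run_order(flow))  # → ['extract', 'transform_a', 'transform_b', 'load']
```

Real workload automation tools add calendars, SLAs and conditional triggers on top of this, but the underlying dependency graph is what gives the computation its visible shape.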
Big Data projects have shown their capability to extract insight from data through powerful operators and clever data transformation, but often the result needs data cleaning and preparation, and looks experimental unless an important polishing effort is applied. In fact, multiple analysts have recognized the need for Big Data to become more automated and repeatable in order to serve as key input into decision making, especially if that decision making is disruptive to the mainstream.
That is where the origin of the data sources, the sequence of the processing steps and the conditions that link the local "islands of processing" become important to stabilise the global calculation map, share a common understanding about how insight is constructed and lead to agreement about the right way to proceed. This interesting article warns that the quantity of applications and systems involved in information management is the first obstacle to address, and it can easily be worsened by the use of powerful Big Data systems.
So the shape of your computation indeed provides a visual cue for resolving the next challenge of fruitful usage of Big Data, and it is probable that by using such graphical representations we will collectively build better pattern recognition and discussion capabilities, like "Oh yes, your streams are too thin, you might have forgotten some data correlation." Someone might think a rocket scientist is needed to display that "computation shape," but Workload Automation solutions provide such images automatically, among other benefits, when business processes are described in them. As business processes are described once to be repeatedly executed, they will be triggered automatically, with a lot of fringe benefits including:
- The general visual aspect of how work unfolds, whether in the present, the past, or the foreseeable future
- Snapshot or trending statistics about how each piece of the total flow behaved in the past
- Reports on the evolution of the flow definitions, and on who changed what
- Automated tracking and alerting over the most critical flows, those that have associated SLAs and possible penalties.
In short, Workload Automation provides governance over even the most complex systems, and a set of tools designed to take the conversation to the next level -- above daily operations and experimental setups -- whether the system handles Big Data, SAP jobs or a robotic tape arm. Providing visibility, control and automation over numerous business flows is called Unattended Automation, and the new pressures created by Cloud and Big Data have raised attention on it to a high level.
Next time you consider implementing a business application -- in addition to a picture of its architecture and a map of the community of persons that contribute to it -- think about its computation shape: how you think it looks, and how you would like it to be.
Even if you weren’t at IBM Pulse, trending right now on the web is chatter about IBM’s announcement to leverage open technologies pervasively in the development of its cloud offerings.
With SmartCloud Orchestrator—an integrated platform to standardize and manage heterogeneous hybrid environments—IBM is launching its first commercial offering based on OpenStack. And with SmartCloud Orchestrator, IBM is also redefining the scope of orchestration to encompass the streamlining and integration of all resources, workloads and services.
The need for this kind of capability is addressed in the latest IDC report, which discusses why it will become a priority as organizations look to improve operational efficiency and reduce the mess and complexity of growing cloud environments. The ability to standardize and automate cloud services includes integrating performance and capacity management, usage and accounting, and rich image lifecycle management. In addition, services and tasks such as compute and storage provisioning, configuration of network devices, and integration with service request and change management systems and processes can all be streamlined. Out-of-the-box robust workload patterns also enable fast development of cloud services.
With SmartCloud Orchestrator, it’s all brought together to seamlessly manage heterogeneous environments, allowing organizations to build on existing investments and open source technologies.
If you haven’t had time to catch up on what’s trending, here’s the short version on how IBM is helping to advance the cloud to drive innovation.
TUC Webcast: Maximize the Benefits of Virtualization for Greater ROI
Please join the Tivoli User Community
for a live Webcast and opportunity for Q&A, Thursday, February 21, 2013 at 11:00 AM ET USA.
Click here to reserve your webcast seat now
About this Webcast:
Challenges with virtualized environments are driving the shift to increased integration of service management capabilities such as image and patch management, high-scale provisioning, monitoring, storage and security. In this webcast, learn how organizations can realize the full benefits of virtualization, including reduced management costs, decreased deployment time, increased visibility into performance and maximized utilization.
About the Speaker:
Matthew Rodkey, Product Manager focusing on Tivoli Cloud Solutions
In 13 years with IBM, Matt has worked in a number of areas in the Tivoli portfolio, including Security, Monitoring, and Service Delivery.
Please join the Tivoli User Community for their next webcast on IBM SmartCloud Consumer Monitoring. This webcast will be held on Tuesday, January 22, 2013 at 11:00 AM ET, USA.
IBM SmartCloud Consumer Monitoring is a new product developed for cloud consumers and service providers. An innovative new architecture embeds monitoring technology in library images, so newly deployed VMs are discovered and monitored within seconds of being instantiated. "Fabric nodes" use innovative distributed database technology to display data for nodes and applications, or logical groupings of them, and run as virtual machines alongside the application VMs. New fabric nodes come online as needed and shut themselves down when no longer needed, ensuring optimum use of resources.
ABOUT THE SPEAKER:
Ben Stern, Executive IT Specialist, IBM Cloud & Virtualization products
For the past several years, Ben has defined best practices for Tivoli's SAPM portfolio. Most recently, he has taken on a best-practices role for the Cloud and Virtualization products.
Even though server proliferation can be partially addressed through virtualization, the usage of virtual and physical assets becomes complex to accurately assess and manage. Cost management is crucial to integrate into overall service management, especially with a move into cloud. This webcast discusses how to implement a financial management roadmap and the key requirements for cloud transparency -- the ability to allocate IT costs, usage, and value.
Are you a cloud addict?
Do you want to be always updated on the latest technologies?
Do you want to contribute to development decisions in the IBM SmartCloud Provisioning world?
Are you just curious to see what's cooking in IBM SmartCloud Provisioning development?
If you answered "yes", have a look at the Upcoming Feature page in the IBM SmartCloud Provisioning wiki. It is starting to be populated with recorded demos and screenshots showing what is currently on the developers' minds.
Each page is equipped with a set of buttons to immediately and easily provide your feedback directly to the development team.
More ideas, thoughts, videos will be added shortly on the same page, so stay tuned!
...Just do not get lazy, click the links!!!
My three favorite things about OpenStack are
- The People
- The Innovation
- The Interoperability
San Diego was my second OpenStack summit. Many of the same faces were in the design summit sessions I attended, but there were many new faces as well. One of the most exciting observations from the Folsom design summit was the incredible talent pool assembled. The Grizzly summit was no different – it’s great to interact with so many incredibly smart, deep and experienced people. I’m convinced that a single company could never amass such a collection of quality talent for one project. I guess it’s no wonder they’re saying OpenStack is the fastest growing open source project ever.
I must apologize in advance, because I am sure to miss someone, but I want to tell you about some of the people I interacted with in the nova, glance, and cinder design sessions. Over the past few months I’ve really been impressed with the PTLs. They’re very smart, highly motivated, and excellent facilitators. The design sessions invariably get into open debate, but productive debate. I was impressed with the PTLs’ natural ability to channel the discussion to bring out the key issues and land on some concrete next steps.
I got to meet Microsoft’s Peter Pouliot, whose heroic and tenacious efforts successfully delivered Hyper-V support after a rather dodgy mess earlier in the year. Peter is not your stereotypical Microsoft developer. He’s an open source guy through and through. It’s clear that his personal spirit had a lot to do with corralling the community to deliver quality code in a very short time frame. It was great to meet Peter and some of his non-Microsoft collaborators. Great job, guys!
I also had the pleasure of meeting with some of VMware’s developers, and not just those acquired via the billion-dollar Nicira acquisition. The Nicira guys are great – no question – but I was also very pleased to meet the VMware developer who completely rewrote the less-than-adequate VMware compute driver. I hope to work closely with them to ensure the hypervisor is well supported and as interoperable as possible with other proprietary and open source technologies.
Of course, I can’t speak of OpenStackers without mentioning Rackspace. Over the past two summits, I got to interact with a number of Rackspace developers, aka Rackers. I’ve got to hand it to them, they really do have a great bunch of people, and they definitely bring a massive-scale service provider perspective to the discussion. Of course, being an IBMer myself, I can’t help but bring the enterprise customer perspective into the mix. I think OpenStack benefits greatly from these two perspectives brought together in the open.
OpenStack has done a great job defining an extensible framework for IaaS. This flexibility not only helps accommodate the varied needs from enterprise to service provider, but it also enables a massive sea of innovation. Since the Nicira acquisition there’s been a lot of attention on the innovation around software-defined networking and Quantum, the OpenStack project that provides the abstraction for a variety of implementations ranging from proprietary offerings, to pure open source like Open vSwitch, to traditional standard networking equipment. I think storage is even hotter than networking these days, with a slew of vendors combining commodity 10GbE switches and commodity Intel servers with a mix of SSDs and spinning disks to provide new approaches to storage for virtualized environments. Of course, software plays a critical role in many of these virtualized storage solutions. DreamHost’s open source distributed file system Ceph has been getting a lot of interest.
Enterprise storage vendors like NetApp, IBM, and HP have also contributed cinder drivers to support their products within OpenStack clouds. There were also a number of summit discussions about exposing the different backend implementations of the abstractions with different qualities of service. Some people, including one of my developers, have begun to use “Volume Types” as a way to let users choose the kinds of volumes they need. I believe this is critical for compute clouds to cover the broadest spectrum of workloads. Of course, this principle applies to other resources and not just cinder volumes.
I saw a lightning talk about a nova driver for SmartOS, a cool open source project from Joyent combining Solaris zones, ZFS, and KVM. There was ARM and Power CPU support presented, as well as a couple of bare metal solutions. Intel, KVM, and OpenStack certainly make a nice combination, but there’s so much more that’s possible with OpenStack.
Finally, perhaps the most important thing about an OpenStack cloud is interoperability. Starting with the hypervisor, IBM has a solution that enables interoperability of images, volumes, and networks across Xen, KVM, VMware and Hyper-V. We had a few sessions where we discussed how we can bring the same interoperability to OpenStack. To start with, we need to be able to register read-only cinder volumes as glance images. Next, to ensure we can scale out, we need to be able to register multiple copies of the same image. Finally, to take advantage of performance, we need to abstract the clone operation to enable Copy on Write (CoW) and Copy on Read (CoR), as well as the current local-cache-plus-CoW mechanism, for backwards compatibility and to support 1Ge networks. Combining these will enable images to work across multiple different hypervisors.
We also need interoperability with existing images, which means VMware and Amazon as the two most common forms of images. Today, it’s quite easy to automate simple image formatting differences, but the challenge is in the assumptions made by the images. The current direction for OpenStack is to use config drive v2 to pass instance metadata to the guest, which is responsible for pulling key system configuration such as hostnames, credentials, and IP addresses. Typical VMware images, on the other hand, generally expect either a push model, where the hypervisor manipulates the filesystem prior to booting the image, or configuration via their guest agent, VMware Tools.
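A rough sketch of that pull model, assuming the config drive is already mounted inside the guest: config drive v2 exposes OpenStack instance metadata at openstack/latest/meta_data.json under the mount point, and a bootstrap such as cloud-init reads it at boot to configure the system. The fields read below (hostname, uuid, public_keys) are commonly present, but treat the exact schema as an assumption rather than a guarantee.

```python
import json
import os

def read_instance_metadata(mountpoint):
    """Pull model: read instance metadata from a mounted config drive.

    Config drive v2 places the OpenStack metadata file at
    openstack/latest/meta_data.json relative to the drive's mount
    point; the guest reads it after boot instead of the hypervisor
    pushing configuration into the image beforehand.
    """
    path = os.path.join(mountpoint, "openstack", "latest", "meta_data.json")
    with open(path) as f:
        meta = json.load(f)
    return {
        "hostname": meta.get("hostname"),
        "uuid": meta.get("uuid"),
        # public_keys maps key names to SSH public keys the guest
        # should install for the default user.
        "ssh_keys": list(meta.get("public_keys", {}).values()),
    }
```

A real bootstrap would then apply these values (set the hostname, write authorized_keys, configure networking); the sketch stops at reading them.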
To make matters worse, OpenStack currently assumes different image formats for each supported hypervisor. One of the sad punchlines from Troy Toman’s keynote was that Rackspace’s private cloud distro, named Alamo, does not interoperate with their public cloud, even though they’re both OpenStack. The good news is that, as Troy went on to say, the time has come to focus on interoperability.
I got into a great conversation with Jesse Andrews, one of the original OpenStack guys, now at Nebula. He described an approach to image interoperability that enables cloud operators to provide custom image workers at image ingestion time. This lets cloud providers register custom image processing code that gets called whenever an image is uploaded to Glance. The simplest case of this is to convert image formats, to enable Alamo KVM images to run on Rackspace’s Xen-based public cloud.
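As a sketch of what the simplest such worker might do at ingestion time, the hypothetical function below picks an output disk format for a target hypervisor and builds the matching qemu-img convert command (qemu-img's -f and -O flags select the source and output formats; "vpc" is qemu-img's name for the VHD format used by Xen and Hyper-V clouds). The hypervisor-to-format mapping is illustrative only, not how any particular cloud actually maps them.

```python
def conversion_command(src_path, src_format, target_hypervisor):
    """Hypothetical Glance image-worker step: decide whether an uploaded
    image needs converting for the target hypervisor and, if so, build
    the qemu-img command that would do it. Returns None when the image
    is already in the right format."""
    # Illustrative mapping; real deployments choose their own formats.
    target_formats = {"kvm": "qcow2", "xen": "vpc",
                      "vmware": "vmdk", "hyperv": "vpc"}
    out_format = target_formats[target_hypervisor]
    if out_format == src_format:
        return None  # nothing to do at ingestion time
    dst_path = src_path.rsplit(".", 1)[0] + "." + out_format
    return ["qemu-img", "convert",
            "-f", src_format,   # source disk format
            "-O", out_format,   # output disk format
            src_path, dst_path]

print(conversion_command("alamo-image.qcow2", "qcow2", "xen"))
```

In a real worker this command would be handed to a subprocess after the Glance upload completes; building it as a list keeps the step easy to test and free of shell-quoting issues.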
Fortunately, IBM’s SmartCloud Provisioning (SCP) includes some image management technologies which can help with the more challenging problems mentioned above. Today’s SCP 2.1 will interrogate images in the library and check for cross-hypervisor compatibility. Users gain visibility into this information and can optionally automate fixes wherever possible. We also use this technique to detect the presence of a critical guest agent.
This brings me to one of my favorite little open source projects, cloud-init, created by Scott Moser at Canonical. If only it wasn’t GPL ;-). Many OpenStackers are using cloud-init to automate the system configuration pull from config drive v2 mentioned above. This little bootstrap can do much, much more, but this is certainly a great job for this trusty little tool. Unfortunately, it’s Linux-only. It’s even been made to work with Fedora and will likely be included in RHEL. Since we cannot use GPL code in IBM products, we have a similar bootstrap for both Windows and Linux guests. We’re working with our lawyers to get approval to contribute this code to cloud-init. Of course, if Canonical wants to use a more commercially friendly license like OpenStack has done, then I could spend less time with lawyers and more time hacking code ;-).
The beauty of this little bootstrap is its simplicity. This simplicity enables us to automatically inject the bootstrap into Windows and Linux images, which will let us automatically fix up any old VMware or Hyper-V image so that it works on OpenStack. This is a critical first step towards interoperability.
OpenStack is truly becoming an industry-changing and historic project. With so many incredibly talented people from countless companies across the globe, it’s no wonder there is so much innovation in the community. I’m really happy to be a part of this growing community. Together, I believe we can change the industry for the better. If you would like to be part of this growing and innovative project, check out the “community” link at www.openstack.org. Also, we would like to invite you to check back here for future blogs on OpenStack and IBM’s involvement. OpenStack is a big part of IBM’s open cloud strategy and we want to be sure to keep you up to date on our progress.