- Freely download the code and run it unattended on your premises, without the need to sign a non-disclosure agreement
- Discuss what you think about that on a dedicated forum
- Watch demonstrations of IBM SmartCloud Provisioning capabilities at work, and tell us whether you like the newest features with the click of a button
- Join our community to get early access to and provide feedback on cloud provisioning and orchestration technologies
- Stay tuned to the community to hear the latest news on available code drops and features
- Play with the product on our premises by joining the hosted beta. To access the hosted beta, send an email to firstname.lastname@example.org
The open beta program for the upcoming IBM SmartCloud Provisioning release has started:
IBM SmartCloud Provisioning (previously known as IBM Service Agility Accelerator for Cloud) fully embraces the transparent development philosophy. Starting today, you can join our open beta program. This program is intended to raise awareness of IBM SmartCloud Provisioning with the widest possible audience and to provide a feedback mechanism that lets you tell us what you like about the product and what we could improve. The code can be downloaded from https://www14.software.ibm.com/iwm/web/cc/earlyprograms/tivoli/P2044/index.shtml. Due to the open nature of this beta program, the code is time-bombed: you can use it until December 31, 2011. You can discuss issues related to the code drop in this forum: http://www.ibm.com/developerworks/forums/forum.jspa?forumID=2673
cynthyap 110000GC4C Tags:  virtualization cloud_computing provisioning cloud smartcloud 4,538 Views
Today IBM announced new SmartCloud Foundation capabilities to help organizations realize the potential of cloud computing. Watch the replay of the IBM SmartCloud launch webcast to learn more about how the new announcements, including IBM SmartCloud Provisioning (delivered by IBM Service Agility Accelerator for Cloud), can help customers move beyond virtualization to more advanced cloud deployments.
Birgit.Nuechter 270001WHA9 Tags:  provisioning openstack cloud patterns smartcloud orchestrator 10,006 Views
IBM SmartCloud Orchestrator, the first new private cloud offering based on OpenStack and other cloud standards, is now available. Users are looking for cloud solutions that increase agility, reduce costs, and offer a competitive advantage. IBM SmartCloud Orchestrator meets those needs:
Get started today!
SmartCloud Orchestrator Analyst and Press Coverage:
marcese 11000065AG Tags:  cloud image smart library image-library drift analysis provisioning 5,737 Views
In this blog post I would like to describe a root-cause detection scenario using IBM SmartCloud Provisioning.
Given the ever-increasing number of virtual machine instances and VM images in a cloud ecosystem, it is becoming more and more important to track each virtual image's contents and configuration, mainly for standardization and consolidation purposes.
Tracking this content is also useful when there is a need to identify the "drift" between a deployed virtual machine and the virtual image that was used to create it, as in the scenario described below.
As soon as a virtual machine is deployed from a virtual image, its content starts to change: the owner of the virtual machine creates new files, uses its applications, installs and uninstalls software, and so on. As a result of these actions, the system, or a specific application, may no longer work correctly. At that point, one way to understand the cause of such malfunctions is to identify all the changes applied to the instance compared with the source virtual image, review them, and try to pinpoint the "culprit" change so that appropriate repair actions can be taken. This is a typical scenario where the IBM Virtual Image Library component of IBM SmartCloud Provisioning can help, through its indexing and drift analysis capabilities.
As highlighted in a previous blog entry, the IBM Virtual Image Library is a tool that provides sophisticated image-management capabilities customers can use to tackle the difficult problem of understanding and controlling the contents of their virtual infrastructure. Let's see how this tool can help troubleshoot the scenario described above.
The first step is to identify the failing virtual machine among those available in the IBM Virtual Image Library repositories. The tool continuously indexes the configured repositories of virtual machines and images, so its data model is always up to date with the actual content of the virtual infrastructure.
Once the virtual machine has been identified, the next step is to retrieve the virtual image from which it was deployed. This is another feature of the tool, which keeps track of the entire tree of relationships among the virtual images and virtual machines available in the environment.
The next step, if not already done, is to run an indexing operation on the virtual machine so that its content (installed applications, OS information, and file-level information) can be retrieved and brought into the tool's data model.
Once the indexing is complete, the source virtual image content and the virtual machine content can be compared. A list of differences is presented to the user, who can review them and decide which differences are the most likely cause of the problem.
For example, from this report the user may notice that a suspect application that shouldn't be there has been installed on the virtual machine, or that a configuration file used by the malfunctioning application has been modified. These hints are a starting point for troubleshooting the issue and taking repair actions.
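The comparison step described above can be sketched in a few lines of Python. This is an illustrative model only, not the Virtual Image Library's actual API or data model: the manifests below are hypothetical dictionaries mapping file paths to checksums.

```python
# Illustrative drift analysis: compare a deployed VM's file manifest
# against the manifest of its source virtual image. The {path: checksum}
# manifests are hypothetical, not the real Virtual Image Library model.

def drift(image_manifest, vm_manifest):
    added = sorted(set(vm_manifest) - set(image_manifest))
    removed = sorted(set(image_manifest) - set(vm_manifest))
    modified = sorted(
        path for path in set(image_manifest) & set(vm_manifest)
        if image_manifest[path] != vm_manifest[path]
    )
    return {"added": added, "removed": removed, "modified": modified}

# Hypothetical example data: a changed config file and an unexpected script.
image = {"/etc/app.conf": "aaa1", "/usr/bin/app": "bbb2"}
vm = {"/etc/app.conf": "ccc3", "/usr/bin/app": "bbb2", "/tmp/suspect.sh": "ddd4"}

report = drift(image, vm)
# A modified configuration file or an unexpected new file is a natural
# starting point for troubleshooting.
```

In this sketch, the modified `/etc/app.conf` and the added `/tmp/suspect.sh` would be the "culprit" candidates a user reviews first.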
The following video demonstrates the capabilities described above by means of an example.
What has been described here is just one example of the drift analysis capabilities of the IBM Virtual Image Library, intended as an introduction to the advanced features of this component. If you are interested in understanding more deeply how the IBM Virtual Image Library works, and in a summary of all of its capabilities, take a look at the following paper:
rossella 120000Q98F Tags:  provisioning demo cloud icon library image iaas virtualization management accelerator smartcloud 11,434 Views
As part of the transparent development initiative, IBM SmartCloud Provisioning (formerly known as IBM Service Agility Accelerator for Cloud) is launching a series of daily demos, starting November 7th. Each session takes about one hour.
This way, you can see almost in real time what is happening in IBM SmartCloud Provisioning development and learn about new and enhanced capabilities.
If you are interested in joining the sessions, here is the schedule in Central European Time (CET):
The sessions will be focused on image management.
No password is required.
cynthyap 110000GC4C Tags:  cloud-computing provisioning cloudops cloud cloud-monitoring devops cloud_computing smartcloud smartcloudprovisioning 7,766 Views
DevOps has become something of a buzzword lately but the idea behind it can be truly powerful. Using a combination of technology and best practices to increase collaboration between development and operations teams can accelerate the application development lifecycle while improving software quality and reducing costs.
For many, the development process has become more complex and segregated from operations. Factors such as inefficient communications, manual processes and poor visibility into the deployment process result in production bottlenecks as well as subpar quality throughout the development and delivery cycle.
To address these challenges, organizations have often turned to ad hoc and siloed efforts, so gaps still exist due to a lack of integration across people, processes, and tools. The reality is that an effective DevOps solution requires an integrated approach to continuous delivery that optimizes and accelerates the application lifecycle in every phase: development, testing, staging, and production.
What this means is that changes made in development are continuously built, integrated, and tested for function, performance, system verification, and user acceptance, and then staged, ready for production. It can all be brought together through an integration framework that automates the individual tasks across the various stages of the pipeline and continuously delivers changes, providing end-to-end lifecycle management. Continuous automation is necessary in the following key areas:
• Continuous integration provides faster validation and delivery of code changes via automated, repeatable execution of build processes with continuous feedback
• Continuous deployment provides on-demand environment configuration and the ability to continuously deploy code and configure middleware.
• Continuous testing automates testing in production-like environments.
• Continuous monitoring increases visibility into application performance and provides data to trace and isolate product defects.
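The four areas above can be pictured as automated stages in a single delivery pipeline. The sketch below is a generic illustration of that idea, not part of SmartCloud Continuous Delivery; the stage names and step functions are hypothetical.

```python
# Generic continuous-delivery pipeline sketch: each stage is an
# automated, repeatable step, and a failing stage stops promotion
# of a change toward production. Stage names are illustrative only.

def run_pipeline(change, stages):
    """Run each (name, step) pair in order; stop at the first failure."""
    results = []
    for name, step in stages:
        ok = step(change)
        results.append((name, ok))
        if not ok:
            break  # continuous feedback: fail fast and report the stage
    return results

# Stub steps standing in for real build/deploy/test/monitor tooling.
stages = [
    ("build", lambda c: True),            # continuous integration
    ("deploy-test-env", lambda c: True),  # continuous deployment
    ("automated-tests", lambda c: True),  # continuous testing
    ("monitor-staging", lambda c: True),  # continuous monitoring
]

results = run_pipeline({"id": 42}, stages)
```

The point of the sketch is the ordering and the fail-fast gate between stages; real pipelines replace the lambdas with build, deployment, test, and monitoring tools.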
With an automated process for moving application changes through progressively richer test environments that mirror the production environment, the chances of errors and rollbacks are greatly reduced.
The result is increased visibility into the delivery pipeline, standardized communication between Dev and Ops and more efficient and accurate delivery of software projects. And the delivery process can scale dynamically as business needs grow.
Here’s how IBM is addressing DevOps, with the launch of SmartCloud Continuous Delivery--an agile, scalable and flexible solution for end-to-end lifecycle management that allows organizations to reduce software delivery cycle times and improve quality. SmartCloud Continuous Delivery is also available on Jazz.net.
cynthyap 110000GC4C Tags:  image_management cloud provisioning usage monitoring virtualization cloud_computing cloud-computing orchestration 2 Comments 28,862 Views
Orchestration can be one of those ambiguous concepts in cloud computing, with varying definitions on when cloud capabilities truly advance into the orchestration realm. Frequently it’s defined simply as automation = orchestration.
But automation is just the starting point for cloud. And as organizations move from managing their virtualized environment, they need to aggregate capabilities for a private cloud to work effectively. The automation of storage, network, performance and provisioning are all aspects handled in most cases by various solutions that have been added on over time as needs increase. Even for organizations that take a transformational approach -- jumping to an advanced cloud to optimize their data centers -- the management of heterogeneous environments with disparate systems can be a challenge not simply addressed by automation alone. As the saying goes, “If you automate a mess, you get an automated mess.”
The need to orchestrate really becomes clear when various aspects of cloud management are brought together. The value to the organization at this stage of cloud is simplifying the management of automation – otherwise a balancing act to manage multiple hypervisors, resource usage, availability, scalability, performance and more -- based on business needs from the cloud, with the ultimate goal of delivering services faster.
With orchestration, the pieces are woven together and can be managed more effectively to ensure smooth and rapid service delivery -- and delivered in a user-friendly catalog of services easily accessible through a single pane of glass. In essence, cloud orchestration = automation + integration + best practices.
Without cloud orchestration, it's difficult to realize the full benefits of cloud computing. The stitching together of best practices and automated tasks and processes becomes essential to optimize a wide spectrum of workload types.
IBM SmartCloud Cost Management now provides usage metering and reporting for IBM SmartCloud Provisioning (SCP). This is now available for download on the ISM Library here: http://www.ibm.com/software/ismlibrary?NavCode=1TW10UM08
This new capability allows you to collect usage information from SCP environments using the SCP High Scale Low Touch (HSLT) commands. The new HSLT SCP Collector gathers usage data every hour and processes it once a day. Usage, Detail, and Identifier data is stored on a daily basis. The usage data is then billed, stored, and can be reported on monthly.
A sample job file is provided as part of this functionality to show how to bill each access id for the high-water mark of allocated resources in the month. The sample job file, SampleHSLT_SCP.xml, is divided into three separate jobs.
The first job, SCP_collect_HSLT_hourly_data, is recommended to be run every hour at XX:59. This job runs HSLT commands to collect all relevant resources for each access id that is using the SmartCloud Provisioning service. First, a list of all available access ids is collected using the command iaas-describe-accesses-by-user.
Then, for each access id, the command iaas-describe-resources-inuse-by-access is run to collect the relevant resources for that access id. The resources gathered per access id include:
Memory (MB), Volume (GB), Number of Virtual Processors, Number of VM Instances, and Number of static IP Addresses.
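The hourly collection step might be scripted along the lines of the sketch below. The two command names come from the text above, but how they are invoked, how their output is parsed, and the CSV layout are all assumptions for illustration; this is not the actual sample job file.

```python
# Sketch of the hourly HSLT collection step. The command names
# (iaas-describe-accesses-by-user, iaas-describe-resources-inuse-by-access)
# are from the product; the invocation, parsing, and CSV layout here
# are illustrative assumptions only.
import csv
import io

def collect_hourly(list_accesses, describe_resources):
    """list_accesses() -> list of access ids (stands in for
    iaas-describe-accesses-by-user); describe_resources(access_id) ->
    {resource name: value} (stands in for
    iaas-describe-resources-inuse-by-access)."""
    out = io.StringIO()
    writer = csv.writer(out)
    for access_id in list_accesses():
        resources = describe_resources(access_id)
        for name, value in sorted(resources.items()):
            writer.writerow([access_id, name, value])
    return out.getvalue()

# Stub data standing in for the real command output:
csr = collect_hourly(
    lambda: ["access-1"],
    lambda a: {"Memory (MB)": 2048, "VM Instances": 3},
)
```

In the real job, each hourly run appends rows like these to the daily CSR file that the second job processes after midnight.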
The HSLT commands also provide context information that feeds into the Account Code Structure. The Account Code Structure includes the following identifiers:
The second job, SCP_Process_daily_data, is recommended to be run every day, some time after midnight. This job processes the daily CSR file and extracts the maximum value across the day for each resource, for each access id. The resource values are then stored in the cimsresourceutilization table of the SmartCloud Cost Management database. Detail and Identifier data is stored in the cimsdetail and cimsident tables of the SmartCloud Cost Management database.
The third job, SCP_Process_monthly_data, is recommended to be run once a month, at the start of the month. It processes the last month's worth of data from the cimsresourceutilization table by extracting the maximum value for each resource, for each access id. Billing is applied to the data using the relevant SmartCloud Cost Management rate codes, and the processed data is then stored in the cimssummary table of the SmartCloud Cost Management database, allowing reports to be run on the data.
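The high-water-mark billing logic of the monthly job can be illustrated in a few lines of Python. The data shapes and the rates below are hypothetical; the real job reads the cimsresourceutilization table and applies SmartCloud Cost Management rate codes.

```python
# Illustrative monthly billing: take the maximum daily value per
# (access id, resource) across the month, then apply a per-resource
# rate. Data shapes and rates are hypothetical.
from collections import defaultdict

def monthly_high_water(daily_rows):
    """daily_rows: iterable of (access_id, resource, value) from the
    daily processing step; returns the month's peak per key."""
    peaks = defaultdict(float)
    for access_id, resource, value in daily_rows:
        key = (access_id, resource)
        peaks[key] = max(peaks[key], value)
    return dict(peaks)

def bill(peaks, rates):
    """Apply a hypothetical flat rate per resource to each peak."""
    return {key: value * rates[key[1]] for key, value in peaks.items()}

rows = [
    ("access-1", "Memory (MB)", 1024),
    ("access-1", "Memory (MB)", 2048),  # the month's high-water mark
    ("access-1", "VM Instances", 2),
]
charges = bill(monthly_high_water(rows),
               {"Memory (MB)": 0.01, "VM Instances": 5.0})
```

Billing on the peak rather than the sum is what makes this a high-water-mark scheme: a user pays for the most they held at any point in the month.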
The sample jobs can be customized for other charging algorithms if desired; examples include charging on a daily (or hourly) basis in addition to, or instead of, a monthly basis. Tiered pricing logic can be applied, for example to grant a charging amnesty to users or departments that stay below a certain threshold.
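A tiered scheme with an amnesty threshold, as mentioned above, could look like the following sketch. The thresholds and rates are invented for illustration and do not reflect any shipped rate codes.

```python
# Illustrative tiered charge with an amnesty threshold: usage at or
# below the threshold is free; only the excess is billed, and a lower
# rate applies above a second tier. All numbers are hypothetical.

def tiered_charge(usage, amnesty=100, tier2_start=500,
                  rate1=0.10, rate2=0.08):
    if usage <= amnesty:
        return 0.0  # charging amnesty: below the threshold, no charge
    billable = usage - amnesty
    tier1_units = min(billable, tier2_start - amnesty)
    tier2_units = max(billable - tier1_units, 0)
    return tier1_units * rate1 + tier2_units * rate2
```

A customization like this would replace the flat per-resource rate in the sample job with a function of the metered amount.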
Rates are defined for each resource. These rates are used for billing purposes.
Additions have also been made to the existing SmartCloud Cost Management KVM collector to include new resources, and a separate job file has been included to add some SCP context data to the Account Code Structure, achieved by running HSLT commands.
For information about the existing TUAM KVM collector refer to the following link in the TUAM 7.3 Information Center:
The new resources for the KVM Collector include Bytes Received, Packets Received, Receive Packets Dropped, Receive Packet Errors, Bytes Transferred, Packets Transferred, Transfer Packets Dropped, Transfer Packet Errors, Log Size of VM Image, and Size of VM Image on Disk.
The new Account Code Structure for the KVM Collector contains the following identifiers: Service Region, Group, Username, Access id, VM Name
The VM Name contains the Access id, allowing the information collected from the hypervisor to be related back to the SmartCloud Provisioning identifiers.
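The correlation just described, recovering the SCP access id from the KVM VM name to join hypervisor usage back to SCP identifiers, might look like the sketch below. The "access id, then a dash, then a suffix" naming convention used here is an assumption for illustration, not the documented format.

```python
# Illustrative join of hypervisor usage back to SCP identifiers via
# the VM name. The "<access_id>-<suffix>" convention is an assumption.

def access_id_from_vm_name(vm_name):
    """Assume the access id is the part of the VM name before the
    first dash (hypothetical convention)."""
    return vm_name.split("-", 1)[0]

def join_usage(hypervisor_rows):
    """hypervisor_rows: iterable of (vm_name, bytes_received);
    aggregate the hypervisor metric per SCP access id."""
    by_access = {}
    for vm_name, bytes_received in hypervisor_rows:
        aid = access_id_from_vm_name(vm_name)
        by_access[aid] = by_access.get(aid, 0) + bytes_received
    return by_access

usage = join_usage([("acc1-vm01", 1000), ("acc1-vm02", 500),
                    ("acc2-vm01", 700)])
```

Whatever the real naming scheme, the idea is the same: the VM name carries the access id, so hypervisor-level metrics can be rolled up per SCP user.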
The following reports are sample reports run on a system that has collected data from one Service Region on a SmartCloud Provisioning System:
Top 10 Pie Chart
Invoice By Account Level
Note also that other existing SmartCloud Cost Management collectors can collect information from VMware and Power hypervisors.
See the Information Center (http://pic.dhe.ibm.com/infocenter/tivihelp/v3r1/topic/com.ibm.ituam.doc_7.3/admin_win_dc/c_core_data_collectors.html) for details.
If you have any questions about this functionality, please contact John Buckley (John Buckley/Ireland/IBM) or Louise O'Halloran (Louise O'Halloran/Ireland/IBM).
cynthyap 110000GC4C Tags:  virtualization provider cloud csp cloud-computing cloud_computing msp service provisioning 8,349 Views
With the proliferation of cloud computing, many businesses are starting to adopt a service provider model—either as a deliberate strategy to establish new revenue streams or, in some cases, inadvertently to support the growing needs of their organizations. This is especially true for companies with diverse needs, whether they’re tech companies with dev teams churning out new apps and services, or business owners driving requirements for SaaS services and cloud capabilities to enhance their data center operations.
In any event, the distinction between managed service providers (MSPs) or cloud service providers (CSPs) and companies growing in-house capabilities may not be as important as the common need to respond quickly and scale to support customer needs. The challenges facing all of these companies include facilitating the creation of new applications and services while maintaining quality of service, and the need for automation to reduce the manual effort and human error of manual tasks, all with an eye to driving revenue and acquiring new customers.
And so, the challenge for service providers of any kind is to increase scalability, automation and uptime while constraining costs. Companies are increasingly solving the critical piece of this puzzle by embracing rapid, high-scale provisioning and key cloud management capabilities to allow them to grow as quickly as their customers’ needs. In particular, the benefits accrue in four key areas.
First, applications can be deployed rapidly across private and public cloud resources.
Second, rich image management tools simplify complex and time consuming processes for creating virtual images and constraining image sprawl.
Third, operational costs can be lowered by leveraging existing hardware to support an array of virtual servers and diverse hypervisors.
And fourth, high-scale provisioning enables rapid response to changing business needs with near-instant deployment of hundreds of virtual machines.
While the spectrum of functionality from virtualization to orchestration helps service providers manage their environments, high-scale provisioning in particular offers a cost-effective way to treat capacity as a business commodity: a way for service providers to offer seemingly limitless capacity to their customers while lowering the relative cost of providing it.
In the case of Dutch Cloud, a CSP based in the Netherlands, a growing client base allowed the company to expand but it was very conscious of the costs and issues related to scalability, performance and security. By adopting a lightweight, high-scale provisioning solution for core service delivery, Dutch Cloud added capacity easily and was able to scale up rapidly without interruption to customer service. The CSP also reduced its administrative workload by 70 percent by adopting automation best practices. Monthly revenue has tripled twice in the last six months without an increase in operational costs.
Other service providers such as SLTN, a systems integrator serving large and mid-sized businesses, have experienced similar cost savings by extending platform managed services to a cloud delivery model. By implementing a low-touch, highly scalable cloud as its core delivery platform across multiple compute and storage nodes, SLTN was able to deploy new services in seconds rather than hours. It was also able to utilize existing commodity skills without significant training, integrate the existing mixed environment and minimize operational administration and maintenance. The underlying IaaS cloud capabilities allowed SLTN to be more efficient and to provide the full spectrum of cloud services to their own customers in a pay-as-you-go model—with better service and at a lower price point.
The benefits that these companies experienced are evidence that high-scale provisioning and cloud management capabilities can dramatically increase service capacity. For service providers of all stripes—whether deliberate or not—these benefits are a critical part of the evolution of cloud services and offer a meaningful way to deliver more value to themselves and their users.