cynthyap 110000GC4C Tags:  devops agile cloud development provisioning cloud-computing cloud_computing
Get involved with the new IBM DevOps project and beta on Jazz.net. IBM SmartCloud Continuous Delivery is an agile, scalable, and flexible solution for end-to-end lifecycle management and automation, creating an environment that takes collaboration between Development and Operations teams to the next level. Learn more.
IBM SmartCloud Cost Management now provides usage metering and reporting for IBM SmartCloud Provisioning (SCP). This is now available for download on the ISM Library here: http://www.ibm.com/software/ismlibrary?NavCode=1TW10UM08
This new capability allows you to collect usage information from SCP environments using the SCP High Scale Low Touch (HSLT) commands. The new HSLT SCP Collector gathers usage data every hour and processes it once a day. Usage, Detail, and Identifier data are stored on a daily basis. The usage data is then billed, stored, and reported on a monthly basis.
A sample job file is provided as part of this functionality to show how to bill each access id for the high-water mark of allocated resources in the month. The sample job file, SampleHSLT_SCP.xml, is divided into three separate jobs.
The first job, SCP_collect_HSLT_hourly_data, should be run every hour at XX:59. This job runs HSLT commands to collect all relevant resources for each access id that is using the SmartCloud Provisioning service. First, a list of all available access ids is collected using the command iaas-describe-accesses-by-user.
Then, for each access id, the command iaas-describe-resources-inuse-by-access is run to collect the relevant resources for that access id. The resources gathered per access id include:
Memory (MB), Volume (GB), Number of Virtual Processors, Number of VM Instances, and Number of static IP Addresses.
The HSLT commands also provide context information that feeds into the Account Code Structure. The Account Code Structure includes the following identifiers:
The second job, SCP_Process_daily_data, should be run every day some time after midnight. This job processes the daily CSR file and extracts the maximum value across the day for each resource for each access id. The resource values are then stored in the cimsresourceutilization table of the SmartCloud Cost Management database; Detail and Identifier data are stored in the cimsdetail and cimsident tables.
The third job, SCP_Process_monthly_data, should be run once a month, at the start of the month. It processes the last month's worth of data from the cimsresourceutilization table by extracting the maximum value for each resource for each access id. Billing is applied to the data using the relevant SmartCloud Cost Management rate codes, and the processed data is then stored in the cimssummary table of the SmartCloud Cost Management database, allowing reports to be run on the data.
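The daily and monthly aggregation steps described above can be sketched in plain Python. This is an illustrative sketch only, not the logic of the actual XML job file; the sample data and rate values are invented:

```python
from collections import defaultdict

# Hourly samples per (access id, resource), as the hourly job would gather them.
hourly_samples = [
    ("acc-01", "Memory (MB)", 2048), ("acc-01", "Memory (MB)", 4096),
    ("acc-01", "VM Instances", 2),   ("acc-01", "VM Instances", 3),
]

def daily_max(samples):
    """Daily job: keep the maximum value seen across the day
    for each resource of each access id."""
    maxima = defaultdict(int)
    for access_id, resource, value in samples:
        key = (access_id, resource)
        maxima[key] = max(maxima[key], value)
    return maxima

def monthly_bill(daily_maxima_per_day, rates):
    """Monthly job: take the high-water mark across the month's daily
    maxima and apply a rate per resource to produce billed amounts."""
    month_max = defaultdict(int)
    for day in daily_maxima_per_day:
        for key, value in day.items():
            month_max[key] = max(month_max[key], value)
    return {key: value * rates[key[1]] for key, value in month_max.items()}

day1 = daily_max(hourly_samples)
rates = {"Memory (MB)": 0.001, "VM Instances": 5.0}  # invented rates
bill = monthly_bill([day1], rates)
```

In the real product the rates come from SmartCloud Cost Management rate codes and the intermediate values live in the database tables named above; the shape of the computation is the same.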
The sample jobs can be customized for other charging algorithms if desired. Examples include charging on a daily (or hourly) basis, in addition to or instead of on a monthly basis. Tiered pricing logic can also be applied, for example to grant a charging amnesty to users or departments that stay below a certain threshold.
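A tiered scheme with an amnesty threshold, as mentioned above, could look like this (an illustrative sketch; the threshold and rate values are invented, and real job files would express this through rate codes):

```python
def tiered_charge(units, threshold=100, rate=0.5):
    """Charge nothing up to the amnesty threshold;
    bill only the units consumed above it."""
    billable = max(0, units - threshold)
    return billable * rate
```

A user at 80 units would be billed nothing, while a user at 150 units would be billed only for the 50 units above the threshold.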
Rates are defined for each resource. These rates are used for billing purposes.
Additions have also been made to the existing SmartCloud Cost Management KVM collector to include new resources, and a separate job file has been included to add some SCP context data to the Account Code Structure, achieved by running HSLT commands.
For information about the existing TUAM KVM collector, refer to the following link in the TUAM 7.3 Information Center:
The new resources for the KVM Collector include Bytes Received, Packets Received, Receive Packets dropped, Receive Packet errors, Bytes Transferred, Packets Transferred, Transfer Packets dropped, Transfer Packet errors, Log Size of VM Image, and Size of VM Image on Disk.
The new Account Code Structure for the KVM Collector contains the following identifiers: Service Region, Group, Username, Access id, VM Name
The VM Name contains the Access id, allowing the information collected from the hypervisor to be related back to the SmartCloud Provisioning identifiers.
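As a sketch of that correlation, assuming a hypothetical naming convention in which the access id is the prefix of the VM name separated by a hyphen (the source does not specify the actual convention):

```python
def access_id_from_vm_name(vm_name, separator="-"):
    """Split the VM name on the first separator; the leading part is taken
    to be the access id (hypothetical convention for illustration only).
    If no separator is present, the whole name is returned."""
    access_id, _, _ = vm_name.partition(separator)
    return access_id
```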
The following reports are sample reports run on a system that has collected data from one Service Region on a SmartCloud Provisioning System:
Top 10 Pie Chart
Invoice By Account Level
Note also that other existing SmartCloud Cost Management collectors can collect information from VMware and Power hypervisors.
See the Information Center (http://pic.dhe.ibm.com/infocenter/tivihelp/v3r1/topic/com.ibm.ituam.doc_7.3/admin_win_dc/c_core_data_collectors.html) for details.
If you have any questions about this functionality, please contact John Buckley (John Buckley/Ireland/IBM) or Louise O'Halloran (Louise O'Halloran/Ireland/IBM).
As of August 10, 2012, SmartCloud Provisioning 2.1 is generally available.
Here is a summary of the new features added:
If you would like to read more, see the IBM SmartCloud Provisioning announcement letter and the IBM SmartCloud Provisioning information center.
Pino 100000UGHN Tags:  smartcloud build provisioning image icct icon windows
In this post I would like to introduce a new function added in Image Construction and Composition Tool (ICCT) 1.2: the capability to extend a Windows base image available in IBM SmartCloud Provisioning 2.1.
In fact, in ICCT 1.2 it is possible to:
- import a Windows base image available in IBM SmartCloud Provisioning 2.1
- extend it by adding Windows bundles created using ICCT 1.2
- capture it, so that a new extended Windows image is available in IBM SmartCloud Provisioning 2.1
This function uses the same user interface and the same steps already available for Linux and AIX support.
The new extended image is then ready to be deployed, and its bundle configuration parameters are shown in the IBM SmartCloud Provisioning 2.1 user interface.
In ICCT 1.2, a new Windows Enablement bundle has been provided to support Windows images; it is described in the screenshot below:
It contains the Activation Framework needed by ICCT 1.2 to install and configure bundles based on the Windows IBM VSAE.
For Windows bundles created using ICCT 1.2, you can also provide, in the Installation, Configuration, and Reset tabs, the scripts for the installation, configuration, and reset steps invoked by ICCT when an image is extended.
Windows support is provided for the following operating systems:
cynthyap 110000GC4C Tags:  virtualization provider cloud csp cloud-computing msp cloud_computing service provisioning
With the proliferation of cloud computing, many businesses are starting to adopt a service provider model—either as a deliberate strategy to establish new revenue streams or, in some cases, inadvertently to support the growing needs of their organizations. This is especially true for companies with diverse needs, whether they’re tech companies with dev teams churning out new apps and services, or business owners driving requirements for SaaS services and cloud capabilities to enhance their data center operations.
In any event, the distinction between managed service providers (MSPs) and cloud service providers (CSPs) on the one hand, and companies growing in-house capabilities on the other, may not be as important as the common need to respond quickly and scale to support customer needs. The challenges facing all of these companies include facilitating the creation of new applications and services while maintaining quality of service, and the need for automation to reduce the human effort and error of manual tasks—all with an eye to driving revenue and acquiring new customers.
And so, the challenge for service providers of any kind is to increase scalability, automation and uptime while constraining costs. Companies are increasingly solving the critical piece of this puzzle by embracing rapid, high-scale provisioning and key cloud management capabilities to allow them to grow as quickly as their customers’ needs. In particular, the benefits accrue in four key areas.
First, applications can be deployed rapidly across private and public cloud resources.
Second, rich image management tools simplify the complex and time-consuming processes of creating virtual images and constraining image sprawl.
Third, operational costs can be lowered by leveraging existing hardware to support an array of virtual servers and diverse hypervisors.
And fourth, high-scale provisioning enables rapid response to changing business needs with near-instant deployment of hundreds of virtual machines.
While the full spectrum of functionality, from virtualization to orchestration, helps service providers manage their environments, high-scale provisioning in particular offers a cost-effective way to treat capacity as a business commodity—a way for service providers to offer seemingly limitless capacity to their customers while lowering the relative costs of providing it.
In the case of Dutch Cloud, a CSP based in the Netherlands, a growing client base allowed the company to expand, but it was very conscious of the costs and issues related to scalability, performance, and security. By adopting a lightweight, high-scale provisioning solution for core service delivery, Dutch Cloud added capacity easily and was able to scale up rapidly without interruption to customer service. The CSP also reduced its administrative workload by 70 percent by adopting automation best practices. Monthly revenue has tripled twice in the last six months without an increase in operational costs.
Other service providers such as SLTN, a systems integrator serving large and mid-sized businesses, have experienced similar cost savings by extending platform managed services to a cloud delivery model. By implementing a low-touch, highly scalable cloud as its core delivery platform across multiple compute and storage nodes, SLTN was able to deploy new services in seconds rather than hours. It was also able to utilize existing commodity skills without significant training, integrate the existing mixed environment and minimize operational administration and maintenance. The underlying IaaS cloud capabilities allowed SLTN to be more efficient and to provide the full spectrum of cloud services to their own customers in a pay-as-you-go model—with better service and at a lower price point.
The benefits that these companies experienced are evidence that high-scale provisioning and cloud management capabilities can dramatically increase service capacity. For service providers of all stripes—whether deliberate or not—these benefits are a critical part of the evolution of cloud services and offer a meaningful way to deliver more value to themselves and their users.
SandraWeiss 060000BCJJ Tags:  security provisioning image virtualization cloud smartcloud_security image_management smartcloud solutions maintenance endpoint
The solution Endpoint security for SmartCloud Provisioning v2.1 has been published on IBM Integrated Service Management Library (ISML).
The purpose of Endpoint security for SmartCloud Provisioning v2.1 is to demonstrate how IBM Tivoli Endpoint Manager can be integrated with the IBM SmartCloud Provisioning Infrastructure.
Endpoint security for SmartCloud Provisioning will generate the components required by IBM SmartCloud Provisioning 2.1 to automatically install IBM Tivoli Endpoint Manager agents when deploying virtual systems. This will allow cloud administrators to easily maintain compliance over their virtualized network.
IBM SmartCloud Provisioning v2.1 as well as IBM Tivoli Endpoint Manager v8.2 need to be available. If you are participating in the IBM SmartCloud Provisioning v2.1 beta and have IBM Tivoli Endpoint Manager, consider using Endpoint security as well.
Demo videos about Endpoint security for SmartCloud Provisioning v2.1 can be found here: https://www.ibm.com/developerworks/servicemanagement/cvm/cmi/security.html
This solution is available via the IBM Integrated Service Management Library (ISML). You can find it here -> Endpoint security for SmartCloud Provisioning v2.1 Beta Trial.
We welcome any and all feedback.
PQC6_jim_Markham 120000PQC6 Tags:  smartcloud provisioning storage kvm management tsm backup smartcloud_resilience cloud esx solutions integration vmware
There is a new white paper available on the IBM Integrated Service Management Library (ISML) that explains how to use Tivoli Storage Manager to back up a VMware virtual machine that was deployed by the Workload Deployer in IBM SmartCloud Provisioning version 2.1.
The white paper explains how to locate and back up the virtual machine in VMware using IBM Tivoli Storage Manager, and how to restore the virtual machine to the Workload Deployer environment.
The white paper can be downloaded from the IBM Integrated Service Management Library (ISML) by following this link -> Backing up and Restoring Workload Deployer Virtual Machines Deployed in VMware
marcese 11000065AG Tags:  smartcloud isaac icon script python build icct image provisioning
In this post I would like to describe how you can script the building of virtual images using the Image Construction and Composition Tool provided by IBM SmartCloud Provisioning.
The upcoming release of IBM SmartCloud Provisioning 2.1 embeds, among other things, a new version of the Image Construction and Composition Tool, which allows you to build virtual images that are self-descriptive, customizable, and manageable; ultimately, it produces Open Virtualization Appliance (OVA) images that can be deployed into a cloud environment.
One of the new features of this tool is the capability of performing image management operations directly through a command-line interface. This capability enables a set of new use cases through a scripting environment.
The command-line interface of the Image Construction and Composition Tool provides a scripting environment based on Jython (the Java-based implementation of Python), so in addition to issuing commands specific to the tool, you can also issue Python commands at the command prompt.
Using this interface, you can manage the Image Construction and Composition Tool remotely: you can download the interface to any machine and then point it at the system where the tool is running. It communicates with the server over HTTPS, so all communications are encrypted. The command-line interface can be installed on both Linux and Windows operating systems and can run in both interactive and batch modes.
Anything that can be managed in the Image Construction and Composition Tool is modelled by a resource object in the command-line interface, which exposes a set of methods for performing the related management actions. The following objects are available:
- software bundle references, for defining software configurations to be deployed on a virtual machine
- cloud provider references, for defining the hypervisors used by the Image Construction and Composition Tool to build and capture images
- image references, for handling virtual machine images used in import, extend, capture, and export operations
- user references, for administering the users of the Image Construction and Composition Tool
Once you have downloaded and configured the command-line interface, you can start a new session in interactive mode by issuing the following command from a shell prompt:
<icct_cli-install-dir>/bin/icct -h <icct server> -u username -p password
Once you get the interactive shell, you can start issuing commands.
Here are a few examples.
To get a list of all the images for a cloud provider, you can issue a similar command from the interactive shell.
To import a software bundle and wait for the import to complete, you can use a set of commands like the following:
>>> importingBundle = icct.bundles.import('http://localhost/myBundle.ras')
>>> if importingBundle.currentState == 'import_failed':
...     print 'Bundle import failed!'
To get a list of all the images, you can use a command like the following:
>>> allImages = icct.images
And so on.
You can also use the Image Construction and Composition Tool command-line interface in batch mode, by creating your own script and then launching it. For example, to run a script called myScript.py you can issue the following command:
icct -h <icct server> -u username -p password -f myScript.py arg1 arg2 arg3
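The trailing arguments (arg1 arg2 arg3 above) reach the script in the usual Python way, via sys.argv. A minimal, self-contained sketch of a batch script's argument handling (the icct object itself exists only inside the tool's shell, so it is omitted here):

```python
import sys

def parse_args(argv):
    """Return the positional arguments passed after '-f myScript.py'.
    argv[0] is the script name; the rest are arg1, arg2, ..."""
    return list(argv[1:])

if __name__ == "__main__":
    args = parse_args(sys.argv)
    # A real batch script would now call icct.* operations using these args.
```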
A few samples come directly with Image Construction and Composition Tool. They are located under the following directory:
They cover some of the basic Image Construction and Composition Tool flows, such as creating a new cloud provider configuration, importing an image, and extending an image.
You can use them as a starting point for creating your own workflows.
That's all for now.
This was just a quick introduction to the capabilities of the Image Construction and Composition Tool command-line interface. If you are interested in discovering more about the tool, its command-line interface, and SCP 2.1, you can have a look at what is included in the IBM SmartCloud Provisioning beta code:
rossella 120000Q98F Tags:  isaac security provisioning segregation smartcloud access
If you have ever observed babies playing, you'll have noticed that at a certain point in their development the idea of property enters the game: "this is my toy, I'll not let you play with it". Parents usually need to invest some time to make the baby understand the value of sharing: "the toy remains yours, but you can enjoy sharing it with other babies... and if you are kind and polite, the other babies may share their toys with you in turn". Usually this trick works. The next step is that they start adding "special conditions": "you can use my blocks, but only the blue ones" or "you can play with this doll, but I'll not lend you the pink dress". A different story arises when sharing can save you a lot of money: you do not need to buy the same toy your baby saw another baby using if they can share it...
Did you ever try to apply this model to cloud computing?
I know it may sound strange at first glance, but there are some similarities...
Let's start from the last example, kids sharing the same toys: doesn't it resemble the idea of sharing the same master image? In a lot of cases I do not need my own master image; I can use the same one another user is using.
But "conditions" apply: "you can use my master image, but I do not want you on my own network!" or "you can use my master image, but you cannot use my package scripts!"... Not so different from "you can play with my doll, but I'll not give you the pink dress" or "you can play with my blocks, but only the blue ones".
There will be situations in which you do not want to share the master image at all: "this is mine, it's my treasure, I have my own information there and I do not want you to see it"... I'm pretty sure you've seen babies doing that with their favorite teddy bear ;-)
I hope these few examples made you look at object authorizations in a cloud with different eyes...
Anyway, the problem is there: a cloud is typically a shared environment, and we do not want everybody to have access to everything. Privacy is important.
Let's see one way to resolve this issue. We could give every user the right to determine who can access his own objects; "who", of course, can be a single user or a group of users. Depending on the role of the user, he can have access to different objects.
The cloud administrator, for example, can decide who can access a specific network or see a specific cloud group; the cloud catalog editor can decide who can access which master image or which package scripts (package scripts are the building blocks for patterns); the image deployer can decide whether somebody else can see the details of his images. In some cases he may also be interested in letting other users access his own volumes.
With the same ease, a user can decide to grant full access, read-only access, or no access at all to each of his own resources/objects.
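The access model described above, owner-controlled grants of full, read-only, or no access per user or group, can be sketched as a simple lookup. This is an illustrative data structure only, not the product's implementation; all names are invented:

```python
FULL, READ_ONLY, NONE = "full", "read-only", "none"

class Resource:
    """An owned object (image, network, volume, package script, ...)
    whose owner sets per-user and per-group access levels."""
    def __init__(self, owner):
        self.owner = owner
        self.user_acl = {}    # user name -> access level
        self.group_acl = {}   # group name -> access level

    def grant(self, subject, level, group=False):
        (self.group_acl if group else self.user_acl)[subject] = level

    def access_for(self, user, groups=()):
        if user == self.owner:
            return FULL              # owners always have full access
        if user in self.user_acl:
            return self.user_acl[user]
        # Otherwise use the most permissive matching group entry.
        levels = [self.group_acl[g] for g in groups if g in self.group_acl]
        if FULL in levels:
            return FULL
        if READ_ONLY in levels:
            return READ_ONLY
        return NONE

img = Resource(owner="alice")
img.grant("bob", READ_ONLY)
img.grant("ops", FULL, group=True)
```

With these grants, bob can only read the image, any member of the ops group has full access, and everyone else sees nothing.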
Such a fine-grained access policy makes the cloud software flexible enough to fit various adoption models, from a classical private cloud to a more complex environment like the ones a cloud service provider may have.
In case of enterprises and cloud service providers, authorization and network segregation are critical prerequisites for building and managing a secure cloud environment.
For this, SmartCloud Provisioning is the right choice.
You can also rely on a robust auditing mechanism that allows you to track what is happening in the cloud: who logged in and out, user creation/deletion/update, data access attempts (whether successful or not), virtual machine instance creation/deletion/update, and much more...
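An audit trail of that kind can be modelled as append-only records like these (an illustrative sketch, not the product's audit format; the event names are invented):

```python
import datetime

audit_log = []

def record_event(actor, action, target, success=True):
    """Append an audit record: who did what, to what, when,
    and whether the attempt succeeded."""
    audit_log.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": actor,
        "action": action,   # e.g. "login", "logout", "vm.create", "data.access"
        "target": target,
        "success": success,
    })

record_event("alice", "login", "console")
record_event("bob", "data.access", "volume-7", success=False)
```

Recording failed attempts as well as successful ones is what makes the trail useful for spotting unauthorized access.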
If you are interested in walking through this model, you can have a look at what is included in IBM SmartCloud Provisioning beta code:
cynthyap 110000GC4C Tags:  security management provisioning virtualization patch cloud-computing cloud
We know that cloud computing offers a myriad of benefits like rapid service delivery and lower operating costs. But it can also lead to challenges in data governance, access control, activity monitoring and visibility of dynamic resources—in essence, all aspects of IT security.
The IT organization must be able both to deliver services more quickly to meet the demands of the business and to provide high levels of security and compliance. In the past, the delivery of services was typically the bottleneck in providing new services, but now, with automated cloud and self-service delivery models, the teams responsible for change management and security can quickly become the bottleneck due to manual processes and siloed tools.
For example, organizations need the ability to patch all of their systems, both physical and virtual, whether distributed or part of a cloud. Operations teams need better insight into and control of deployed virtual systems, including OS patch levels, installed middleware applications, and related security configurations. And there can be too many security exposures with offline and suspended VMs that haven't been patched in weeks or months.
A holistic approach is needed that addresses rapid provisioning of services and automation of key security and compliance requirements. Together these capabilities can keep you in control of rapidly changing cloud environments. First let’s look at the capabilities needed in a cloud provisioning solution.
Cloud provisioning should combine application and image provisioning for workload optimized clouds and deliver:
· Reduced costs through automated high-scale provisioning, multiple hypervisor options, and hardware of choice
· Accelerated time-to-market with standardized pattern-based deployment for workload optimized cloud
· Image sprawl prevention with built-in advanced image lifecycle management capabilities
· Ease of adoption and clear roadmap to move to advanced cloud capabilities
Second, a unified endpoint management approach is required to provide visibility and control of your systems, regardless of context, location or connectivity, and needs to deliver:
· Heterogeneous platform support with seamless patch management for multiple operating systems, including Microsoft Windows, Unix, Linux and Mac OS, as well as hypervisor platforms
· Automatic assessment and “single click” remediation, which shortens time to compliance by automatically identifying necessary patches and enabling users to target and remediate endpoints quickly
· Enterprise-class scalability and security to provide proven scalability, including fine-grained authorization and access control capabilities
Explore these capabilities with the new IBM SmartCloud Patch Management.