LizCrider Tags:  tuc_webcast ibm_smartcloud cloud_webcast cloud tivoli_user_community_web... xavier_giannakopoulosis cloud_data_transformation 4,208 Views
Please join the Tivoli User Community for a live Webcast and opportunity for questions, Thursday, July 19th 2012, 11:00 AM ET
Reserve Your Webcast Seat Now
Cloud computing has been driving increased innovation and flexibility, but this shift has also introduced new complexity into IT and process automation. Multiple topics are now emerging on the cloud manager's radar, all pointing toward easier management of the entire workload life cycle.
The all-new IBM SmartCloud Workload Automation provides you with a perfect entry point into unattended workloads, a critical topic for making clouds more cost-effective.
With the new per-job pricing, the solution is even more attractive and affordable. After establishing best practices in your organization, it's now time to explore and learn the "next practices" in the historic world of batch and beyond with the new IBM SmartCloud Workload Automation. Learn More
About the Speaker: Xavier Giannakopoulos
IBM Tivoli Workload Automation – Product Manager
Xavier Giannakopoulos is the product manager for Tivoli Workload Automation, where he has worldwide responsibility, primarily on the distributed side. He has working knowledge of the development process, technical support, HR management, and client handling. Click Here to visit his TUC Profile
The Official Tivoli User Community is the largest online and offline organization of Tivoli professionals in the world – home to over 160 local User Communities and dozens of virtual/global groups from 29 countries – with more than 26,000 members. The TUC community offers Users blogs and forums for discussion and collaboration, access to the latest whitepapers, webinars, presentations and research for Users, by Users and the latest information on Tivoli products. The Tivoli User Community offers the opportunity to learn and collaborate on the latest topics and issues that matter most. Membership is complimentary. Join NOW!
marvin_goodman Tags:  itmfve tcr tip cognos monitor teps itmcmd agent dashboard smartcloud tdw tacmd tep tems itm smartcloudmonitoring oslc 6,607 Views
The next release of SmartCloud Monitoring, which includes new releases of IBM Tivoli Monitoring (ITM) and IBM Tivoli Monitoring for Virtual Environments (ITM for VE), is currently in development, and we would like to invite customers old and new to participate in our Early Adopter Program, our fancy name for a beta program (because we HAVE to have an acronym here at IBM, and how do you make an acronym out of "beta?")
This open program will allow you to download our Beta code and provide feedback and guidance on the new functionality, product improvements, and code quality of IBM Tivoli Monitoring "vNext." As the SmartCloud brand continues to expand, this beta will help long-time customers see that the ITM foundation is strong, and being continually enhanced to help us all adapt to the disruptive influence of "Cloud" on our IT management responsibilities. Both ITM and ITM for VE are still separately available (and are the products where the code enhancements you'll see reside), while the SmartCloud Monitoring bundle makes it convenient for customers to purchase the two products together.
This ITM Community site will enable you to download Beta drivers, see important announcements, interact directly with product developers and planners, and provide the ITM development team your valuable opinions about our planned product enhancements. As we develop this release, however, we're already doing long-range planning for the "N+1" release that will follow this one, so long-range enhancement requests are a good topic of discussion as well.
Please contact Nathan Bullock (mailto:email@example.com) if you have questions about the ITM vNext Open Beta Program.
Interim Fix 2 for the ITM VMware VI agent version 7.1.0 is available. This interim fix is cumulative, so customers do not need to install Interim Fix 1 first. For a list of APARs fixed in Interim Fix 1, see this list. Interim Fix 2 includes fixes for problems described by APARs
In addition to the APAR fixes, this interim fix includes new attributes that were requested by customers. These attributes provide further insight into the memory demands of running virtual machines and the CPU utilization on the host server. For virtual machines, usage, active, shared, and granted memory attributes have been added. For the host, CPU core utilization has been added (vSphere 5.0 or higher is required).
Interim Fix 2 may be downloaded from IBM Support Fix Central. More information may be found here.
Need a fix or update? Trying to troubleshoot problems? Consider adding this URL to your bookmarks:
There is a wealth of information which will help make your interactions with IBM support more efficient.
The Troubleshooting section includes documentation for known problems, guidance on using IBM Support Assistant, and support tools for IBM Systems. Work with Support covers everything you need to know to log a problem, as well as how to work interactively with a support engineer.
From the Overview, check out the IBM Electronic Support Community blog to read about the latest ways IBM is improving your support experience. Better yet, follow the blog and receive the latest entries automatically.
Gain Visibility, Control and Automation across your organization and infrastructure boundaries.
Are you looking to increase your personal skills in the Service Management arena?
Are you responsible for a team of Tivoli professionals who need to delve deeper into the products?
Would you or your team benefit from learning deep technical skills from real experts in their fields?
Then the EMEA Tivoli & Security Technical Conference 2012 is just what you need!
One of the many business benefits of honing your skills at this conference is the enhanced return
on investment in Tivoli & Security products. Whether you learn best by listening, watching or by doing,
we have it covered with our expert presentations, demos and hands-on labs.
Take this opportunity to attend the only IBM Tivoli & Security Technical conference in Europe this year,
but be quick, as places are limited and early booking is highly recommended! Book before July 31st and
receive a 10% discount and 2 free certification exams worth $400! Tivoli solutions are at the heart
of IBM’s Smarter Planet initiative. In addition to our deep technical sessions we will focus on some
actual projects, and related technologies. We are excited to demonstrate our best practices based on
comprehensive Tivoli implementation projects. Whether your role in managing a dynamic infrastructure
is executive leadership, security, operations, storage, production, delivery, facilities or communications
service, the most valuable opportunity to gain the necessary service management skills is at the EMEA Tivoli
& Security Technical Conference. This year, the event offers:
“How to” technical classes taught by product experts
Hands-on demos, labs and workshops
Panel discussion about challenges, best practices and lessons learned
The latest solutions and demos from IBM partners
jim_Markham Tags:  smartcloud provisioning storage kvm management backup tsm cloud smartcloud_resilience esx solutions integration vmware 5,144 Views
There is a new white paper available on the IBM Integrated Service Management Library (ISML) that explains how to use Tivoli Storage Manager to back up a VMware virtual machine that was deployed by the Workload Deployer in IBM SmartCloud Provisioning version 2.1.
The white paper explains how to locate and back up the virtual machine in VMware using IBM Tivoli Storage Manager, and how to restore the virtual machine to the Workload Deployer environment.
The white paper can be downloaded from the IBM Integrated Service Management Library (ISML) via this link -> Backing up and Restoring Workload Deployer Virtual Machines Deployed in VMware
marcese Tags:  smartcloud isaac icon script python build icct image provisioning 5,829 Views
In this new post I would like to describe how you can script the building of virtual images using the Image Construction and Composition Tool provided by IBM SmartCloud Provisioning.
The upcoming release of IBM SmartCloud Provisioning 2.1 embeds, among other things, a new version of the Image Construction and Composition Tool. The Image Construction and Composition Tool allows you to build virtual images that are self-descriptive, customizable, and manageable; in the end it produces Open Virtualization Appliance (OVA) images that can be deployed into a cloud environment.
One of the new features of this tool is the capability of performing image management operations directly through a command-line interface. This capability enables a set of new use cases through a scripting environment.
The command-line interface of the Image Construction and Composition Tool provides a scripting environment based on Jython (the Java-based implementation of Python), so in addition to issuing commands specific to the Image Construction and Composition Tool, you can also issue Python commands at the command prompt.
Using this interface, you can manage the Image Construction and Composition Tool remotely: you can download the CLI to any machine and then point it to the system where the tool is running. It communicates with the server over HTTPS, so all communications are encrypted. The command-line interface can be installed on both Linux and Windows operating systems and can run in both interactive and batch modes.
Anything that can be managed in the Image Construction and Composition Tool is modelled as a resource object in the command-line interface, which exposes a set of methods for performing the related management actions. The following objects are available: software bundle references (for defining software configurations to be deployed on a virtual machine), cloud provider references (for defining the hypervisors used by the Image Construction and Composition Tool to build and capture images), image references (for handling virtual machine images in import, extend, capture, and export operations), and user references (for administering the users of the Image Construction and Composition Tool).
Once you have downloaded and configured the command-line interface, you can start a new session in interactive mode by issuing the following command from a shell prompt:
<icct_cli-install-dir>/bin/icct -h <icct server> -u username -p password
Once you get the interactive shell, you can start issuing commands.
Here are a few examples.
To get a list of all the images for a cloud provider, you can use a command like the following:
To import a software bundle and wait for the import to complete, you can use a set of commands like the following:
>>> importingBundle = icct.bundles.import('http://localhost/myBundle.ras')
>>> if importingBundle.currentState == 'import_failed':
...     print 'Bundle import failed!'
To get a list of all the images, you can use a command like the following:
>>> allImages = icct.images
And so on.
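To see how these pieces fit together in a script, here is a self-contained Python sketch of the same flow. The real `icct` object only exists inside the tool's Jython shell, so everything below is a stand-in stub; the class names and the `import_bundle` method are hypothetical (plain Python reserves the word `import`, which the real shell uses as a method name).

```python
# Illustrative stub of the icct client object model -- NOT the real API.
class Bundle(object):
    def __init__(self, name, state):
        self.name = name
        self.currentState = state

class Bundles(object):
    # The real Jython shell exposes this as icct.bundles.import(url);
    # 'import' is a reserved word in plain Python, hence 'import_bundle'.
    def import_bundle(self, url):
        return Bundle(url.rsplit('/', 1)[-1], 'import_succeeded')

class Image(object):
    def __init__(self, name):
        self.name = name

class IcctStub(object):
    def __init__(self):
        self.bundles = Bundles()
        self.images = [Image('rhel-base'), Image('was-extended')]

icct = IcctStub()

# List all the images, as in ">>> allImages = icct.images"
allImages = icct.images
print([img.name for img in allImages])

# Import a software bundle and check the result, as in the session above
importingBundle = icct.bundles.import_bundle('http://localhost/myBundle.ras')
if importingBundle.currentState == 'import_failed':
    print('Bundle import failed!')
```

The point of the sketch is simply that every resource is an object whose attributes and methods you can combine with ordinary Python control flow, which is what makes the batch mode described below useful.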
You can also use the Image Construction and Composition Tool command-line interface in batch mode, by creating your own script and then launching it. For example, to run a script called myScript.py you can issue the following command:
icct -h <icct server> -u username -p password -f myScript.py arg1 arg2 arg3
A few samples come directly with Image Construction and Composition Tool. They are located under the following directory:
They cover some of the basic Image Construction and Composition Tool flows, such as creating a new cloud provider configuration, importing an image, extending an image, and so on.
You can use them as a starting point for creating your own workflows.
That's all for now.
We have just provided a quick introduction to the capabilities of the Image Construction and Composition Tool command-line interface. If you are interested in discovering more about the Image Construction and Composition Tool, its command-line interface, and SmartCloud Provisioning 2.1, you can have a look at what is included in the IBM SmartCloud Provisioning beta code:
rossella Tags:  security isaac provisioning segregation smartcloud access 4,856 Views
If you have ever watched babies play, you'll have noticed that at a certain point in their development the idea of property enters the game: "this is my toy, I'll not let you play with it". Parents usually need to invest some time in helping the baby understand the value of sharing: "the toy remains yours, but you can enjoy sharing it with other babies... and if you are kind and polite, the other babies may share their toys with you in turn". Usually this trick works. The next step is that they start adding "special conditions": "you can use my blocks, but only the blue ones" or "you can play with this doll, but I won't lend you the pink dress". A different story emerges when sharing can save you a lot of money: you do not need to buy the same toy your baby saw another baby using if they can share it...
Did you ever try to apply this model to cloud computing?
I know it may sound strange at first glance, but there are some similarities...
Let's start from the last example, kids sharing the same toys: doesn't it resemble the idea of sharing the same master image? In a lot of cases I do not need my own master image; I can use the same one another user is using.
But the "conditions" apply: "you can use my master image, but I do not want you on my network!" or "you can use my master image, but you cannot use my package scripts!"... Not very different from "you can play with my doll but I'll not give you the pink dress" or "you can play with my blocks but you can use only the blue ones".
There will be situations in which you do not want to share the master image at all: "this is mine, it's my treasure, I have my own information there and I do not want you to see it"... I'm pretty sure you've seen babies doing that with their favorite teddy bear ;-)
I hope these few examples made you look at object authorizations in a cloud with different eyes...
Anyway, the problem is there: a cloud is typically a shared environment, and we do not want everybody to have access to everything. Privacy is important.
Let's look at one way to resolve this issue. We could give every individual user the right to determine who can access their own objects. "Who", of course, can be a single user or a group of users. Depending on their role, a user can have access to different objects.
The cloud administrator, for example, can decide who can access a specific network or see a specific cloud group; the cloud catalog editor can decide who can access which master images or which package scripts (package scripts are the building blocks for patterns); the image deployer can decide whether somebody else can see the details of his images. In some cases he may also want to let other users access his own volumes.
With the same ease, a user can decide to grant full access, read-only access, or no access at all to each of their own resources.
Such a fine-grained access policy makes the cloud software flexible enough to fit various adoption models, from a classic private cloud to a more complex environment like the one a cloud service provider may run.
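As a sketch of what such a per-resource policy can look like in practice, here is a minimal model in Python. The class, level names, and grant semantics below are illustrative assumptions for this post, not SmartCloud Provisioning's actual API: the owner keeps full access, an explicit user grant wins, and otherwise the most permissive group grant applies.

```python
# Minimal sketch of per-resource access levels -- illustrative only.
FULL, READ_ONLY, NONE = 'full', 'read-only', 'none'

class Resource(object):
    def __init__(self, name, owner):
        self.name = name
        self.owner = owner
        self.grants = {}            # user or group name -> access level

    def grant(self, who, level):
        self.grants[who] = level

    def access_for(self, user, groups=()):
        if user == self.owner:
            return FULL             # owners always keep full access
        if user in self.grants:
            return self.grants[user]
        # otherwise fall back to the most permissive group grant
        levels = [self.grants[g] for g in groups if g in self.grants]
        if FULL in levels:
            return FULL
        if READ_ONLY in levels:
            return READ_ONLY
        return NONE

master_image = Resource('rhel-master', owner='alice')
master_image.grant('bob', READ_ONLY)     # "you can use my image..."
master_image.grant('deployers', FULL)    # ...plus a group-level grant

print(master_image.access_for('alice'))                        # full (owner)
print(master_image.access_for('bob'))                          # read-only
print(master_image.access_for('carol', groups=('deployers',))) # full
print(master_image.access_for('dave'))                         # none
```

The same three-level scheme applies uniformly to networks, cloud groups, package scripts, and volumes, which is what keeps the model simple for users while still segregating tenants.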
In case of enterprises and cloud service providers, authorization and network segregation are critical prerequisites for building and managing a secure cloud environment.
For this SmartCloud Provisioning is the right choice.
You can also rely on a robust auditing mechanism that lets you track what is happening in the cloud: who logged in and out, user creation/deletion/update, data access attempts (whether successful or not), virtual machine instance creation/deletion/update, and much more...
If you are interested in walking through this model, you can have a look at what is included in IBM SmartCloud Provisioning beta code:
cynthyap Tags:  security management provisioning virtualization patch cloud-computing cloud 4 Comments 9,421 Views
We know that cloud computing offers a myriad of benefits like rapid service delivery and lower operating costs. But it can also lead to challenges in data governance, access control, activity monitoring and visibility of dynamic resources—in essence, all aspects of IT security.
The IT organization must be able both to deliver services quickly enough to meet the demands of the business and to provide high levels of security and compliance. In the past, service delivery was typically the bottleneck in providing new services; now, with automated cloud and self-service delivery models, the teams responsible for change management and security can quickly become the bottleneck because of manual processes and siloed tools.
For example, organizations need the ability to patch all of their systems, physical and virtual, whether distributed or part of a cloud. Operations teams need better insight into and control of deployed virtual systems, including OS patch levels, installed middleware applications, and related security configurations. And offline or suspended VMs that haven't been patched in weeks or months can pose serious security exposures.
A holistic approach is needed that addresses rapid provisioning of services and automation of key security and compliance requirements. Together these capabilities can keep you in control of rapidly changing cloud environments. First let’s look at the capabilities needed in a cloud provisioning solution.
Cloud provisioning should combine application and image provisioning for workload optimized clouds and deliver:
· Reduced costs with automated high-scale provisioning; multiple hypervisor options and HW of choice
· Accelerated time-to-market with standardized pattern-based deployment for workload optimized cloud
· Image sprawl prevention with in-built advanced image lifecycle management capabilities
· Ease of adoption and clear roadmap to move to advanced cloud capabilities
Second, a unified endpoint management approach is required to provide visibility and control of your systems, regardless of context, location or connectivity, and needs to deliver:
· Heterogeneous platform support with seamless patch management for multiple operating systems, including Microsoft Windows, Unix, Linux and Mac OS, as well as hypervisor platforms
· Automatic assessment and “single click” remediation, which shortens time to compliance by automatically identifying necessary patches and enabling users to target and remediate endpoints quickly
· Enterprise-class scalability and security to provide proven scalability, including fine-grained authorization and access control capabilities
Explore these capabilities with the new IBM SmartCloud Patch Management.
Has anyone checked out the Tivoli presence for IEA training lately? We have >380 modules across 53 products!
Tivoli IEA modules
IBM is excited to announce that the IBM Tivoli Monitoring (ITM) vNext release is making Beta versions of the product available to any and all interested customers. IBM invites you to download our Beta code and assist us by evaluating the new functionality, product improvements, and code quality of IBM Tivoli Monitoring vNext.
A new ITM Community site has been defined to provide you with all the information you need to participate with us in this exciting Beta program. In this community you can download Beta drivers, see important announcements, interact directly with product developers and planners, and provide the ITM development team your valuable opinions about our planned product enhancements. Please click here and ask to join the ITM vNext Open Beta Community.
IBM SmartCloud Provisioning introduces PaaS capabilities with the ability to create blueprints that standardize the deployment of complex tiered applications, such as a three-tier J2EE application composed of an HTTP server, an application server, and a database server, each running on a different VM, possibly on different network segments. These blueprints are called patterns in IBM Workload Deployer terminology, which is the foundation technology of SmartCloud Provisioning. Virtual system patterns are used to define a topology and middleware software configuration that meet application requirements, and you can set up that configuration using familiar concepts and existing scripts, which SmartCloud Provisioning executes when the virtual machines hosting the middleware components are deployed to the cloud.
You can use any virtual image to build a virtual system pattern. However, in order to perform the configuration steps mentioned above, you need to inject a so-called activation engine, which executes the configuration scripts defined when creating the virtual system pattern (add-on scripts and script packages). The good news is that you do not have to do this manually: SmartCloud Provisioning provides the Image Construction and Composition Tool (ICCT), which you can use to clone and extend your basic certified image to make it "cloud ready". Images extended this way are called intermediate images. You can drop any add-on script or script package on an intermediate image when building a pattern in the pattern editor, but you cannot do so for basic images. You can still add basic images to your virtual system pattern topology, but SmartCloud Provisioning cannot perform sophisticated configuration steps on them; these images are better suited to IaaS deployment scenarios. For such scenarios you can still define additional network interfaces (vNICs) and attach additional disks to the virtual image instance, but the configuration of these resources cannot be automated without extending the image: you have to log in to the provisioned virtual machines and configure the vNICs, as well as format and mount the raw disks, yourself.
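The distinction between basic and intermediate images can be sketched as follows. This is a hypothetical model written for this post, not the Workload Deployer pattern format; the rule it encodes is the one above: only images extended with the activation engine accept script packages at deploy time.

```python
# Hypothetical sketch of a three-tier virtual system pattern.
class ImagePart(object):
    def __init__(self, name, extended):
        self.name = name
        self.extended = extended   # True for intermediate (ICCT-extended) images
        self.scripts = []

    def add_script_package(self, script):
        if not self.extended:
            raise ValueError(
                '%s is a basic image: no activation engine, so script '
                'packages cannot run at deploy time' % self.name)
        self.scripts.append(script)

# A J2EE three-tier topology, each tier on its own VM
pattern = [
    ImagePart('http-server', extended=True),
    ImagePart('app-server', extended=True),
    ImagePart('db-server', extended=False),  # basic image: IaaS-style only
]

pattern[1].add_script_package('install_was_profile.sh')

try:
    pattern[2].add_script_package('create_db.sh')
except ValueError as err:
    print(err)   # configuring the DB tier stays a manual, in-guest task
```

In other words, the basic image can still appear in the topology, but anything beyond attaching vNICs and raw disks to it has to be done by hand inside the guest.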
The Virtual Image Library is enhanced to discover the capabilities of a virtual image and tag it, so you can see at a glance whether a virtual image is suitable for inclusion in a virtual system pattern and, if needed, extend it.
You can get beta versions of SmartCloud Provisioning at this link http://www.ibm.com/developerworks/downloads/tiv/smartcloud/index.html to familiarize yourself with the virtual image construction tools and with pattern-based deployment.
People always ask about failures. It's great that the cloud software can survive failures, but what about my user workloads? The simplest, yet all too unsatisfying, answer is that your application should be designed to tolerate failures, and since the cloud is resilient you can always get more cloud resources. Unfortunately, most people aren't satisfied with this answer. Many enterprise IT folks are used to running expensive servers with very expensive Fibre Channel attached SAN storage. But what happens with commodity storage exposed over commodity networks and servers?
SCP 1.2 has three kinds of storage: 1) gold master images, 2) block storage (volumes), and 3) ephemeral storage. Master images are replicated across a cluster of Linux servers. When an instance is created from a master image, the guest OS sees a single disk; however, all writes go to ephemeral storage attached to the hypervisor. Although some people do recover ephemeral storage after failures, it is designed to be discarded whenever instances are terminated, intentionally or otherwise. The master images are replicated for resiliency and scale-out performance. For resiliency, we generally establish two redundant iSCSI sessions to two separate storage nodes. This setup can survive network, disk, and storage node failures without affecting the guest workload.
Block storage, on the other hand, is a bit trickier. We purposely chose not to force redundancy, which turned out to be the cause of the "Amazonocalypse" outage last spring. Some early customers told us they were perfectly happy using RAID storage on their storage nodes so they could recover from a failure, even though there would be some downtime. Of course, other users want their storage to be always available. For those users, we've always recommended allocating multiple volumes for each instance. If you create multiple volumes in one call, the cloud will attempt to place each volume on a separate physical storage node. Then, using guest-level software RAID (such as mdadm on Linux or the Windows Disk Management tool), you can set up disk mirroring to tolerate the failure of one of the nodes. Of course, you'll need to monitor for failures so you can re-establish redundancy. You can use SmartCloud Monitoring to detect faults and even trigger automated recovery scripts.
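The placement-plus-mirroring idea above can be sketched in a few lines. This is purely illustrative (the node and volume representations are invented for this post, not SCP's API): volumes requested together land on distinct storage nodes, so a guest-level mirror survives the loss of any single node.

```python
# Illustrative sketch: spread volumes across distinct storage nodes,
# then check that a guest-level mirror survives any single node failure.
def place_volumes(nodes, count):
    """Place 'count' volumes, each on a separate physical storage node."""
    if count > len(nodes):
        raise ValueError('not enough storage nodes for separate placement')
    return [(node, 'vol-%d' % i) for i, node in enumerate(nodes[:count])]

def mirror_survives(placements, failed_node):
    # the mirror is intact if at least one replica sits on a healthy node
    return any(node != failed_node for node, _ in placements)

nodes = ['storage-node-1', 'storage-node-2', 'storage-node-3']
mirror = place_volumes(nodes, 2)    # two volumes, two distinct nodes
print(mirror)

for failed in nodes:
    status = 'mirror OK' if mirror_survives(mirror, failed) else 'data lost'
    print(failed, '->', status)
```

The monitoring step mentioned above is what closes the loop: once a node failure is detected, a replacement volume is allocated and the guest-level RAID resynchronizes to restore redundancy.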
While this is an entirely workable solution that is both scalable and low cost, it is still not enough for some use cases. In particular, it will not work for "persistent instances". Of course, you should avoid persistent instances, but sometimes it's just a heck of a lot easier: you don't have to be smart about configuring your Windows or Linux guest OS. For this scenario, some customers combine SCP 1.2 with GPFS, an extremely powerful cluster file system that has been used in some of the world's largest supercomputer HPC clusters. Using GPFS as the backing store for the SCP storage nodes, it is quite simple to automatically fail over volumes onto another storage node. In fact, IBM Research has internal prototypes that go even further, avoiding any downtime whatsoever when a storage node fails. But I can't tell you about that ;-).
I hope you've found this helpful. I hope you'll agree that there are some pretty good solutions available even if we cannot offer perfection, yet ;-)
Kimic Tags:  smartcloud continuous devops delivery provisioning patterns 2 Comments 7,712 Views
This goes out to all the Operations guys and gals. Have you been tasked with getting your IT organization to be more efficient, more effective... "more with less"? At the same time, your development teams are expected to deliver new applications at warp speed, while you have specific service level agreements to meet governing the stability of your production environments. Speed... stability... they seem diametrically opposed. If you haven't heard of DevOps yet (the methodology of bringing development and operations teams together to collaborate, integrate, and deliver more robust applications to the marketplace more efficiently and more effectively), it's a cool new way of thinking and doing for all teams involved.
IBM has jumped into the deep end of DevOps with the recent announcement of the SmartCloud Continuous Delivery beta. This solution allows the integration of new and existing tools to automate and enhance the application delivery pipeline end-to-end. This post will hopefully give you some ideas on how you might utilize DevOps to bring tangible changes to your IT organization.
First off, is your organization using cloud computing effectively today? Ops teams may already be using some form of virtualization to increase efficiency and effectiveness. Aligned with a DevOps methodology, cloud can automate and reduce routine daily tasks and free up resources to focus on innovation. Take a closer look at how SmartCloud Continuous Delivery, in conjunction with IBM SmartCloud Provisioning, can help mobilize teams to move to DevOps.
Fact or Fiction?
I won't have to provision environments for development teams any more!
Fact - Ops can define the system patterns that developers use to self-provision, so they are no longer dependent on the Ops team. There will likely still be times when Ops teams want to provision environments themselves, but it doesn't have to be as often.
I will never be able to monitor all the virtual systems to validate they meet the security requirements of my company
Fiction - Patterns can be built from the compliant virtual images that Ops maintains and tracks. Development can then self-provision these pre-defined patterns, and Ops can update existing patterns and upgrade deployed VMs as required.
I can define network isolation and resource constraints to ensure the integrity of my cloud for my customers
Fact - The automated deployment scripts define the access level of authorized users and groups; these stored artifacts preserve the authorizations given to specific users and groups, allowing controlled multi-tenancy in a cloud.
The ability of developers to stand up their own environments is helpful, but the consequence will be tons of stagnant VMs hanging around
Fiction - Build artifacts can be stored in the asset manager, which tracks the state and age of each provisioned VM. Policies ensure that VMs are maintained only as long as appropriate for a particular deployment (for example, a personal deployment vs. a long test-run deployment).
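That last policy check can be as simple as a per-deployment-type age limit. The sketch below is hypothetical (the policy names and VM records are invented for illustration, not SmartCloud Continuous Delivery's actual asset-manager schema):

```python
# Hypothetical sketch of age-based VM reclamation policies.
# Maximum lifetime (in days) by deployment type:
POLICIES = {'personal': 2, 'integration': 7, 'long-test-run': 30}

def expired(vms, policies):
    """Return names of VMs that have outlived their deployment type's policy."""
    return [vm['name'] for vm in vms
            if vm['age_days'] > policies[vm['type']]]

vms = [
    {'name': 'dev-sandbox-1', 'type': 'personal',      'age_days': 5},
    {'name': 'ci-stack-3',    'type': 'integration',   'age_days': 3},
    {'name': 'perf-run-9',    'type': 'long-test-run', 'age_days': 12},
]

print(expired(vms, POLICIES))   # dev-sandbox-1 outlived the 2-day policy
```

A sweep like this, run against the asset manager's inventory, is what keeps self-provisioning from turning into VM sprawl.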
I hope this taste of Fact or Fiction gives you a sense of how DevOps can transform collaboration and effectiveness for both Development and Operations teams. The Enterprise DevOps Blog here will keep you up to date and provide additional information around DevOps. You can also test drive a highly scalable, low-touch cloud with a SmartCloud Provisioning no-charge trial.
There will be a live session on Tuesday, June 12, 2012 that will provide an overview of the data protection capabilities that IBM Tivoli Storage Manager for Virtual Environments brings to IBM SmartCloud Provisioning.
Come learn what we have to offer, tell us about your data protection strategies in the cloud, and share the use cases you have and see value in.
This will be an opportunity to share and provide valuable feedback to the product teams that will shape future capabilities.
Date and Time: Tuesday, June 12, 2012
US: 6am PDT, 9am EDT
Europe: 2pm BST, 3pm CEST
Follow the link below for details and enrollment.
Scroll down to 'Live Sessions'.
Look for Session Title: Feedback session on automated data protection in the cloud